How Does a Computer Differ from a Brain?

Akihiro Eguchi

2008.11.15

In 1957, Herbert A. Simon, who would win the Turing Award in 1975, predicted that four things would be realized in Artificial Intelligence (A.I.) within ten years: computers would compose music of aesthetic interest, prove a significant mathematical theorem, defeat a world chess champion, and develop psychological theories (Stewart, 1994).  Research on A.I. was launched with the creation of the computer in 1945, driven by the expectations of the many people who called it a thinking machine and believed it would someday compete with humans.  Indeed, computer technology has advanced very quickly, and Simon's predictions have been realized one after another.  For example, a computer proved the four color theorem, which humans had been unable to do, and a computer defeated the world chess champion.  However, one of Simon's predictions has not come true yet, and it raises a question: how can a computer understand the human mind?  Unless we discover and imitate the mechanisms of the human brain, we will never succeed in creating real A.I., because of three critical differences between our brains and computers: their ways of making decisions, their ways of storing memory, and their flexibility.

First of all, one of the main differences between computers and our brains can be attributed to their different structures.  A computer has a CPU as its brain, which consists of more than a billion transistors.  For example, a CPU whose clock frequency is 2 GHz can execute about 2 billion instructions per second, one after another.  In contrast, a brain is composed of roughly 100 billion neurons.  Even though each neuron can fire only around 300-400 times per second, because of their ability to work in parallel, the brain holds enormous potential.  This difference in structure gives computers and our brains different features.  For example, a computer can calculate answers one by one amazingly faster than humans, as long as the equation is given beforehand.  On the other hand, although the brain does not have such fast calculation ability, it can take input and produce output at the same time.  Furthermore, even a young child can tell people's faces apart very easily, which is a very important element of cognition.  These features show that simple but super fast calculation is not essential to the brain's performance; rather, the brain seems to be more complicated because of its ability to receive stimuli at any time and process them in parallel.
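
The contrast above can be made concrete with a little arithmetic.  The following sketch compares one serial 2 GHz core against the brain's many slow units firing in parallel; the neuron count and firing rate are rough, commonly cited estimates, not precise measurements.

```python
# Toy throughput comparison: one fast serial processor vs. many slow
# parallel units. All figures are order-of-magnitude assumptions.
cpu_ops_per_sec = 2e9          # one core at 2 GHz, one instruction per cycle
neurons = 100e9                # ~10^11 neurons (commonly cited estimate)
firings_per_neuron = 350       # ~300-400 spikes per second, per the text

brain_events_per_sec = neurons * firings_per_neuron
ratio = brain_events_per_sec / cpu_ops_per_sec
print(ratio)  # parallel firing events per serial instruction
```

Even with each unit thousands of times slower, the sheer number of units firing at once gives the brain a far higher aggregate event rate, which is the point of the paragraph above.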

Because of those differences, even if many pieces of input and output of computers and our brains seem similar, it is not a good idea to decide on the existence of intelligence just by looking at this data.  In 1980, the philosopher John Searle (1980) proposed a very interesting thought experiment which he called "The Chinese Room."  In this experiment, he supposed that a man who can speak only English is in a room which has only one small window.  A piece of paper on which Chinese characters are written is passed into the room.  The man's job is to look up those unknown characters in a huge instruction book and send back some characters from the book to the outside.  People outside call the paper sent into the room a question and the paper coming from the room an answer, and the questions and answers form a coherent conversation.  Therefore, the people outside assume that the man inside can understand Chinese.  This experiment casts doubt on the existence of a computer's intelligence and argues for the importance of analyzing the work going on inside.
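
The man's procedure in the room is, in effect, a pure table lookup, which can be sketched in a few lines.  The phrases below are romanized placeholders invented for illustration, not part of Searle's original experiment.

```python
# Toy "Chinese Room": the responder matches symbols purely by lookup,
# with no understanding of what either side of the table means.
rule_book = {
    "ni hao ma?": "wo hen hao.",       # placeholder question/answer pairs
    "ni jiao shenme?": "wo bu zhidao.",
}

def room(question):
    # The man inside only pattern-matches against the instruction book;
    # an unmatched input gets no meaningful reply.
    return rule_book.get(question, "...")

print(room("ni hao ma?"))  # looks like understanding from the outside
```

From outside, the answers look fluent; inside, there is nothing but string matching, which is exactly Searle's point.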

A neuroscientist, Jeff Hawkins (2004), finds the key difference between computers and our brains in the memory-prediction framework.  He states in his book that our knowledge and cognition are stored in the brain as a model of the real world, and that we are predicting the future whenever we use it.  The essence of intelligence is to predict coming events.  He believes that the biggest flaw of a computer is its lack of predictive ability.  This hypothesis can explain why a computer could beat a chess champion but still cannot win at some other board games, such as Go.  According to OMNI Magazine in June, 1991 (as cited in British Go Association, 2008), there are 10^120 possible ways a game of chess can play out to the end, while Go has 10^761.  Because of its lack of intuitive predictive ability, a computer cannot complete such a complex calculation and is forced to make a sloppy decision in a short time.  This fact shows the critically different ways computers and our brains make decisions, and reinforces the importance of prediction.
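
Where numbers like these come from can be sketched with the standard game-tree estimate: (moves available per position) raised to the (length of a game in moves).  The branching factors and game lengths below are commonly cited rough values chosen for illustration; they use different assumptions than the magazine's figures, so the exponents differ, but the gap between chess and Go remains enormous on either count.

```python
# Rough game-tree size estimate on a log10 scale:
# tree size ~ branching_factor ** plies, so
# log10(tree size) = plies * log10(branching_factor).
import math

def log10_tree_size(branching, plies):
    return plies * math.log10(branching)

chess = log10_tree_size(35, 80)    # ~35 legal moves, ~80 plies per game
go    = log10_tree_size(250, 150)  # ~250 legal moves, ~150 plies per game
print(round(chess), round(go))     # exponents: chess ~10^124, Go ~10^360
```

Either way the search space grows exponentially with the branching factor, which is why brute-force calculation that suffices for chess collapses for Go.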

Since computers and our brains have different structures, their ways of making decisions are different, and those different methods make the difference in performance even greater.  Therefore, even if the outputs of the two look similar, they cannot be the same.  An output is determined not only by the input but also by many other factors which influence the decision, such as the brain's ability to predict.

Our brains make these predictions based on the memories they hold, and there are also some huge differences in the memory systems.  In cognitive psychology, the mechanism of memory is explained by focusing on three different tasks: encoding, storage, and retrieval.  In a college psychology textbook, Weiten (2006) explains that encoding converts sensory input into a memory code, storage keeps the memory code, and retrieval recollects the memory code from storage.  Although both a computer and a brain perform these three jobs in their memory systems, each of them is different in many ways.
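
The three tasks can be laid out as a minimal sketch.  The class and method names here are hypothetical, invented only to mirror the encoding/storage/retrieval vocabulary from the textbook definition above.

```python
# Minimal sketch of the three memory tasks (hypothetical API):
class Memory:
    def __init__(self):
        self._store = {}

    def encode(self, stimulus):
        # Encoding: convert raw sensory input into a memory code.
        return stimulus.lower().strip()

    def store(self, key, stimulus):
        # Storage: keep the memory code.
        self._store[key] = self.encode(stimulus)

    def retrieve(self, key):
        # Retrieval: recollect the code from storage (may fail).
        return self._store.get(key)

m = Memory()
m.store("greeting", "  HELLO  ")
print(m.retrieve("greeting"))  # hello
```

The paragraphs that follow take these three stages in turn and show where a computer and a brain handle each one differently.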

First, accuracy is the most critical difference in encoding.  For a computer, accuracy is very important in encoding data because of its insufficient predictive ability.  Although computers have error-correcting codes, these are helpful only for small errors.  To illustrate, if someone wants to copy data from a CD to his computer, the CD must be nearly free of scratches; otherwise, the computer cannot load the data because of errors caused by the damage.  Because computers have no certainty in their predictive ability, they always need essentially 100% accurate data for encoding.  On the other hand, our brains do not encode every piece of input they receive, so the data lacks accuracy.  According to the psychologist Max Wertheimer (1944), who introduced Gestalt theory, our behavior is not created from many pieces of individual input derived from objects; it is determined by the overall framework of the whole, which cannot be deconstructed into individual inputs.  This shows that because our brains receive input abstractly, they do not dwell on small errors of detail.  Therefore, they can encode input even when some parts of it are obscure.
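
The limits of machine error handling can be seen in the simplest error-detecting scheme, a single parity bit.  This is only a toy stand-in for the much richer codes real storage media use, but it shows the same principle: a small amount of damage is caught, and slightly more slips through or defeats the code entirely.

```python
# Minimal even-parity sketch: detects any single-bit error,
# but an even number of flipped bits passes the check unnoticed.
def encode(bits):
    return bits + [sum(bits) % 2]      # append an even-parity bit

def check(codeword):
    return sum(codeword) % 2 == 0      # True if parity is intact

word = [1, 0, 1, 1]
sent = encode(word)
assert check(sent)                     # undamaged: accepted

sent[0] ^= 1                           # one flipped bit ("small scratch")
assert not check(sent)                 # detected

sent[1] ^= 1                           # a second flip ("bigger scratch")
assert check(sent)                     # slips past parity: corrupted data accepted
```

A brain faced with the same damaged input would simply fill in the gap from the whole; the machine must either detect the error exactly or be silently wrong.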

Second, people often liken the random access memory (RAM) of computers to the short-term memory (STM) of our brains, and the hard disk drive (HDD) to long-term memory (LTM), because of their similar jobs; however, their abilities to store data are very different.  In a computer, RAM receives temporary data which has been encoded from input, and the computer stores it in the HDD as long as blank space remains.  In contrast, the neuroscientist Wilder Penfield reported that STM can pass a seemingly unlimited amount of data into LTM permanently, in spite of the fact that people seem to forget many things.  In his experiments in 1963, he applied electrical stimulation of the brain (ESB) to humans and found that it sometimes triggered clear descriptions of distant memories (Weiten, 2006).  Therefore, forgetting occurs not because of a problem in storage, but because of a problem in the retrieval system.

The phenomenon of the brain's forgetting is one of the biggest points that make our brains very different from computers.  When both computers and our brains store data into memory, they create keys for retrieving it later; however, the two systems are very different.  A computer instantly creates a specific reference to the address where the data is stored in the HDD, so unless users delete the references, the computer can retrieve the data at any time.  Our brains work differently: because they have no concept of data size, they cannot create a particular reference to a piece of data, and they need another way to retrieve what they have stored.  Weiten (2006) explains Allan M. Collins's theory of semantic networks.  These networks consist of nodes of concepts linked to similar concepts; therefore, when people imagine a word, they automatically think of related words.  This indicates that our brains link new data to data which already exists in storage, as a key for retrieval.  People cannot find data immediately, because data in the brain is not tagged with a label indicating its address; instead, people have to trace their complex semantic networks to reach the data.  Forgetting occurs when the brain fails to trace the network.  This also explains why people sometimes cannot remember a particular piece of information although it is somewhere in the brain.
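
The two retrieval styles can be contrasted in code.  Below, computer-style retrieval is a direct address lookup, while brain-style retrieval is a toy semantic network traced from a cue; the concepts and links are invented for illustration.  A memory can exist in the network yet be unreachable from the cue, which plays the role of forgetting.

```python
from collections import deque

# Computer-style retrieval: a direct reference (address -> data).
# The lookup never fails while the reference exists.
disk = {"0x1F": "grandmother's face"}
assert disk["0x1F"] == "grandmother's face"

# Brain-style retrieval (toy model): concepts linked in a semantic
# network; recall means tracing links outward from a cue.
links = {
    "birthday": ["cake", "party"],
    "party": ["grandmother"],
    "cake": [],
    "grandmother": [],
    "holiday": [],   # stored in the network, but no path reaches it below
}

def recall(cue, target):
    seen, queue = {cue}, deque([cue])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False   # failed to trace the network: "forgetting"

print(recall("birthday", "grandmother"))  # True: a path of associations exists
print(recall("birthday", "holiday"))      # False: stored but unreachable
```

The second call fails even though "holiday" is in storage, mirroring the claim above that forgetting is a retrieval failure, not a storage failure.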

To sum up, computers and our brains differ in each of the three tasks of their memory systems.  Our brains can understand input abstractly, so they do not need the 100% accurate descriptions of input that computers do.  Also, brains have unlimited capacity while computers do not.  However, whereas computers can retrieve their data whenever they want, our brains sometimes fail to do so because of their complex systems.  These features characterize the tendency of computers to be binary and of brains to be analog, and this is the key difference that leads to many others between the two.

Finally, in analyzing the different ways computers and our brains create behaviors, many people find a critical difference in flexibility.  One of the biggest weaknesses of computers is adapting to unfamiliar input, because they can work only when they are given programs.  Although we have recently heard about computers which can learn, their learning methods focus only on particular patterns of input.  Hawkins (2004) claims that every successful A.I. program was made to be good at only one particular task in order to achieve one purpose; such a computer has neither the ability to accommodate new stimuli nor flexibility.  Therefore, even though many technologies have been invented to make computers behave like humans in some ways, they still lack far too much flexibility to become real A.I.

Our brains, by contrast, can modify their behavior depending on the situation.  The developmental psychologist Jean Piaget asserted that even a stimulus which we have never received before cannot be absolutely new to us (von Glasersfeld, 1995).  His theory explains that even when our brains receive an unknown stimulus, they can make a decision by either assimilation or accommodation, using a similar schema, which is an organized set of knowledge about a particular object or phenomenon built from previous experience.  Therefore, we are always ready to encounter new stimuli.

Also, while computers use different algorithms for different tasks, the neuroscientist Vernon Mountcastle found an amazing fact about the neocortex, from which our intelligence is derived.  Hawkins (2004) explained this discovery in his book: the work done in the neocortex is always the same regardless of the input or output, such as vision, hearing, and motion, and it executes the same algorithm everywhere.  Although there are some structural differences among its regions, the neocortex performs the same process in every one of them.  This discovery completely surpasses the current technology of A.I. and emphasizes the extreme flexibility of our brains.

In short, the inflexibility of computers makes them greatly different from our brains.  While computers cannot respond easily to unfamiliar input, our brains can deal with it either by assimilation or by accommodation, using their predictions.  In addition, in the central processing part of our brains, the cerebral neocortex, there is no specific algorithm for each different kind of input; only one kind of algorithm is responsible for all kinds of stimuli.  This extreme flexibility of our brains is the characteristic most different from the computers'.

Through the brief history of computers, people have developed many new technologies to make them similar to our brains.  We can therefore find similarities between the two in their general frameworks; however, because of the complexity of our brains, people have had to compromise and invent their own ways of producing output.  The greatest differences between computers and brains can be attributed to one trait: computers tend to be binary while our brains tend to be analog.  Because of their different physical structures, computers calculate data in order while our brains analyze many different pieces of data at the same time.  Also, in their memory systems, computers store data in their HDDs by creating exact references.  On the other hand, our brains store data by linking it into abstract semantic networks, but they have unlimited capacity and store the data permanently.  Moreover, although computers are specialized for certain types of input, our brains handle unfamiliar input with much greater flexibility.  Someday in the near future, a real A.I. may be created, but only through a completely different and more flexible approach.

References