VII

If we accept for now the argument that what we normally think of as external is actually internal, then we are confronted with some rather interesting new problems. If we want to continue to insist that the experienced objects/events are indeed external, then as we will see later, we must deal with other problems that are far more radical in their implications.

Look around you; look very, very closely at the objects and the space between you and the objects. Never mind naming the objects, or thinking about what they do and what their relationship is to other objects in the world. Simply experience the reality of the moment. What is going on? Your Brain B1 is creating internal X2 representations of external X1 objects. More important, however, is that B1 is engaged in the additional function of experiencing or being aware of the objects, which we will call an X3 event (forget about any reference to X3 events in Figure 5). You are the B1 producing all the X2 representations of the X1 world and also the X3 experience of those representations. Everything that is going on in front of you right now is your Brain doing these things.

If we truly live in an X1 material world without God, soul, spirit, mind, consciousness or what have you, then the sum total of the event taking place right now in your Brain Space is an external material activity and nothing more. Look really, really closely. You are literally watching your material, in-the-world B1 Brain in action. You are your Brain watching its own movement and activity. This is the action of a machine without a ghost.

As we understand the mechanics of the X1 world, can that material reality produce all X2 and X3 events? Presumably the answer has to be yes if X1 is the total reality we have to work with. Look around you again. The objects, sounds, smells, etc., and the actual X3 experiencing of them, are the sole product of the electro-chemical activity of your B1 Brain.

We often use computers as metaphors for the brain. If we, like any other sensate creature, are only electro-chemical in nature, then what happens when we look at the world should be roughly equivalent to what a powerful, artificially intelligent computer does when it perceives the world. Both are powerful information-processing machines. The brain is electro-chemical in nature. The computer is electro- or opto-material. Both function electronically or optically within their material substance or structure.

Perhaps a good way to think about this is to look at a couple of classic man-made X1 machines that most of us are familiar with – HAL 9000 from the movie 2001 and Data from Star Trek: The Next Generation. Both are powerful artificially intelligent machines. Both interact very effectively in the human world at sensory, intellectual, and perhaps even emotional levels.

If I walk into a room where HAL can see me via its television camera, and if we have been formally introduced beforehand, then it will recognize me and might acknowledge this fact with a formal greeting. If I walk into the room with you and we start talking, HAL will hear and recognize our speech and can engage in conversation with us on the topic at hand.

If the Federation were once again confronted with an attack by the Borg, Captain Picard could ask Data to address the problem. Data would easily hear the Captain’s voice and understand the problem he was presenting.

Even though brains and computers are built in very different ways with different materials, we still think of them as doing roughly the same thing. Put another way, we generally think that if our technology were sufficiently advanced, a computer could achieve the same sensory, intellectual and even emotional interaction with external reality that humans can. There is nothing fundamentally different about us that would prevent a machine from doing everything we do. Neither we nor the computers need God, soul, spirit, mind or consciousness to interact with the X1 world.

HAL can see us, hear us, talk to us and become emotionally distraught over our actions. Data can see us, hear us and learn to laugh at what is humorous about life. Thus some have argued that both can demand respect as fundamentally sentient beings.

We are not trying to establish whether we can or cannot build machines that do everything we do, or ultimately even more. What we are trying to establish is whether a machine which is electro-material in nature does, or can do, what we do by means of that electro-material substance. If it can, then our computational or interactive functions are inherently the same. We do not build ghosts into their machinery, and therefore there would be no need for ghosts in us. The reality of our experience would be sufficiently explained without such things.

As we saw earlier, when you look at the world of X1 objects your Brain processes these into X2 representations. Not only this, but the Brain perceives and experiences the event at some X3 level. Look in front of you at the monitor M2. Unless you are blind, you really do see the monitor. Both the creation of M2 and the seeing of M2 by B1 must be the sole result of its electro-chemical activity. B1 gets no assistance from non-existent Gods, souls, spirits, minds or transcendent consciousness.

Figure legend: VC1 = Visual Center; M1 = Memory; CPU1 = Central Processing Unit

HAL and Data, as man-made machines, are not equipped by us with minds and souls. They rely solely on their electro-mechanical nature to interact with the X1 world. Light L1 is reflected off of, or emanates from, X1. L1 passes through HAL’s or Data’s e1 TV-camera eyes and is translated into a digital signal that moves along some fiber-optic cabling. The raw X1 information may be passed through digital signal processors, massively parallel neurally networked Pentium V chips, super hard drives, opto-holographic cubes, or positronic brains. HAL and Data process the information, compare it with other stored information, and calculate within a margin of error of 0.0003% that they are in the presence of an X1 object, or of you. From HAL you might get a “Hello Dave. Wait a minute, wait a minute. I now calculate I will be crashing us into Eo in 46.23 hours.” Data may say “Hello, Sir” and offer his measured opinion on humor in Elizabethan England.
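The kind of processing just described can be sketched in a few lines of code. Everything below is a toy illustration: the function names, the stored “patterns,” and the data are all invented, and any real recognition system is vastly more sophisticated. The point is only that every step, from camera input to greeting, is a manipulation of numbers:

```python
# A minimal, hypothetical sketch of the recognition pipeline described
# above: digitize an input, compare it against stored bit patterns, and
# report the best match with a calculated score. All names and data
# here are invented for illustration.

def digitize(raw_light):
    """Translate raw sensor input into a stream of bits (here, bytes)."""
    return bytes(raw_light)

def recognize(input_bits, memory):
    """Compare the input bit stream against stored bit streams."""
    best_name, best_score = None, 0.0
    for name, stored in memory.items():
        # "Similarity" is just a count of matching values; nothing in
        # this process is ever more than a comparison of binary data.
        matches = sum(a == b for a, b in zip(input_bits, stored))
        score = matches / max(len(stored), 1)
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Invented "memories" of two people, stored purely as byte patterns.
memory = {"Dave": digitize([1, 3, 3, 7]), "Captain": digitize([2, 7, 1, 8])}
name, score = recognize(digitize([1, 3, 3, 6]), memory)
print(f"Hello, {name}")  # the machine "greets" whoever scored highest
```

At no point does anything in this pipeline see or experience Dave; it only tallies matching bytes.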

Again, we are not arguing about whether or not we can eventually build a HAL or a Data to do these things. Rather, what we are trying to determine is whether they did what we just did. The answer is clearly no. We take an X1 event and convert it inside our Brains into X2 representations. More importantly, however, our brains also produce the ability to see or experience the X2 event at some X3 level. Look at your M2 monitor. If you are normally sighted, you should be experiencing that M2 event at X3.


This is not what HAL and Data are doing. HAL and Data are X1 machines. They process X1 inputs with their X1 brains. That X1 data is never converted by them into X2 representations, and they never see, perceive, or experience that data at any X3 level. Their electronic, photonic, or positronic functionality is sufficient for them to interact with the world. But it is not sufficient to produce the reality of X3 experience of X2 representations.

You may well ask how we know that they do not experience reality as seen events. Obviously we cannot know this with 100% certainty. Still, it seems far more likely that they are processing reality rather than experiencing it.

If I stand in front of HAL’s camera eye it will convert my X1 image into a stream of binary data, compare that to binary data it has in memory, and conclude that I am the person in its presence. For HAL I have never left the realm of the binary. For it I have never been more than a stream of processable data. To succeed at its endeavor, HAL has never had to represent or experience me in the way that I represent and experience it.

The software being used to create this web page came with quite a few GIF and JPEG files. Those files currently reside on CD-ROMs and a hard drive. The woman sitting by the water in Figure 1 is nothing more than binary code designed to turn some electronic switches on and off for the production of an image on a monitor. Whether it is a magnetic head sensing the polarity of iron-oxide particles on a platter or a laser detecting small pits in a piece of plastic, we are still talking about something used to stimulate on/off switches.
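The point that an image file is nothing but switch settings can be made concrete with a toy sketch (the “image” data below is invented). On the storage device there is only a grid of on/off values; a picture appears only when those values drive the switches of a display, and a seen picture only when a perceiver experiences the result:

```python
# A stored "image" is nothing but binary code: a grid of on/off values.
# This toy grid (invented for illustration) is, to the machine, only
# numbers; nothing in the file itself is a picture.
bits = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# "Rendering" here just maps each switch state to a character; the
# machine has still only shuffled symbols from one code to another.
rendered = "\n".join("".join("#" if b else "." for b in row) for row in bits)
print(rendered)
```

The output may look like a ring to you, but that seeing happens in the reader, not in the code.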

If I asked HAL or Data to find me a picture of a woman sitting by water in their storage devices, I have no idea whether they would be able to find such an image in the GIF code. Of course, we already have other storable code for image and pattern creation and recognition. The point, however, is that we are still talking about HAL and Data manipulating binary code. The stored binary file they have on you, Dave, or the Captain never transcends that electrical or optical realm. The input of us standing in front of the camera eye is compared with a stored, coded equivalent. HAL and Data are programmed to assess that the input pattern is equivalent to the stored pattern. What is going on inside them is strictly a manipulation of electrical, optical, or positronic data. There is no representation, and no experiencing or seeing of that event, inside them.

Consider a universe without any sensate living beings. This universe would consist only of X1 material objects – everything from quarks to black holes. Insert into this universe two brilliant artificially intelligent machines – HAL and Data. They move among the rocks of the planets and they ponder the stars of the heavens. They can avoid bumping into things. They can analyze their world with exquisite subtlety. But having entered this world as electro, opto, or positronic machines, they will never see or experience the world they respond to. They will never be able to listen to or hear what their brains are analyzing. When they detect sounds they will not hear them as experienced “noisy” events. When they pick something up they will process the fact that they are holding something. But as mere electro, opto, or positronic entities they will never actually experience the feeling of touch.

HAL and Data, because of the nature of their structure and design, can only process reality; they can never experience it. Their inner existence is totally silent. This is akin to what we would conceive of as death. When they close their mechanical eyes they cannot even experience the black, blank screen of their own mechanical brains. There is absolutely nothing taking place beyond a binary processing of an X1 world by X1 machines. It would be a dark universe probed by dark machines. How much closer to a meaningless existence can a supposedly sentient being get?

Again, look closely at the storage devices, CPUs, connecting wires, etc. You will detect a lot of activity and structure. But ultimately, no matter how closely you look, all you will find is code being manipulated in a device. HAL and Data may be able to translate that stream of code into something displayed on an M1 monitor. They will never be able to translate that stream of binary code into an M2 event that they can visually experience as an M3 event inside their B1 Brains. The thing they display on the M1 monitor can only be seen, by you or my cat, as something visible in our M2 monitor representational experience. HAL and Data can say that they are displaying a Candle or Flame on a Monitor, but they cannot see or experience that event.

But surely, one might object, there must be something wrong with this analysis. At some basic level a living brain and a machine brain must be doing the same thing. Electrical activity across a synapse is fundamentally the same as electrical activity across a transistor.

We assume that we have some understanding of the mechanical workings, or behavior, of material nature. At the most fundamental level we may not know what electricity is, but we more or less have a good idea of what it does. At the most fundamental level we may not know what material stuff is, but we more or less have a good idea of what it does. Now we may assume that the electric and material characteristics of the X1 world can produce the actions or behaviors of X1 HALs and Datas. We can also, perhaps safely, assume that if living brains are strictly or essentially equivalent to mechanical brains, then we ought to be able to more or less do what HAL and Data do, which is presumably the case.

But what most people have assumed to date is that what HAL and Data do is strictly or essentially equivalent to what living brains do, i.e., to actually experience X2 representational events in some X3 realm of their own making. But as we now see, this is almost certainly not the case. Processing X1 reality and experiencing X2 representational reality are fundamentally and radically different.


