You may not yet be convinced of this argument. Granted, a hard drive is almost certainly not aware (in any sense in which we are using the term) of the GIF files recorded on it. Granted, too, you can probably say the same for your standard-issue RAM or CPU chip. Further, current software does not seem to employ awareness functions, even with neural networks. But William Gates III will have it on NT v7.0 or v8.0 when Merced III becomes available, if there is some chance of making another thirty or forty billion dollars by doing so. For right now, though, we are talking about something which is not available rather than something which is technically impossible to achieve.
We have to assume the self-evident probability that we are only material beings and that our brains are strictly material in nature. We know the basic functions of material reality. We are therefore almost compelled to conclude that whatever difference exists between brain and computer function can eventually be explained in terms of known processes.
For example, we might build computers that store data holographically. Obviously the holograms themselves would have no sense of awareness or experience of themselves. But the holograms could be used by the computer as artificial X2 representations of X1 objects. If the computer needs an X2 representational space, it can transfer the holographic information from its storage or visual processing centers to its representational processing centers. It might have micro-spaces in which to project micro-holographic images that are viewed through a micro-holographic camera. These are all mere technical problems, not metaphysical ones.
True. But the computer must still look at the holographic image through its camera eye, or have the ability to interpret the information through its own processing functions and software. If the computer converts the holographic inputs into holographic information sent to other parts of the system, this next level is still working on essentially the same form of data.
We can call holograms, code, or any images on M1 monitors X2 representations. But these events never break out of their electrical, optical, or positronic code status for the computer. X2 never becomes an experienced X3 event for the computer. The computer could attempt an infinite regress and still never break out of its processed data reality.
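The point that a machine's "image" never breaks out of its code status can be made concrete. The short Python sketch below (a hypothetical illustration, not anything drawn from the text itself) builds a tiny picture as the computer actually holds it: an array of bare integers. Every operation the machine performs on it yields only more integers, never an experienced color.

```python
# A "picture" as a computer holds it: nothing but numbers.
# A 2x2 image stored as raw RGB values per pixel.
image = [
    [(255, 0, 0), (0, 255, 0)],      # red pixel, green pixel
    [(0, 0, 255), (255, 255, 255)],  # blue pixel, white pixel
]

# Flattening, copying, or transmitting the image produces only
# more integers - an X2 representation, never an X3 experience.
flattened = [channel for row in image for pixel in row for channel in pixel]
print(flattened[:6])  # the first two pixels as bare integers: [255, 0, 0, 0, 255, 0]
```

However many times these values are copied from one subsystem to another, each stage works on the same form of data, which is the regress the paragraph above describes.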
Even if we could make computers aware experiencers at the X3 level using holograms, the small problem remains that there is no evidence that we have holograms or lasers inside us. We cannot say that we experience reality holographically when we are not holographic by nature.
Reality at the quantum level is full of strange activity, so perhaps the brain operates at the sub-atomic level to produce X3 events. Yet quantum mechanics may not do much to solve the problem under consideration. If the brain cannot experience electrons jumping across synapses as part of X2 or X3 events, how is it any more helpful to move the process to some sub-atomic level? We still have a sub-atomic representation in a sub-atomic space being looked at by a sub-atomic quantum eye, which sends the data off to some other sub-sub-atomic space for viewing by its sub-sub-atomic camera, and so on, ad infinitum. We have demonstrated reasonably well that current electronic devices and software are not engaged in any kind of awareness activity whatsoever. If we do more of the same using quantum computers, we get no closer to creating X3 awareness of X2 representations.
What is true for HAL or Data - that they process reality rather than experience it - would also seem true for a living material brain. The electrical activity in the visual cortex is just electrical activity in the visual cortex and nothing more. The electrical activity across a synapse may be the equivalent of a charge on a transistor or a pixel on a computer monitor. Yet few people would argue that the pixel, the monitor, or the CPU and graphics chips controlling that pixel are in any way aware of that point of coloration in the way you or I are aware of any given point, or collection of points, in our experienced space. If that electrical activity is jumping the synapse on its way to some other awareness neurons, nothing has changed once the impulse reaches its new destination. That bit of electrical data may correspond to, and be triggered by, an event in the external world, but the passage of that bit of data from the visual cortex to some other location in the brain does not produce - or, more precisely, is not some small aspect of - the event you are experiencing. Further, the electro-material brain can pass that information back from the original point in an infinite regression, and it will not change the central point: the experienced reality event that is X3 is ultimately something other than the X1 electrical event of a B1 brain.
Thales used water, or material reality, as the first cause of events in the world. For him, adding transcendence to the process seemed to create the problem of infinite regression, and it is true that structuring a world view around material reality seemed to avoid such regression. When X1 data enters B1, it is presumably not entering a realm of such regression. The problem is that, on our current understanding, there is nothing about the nature of the external data, material substance, brains, electrical activity, etc., that is able to produce the reality of our X3 experience. Yet we are confronted with the certainty of that experience, whatever its ultimate nature may be.