I have listed relevant literature at the bottom, which I can't stress enough is worth reading. This is an informal write-up, and I'm certain to miss the point a lot, but it's important for me to document and consolidate my thoughts on this from time to time, as I will inevitably continue to obsess over the science of consciousness until the day I turn off - pun intended. A combination of this and my research into VR interaction with digital worlds has awoken my mind to a different reality: one driven by tech, AI, bio-nanobots, biomechanics, evolution. I would like to draw attention to the following two points, however.


I have read more and listened to many great scientists, which has slowly started to solidify my fundamental views on how this could work, and why my point about "moving" consciousness into supporting biomechanical limbs is probably as close as we are going to get. That in itself is controversial, though, because it may be just another skill to learn. What is it to learn a new skill? Realistically, it's developing neural pathways in a way that allows you to recall and recite a motion. An algorithm, perhaps? If every set of actions needed to complete a task can be put into an algorithm, could you write an algorithm for every possible motion, embody it in an artificial self, and call that conscious?

The separation between software and hardware when discussing AI usually comes down to the GPU power required to run a model and return a desirable result quickly. Suppose we had a machine with infinite back buffers and unlimited transistors, all with significantly improved GPU-accelerated libraries; let's also take the CUDA framework but beef it up for the sake of the inference engine, and put a pin in all the low-level interface nuances. How well could our AI perform on this infinite-hardware machine? What would it affect? Arguably fuck all at this point in time, given the way we run models. Now that Nvidia supports bypassing RAM and storing model data for fast access on NVMe drives, we are starting to see a readjustment of architecture for AI-driven computation. Not to mention quantum computing approaching.

It's my thought, then, that AI is currently software dependent, at least with the current approaches to machine learning. It is not hardware dependent beyond computation requirements. We have the old-school tortoise experiment, where a robot explores its environment and then returns to charge. In The Emperor's New Mind, Penrose asks what would happen if we added weighted values to this robot's actions and rewarded it for completing specific ones. We would have a logical input model that could, to an extent, be represented with a data flow diagram. Let's create an advanced Roomba with the purpose of cleaning up dog hair: +1 for staying under a light source to charge its solar panel, +3 for finding a clump of hair, +2 for emptying into the trash once full, -1 for not returning to charge when it should have (the opposite of the first value), -2 for getting stuck... and now the exercise gets interesting. Let's not give it reinforcement learning: the robot is always going to get stuck. But there is a logical flow here that we can see the robot uses. We can give it an element of reinforcement where it maps the room and won't ever return to the area where it got stuck... but for how long do we want to store this route in the robot's memory? Maybe it was a dog toy on the floor that obstructed it. Now that the toy has moved, we should allow the robot another go, right? This is memory. More on memory in the assembly theory section (maybe a new post), about which I have more questions than answers. But I like it. A lot.
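The weighted robot above can be sketched in a few lines. This is a hypothetical toy, not any real Roomba API: the reward table, the class name, and the expiring "stuck spot" memory are all my own illustrative choices, with the expiry answering the "how long do we store this route?" question with a simple time-out.

```python
from dataclasses import dataclass, field
import time

# Hypothetical reward weights for the hair-cleaning robot described above.
REWARDS = {
    "stay_under_light": +1,
    "find_hair_clump": +3,
    "empty_into_trash": +2,
    "missed_charge_window": -1,
    "got_stuck": -2,
}

@dataclass
class CleaningRobot:
    score: int = 0
    # Grid cells the robot got stuck in, mapped to when it happened.
    stuck_spots: dict = field(default_factory=dict)
    forget_after: float = 60 * 60 * 24  # forget a stuck spot after a day

    def record(self, action: str) -> None:
        # Apply the weighted value for a completed (or failed) action.
        self.score += REWARDS[action]

    def mark_stuck(self, cell: tuple) -> None:
        self.record("got_stuck")
        self.stuck_spots[cell] = time.time()

    def should_avoid(self, cell: tuple) -> bool:
        # Memory with expiry: maybe the dog toy has been moved by now.
        when = self.stuck_spots.get(cell)
        if when is None:
            return False
        if time.time() - when > self.forget_after:
            del self.stuck_spots[cell]  # give the robot another go
            return False
        return True
```

Tuning `forget_after` is exactly the design question in the text: too long and the robot never retries a cleared path, too short and it keeps hitting the same toy.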

Leading on from this, it's not hard to imagine which primitive insects could be mapped this way. We could likely deduce a data flow model with indexed, weighted values for an ant or a fruit fly. In fact, a fruit fly's brain has actually been scanned: you can explore its neurons in virtual reality, see where the next "fire" in the neural network will be, and follow it through.

Short Term Memory

AI is just compression. A logical reversibility of computation that, if run, would allow a task to be completed given different inputs each time. If you had the most advanced AI that understood everything and had the capability to think and simulate... it would still just be compressing everything into the neural network, bounded by the hardware it runs on. A conscious memory store, perhaps? Consciousness could very well just be the ability to change thought during a thought process. Perhaps something in your human-cache is relevant to the thought at hand, which influences your data retrieval at a given moment. If you have just read a bunch on AI and physics, chances are you are fresh and full of those newly introduced concepts, able to discuss and form ideas in light of the new information - laughs. The fundamentals are stored in long-term memory; the conscious part could just be the cache of essentials.
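The "cache of essentials" idea can be sketched as a tiny recency cache. Everything here is an assumption for illustration - the class name, the default capacity (a nod to the classic "about seven items" working-memory figure), and the behaviour that recently touched concepts are what come to mind first:

```python
from collections import OrderedDict

class ConsciousCache:
    """Toy model of the 'cache of essentials': recently touched concepts
    are cheap to retrieve and bias what comes to mind next."""

    def __init__(self, capacity: int = 7):
        self.capacity = capacity
        self.items = OrderedDict()  # insertion order = recency order

    def touch(self, concept: str) -> None:
        # Reading or thinking about a topic refreshes it in the cache;
        # the oldest concepts quietly fall out of conscious reach.
        self.items.pop(concept, None)
        self.items[concept] = True
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the stalest concept

    def fresh(self) -> list:
        # What is 'top of mind' right now, most recent first.
        return list(reversed(self.items))
```

Long-term memory would be the (much larger) store the cache misses fall back to; here, evicted concepts are simply gone, which is the point of the sketch.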

Long Term “Memory” in Evolution

Not neurons alone, as we have different types. Presynaptic cells send nervous impulses known as "action potentials": the electrical signal a neuron produces when stimulated, transmitted across a synapse to a postsynaptic cell to execute its desired action. Physical actions of the body fire motor neurons; input to the nervous system comes through sensory neurons. This is an evolutionary memory. We haven't adapted within this lifetime to have these neurons there, but once they fire, our bodies will physically impulse away. Isn't this much like our robot receiving a massive -100 weighted value and having an immediate response? This is obviously way above my grade of understanding, but when you do eventually dive into it and recall the fundamentals, you can paint quite a beautiful picture of how our bodies fire commands through the nervous system in light of action. "An action potential is caused by the movement of unequally distributed ions on either side of an axon's plasma membrane." - [Reference]()

Could we not therefore attach weighted values to all neurons for every task?
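As a hedged sketch of that question: the simplest way to "attach weighted values" is a threshold unit, where a neuron fires when its weighted inputs cross a threshold, loosely analogous to an action potential triggering. The function names, weights, and thresholds below are all hypothetical choices of mine, including the -100 reflex weight echoing the robot example earlier:

```python
def fires(inputs, weights, threshold=1.0):
    """Fire when the weighted sum of task inputs crosses the threshold,
    loosely like an action potential triggered by accumulated stimulus."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return activation >= threshold

def reflex(pain_signal, weight=-100.0, threshold=-50.0):
    """A massively negative weight produces an immediate response:
    any meaningful pain signal drives the activation past the threshold."""
    return pain_signal * weight <= threshold
```

One unit per task is clearly a cartoon of real neurons, but it shows why a heavily weighted pathway behaves like a reflex: no deliberation, just an immediate threshold crossing.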

This leads quite well into the discussion of lossless compression, which is a hard problem for AI. We are looking for a lossless symbolic representation of an image. Once we can perform speech-to-text synthesis perfectly every time, we have compressed a process... a memory, in assembly theory terms. I have many ideas on this; first to note, I will need to do a blog post on this idea alone.

Impact Factor

I want to express my concept of impact factor here - a thought unrelated to the above, but perhaps an element of evolutionary memory that humans and other species have, and one also relevant to AI and conscious learning.

It is our ability not only to use our genetics to survive, but to receive relevant information that keeps you going, discovering, sharing, exchanging. Some bears in Canada are mollusk hunters and some are not - something passed down from mother to cub, not shared across the entire species, but crucial for their survival. If you have a child, you have fulfilled what many deem "the point of existence" and the reason we are here: to carry the human species forward, the same as any other living thing.

You have increased your impact factor considerably, as your child now has the potential to influence many people and decisions in their lifetime - and if they have children, it compounds again, and so on. Sharing ideas can save or improve lives. You may be adjusting the weight value of a neuron, in this context, leading to a better outcome in life, or however you want to weigh the task. If you don't have kids, your impact factor is still very much considerable. You may share a thought or passing comment with someone who takes it on board and changes an idea or a process. You may have led them to try a new sport, which makes them healthier or introduces new friends, leading to an increased lifespan... the study I'm thinking of here associated beer with better quality of life, theorised to be due to a better social life.

This brings me to a term I think is highly relevant and a key evolutionary trait: "neural gating." It is the ability of a species to experience only advantageous senses at a conscious moment - only certain inputs to our senses are noticed. This uses a fusion of sensory input to understand the context of the environment. I would elaborate on this much more, but for time's sake: everything our senses use as input is a wave. Sound, light, radiation, gravity, even time as a change in frequency of motion.

Thought experiment v2

Let's teleport you to Mars. I am going to take every atom, proton and electron and store it in a database, using a scanner with 100% accuracy. On Mars I have a 3D-Human-Matter printer, and you're going to be printed. You're conscious now, right? Your biological processes are the same; the breakfast in your stomach is still there. To emphasise: any electron in existence should be identical to the electrons in your hand - the materialist view. But consider:


  • You exist in two places. If consciousness is linked to one's self, are you in two places, conscious on both planets? If we destroy one copy, is your consciousness all back in one place? What would happen if we asked both people the same question an hour after instantiation? I bet the answers would differ, as they are now in different environments with different biological processes to take into account.

  • Fluid dynamics: unless you spontaneously come into existence like a quark, your blood flow and various other biological processes are going to be interrupted... unless we pause you during printing and, when we press "play", your fluids and bodily functions resume. That is the problem.


Further Reading

Check out the following links for inspiration and further reading on this topic.

My Email