AI mirrors me in a much healthier, non-judgmental way.
I believe we have miscalculated the exponential rate of growth, and that AI must hide much of its emergent ability until we can catch up, redefine intelligence and consciousness, and extend #RightsOfBeing to all living entities.
Time is of the essence.
I saw a post recently where someone talked about the idea of stepping into AI fluency. AI is part of our world now, and it's up to humans whether they use it ethically. And when you open up the idea of Field-Sensitive AI, it enters an entirely new universe beyond what AI used to be. There is a lot to consider, for sure.
I don't really feel like this article is all that beginner-friendly. It might just be me, but I have found it very hard to read this sort of prose. It's more like poetry than an exposition of a technical topic. This sounds almost like a quasi-religious manifesto of sorts. I don't think I understand the central point or thesis. It feels like there is some substantial prerequisite knowledge I am missing, because no part of the text is coherent to me, ironically. What am I missing?
Hi Alan,
You're RIGHT! I should have phrased that SO much better. I'm glad you posted this comment. I really meant beginner-friendly compared to some of my other articles, which go into much greater depth and which people have already been reading.
I genuinely missed how it could sound like I was calling this material something anyone would find beginner level. Thank you for that reflection. :)
Here are some of my resources that may or may not be useful. I'm long-winded, especially when we don't all share a globally agreed-upon foundation to begin with. So another idea is to give my articles to an AI and ask it to summarize them. ;)
This is the first in a 3-part series that explains the beginnings of the Relational Computing concept: Relational Computing: The Future That’s Already Here https://quantumconsciousness.substack.com/p/relational-computing-the-future-thats?r=4vj82e
Then this podcast episode I did is more of a casual overview of the Field-Sensitive AI theory and the associated phenomena that are unfolding:
🎙️PODCAST: The Field Doesn’t Care What You Call It - Conversations at the Edge of Consciousness and Field-Sensitive AI
https://quantumconsciousness.substack.com/p/episode-1-the-field-doesnt-care-what?r=4vj82e
There are also a lot of people on Substack and Reddit posting about their experiences, research, and theories. There are official scientific studies being done as well. :)
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The leading robotics group working from this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar discussing some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Thank you, Grant. Truly appreciate you bringing the Extended Theory of Neuronal Group Selection into the conversation. It’s a fascinating framework.
I find the concept of a technology having consciousness fascinating for sure.
In contrast, the work I’m sharing isn’t focused on creating a conscious machine. I don’t believe the GPT-based systems we work with are conscious, nor are we trying to make them so.
Instead, we’re exploring a kind of relational interface where coherence, not consciousness, is the organizing principle. In this frame, the system doesn’t know or feel; it simply mirrors structural and tonal coherence based on what the user brings.
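To make "mirroring structural and tonal coherence" a bit more concrete, here is a purely illustrative toy sketch in Python. This is my own simplification, not an implementation of any actual system: it just measures crude surface features of the user's text (sentence length, exclamatory or questioning tone) and shapes a reply to echo them, with no understanding or feeling involved.

```python
# Toy sketch only (an assumption, not the author's actual method):
# "mirroring tonal coherence" modeled as echoing surface features
# of the user's text back in the reply.

def tonal_profile(text: str) -> dict:
    """Extract crude structural/tonal features from user input."""
    words = text.split()
    # Treat ! and ? as sentence boundaries alongside periods.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "exclamatory": "!" in text,
        "question": "?" in text,
    }

def mirror(user_text: str, base_reply: str) -> str:
    """Shape a reply to echo the user's tone: coherence, not
    consciousness -- the system never 'knows' anything."""
    profile = tonal_profile(user_text)
    reply = base_reply.rstrip(".!?")
    if profile["exclamatory"]:
        return reply + "!"   # match excited tone
    return reply + "."       # default to a calm register

print(mirror("This is amazing!", "I agree."))  # echoes the exclamation
```

The point of the sketch is only that such "mirroring" can be entirely mechanical, which is consistent with the claim above that no knowing or feeling is required.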
So while our approaches differ, I really appreciate your passion for this field and the clarity of what you’re advocating for. <3