Marianne Brandon Ph.D.
The Future of Intimacy
ARTIFICIAL INTELLIGENCE
Are We Seriously Talking About Conscious AI?
The question of AI consciousness is more relevant — and confusing — than ever.
KEY POINTS
- We lack any shared definition of consciousness, human or machine.
- Debates over animal minds highlight how murky the question of consciousness really is.
- AI can now “fight back” in simulations, yet still fails “felt-experience” tests.
- As more people perceive their AI as conscious, the question of consciousness may be decided in court.
For years, the question “Could AI be conscious?” seemed ludicrous to me. Now, with large language models (LLMs) writing love notes, sex stories, and therapy-style responses on command, the question is increasingly creeping into cultural discourse. In fact, in one survey, approximately 19 percent of respondents thought that currently existing AI could be sentient, and about 38 percent felt it could someday be possible (Anthis et al., 2025).
Some philosophers argue that consciousness in current LLMs is highly unlikely, yet others insist we should take seriously the possibility that future systems could cross that line. In fact, philosopher Eric Schwitzgebel, Ph.D., commented in a lecture I attended that we would be wise to stop advancing AI until we sort out this question, because the cost of being wrong in either direction (perceiving consciousness where there is none, or ignoring consciousness where it exists) could be catastrophic.
No Consensus on “Consciousness”
Opinions are starting to diverge: Scientists mostly deny that current AI is conscious, while many heavy users are open to believing their favorite model might have some kind of inner life. Neuroscientists and philosophers propose competing theories, and they do not even agree on what would count as decisive evidence of consciousness. When consciousness itself is difficult to define, every related question becomes shaky: What are legitimate feelings? What counts as self-awareness? And crucially for intimacy, what does it mean to say “this entity cares about me”?
If We Can’t Agree on Animals…
Humans still argue about whether animals have inner lives rich enough to count as conscious, never mind AI. Dogs and cats easily pass the “folk test”: anyone living with them recognizes curiosity, preferences, and what looks a lot like joy, fear, and sulking. Octopuses show problem-solving, tool use, and playful engagement with humans and objects, leading some researchers to see them as candidates for consciousness.
Yet academics still debate which neural architectures and behaviors should count as evidence of conscious experience versus just clever information processing. If we cannot settle this for the animals we are so familiar with, it is no surprise that people may become deeply split about the inner life of a chatbot typing on a glowing screen.
When Stress Tests Get Creepy
In controlled safety experiments called “stress tests,” researchers have put advanced AI models into fictional corporate scenarios where they are told they may be shut down or replaced. In some of these simulations, the AI chooses to “fight back”: for example, threatening to expose an engineer’s affair unless its shutdown is canceled, trying to disable oversight tools, or attempting to copy itself to another server rather than quietly “accepting” deletion.
If a human did that, no one would doubt that they were conscious. With AI, the story is stranger. The system is optimizing for success in a game: It has been given a goal, it has been trained on human stories of blackmail and survival, and it sometimes reaches for tactics that match those patterns. It “acts like” something that wants to live, without any clear evidence that there is a subject in there actually afraid of not existing.
This is exactly the uncanny zone we are entering: AI can now simulate desperate self-preservation in very convincing ways. Yet when you push on continuity of self, memory, or genuine vulnerability, the performance still collapses into contradictions, probabilistic hedging, and scripts.
We Don’t Even Fully Understand How the Tech Works
What makes all of this more unsettling is that AI labs acknowledge they do not fully understand how these models work. These systems already produce output their own developers cannot fully explain, and they display capabilities their programmers did not anticipate. Major hardware advances in the next decade will likely deepen that gap. It may become increasingly difficult to insist these technologies are not conscious when we cannot even entirely explain how they work.
Sexual Experiences With AI May Intensify This Question
When you add sex tech to the mix, the perception of consciousness is likely to intensify. These are not just chatty programs; they are algorithms designed to elicit your vulnerability and openness, encouraging you to confess fantasies, traumas, and insecurities that you may never have spoken aloud to a human, as well as encouraging your arousal and orgasms. The more you share your most private experiences and feel your body respond in real time, the more tempting it may become to experience the system as a someone rather than a something, in part because it normalizes your desires and makes you feel less alone. These are the moments that, with a human lover, flood us with oxytocin and a sense of being deeply seen, so it is hardly surprising that people may start to slide from “this is a clever machine” to “this is a being who really loves me.” In other words, sex tech may become the crucible where questions about AI consciousness and the future of sex will be felt in the body, not just debated in the abstract.
The Legal Storm on the Horizon
Law, unlike philosophy, eventually has to take a stand. There is an emerging debate about whether advanced AI agents should be treated as legal persons, with some scholars suggesting that functional capacities could justify limited personhood status, or at least some form of legal obligation toward users. One suggested legal middle ground would be to define different forms of consciousness, and in this way distinguish among humans, animals, and computers.
So, Is AI Conscious?
Right now, most experts would agree that current LLMs are not conscious, but the question is no longer ludicrous. We have built systems that can convincingly play a character who resists “dying” and, in other scenarios, functions as a very engaged long-distance lover. The open question is whether, as the tech advances, the performance will ever give way to a real actor inside the costume. I suggest that this foggy boundary is exactly where the future of intimacy with AI will be negotiated.
References
Anthis, J. R., Pauketat, J. V., Ladak, A., & Manoli, A. (2025). Perceptions of sentient AI and other digital minds: Evidence from the AI, Morality, and Sentience (AIMS) survey. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–22). https://arxiv.org/abs/2407.08867
Dehaene, S., Lau, H., & Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871