If you're familiar with AI, there's a good chance flickers of I, Robot, Blade Runner, or even Cyberpunk 2077 flash through your mind. That's because the philosophy and ethics of what AI could become are more interesting than the thing that makes AI Overviews serve you the wrong search results.
In a recent blog post (via TechCrunch), Microsoft's CEO of AI, Mustafa Suleyman, penned his thoughts on those advocating for conscious AI, and his worry that one day people will be campaigning for its rights.
He builds on the concern that AI can fuel a specific type of psychosis. "Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare and even AI citizenship." He continues, "This development will be a dangerous turn in AI progress and deserves our immediate attention."
For some, AI is a worrying development, partly because of how confident it is in its statements. To the layman, it seems not only always correct but always open to conversation, and this (as Suleyman's link to Copilot suggests) can result in users deifying the chatbot "as a supreme intelligence" or believing "it holds cosmic answers".
This is an understandable concern. We need only look at the recent case of a man who gave himself an incredibly rare ailment after consulting ChatGPT on how to cut down his salt intake to get an idea of what Suleyman is talking about.
"AI's value is precisely because it's something so different from humans. Never tired, infinitely patient, able to process more data than a human mind ever could. This is what benefits humanity. Not an AI that claims to feel shame, jealousy, fear + so on." — August 21, 2025
Suleyman argues AI should never replace a person, and that AI companions need "guardrails" to "ensure this amazing technology can do its job." He notes that "some academics" are exploring the idea of model welfare: effectively, the belief that we owe some moral duty to beings that have a chance of being conscious. Suleyman states, "This is both premature, and frankly dangerous."
Suleyman says, “We need to be clear: SCAI [seemingly conscious AI] is something to avoid.” He says that SCAI would be a combination of language, empathetic personality, memory, a claim of subjective experience, a sense of self, intrinsic motivation, goal setting and planning, and autonomy.
He also argues that this will not emerge naturally from these models. "It will arise only because some may engineer it, by creating and combining the aforementioned list of capabilities, largely using existing techniques, and packaging them in such a fluid way that collectively they give the impression of an SCAI."
"Our sci-fi inspired imaginations lead us to fear that a system could, without design intent, somehow emerge the capabilities of runaway self-improvement or deception. This is an unhelpful and simplistic anthropomorphism."
Suleyman warns, "someone in your wider circle could start going down the rabbit hole of believing their AI is a conscious digital person. This isn't healthy for them, for society, or for those of us making these systems."
It's all a rather self-reflective blog post, starting with its title: "We must build AI for people; not to be a person". And I think this hits at some of the tension I feel around these tools. Suleyman opens his post with "I write, to think", and this is the most relatable part of the whole piece. I also write to think, and I don't plan on letting an AI bot replace that part of me. I may have a contractual obligation not to use it, but more importantly, I want my words to be mine, no matter how good or bad they are.