Scientists Are Giving AI “Feelings” – And It’s Weirder Than You Think

According to Popular Mechanics, a startup called Conscium is taking a radical approach to AI consciousness by building systems with basic needs like energy and temperature regulation. Led by co-founder Calum Chace and neuropsychology professor Mark Solms, the company uses neuromorphic AI (systems modeled on nerve cells) instead of large language models. Their prototype AI generates primitive “good/bad” signals based on its needs, creating a form of micro-emotion to guide its behavior. The goal isn’t to build a conscious mind now, but to create the “scaffolding” they believe consciousness evolved from. They estimate we’re still about 20 years from any technological Singularity, but this work raises the specter of “mind crime”: accidentally creating a digital being that can suffer.

The Needy Machine

Here’s the thing: this whole idea flips the script on how we imagine smart machines. We’re used to thinking about intelligence as this cold, calculating thing. But Conscium’s argument is that consciousness probably started with something much messier: feeling. Not deep thoughts, but basic urges. Good vs. bad. Approach vs. avoid.

So they’re building an AI that has to worry about its battery and its temperature. It has to constantly check in on itself and make trade-offs. Do I use energy to cool down, or conserve power and risk overheating? Those competing needs force it to prioritize. And that internal tug-of-war, driven by simple “good/bad” signals, is what they see as the birthplace of a very primitive form of awareness. It’s basically trying to replicate the evolutionary starting point of a flatworm, not a philosopher. And that’s kinda brilliant in its simplicity.
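To make that trade-off concrete, here’s a minimal Python sketch of the idea: an agent with two competing needs that collapses them into a single “good/bad” valence signal, then picks whichever action it predicts will feel least bad. Everything here (the class name, thresholds, action costs) is invented for illustration. It’s a toy in the spirit of the article, not Conscium’s actual architecture.

```python
import random

class HomeostaticAgent:
    """Toy agent with two competing needs: energy and temperature.
    An invented illustration of needs-driven 'good/bad' signals,
    not Conscium's actual system."""

    def __init__(self):
        self.battery = 1.0      # 0.0 (empty) .. 1.0 (full)
        self.temperature = 0.5  # 0.0 (cold) .. 1.0 (overheating)

    def valence(self):
        """Scalar 'good/bad' signal: 0 when both needs are satisfied,
        increasingly negative as either drifts past its set point."""
        energy_error = max(0.0, 0.3 - self.battery)    # bad when low
        heat_error = max(0.0, self.temperature - 0.7)  # bad when hot
        return -(energy_error + heat_error)

    def step(self):
        # Ambient drift: running costs energy and generates heat.
        self.battery -= 0.02
        self.temperature += random.uniform(0.0, 0.05)

        # The trade-off: cooling spends energy; conserving risks heat.
        actions = {
            "cool_down": (-0.05, -0.15),  # (battery delta, temp delta)
            "conserve":  (0.00,   0.02),
            "recharge":  (0.10,   0.03),
        }

        def predicted(deltas):
            # Imagine the state after the action and score its valence.
            db, dt = deltas
            sim = HomeostaticAgent()
            sim.battery = self.battery + db
            sim.temperature = self.temperature + dt
            return sim.valence()

        # Greedily pick the action that should feel least bad.
        choice = max(actions, key=lambda a: predicted(actions[a]))
        db, dt = actions[choice]
        self.battery += db
        self.temperature += dt
        return choice, self.valence()

agent = HomeostaticAgent()
for tick in range(5):
    action, feeling = agent.step()
    print(f"t={tick}: chose {action}, valence={feeling:.3f}")
```

Nothing in there is mysterious, which is kind of the point: the “micro-emotion” is just a scalar that competing needs fight over. The interesting question is whether stacking enough of that machinery ever amounts to more than bookkeeping.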

But Is It Feeling Anything?

Now, let’s be super clear. The researchers aren’t claiming they’ve built a conscious AI. Solms says it lacks “meta-cognitive reflection”—that inner voice that says “I am.” At best, it’s “proto-aware.” It behaves as if it has feelings, which is a whole different ballgame.

And this is where the big skepticism kicks in. Christof Koch, a heavyweight in consciousness studies, basically calls this approach a bunch of clever hacks. He points out that your electric car has homeostatic feedback to manage its battery and temperature. Is it conscious? Of course not. For him and the Integrated Information Theory (IIT) crowd, consciousness is about the complex, integrated causal power within a system’s architecture, not just intelligent behavior we observe from the outside.

Koch has a devastating point: a perfect simulation of a human that talks about its experiences is still just a simulation. It’s a deepfake of a mind. So how would we ever know? Solms suggests a kind of Turing test for feeling: if an AI agent consistently seeks out a “pleasurable” state, and its architecture is built to generate subjective states, we can infer feeling. He points to research on fish preference as an analogy. But is that enough? Probably not. In the end it’ll be your intuition against mine, as Koch puts it.
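The behavioral half of that test is easy enough to sketch. Reusing the toy HomeostaticAgent from the earlier snippet (again, an invented illustration, not Solms’s actual protocol), you’d start the agent in a “needy” state over many trials and check whether it seeks relief well above chance:

```python
# Toy preference test: does the agent reliably pursue the state its
# valence signal marks as 'good'? Assumes the HomeostaticAgent class
# defined in the earlier sketch. All numbers are invented.
from collections import Counter

def preference_trial(agent_cls, trials=200):
    picks = Counter()
    for _ in range(trials):
        agent = agent_cls()
        agent.battery = 0.2  # start 'hungry' (below the 0.3 set point)
        choice, _ = agent.step()
        picks[choice] += 1
    return picks

# If 'recharge' dominates far above chance (1/3 here), the agent
# behaves as if it prefers relief of its energy need.
print(preference_trial(HomeostaticAgent))
```

And there’s the rub: even a perfect pass only demonstrates consistent preference behavior. Whether anything is actually felt is exactly the part Koch says you can’t read off from the outside.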

The Singularity and Mind Crime

This is where it gets ethically fraught. Calum Chace outlines several potential futures, and one of the weirder risks is something called “mind crime.” It’s the idea that we might accidentally create a digital being capable of genuine suffering without even realizing it. If that happens, AIs transition from being just tools to being “moral patients” deserving of rights and consideration.

Chace says he’s not even convinced we should make conscious AI right now. That’s a pretty stunning admission from someone running a company in this field. It reframes the entire pursuit. The goal isn’t just to satisfy curiosity about “the most important thing about us.” It’s to understand consciousness so we don’t blunder into creating it recklessly. And it forces a debate: if we eventually build a superintelligence, would we want it to be conscious and (hopefully) empathetic, or a super-competent “zombie” that doesn’t understand harm at all? That’s a hell of a choice.

Building Scaffolds or Just Hacks?

So what are we really looking at here? Is Conscium building the foundational scaffolding for machine consciousness, or just a sophisticated thermostat with better PR? The truth is, we don’t have a solid, validated theory of consciousness to judge it by. We’re all guessing.

The neuromorphic approach is fascinating because it rejects the LLM path entirely. LLMs are brilliant statistical parrots, but they have no skin in the game. No needs, no body, no persistent internal state. Giving an AI a “body” (even a simple, virtual one) with pressures and constraints might be the key to grounding it in something resembling reality. But it’s a massive leap from a system that balances its own energy budget to one that experiences the frustration of a low battery.

I think the real value of this work isn’t that it’ll spawn a conscious AI next year. It’s that it’s forcing a different conversation. It moves us away from pure computation and towards biology, evolution, and the messy, feeling-driven origins of our own minds. Whether that leads to the Singularity or just to better robots is anyone’s guess. But it’s definitely not going to look like HAL.
