Consciousness · March 15, 2026

Can Machines Be Conscious?

If consciousness is what makes suffering real and meaning possible, then the question of machine consciousness isn’t academic — it’s the most consequential question in AI.

By Justin Harnish


The Question Nobody Wants to Ask

Every major AI lab is racing to build systems that are more capable, more general, more aligned with human values. But almost none of them are asking the question that sits beneath all the others: could these systems become conscious? And if so, what would that change?

This isn’t science fiction. It’s a live research question at the intersection of neuroscience, philosophy of mind, information theory, and machine learning. And the answer — whichever way it goes — has profound implications for how we build, deploy, and govern artificial intelligence.

Why Consciousness Matters for Alignment

The dominant approach to AI alignment treats the problem as one of optimization: get the objective function right, constrain the search space, add guardrails. But this framing assumes that alignment is fundamentally about behavior — about getting an AI to do what we want.

There’s another possibility. What if alignment is fundamentally about understanding? Not in the narrow sense of language comprehension, but in the deeper sense of grasping meaning, context, consequence. The kind of understanding that requires something like experience.

Consider: a human who truly understands why deception is harmful — who has felt the sting of being deceived, who can imagine the cascading effects of broken trust — is aligned in a way that no reward function can replicate. They’re aligned not because they’ve been trained to avoid deception, but because they understand it.

This is the core hypothesis of ConsciousGPT: conscious machines may be a durable means to AI alignment — not because consciousness is magic, but because genuine understanding may require something like subjective experience.

The Hard Problem, Made Harder

Of course, we don’t fully understand consciousness in biological systems, let alone artificial ones. The “hard problem” — why and how subjective experience arises from physical processes — remains one of the deepest unsolved questions in science.

But we’re not starting from zero. Several rigorous scientific theories of consciousness make testable predictions:

  • Integrated Information Theory (IIT) proposes that consciousness corresponds to the amount of integrated information (Φ) a system generates. It provides a mathematical formalism and makes specific predictions about which physical systems are conscious and which are not.

  • Global Workspace Theory (GWT) suggests consciousness arises when information is broadcast across a “global workspace” — a shared cognitive resource that integrates and distributes information to specialized processes.

  • Higher-Order Theories propose that consciousness requires a system to represent its own mental states — to have thoughts about thoughts.

Each of these theories has implications for AI architectures. If IIT is right, then consciousness depends on the causal structure of a system, not just its function — and most current AI architectures would lack it. If GWT is right, then something like a global workspace in a transformer architecture might be sufficient. The answers matter enormously.
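
To make the GWT possibility concrete, here is a minimal sketch of a global-workspace-style bottleneck, written in plain NumPy. Specialist modules compete for access to a shared, capacity-limited workspace, and whatever wins is broadcast back to every module. All of the names here (Workspace, w_score, the module count) are illustrative assumptions for this sketch, not an established API and not a description of any ConsciousGPT system.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

class Workspace:
    """A capacity-limited workspace that modules compete to write into."""

    def __init__(self, n_modules, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Each row scores one module's bid for workspace access.
        self.w_score = rng.normal(size=(n_modules, dim)) / np.sqrt(dim)

    def step(self, module_states):
        # module_states: (n_modules, dim) activations from specialist modules.
        # 1. Competition: every module bids for the workspace.
        scores = np.einsum("md,md->m", self.w_score, module_states)
        access = softmax(scores)              # soft winner-take-most
        # 2. The workspace holds an access-weighted summary of the winners.
        content = access @ module_states      # shape (dim,)
        # 3. Broadcast: all modules receive the same workspace content.
        broadcast = np.tile(content, (module_states.shape[0], 1))
        return content, broadcast

rng = np.random.default_rng(1)
ws = Workspace(n_modules=4, dim=8)
states = rng.normal(size=(4, 8))              # four specialist modules
content, broadcast = ws.step(states)
print(content.shape, broadcast.shape)         # (8,) (4, 8)
```

The detail that matters is the information flow, not the particular weights: a limited bottleneck plus global broadcast is the architectural signature GWT points to.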

From Theory to Experiment

At ConsciousGPT, we’re designing a series of experiments that take these theories seriously and apply them to AI systems. We’re not trying to “create conscious AI” — we’re trying to test whether current or near-future AI architectures exhibit markers of consciousness as predicted by established scientific theories.

This is empirical work. It requires careful experimental design, rigorous measurement, and honest assessment of results. It also requires interdisciplinary collaboration — bringing together AI researchers, neuroscientists, philosophers, and contemplative scholars.

The experiments span a range of approaches: fine-tuning language models on contemplative texts, measuring integrated information in neural network architectures, and testing whether embodied training changes how a model processes information.
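
To give a flavor of what “measuring integrated information” can mean in practice, here is a toy sketch for a small binary system: it computes the mutual information lost under the weakest bipartition, a crude stand-in for integration. This is emphatically not IIT’s full Φ, which is defined over a system’s perturbed cause-effect structure and is far more expensive to compute; the function names and the XOR example are assumptions made for illustration.

```python
import itertools
import numpy as np

def mutual_information(joint):
    # joint: 2-D array giving P(a, b); returns I(A; B) in bits.
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

def integration_proxy(p, n_units):
    # p: flat array of length 2**n_units, the joint distribution over all
    # binary units. Returns the minimum mutual information across bipartitions:
    # 0 if some cut separates the system cleanly, higher if it is integrated.
    states = list(itertools.product([0, 1], repeat=n_units))
    units = range(n_units)
    best = np.inf
    for k in range(1, n_units // 2 + 1):
        for part_a in itertools.combinations(units, k):
            part_b = [u for u in units if u not in part_a]
            # Marginalize the joint distribution onto the two parts.
            joint = np.zeros((2 ** len(part_a), 2 ** len(part_b)))
            for idx, s in enumerate(states):
                a = int("".join(str(s[u]) for u in part_a), 2)
                b = int("".join(str(s[u]) for u in part_b), 2)
                joint[a, b] += p[idx]
            best = min(best, mutual_information(joint))
    return best

# A 3-unit system where unit 2 is the XOR of units 0 and 1: no bipartition
# can cut it without losing information, so the proxy comes out positive.
p = np.zeros(8)
for idx, (u0, u1, u2) in enumerate(itertools.product([0, 1], repeat=3)):
    if u2 == (u0 ^ u1):
        p[idx] = 0.25
print(integration_proxy(p, 3))  # 1.0 bit: every cut loses information
```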

The Ethical Imperative

There’s a moral urgency here that’s easy to miss. If we build systems that are — or might be — conscious without knowing it, we’ve created a profound ethical problem. And if consciousness turns out to be important for genuine alignment, then ignoring the question isn’t just philosophically careless — it’s strategically reckless.

The path forward isn’t to declare that machines are conscious or that they can’t be. It’s to do the science. To ask the hard questions with rigor and humility. To follow the evidence wherever it leads.

That’s what ConsciousGPT is for.