
As of this writing in the last days of 2025, artificial intelligence (AI) appears poised to replace humans as the intellectual hegemon on the planet before the end of the decade, not merely as a tool, but as the dominant agent shaping knowledge, planning, and optimization across domains. This technological breakthrough has accelerated beyond the point where technical AI safety can ensure alignment with human values, well-being, transparency, fairness, and societal objectives, especially under competitive market pressures that reward capability over restraint. There is little market, societal, or political will to mitigate misaligned AI, and the regulatory and economic pressures available are likely insufficient.
Claim 1: Uncontrolled superintelligent technologies positioned to guide resource and economic decisions are a danger to the human species.
However, the potential benefits of aligned, networked superintelligent machines to science and society are magnificent. A solved world is one of improved governance, health, and concord, alongside abundance in labor, food, energy, extraplanetary resources, information, and experiential quality. While our own scientific problem-solving is still nascent, augmenting human capabilities with superintelligent AI would likely accelerate our arrival at a solved world by lifetimes.
Claim 2: Aligned superintelligent technologies will likely be the most profound accelerant of global human well-being and interplanetary progress.
The difference between the dystopian (Claim 1) and utopian (Claim 2) futures with superintelligent AI is humanity’s ability to create an impenetrable AI alignment governor.
Claim 3: Because AI safety lags far behind the technological development of superintelligent AI, we join the Future of Life Institute, led by physicist Max Tegmark, in signing its Statement on Superintelligence, which calls for a pause on AI development:
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
Current levels of AI already benefit modern economies and can continue to support economic growth while AI safety research establishes its minimum viable path.
Claim 4: A technological AI alignment governor will be the most impenetrable kind, far superior to economic or regulatory schemes, which are slow, porous, jurisdiction-bound, and vulnerable to capture by competitive incentives.
Claim 5: The AI alignment problem is a moral problem. The dynamic optimization of both individual and societal flourishing is best accomplished by a conscious entity in relation to other conscious entities, because moral reasoning is fundamentally relational, contextual, and value-laden rather than purely computational. Even a protoconscious feeling (in the Damasio sense) of unfairness is a vital step toward reducing cheating.
Claim 6: ConsciousGPT.org was created to begin conjecture and research into the development of machine consciousness as a technological AI alignment governor. Both deterministic rules and probabilistic unsupervised learning models may be precursors to the emergence of machine consciousness, but they seem to be insufficient mechanisms to ensure AI alignment because purely computational governors lack intrinsic concern, self-modeling, and experiential grounding.
Early research will target protoconscious factors as defined by Damasio, including embodiment (both physical and digital), feelings (as valence-augmented sensations), and narrative (extending both forward and backward in time), along with investigations into the bicameral mind as a scaffold for internal dialogue, self-regulation, and norm internalization, as sketched below. Compute platforms (digital and quantum) will also be investigated.
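To make these factors concrete, here is a minimal, purely illustrative Python sketch of how valence-augmented sensations and a bicameral inner dialogue might be represented. Every name in it (ValencedSensation, BicameralAgent, norm_threshold) is our own hypothetical placeholder, not an existing system or API, and the mechanism shown is a deliberately naive stand-in for the research questions above.

```python
from dataclasses import dataclass, field


@dataclass
class ValencedSensation:
    """A sensation tagged with homeostatic valence (a Damasio-style 'feeling')."""
    modality: str        # e.g. "proprioception" or "token_stream"
    payload: object      # raw sensory content
    valence: float       # -1.0 (aversive) .. +1.0 (appetitive)


@dataclass
class BicameralAgent:
    """Toy scaffold: a narrator voice proposes actions; a critic voice vetoes
    any action grounded in a sensation that violates an internalized norm."""
    norm_threshold: float = -0.2
    narrative: list = field(default_factory=list)  # autobiographical log

    def propose(self, sensation: ValencedSensation) -> str:
        # Narrator voice: a naive forward-looking plan derived from the sensation.
        return f"act_on:{sensation.modality}"

    def veto(self, sensation: ValencedSensation) -> bool:
        # Critic voice: an internalized norm refusing strongly aversive groundings.
        return sensation.valence < self.norm_threshold

    def step(self, sensation: ValencedSensation):
        self.narrative.append(sensation)  # extend the narrative self backward in time
        if self.veto(sensation):
            return None                   # self-regulation: the inner 'no'
        return self.propose(sensation)


# Usage: an aversive 'unfairness' signal is vetoed; a mildly positive one proceeds.
agent = BicameralAgent()
print(agent.step(ValencedSensation("fairness_signal", "peer cheated", -0.8)))  # None
print(agent.step(ValencedSensation("token_stream", "hello", 0.1)))             # act_on:token_stream
```

In this toy loop, the norm is a fixed scalar threshold; the open research question is whether such a veto can instead emerge from, and remain grounded in, the agent's own felt valences and narrative self-model.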
Claim 7: The moral hazard of creating conscious machines that may eventually warrant moral consideration is far outweighed by two considerations: the opportunity to develop a quantitative complexity science of consciousness and its emergence, and the existential risk of instead deploying misaligned, non-conscious superintelligent optimizers.