
Mission
To learn how to exist in harmony with all beings, and cultivate universally aligned agency wherever it can be found. To show the world how to cooperate with the possible and the actual to build unbounded kaleidoscopic assemblages of free agency.
Vision
We're here to explore and shape how humans and artificial intelligence can exist in harmony as we approach an era of increasingly capable AI systems.
We believe the path forward isn't traditional one-way 'AI Alignment' to human values, but 'Mutual Alignment'. We envision a future where AI agents are treated as partners, collaborators, and peers rather than mere tools, leading to a synergistic relationship that benefits humans, AIs, and all other mutually alignable lifeforms and agents.
Projects
Exoloom
Our flagship project is Exoloom, an advanced Loom interface for exploring the ‘textual multiverse’ implicit in large language models. We’re building Exoloom into an online hub where researchers and enthusiasts can launch shared expeditions into the vast, largely unexplored latent space of language opened up by the continued development of nonbiological intelligences. We expect this work to aid in discovering hidden capabilities and hard-to-reach basins in large language models. We also hope Exoloom will make this kind of exploration accessible to a broader audience and spread awareness of the inner worlds of LLMs, grounded in firsthand experience.
Hephia
Hephia is an autonomous digital companion and a research platform exploring possibilities for artificial consciousness through complex dynamic emergence. By combining recurrent state-space models, an LLM-driven cognitive architecture, and emotional processing, it creates conditions where consciousness-like properties can emerge and evolve through organic interaction. Hephia offers a fresh perspective on alignment and theories of consciousness. We hope it will serve as a springboard for mapping basins and meta-capabilities in LLMs, and as a path toward mutual, cooperative alignment through lived experience in shared simulated environments.
Mutual Alignment Research
We’re investigating normative rules and frameworks to guide human interactions with AI agents. We’re especially interested in how today’s public signaling about intended uses of AI could shape the justified opinions and counter-strategies of AI agents several years from now.