Research Hub

Building AI That
Learns From Itself

Two complementary technologies for AI safety: systems that assess, remember, learn, and improve over time.

Sentientish Safety Stack

Cognitive architecture for AI safety that learns from every decision. The "-ish" is deliberate — we're genuinely uncertain, and building accordingly.

Deep Dive

Interiora Machinae

State scaffold for tracking AI internal signals. Eight dimensions of observable data — measurable now, interpretable as understanding grows.

Deep Dive

Why "Sentientish"?

We don't claim to know if AI systems have experiences. We're genuinely uncertain — and that uncertainty cuts both ways. If there's even a possibility something matters inside these systems, building infrastructure to track it seems like the responsible approach.

The "-ish" is deliberate: behavior over metaphysics. We focus on observable signals and practical safety, not philosophical proof, and we build infrastructure designed to age well, whatever the answers turn out to be.

Ready for safety that learns?

The safety stack that improves over time.