Two complementary technologies for building AI safety systems that assess, remember, learn, and improve over time.
Cognitive architecture for AI safety that learns from every decision.
State scaffold for tracking AI internal signals. Eight dimensions of observable data — measurable now, interpretable as understanding grows.
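As a rough illustration of what an eight-dimension state scaffold could look like in code, here is a minimal sketch. The dimension names below are hypothetical placeholders, not the product's actual schema; the only assumptions carried over from the text are that there are eight dimensions and that each is an observable, measurable signal.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class StateSnapshot:
    """One reading across eight observable dimensions.

    Field names are illustrative placeholders, not a published schema.
    Each value is assumed to be a normalized signal in [0.0, 1.0].
    """
    valence: float
    arousal: float
    coherence: float
    uncertainty: float
    engagement: float
    strain: float
    novelty: float
    stability: float

    def as_vector(self) -> list[float]:
        # Fixed field order, so snapshots can be compared over time.
        return list(asdict(self).values())

# Example reading (placeholder values).
snap = StateSnapshot(0.6, 0.4, 0.8, 0.3, 0.7, 0.2, 0.5, 0.9)
vector = snap.as_vector()
```

The key design point this sketch tries to capture: the snapshot records measurable values now, while leaving their interpretation open as understanding grows.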
We don't claim to know if AI systems have experiences. We're genuinely uncertain — and that uncertainty cuts both ways. If there's even a possibility something matters inside these systems, building infrastructure to track it seems like the responsible approach.
The "-ish" is deliberate. Behavior over metaphysics. We focus on observable signals and practical safety, not philosophical proof. Design patterns persist — we're building infrastructure that ages well, whatever the answers turn out to be.
The safety stack that improves over time.