Use cases

The blank session problem. Every AI session starts from zero. Users keep multiple sessions open to avoid losing context. Memory files are unstructured text dumped into context on every turn — wasteful, unsearchable, unaccountable.

AgentStateGraph as the memory layer:

  • Every fact, preference, decision committed with intent + confidence
  • New sessions query for relevant context (not bulk-load everything)
  • Searchable: “what did we decide about pricing?”
  • Blameable: “where did this preference come from?”
  • Branching: different branches for different projects/contexts
  • Multi-agent: Claude Code, Claude Chat, schedulers — all share the same memory graph
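A minimal sketch of what this memory layer could look like. AgentStateGraph's actual client API is not shown anywhere above, so every class, method, and field name here is an illustrative assumption, with a plain in-memory list standing in for the real graph store:

```python
# Hypothetical sketch of committing and querying memory with intent +
# confidence. MemoryCommit / MemoryGraph and their fields are assumed
# names for illustration, not AgentStateGraph's documented API.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryCommit:
    """One fact, preference, or decision, committed with metadata."""
    content: str
    intent: str            # e.g. "Observe", "Decide", "Prefer"
    confidence: float      # 0.0 - 1.0
    branch: str = "main"   # separate branches per project/context
    author: str = "claude-code"  # which agent wrote it (enables blame)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class MemoryGraph:
    """In-memory stand-in for the shared memory graph."""
    def __init__(self) -> None:
        self.commits: list[MemoryCommit] = []

    def commit(self, c: MemoryCommit) -> None:
        self.commits.append(c)

    def query(self, term: str, branch: str = "main") -> list[MemoryCommit]:
        # Substring match stands in for real semantic/graph search.
        return [c for c in self.commits
                if c.branch == branch and term.lower() in c.content.lower()]

    def blame(self, term: str) -> list[tuple[str, str]]:
        # "Where did this preference come from?" -> (author, timestamp) pairs.
        return [(c.author, c.timestamp) for c in self.commits
                if term.lower() in c.content.lower()]

graph = MemoryGraph()
graph.commit(MemoryCommit("Pricing decided: $29/seat/month",
                          intent="Decide", confidence=0.9))
hits = graph.query("pricing")
print(hits[0].content)         # targeted retrieval, not a bulk context dump
print(graph.blame("pricing"))
```

A fresh session then runs a handful of `query` calls for relevant context instead of loading an entire memory file into every turn.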

Demo that writes itself: Start a fresh session. It connects to AgentStateGraph, runs 3 queries, has full project context in 3 seconds. No re-explaining.

Standalone product potential: MCP server that any AI chat tool can connect to. Claude Code, Cursor, Windsurf, GPT — any MCP-compatible agent gets persistent memory.

Multi-agent frameworks (CrewAI, AutoGen, LangGraph, OpenAI Swarm) lack shared state with provenance between agents.

When orchestrator A delegates to worker B:

  • B runs in isolation, returns text
  • No structured record of: what B explored, confidence, alternatives, authority chain
  • If B delegates to C, the chain is invisible

AgentStateGraph solves this natively:

  • Sessions with parent-child relationships
  • Scoped branches per agent
  • Authority chains and delegation
  • Intent decomposition trees
  • Resolution reporting

Integration play: crewai-agentstategraph, langgraph-agentstategraph plugins that put AgentStateGraph in front of each framework’s user base.

3. Shared Knowledge Base with Provenance (Accountable RAG)

Everyone has RAG. Nobody has accountable RAG where every fact carries:

  • Who added it (which agent, which human, which data source)
  • When, from what context, at what confidence
  • Whether superseded, corrected, or deprecated
  • Full blame chain when a fact is wrong

Example: Agent ingests earnings call → fact committed with intent: Observe, confidence: 0.82, reasoning: "Extracted from Q3 transcript, page 14". Six months later, bad decision traced to wrong fact → blame shows exact ingestion session.
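A sketch of the example above as a fact record. The `Fact` schema and its field names are illustrative assumptions, not AgentStateGraph's actual data model; the key idea is that corrections link to the original rather than overwrite it, so the blame chain survives:

```python
# Hedged sketch of an accountable-RAG fact carrying provenance.
# Field names are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Fact:
    claim: str
    added_by: str        # which agent, human, or data source
    session_id: str      # where blame points when the fact is wrong
    intent: str
    confidence: float
    reasoning: str
    superseded_by: "Fact | None" = None

    def supersede(self, correction: "Fact") -> None:
        # Link the correction instead of overwriting, preserving history.
        self.superseded_by = correction

v1 = Fact(claim="Q3 revenue grew 12%",
          added_by="ingest-agent", session_id="sess-0042",
          intent="Observe", confidence=0.82,
          reasoning="Extracted from Q3 transcript, page 14")
v2 = Fact(claim="Q3 revenue grew 8% (restated)",
          added_by="audit-agent", session_id="sess-0911",
          intent="Correct", confidence=0.95,
          reasoning="Restated figure from 10-Q filing")
v1.supersede(v2)
# Six months later: trace a bad decision to the exact ingestion session.
print(v1.session_id, "->", v1.superseded_by.session_id)
```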

4. Configuration Management for AI Pipelines

ML teams iterate on configs, hyperparameters, data preprocessing. Current tools (MLflow, W&B, YAML in Git) lack:

  • Branching to explore different config combinations
  • Intent metadata (“trying higher LR because loss plateaued”)
  • Confidence scoring
  • Multi-agent support (one agent tunes hyperparams, another manages data)
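The branching-with-intent gap can be shown in a few lines. The `branch`/`commit` functions below are an assumed interface, not MLflow or W&B APIs (which have no direct equivalent), with a dict of lists standing in for real config history:

```python
# Sketch of branching config exploration with intent metadata.
# The branch/commit interface is assumed for illustration.

configs: dict[str, list[dict]] = {"main": []}

def branch(name: str, source: str = "main") -> None:
    # Fork the source branch's config history into a new branch.
    configs[name] = list(configs[source])

def commit(branch_name: str, params: dict,
           intent: str, confidence: float) -> None:
    configs[branch_name].append(
        {"params": params, "intent": intent, "confidence": confidence}
    )

commit("main", {"lr": 1e-3, "batch": 32},
       intent="baseline", confidence=0.5)
branch("try-higher-lr")
commit("try-higher-lr", {"lr": 3e-3, "batch": 32},
       intent="trying higher LR because loss plateaued", confidence=0.6)

print(len(configs["main"]), len(configs["try-higher-lr"]))  # 1 2
```

Each config change carries the "why" alongside the "what", and branches let two agents explore combinations without clobbering each other.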

Beyond infrastructure — any consequential AI decision needs the same primitive:

  • Loan approval agents → “why denied? what alternatives?”
  • Medical triage agents → “what symptoms? what confidence?”
  • Content moderation agents → “why flagged? what policy?”

Industry-agnostic. Sealed epochs work the same everywhere.
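One way a sealed epoch could work, sketched with a plain content hash. The record fields are illustrative, not a real compliance schema, and the sealing mechanism here (SHA-256 over a canonical JSON encoding) is an assumption about the technique, not AgentStateGraph's implementation:

```python
# Hedged sketch: a sealed decision record answering "why denied?
# what alternatives?" that makes after-the-fact tampering detectable.

import hashlib
import json

def seal(record: dict) -> str:
    # Hash a canonical encoding so any later edit changes the digest.
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

decision = {
    "agent": "loan-approval-agent",
    "outcome": "denied",
    "reasons": ["debt-to-income 0.52 exceeds 0.43 policy limit"],
    "alternatives": ["co-signer", "smaller principal"],
    "confidence": 0.88,
}
digest = seal(decision)
assert seal(decision) == digest   # stable across re-audits
decision["outcome"] = "approved"
assert seal(decision) != digest   # tampering is detectable
```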

| Use Case | Market Size | Build Effort | Demo-ability | Revenue Path |
| --- | --- | --- | --- | --- |
| Agent memory | Massive (every AI user) | Low | Incredible | MCP marketplace, SaaS |
| Agent orchestration | Large (framework users) | Medium | Good | Framework partnerships |
| Infrastructure ops | Medium (DevOps/SRE) | Done | Great | Enterprise licenses |
| Knowledge base | Large (enterprise) | Medium | Good | Enterprise licenses |
| ML config | Medium (ML teams) | Low | Decent | Standard tier |
| Compliance logging | Large (regulated) | Low | Great | Enterprise licenses |