feat: schema builder DSL and memory policy developer experience #39

Merged
shawkatkabbara merged 3 commits into main from feat/memory-policy-dx
Feb 23, 2026
Conversation

@shawkatkabbara
Contributor

Summary

  • Adds new papr_memory.lib module — a decorator-based DSL for defining graph schemas, constraints, and memory policies with full IDE support
  • Introduces Auto("prompt") for per-field LLM extraction guidance, e.g. Auto("Summarize the incident in 1-2 sentences") serializes to {"mode": "auto", "prompt": "..."}
  • Updates README with comprehensive Graph Schemas & Memory Policies documentation

New Module: papr_memory.lib

| File | Purpose |
| --- | --- |
| `_properties.py` | `Auto`, `prop()`, `exact()`, `semantic()`, `fuzzy()`, `PropertyRef`, `edge()` |
| `_schema.py` | `@schema`, `@node`, `@lookup`, `@upsert`, `@resolve`, `@constraint` decorators |
| `_builders.py` | `build_schema_params()`, `build_link_to()`, `build_memory_policy()`, `serialize_set_values()` |
| `_conditions.py` | `And()`, `Or()`, `Not()` conditional operators for `when` clauses |

Auto("prompt") Example

from papr_memory.lib import (
    And, Or, Not, Auto,
    node, upsert, constraint,
    prop, exact, semantic,
)

@node
@upsert
@constraint(
    when=And(
        Or({"severity": "high"}, {"severity": "critical"}),
        Not({"status": "resolved"}),
    ),
    set={
        "flagged": True,
        "summary": Auto("Summarize the security incident in 1-2 sentences"),
    },
)
class Alert:
    alert_id: str = prop(search=exact())
    title: str = prop(required=True, search=semantic(0.85))
    severity: str = prop()
    status: str = prop()
    flagged: bool = prop()
    summary: str = prop()

Auto() (no args) continues to work as before — Auto("prompt") is additive.
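The wire format above can be sketched with a minimal stand-in class (illustrative only; the real implementation lives in `papr_memory.lib._properties`):

```python
# Minimal stand-in illustrating the Auto wire format described above;
# not the SDK's actual class.
class Auto:
    def __init__(self, prompt=None):
        self.prompt = prompt

    def to_dict(self):
        d = {"mode": "auto"}
        if self.prompt is not None:
            d["prompt"] = self.prompt  # key only present when a prompt was given
        return d
```

`Auto()` with no arguments omits the `prompt` key entirely, which is what keeps the old serialization byte-for-byte identical.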

Dependencies

  • Backend PR: Papr-ai/memory — adds prompt field to PropertyValue model and injects extraction guidance into the LLM structured output schema
  • This SDK PR should be merged after the backend PR lands

Test plan

  • Auto().to_dict() → {"mode": "auto"} (backwards compatible)
  • Auto("Summarize briefly").to_dict() → {"mode": "auto", "prompt": "Summarize briefly"}
  • serialize_set_values({"summary": Auto("prompt")}) includes prompt in output
  • @constraint(set={"summary": Auto("prompt")}) → build_schema_params() round-trips correctly
  • All 104 tests pass across test_properties, test_builders, test_conditions, test_schema_decorators, test_integration

🤖 Generated with Claude Code

Adds a new `papr_memory.lib` module providing a decorator-based DSL for
defining graph schemas, node constraints, and memory policies with full
IDE support.

Key additions:
- @schema, @node, @lookup, @upsert, @resolve, @constraint decorators
- prop(), edge(), exact(), semantic(), fuzzy() helpers
- Auto("prompt") — per-field LLM extraction guidance
  e.g. Auto("Summarize the incident in 1-2 sentences")
  serializes to {"mode": "auto", "prompt": "..."}
- build_schema_params(), build_link_to(), build_memory_policy(),
  serialize_set_values() builder functions
- And(), Or(), Not() conditional operators for when clauses
- README updated with full Graph Schemas & Memory Policies docs
  including Auto("prompt") example in Conditional Constraints section

Tests: 104 passing across test_properties, test_builders,
test_conditions, test_schema_decorators, and test_integration

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@amirkabbara
Contributor

Nice! Can we use a similar summary prompt template for Auto as the one I have for messages, with short-, medium-, and long-term summaries? Also, this updates the existing summary, right? It reads the summary, sees the new memory, and updates the summary if needed?

Should we allow them to choose when to summarize (e.g. daily, after x memories, etc.)?

@amirkabbara left a comment
Contributor

see comment in conversation

@shawkatkabbara
Contributor Author

Great questions! Here's the breakdown:

Summary prompt templates

Yes! Auto("prompt") is exactly this. Devs can pass the same kind of prompt templates we use in the messages service (short/med/long term summaries) as per-field extraction guidance. e.g.:

@constraint(set={
    "short_summary": Auto("Summarize this batch in 2-3 sentences focusing on key decisions and progress"),
    "long_summary": Auto("Full session summary: main arc, overall progress, key outcomes in 5-7 sentences"),
})

The prompts get injected into the LLM's structured output schema as EXTRACTION GUIDANCE: <prompt> on each property description.
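A rough sketch of that injection step (the function name and schema shape here are assumptions for illustration; the actual logic lives in the backend PR):

```python
def inject_guidance(properties, prompts):
    """Append per-field extraction guidance to property descriptions in a
    structured-output JSON schema. Hypothetical helper, not backend code."""
    out = {}
    for name, spec in properties.items():
        spec = dict(spec)  # copy so the input schema is not mutated
        if name in prompts:
            desc = spec.get("description", "")
            guidance = f"EXTRACTION GUIDANCE: {prompts[name]}"
            spec["description"] = f"{desc} {guidance}".strip()
        out[name] = spec
    return out
```

The LLM then sees the guidance inline with each field it is asked to fill, so no separate prompt plumbing is needed per property.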

Updates existing summary — already supported via text_mode

The backend already has 3 modes (implemented and tested in node_constraint_resolver.py):

| Mode | Wire format | Behavior |
| --- | --- | --- |
| replace (default) | `{"mode": "auto"}` | Overwrites the existing value |
| append | `{"mode": "auto", "text_mode": "append"}` | Adds to the end of the existing text |
| merge | `{"mode": "auto", "text_mode": "merge"}` | LLM reads existing + new, combines intelligently |

So for an evolving summary that reads existing and updates:

"summary": {"mode": "auto", "text_mode": "merge", "prompt": "Update the summary incorporating new information"}

The LLM sees the existing summary + new memory content and produces a merged result.
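The three modes boil down to a simple dispatch, sketched naively below (in the real resolver, merge is LLM-driven, not string concatenation):

```python
def resolve_text(existing, generated, text_mode="replace"):
    # Naive illustration of the three text_mode behaviors; the backend's
    # "merge" asks the LLM to combine the texts instead of joining strings.
    if existing and text_mode == "append":
        return existing + "\n" + generated
    if existing and text_mode == "merge":
        return f"{existing} {generated}"  # placeholder for the LLM merge
    return generated  # "replace" (default), or no existing value yet
```

Note that append and merge both fall back to plain replacement when there is no existing value, which matches the first-write case.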

When to summarize (daily, after x memories)

Designed but not yet built. The enrich feature in our docs (ENRICH_AND_CONTEXT_DESIGN.md) has EnrichTrigger with on_create, on_update, on_access, scheduled, and manual triggers, plus max_age for staleness-based re-enrichment (e.g. "P7D" = every 7 days). Good candidate for a follow-up PR to pair with Auto("prompt").
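A hypothetical shape for that trigger config, based only on the names listed in the design doc (types, defaults, and the `kind` field are guesses, since this is unbuilt):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrichTrigger:
    # Trigger kinds named in ENRICH_AND_CONTEXT_DESIGN.md:
    # on_create | on_update | on_access | scheduled | manual.
    # Everything else here is an assumption for illustration.
    kind: str = "on_update"
    max_age: Optional[str] = None  # ISO 8601 duration, e.g. "P7D" = re-enrich weekly
```

Pairing something like `EnrichTrigger(kind="scheduled", max_age="P7D")` with an `Auto("prompt")` field would give the "summarize every N days" behavior asked about above.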

Clarification: this PR does NOT add summary by default to all nodes

To be clear — this PR gives developers the tool to define per-field extraction prompts on their own schema constraints. It does not automatically add a summary property to every node type. The developer has to explicitly opt in by defining summary as a property on their node and using Auto("prompt") in a @constraint(set={...}). No default behavior changes for existing schemas.

shawkatkabbara and others added 2 commits February 20, 2026 16:57
…lint errors

- Resolve merge conflict in .stats.yml that caused Prism mock server to
  fail downloading the OpenAPI spec (two URLs concatenated)
- Fix all 46 ruff lint errors: unsorted imports (I001), unused imports (F401)
- Add `lib` to __all__ in __init__.py to satisfy F401 for re-exported module
- Restore TestImportPaths with noqa directive for intentional import verification

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…add type ignores

- Remove duplicate NodeTypesConstraintSet TypeAlias (identical copy)
- Remove duplicate RelationshipTypesConstraint class and its orphaned
  helper types (RelationshipTypesConstraintSearchProperty,
  RelationshipTypesConstraintSearch, RelationshipTypesConstraintSetPropertyValue)
  that were left over from a bad merge in user_graph_schema_output.py
- Add type: ignore comments for pre-existing pyright issues in generated
  code (_base_client.py, _model_cache.py, _utils/_typing.py, resources/memory.py)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@shawkatkabbara shawkatkabbara self-assigned this Feb 23, 2026
@shawkatkabbara shawkatkabbara merged commit 29370d4 into main Feb 23, 2026
7 checks passed
@stainless-app stainless-app bot mentioned this pull request Feb 23, 2026