feat: schema builder DSL and memory policy developer experience #39
shawkatkabbara merged 3 commits into main
Conversation
Adds a new `papr_memory.lib` module providing a decorator-based DSL for defining graph schemas, node constraints, and memory policies with full IDE support. Key additions:

- `@Schema`, `@node`, `@lookup`, `@upsert`, `@resolve`, `@constraint` decorators
- `prop()`, `edge()`, `exact()`, `semantic()`, `fuzzy()` helpers
- `Auto("prompt")` — per-field LLM extraction guidance, e.g. `Auto("Summarize the incident in 1-2 sentences")` serializes to `{"mode": "auto", "prompt": "..."}`
- `build_schema_params()`, `build_link_to()`, `build_memory_policy()`, `serialize_set_values()` builder functions
- `And()`, `Or()`, `Not()` conditional operators for `when` clauses
- README updated with full Graph Schemas & Memory Policies docs, including an `Auto("prompt")` example in the Conditional Constraints section

Tests: 104 passing across `test_properties`, `test_builders`, `test_conditions`, `test_schema_decorators`, and `test_integration`

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
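As a rough illustration of how a decorator-based schema DSL along these lines works, here is a toy stand-in. The names (`@node`, `prop()`) follow the PR description, but the bodies below are simplified sketches, not the actual `papr_memory.lib` implementation, and the `SCHEMA_REGISTRY` mechanism is an assumption for demonstration only.

```python
from __future__ import annotations

# Hypothetical registry; the real SDK presumably builds request params instead.
SCHEMA_REGISTRY: dict[str, dict] = {}

def prop(type: str = "string", required: bool = False) -> dict:
    """Declare a property spec as a class attribute (toy version)."""
    return {"type": type, "required": required}

def node(cls):
    """Class decorator: collect prop() attributes into the schema registry."""
    props = {name: value for name, value in vars(cls).items()
             if isinstance(value, dict) and "type" in value}
    SCHEMA_REGISTRY[cls.__name__] = {"properties": props}
    return cls

@node
class Incident:
    title = prop("string", required=True)
    summary = prop("string")

print(SCHEMA_REGISTRY["Incident"])
# {'properties': {'title': {'type': 'string', 'required': True},
#                 'summary': {'type': 'string', 'required': False}}}
```

The appeal of the pattern is that the node definition is a plain class, so IDEs get autocomplete and type checking for free.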
Nice! Can we use a similar summary prompt template for auto as I have for messages, with short-, medium-, and long-term summaries? Also, this updates the existing summary, right? It reads the summary, sees the new memory, and updates the summary if needed? Should we allow them to choose when to summarize (i.e. daily, after x memories, etc.)?
amirkabbara
left a comment
see comment in conversation
Great questions! Here's the breakdown:

Summary prompt templates

Yes!

```python
@constraint(set={
    "short_summary": Auto("Summarize this batch in 2-3 sentences focusing on key decisions and progress"),
    "long_summary": Auto("Full session summary: main arc, overall progress, key outcomes in 5-7 sentences"),
})
```

The prompts get injected into the LLM's structured output schema.

Updates existing summary — already supported via `text_mode`
| Mode | Wire Format | Behavior |
|---|---|---|
| `replace` (default) | `{"mode": "auto"}` | Overwrites existing value |
| `append` | `{"mode": "auto", "text_mode": "append"}` | Adds to end of existing text |
| `merge` | `{"mode": "auto", "text_mode": "merge"}` | LLM reads existing + new, combines intelligently |
So for an evolving summary that reads the existing value and updates it:

```json
"summary": {"mode": "auto", "text_mode": "merge", "prompt": "Update the summary incorporating new information"}
```

The LLM sees the existing summary + new memory content and produces a merged result.
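A minimal sketch of how a client could assemble these wire formats. `build_auto_value` is a hypothetical helper name, not part of the shipped SDK (which builds these dicts via `Auto(...)` and the builder functions):

```python
from __future__ import annotations

def build_auto_value(text_mode: str = "replace", prompt: str | None = None) -> dict:
    """Build the {"mode": "auto", ...} wire format (illustrative only)."""
    value: dict = {"mode": "auto"}
    if text_mode != "replace":  # replace is the default and omitted on the wire
        value["text_mode"] = text_mode
    if prompt is not None:
        value["prompt"] = prompt
    return value

# An evolving summary: read existing value, merge in new memory content.
print(build_auto_value("merge", "Update the summary incorporating new information"))
# {'mode': 'auto', 'text_mode': 'merge', 'prompt': 'Update the summary incorporating new information'}
```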
When to summarize (daily, after x memories)
Designed but not yet built. The enrich feature in our docs (ENRICH_AND_CONTEXT_DESIGN.md) has `EnrichTrigger` with `on_create`, `on_update`, `on_access`, `scheduled`, and `manual` triggers, plus `max_age` for staleness-based re-enrichment (e.g. `"P7D"` = every 7 days). Good candidate for a follow-up PR to pair with `Auto("prompt")`.
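A sketch of the staleness check a `scheduled` trigger could run, assuming `max_age` is an ISO-8601 duration like `"P7D"` (names taken from the design doc; the helper below is hypothetical and parses only simple `PnD`/`PTnH` forms for illustration):

```python
from __future__ import annotations
import re
from datetime import datetime, timedelta, timezone

def parse_max_age(value: str) -> timedelta:
    """Parse a simple ISO-8601 duration like 'P7D' or 'PT12H' (sketch only)."""
    m = re.fullmatch(r"P(?:(\d+)D)?(?:T(\d+)H)?", value)
    if not m:
        raise ValueError(f"unsupported duration: {value}")
    days, hours = (int(g) if g else 0 for g in m.groups())
    return timedelta(days=days, hours=hours)

def is_stale(last_enriched: datetime, max_age: str, now: datetime | None = None) -> bool:
    """True when the memory is older than max_age and due for re-enrichment."""
    now = now or datetime.now(timezone.utc)
    return now - last_enriched > parse_max_age(max_age)

now = datetime(2025, 1, 10, tzinfo=timezone.utc)
print(is_stale(datetime(2025, 1, 1, tzinfo=timezone.utc), "P7D", now))  # True
print(is_stale(datetime(2025, 1, 8, tzinfo=timezone.utc), "P7D", now))  # False
```

"After x memories" would be a counter check along the same lines instead of an age check.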
Clarification: this PR does NOT add summary by default to all nodes
To be clear — this PR gives developers the tool to define per-field extraction prompts on their own schema constraints. It does not automatically add a `summary` property to every node type. The developer has to explicitly opt in by defining `summary` as a property on their node and using `Auto("prompt")` in a `@constraint(set={...})`. No default behavior changes for existing schemas.
…lint errors

- Resolve merge conflict in .stats.yml that caused the Prism mock server to fail downloading the OpenAPI spec (two URLs concatenated)
- Fix all 46 ruff lint errors: unsorted imports (I001), unused imports (F401)
- Add `lib` to `__all__` in `__init__.py` to satisfy F401 for the re-exported module
- Restore TestImportPaths with a noqa directive for intentional import verification

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…add type ignores

- Remove duplicate NodeTypesConstraintSet TypeAlias (identical copy)
- Remove duplicate RelationshipTypesConstraint class and its orphaned helper types (RelationshipTypesConstraintSearchProperty, RelationshipTypesConstraintSearch, RelationshipTypesConstraintSetPropertyValue) left over from a bad merge in user_graph_schema_output.py
- Add `type: ignore` comments for pre-existing pyright issues in generated code (_base_client.py, _model_cache.py, _utils/_typing.py, resources/memory.py)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- `papr_memory.lib` module — a decorator-based DSL for defining graph schemas, constraints, and memory policies with full IDE support
- `Auto("prompt")` for per-field LLM extraction guidance, e.g. `Auto("Summarize the incident in 1-2 sentences")` serializes to `{"mode": "auto", "prompt": "..."}`

New Module: `papr_memory.lib`

- `_properties.py` — `Auto`, `prop()`, `exact()`, `semantic()`, `fuzzy()`, `PropertyRef`, `edge()`
- `_schema.py` — `@schema`, `@node`, `@lookup`, `@upsert`, `@resolve`, `@constraint` decorators
- `_builders.py` — `build_schema_params()`, `build_link_to()`, `build_memory_policy()`, `serialize_set_values()`
- `_conditions.py` — `And()`, `Or()`, `Not()` conditional operators for `when` clauses

Auto("prompt") Example
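To show the serialization behavior described in this PR, here is a minimal stand-in for `Auto` and `serialize_set_values` — the wire formats match the test plan below, but the class body is a simplified sketch, not the shipped `papr_memory.lib` code:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Auto:
    """Per-field LLM extraction guidance (stand-in for papr_memory.lib.Auto)."""
    prompt: str | None = None

    def to_dict(self) -> dict:
        out: dict = {"mode": "auto"}
        if self.prompt is not None:
            out["prompt"] = self.prompt
        return out

def serialize_set_values(values: dict) -> dict:
    """Serialize a @constraint(set={...}) mapping to its wire format (sketch)."""
    return {k: v.to_dict() if isinstance(v, Auto) else v for k, v in values.items()}

print(Auto().to_dict())  # {'mode': 'auto'} — backwards compatible
print(serialize_set_values({"summary": Auto("Summarize briefly")}))
# {'summary': {'mode': 'auto', 'prompt': 'Summarize briefly'}}
```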
`Auto()` (no args) continues to work as before — `Auto("prompt")` is additive.

Dependencies
Adds a `prompt` field to the `PropertyValue` model and injects extraction guidance into the LLM structured output schema.

Test plan
- `Auto().to_dict()` → `{"mode": "auto"}` (backwards compatible)
- `Auto("Summarize briefly").to_dict()` → `{"mode": "auto", "prompt": "Summarize briefly"}`
- `serialize_set_values({"summary": Auto("prompt")})` includes the prompt in its output
- `@constraint(set={"summary": Auto("prompt")})` → `build_schema_params()` round-trips correctly
- Test suites: `test_properties`, `test_builders`, `test_conditions`, `test_schema_decorators`, `test_integration`

🤖 Generated with Claude Code