
Tags: PostHog/posthog-python


v7.7.0

feat(ai): add OpenAI Agents SDK integration (#408)

* feat(ai): add OpenAI Agents SDK integration

Add PostHogTracingProcessor that implements the OpenAI Agents SDK
TracingProcessor interface to capture agent traces in PostHog.

- Maps GenerationSpanData to $ai_generation events
- Maps FunctionSpanData, AgentSpanData, HandoffSpanData, GuardrailSpanData
  to $ai_span events with appropriate types
- Supports privacy mode, groups, and custom properties
- Includes instrument() helper for one-liner setup
- 22 unit tests covering all span types
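
A minimal usage sketch follows; the import path and constructor arguments are assumptions drawn from this commit message (the package ships as openai_agents, per the setuptools fix below), so check the posthog-python docs for the exact API.

```python
# Sketch only: the import path and constructor arguments are assumptions drawn
# from this commit message, not verified against the published package.
from agents import Agent, Runner, add_trace_processor  # OpenAI Agents SDK

from posthog import Posthog
from posthog.ai.openai_agents import PostHogTracingProcessor  # hypothetical path

client = Posthog("<ph_project_api_key>", host="https://us.i.posthog.com")

# Register the processor so agent runs are captured as $ai_generation / $ai_span events.
add_trace_processor(
    PostHogTracingProcessor(
        client=client,            # assumed kwarg
        distinct_id="user_123",   # may also be a callable resolver, per the commit
        privacy_mode=False,
    )
)

# The commit also mentions an instrument() helper for one-liner setup:
#   from posthog.ai.openai_agents import instrument
#   instrument(client=client, distinct_id="user_123")

agent = Agent(name="assistant", instructions="Be helpful.")
result = Runner.run_sync(agent, "Hello!")
```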

* feat(openai-agents): add $ai_group_id support for linking conversation traces

- Capture group_id from trace and include as $ai_group_id on all events
- Add _get_group_id() helper to retrieve group_id from trace metadata
- Pass group_id through all span handlers (generation, function, agent, handoff, guardrail, response, custom, audio, mcp, generic)
- Enables linking multiple traces in the same conversation thread
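
A hedged sketch of how a conversation thread might be grouped: the group_id comes from the Agents SDK's own trace() context manager, and the processor surfaces it as $ai_group_id.

```python
# Sketch: the Agents SDK's trace() accepts a group_id; the processor would
# surface it as $ai_group_id on every event in the trace.
from agents import Agent, Runner, trace

agent = Agent(name="assistant", instructions="Be helpful.")

conversation_id = "thread_abc123"
with trace("support-conversation", group_id=conversation_id):
    Runner.run_sync(agent, "First question")
    Runner.run_sync(agent, "Follow-up question")
# Both runs carry the same $ai_group_id, so their traces can be linked in PostHog.
```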

* feat(openai-agents): add enhanced span properties

- Add $ai_total_tokens to generation and response spans (required by PostHog cost reporting)
- Add $ai_error_type for cross-provider error categorization (model_behavior_error, user_error, input_guardrail_triggered, output_guardrail_triggered, max_turns_exceeded)
- Add $ai_output_choices to response spans for output content capture
- Add audio pass-through properties for voice spans:
  - first_content_at (time to first audio byte)
  - audio_input_format / audio_output_format
  - model_config
  - $ai_input for TTS text input
- Add comprehensive tests for all new properties

* Add $ai_framework property and standardize $ai_provider for OpenAI Agents

- Add $ai_framework="openai-agents" to all events for framework identification
- Standardize $ai_provider="openai" on all events (previously some used "openai_agents")
- Follows pattern from posthog-js where $ai_provider is the underlying LLM provider

* chore: bump version to 7.7.0 for OpenAI Agents SDK integration

* fix: add openai_agents package to setuptools config

Without this, the module is not included in the distribution
and users get an ImportError after pip install.

* fix: correct indentation in on_trace_start properties dict

* fix: prevent unbounded growth of span/trace tracking dicts

Add max entry limit and eviction for _span_start_times and
_trace_metadata dicts. If on_span_end or on_trace_end is never
called (e.g., due to an SDK exception), these dicts could grow
indefinitely in long-running processes.
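
The names and limit below are illustrative, not the actual implementation; they sketch the bounded-dict eviction idea the commit describes.

```python
# Illustrative only: shows the bounded-dict idea, not the actual implementation.
from collections import OrderedDict

MAX_TRACKED_ENTRIES = 10_000  # hypothetical cap

_span_start_times = OrderedDict()

def _track_span_start(span_id, started_at):
    # Evict the oldest entry once the cap is reached, so a missed
    # on_span_end can never grow the dict without bound.
    if len(_span_start_times) >= MAX_TRACKED_ENTRIES:
        _span_start_times.popitem(last=False)
    _span_start_times[span_id] = started_at
```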

* fix: resolve distinct_id from trace metadata in on_span_end

Previously on_span_end always called _get_distinct_id(None), which
meant callable distinct_id resolvers never received the trace object
for spans. Now the resolved distinct_id is stored at trace start and
looked up by trace_id during span end.
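
A sketch of that resolution flow, under assumed method and attribute names.

```python
# Illustrative sketch only; real method and attribute names may differ.
class _DistinctIdTrackingSketch:
    def __init__(self, distinct_id=None):
        self._distinct_id = distinct_id      # str, callable, or None
        self._trace_metadata = {}            # trace_id -> stored metadata

    def _resolve_distinct_id(self, trace):
        if callable(self._distinct_id):
            return self._distinct_id(trace)  # resolver now receives the trace
        return self._distinct_id

    def on_trace_start(self, trace):
        # Resolve once, while the trace object is in hand, and store it.
        self._trace_metadata[trace.trace_id] = {
            "distinct_id": self._resolve_distinct_id(trace),
        }

    def on_span_end(self, span):
        # Spans look the value up by trace_id instead of resolving with None.
        meta = self._trace_metadata.get(span.trace_id, {})
        return meta.get("distinct_id")
```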

* refactor: extract _base_properties helper to reduce duplication

All span handlers repeated the same 6 base fields (trace_id, span_id,
parent_id, provider, framework, latency) plus the group_id conditional.
Extract into a shared helper to reduce ~100 lines of boilerplate.
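
A sketch of what such a helper might look like; the $ai_* property names come from the commit messages above, while the signature and inputs are assumptions.

```python
# Sketch of the shared helper; the $ai_* property names come from the commit
# messages above, while the function signature and inputs are assumptions.
def _base_properties(trace_id, span_id, parent_id, latency, group_id=None):
    properties = {
        "$ai_trace_id": trace_id,
        "$ai_span_id": span_id,
        "$ai_parent_id": parent_id,
        "$ai_provider": "openai",
        "$ai_framework": "openai-agents",
        "$ai_latency": latency,
    }
    if group_id is not None:
        properties["$ai_group_id"] = group_id
    return properties
```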

* test: add missing edge case tests for openai agents processor

- test_generation_span_with_no_usage: zero tokens when usage is None
- test_generation_span_with_partial_usage: only input_tokens present
- test_error_type_categorization_by_type_field_only: type field without
  matching message content
- test_distinct_id_resolved_from_trace_for_spans: callable resolver
  uses trace context for span events
- test_eviction_of_stale_entries: memory leak prevention works

* fix: handle non-dict error_info in span error parsing

If span.error is a string instead of a dict, calling .get() would
raise AttributeError. Now falls back to str() for non-dict errors.
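
A minimal sketch of the defensive parse, with assumed field names.

```python
# Minimal sketch of the defensive parse; field names are assumptions.
def _parse_error(error_info):
    if isinstance(error_info, dict):
        return error_info.get("message"), error_info.get("data")
    # Strings (or anything else) can't be .get()'d, so stringify instead.
    return str(error_info), None
```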

* style: apply ruff formatting

* style: replace lambda assignments with def (ruff E731)

* fix: restore full CHANGELOG.md history

The rebase conflict resolution accidentally truncated the changelog
to only the most recent entries. Restored all historical entries.

* fix: preserve personless mode for trace-id fallback distinct IDs

When no distinct_id is provided, _get_distinct_id falls back to
trace_id or "unknown". Since these are non-None strings, the
$process_person_profile=False check in _capture_event never fired,
creating unwanted person profiles keyed by trace IDs.

Track whether the user explicitly provided a distinct_id and use
that flag to control personless mode, matching the pattern used
by the langchain and openai integrations.

* fix: restore changelog history and fix personless mode edge cases

Two fixes from bot review:

1. CHANGELOG.md was accidentally truncated to 38 lines during rebase
   conflict resolution. Restored all 767 lines of history.

2. Personless mode now follows the same pattern as langchain/openai
   integrations: _get_distinct_id returns None when no user-provided
   ID is available, and callers set $process_person_profile=False
   before falling back to trace_id. This covers the edge case where
   a callable distinct_id returns None.
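
A sketch of the personless-mode pattern under assumed helper names; $process_person_profile is the real PostHog property.

```python
# Sketch of the personless-mode pattern; helper names are illustrative,
# $process_person_profile is the real PostHog property.
def _capture_event(client, event, properties, distinct_id, trace_id):
    if distinct_id is None:
        # No user-provided ID (including a callable that returned None):
        # fall back to the trace_id but keep the event personless so no
        # person profile is created for it.
        properties["$process_person_profile"] = False
        distinct_id = trace_id or "unknown"
    client.capture(distinct_id=distinct_id, event=event, properties=properties)
```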

* fix: handle None token counts in generation span

Guard against input_tokens or output_tokens being None when computing
$ai_total_tokens to avoid TypeError.

* fix: check error_type_raw for all error categories

Check both error_type_raw and error_message for guardrail and
max_turns errors, consistent with how ModelBehaviorError and
UserError are already checked.
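
An illustrative categorization helper: the category strings come from this release's notes and the needles are Agents SDK exception names, but the matching logic itself is an assumption.

```python
# Illustrative only; the real categorization logic may differ.
def _categorize_error(error_type_raw, error_message):
    haystack = f"{error_type_raw or ''} {error_message or ''}"
    needles = [
        ("MaxTurnsExceeded", "max_turns_exceeded"),
        ("InputGuardrailTripwireTriggered", "input_guardrail_triggered"),
        ("OutputGuardrailTripwireTriggered", "output_guardrail_triggered"),
        ("ModelBehaviorError", "model_behavior_error"),
        ("UserError", "user_error"),
    ]
    for needle, category in needles:
        if needle in haystack:
            return category
    return "error"  # assumed fallback
```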

* fix: add type hints to instrument() function

* refactor: rename _safe_json to _ensure_serializable for clarity

The function validates JSON serializability and falls back to str(); it does not
serialize. Rename it and update the docstring to make the contract clear.
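
A minimal sketch of that contract:

```python
# Sketch of the renamed helper's contract: validate serializability, don't serialize.
import json

def _ensure_serializable(value):
    try:
        json.dumps(value)
        return value        # already JSON-serializable, pass through unchanged
    except (TypeError, ValueError):
        return str(value)   # fall back to a string representation
```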

* refactor: emit $ai_trace at trace end instead of start

Move the $ai_trace event from on_trace_start to on_trace_end to
capture full metadata including latency, matching the LangChain
integration approach. on_trace_start now only stores metadata for
use by spans.

* style: fix ruff formatting

* fix: add TYPE_CHECKING imports for type hints in instrument()

v7.6.0

feat: add device_id to flags request payload (#407)

* feat: add device_id to flags request payload

Add device_id parameter to all feature flag methods, similar to how
distinct_id is handled. The device_id is included in the flags request
payload sent to the server.

- Add device_id parameter to Client methods and module-level functions
- Add context support via set_context_device_id() for automatic fallback
- Add tests for explicit device_id and context-based device_id
- Bump version to 7.6.0
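
A hedged usage sketch based on this commit message; exact signatures may differ.

```python
# Hedged sketch based on this commit message; exact signatures may differ.
import posthog

# Explicit device_id on a flag call, mirroring how distinct_id is passed.
enabled = posthog.feature_enabled("my-flag", "user_123", device_id="device_abc")

# Or set it once on the current context and let flag calls pick it up.
posthog.set_context_device_id("device_abc")
enabled = posthog.feature_enabled("my-flag", "user_123")
```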

v7.5.1

fix: avoid return from finally block to fix Python 3.14 SyntaxWarning (#361)

* fix: Avoid return from finally: block

This fixes a SyntaxWarning on Python 3.14.

```
❯ uvx --no-cache --python 3.14.0 --with posthog==6.7.11 python -c "import posthog"
Installed 11 packages in 5ms
.../lib/python3.14/site-packages/posthog/consumer.py:92: SyntaxWarning: 'return' in a 'finally' block
  return success
```

* add versioning info

---------

Co-authored-by: Paul D'Ambra <paul.dambra@gmail.com>
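
The warning exists because a return inside a finally block silently swallows any in-flight exception; a minimal before/after illustration (not the actual consumer.py code) follows.

```python
# Minimal illustration of the pattern, not the actual consumer.py code.
def send(batch):
    return True  # stand-in for the real upload call

def flush_before(batch):
    success = False
    try:
        success = send(batch)
    finally:
        # 'return' inside 'finally' triggers the 3.14 SyntaxWarning and would
        # also swallow any exception raised in the try block.
        return success

def flush_after(batch):
    success = False
    try:
        success = send(batch)
    finally:
        pass  # cleanup would go here
    return success  # same result, with the return moved out of finally
```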

v7.5.0

feat: llma / error tracking integration (#376)

* feat: llma / error tracking integration

* capture all metadata in llm event

* instrument with contexts

* bump version

* indentation

* linting

* tests

* raise

* test: add exception capture integration tests for langchain

Add 6 tests covering the new LLMA + error tracking integration:
- capture_exception called on span/generation errors
- $exception_event_id added to AI events
- No capture when autocapture disabled
- AI properties passed to exception event
- Handles None return from capture_exception

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: pass context tags to capture() for test compatibility

- Export get_tags() from posthog module
- Explicitly pass context tags to capture() in AI utils
- Fix $ai_model fallback to extract from response.model
- Fix ruff formatting in langchain test_callbacks.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: disable auto-capture exceptions in LLM context

The new_context() defaults to capture_exceptions=True which would
auto-capture any exception regardless of enable_exception_autocapture
setting. This was inconsistent with LangChain callbacks which
explicitly check the setting.

Pass capture_exceptions=False to let exception handling be controlled
explicitly by the enable_exception_autocapture setting.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: isolate LLM context with fresh=True to avoid tag inheritance

Use fresh=True to start with a clean context for each LLM call.
This avoids inheriting $ai_* tags from parent contexts which could
cause mismatched AI metadata due to the tag merge order bug in
contexts.py (parent tags incorrectly override child tags).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: correct tag merge order so child tags take precedence

The collect_tags() method had a bug where parent tags would overwrite
child tags, despite the comment saying the opposite. This fix ensures
child context tags properly override parent tags.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: remove fresh=True now that tag merge order is fixed

With the collect_tags() bug fixed, child tags properly override parent
tags. LLM events can now inherit useful parent context tags (request_id,
user info, etc.) while still having their $ai_* tags take precedence.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: add test for child tags overriding parent tags

Verifies that in non-fresh contexts, child tags properly override
parent tags with the same key while still inheriting other parent tags.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
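
A sketch of the intended behavior; it assumes the context API exposed as posthog.new_context() and posthog.tag(), and that capture() takes the event name first in 7.x.

```python
# Sketch of the tag-merge behavior described above; API details assumed.
import posthog

with posthog.new_context():
    posthog.tag("request_id", "req_42")
    posthog.tag("$ai_model", "parent-model")

    with posthog.new_context():                  # child context, not fresh
        posthog.tag("$ai_model", "gpt-4o-mini")  # child value wins after the fix
        # Events captured here inherit request_id from the parent while the
        # child's $ai_model takes precedence, now that collect_tags() merges
        # child over parent.
        posthog.capture("ai-step")
```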

* chore: add TODO for OpenAI/Anthropic/Gemini exception capture

Document that exception capture needs to be added for the direct SDK
wrappers, similar to how it's implemented in LangChain callbacks.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: David Newell <david@Mac.communityfibre.co.uk>
Co-authored-by: Andrew Maguire <andrewm4894@gmail.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

v7.4.3

fix: double counting anthropic langchain (#399)

v7.4.2

feat: add in_app configuration for python SDK (#396)

* add in_app configuration for python SDK

* bump version

* add in_app_modules to init script as well

v7.4.1

fix(llma): extract model from response for OpenAI stored prompts (#395)

* fix: extract model from response for OpenAI stored prompts

When using OpenAI stored prompts, the model is defined in the OpenAI
dashboard rather than passed in the API request. This change adds a
fallback to extract the model from the response object when not
provided in kwargs.

Fixes PostHog/posthog#42861

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
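
An illustrative version of the fallback; the real helper in the SDK's AI utils may differ.

```python
# Illustrative version of the fallback; the real helper in the SDK may differ.
def _extract_model(kwargs, response):
    # Prefer the explicitly requested model, then fall back to what the
    # response reports (covers stored prompts, where the model is configured
    # in the OpenAI dashboard rather than passed to the API).
    return kwargs.get("model") or getattr(response, "model", None)
```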

* Apply suggestion from @greptile-apps[bot]

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Apply suggestion from @greptile-apps[bot]

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* test: add tests for model extraction fallback and bump to 7.4.1

- Add 8 tests covering model extraction from response for stored prompts
- Fix utils.py to add 'unknown' fallback for consistency
- Bump version to 7.4.1
- Update CHANGELOG.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* style: format utils.py with ruff

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix: remove 'unknown' fallback from non-streaming to match original behavior

Non-streaming originally returned None when model wasn't in kwargs.
Streaming keeps "unknown" fallback as that was the original behavior.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test: add test for None model fallback in non-streaming

Verifies that non-streaming returns None (not "unknown") when model
is not available in kwargs or response, matching original behavior.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

v7.4.0

feat(flags): Add retry support for feature flag requests (#392)

* Add urllib3-based retry for feature flag requests

Use urllib3's built-in Retry mechanism for feature flag POST requests
instead of application-level retry logic. This is simpler and leverages
well-tested library code.

Key changes:
- Add `RETRY_STATUS_FORCELIST` = [408, 500, 502, 503, 504]
- Add `_build_flags_session()` with POST retries and `status_forcelist`
- Update `flags()` to use dedicated flags session
- Add tests for retry configuration and session usage

The flags session retries on:
- Network failures (connect/read errors)
- Transient server errors (408, 500, 502, 503, 504)

It does NOT retry on:
- 429 (rate limit) - need to wait, not hammer
- 402 (quota limit) - won't resolve with retries
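
A sketch of the described session setup using urllib3's Retry; the constant and helper names mirror the commit message, while retry counts and backoff values are illustrative.

```python
# Sketch only: names mirror the commit message, values are illustrative.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

RETRY_STATUS_FORCELIST = [408, 500, 502, 503, 504]

def _build_flags_session():
    retries = Retry(
        total=3,                                  # illustrative retry count
        backoff_factor=0.5,                       # illustrative backoff
        status_forcelist=RETRY_STATUS_FORCELIST,  # transient server errors only
        allowed_methods=frozenset({"POST"}),      # flags requests are POSTs
        raise_on_status=False,
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retries))
    session.mount("http://", HTTPAdapter(max_retries=retries))
    return session
```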

* Make examples run without requiring a personal API key

* Add integration tests for network retry behavior

Add tests that verify actual retry behavior, not just configuration:

- test_retries_on_503_then_succeeds: Spins up a local HTTP server that
  returns 503 twice then 200, verifying 3 requests are made
- test_connection_errors_are_retried: Verifies connection errors trigger
  retries by measuring elapsed time with backoff

Both tests use dynamically allocated ports for CI safety.

* Bump version to 7.4.0

v7.3.1

fix: remove unused $exception_message and $exception_type (#383)

* fix: remove unused

* fix: remove exception type

* fix: wip

v7.0.1

feat: use repr in code variables (#372)