AttributeError in Events Compaction with DatabaseSessionService #3530

@mendac

Description

Kaggle Day 2 Section 3 - Persistence

Hardware: Raspberry Pi 5, 16GB, Raspberry Pi OS Trixie

Summary

When using EventsCompactionConfig with DatabaseSessionService, the application crashes with an
AttributeError: 'dict' object has no attribute 'start_timestamp' in _process_compaction_events().
The crash is deterministic on any run after compaction data has been persisted: compaction events are
serialized to the database as dictionaries, but the code that reads them back expects objects with attribute access.

Environment

  • google-adk version: 1.18.0
  • google-genai version: 1.49.0
  • Python version: 3.13.5
  • Operating System: Linux (Raspberry Pi OS, aarch64)
  • Database: SQLite (via SQLAlchemy)

Steps to Reproduce

  1. Create an app with EventsCompactionConfig enabled
  2. Use DatabaseSessionService with SQLite backend
  3. Run multiple conversation turns (enough to trigger compaction - e.g., 3+ turns with compaction_interval=3)
  4. The error occurs when compaction data exists in the database from a previous run

Minimal reproduction code:

import os
import asyncio
from google.adk.agents import LlmAgent
from google.adk.apps.app import App, EventsCompactionConfig
from google.adk.models.google_llm import Gemini
from google.adk.sessions import DatabaseSessionService
from google.adk.runners import Runner
from google.genai import types

# Configure API key
os.environ["GOOGLE_API_KEY"] = "blahblahblah"
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "FALSE"

# Create agent with compaction
agent = LlmAgent(
    model=Gemini(model="gemini-2.5-flash-lite"),
    name="test_agent",
    description="Day 2 Section 3 compaction bug"
)

# Create app with compaction enabled
app = App(
    name="compaction_bug_test",
    root_agent=agent,
    events_compaction_config=EventsCompactionConfig(
        compaction_interval=3,
        overlap_size=1,
    ),
)

# Use database session service (this is where the bug manifests)
session_service = DatabaseSessionService(db_url="sqlite:///test_compaction.db")
runner = Runner(app=app, session_service=session_service)

async def test():
    session = await session_service.create_session(
        app_name="compaction_bug_test",
        user_id="test_user",
        session_id="test_session"
    )

    # First run works just fine, but...
    for i in range(4):
        query = types.Content(role="user", parts=[types.Part(text=f"Message {i+1}")])
        async for event in runner.run_async(
            user_id="test_user",
            session_id=session.id,
            new_message=query
        ):
            pass

    print("Survived first run - run again to trigger the bug")

asyncio.run(test())

To reproduce the error:

  • Run the script once (works fine)
  • Run it again immediately (crashes with the AttributeError every time)

Expected Behavior

Compaction should work seamlessly with DatabaseSessionService, properly serializing and deserializing compaction event objects across sessions.

Actual Behavior

On subsequent runs when compaction data exists in the database, the application crashes with:

Traceback (most recent call last):
  File "/home/pi/adk/2a3.py", line 135, in <module>
    asyncio.run(main())
  File "/home/pi/adk/2a3.py", line 55, in run_session
    async for event in runner_instance.run_async(
        user_id=USER_ID, session_id=session.id, new_message=query
    ):
  File "/home/pi/AC1/lib/python3.13/site-packages/google/adk/runners.py", line 653, in _exec_with_plugin
    async for event in agen:
      yield (modified_event if modified_event else event)
  File "/home/pi/AC1/lib/python3.13/site-packages/google/adk/flows/llm_flows/contents.py", line 354, in _get_contents
    events_to_process = _process_compaction_events(raw_filtered_events)
  File "/home/pi/AC1/lib/python3.13/site-packages/google/adk/flows/llm_flows/contents.py", line 265, in _process_compaction_events
    compaction.start_timestamp is not None
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'start_timestamp'

Root Cause Analysis

The issue is in /google/adk/flows/llm_flows/contents.py at line 265 in _process_compaction_events():

compaction.start_timestamp is not None  # Line 265

The code assumes compaction is an object with attributes, but when retrieved from the database via DatabaseSessionService, it's deserialized as a dictionary. The code should either:

  1. Use dictionary access: compaction.get('start_timestamp') or compaction['start_timestamp']
  2. Properly deserialize compaction events into the expected object type
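The mismatch can be reproduced in isolation. The sketch below is illustrative only: the `Compaction` dataclass and the `ensure_compaction` helper are hypothetical stand-ins for the real ADK types, modeling just the `start_timestamp` field visible in the traceback.

```python
import json
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class Compaction:
    # Hypothetical stand-in for the ADK compaction event payload;
    # only the field named in the traceback is modeled here.
    start_timestamp: Optional[float] = None


def ensure_compaction(obj: Union[Compaction, dict]) -> Compaction:
    """Normalize a value that may arrive as a plain dict after a DB round-trip."""
    if isinstance(obj, dict):
        return Compaction(start_timestamp=obj.get("start_timestamp"))
    return obj


# In-memory path: attribute access works.
fresh = Compaction(start_timestamp=1700000000.0)
assert fresh.start_timestamp is not None

# Database path: a JSON round-trip yields a plain dict, which is
# exactly what triggers the AttributeError in _process_compaction_events.
stored = json.loads(json.dumps({"start_timestamp": 1700000000.0}))
assert isinstance(stored, dict)

# Normalizing on read restores attribute access for both shapes.
assert ensure_compaction(stored).start_timestamp == 1700000000.0
assert ensure_compaction(fresh).start_timestamp == 1700000000.0
```

Option 2 (re-hydrating into the expected type at the storage boundary, as `ensure_compaction` sketches) is likely the cleaner fix, since it keeps every downstream access site unchanged.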

Impact

This bug completely breaks the persistence feature when events compaction is enabled: any session with persisted compaction data cannot be resumed. Fixing it is also critical for me not to lose more hair.

Additional Context

  • The error is fully reproducible: it occurs whenever compaction data already exists in the database
  • Fresh database runs work fine on the first execution
  • The issue appears to be a serialization/deserialization mismatch in the session storage layer
  • This affects any production use case requiring persistent sessions with compaction
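Given that fresh-database runs work, a stopgap consistent with the observations above is to start from a clean database file on each run. This obviously sacrifices the persistence the feature exists for, so it is only a way to keep working through the lab until the fix lands. The file name below matches the repro script:

```python
import os

# Deleting the SQLite file before each run avoids loading previously
# persisted compaction records, at the cost of losing all session history.
DB_FILE = "test_compaction.db"
if os.path.exists(DB_FILE):
    os.remove(DB_FILE)
```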

Requested Fix

Please ensure compaction event objects are properly deserialized when loaded through DatabaseSessionService, either by reconstructing the expected object type at the storage boundary or by updating all access sites to handle dictionaries consistently. I need one of these so I can finish this lab.

Metadata

Labels

services [Component]: This issue is related to runtime services, e.g. sessions, memory, artifacts, etc.
