Trans article #28

Merged · 5 commits · Jun 1, 2025
pyproject.toml (4 changes: 3 additions & 1 deletion)

```diff
@@ -6,16 +6,18 @@
 readme = "README.md"
 requires-python = ">=3.12"
 dependencies = [
     "beautifulsoup4>=4.13.3",
+    "celery>=5.5.2",
     "django>=5.1.7",
     "django-tailwind[reload]>=3.8.0",
     "gunicorn>=23.0.0",
     "httpx>=0.28.1",
     "llm>=0.24.2",
     "llm-gemini>=0.18.1",
-    "logfire[django,psycopg,system-metrics]>=3.16.0",
+    "logfire[celery,django,psycopg,system-metrics]>=3.16.0",
     "lxml>=5.3.2",
     "markdown>=3.7",
     "psycopg[binary]>=3.2.5",
+    "pydantic-ai-slim[anthropic,openai]>=0.2.6",
```
💡 Verification agent

🧩 Analysis chain

Verify AI package versions and compatibility.

The addition of `pydantic-ai-slim[anthropic,openai]>=0.2.6` looks appropriate for AI functionality.


🌐 Web query:

What is the latest version of pydantic-ai-slim and are there any compatibility issues with the specified extras?

💡 Result:

The latest version of pydantic-ai-slim is 0.2.12, released on May 29, 2025[1]. This package serves as an agent framework/shim to use Pydantic with large language models (LLMs), offering a slimmer version with minimal required dependencies.

Optional Extras

Pydantic-ai-slim provides several optional dependency groups that can be installed based on your specific needs:

  • logfire — installs the logfire PyPI package
  • evals — installs the pydantic-evals PyPI package
  • openai — installs the openai PyPI package
  • vertexai — installs google-auth and requests PyPI packages
  • anthropic — installs the anthropic PyPI package
  • groq — installs the groq PyPI package
  • mistral — installs the mistralai PyPI package
  • cohere — installs the cohere PyPI package
  • duckduckgo — installs the duckduckgo-search PyPI package
  • tavily — installs the tavily-python PyPI package[2]

You can install multiple extras simultaneously using either:

```bash
pip install "pydantic-ai-slim[openai,vertexai,logfire]"
```

or

```bash
uv add "pydantic-ai-slim[openai,vertexai,logfire]"
```

[2]

## Compatibility Issues

There have been some compatibility issues noted:

1. **Databricks ML Runtime**: There's an incompatibility with Databricks' ML runtime 16.3ML. The issue relates to dependency conflicts, particularly with the `requests` package. Pydantic-ai requires `requests (>=2.32.3)`, but Databricks ML runtime uses version 2.32.2, causing version solving to fail[7].

2. **AWS Lambda Deployment**: When deploying applications with pydantic packages to AWS Lambda, especially using frameworks like serverless, there can be issues with package metadata. The `slim` setting in serverless deployment configuration might need to be set to `false` to ensure proper inclusion of metadata for dependencies[6].

3. **Breaking Changes**: While not specific to the slim package, the parent project pydantic-ai has undergone breaking changes in recent versions. In v0.2.0 (May 12, 2025), there were changes to the `usage` field in `ModelResponse` and modifications to the return type of `Model.request`[4].

For the most up-to-date information on compatibility and how to use specific models with the required dependencies, it's recommended to check the models documentation mentioned in the installation guide[2].

Citations:

- 1: https://pypi.org/project/pydantic-ai-slim/
- 2: https://ai.pydantic.dev/install/
- 3: https://github.com/pydantic/pydantic-ai/releases
- 4: https://ai.pydantic.dev/changelog/
- 5: https://pypistats.org/packages/pydantic-ai
- 6: https://docs.pydantic.dev/latest/integrations/aws_lambda/
- 7: https://github.com/pydantic/pydantic-ai/issues/1582
- 8: https://www.piwheels.org/project/pydantic-ai-slim/
- 9: https://www.youtube.com/watch?v=ferNt5JTaGQ
- 10: https://docs.pydantic.dev/2.3/version-compatibility/

---




**Pin pydantic-ai-slim to the latest release and verify compatibility**  

- In pyproject.toml (line 20), bump the requirement:

  ```diff
  - "pydantic-ai-slim[anthropic,openai]>=0.2.6",
  + "pydantic-ai-slim[anthropic,openai]>=0.2.12",
  ```

- Confirm that the `anthropic` and `openai` extras cover all needed LLM providers.
- Be aware of known compatibility issues:
  - Databricks ML Runtime 16.3ML ships requests==2.32.2 (pydantic-ai-slim needs ≥2.32.3).
  - AWS Lambda/serverless may require `slim: false` to include package metadata.
- Ensure no regressions from parent pydantic-ai's breaking changes (v0.2.0) around `ModelResponse.usage` and the `Model.request` return type.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-    "pydantic-ai-slim[anthropic,openai]>=0.2.6",
+    "pydantic-ai-slim[anthropic,openai]>=0.2.12",
```
🤖 Prompt for AI Agents
In pyproject.toml at line 20, update the pydantic-ai-slim dependency version
from ">=0.2.6" to the latest stable release ">=0.2.12". Verify that the included
extras "anthropic" and "openai" sufficiently cover all required LLM providers
for your project. Additionally, review and address any compatibility issues such
as the Databricks ML Runtime requests version conflict and AWS Lambda deployment
metadata inclusion by adjusting serverless settings if needed. Finally, test to
ensure no regressions occur due to breaking changes in pydantic-ai v0.2.0
related to ModelResponse.usage and Model.request return types.
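As a complement to the pin, a quick runtime sanity check is possible with the standard library. This is an editorial sketch, not part of the PR; the 0.2.12 floor mirrors the bot's suggestion:

```python
# Sketch: fail fast if the installed pydantic-ai-slim predates the
# suggested >=0.2.12 floor. importlib.metadata is in the stdlib.
from importlib.metadata import version

installed = version("pydantic-ai-slim")  # e.g. "0.2.12"
if tuple(int(p) for p in installed.split(".")[:3]) < (0, 2, 12):
    raise RuntimeError(f"pydantic-ai-slim {installed} is older than 0.2.12")
```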

"pydantic-ai[logfire]>=0.2.6",
"readtime>=3.0.0",
"requests>=2.32.3",
Expand Down
pythonkr_backend/pythonkr_backend/__init__.py (9 additions, new file)

```python
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from django.conf import settings

if settings.CELERY_ALWAYS_EAGER is False:
    from .celery import app as celery_app
    __all__ = ('celery_app',)
else:
    __all__ = ()
```
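A side note on this guard (an editorial sketch, not part of the diff): when the flag is true the Celery app is never imported, so nothing touches the broker at Django startup. Celery 5 itself reads the namespaced `CELERY_TASK_ALWAYS_EAGER` setting, so `CELERY_ALWAYS_EAGER` here functions as a project-level switch:

```python
# Sketch of what the guard implies (assumes Django settings are configured
# and the project package is importable).
from django.conf import settings
import pythonkr_backend

if settings.CELERY_ALWAYS_EAGER:
    # test/eager mode: the Celery app was never imported, no broker needed
    assert not hasattr(pythonkr_backend, "celery_app")
else:
    # broker-backed mode: the app is exposed for task discovery
    assert pythonkr_backend.celery_app is not None
```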
pythonkr_backend/pythonkr_backend/celery.py (41 additions, new file)

```python
import logfire
from celery import Celery
from celery.signals import worker_init, beat_init

app = Celery('proj')


# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django apps.
app.autodiscover_tasks()


@worker_init.connect()
def init_worker(*args, **kwargs):
    logfire.configure(service_name="celery-worker", send_to_logfire='if-token-present')
    logfire.instrument_celery()


@beat_init.connect()
def init_beat(*args, **kwargs):
    logfire.configure(service_name="celery-beat", send_to_logfire='if-token-present')
    logfire.instrument_celery()


@app.task
def add(x: int, y: int):
    return x + y


add.delay(42, 50)
```
Comment on lines +28 to +32
⚠️ Potential issue

Critical: Remove task execution at module level.

Executing the task at module import time will cause connection errors during Django startup, as evidenced by the pipeline failure.

```diff
 @app.task
 def add(x: int, y: int):
+    """Add two integers and return the result."""
     return x + y
 
-add.delay(42, 50)
```

If you need to test the task, consider adding it to a management command or view instead (a sketch of such a command follows this comment thread).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-@app.task
-def add(x: int, y: int):
-    return x + y
-
-add.delay(42, 50)
+@app.task
+def add(x: int, y: int):
+    """Add two integers and return the result."""
+    return x + y
```
🧰 Tools

🪛 Pylint (3.3.7)

[convention] 29-29: Missing function or method docstring (C0116)

🪛 GitHub Actions: pk web site backend testing

[error] 32-32: Celery task 'add.delay(42, 50)' failed due to kombu.exceptions.OperationalError: [Errno 111] Connection refused. The AMQP connection to the message broker could not be established.

🤖 Prompt for AI Agents
In pythonkr_backend/pythonkr_backend/celery.py around lines 28 to 32, the task
add.delay(42, 50) is being executed at the module level, which causes connection
errors during Django startup. Remove the add.delay(42, 50) call from the module
to prevent this. If you want to test the task, move the call to a Django
management command or a view where it can be triggered explicitly.
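One possible shape for that management command (an editorial sketch; the app path and command name are hypothetical, not taken from the PR):

```python
# pythonkr_backend/<some_app>/management/commands/enqueue_add.py
# Hypothetical file location and command name; adjust to the project layout.
from django.core.management.base import BaseCommand

from pythonkr_backend.celery import add


class Command(BaseCommand):
    help = "Enqueue the demo add task explicitly instead of at import time."

    def handle(self, *args, **options):
        result = add.delay(42, 50)  # returns an AsyncResult
        self.stdout.write(self.style.SUCCESS(f"queued add(42, 50) with id {result.id}"))
```

Invoked with `python manage.py enqueue_add`, the broker connection is only attempted when the command runs, not when Django imports the module.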



(celery.py, continued)

```python
app.conf.beat_schedule = {
    "add-every-30-seconds": {
        "task": "tasks.add",
        "schedule": 30.0,
        "args": (16, 16),
    },
}
```
Comment on lines +35 to +41
💡 Verification agent

🧩 Analysis chain

Fix beat schedule task reference.

The task reference is incorrect and will cause the scheduled task to fail.

```diff
-app.conf.beat_schedule = {  
+app.conf.beat_schedule = {
     "add-every-30-seconds": {
-        "task": "tasks.add",
+        "task": "pythonkr_backend.celery.add",
         "schedule": 30.0,
         "args": (16, 16),
     },
 }
```

🏁 Script executed:

```bash
#!/bin/bash
# Verify the correct task path for the beat schedule
# Description: Check if there are other task definitions that might affect the correct path

ast-grep --pattern '@$_.task
def $_($_) {
  $$$
}'
```

Length of output: 52


To determine the correct import path for your Celery task, let's locate where `add` is defined and how it's decorated:


🏁 Script executed:

```bash
#!/bin/bash
# Find all Celery task definitions and look for `add`
rg -n -E '@(app\.task|shared_task)' --glob '*.py' -C3
rg -n 'def add' --glob '*.py' -C3
```

Length of output: 921


Update beat_schedule to use the fully qualified task name

The beat_schedule entry currently points at `"tasks.add"`, but the `@app.task`-decorated `add` is defined in pythonkr_backend/pythonkr_backend/celery.py, so Celery expects the module path `pythonkr_backend.celery.add`.

• File: pythonkr_backend/pythonkr_backend/celery.py
Lines: 35–41

```diff
-app.conf.beat_schedule = {  
+app.conf.beat_schedule = {
     "add-every-30-seconds": {
-        "task": "tasks.add",
+        "task": "pythonkr_backend.celery.add",
         "schedule": 30.0,
         "args": (16, 16),
     },
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-app.conf.beat_schedule = {
-    "add-every-30-seconds": {
-        "task": "tasks.add",
-        "schedule": 30.0,
-        "args": (16, 16),
-    },
-}
+app.conf.beat_schedule = {
+    "add-every-30-seconds": {
+        "task": "pythonkr_backend.celery.add",
+        "schedule": 30.0,
+        "args": (16, 16),
+    },
+}
```
🧰 Tools

🪛 Pylint (3.3.7)

[convention] 35-35: Trailing whitespace (C0303)

[convention] 41-41: Final newline missing (C0304)

🤖 Prompt for AI Agents
In pythonkr_backend/pythonkr_backend/celery.py around lines 35 to 41, the
beat_schedule task reference "tasks.add" is incorrect and will cause the
scheduled task to fail. Update the "task" value to the fully qualified module
path where the add task is defined, which is "pythonkr_backend.celery.add". This
ensures Celery can locate and execute the task correctly.
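To confirm the registered name before trusting the schedule, the task registry can be inspected from a Django shell (an editorial sketch, assuming the fix above is applied):

```python
# Sketch: list the task names Celery actually registered; the beat entry's
# "task" value must match one of these strings exactly.
# Run inside `python manage.py shell`.
from pythonkr_backend.celery import app

user_tasks = [name for name in app.tasks if not name.startswith("celery.")]
print(user_tasks)  # expect "pythonkr_backend.celery.add", not "tasks.add"
```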

pythonkr_backend/pythonkr_backend/settings/base.py (21 changes: 4 additions & 17 deletions)

```diff
@@ -187,23 +187,10 @@
     "0.0.0.0",
 ]
 
-# LOGGING
-
-LOGGING = {
-    "version": 1,
-    "disable_existing_loggers": False,
-    "handlers": {
-        "console": {
-            "class": "logging.StreamHandler",
-        },
-    },
-    "root": {
-        "handlers": ["console"],
-        "level": "INFO",
-    },
-}
-
 # import logfire
 # setup logfire
 #logfire.configure(environment='base', service_name="web")
-#logfire.instrument_django()
\ No newline at end of file
+#logfire.instrument_django()
+
+# testing
+CELERY_ALWAYS_EAGER = True
```
pythonkr_backend/pythonkr_backend/settings/localtesting.py (10 changes: 9 additions & 1 deletion)

```diff
@@ -29,7 +29,15 @@
 BAKERY_MULTISITE = True
 BUILD_DIR = os.path.join("/app/bakery_static", "build")
 
 
+# check WSGI environment
+IS_PRODUCTION_SERVER = os.environ.get('IS_WSGI_ENVIRONMENT', 'False') == 'True'
+
+
+# logfire settings
+if IS_PRODUCTION_SERVER:
+    logfire.configure(environment='localtest')
-logfire.instrument_django()
\ No newline at end of file
+    logfire.instrument_django()
+
+# testing
+CELERY_ALWAYS_EAGER = True
```
pythonkr_backend/pythonkr_backend/settings/prod.py (13 changes: 12 additions & 1 deletion)

```diff
@@ -66,4 +66,15 @@
 logfire.configure(environment='prod', service_name="web", service_version=sha_service_version)
 logfire.instrument_django()
 logfire.instrument_system_metrics()
-#logfire.instrument_psycopg('psycopg')
\ No newline at end of file
+#logfire.instrument_psycopg('psycopg')
+
+# celery
+CELERY_BROKER_PASSWORD = os.environ.get("CELERY_PASSWORD","FALSE")
+CELERY_BROKER_USERNAME = os.environ.get("CELERY_USERNAME","FALSE")
+CELERY_BROKER_VHOST = os.environ.get("CELERY_VHOST","FALSE")
+# Celery Configuration Options
+CELERY_ALWAYS_EAGER = False
+CELERY_TIMEZONE = "Asia/Seoul"
+CELERY_TASK_TRACK_STARTED = True
+CELERY_BROKER_URL = f"amqp://{CELERY_BROKER_USERNAME}:{CELERY_BROKER_PASSWORD}@localhost:5672/{CELERY_BROKER_VHOST}"
+CELERY_TASK_TIME_LIMIT = 30 * 60
```
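A detail the diff leaves implicit (an editorial sketch, not a change the review requested): credentials interpolated into the AMQP URL should be URL-quoted in case the environment values contain reserved characters:

```python
# Sketch: quote broker credentials before building the URL so values
# containing '@', ':' or '/' cannot corrupt the AMQP URL.
import os
from urllib.parse import quote

username = quote(os.environ.get("CELERY_USERNAME", ""), safe="")
password = quote(os.environ.get("CELERY_PASSWORD", ""), safe="")
vhost = quote(os.environ.get("CELERY_VHOST", ""), safe="")
CELERY_BROKER_URL = f"amqp://{username}:{password}@localhost:5672/{vhost}"
```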
uv.lock (125 changes: 123 additions & 2 deletions)

Some generated files are not rendered by default.