Skip to content

feat: Add QueryWeaver Python SDK for serverless Text2SQL#384

Open
DvirDukhan wants to merge 8 commits intostagingfrom
dvirdu_python_api
Open

feat: Add QueryWeaver Python SDK for serverless Text2SQL#384
DvirDukhan wants to merge 8 commits intostagingfrom
dvirdu_python_api

Conversation

@DvirDukhan
Copy link

@DvirDukhan DvirDukhan commented Feb 4, 2026

Summary

This PR introduces a standalone Python SDK (queryweaver_sdk) that exposes QueryWeaver's Text2SQL functionality as an embeddable library. Users can now convert natural language to SQL directly in their Python applications without running a web server.

Features

New SDK Package (queryweaver_sdk/)

  • QueryWeaver class - Main entry point with async methods:

    • connect_database(db_url) - Connect PostgreSQL/MySQL databases
    • query(database, question) - Convert natural language to SQL and execute
    • get_schema(database) - Retrieve database schema
    • list_databases() - List connected databases
    • delete_database(database) - Remove database from FalkorDB
    • refresh_schema(database) - Re-sync schema after changes
    • execute_confirmed(database, sql) - Execute confirmed destructive operations
  • Result models (models.py):

    • QueryResult - SQL query, results, AI response, confirmation flags
    • SchemaResult - Tables (nodes) and relationships (links)
    • DatabaseConnection - Connection status and metadata
    • RefreshResult - Schema refresh status
  • Connection management (connection.py):

    • FalkorDBConnection - Explicit FalkorDB connection handling
    • Supports URL-based or environment variable configuration

Usage Example

from queryweaver_sdk import QueryWeaver

async def main():
    qw = QueryWeaver(falkordb_url="redis://localhost:6379")
    await qw.connect_database("postgresql://user:pass@host/mydb")
    
    result = await qw.query("mydb", "Show me all customers from NYC")
    print(result.sql_query)   # SELECT * FROM customers WHERE city = 'NYC'
    print(result.results)      # [{...}, {...}]
    print(result.ai_response)  # "Found 42 customers..."
    
    await qw.close()

Modern Python Packaging

  • pyproject.toml with PEP 517/518 compliance (hatchling backend)
  • Optional dependencies:
    • pip install queryweaver - SDK only (minimal deps)
    • pip install queryweaver[server] - Full FastAPI server
    • pip install queryweaver[dev] - Development tools
    • pip install queryweaver[all] - Everything
  • uv support - Fast modern package manager (auto-detected in Makefile)
  • License: Corrected to AGPL-3.0-or-later

Testing Infrastructure

  • docker-compose.test.yml - FalkorDB + PostgreSQL + MySQL test services
  • Integration tests (tests/test_sdk/) - 15 passing tests covering:
    • Initialization and connection management
    • Database operations (connect, list, delete)
    • Schema retrieval
    • Query execution with LLM
    • Model serialization

Updated Makefile

make build-package      # Build wheel + sdist
make docker-test-services  # Start test databases
make test-sdk           # Run SDK integration tests
make docker-test-stop   # Stop test services

Architecture

The SDK uses lazy imports for api.* modules to allow:

  • from queryweaver_sdk import QueryWeaver without FalkorDB running
  • Connection deferred until QueryWeaver() instantiation

Core functions in api/core/text2sql.py and api/core/schema_loader.py now have _sync variants that return structured dataclasses instead of streaming generators.

Requirements

  • Python 3.12+
  • FalkorDB instance (local or remote)
  • OpenAI or Azure OpenAI API key
  • Target SQL database (PostgreSQL or MySQL)

Breaking Changes

None - existing server functionality unchanged.

Testing

# Start test services
make docker-test-services

# Run SDK tests (requires OPENAI_API_KEY)
make test-sdk

# Cleanup
make docker-test-stop

Files Changed

New Files

  • queryweaver_sdk/__init__.py - Package exports
  • queryweaver_sdk/client.py - Main QueryWeaver class
  • queryweaver_sdk/models.py - Result dataclasses
  • queryweaver_sdk/connection.py - FalkorDB connection wrapper
  • pyproject.toml - Modern Python packaging
  • docker-compose.test.yml - Test infrastructure
  • tests/test_sdk/ - Integration test suite

Modified Files

  • api/core/text2sql.py - Added _sync functions
  • api/core/schema_loader.py - Added load_database_sync()
  • Makefile - Added uv support and SDK targets
  • .github/workflows/tests.yml - Added SDK test job

Summary by CodeRabbit

  • New Features
    • Introduced Python SDK with QueryWeaver client for programmatic database interaction
    • Added support for PostgreSQL and MySQL database connectivity
    • Added natural language to SQL query conversion with schema management
    • Added database schema refresh and management capabilities
    • Published queryweaver package with proper Python packaging and distribution

@railway-app
Copy link

railway-app bot commented Feb 4, 2026

This PR was not deployed automatically as @DvirDukhan does not have access to the Railway project.

In order to get automatic PR deploys, please add @DvirDukhan to your workspace on Railway.

@overcut-ai
Copy link

overcut-ai bot commented Feb 4, 2026

Completed Working on "Code Review"

✅ Workflow completed successfully.


👉 View complete log

@github-actions
Copy link

github-actions bot commented Feb 4, 2026

Dependency Review

The following issues were found:
  • ✅ 0 vulnerable package(s)
  • ✅ 0 package(s) with incompatible licenses
  • ✅ 0 package(s) with invalid SPDX license definitions
  • ⚠️ 4 package(s) with unknown licenses.
See the Details below.

License Issues

pyproject.toml

PackageVersionLicenseIssue Type
falkordb>= 1.2.2NullUnknown License
psycopg2-binary>= 2.9.11NullUnknown License
litellm>= 1.80.9NullUnknown License
tqdm>= 4.67.1NullUnknown License

OpenSSF Scorecard

PackageVersionScoreDetails
pip/falkordb >= 1.2.2 UnknownUnknown
pip/jsonschema >= 4.25.0 🟢 7.9
Details
CheckScoreReason
Code-Review⚠️ 2Found 2/10 approved changesets -- score normalized to 2
Security-Policy🟢 10security policy file detected
Maintained🟢 1030 commit(s) and 3 issue activity found in the last 90 days -- score normalized to 10
Token-Permissions🟢 9detected GitHub workflow tokens with excessive permissions
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Binary-Artifacts🟢 10no binaries found in the repo
Pinned-Dependencies🟢 8dependency not pinned by hash detected -- score normalized to 8
License🟢 10license file detected
Fuzzing🟢 10project is fuzzed
Signed-Releases⚠️ 0Project has not signed or included provenance with any releases.
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: some github tokens can't read classic branch protection rules: https://github.com/ossf/scorecard-action/blob/main/docs/authentication/fine-grained-auth-token.md
Vulnerabilities🟢 100 existing vulnerabilities detected
Packaging🟢 10packaging workflow detected
SAST🟢 10SAST tool is run on all commits
pip/litellm >= 1.80.9 UnknownUnknown
pip/psycopg2-binary >= 2.9.11 UnknownUnknown
pip/pymysql >= 1.1.0 🟢 5.4
Details
CheckScoreReason
Maintained⚠️ 00 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 0
Security-Policy🟢 10security policy file detected
Code-Review⚠️ 2Found 8/27 approved changesets -- score normalized to 2
Packaging⚠️ -1packaging workflow not detected
Dangerous-Workflow🟢 10no dangerous workflow patterns detected
CII-Best-Practices⚠️ 0no effort to earn an OpenSSF best practices badge detected
Token-Permissions⚠️ 0detected GitHub workflow tokens with excessive permissions
Binary-Artifacts🟢 10no binaries found in the repo
Pinned-Dependencies⚠️ 0dependency not pinned by hash detected -- score normalized to 0
Fuzzing🟢 10project is fuzzed
License🟢 10license file detected
Signed-Releases⚠️ -1no releases found
Vulnerabilities🟢 91 existing vulnerabilities detected
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: some github tokens can't read classic branch protection rules: https://github.com/ossf/scorecard-action/blob/main/docs/authentication/fine-grained-auth-token.md
SAST⚠️ 2SAST tool is not run on all commits -- score normalized to 2
pip/tqdm >= 4.67.1 🟢 7
Details
CheckScoreReason
Dangerous-Workflow⚠️ -1no workflows found
Maintained🟢 1016 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 10
Code-Review🟢 4Found 2/5 approved changesets -- score normalized to 4
Packaging⚠️ -1packaging workflow not detected
Token-Permissions⚠️ -1No tokens found
Binary-Artifacts🟢 10no binaries found in the repo
Pinned-Dependencies⚠️ -1no dependencies found
CII-Best-Practices🟢 5badge detected: Passing
Security-Policy⚠️ 0security policy file not detected
Vulnerabilities🟢 100 existing vulnerabilities detected
Fuzzing🟢 10project is fuzzed
License🟢 9license file detected
Signed-Releases🟢 85 out of the last 5 releases have a total of 5 signed artifacts.
Branch-Protection⚠️ -1internal error: error during branchesHandler.setup: internal error: some github tokens can't read classic branch protection rules: https://github.com/ossf/scorecard-action/blob/main/docs/authentication/fine-grained-auth-token.md
SAST⚠️ 0SAST tool is not run on all commits -- score normalized to 0

Scanned Files

  • pyproject.toml

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 4, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

  • 🔍 Trigger a full review
📝 Walkthrough

Walkthrough

This PR introduces a comprehensive Python SDK for QueryWeaver, providing synchronous and asynchronous interfaces for text-to-SQL query generation. It includes new backend modules for sync operations, a complete queryweaver_sdk package with client and model classes, Docker-based test infrastructure, and updated CI/CD configuration with uv/pipenv support.

Changes

Cohort / File(s) Summary
CI/CD & Build Configuration
.github/workflows/tests.yml, Makefile, pyproject.toml, docker-compose.test.yml
Added SDK test job to CI with Redis, PostgreSQL, MySQL services; Makefile restructured with uv/pipenv detection, new test targets (test-sdk, test-all, docker-test-services), and build-package target; pyproject.toml introduced with project metadata, dependencies, Hatch build config, and test/lint settings; docker-compose.test.yml provides test service definitions with health checks.
SDK Package Core
queryweaver_sdk/__init__.py, queryweaver_sdk/models.py, queryweaver_sdk/connection.py, queryweaver_sdk/client.py
New SDK package exposing public API surface (QueryWeaver class, result/metadata models, FalkorDB connection wrapper). Client provides async interface for database operations (connect, query, schema retrieval, deletion, refresh). Models define structured result types with metadata, analysis, and compatibility accessors. Connection class manages FalkorDB lifecycle with lazy initialization and flexible configuration.
API Backend Sync Support
api/core/schema_loader.py, api/core/text2sql.py, api/core/text2sql_sync.py
Extended schema_loader with load_database_sync() for non-streaming database schema loading; refined text2sql error handling to catch specific Redis/connection errors; introduced comprehensive text2sql_sync module with query_database_sync(), execute_destructive_operation_sync(), and refresh_database_schema_sync() providing end-to-end sync query pipeline with relevancy checking, SQL analysis, healing, and confirmation workflow.
Test Infrastructure
tests/test_sdk/__init__.py, tests/test_sdk/conftest.py, tests/test_sdk/test_queryweaver.py
New test suite for SDK with pytest configuration, fixtures for external services (FalkorDB, PostgreSQL, MySQL), event loop management, and comprehensive integration tests covering initialization, database operations, schema retrieval, query execution, and model serialization.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant QueryWeaver
    participant Cache as Relevancy/Memory
    participant Analyzer as Analysis Agent
    participant Executor as SQL Executor
    participant Healer as Healer Agent
    participant Formatter as Response Formatter
    participant Database

    User->>QueryWeaver: query_database_sync(user_id, graph_id, chat_data)
    QueryWeaver->>QueryWeaver: Validate & initialize context
    QueryWeaver->>Cache: Check relevancy & find tables
    Cache-->>QueryWeaver: Relevant tables identified
    QueryWeaver->>Analyzer: Analyze natural language → SQL
    Analyzer-->>QueryWeaver: SQL + confidence + validity
    QueryWeaver->>Executor: Execute SQL
    rect rgba(200, 100, 100, 0.5)
    alt SQL Execution Fails
        Executor-->>QueryWeaver: Error
        QueryWeaver->>Healer: Heal SQL
        Healer-->>QueryWeaver: Fixed SQL
        QueryWeaver->>Executor: Re-execute
    end
    end
    Executor->>Database: Run query
    Database-->>Executor: Results
    Executor-->>QueryWeaver: Results + execution_time
    QueryWeaver->>Formatter: Format AI response
    Formatter-->>QueryWeaver: Polished response
    QueryWeaver-->>User: QueryResult (SQL, results, analysis, metadata)
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 Hop along with QueryWeaver's new way,
Async clients dancing, sync queries at play!
Databases connected through FalkorDB's door,
From natural speech to SQL we explore! ✨
Tests spin up services in Docker's embrace,
Healing and confidence keeping the pace! 🚀

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: Add QueryWeaver Python SDK for serverless Text2SQL' clearly and specifically describes the main change: introducing a new Python SDK package for Text2SQL functionality.
Docstring Coverage ✅ Passed Docstring coverage is 98.53% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch dvirdu_python_api

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@DvirDukhan DvirDukhan changed the base branch from main to staging February 4, 2026 20:51
assert conn_result.success

# First query
result1 = await qw.query(
find_task.cancel()
try:
await find_task
except asyncio.CancelledError:
DvirDukhan and others added 3 commits February 4, 2026 22:53
- Add detailed assertions for query results (customer names, counts, etc.)
- Add tests for filter queries, count aggregation, and joins
- Validate SQL query structure and result data
- Add session-scoped event loop to fix pytest-asyncio issues
- Handle async event loop cleanup errors gracefully with skip
- Expand model serialization tests
@DvirDukhan DvirDukhan requested a review from galshubeli February 4, 2026 21:25
Dvir Dukhan added 2 commits February 4, 2026 23:25
Disable warnings that are intentional architectural choices:
- C0415: import-outside-toplevel (lazy imports for SDK)
- W0718: broad-exception-caught (error handling)
- R0902: too-many-instance-attributes (dataclasses)
- R0903: too-few-public-methods
- R0911: too-many-return-statements
- R0913/R0917: too-many-arguments (SDK API design)
- C0302: too-many-lines
- Extract SDK sync functions to new api/core/text2sql_sync.py module
- Split QueryResult into composition: QueryResult + QueryMetadata + QueryAnalysis
- Reduce local variables in query_database_sync with helper functions
- Fix broad exception handling - use specific Redis/Connection/OS errors
- Refactor query method to accept Union[str, QueryRequest]
- Add compatibility properties to QueryResult for backwards compatibility
- Document lazy imports in client.py module docstring

Pylint score improved from 9.81/10 to 9.91/10
Remaining E0401 errors are missing dependencies in venv, not code issues
@galshubeli
Copy link
Collaborator

@coderabbitai review

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 5, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 18

🤖 Fix all issues with AI agents
In @.github/workflows/tests.yml:
- Around line 130-134: The CI .env creation step ("Create test environment
file") currently only writes FASTAPI_SECRET_KEY; update that step to also append
FALKORDB_URL with the proper test URL (use the same FALKORDB_URL value for the
SDK job and ensure the unit-tests step that generates the CI .env uses the
identical value) so both workflow steps produce a .env containing
FASTAPI_SECRET_KEY and FALKORDB_URL; keep the rest of the workflow (Python 3.12,
pipenv sync --dev, starting FalkorDB for tests, Playwright browser install)
unchanged.

In `@api/core/schema_loader.py`:
- Around line 230-237: The except block currently embeds raw exception text
(str(e)) into the DatabaseConnection.message which may leak credentials;
instead, update the except handler that catches (RedisError, ConnectionError,
OSError) to log the full exception (logging.exception already does this) but
return a generic error message in the DatabaseConnection (e.g., "Error
connecting to database") and preserve success=False, tables_loaded=0; reference
the DatabaseConnection construction in this except block and the exception
variable e when making the change.
- Around line 212-221: The returned DatabaseConnection currently sets
database_id to just the extracted db_name (from url.split(...)) which omits the
required user namespace; in the load_database_sync function build the namespaced
graph_id by prepending the available user_id (e.g., f"{user_id}_{db_name}") and
return that as database_id in the DatabaseConnection so callers receive the same
namespaced graph_id produced by the loaders' refresh_graph_schema methods;
update the DatabaseConnection(...) call to use the constructed namespaced id
instead of raw db_name, keeping tables_loaded, success and message the same.

In `@api/core/text2sql_sync.py`:
- Around line 199-202: The current error handling in the block checking
healing_result uses "raise exec_error", which loses the original traceback;
update the handler in the function where healing_result and exec_error are
defined (the block that currently reads if not healing_result.get("success"):
raise exec_error) to use a bare "raise" so the original exception traceback is
preserved when re-raising the caught error.
- Around line 338-345: The fire-and-forget asyncio.create_task call that invokes
ctx.memory_tool.save_query_memory (using ctx.chat.queries_history[-1] and
final_sql) can drop exceptions; update this to capture the created Task (e.g.,
task = asyncio.create_task(...)) and either await it at an appropriate point or
wrap the coroutine body in a try/except that logs exceptions via the existing
logger (or attach a done callback that logs task.exception()). Ensure you
reference the asyncio.create_task invocation and the
ctx.memory_tool.save_query_memory call when implementing the change so failures
are surfaced instead of being silently lost.
- Around line 58-60: The _graph_name function currently always prefixes graph_id
with user_id; change it to match the original behavior by returning graph_id
unchanged when it already starts with GENERAL_PREFIX, otherwise return the
namespaced f"{user_id}_{graph_id}". Update the logic in function _graph_name to
check for GENERAL_PREFIX (the same constant used in api/core/text2sql.py) and
apply the prefixing only when the graph_id does not start with that prefix.

In `@Makefile`:
- Line 71: The Makefile currently silences pylint failures with "|| true"
causing lint errors to be ignored; update the rule that runs "$(RUN_CMD) pylint
$(shell git ls-files '*.py') || true" to remove the "|| true" so pylint's
non-zero exit status fails the make target (or alternatively capture and re-exit
with the pylint status), ensuring the lint step using RUN_CMD and pylint
actually propagates failures in CI.
- Line 55: The test-unit Makefile target currently runs pytest with -k "not e2e"
but still includes SDK integration tests; update the pytest invocation in the
rule that uses $(RUN_CMD) python -m pytest tests/ to also exclude the SDK
integration directory (e.g., add an additional -k filter like 'and not test_sdk'
or use -k "not e2e and not test_sdk") so tests/test_sdk are skipped when running
the unit target; ensure you update the same command using the $(RUN_CMD)
invocation so test-unit no longer runs SDK integration tests.

In `@pyproject.toml`:
- Line 47: Replace the moving-target git ref for the graphiti-core dependency
with a specific commit SHA to ensure reproducible installs: locate the
dependency line that currently reads "graphiti-core @
git+https://github.com/FalkorDB/graphiti.git@staging" in pyproject.toml and
change the branch ref to a full commit hash (format:
git+https://...@<commit-sha>), committing the updated pyproject.toml so future
installs use that exact commit; update the SHA deliberately when you want to
pull upstream changes.

In `@queryweaver_sdk/client.py`:
- Around line 76-83: _setup_connection currently writes the FalkorDB connection
into module-global api.extensions.db which will be overwritten when multiple
QueryWeaver instances exist; change this by removing the direct assignment and
instead either (a) add an instance-level accessor on QueryWeaver that other
components call to get the connection, (b) add a registration API on
api.extensions (e.g. api.extensions.register_db(instance_id, db) and use a
per-instance key) or (c) use a context/local registry to hold the connection for
the current instance, and update callers to obtain the connection via that new
accessor/registry; if you opt not to change behavior, add documentation to the
QueryWeaver class and _setup_connection noting that only a single SDK instance
is supported and that it mutates api.extensions.db.
- Around line 99-101: The current code truncates graph_id via
graph_id.strip()[:200] then only checks for emptiness but errors claim "must be
non-empty and less than 200 characters"; fix by validating length before
truncation and return a consistent error: compute clean = graph_id.strip(), if
not clean or len(clean) > 200 then raise the appropriate error (either raise
ValueError with message "Invalid graph_id, must be non-empty and less than 200
characters." or, to match api/core/text2sql.py:_graph_name, raise
GraphNotFoundError with "Invalid graph_id, must be less than 200 characters.");
only truncate after passing validation if truncation is needed, and update the
raised exception type/message to match the chosen behavior.

In `@queryweaver_sdk/connection.py`:
- Around line 111-116: The close() method must also close internal non-pooled
FalkorDB Redis connections: before setting self._db = None, detect when
self._pool is None and self._db exists, access the internal connection via
self._db.connection and await its aclose() to release the underlying
redis.asyncio.Redis client; keep the existing pool disconnect logic for when
self._pool is not None and ensure both branches null out self._db afterwards.

In `@tests/test_sdk/conftest.py`:
- Around line 109-144: The fixture currently reads TEST_MYSQL_URL into the
variable url but then ignores it and hardcodes credentials when creating the
pymysql connection; update the fixture to parse TEST_MYSQL_URL (falling back to
the existing default) and extract host, port, user, password, and database, then
pass those parsed values into pymysql.connect instead of the hardcoded
localhost/root/root/testdb; look for the variable url and the
pymysql.connect(...) call in conftest.py and replace the hardcoded args with the
parsed components (use urllib.parse.urlparse or similar) so the fixture respects
the env var and avoids hardcoded credentials.
- Around line 147-153: The queryweaver fixture yields a QueryWeaver instance but
never closes it, leaking connections; wrap the yield in a try/finally and call
the instance cleanup method (e.g., qw.close() or await qw.aclose() if async) in
the finally block so the QueryWeaver created in the fixture (symbol:
QueryWeaver, variable: qw, fixture name: queryweaver) is properly closed after
tests complete.
- Around line 21-26: Remove the custom session-scoped event_loop fixture (the
function named event_loop) from conftest.py; this redefinition is
deprecated/removed in pytest-asyncio. Delete the event_loop fixture and instead
mark async tests with pytest.mark.asyncio(scope="session") (or
loop_scope="session" for 0.24+) or set asyncio_default_fixture_loop_scope =
"session" in pytest configuration so tests get a session-scoped loop without
redefining event_loop.

In `@tests/test_sdk/test_queryweaver.py`:
- Line 224: The assert in the test (the assertion comparing "Bob Jones" against
customer_names) uses an unnecessary f-string; change the assertion message in
the line containing assert "Bob Jones" not in customer_names to use a plain
string (remove the leading f from the message) so it reads: assert "Bob Jones"
not in customer_names, "'Bob Jones' should not be in NYC results".
- Around line 391-412: The test is calling QueryResult with flattened fields
that don't exist; instead build the nested QueryMetadata and QueryAnalysis
objects and pass them into QueryResult (e.g., create a QueryMetadata instance
for sql_query and results, and a QueryAnalysis instance for ai_response,
confidence, is_destructive, requires_confirmation, execution_time), then call
QueryResult(..., metadata=that_metadata, analysis=that_analysis) and update
assertions to read from d["metadata"] and d["analysis"] (or the dict keys
produced by QueryResult.to_dict()) to match the model's structure.
- Around line 447-465: The test calls QueryResult with a non-existent confidence
parameter; instead instantiate a QueryMetadata with the confidence value and
pass it via the QueryResult.metadata field. Update the
test_query_result_default_values to import QueryMetadata (from
queryweaver_sdk.models) and create metadata = QueryMetadata(confidence=0.8) then
construct QueryResult(sql_query="SELECT 1", results=[], ai_response="Test",
metadata=metadata) and keep the same assertions for default optional fields on
the QueryResult instance.
🧹 Nitpick comments (7)
tests/test_sdk/test_queryweaver.py (3)

85-86: Use a more specific exception type instead of bare Exception.

Catching Exception is too broad and may mask unrelated failures. Based on the InvalidArgumentError raised by the SDK for invalid URLs (per api/core/schema_loader.py), use that specific exception.

♻️ Proposed fix
     `@pytest.mark.asyncio`
     async def test_connect_invalid_url(self, queryweaver):
         """Test connecting with invalid URL format."""
-        with pytest.raises(Exception):  # Should raise InvalidArgumentError
+        from api.core.errors import InvalidArgumentError
+        with pytest.raises(InvalidArgumentError):
             await queryweaver.connect_database("invalid://url")

266-269: Rename unused loop variable key to _key.

The loop variable is not used within the loop body.

♻️ Proposed fix
-            for key, val in first_result.items():
+            for _key, val in first_result.items():
                 if isinstance(val, int):
                     count_value = val
                     break

51-51: Unused has_llm_key fixture parameter.

The has_llm_key fixture is injected but never used in these test methods. If this is intentional (to ensure LLM key presence before running), consider adding a brief comment or using pytest.mark.usefixtures("has_llm_key") as a class decorator instead.

Also applies to: 68-68, 94-94, 143-143, 185-185, 234-234, 285-285, 336-336, 369-369

queryweaver_sdk/__init__.py (1)

40-51: Consider sorting __all__ alphabetically for consistency.

Static analysis suggests sorting the exports. This is optional but improves maintainability.

♻️ Proposed fix
 __all__ = [
+    "ChatMessage",
+    "DatabaseConnection",
+    "FalkorDBConnection",
+    "QueryAnalysis",
+    "QueryMetadata",
+    "QueryRequest",
+    "QueryResult",
     "QueryWeaver",
-    "QueryResult",
-    "QueryMetadata",
-    "QueryAnalysis",
-    "SchemaResult", 
-    "DatabaseConnection",
     "RefreshResult",
-    "QueryRequest",
-    "ChatMessage",
-    "FalkorDBConnection",
+    "SchemaResult",
 ]
queryweaver_sdk/models.py (1)

1-209: Consider adding a factory method for backward-compatible construction.

The pipeline failure shows tests using QueryResult(confidence=0.95, ...) which doesn't work with the current signature. While fixing the tests is the right approach, you could also add a @classmethod factory for convenience if flat-kwarg construction is a common pattern.

♻️ Optional factory method
`@classmethod`
def from_flat(
    cls,
    sql_query: str,
    results: list[dict[str, Any]],
    ai_response: str,
    confidence: float = 0.0,
    execution_time: float = 0.0,
    is_valid: bool = True,
    is_destructive: bool = False,
    requires_confirmation: bool = False,
    missing_information: str = "",
    ambiguities: str = "",
    explanation: str = "",
) -> "QueryResult":
    """Create QueryResult from flat keyword arguments."""
    return cls(
        sql_query=sql_query,
        results=results,
        ai_response=ai_response,
        metadata=QueryMetadata(
            confidence=confidence,
            execution_time=execution_time,
            is_valid=is_valid,
            is_destructive=is_destructive,
            requires_confirmation=requires_confirmation,
        ),
        analysis=QueryAnalysis(
            missing_information=missing_information,
            ambiguities=ambiguities,
            explanation=explanation,
        ),
    )
api/core/text2sql_sync.py (2)

472-477: Use logging.exception for automatic traceback inclusion.

logging.exception automatically includes the traceback when called from an exception handler.

♻️ Proposed fix
     except (RedisError, ConnectionError, OSError) as e:
-        logging.error("Error executing SQL query: %s", str(e))
+        logging.exception("Error executing SQL query")
         return _build_query_result(
             sql_query=analysis.sql_query,
             results=[],
-            ai_response=f"Error executing SQL query: {str(e)}",
+            ai_response=f"Error executing SQL query: {e!s}",

Apply the same pattern at lines 576 and 631.


181-183: Move success return outside the try block.

Per Ruff TRY300, returning inside try can mask exceptions raised during the return statement itself.

♻️ Proposed fix
     try:
         query_results = context.loader_class.execute_sql_query(sql_query, context.db_url)
-        return sql_query, query_results
     except (RedisError, ConnectionError, OSError) as exec_error:
         # healing logic...
+    else:
+        return sql_query, query_results

Comment on lines +130 to +134
- name: Create test environment file
run: |
cp .env.example .env
echo "FASTAPI_SECRET_KEY=test-secret-key" >> .env

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Add FALKORDB_URL to the CI .env.

Please include the FalkorDB URL in the generated .env for the SDK job (and keep the unit-tests step in sync) so the CI environment file matches the workflow guidelines.

🔧 Proposed fix
     - name: Create test environment file
       run: |
         cp .env.example .env
         echo "FASTAPI_SECRET_KEY=test-secret-key" >> .env
+        echo "FALKORDB_URL=redis://localhost:6379" >> .env

As per coding guidelines: GitHub Actions workflows should set up Python 3.12, use pipenv (sync --dev), create a CI .env using FALKORDB_URL, start FalkorDB for tests, and install Playwright browsers for E2E.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- name: Create test environment file
run: |
cp .env.example .env
echo "FASTAPI_SECRET_KEY=test-secret-key" >> .env
- name: Create test environment file
run: |
cp .env.example .env
echo "FASTAPI_SECRET_KEY=test-secret-key" >> .env
echo "FALKORDB_URL=redis://localhost:6379" >> .env
🤖 Prompt for AI Agents
In @.github/workflows/tests.yml around lines 130 - 134, The CI .env creation
step ("Create test environment file") currently only writes FASTAPI_SECRET_KEY;
update that step to also append FALKORDB_URL with the proper test URL (use the
same FALKORDB_URL value for the SDK job and ensure the unit-tests step that
generates the CI .env uses the identical value) so both workflow steps produce a
.env containing FASTAPI_SECRET_KEY and FALKORDB_URL; keep the rest of the
workflow (Python 3.12, pipenv sync --dev, starting FalkorDB for tests,
Playwright browser install) unchanged.

Comment on lines +212 to +221
# Extract database name from the message or URL
# The loader typically returns the graph_id in the final message
db_name = url.split("/")[-1].split("?")[0] # Extract DB name from URL

return DatabaseConnection(
database_id=db_name,
success=True,
tables_loaded=tables_loaded,
message="Database connected and schema loaded successfully",
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect loader.load implementations for graph_id/database_id derivation
rg -n -C3 "async def load\(" api/loaders -g '*.py'

Repository: FalkorDB/QueryWeaver

Length of output: 1426


🏁 Script executed:

#!/bin/bash
# Search for graph_id and database_id usage patterns
rg -n -C3 "graph_id|database_id" api/loaders api/core queryweaver_sdk -g '*.py'

Repository: FalkorDB/QueryWeaver

Length of output: 28477


🏁 Script executed:

#!/bin/bash
# Check DatabaseConnection usage in SDK
rg -n -C3 "DatabaseConnection" queryweaver_sdk -g '*.py'

Repository: FalkorDB/QueryWeaver

Length of output: 2361


🏁 Script executed:

#!/bin/bash
# Also check the schema_loader.py file to understand context
cat -n api/core/schema_loader.py | head -230 | tail -50

Repository: FalkorDB/QueryWeaver

Length of output: 2160


Return the actual namespaced graph_id instead of extracting database name from URL.

The loader creates a namespaced graph_id in the format {user_id}_{database_name} (as shown in the loaders' refresh_graph_schema methods), but load_database_sync returns only the database name extracted from the URL. This mismatch causes SDK calls using the returned database_id to fail. Since user_id is already available in the function, construct and return the namespaced graph_id: database_id=f"{user_id}_{db_name}".

🤖 Prompt for AI Agents
In `@api/core/schema_loader.py` around lines 212 - 221, The returned
DatabaseConnection currently sets database_id to just the extracted db_name
(from url.split(...)) which omits the required user namespace; in the
load_database_sync function build the namespaced graph_id by prepending the
available user_id (e.g., f"{user_id}_{db_name}") and return that as database_id
in the DatabaseConnection so callers receive the same namespaced graph_id
produced by the loaders' refresh_graph_schema methods; update the
DatabaseConnection(...) call to use the constructed namespaced id instead of raw
db_name, keeping tables_loaded, success and message the same.

Comment on lines +230 to +237
except (RedisError, ConnectionError, OSError) as e:
logging.exception("Error loading database: %s", str(e))
return DatabaseConnection(
database_id="",
success=False,
tables_loaded=0,
message=f"Error connecting to database: {str(e)}",
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Avoid returning raw exception text (may include credentials).

str(e) can include connection strings or user info. Prefer a generic message and keep the detailed error only in logs.

🔧 Proposed fix
         return DatabaseConnection(
             database_id="",
             success=False,
             tables_loaded=0,
-            message=f"Error connecting to database: {str(e)}",
+            message="Error connecting to database",
         )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
except (RedisError, ConnectionError, OSError) as e:
logging.exception("Error loading database: %s", str(e))
return DatabaseConnection(
database_id="",
success=False,
tables_loaded=0,
message=f"Error connecting to database: {str(e)}",
)
except (RedisError, ConnectionError, OSError) as e:
logging.exception("Error loading database: %s", str(e))
return DatabaseConnection(
database_id="",
success=False,
tables_loaded=0,
message="Error connecting to database",
)
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 231-231: Redundant exception object included in logging.exception call

(TRY401)


[warning] 236-236: Use explicit conversion flag

Replace with conversion flag

(RUF010)

🤖 Prompt for AI Agents
In `@api/core/schema_loader.py` around lines 230 - 237, The except block currently
embeds raw exception text (str(e)) into the DatabaseConnection.message which may
leak credentials; instead, update the except handler that catches (RedisError,
ConnectionError, OSError) to log the full exception (logging.exception already
does this) but return a generic error message in the DatabaseConnection (e.g.,
"Error connecting to database") and preserve success=False, tables_loaded=0;
reference the DatabaseConnection construction in this except block and the
exception variable e when making the change.

Comment on lines +58 to +60
def _graph_name(user_id: str, graph_id: str) -> str:
"""Generate namespaced graph name."""
return f"{user_id}_{graph_id}"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

_graph_name doesn't handle GENERAL_PREFIX like the original in api/core/text2sql.py.

The original _graph_name in api/core/text2sql.py (lines 99-108) checks for GENERAL_PREFIX and returns the graph_id unchanged if it starts with that prefix. This implementation always prefixes with user_id_.

🔧 Proposed fix to align with original behavior
 def _graph_name(user_id: str, graph_id: str) -> str:
     """Generate namespaced graph name."""
+    graph_id = graph_id.strip()[:200]
+    if not graph_id:
+        raise InvalidArgumentError("Invalid graph_id, must be non-empty")
+    
+    if GENERAL_PREFIX and graph_id.startswith(GENERAL_PREFIX):
+        return graph_id
+    
     return f"{user_id}_{graph_id}"
🤖 Prompt for AI Agents
In `@api/core/text2sql_sync.py` around lines 58 - 60, The _graph_name function
currently always prefixes graph_id with user_id; change it to match the original
behavior by returning graph_id unchanged when it already starts with
GENERAL_PREFIX, otherwise return the namespaced f"{user_id}_{graph_id}". Update
the logic in function _graph_name to check for GENERAL_PREFIX (the same constant
used in api/core/text2sql.py) and apply the prefixing only when the graph_id
does not start with that prefix.

Comment on lines +199 to +202
if not healing_result.get("success"):
raise exec_error

return healing_result["sql_query"], healing_result["query_results"]
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Use bare raise to preserve the original traceback.

Using raise exec_error loses the traceback from the healing attempt. Use bare raise instead.

🔧 Proposed fix
         if not healing_result.get("success"):
-            raise exec_error
+            raise
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 200-200: Use raise without specifying exception name

Remove exception name

(TRY201)

🤖 Prompt for AI Agents
In `@api/core/text2sql_sync.py` around lines 199 - 202, The current error handling
in the block checking healing_result uses "raise exec_error", which loses the
original traceback; update the handler in the function where healing_result and
exec_error are defined (the block that currently reads if not
healing_result.get("success"): raise exec_error) to use a bare "raise" so the
original exception traceback is preserved when re-raising the caught error.

Comment on lines +109 to +144
url = os.getenv("TEST_MYSQL_URL", "mysql://root:root@localhost:3306/testdb")

# Verify connection and create test schema
try:
import pymysql
conn = pymysql.connect(
host='localhost',
user='root',
password='root',
database='testdb'
)
cursor = conn.cursor()

# Create test tables
cursor.execute("DROP TABLE IF EXISTS products")
cursor.execute("""
CREATE TABLE IF NOT EXISTS products (
id INT AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(100) NOT NULL,
category VARCHAR(50),
price DECIMAL(10,2)
)
""")

cursor.execute("""
INSERT INTO products (name, category, price) VALUES
('Laptop', 'Electronics', 999.99),
('Mouse', 'Electronics', 29.99),
('Desk', 'Furniture', 199.99)
""")
conn.commit()
conn.close()
except Exception as e:
pytest.skip(f"MySQL not available: {e}")

return url
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

MySQL fixture ignores url and hardcodes credentials.

The fixture reads TEST_MYSQL_URL into url but then ignores it entirely, hardcoding localhost, root, root, testdb. This creates inconsistency and a potential security concern with hardcoded credentials.

🔧 Proposed fix - parse URL or use env vars consistently
 `@pytest.fixture`(scope="session")
 def mysql_url():
     """Provide MySQL connection URL with test database."""
     url = os.getenv("TEST_MYSQL_URL", "mysql://root:root@localhost:3306/testdb")
     
     # Verify connection and create test schema
     try:
         import pymysql
+        from urllib.parse import urlparse
+        parsed = urlparse(url.replace("mysql://", "mysql+pymysql://"))
+        
         conn = pymysql.connect(
-            host='localhost',
-            user='root',
-            password='root',
-            database='testdb'
+            host=parsed.hostname or 'localhost',
+            port=parsed.port or 3306,
+            user=parsed.username or 'root',
+            password=parsed.password or 'root',
+            database=parsed.path.lstrip('/') or 'testdb'
         )
🧰 Tools
🪛 ast-grep (0.40.5)

[warning] 113-118: A secret is hard-coded in the application. Secrets stored in source code, such as credentials, identifiers, and other types of sensitive data, can be leaked and used by internal or external malicious actors. Use environment variables to securely provide credentials and other secrets or retrieve them from a secure vault or Hardware Security Module (HSM).
Context: pymysql.connect(
host='localhost',
user='root',
password='root',
database='testdb'
)
Note: [CWE-798]: Use of Hard-coded Credentials [OWASP A07:2021]: Identification and Authentication Failures [REFERENCES]
https://cheatsheetseries.owasp.org/cheatsheets/Secrets_Management_Cheat_Sheet.html

(python-pymysql-hardcoded-secret-python)

🪛 Ruff (0.14.14)

[error] 117-117: Possible hardcoded password assigned to argument: "password"

(S106)


[warning] 141-141: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@tests/test_sdk/conftest.py` around lines 109 - 144, The fixture currently
reads TEST_MYSQL_URL into the variable url but then ignores it and hardcodes
credentials when creating the pymysql connection; update the fixture to parse
TEST_MYSQL_URL (falling back to the existing default) and extract host, port,
user, password, and database, then pass those parsed values into pymysql.connect
instead of the hardcoded localhost/root/root/testdb; look for the variable url
and the pymysql.connect(...) call in conftest.py and replace the hardcoded args
with the parsed components (use urllib.parse.urlparse or similar) so the fixture
respects the env var and avoids hardcoded credentials.

Comment on lines +147 to +153
@pytest.fixture
def queryweaver(falkordb_url):
"""Provide initialized QueryWeaver instance."""
from queryweaver_sdk import QueryWeaver

qw = QueryWeaver(falkordb_url=falkordb_url, user_id="test_user")
yield qw
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

QueryWeaver instance is not closed after use.

The queryweaver fixture yields the instance but doesn't close it, potentially leaking the FalkorDB connection.

🔧 Proposed fix
 `@pytest.fixture`
-def queryweaver(falkordb_url):
+async def queryweaver(falkordb_url):
     """Provide initialized QueryWeaver instance."""
     from queryweaver_sdk import QueryWeaver
     
     qw = QueryWeaver(falkordb_url=falkordb_url, user_id="test_user")
     yield qw
+    await qw.close()
🤖 Prompt for AI Agents
In `@tests/test_sdk/conftest.py` around lines 147 - 153, The queryweaver fixture
yields a QueryWeaver instance but never closes it, leaking connections; wrap the
yield in a try/finally and call the instance cleanup method (e.g., qw.close() or
await qw.aclose() if async) in the finally block so the QueryWeaver created in
the fixture (symbol: QueryWeaver, variable: qw, fixture name: queryweaver) is
properly closed after tests complete.

assert "Alice Smith" in customer_names, f"Expected 'Alice Smith' in results, got {customer_names}"
assert "Carol White" in customer_names, f"Expected 'Carol White' in results, got {customer_names}"
# Bob Jones should NOT be in results (he's from Los Angeles)
assert "Bob Jones" not in customer_names, f"'Bob Jones' should not be in NYC results"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

Remove extraneous f-prefix from string literal.

This f-string has no placeholders and should be a regular string.

🔧 Proposed fix
-            assert "Bob Jones" not in customer_names, f"'Bob Jones' should not be in NYC results"
+            assert "Bob Jones" not in customer_names, "'Bob Jones' should not be in NYC results"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
assert "Bob Jones" not in customer_names, f"'Bob Jones' should not be in NYC results"
assert "Bob Jones" not in customer_names, "'Bob Jones' should not be in NYC results"
🧰 Tools
🪛 Ruff (0.14.14)

[error] 224-224: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In `@tests/test_sdk/test_queryweaver.py` at line 224, The assert in the test (the
assertion comparing "Bob Jones" against customer_names) uses an unnecessary
f-string; change the assertion message in the line containing assert "Bob Jones"
not in customer_names to use a plain string (remove the leading f from the
message) so it reads: assert "Bob Jones" not in customer_names, "'Bob Jones'
should not be in NYC results".

Comment on lines 391 to 412
def test_query_result_to_dict(self):
"""Test QueryResult serialization."""
from queryweaver_sdk.models import QueryResult

result = QueryResult(
sql_query="SELECT * FROM customers",
results=[{"id": 1, "name": "Alice"}],
ai_response="Found 1 customer",
confidence=0.95,
is_destructive=False,
requires_confirmation=False,
execution_time=0.5,
)

d = result.to_dict()
assert d["sql_query"] == "SELECT * FROM customers"
assert d["confidence"] == 0.95
assert d["results"] == [{"id": 1, "name": "Alice"}]
assert d["ai_response"] == "Found 1 customer"
assert d["is_destructive"] is False
assert d["requires_confirmation"] is False
assert d["execution_time"] == 0.5
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Test uses incorrect constructor signature for QueryResult.

The pipeline failure indicates QueryResult.__init__() got an unexpected keyword argument 'confidence'. Per queryweaver_sdk/models.py, QueryResult accepts metadata: QueryMetadata and analysis: QueryAnalysis as nested objects, not top-level confidence, is_destructive, etc.

🐛 Proposed fix
+        from queryweaver_sdk.models import QueryMetadata
+        
         result = QueryResult(
             sql_query="SELECT * FROM customers",
             results=[{"id": 1, "name": "Alice"}],
             ai_response="Found 1 customer",
-            confidence=0.95,
-            is_destructive=False,
-            requires_confirmation=False,
-            execution_time=0.5,
+            metadata=QueryMetadata(
+                confidence=0.95,
+                is_destructive=False,
+                requires_confirmation=False,
+                execution_time=0.5,
+            ),
         )
🤖 Prompt for AI Agents
In `@tests/test_sdk/test_queryweaver.py` around lines 391 - 412, The test is
calling QueryResult with flattened fields that don't exist; instead build the
nested QueryMetadata and QueryAnalysis objects and pass them into QueryResult
(e.g., create a QueryMetadata instance for sql_query and results, and a
QueryAnalysis instance for ai_response, confidence, is_destructive,
requires_confirmation, execution_time), then call QueryResult(...,
metadata=that_metadata, analysis=that_analysis) and update assertions to read
from d["metadata"] and d["analysis"] (or the dict keys produced by
QueryResult.to_dict()) to match the model's structure.

Comment on lines 447 to 465
def test_query_result_default_values(self):
"""Test QueryResult with minimal required values."""
from queryweaver_sdk.models import QueryResult

result = QueryResult(
sql_query="SELECT 1",
results=[],
ai_response="Test",
confidence=0.8,
)

# Check defaults for optional fields
assert result.is_destructive is False
assert result.requires_confirmation is False
assert result.execution_time == 0.0
assert result.is_valid is True
assert result.missing_information == ""
assert result.ambiguities == ""
assert result.explanation == ""
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

Test uses incorrect constructor signature for QueryResult.

Same issue as above - confidence should be passed via QueryMetadata.

🐛 Proposed fix
+        from queryweaver_sdk.models import QueryMetadata
+        
         result = QueryResult(
             sql_query="SELECT 1",
             results=[],
             ai_response="Test",
-            confidence=0.8,
+            metadata=QueryMetadata(confidence=0.8),
         )
🤖 Prompt for AI Agents
In `@tests/test_sdk/test_queryweaver.py` around lines 447 - 465, The test calls
QueryResult with a non-existent confidence parameter; instead instantiate a
QueryMetadata with the confidence value and pass it via the QueryResult.metadata
field. Update the test_query_result_default_values to import QueryMetadata (from
queryweaver_sdk.models) and create metadata = QueryMetadata(confidence=0.8) then
construct QueryResult(sql_query="SELECT 1", results=[], ai_response="Test",
metadata=metadata) and keep the same assertions for default optional fields on
the QueryResult instance.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants