feat: mypy for all type check #10921

Merged: 155 commits, Dec 24, 2024

Changes from 1 commit
Commits (155)
aadd4eb
feat: mypy for all type check
yihong0618 Nov 21, 2024
ab4233a
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 21, 2024
df6a6a4
fix: merge main and 4195 errors left
yihong0618 Nov 21, 2024
3fae64b
fix: tiny fix and make ci happy
yihong0618 Nov 21, 2024
804c778
fix: make lockfile happy
yihong0618 Nov 21, 2024
2cc302d
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 21, 2024
c11b984
fix: wip
yihong0618 Nov 22, 2024
3d3045b
fix: lint
yihong0618 Nov 22, 2024
a14d022
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 22, 2024
2264ea8
fix: wip and less than 4000 errors 3908 now
yihong0618 Nov 22, 2024
f378c5b
fix: all import-untyped done
yihong0618 Nov 22, 2024
34c7a04
fix: wip 3608 left
yihong0618 Nov 22, 2024
234b57e
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 22, 2024
bffe75a
fix: wip 3497 left
yihong0618 Nov 22, 2024
ccad160
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 22, 2024
343794d
chore: poetry lock update
yihong0618 Nov 22, 2024
7f78f36
fix: wip 3372 left
yihong0618 Nov 23, 2024
448069a
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 23, 2024
1404c74
chore: lock file
yihong0618 Nov 23, 2024
0ab4b4a
fix: wip and will change to python3.12
yihong0618 Nov 24, 2024
e65d4f9
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 24, 2024
c245e60
fix: drop conflict
yihong0618 Nov 24, 2024
81e43f8
fix: lock version
yihong0618 Nov 24, 2024
dcf8118
fix: wip
yihong0618 Nov 25, 2024
db3103c
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 25, 2024
4eb9b57
fix: merge main
yihong0618 Nov 25, 2024
30e8810
fix: wip
yihong0618 Nov 26, 2024
da9101b
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 26, 2024
2b59110
fix: wip
yihong0618 Nov 27, 2024
97ea17a
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 27, 2024
1d9a832
fix: Xinference api_key can be None
yihong0618 Nov 27, 2024
eb61d31
fix: lint
yihong0618 Nov 27, 2024
d744934
fix: wip 3145 left
yihong0618 Nov 27, 2024
76b4953
Merge branch 'main' into hy/type_hints_all
yihong0618 Nov 27, 2024
9a20d0f
fix: wip and ignores tools and models
yihong0618 Nov 28, 2024
ec12e73
chore(pyproject.toml): Move mypy configurations to `mypy.ini`
laipz8200 Dec 1, 2024
6d2fa1c
chore(*): Remove variable types
laipz8200 Dec 1, 2024
22b697d
fix(graph_engine): Fix type error under `graph_engine`
laipz8200 Dec 1, 2024
c5ad98c
fix(app_dsl_service): Check if tenant_id is None before create App
laipz8200 Dec 1, 2024
d347b2b
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 2, 2024
ecfc89c
fix: merge
yihong0618 Dec 2, 2024
6485187
fix: update lock
yihong0618 Dec 2, 2024
fff6316
fix: update ruff check
yihong0618 Dec 2, 2024
2613302
fix: still wip
yihong0618 Dec 2, 2024
2a62300
fix: wip
yihong0618 Dec 2, 2024
e20043d
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 2, 2024
de7bbfd
fix: wip
yihong0618 Dec 2, 2024
f6ac94e
fix: wip 1336 left
yihong0618 Dec 2, 2024
6031e44
fix: Dict -> dict
yihong0618 Dec 2, 2024
59bbdfd
fix: feishu and 1256 left
yihong0618 Dec 2, 2024
760ae27
fix: wip
yihong0618 Dec 2, 2024
9172fd0
fix: wip 1180 left
yihong0618 Dec 3, 2024
e5e592b
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 3, 2024
cea8441
fix: wip 1115 left
yihong0618 Dec 4, 2024
6068803
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 4, 2024
922765c
fix: vdb error
yihong0618 Dec 4, 2024
57f041c
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 11, 2024
0870888
fix: wip
yihong0618 Dec 11, 2024
6ee7943
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 11, 2024
4202335
chore
yihong0618 Dec 11, 2024
61a9f8c
fix: wip 1100+ left....
yihong0618 Dec 11, 2024
a713d02
Merge branch 'hy/type_hints_all' of https://github.com/yihong0618/dif…
yihong0618 Dec 12, 2024
6b0665a
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 12, 2024
5c892b3
fix: less than 1000 -> 999 continue to work hope can be done every da…
yihong0618 Dec 12, 2024
3a132b9
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 12, 2024
6114942
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 13, 2024
2ffd2cb
fix: on progress 886 left
yihong0618 Dec 13, 2024
4919612
fix: wip 740 left
yihong0618 Dec 14, 2024
fb9f691
fix: process 695 left
yihong0618 Dec 14, 2024
d234486
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 14, 2024
5dc7e7a
chore: merge main
yihong0618 Dec 14, 2024
98fd95a
fix: less than 500 now
yihong0618 Dec 15, 2024
f86e05f
fix: ruff check
yihong0618 Dec 15, 2024
c6d9b9e
fix: wip less than 400 now
yihong0618 Dec 16, 2024
ef2c985
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 16, 2024
aaa5fce
fix: wip less than 400 errors 393 now
yihong0618 Dec 16, 2024
00b8f03
fix: ruff lint
yihong0618 Dec 16, 2024
4360568
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 17, 2024
ab55b4e
fix: merge
yihong0618 Dec 17, 2024
7e9ea21
fix: lock
yihong0618 Dec 17, 2024
dc98269
fix: wip 340 left
yihong0618 Dec 17, 2024
0a35a99
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 19, 2024
bad2e9b
fix: less than 300 now
yihong0618 Dec 19, 2024
a9bfa05
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 19, 2024
9bc0fa3
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 20, 2024
a9df950
fix: less than 200 now
yihong0618 Dec 20, 2024
2c89106
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 20, 2024
9821a0a
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 20, 2024
6a82272
fix: wip
yihong0618 Dec 20, 2024
e919dc8
fix: error
yihong0618 Dec 20, 2024
5b77eaa
fix
yihong0618 Dec 20, 2024
2b2dfb4
fix: less that 100 now
yihong0618 Dec 21, 2024
6b7062e
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 21, 2024
e025679
fix: merge main
yihong0618 Dec 21, 2024
7f81d6b
Revert "fix: merge"
yihong0618 Dec 21, 2024
2663b82
fix: wip 65 left
yihong0618 Dec 22, 2024
943a0ae
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 22, 2024
b9dfd0f
fix: wip 30 left
yihong0618 Dec 22, 2024
8a388a2
fix: ci import error
yihong0618 Dec 22, 2024
9b10d16
fix: lint
yihong0618 Dec 22, 2024
07dcf80
fix: done and some not sure type add FIXME
yihong0618 Dec 23, 2024
986f71b
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
252d538
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
767df75
fix: it
yihong0618 Dec 23, 2024
112f06b
ci: add mypy tests
yihong0618 Dec 23, 2024
c34002c
ci: add test for mypy
yihong0618 Dec 23, 2024
b08694d
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
93cc22d
fix: mypy version
yihong0618 Dec 23, 2024
36a6d12
Merge branch 'hy/type_hints_all' of https://github.com/yihong0618/dif…
yihong0618 Dec 23, 2024
95fb46c
fix: lock
yihong0618 Dec 23, 2024
057d2d1
fix: must in api files
yihong0618 Dec 23, 2024
aa4a7a6
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
bbb5c1b
fix: yml...
yihong0618 Dec 23, 2024
515b26b
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
fd42e3a
fix: mypy
yihong0618 Dec 23, 2024
cb9f238
Merge branch 'main' into hy/type_hints_all
yihong0618 Dec 23, 2024
4ca3b03
fix: super lint
yihong0618 Dec 24, 2024
ba1619b
ci: better mypy check
yihong0618 Dec 24, 2024
5256229
fix: remove unnecessary type ignore comments from computed fields
laipz8200 Dec 24, 2024
c12315b
fix: update type hints for computed fields and headers
laipz8200 Dec 24, 2024
0450409
fix: remove unnecessary type ignore comments and improve mypy command
laipz8200 Dec 24, 2024
37379ac
fix: update mypy command to use python module and remove unused pytho…
laipz8200 Dec 24, 2024
98d8c44
fix: improve document syncing logic and handle missing documents
laipz8200 Dec 24, 2024
af2a26e
fix: handle missing documents in retry indexing task and streamline e…
laipz8200 Dec 24, 2024
6686b74
fix: enhance load method with overloads for better type hinting and s…
laipz8200 Dec 24, 2024
1a9d127
fix: refactor create_clusters function to use keyword arguments for c…
laipz8200 Dec 24, 2024
15bdacc
fix: refactor update_clusters function to use named parameters for im…
laipz8200 Dec 24, 2024
49937d4
fix: remove unnecessary type ignore comment for CORS import
laipz8200 Dec 24, 2024
0deae8e
fix: update parser_api_schema return type to Mapping for improved typ…
laipz8200 Dec 24, 2024
a26560d
fix: replace computed_field decorators with property for improved cla…
laipz8200 Dec 24, 2024
1f3177c
fix: remove unnecessary type ignore comment for yaml import
laipz8200 Dec 24, 2024
8e81c22
fix: update max_active_requests to use mapped_column for improved typ…
laipz8200 Dec 24, 2024
168926b
fix: simplify return statement in AppDslService by removing unnecessa…
laipz8200 Dec 24, 2024
923dfc1
fix: streamline count statement in ConversationService for improved r…
laipz8200 Dec 24, 2024
5d5fb07
fix: remove unnecessary type hint in MessageService for improved clarity
laipz8200 Dec 24, 2024
9873f44
fix: remove unnecessary return type hints in ProviderConfiguration an…
laipz8200 Dec 24, 2024
41fac0b
feat: add explore API resources for audio, completion, conversation, …
laipz8200 Dec 24, 2024
b7c91c4
fix: enforce model_instance requirement in agent runners for improved…
laipz8200 Dec 24, 2024
5d3c3a4
fix: enhance content handling in CotCompletionAgentRunner for improve…
laipz8200 Dec 24, 2024
1aaccd7
fix: simplify return statement in get_model_credentials for improved …
laipz8200 Dec 24, 2024
690a8f0
fix: remove unnecessary type casting in FunctionCallAgentRunner for i…
laipz8200 Dec 24, 2024
88b32e9
fix: improve message handling in CotAgentRunner by enforcing type che…
laipz8200 Dec 24, 2024
fa0c4d8
fix: enforce content type validation in CotAgentRunner for improved e…
laipz8200 Dec 24, 2024
e20a2bd
fix: enhance message content handling in AppGeneratorTTSPublisher for…
laipz8200 Dec 24, 2024
0841ec1
fix: streamline conversation variable handling in AdvancedChatAppRunn…
laipz8200 Dec 24, 2024
363ac59
fix: remove unnecessary type ignores in generate_response_converter f…
laipz8200 Dec 24, 2024
edfca45
fix: update type hints to use Mapping and Sequence for improved type …
laipz8200 Dec 24, 2024
14b229d
fix: update inputs type hints to use Mapping for improved type safety
laipz8200 Dec 24, 2024
683118f
fix: update type hints in BaseAppGenerator and WorkflowAppGenerator f…
laipz8200 Dec 24, 2024
46f3a14
fix: improve invoke result handling in AppRunner for better type safe…
laipz8200 Dec 24, 2024
33759d5
fix: update type hints in WorkflowBasedAppRunner and QueueNodeSucceed…
laipz8200 Dec 24, 2024
675f8f6
fix: update type hints in AppGenerateEntity to use Sequence for files…
laipz8200 Dec 24, 2024
bafaa40
fix: update type hints in file_manager to use Mapping for improved ty…
laipz8200 Dec 24, 2024
e0c6963
fix: replace @computed_field with @property for improved clarity and …
laipz8200 Dec 24, 2024
44fe95e
fix: replace @computed_field with @property in MultiModalPromptMessag…
laipz8200 Dec 24, 2024
fix: wip 3372 left
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
yihong0618 committed Nov 23, 2024
commit 7f78f36d28c90b72591fdaf6f5dd398c7ce72aa9
3 changes: 2 additions & 1 deletion api/commands.py
@@ -555,7 +555,8 @@ def create_tenant(email: str, language: Optional[str] = None, name: Optional[str
if language not in languages:
language = "en-US"

name = name.strip()
if name is not None:
name = name.strip()

# generate random password
new_password = secrets.token_urlsafe(16)
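The `api/commands.py` hunk guards `name.strip()` behind an explicit None check. A minimal sketch of that narrowing pattern follows; `normalize_name` is a hypothetical helper for illustration, not code from the PR:

```python
from typing import Optional


def normalize_name(name: Optional[str]) -> str:
    """Strip surrounding whitespace, tolerating a missing name."""
    # Without the None check, mypy reports: Item "None" of "Optional[str]"
    # has no attribute "strip" [union-attr]. The check narrows the type
    # of `name` to plain str inside the branch.
    if name is not None:
        return name.strip()
    return ""
```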
2 changes: 1 addition & 1 deletion api/controllers/console/app/wraps.py
@@ -9,7 +9,7 @@
from models.model import AppMode


def get_app_model(view: Optional[Callable] = None, *, mode: Union[AppMode, list[AppMode]] = None):
def get_app_model(view: Optional[Callable] = None, *, mode: Union[AppMode, list[AppMode], None] = None):
def decorator(view_func):
@wraps(view_func)
def decorated_view(*args, **kwargs):
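The `wraps.py` change adds `None` to the `Union` because the parameter defaults to None, and mypy rejects a None default whose annotation does not include it. A standalone sketch, with a simplified `AppMode` standing in for the real enum:

```python
from enum import Enum
from typing import Union


class AppMode(Enum):
    CHAT = "chat"
    WORKFLOW = "workflow"


def get_modes(mode: Union[AppMode, list[AppMode], None] = None) -> list[AppMode]:
    # Spelling out None in the annotation satisfies mypy; annotating the
    # parameter as `Union[AppMode, list[AppMode]] = None` is an
    # incompatible-default [assignment] error.
    if mode is None:
        return []
    return [mode] if isinstance(mode, AppMode) else mode
```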
22 changes: 12 additions & 10 deletions api/core/agent/base_agent_runner.py
@@ -1,7 +1,6 @@
import json
import logging
import uuid
from collections.abc import Mapping, Sequence
from datetime import datetime, timezone
from typing import Optional, Union, cast

@@ -112,12 +111,14 @@ def __init__(
db.session.close()

# check if model supports stream tool call
# FIXME confirm here, model_instance is not None
assert model_instance is not None
llm_model = cast(LargeLanguageModel, model_instance.model_type_instance)
model_schema = llm_model.get_model_schema(model_instance.model, model_instance.credentials)
features = model_schema.features if model_schema and model_schema.features else []
self.stream_tool_call = ModelFeature.STREAM_TOOL_CALL in features
self.files = application_generate_entity.files if ModelFeature.VISION in features else []
self.query = None
self.query = Optional[str]
self._current_thoughts: list[PromptMessage] = []

def _repack_app_generate_entity(
@@ -145,7 +146,7 @@ def _convert_tool_to_prompt_message_tool(self, tool: AgentToolEntity) -> tuple[P

message_tool = PromptMessageTool(
name=tool.tool_name,
description=tool_entity.description.llm,
description=tool_entity.description.llm if tool_entity.description else "",
parameters={
"type": "object",
"properties": {},
@@ -167,7 +168,7 @@ def _convert_tool_to_prompt_message_tool(self, tool: AgentToolEntity) -> tuple[P
continue
enum = []
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
enum = [option.value for option in parameter.options] if parameter.options else []

message_tool.parameters["properties"][parameter.name] = {
"type": parameter_type,
@@ -187,8 +188,8 @@ def _convert_dataset_retriever_tool_to_prompt_message_tool(self, tool: DatasetRe
convert dataset retriever tool to prompt message tool
"""
prompt_tool = PromptMessageTool(
name=tool.identity.name,
description=tool.description.llm,
name=tool.identity.name if tool.identity else "unknown",
description=tool.description.llm if tool.description else "",
parameters={
"type": "object",
"properties": {},
@@ -210,14 +211,14 @@ def _convert_dataset_retriever_tool_to_prompt_message_tool(self, tool: DatasetRe

return prompt_tool

def _init_prompt_tools(self) -> tuple[Mapping[str, Tool], Sequence[PromptMessageTool]]:
def _init_prompt_tools(self) -> tuple[dict[str, Tool], list[PromptMessageTool]]:
"""
Init tools
"""
tool_instances = {}
prompt_messages_tools = []

for tool in self.app_config.agent.tools if self.app_config.agent else []:
for tool in self.app_config.agent.tools or [] if self.app_config.agent else []:
try:
prompt_tool, tool_entity = self._convert_tool_to_prompt_message_tool(tool)
except Exception:
@@ -234,7 +235,8 @@ def _init_prompt_tools(self) -> tuple[Mapping[str, Tool], Sequence[PromptMessage
# save prompt tool
prompt_messages_tools.append(prompt_tool)
# save tool entity
tool_instances[dataset_tool.identity.name] = dataset_tool
if dataset_tool.identity is not None:
tool_instances[dataset_tool.identity.name] = dataset_tool

return tool_instances, prompt_messages_tools

@@ -258,7 +260,7 @@ def update_prompt_message_tool(self, tool: Tool, prompt_tool: PromptMessageTool)
continue
enum = []
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options]
enum = [option.value for option in parameter.options] if parameter.options else []

prompt_tool.parameters["properties"][parameter.name] = {
"type": parameter_type,
53 changes: 30 additions & 23 deletions api/core/agent/cot_agent_runner.py
@@ -1,7 +1,7 @@
import json
from abc import ABC, abstractmethod
from collections.abc import Generator
from typing import Optional
from typing import Optional, cast

from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentScratchpadUnit
@@ -12,6 +12,7 @@
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
PromptMessageTool,
ToolPromptMessage,
UserPromptMessage,
)
@@ -28,9 +29,9 @@ class CotAgentRunner(BaseAgentRunner, ABC):
_ignore_observation_providers = ["wenxin"]
_historic_prompt_messages: list[PromptMessage] | None = None
_agent_scratchpad: list[AgentScratchpadUnit] | None = None
_instruction: str | None = None
_instruction: str = "" # FIXME this must be str for now
_query: str | None = None
_prompt_messages_tools: list[PromptMessage] | None = None
_prompt_messages_tools: list[PromptMessageTool] = []

def run(
self,
@@ -66,10 +67,10 @@ def run(
tool_instances, self._prompt_messages_tools = self._init_prompt_tools()

function_call_state = True
llm_usage = {"usage": None}
llm_usage: dict[str, Optional[LLMUsage]] = {"usage": None}
final_answer = ""

def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
def increase_usage(final_llm_usage_dict: dict[str, Optional[LLMUsage]], usage: LLMUsage):
if not final_llm_usage_dict["usage"]:
final_llm_usage_dict["usage"] = usage
else:
@@ -124,7 +125,7 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
if not chunks:
raise ValueError("failed to invoke llm")

usage_dict: dict = {}
usage_dict: dict[str, Optional[LLMUsage]] = {"usage": None}
react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks, usage_dict)
scratchpad = AgentScratchpadUnit(
agent_response="",
@@ -144,25 +145,30 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
if isinstance(chunk, AgentScratchpadUnit.Action):
action = chunk
# detect action
scratchpad.agent_response += json.dumps(chunk.model_dump())
if scratchpad.agent_response is not None:
scratchpad.agent_response += json.dumps(chunk.model_dump())
scratchpad.action_str = json.dumps(chunk.model_dump())
scratchpad.action = action
else:
scratchpad.agent_response += chunk
scratchpad.thought += chunk
if scratchpad.agent_response is not None:
scratchpad.agent_response += chunk
if scratchpad.thought is not None:
scratchpad.thought += chunk
yield LLMResultChunk(
model=self.model_config.model,
prompt_messages=prompt_messages,
system_fingerprint="",
delta=LLMResultChunkDelta(index=0, message=AssistantPromptMessage(content=chunk), usage=None),
)

scratchpad.thought = scratchpad.thought.strip() or "I am thinking about how to help you"
self._agent_scratchpad.append(scratchpad)
if scratchpad.thought is not None:
scratchpad.thought = scratchpad.thought.strip() or "I am thinking about how to help you"
if self._agent_scratchpad is not None:
self._agent_scratchpad.append(scratchpad)

# get llm usage
if "usage" in usage_dict:
increase_usage(llm_usage, usage_dict["usage"])
if usage_dict["usage"] is not None:
increase_usage(llm_usage, usage_dict["usage"])
else:
usage_dict["usage"] = LLMUsage.empty_usage()

@@ -171,9 +177,9 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
tool_name=scratchpad.action.action_name if scratchpad.action else "",
tool_input={scratchpad.action.action_name: scratchpad.action.action_input} if scratchpad.action else {},
tool_invoke_meta={},
thought=scratchpad.thought,
thought=scratchpad.thought or "",
observation="",
answer=scratchpad.agent_response,
answer=scratchpad.agent_response or "",
messages_ids=[],
llm_usage=usage_dict["usage"],
)
@@ -214,7 +220,7 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
agent_thought=agent_thought,
tool_name=scratchpad.action.action_name,
tool_input={scratchpad.action.action_name: scratchpad.action.action_input},
thought=scratchpad.thought,
thought=scratchpad.thought or "",
observation={scratchpad.action.action_name: tool_invoke_response},
tool_invoke_meta={scratchpad.action.action_name: tool_invoke_meta.to_dict()},
answer=scratchpad.agent_response,
@@ -252,8 +258,8 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
answer=final_answer,
messages_ids=[],
)

self.update_db_variables(self.variables_pool, self.db_variables_pool)
if self.variables_pool is not None and self.db_variables_pool is not None:
self.update_db_variables(self.variables_pool, self.db_variables_pool)
# publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
@@ -312,8 +318,9 @@ def _handle_invoke_action(

# publish files
for message_file_id, save_as in message_files:
if save_as:
self.variables_pool.set_file(tool_name=tool_call_name, value=message_file_id, name=save_as)
if save_as is not None and self.variables_pool:
# FIXME the save_as type is confusing, it should be a string or not
self.variables_pool.set_file(tool_name=tool_call_name, value=message_file_id, name=str(save_as))

# publish message file
self.queue_manager.publish(
@@ -387,8 +394,8 @@ def _organize_historic_prompt_messages(
if isinstance(message, AssistantPromptMessage):
if not current_scratchpad:
current_scratchpad = AgentScratchpadUnit(
agent_response=message.content,
thought=message.content or "I am thinking about how to help you",
agent_response=cast(str, message.content),
thought=cast(str, message.content) or "I am thinking about how to help you",
action_str="",
action=None,
observation=None,
Expand All @@ -405,7 +412,7 @@ def _organize_historic_prompt_messages(
pass
elif isinstance(message, ToolPromptMessage):
if current_scratchpad:
current_scratchpad.observation = message.content
current_scratchpad.observation = cast(str, message.content)
elif isinstance(message, UserPromptMessage):
if scratchpads:
result.append(AssistantPromptMessage(content=self._format_assistant_message(scratchpads)))
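Several hunks above use `typing.cast` to pin `message.content` to `str`. A short sketch of what `cast` does and does not do; `first_line` is a hypothetical helper, not code from the PR:

```python
from typing import Union, cast


def first_line(content: Union[str, list[str], None]) -> str:
    # cast() only informs the type checker; it performs no runtime
    # conversion or check, so the caller must already guarantee that
    # `content` is a str (as the runner does for assistant messages).
    text = cast(str, content)
    return text.split("\n", 1)[0]
```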
1 change: 1 addition & 0 deletions api/core/agent/cot_chat_agent_runner.py
@@ -80,6 +80,7 @@ def _organize_prompt_messages(self) -> list[PromptMessage]:
assistant_messages = []
else:
assistant_message = AssistantPromptMessage(content="")
assistant_message.content = "" # FIXME: type check tell mypy that assistant_message.content is str
for unit in agent_scratchpad:
if unit.is_final():
assistant_message.content += f"Final Answer: {unit.agent_response}"
11 changes: 8 additions & 3 deletions api/core/agent/cot_completion_agent_runner.py
@@ -1,5 +1,5 @@
import json
from typing import Optional
from typing import Optional, cast

from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage, UserPromptMessage
@@ -11,7 +11,11 @@ def _organize_instruction_prompt(self) -> str:
"""
Organize instruction prompt
"""
if self.app_config.agent is None:
raise ValueError("Agent configuration is not set")
prompt_entity = self.app_config.agent.prompt
if prompt_entity is None:
raise ValueError("prompt entity is not set")
first_prompt = prompt_entity.first_prompt

system_prompt = (
@@ -33,7 +37,8 @@ def _organize_historic_prompt(self, current_session_messages: Optional[list[Prom
if isinstance(message, UserPromptMessage):
historic_prompt += f"Question: {message.content}\n\n"
elif isinstance(message, AssistantPromptMessage):
historic_prompt += message.content + "\n\n"
if message.content is not None:
historic_prompt += cast(str, message.content) + "\n\n"

return historic_prompt

@@ -50,7 +55,7 @@ def _organize_prompt_messages(self) -> list[PromptMessage]:
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
assistant_prompt = ""
for unit in agent_scratchpad:
for unit in agent_scratchpad or []:
if unit.is_final():
assistant_prompt += f"Final Answer: {unit.agent_response}"
else:
5 changes: 3 additions & 2 deletions api/core/agent/fc_agent_runner.py
@@ -67,6 +67,7 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
llm_usage.total_price += usage.total_price

model_instance = self.model_instance
assert model_instance is not None

while function_call_state and iteration_step <= max_iteration_steps:
function_call_state = False
@@ -75,7 +76,7 @@ def increase_usage(final_llm_usage_dict: dict[str, LLMUsage], usage: LLMUsage):
# the last iteration, remove all tools
prompt_messages_tools = []

message_file_ids = []
message_file_ids: list[str] = []
agent_thought = self.create_agent_thought(
message_id=message.id, message="", tool_name="", tool_input="", messages_ids=message_file_ids
)
@@ -391,7 +392,7 @@ def _init_system_message(

return prompt_messages

def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
def _organize_user_query(self, query: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
10 changes: 5 additions & 5 deletions api/core/agent/output_parser/cot_output_parser.py
@@ -38,7 +38,7 @@ def parse_action(json_str):
except:
return json_str or ""

def extra_json_from_code_block(code_block) -> Generator[Union[dict, str], None, None]:
def extra_json_from_code_block(code_block) -> Generator[Union[str, AgentScratchpadUnit.Action], None, None]:
code_blocks = re.findall(r"```(.*?)```", code_block, re.DOTALL)
if not code_blocks:
return
@@ -67,15 +67,15 @@ def extra_json_from_code_block(code_block) -> Generator[Union[dict, str], None,
for response in llm_response:
if response.delta.usage:
usage_dict["usage"] = response.delta.usage
response = response.delta.message.content
if not isinstance(response, str):
response_content = response.delta.message.content
if not isinstance(response_content, str):
continue

# stream
index = 0
while index < len(response):
while index < len(response_content):
steps = 1
delta = response[index : index + steps]
delta = response_content[index : index + steps]
yield_delta = False

if delta == "`":
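The `cot_output_parser.py` hunk renames the loop's `response` to `response_content` rather than re-binding one name to a second type. A self-contained sketch of the same discipline, under hypothetical names:

```python
def decoded_lengths(chunks: list[bytes]) -> list[int]:
    lengths: list[int] = []
    for chunk in chunks:
        # A fresh name keeps each variable single-typed; re-assigning
        # `chunk = chunk.decode("utf-8")` would re-bind a bytes variable
        # to str, which mypy rejects with an [assignment] error.
        text = chunk.decode("utf-8")
        lengths.append(len(text))
    return lengths
```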
@@ -1,3 +1,4 @@
import uuid
from typing import Optional

from core.app.app_config.entities import DatasetEntity, DatasetRetrieveConfigEntity
@@ -1,4 +1,5 @@
from core.app.app_config.entities import (
AdvancedChatMessageEntity,
AdvancedChatPromptTemplateEntity,
AdvancedCompletionPromptTemplateEntity,
PromptTemplateEntity,
@@ -25,7 +26,9 @@ def convert(cls, config: dict) -> PromptTemplateEntity:
chat_prompt_messages = []
for message in chat_prompt_config.get("prompt", []):
chat_prompt_messages.append(
{"text": message["text"], "role": PromptMessageRole.value_of(message["role"])}
AdvancedChatMessageEntity(
**{"text": message["text"], "role": PromptMessageRole.value_of(message["role"])}
)
)

advanced_chat_prompt_template = AdvancedChatPromptTemplateEntity(messages=chat_prompt_messages)
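The last hunk replaces bare dicts with `AdvancedChatMessageEntity` instances so the checker can verify fields. A minimal stand-alone sketch of that idea, using a dataclass and a simplified role enum in place of the project's pydantic entity:

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    USER = "user"
    ASSISTANT = "assistant"


@dataclass
class ChatMessage:  # stand-in for AdvancedChatMessageEntity
    text: str
    role: Role


def convert(raw_messages: list[dict]) -> list[ChatMessage]:
    # Constructing typed objects lets mypy check field names and the
    # role's enum type at every call site, which a plain
    # {"text": ..., "role": ...} dict cannot.
    return [ChatMessage(text=m["text"], role=Role(m["role"])) for m in raw_messages]
```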