
Commit 06467d1

Added Gemini integration

1 parent 28b890d commit 06467d1

8 files changed: +116 −26 lines

README.md

Lines changed: 9 additions & 3 deletions

@@ -41,15 +41,16 @@ cloud-based.
 
 | Log Source | Availability |
 |------------|--------------|
-| Log files | |
+| Log files | |
 | ELK Stack | |
 | Graylog | |
 
 #### LLM Integrations
 
 | LLM Integration | Availability |
 |-----------------|--------------|
-| Ollama | ✔️ |
+| Ollama | ✅️ |
+| Gemini | ✅️ |
 | OpenAI | |
 | Amazon Bedrock | |
 
@@ -94,7 +95,11 @@ loguru run
 
 ```json
 {
-  "num_chunks_to_return": 100,
+  "service": "gemini",
+  "gemini": {
+    "api_key": "your-api-key",
+    "llm_name": "gemini-1.5-flash"
+  },
   "ollama": {
     "hosts": [
       "http://localhost:11434/"
@@ -105,6 +110,7 @@ loguru run
     "temperature": 0.1
   }
 },
+  "num_chunks_to_return": 100,
 "data_sources": [
   {
     "type": "filesystem",
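
Assembled from the two hunks above, the reordered config would read roughly as follows. Values are illustrative, and the `ollama` and `data_sources` blocks are abbreviated to only the keys visible in this diff:

```json
{
  "service": "gemini",
  "gemini": {
    "api_key": "your-api-key",
    "llm_name": "gemini-1.5-flash"
  },
  "ollama": {
    "hosts": ["http://localhost:11434/"],
    "temperature": 0.1
  },
  "num_chunks_to_return": 100,
  "data_sources": [
    { "type": "filesystem" }
  ]
}
```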

README.rst

Lines changed: 58 additions & 1 deletion
@@ -1,4 +1,61 @@
 Loguru CLI
 ==========
 
-An interactive commandline interface that brings intelligence to your logs.
+.. image:: https://raw.githubusercontent.com/Loguru-AI/Loguru-CLI/main/images/loguru-small.png
+   :align: center
+
+.. epigraph:: An interactive commandline interface that brings intelligence to your logs.
+
+
+
+*********************
+What is it?
+*********************
+
+**Loguru-CLI** (read as "Log Guru" 📋🧘) is a Python package that brings intelligence to your logs. It is designed to be a universal tool for log aggregation and analysis, with seamless integration with any LLM (Large Language Model), whether self-hosted or cloud-based.
+
+For more details, check out our GitHub repository: https://github.com/Loguru-AI/Loguru-CLI
+
+*********************
+Features
+*********************
+
+* Leverage LLMs to gain insights from your logs.
+* Easily integrate with any LLM (self-hosted or cloud-service offerings).
+* Easily hook up any log sources to gain insights on your logs. Perform refined/advanced queries supported by the
+  logging platform/tool (by applying capabilities such as function-calling (tooling) of LLM) and gain insights on the
+  results.
+* Save and replay history.
+* Scan and rebuild index from your logs.
+
+.. tip:: Currently supports filesystem-based logs only, with plans to extend support to more log sources soon.
+
+*********************
+Getting Started
+*********************
+
+Install Loguru::
+
+    pip install loguru-cli
+
+Show config::
+
+    loguru show-config
+
+Scan and rebuild index from log files::
+
+    loguru scan
+
+Run app::
+
+    loguru run
+
+Example Interaction:
+
+.. code-block:: javascript
+
+    >>> List all the errors
+    1. The error message indicates that there is a problem connecting to the PostgreSQL database at localhost on port 5432. Specifically, it says "Connection refused". This means that either the hostname or port number is incorrect, or the postmaster (the process that manages the PostgreSQL server) is not accepting TCP/IP connections.
+    2. The stack trace shows that the problem is occurring in the HikariCP connection pool, which is being used to manage connections to the database. Specifically, it says "Exception during pool initialization". This suggests that there may be a problem with the configuration of the connection pool or the database connection settings.
+    3. It is also possible that there is a firewall or network issue preventing the connection from being established. For example, if there is a firewall on the server running PostgreSQL, it may be blocking incoming connections on port 5432.
+
images/loguru-small.png (9.96 KB)

images/loguru.png (75.6 KB)

loguru/core/fs_log_rag.py

Lines changed: 34 additions & 15 deletions

@@ -10,6 +10,7 @@
 from langchain_community.vectorstores import FAISS
 from langchain_core.documents import Document
 from langchain_core.prompts import PromptTemplate
+from langchain_google_genai import ChatGoogleGenerativeAI
 from langchain_huggingface import HuggingFaceEmbeddings
 
 from loguru import LOGURU_DATA_DIR, HUGGING_FACE_EMBEDDINGS_DEVICE_TYPE

@@ -164,7 +165,8 @@ def ask(self, question: str, stream: bool = False) -> tuple[str, list[Document]]
         You are an honest assistant.
         You will accept contents of a log file and you will answer the question asked by the user appropriately.
         If you don't know the answer, just say you don't know. Don't try to make up an answer.
-        If you find time, date or timestamps in the logs, make sure to convert the timestamp to more human-readable format in your response as DD/MM/YYYY HH:SS
+        If you find time, date or timestamps in the logs,
+        make sure to convert the timestamp to more human-readable format in your response as DD/MM/YYYY HH:SS
 
         ### Context:
         {context}

@@ -176,21 +178,38 @@ def ask(self, question: str, stream: bool = False) -> tuple[str, list[Document]]
         """
 
         prompt = PromptTemplate.from_template(template)
-        llm = ChatOllama(
-            temperature=0,
-            base_url=self._ollama_api_base_url,
-            model=self._model_name,
-            streaming=True,
-            # seed=2,
-            top_k=10,
-            # A higher value (100) will give more diverse answers, while a lower value (10) will be more conservative.
-            top_p=0.3,
-            # Higher value (0.95) will lead to more diverse text, while a lower value (0.5) will generate more
-            # focused text.
-            num_ctx=3072,  # Sets the size of the context window used to generate the next token.
-            verbose=False
-        )
 
+        service = self._config.service
+
+        llm = None
+        if service == 'ollama':
+            llm = ChatOllama(
+                temperature=0,
+                base_url=self._ollama_api_base_url,
+                model=self._model_name,
+                streaming=True,
+                # seed=2,
+                top_k=10,
+                # A higher value (100) will give more diverse answers, while a lower value (10) will be more conservative.
+                top_p=0.3,
+                # Higher value (0.95) will lead to more diverse text, while a lower value (0.5) will generate more
+                # focused text.
+                num_ctx=3072,  # Sets the size of the context window used to generate the next token.
+                verbose=False
+            )
+        elif service == 'gemini':
+            # https://python.langchain.com/v0.2/docs/integrations/chat/google_generative_ai/
+            llm = ChatGoogleGenerativeAI(
+                model=self._config.gemini.llm_name,
+                temperature=0,
+                max_tokens=None,
+                timeout=None,
+                max_retries=2,
+                google_api_key=self._config.gemini.api_key
+            )
+        else:
+            services = ['ollama', 'gemini']
+            print(f"Invalid service: {service}. Available services are {','.join(services)}")
         if stream:
             llm.callbacks = [StreamingStdOutCallbackHandler()]
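
The branching above can be sketched in isolation as a small factory. `OllamaStub`, `GeminiStub`, and the plain-dict config keys are hypothetical stand-ins for `ChatOllama`, `ChatGoogleGenerativeAI`, and the real `Config` object; unlike the committed code, which only prints on an unknown service and leaves `llm` as `None`, this sketch raises so the failure surfaces immediately:

```python
class OllamaStub:
    """Hypothetical stand-in for ChatOllama."""
    def __init__(self, base_url: str, model: str):
        self.base_url = base_url
        self.model = model


class GeminiStub:
    """Hypothetical stand-in for ChatGoogleGenerativeAI."""
    def __init__(self, api_key: str, llm_name: str):
        self.api_key = api_key
        self.llm_name = llm_name


SERVICES = ('ollama', 'gemini')


def build_llm(config: dict):
    """Return an LLM client for config['service'], raising on an unknown one."""
    service = config.get('service')
    if service == 'ollama':
        ollama = config['ollama']
        return OllamaStub(base_url=ollama['hosts'][0], model=ollama['llm_name'])
    if service == 'gemini':
        gemini = config['gemini']
        return GeminiStub(api_key=gemini['api_key'], llm_name=gemini['llm_name'])
    raise ValueError(
        f"Invalid service: {service}. Available services are {', '.join(SERVICES)}"
    )
```

Raising here (rather than printing) is a deliberate deviation: with the committed behaviour, a typo in `service` would surface later as an `AttributeError` on `llm.callbacks`.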

loguru/core/models/config.py

Lines changed: 7 additions & 0 deletions

@@ -45,7 +45,14 @@ class DataSource(BaseModel):
     ds_params: Params = Field(..., description="Parameters for the data source")
 
 
+class Gemini(BaseModel):
+    api_key: str = Field(..., description="Gemini API Key")
+    llm_name: str = Field(..., description="Gemini Model Name. Ex: gemini-1.5-flash, gemini-1.5-pro")
+
+
 class Config(BaseModel):
     num_chunks_to_return: int = Field(..., description="Number of chunks to return")
+    service: str = Field(..., description="LLM service type. Ex: ollama, gemini")
     ollama: Ollama = Field(..., description="Ollama configuration")
+    gemini: Optional[Gemini] = Field(..., description="Gemini configuration")
     data_sources: List[DataSource] = Field(..., description="List of data sources")
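
A pydantic-free sketch of what the new fields imply for validation. The rule itself is an assumption inferred from the `Field` declarations above: `service` is always required, a `gemini` block may be absent, but when present it must carry both `api_key` and `llm_name`:

```python
def validate_config(raw: dict) -> dict:
    """Minimal stand-in for the pydantic validation in loguru/core/models/config.py."""
    # 'service' is a new required top-level field.
    if 'service' not in raw:
        raise ValueError("field required: service")
    # The 'gemini' block is optional, but a present block must be complete.
    gemini = raw.get('gemini')
    if gemini is not None:
        for field in ('api_key', 'llm_name'):
            if field not in gemini:
                raise ValueError(f"field required: gemini.{field}")
    return raw
```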

requirements.txt

Lines changed: 2 additions & 1 deletion

@@ -4,4 +4,5 @@ langchain-experimental==0.0.62
 langchain-huggingface==0.0.3
 sentence-transformers==3.0.1
 #faiss-gpu==1.7.2
-faiss-cpu==1.8.0.post1
+faiss-cpu==1.8.0.post1
+langchain-google-genai==1.0.8

setup.py

Lines changed: 6 additions & 6 deletions

@@ -1,25 +1,25 @@
 import os
 import re
 import sys
-from m2r import parse_from_file
+
 from setuptools import setup, find_packages
 
 
 def get_requirements_to_install():
     __curr_location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
     requirements_txt_file_as_str = f"{__curr_location__}/requirements.txt"
-    with open(requirements_txt_file_as_str, 'r') as reqfile:
-        libs = reqfile.readlines()
+    with open(requirements_txt_file_as_str, 'r') as req_file:
+        libs = req_file.readlines()
     for i in range(len(libs)):
         libs[i] = libs[i].replace('\n', '')
     return libs
 
 
 def get_description() -> str:
     __curr_location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
-    requirements_txt_file_as_str = f'{__curr_location__}/README.rst'
-    with open(requirements_txt_file_as_str, 'r') as reqfile:
-        desc = reqfile.read()
+    rst_txt_file_as_str = f'{__curr_location__}/README.rst'
+    with open(rst_txt_file_as_str, 'r') as rst_file:
+        desc = rst_file.read()
     return desc
