6 changes: 4 additions & 2 deletions README.md
@@ -14,6 +14,8 @@ Slash Your LLM API Costs by 10x 💰, Boost Speed by 100x ⚡

📔 This project is undergoing swift development, and as such, the API may be subject to change at any time. For the most up-to-date information, please refer to the latest [documentation]( https://gptcache.readthedocs.io/en/latest/) and [release note](https://github.com/zilliztech/GPTCache/blob/main/docs/release_note.md).

**NOTE:** As the number of large models is growing explosively and their APIs are constantly evolving, we no longer add support for new APIs or models. Instead, we encourage using the get and set API in GPTCache; here is the demo code: https://github.com/zilliztech/GPTCache/blob/main/examples/adapter/api.py
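
A minimal sketch of that get/set style, assuming the exact-match setup used in the linked demo (`get_prompt` pre-embedding and the default in-memory cache):

```python
from gptcache import cache
from gptcache.adapter.api import put, get
from gptcache.processor.pre import get_prompt

# Exact-match cache: the raw prompt string is used as the cache key.
cache.init(pre_embedding_func=get_prompt)

put("hello", "foo")   # store an answer for a prompt
print(get("hello"))   # -> "foo" on a hit; a miss returns None
```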

## Quick Install

`pip install gptcache`
@@ -279,7 +281,7 @@ GPTCache offers the following primary benefits:
- **Decreased expenses**: Most LLM services charge fees based on a combination of number of requests and [token count](https://openai.com/pricing). GPTCache effectively minimizes your expenses by caching query results, which in turn reduces the number of requests and tokens sent to the LLM service. As a result, you can enjoy a more cost-efficient experience when using the service.
- **Enhanced performance**: LLMs employ generative AI algorithms to generate responses in real-time, a process that can sometimes be time-consuming. However, when a similar query is cached, the response time significantly improves, as the result is fetched directly from the cache, eliminating the need to interact with the LLM service. In most situations, GPTCache can also provide superior query throughput compared to standard LLM services.
- **Adaptable development and testing environment**: As a developer working on LLM applications, you're aware that connecting to LLM APIs is generally necessary, and comprehensive testing of your application is crucial before moving it to a production environment. GPTCache provides an interface that mirrors LLM APIs and accommodates storage of both LLM-generated and mocked data. This feature enables you to effortlessly develop and test your application, eliminating the need to connect to the LLM service.
- **Improved scalability and availability**: LLM services frequently enforce [rate limits](https://platform.openai.com/docs/guides/rate-limits), which are constraints that APIs place on the number of times a user or client can access the server within a given timeframe. Hitting a rate limit means that additional requests will be blocked until a certain period has elapsed, leading to a service outage. With GPTCache, you can easily scale to accommodate an increasing volume of of queries, ensuring consistent performance as your application's user base expands.
- **Improved scalability and availability**: LLM services frequently enforce [rate limits](https://platform.openai.com/docs/guides/rate-limits), which are constraints that APIs place on the number of times a user or client can access the server within a given timeframe. Hitting a rate limit means that additional requests will be blocked until a certain period has elapsed, leading to a service outage. With GPTCache, you can easily scale to accommodate an increasing volume of queries, ensuring consistent performance as your application's user base expands.

## 🤔 How does it work?

@@ -348,7 +350,7 @@ This module is created to extract embeddings from requests for similarity search
- [ ] Support other storages.
- **Vector Store**:
The **Vector Store** module helps find the K most similar requests from the input request's extracted embedding. The results can help assess similarity. GPTCache provides a user-friendly interface that supports various vector stores, including Milvus, Zilliz Cloud, and FAISS. More options will be available in the future.
- [x] Support [Milvus](https://milvus.io/), an open-source vector database for production-ready AI/LLM applicaionts.
- [x] Support [Milvus](https://milvus.io/), an open-source vector database for production-ready AI/LLM applications.
- [x] Support [Zilliz Cloud](https://cloud.zilliz.com/), a fully-managed cloud vector database based on Milvus.
- [x] Support [Milvus Lite](https://github.com/milvus-io/milvus-lite), a lightweight version of Milvus that can be embedded into your Python application.
- [x] Support [FAISS](https://faiss.ai/), a library for efficient similarity search and clustering of dense vectors.
2 changes: 1 addition & 1 deletion docs/contributing.md
@@ -102,7 +102,7 @@ refer to the implementation of [milvus](https://github.com/zilliztech/GPTCache/b

## Add a new data manager

refer to the implementation of [MapDataManager, SSDataManager](https://github.com/zilliztech/GPTCache/blob/main/gptcache/cache/data_manager.py).
refer to the implementation of [MapDataManager, SSDataManager](https://github.com/zilliztech/GPTCache/blob/main/gptcache/manager/data_manager.py).

1. Implement the [DataManager](https://github.com/zilliztech/GPTCache/blob/main/gptcache/manager/data_manager.py) interface
2. Add the new store to the [get_data_manager](https://github.com/zilliztech/GPTCache/blob/main/gptcache/manager/data_manager.py) method
45 changes: 37 additions & 8 deletions examples/README.md
@@ -1,13 +1,21 @@
# Example

- [How to run Visual Question Answering with MiniGPT-4](#How-to-run-Visual-Question-Answering-with-MiniGPT-4)
- [How to set the **embedding** function](#How-to-set-the-embedding-function)
- [How to set the **data manager** class](#How-to-set-the-data-manager-class)
- [How to set the **similarity evaluation** interface](#How-to-set-the-similarity-evaluation-interface)
- [Other cache init params](#Other-cache-init-params)
- [How to run with session](#How-to-run-with-session)
- [How to use GPTCache server](#How-to-use-GPTCache-server)
- [Benchmark](#Benchmark)
- [Example](#example)
- [How to run Visual Question Answering with MiniGPT-4](#how-to-run-visual-question-answering-with-minigpt-4)
- [How to set the `embedding` function](#how-to-set-the-embedding-function)
- [Default embedding function](#default-embedding-function)
- [Suitable for embedding methods consisting of a cached storage and vector store](#suitable-for-embedding-methods-consisting-of-a-cached-storage-and-vector-store)
- [Custom embedding](#custom-embedding)
- [How to set the `data manager` class](#how-to-set-the-data-manager-class)
- [How to set the `similarity evaluation` interface](#how-to-set-the-similarity-evaluation-interface)
- [Request cache parameter customization](#request-cache-parameter-customization)
- [How to run with session](#how-to-run-with-session)
- [Run in `with` method](#run-in-with-method)
- [Custom Session](#custom-session)
- [How to use GPTCache server](#how-to-use-gptcache-server)
- [Start server](#start-server)
- [Benchmark](#benchmark)
- [How to use post-process function](#how-to-use-post-process-function)

## How to run Visual Question Answering with MiniGPT-4

@@ -686,3 +694,24 @@ similarity evaluation func: pair_evaluation (search distance)
| 0.95 | 0.12s | 425 | 25 | 549 |
| 0.9 | 0.23s | 804 | 77 | 118 |
| 0.8 | 0.26s | 904 | 92 | 3 |

## How to use post-process function

You can use `LlmVerifier` to post-process the cached answer list after recall. It is similar to `first` or `random_one`, but it calls an LLM to verify that the recalled question is truly similar to the user's question before the cached answer is returned. You can define your own system prompt to decide when the verifier should reject a match, and you can use a small model for the verification step so that it adds only a small extra cost.
Example usage:

```python
from gptcache import cache
from gptcache.processor.post import LlmVerifier
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

# ... (init cache, embedding, data_manager, etc.)

cache.init(
embedding_func=onnx.to_embeddings,
data_manager=data_manager,
similarity_evaluation=SearchDistanceEvaluation(),
post_process_messages_func=LlmVerifier(client=None,
system_prompt=custom_prompt,
model="gpt-3.5-turbo")
)
```

See [processor/llm_verifier_example.py](./processor/llm_verifier_example.py) for a runnable example.
47 changes: 47 additions & 0 deletions examples/processor/llm_verifier_example.py
@@ -0,0 +1,47 @@
import time

from gptcache import cache
from gptcache.adapter import openai
from gptcache.embedding import Onnx
from gptcache.manager import manager_factory
from gptcache.processor.post import LlmVerifier
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

print("This example demonstrates how to use LLM verification with OpenAI's GPT-3.5 Turbo model.")
cache.set_openai_key()

onnx = Onnx()
data_manager = manager_factory("sqlite,faiss", vector_params={"dimension": onnx.dimension})

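# System prompt for the LLM verifier: it must reply with only "yes" or "no".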
custom_prompt = """You are a helpful assistant. Your task is to verify whether the answer is semantically consistent with the question.
If the answer is consistent, respond with "yes". If it is not consistent, respond with "no".
You must respond with only "yes" or "no"."""

verifier = LlmVerifier(client=None,
system_prompt=custom_prompt,
model="gpt-3.5-turbo")

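# Register the verifier as the post-process step so recalled answers are LLM-checked before being returned.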
cache.init(
embedding_func=onnx.to_embeddings,
data_manager=data_manager,
similarity_evaluation=SearchDistanceEvaluation(),
post_process_messages_func=verifier
)

question = 'what is github'

for _ in range(3):
start = time.time()
response = openai.ChatCompletion.create(
model='gpt-3.5-turbo',
messages=[{
'role': 'user',
'content': question
}],
)
print(f"Response: {response['choices'][0]['message']['content']}")
print(f"Time: {round(time.time() - start, 2)}s\n")
2 changes: 1 addition & 1 deletion gptcache/__init__.py
@@ -1,5 +1,5 @@
"""gptcache version"""
__version__ = "0.1.43"
__version__ = "0.1.44"

from gptcache.config import Config
from gptcache.core import Cache
110 changes: 63 additions & 47 deletions gptcache/adapter/adapter.py
@@ -3,7 +3,7 @@
import numpy as np

from gptcache import cache
from gptcache.processor.post import temperature_softmax
from gptcache.processor.post import temperature_softmax, LlmVerifier
from gptcache.utils.error import NotInitError
from gptcache.utils.log import gptcache_log
from gptcache.utils.time import time_cal
@@ -189,6 +189,12 @@ def post_process():
scores=[t[0] for t in cache_answers],
temperature=temperature,
)
elif isinstance(chat_cache.post_process_messages_func, LlmVerifier):
return_message = chat_cache.post_process_messages_func(
messages=[t[1] for t in cache_answers],
scores=[t[0] for t in cache_answers],
original_question=pre_embedding_data
)
else:
return_message = chat_cache.post_process_messages_func(
[t[1] for t in cache_answers]
@@ -200,29 +206,30 @@
func_name="post_process",
report_func=chat_cache.report.post,
)()
chat_cache.report.hint_cache()
cache_whole_data = answers_dict.get(str(return_message))
if session and cache_whole_data:
chat_cache.data_manager.add_session(
cache_whole_data[2], session.name, pre_embedding_data
)
if cache_whole_data and not chat_cache.config.disable_report:
# user_question / cache_question / cache_question_id / cache_answer / similarity / consume time/ time
report_cache_data = cache_whole_data[3]
report_search_data = cache_whole_data[2]
chat_cache.data_manager.report_cache(
pre_store_data if isinstance(pre_store_data, str) else "",
report_cache_data.question
if isinstance(report_cache_data.question, str)
else "",
report_search_data[1],
report_cache_data.answers[0].answer
if isinstance(report_cache_data.answers[0].answer, str)
else "",
cache_whole_data[0],
round(time.time() - start_time, 6),
)
return cache_data_convert(return_message)
if return_message is not None:
chat_cache.report.hint_cache()
cache_whole_data = answers_dict.get(str(return_message))
if session and cache_whole_data:
chat_cache.data_manager.add_session(
cache_whole_data[2], session.name, pre_embedding_data
)
if cache_whole_data and not chat_cache.config.disable_report:
# user_question / cache_question / cache_question_id / cache_answer / similarity / consume time/ time
report_cache_data = cache_whole_data[3]
report_search_data = cache_whole_data[2]
chat_cache.data_manager.report_cache(
pre_store_data if isinstance(pre_store_data, str) else "",
report_cache_data.question
if isinstance(report_cache_data.question, str)
else "",
report_search_data[1],
report_cache_data.answers[0].answer
if isinstance(report_cache_data.answers[0].answer, str)
else "",
cache_whole_data[0],
round(time.time() - start_time, 6),
)
return cache_data_convert(return_message)

next_cache = chat_cache.next_cache
if next_cache:
@@ -444,6 +451,13 @@ def post_process():
scores=[t[0] for t in cache_answers],
temperature=temperature,
)
elif isinstance(chat_cache.post_process_messages_func, LlmVerifier):
return_message = chat_cache.post_process_messages_func(
messages=[t[1] for t in cache_answers],
scores=[t[0] for t in cache_answers],
original_question=pre_embedding_data,
temperature=temperature,
)
else:
return_message = chat_cache.post_process_messages_func(
[t[1] for t in cache_answers]
@@ -455,36 +469,38 @@
func_name="post_process",
report_func=chat_cache.report.post,
)()
chat_cache.report.hint_cache()
cache_whole_data = answers_dict.get(str(return_message))
if session and cache_whole_data:
chat_cache.data_manager.add_session(
cache_whole_data[2], session.name, pre_embedding_data
)
if cache_whole_data:
# user_question / cache_question / cache_question_id / cache_answer / similarity / consume time/ time
report_cache_data = cache_whole_data[3]
report_search_data = cache_whole_data[2]
chat_cache.data_manager.report_cache(
pre_store_data if isinstance(pre_store_data, str) else "",
report_cache_data.question
if isinstance(report_cache_data.question, str)
else "",
report_search_data[1],
report_cache_data.answers[0].answer
if isinstance(report_cache_data.answers[0].answer, str)
else "",
cache_whole_data[0],
round(time.time() - start_time, 6),
)
return cache_data_convert(return_message)
if return_message is not None:
chat_cache.report.hint_cache()
cache_whole_data = answers_dict.get(str(return_message))
if session and cache_whole_data:
chat_cache.data_manager.add_session(
cache_whole_data[2], session.name, pre_embedding_data
)
if cache_whole_data:
# user_question / cache_question / cache_question_id / cache_answer / similarity / consume time/ time
report_cache_data = cache_whole_data[3]
report_search_data = cache_whole_data[2]
chat_cache.data_manager.report_cache(
pre_store_data if isinstance(pre_store_data, str) else "",
report_cache_data.question
if isinstance(report_cache_data.question, str)
else "",
report_search_data[1],
report_cache_data.answers[0].answer
if isinstance(report_cache_data.answers[0].answer, str)
else "",
cache_whole_data[0],
round(time.time() - start_time, 6),
)
return cache_data_convert(return_message)

next_cache = chat_cache.next_cache
if next_cache:
kwargs["cache_obj"] = next_cache
kwargs["cache_context"] = context
kwargs["cache_skip"] = cache_skip
kwargs["cache_factor"] = cache_factor
kwargs["search_only"] = search_only_flag
llm_data = adapt(
llm_handler, cache_data_convert, update_cache_callback, *args, **kwargs
)
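
For context, the dispatch added above means `temperature_softmax` and `LlmVerifier` receive the similarity scores (and, for the verifier, the original question), while any other callable is invoked with just the list of recalled answers. A custom post-process hook can therefore be as small as the hypothetical sketch below:

```python
# Hypothetical custom post-process hook (not part of GPTCache): it only needs to
# accept the list of recalled answers, matching the plain else-branch call above.
def shortest_answer(messages):
    """Return the shortest cached answer among the recalled candidates."""
    return min(messages, key=len)

# cache.init(..., post_process_messages_func=shortest_answer)
```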
8 changes: 7 additions & 1 deletion gptcache/manager/factory.py
@@ -118,6 +118,12 @@ def manager_factory(manager="map",
maxmemory_samples=eviction_params.get("maxmemory_samples", scalar_params.get("maxmemory_samples")),
)

if eviction_manager == "memory":
return get_data_manager(s, v, o, None,
eviction_params.get("max_size", 1000),
eviction_params.get("clean_size", None),
eviction_params.get("eviction", "LRU"),)

e = EvictionBase(
name=eviction_manager,
**eviction_params
@@ -194,7 +200,7 @@ def get_data_manager(
vector_base = VectorBase(name=vector_base)
if isinstance(object_base, str):
object_base = ObjectBase(name=object_base)
if isinstance(eviction_base, str):
if isinstance(eviction_base, str) and eviction_base != "memory":
eviction_base = EvictionBase(name=eviction_base)
assert cache_base and vector_base
return SSDataManager(cache_base, vector_base, object_base, eviction_base, max_size, clean_size, eviction)
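
For reference, a sketch of a data manager that exercises the restored in-memory eviction path; the store choices and size limits below are illustrative, not part of this change:

```python
from gptcache.manager import CacheBase, VectorBase, get_data_manager

# Illustrative setup: sqlite scalar store + faiss vector store with the
# built-in in-memory LRU eviction.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=128),
    max_size=1000,
    clean_size=100,
    eviction="LRU",
)
```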