
chigwell/feedback-analyzer-mod


Feedback Analyzer Mod

License: MIT

A Python package designed to analyze and structure user-submitted text, specifically focusing on community feedback and moderation. This tool leverages the capabilities of llmatch-messages to process and extract meaningful insights from user inputs, such as forum posts, comments, or feedback forms. By using pattern matching and retry logic, the package ensures that the extracted data is consistent and formatted correctly, making it easier for moderators to review and respond to user feedback.

Features

  • Pattern Matching: Extracts structured data from unstructured user inputs.
  • Retry Logic: Ensures consistent and reliable data extraction.
  • Flexible LLM Integration: Supports various LLM providers, including LLM7, OpenAI, Anthropic, and Google.
  • Easy Integration: Simple API for seamless integration into existing workflows.
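The pattern-matching-with-retry idea above can be sketched as follows. This is a minimal illustration of the general technique, not the package's actual internals; `extract_with_retry` and the `fake_llm` callable are hypothetical names for this example.

```python
# Sketch of pattern matching with retry logic: call a generator
# (e.g. an LLM) until its output matches the expected pattern.
import re

def extract_with_retry(generate, pattern, max_retries=3):
    """Call `generate()` until its output matches `pattern`; return group 1."""
    for _ in range(max_retries):
        text = generate()
        match = re.search(pattern, text, re.DOTALL)
        if match:
            return match.group(1).strip()
    raise ValueError("no match after retries")

# Simulated model output wrapped in tags, as llmatch-style tools expect.
fake_llm = lambda: "<sentiment>positive</sentiment>"
print(extract_with_retry(fake_llm, r"<sentiment>(.*?)</sentiment>"))
# prints: positive
```

Retrying until the output matches a known pattern is what makes the extracted data consistent enough for downstream moderation tooling.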

Installation

pip install feedback_analyzer_mod

Usage

Basic Usage

from feedback_analyzer_mod import feedback_analyzer_mod

user_input = "Your user input text here"
response = feedback_analyzer_mod(user_input)
print(response)

Using a Custom LLM

OpenAI

from langchain_openai import ChatOpenAI
from feedback_analyzer_mod import feedback_analyzer_mod

user_input = "Your user input text here"
llm = ChatOpenAI()
response = feedback_analyzer_mod(user_input, llm=llm)
print(response)

Anthropic

from langchain_anthropic import ChatAnthropic
from feedback_analyzer_mod import feedback_analyzer_mod

user_input = "Your user input text here"
llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # a model name is required
response = feedback_analyzer_mod(user_input, llm=llm)
print(response)

Google

from langchain_google_genai import ChatGoogleGenerativeAI
from feedback_analyzer_mod import feedback_analyzer_mod

user_input = "Your user input text here"
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # a model name is required
response = feedback_analyzer_mod(user_input, llm=llm)
print(response)

Parameters

  • user_input (str): The user input text to process.
  • llm (Optional[BaseChatModel]): The LangChain LLM instance to use. If not provided, the default ChatLLM7 will be used.
  • api_key (Optional[str]): The API key for LLM7. If not provided, the environment variable LLM7_API_KEY will be used.
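The parameter contract above can be sketched like this. Note this is an illustration of the documented signature, not the package's real implementation; `analyze` is a hypothetical stand-in and the `ChatLLM7(...)` string is a placeholder for the real client.

```python
# Sketch of the documented call contract: use `llm` if given,
# otherwise build a default client from `api_key` or the environment.
import os

def analyze(user_input, llm=None, api_key=None):
    if llm is None:
        key = api_key or os.environ.get("LLM7_API_KEY")
        llm = f"ChatLLM7(api_key={key!r})"  # placeholder, not the real client
    return {"input": user_input, "llm": llm}

print(analyze("Great forum update!", api_key="demo-key")["llm"])
# prints: ChatLLM7(api_key='demo-key')
```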

Default LLM

The package uses ChatLLM7 from langchain_llm7 by default. You can get a free API key by registering at LLM7.

Rate Limits

The default rate limits for LLM7 free tier are sufficient for most use cases of this package. If you want higher rate limits, you can pass your own API key via the environment variable LLM7_API_KEY or directly via the api_key parameter.
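The precedence described above (explicit `api_key` argument over the `LLM7_API_KEY` environment variable) can be shown in a small sketch; `resolve_api_key` is a hypothetical helper for illustration, not part of the package's public API.

```python
# Sketch of the documented key-resolution order: an explicit api_key
# argument wins over the LLM7_API_KEY environment variable.
import os

def resolve_api_key(api_key=None):
    return api_key or os.environ.get("LLM7_API_KEY")

os.environ["LLM7_API_KEY"] = "key-from-env"
print(resolve_api_key())            # prints: key-from-env
print(resolve_api_key("explicit"))  # prints: explicit
```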

Author

chigwell

Issues

For any issues or suggestions, please open an issue on GitHub.
