
TyphonLint-ThreadSafe


Thread Safety Analyzer for JavaScript Code

A package that analyzes JavaScript code snippets to detect potential race conditions and concurrency conflicts, helping developers write thread-safe code for concurrent environments such as Node.js worker threads or Web Workers.


🚀 Installation

Install via pip:

pip install typhonlint_threadsafe

📝 Usage

Basic Usage (Default LLM7)

from typhonlint_threadsafe import typhonlint_threadsafe

# Analyze JavaScript code for thread safety
response = typhonlint_threadsafe(
    user_input="""
    // Example JavaScript code snippet
    let sharedVar = 0;
    function increment() {
        sharedVar++;
    }
    setInterval(increment, 100);
    """
)
print(response)
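
The call returns a list of structured assessments describing any concurrency issues found in the snippet; see the Output section below for what each entry covers.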

Custom LLM Integration

You can replace the default LLM (ChatLLM7) with any LangChain-compatible model:

Using OpenAI

from langchain_openai import ChatOpenAI
from typhonlint_threadsafe import typhonlint_threadsafe

llm = ChatOpenAI()  # reads OPENAI_API_KEY from the environment
response = typhonlint_threadsafe(
    user_input="...",  # Your JS code here
    llm=llm
)

Using Anthropic

from langchain_anthropic import ChatAnthropic
from typhonlint_threadsafe import typhonlint_threadsafe

llm = ChatAnthropic(model="claude-3-5-sonnet-20241022")  # model is required; reads ANTHROPIC_API_KEY from the environment
response = typhonlint_threadsafe(
    user_input="...",  # Your JS code here
    llm=llm
)

Using Google Generative AI

from langchain_google_genai import ChatGoogleGenerativeAI
from typhonlint_threadsafe import typhonlint_threadsafe

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # model is required; reads GOOGLE_API_KEY from the environment
response = typhonlint_threadsafe(
    user_input="...",  # Your JS code here
    llm=llm
)

🔧 Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| user_input | str | The JavaScript code snippet to analyze. |
| api_key | Optional[str] | LLM7 API key (if not provided, falls back to the LLM7_API_KEY environment variable). |
| llm | Optional[BaseChatModel] | Custom LangChain LLM instance (defaults to ChatLLM7). |
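
When using the default LLM7 backend, the key can be supplied either directly or via the environment. A minimal sketch (the key value below is a placeholder):

import os
from typhonlint_threadsafe import typhonlint_threadsafe

# Option 1: pass the LLM7 key explicitly
response = typhonlint_threadsafe(
    user_input="let counter = 0; setInterval(() => counter++, 50);",
    api_key="YOUR_LLM7_API_KEY"
)

# Option 2: rely on the LLM7_API_KEY environment variable instead
os.environ["LLM7_API_KEY"] = "YOUR_LLM7_API_KEY"
response = typhonlint_threadsafe(
    user_input="let counter = 0; setInterval(() => counter++, 50);"
)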

🔑 API Key & Rate Limits

  • Default LLM: Uses ChatLLM7 from langchain_llm7.
  • Free Tier: Sufficient for most use cases.
  • Custom API Key: Pass via api_key parameter or LLM7_API_KEY env var.
  • Get API Key: Register at LLM7.

📌 Features

  • Detects race conditions in JavaScript code.
  • Validates concurrency patterns.
  • Structured output for easy integration.
  • Supports custom LLMs for flexibility.

📝 Output

Returns a list of structured assessments (see the sketch after this list) indicating:

  • Potential thread safety issues.
  • Safe concurrency patterns.
  • Confirmed correct implementations.
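
A minimal sketch of consuming the result, assuming only that the return value is an iterable of assessments (the exact fields of each entry depend on the underlying model's structured output):

from typhonlint_threadsafe import typhonlint_threadsafe

assessments = typhonlint_threadsafe(
    user_input="let hits = 0; setInterval(() => hits++, 10);"
)

# Print each assessment in turn; inspect the entries to see the fields returned
for assessment in assessments:
    print(assessment)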

📂 License

MIT


📧 Support & Issues

For bugs or feature requests, open an issue on GitHub.


👤 Author

Eugene Evstafev (@chigwell) 📧 hi@euegne.plus
