Tutorial

Overview

Welcome to the package tutorial! VibeTools is extremely lightweight and its usage is very straightforward, so this tutorial will be short and sweet!

Installation

VibeTools is published on PyPI and can be installed with:

python3 -m pip install vibetools

Import

The vibetools library provides several exports as follows:

# available external exports (for all users)
from vibetools.exceptions import VibeInputTypeException
from vibetools.exceptions import VibeLlmClientException
from vibetools.exceptions import VibeLlmApiException
from vibetools.exceptions import VibeResponseParseException
from vibetools.exceptions import VibeTimeoutException

# available internal exports (for sub-libraries)
from vibetools._internal import ConsoleLogger
from vibetools._internal import VibeConfig
from vibetools._internal import VibeLlmClient

External Exports

These are the core components you'll use when interacting with the vibetools library and with sub-libraries built on top of it.

Exceptions

vibetools provides a set of custom exceptions to handle various error scenarios gracefully; a short example of catching them follows the list below.

  • VibeInputTypeException: Raised when the input type provided is invalid.
  • VibeLlmClientException: Raised when the client is not a valid OpenAI or Gemini client.
  • VibeLlmApiException: Raised when the LLM API returns an error.
  • VibeResponseParseException: Raised when the library is unable to parse the response from the LLM API to the expected type.
  • VibeTimeoutException: Raised when a vibe execution times out.
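
All of these exceptions can be caught like any other Python exception. The sketch below assumes a vibe_client built as shown in the VibeLlmClient section later in this tutorial; only the exception-handling pattern is the point here.

from vibetools.exceptions import (
    VibeLlmApiException,
    VibeResponseParseException,
    VibeTimeoutException,
)

# vibe_client is a VibeLlmClient instance (see the VibeLlmClient section below).
try:
    result = vibe_client.vibe_eval("Is the sky blue?", return_type=bool)
except VibeTimeoutException:
    print("The vibe execution timed out.")
except VibeResponseParseException:
    print("The response could not be parsed into a bool.")
except VibeLlmApiException as exc:
    print(f"The LLM API returned an error: {exc}")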

Internal Exports

These components are designed for internal use, primarily for building libraries on top of vibetools.

VibeConfig

The VibeConfig class allows you to customize the behavior of the Vibe LLM client. Note that system_instruction is a required field and must be specified when creating a VibeConfig.

Parameter | Required | Type | Default | Description
system_instruction | true | str | None | Default LLM instruction
timeout | false | int | 10000 | Timeout for the LLM client
vibe_mode | false | str | CHILL | One of CHILL, EAGER or AGGRESSIVE

Note: Toggling vibe_mode lets you trade off cost against reliability:

  • CHILL - minimizes cost with a single query.
  • EAGER - balances cost and resilience by retrying once on failure.
  • AGGRESSIVE - maximizes reliability with up to 3 attempts, at higher cost.

Example:

from vibetools._internal import VibeConfig

# Create a custom configuration with instruction
custom_config = VibeConfig(system_instruction="Always say false")
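
The optional fields can be set the same way. This is a sketch based on the table above: it assumes timeout takes the same units as its default of 10000, and that vibe_mode accepts one of the listed mode names as a string.

from vibetools._internal import VibeConfig

# Sketch: all three fields set explicitly
tuned_config = VibeConfig(
    system_instruction="Always say false",
    timeout=10000,      # same value as the documented default
    vibe_mode="EAGER",  # assumed string form; one of CHILL, EAGER, AGGRESSIVE
)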

ConsoleLogger

A pre-configured logger that adds color to console output for better readability.

Parameter | Type | Default | Description
name | str | "VibeTools" | The name of the logger

Example:

from vibetools._internal import ConsoleLogger

# Create a logger instance
logger = ConsoleLogger(name="VibeChecks")
logger.info("This is an informational message.")
logger.error("This is an error message.")

VibeLlmClient

This is the main entry point for interacting with the LLM. It acts as a wrapper around different LLM clients (like OpenAI or Gemini) to provide a unified interface.

Initialization

To create a VibeLlmClient, you need to provide a client instance from a supported library (e.g., openai.OpenAI), a model name, a VibeConfig object, and a ConsoleLogger instance.

Parameter | Type | Description
client | openai.OpenAI or google.genai.Client | An instance of the backend LLM client.
model | str | The name of the LLM model to use (e.g., "gpt-4").
config | VibeConfig | Configuration options for the client.
logger | ConsoleLogger | A logger instance for logging events.
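
Because client also accepts a google.genai.Client, initialization with a Gemini backend looks much the same. The following is a sketch: the model name "gemini-2.0-flash" is only an example, and the google-genai package must be installed separately.

from google import genai

from vibetools._internal import ConsoleLogger, VibeConfig, VibeLlmClient

# Sketch: same wrapper, Gemini backend instead of OpenAI
gemini_client = genai.Client(api_key="YOUR_API_KEY")
config = VibeConfig(system_instruction="Always say false")
logger = ConsoleLogger()

vibe_client = VibeLlmClient(
    client=gemini_client,
    model="gemini-2.0-flash",  # example model name
    config=config,
    logger=logger,
)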

Methods

vibe_eval

Evaluates a prompt using the configured LLM. You can optionally specify a return type to automatically parse the LLM's response.

Parameter | Type | Default | Description
prompt | str | (required) | The prompt to send to the model.
return_type | Optional[Type] | None | The expected Python type for coercion. If None, raw text is returned.

Example:

from openai import OpenAI
from vibetools._internal import ConsoleLogger, VibeConfig, VibeLlmClient

# 1. Initialize the components
client = OpenAI(api_key="YOUR_API_KEY")
config = VibeConfig(system_instruction="Always say false")
logger = ConsoleLogger()

# 2. Create the VibeLlmClient
vibe_client = VibeLlmClient(
    client=client,
    model="gpt-4",
    config=config,
    logger=logger
)

# 3. Evaluate a prompt
prompt = "Is 'vibetools' a cool name for a library? Respond with 'True' or 'False'."
response = vibe_client.vibe_eval(prompt, return_type=bool)

print(f"The response is: {response}")
# Expected output (example): The response is: False
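
If return_type is omitted (it defaults to None), the raw text of the response is returned, as noted in the table above. A small sketch continuing from the vibe_client created above:

# 4. Evaluate without a return_type: the raw text response is returned
raw_response = vibe_client.vibe_eval("Describe the vibe of this library in one sentence.")
print(raw_response)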
