Our Mission: Democratize and raise awareness about Artificial Intelligence development through visual and interactive experimentation.
MLV-Lab is a pedagogical ecosystem designed to explore the fundamental concepts of AI without requiring advanced mathematical knowledge. Our philosophy is "Show, don't tell": we move from abstract theory to concrete, visual practice.
This project has two main audiences:
- AI Enthusiasts: A tool to play, train, and observe intelligent agents solving complex problems from the terminal.
- AI Developers: A sandbox with standard environments (compatible with Gymnasium) to design, train, and analyze agents from scratch.
| Name | Environment | Saga | Baseline | Details | Preview |
|---|---|---|---|---|---|
| AntLost-v1 (`mlv/AntLost-v1`) | Errant Drone | 🐜 Ants | Random | README.md | ![]() |
| AntScout-v1 (`mlv/AntScout-v1`) | Lookout Scout | 🐜 Ants | Q-Learning | README.md | ![]() |
| AntMaze-v1 (`mlv/AntMaze-v1`) | Dungeons & Pheromones | 🐜 Ants | Q-Learning | README.md | ![]() |
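Two of the baselines above are tabular Q-Learning agents. For readers new to the technique, here is a minimal, library-free sketch of the core update rule; the names `make_q_table` and `q_update` are illustrative and not part of the mlvlab API:

```python
from collections import defaultdict


def make_q_table(n_actions):
    # Unseen states start with a zero value estimate for every action.
    return defaultdict(lambda: [0.0] * n_actions)


def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    # Temporal-difference target: the immediate reward plus the discounted
    # value of the best action available from the next state.
    target = reward + gamma * max(q[next_state])
    # Move the current estimate a fraction (alpha) toward the target.
    q[state][action] += alpha * (target - q[state][action])
```

Repeating this update over many episodes is what the `train` command does under the hood for these environments; the greedy policy over the resulting table is what `eval` visualizes.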
MLV-Lab is controlled through an interactive shell called MLVisual. The workflow is designed to be intuitive and user-friendly.
Requirement: Python 3.10+
```bash
# Install the uv package manager
pip install uv

# Create a dedicated virtual environment
uv venv

# Install mlvlab in the virtual environment
uv pip install mlvlab

# For development (local installation)
uv pip install -e ".[dev]"

# Launch the interactive shell
uv run mlv shell
```

Once inside the `MLV-Lab>` shell, we recommend following this logical flow to get acquainted with an environment. The philosophy is to explore, play, train, and finally watch the artificial intelligence in action.
- 🗺️ Discover (`list`): Start by seeing which worlds you can explore. The `list` command shows the available environment sagas.
- 🕹️ Play (`play`): Once you choose an environment, play it in manual mode to understand its mechanics, controls, and objective.
- 🤖 Train (`train`): Now let the AI learn how to solve it. The `train` command starts the training process for the baseline agent.
- 🎬 Evaluate (`eval`): Watch the agent you just trained apply what it has learned. The `eval` command loads the training result and displays it visually.
- 📚 Learn (`docs`): To dive deeper into the technical details of the environment, the `docs` command opens the full documentation for you.
This cycle of play -> train -> evaluate is the heart of the MLV-Lab experience.
Here is a concrete example that follows the recommended flow, with comments explaining each step.
```bash
# Launch the interactive shell
uv run mlv shell

# 1. Discover what environments are in the "Ants" category
MLV-Lab> list ants

# 2. Play to understand the objective of AntScout-v1
MLV-Lab> play AntScout-v1

# 3. Train an agent with a specific seed (so the run can be reproduced)
MLV-Lab> train AntScout-v1 --seed 123

# 4. Evaluate the result of that specific run in a live simulation
MLV-Lab> eval AntScout-v1 --seed 123

# 5. Check the documentation to learn more
MLV-Lab> docs AntScout-v1

# Exit the session
MLV-Lab> exit
```

You can use MLV-Lab environments in your own Python projects, just like any other Gymnasium-compatible library.
This workflow assumes you want to write your own Python scripts that import the mlvlab package.
```bash
# Create a dedicated virtual environment for your project (if you don't already have one)
uv venv

# Install mlvlab inside that virtual environment
uv pip install mlvlab
```

First, create a file (for example, `my_agent.py`) with your code:
```python
import gymnasium as gym
import mlvlab  # Important! This "magic" line registers the "mlv/..." environments in Gymnasium

# Create the environment as you normally would
env = gym.make("mlv/AntScout-v1", render_mode="human")

obs, info = env.reset()
for _ in range(100):
    # Here is where your logic for selecting an action goes
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

Next, run the script using `uv run`, which will ensure it uses the Python from your virtual environment:

```bash
uv run python my_agent.py
```

Note: In editors like Visual Studio Code, you can automate this last step. Simply select the Python interpreter located inside your virtual environment (on Windows the path will be something like `.venv/Scripts/python.exe`) as the interpreter for your project. That way, when you press the "Run" button, the editor will automatically use the correct environment.
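In the script above, `env.action_space.sample()` picks actions at random. A common next step is to select actions from a learned Q-table with an epsilon-greedy rule. Here is a minimal sketch; the helper name is hypothetical and not part of mlvlab:

```python
import random


def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    # With probability epsilon, explore with a random action;
    # otherwise exploit the best-known action for this state.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Assuming discrete observations, you would replace the sampling line in the loop with something like `action = epsilon_greedy(q_table[obs])`.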
Returns a listing of the available environment categories, or the environments within a specific category.

- Basic usage: `list`
- Options: ID of a category to filter by (e.g., `list ants`).

Examples:

```bash
list
list ants
```

Runs the environment in interactive (human) mode to test manual control.
- Basic usage: `play <env-id>`
- Parameters:
  - `env_id`: Environment ID (e.g., `AntScout-v1`).
  - `--seed`, `-s`: Seed for map reproducibility. If not specified, uses the environment default.

Example:

```bash
play AntScout-v1 --seed 42
```

Trains the environment's baseline agent and saves weights/artifacts in `data/<env-id>/<seed-XYZ>/`.
- Basic usage: `train <env-id>`
- Parameters:
  - `env_id`: Environment ID.
  - `--seed`, `-s`: Training seed. If not given, a random one is generated and displayed.
  - `--eps`, `-e`: Number of episodes (overrides the environment's baseline configuration value).
  - `--render`, `-r`: Render training in real time. Note: this can significantly slow down training.

Example:

```bash
train AntScout-v1 --seed 123 --eps 500 --render
```

Evaluates an existing training run by loading the Q-Table/weights from the corresponding run directory. By default, it opens a window (human mode) and visualizes the agent using its weights.
- Basic usage: `eval <env-id> [options]`
- Parameters:
  - `env_id`: Environment ID.
  - `--seed`, `-s`: Seed of the run to evaluate. If not given, uses the latest run available for that environment.
  - `--eps`, `-e`: Number of episodes to run during evaluation. Default: 5.
  - `--speed`, `-sp`: Speed multiplier. Default is `1.0`; use `0.5` to watch at half speed.

Examples:

```bash
# Visualize the agent using weights from the latest training
eval AntScout-v1

# Visualize a specific training run
eval AntScout-v1 --seed 123

# Evaluate 10 episodes
eval AntScout-v1 --seed 123 --eps 10
```

Launches the interactive view (Analytics View) of the environment, with simulation controls, metrics, and model management.
- Basic usage: `view <env-id>`

Example:

```bash
view AntScout-v1
```

Opens a browser with the README.md file associated with the environment, providing full details. It also displays a summary in the terminal in the configured language.
- Basic usage: `docs <env-id>`

Example:

```bash
docs AntScout-v1
```

Manages the MLV-Lab configuration, including language settings (the package detects the system language automatically):
- Basic usage: `config <action> [key] [value]`
- Actions:
  - `get`: Show the current configuration or a specific key
  - `set`: Set a configuration value
  - `reset`: Reset the configuration to defaults
- Common keys:
  - `locale`: Language setting (`en` for English, `es` for Spanish)

Examples:

```bash
# Show current configuration
config get

# Show a specific setting
config get locale

# Set language to Spanish
config set locale es

# Reset to defaults
config reset
```

If you want to add new environments or functionality to the MLV-Lab core:
1. Clone the repository.

2. Create a virtual environment with uv:

   ```bash
   uv venv
   ```

3. Install the project in editable mode with the development dependencies:

   ```bash
   uv pip install -e ".[dev]"
   ```

4. Launch the development shell:

   ```bash
   uv run mlv shell
   ```

This installs mlvlab in editable mode along with the tools from the `[dev]` group.
MLV-Lab supports multiple languages. The default language is English (`en`), and Spanish (`es`) is fully supported as an alternative.

The language can be configured in two ways:

1. Automatic Detection: The system automatically detects your system language and uses Spanish if available; otherwise it defaults to English.

2. Manual Language Change: The desired language can be forced if the detected one does not match your preferences:

   ```bash
   # Launch the interactive shell
   uv run mlv shell

   # Set language to English
   config set locale en

   # Set language to Spanish
   config set locale es
   ```

- English (`en`): Default language.
- Spanish (`es`): Fully translated alternative.
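The automatic detection described above can be approximated with Python's standard `locale` module. This is an illustrative sketch under that assumption, not mlvlab's actual implementation:

```python
import locale


def detect_language(supported=("en", "es"), default="en"):
    # Map a system locale such as "es_ES" to a supported language code,
    # falling back to the default when it is unknown or unsupported.
    lang, _encoding = locale.getlocale()
    if lang:
        code = lang.split("_")[0].lower()
        if code in supported:
            return code
    return default
```

With this scheme, a Spanish system locale selects `es`, and anything else falls back to English, matching the behavior described above.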


