Zero-Shot Reasoning: Structured Output for Visual Reasoning

Table of Contents

  • Overview
  • Features
  • Installation
  • Usage
  • Directory Structure
  • Contributing
  • License

Overview

Zero-shot reasoning allows a model to draw inferences without task-specific examples. This repository provides structured output for visual zero-shot reasoning using the Ollama framework. Leveraging large language models (LLMs) served through Ollama, it combines a FastAPI backend with a browser frontend that supports chain-of-thought, tree-of-thought, and graph-of-thought reasoning. The project aims to make complex reasoning tasks easier to follow and to strengthen the model's ability to interpret visual inputs.
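
At its core, zero-shot chain-of-thought prompting simply asks the model to reason step by step without providing worked examples. Below is a minimal sketch of that idea against a local Ollama instance, assuming the ollama Python client and a locally pulled multimodal model; the model name, image path, and prompt are placeholders, not code from this repository:

    import ollama  # pip install ollama

    response = ollama.chat(
        model="llava",  # any locally pulled model; a multimodal model is needed for image input
        messages=[
            {
                "role": "user",
                "content": "Describe what is happening in this image step by step, "
                           "then give a one-sentence conclusion. Let's think step by step.",
                "images": ["example.png"],  # hypothetical local file
            }
        ],
    )
    print(response["message"]["content"])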

Features

  • Backend Support: Built with FastAPI and Uvicorn for efficient processing.
  • Frontend Visualization: Utilizes Vis.js for interactive data visualization.
  • Chain-of-Thought Reasoning: Supports multi-step reasoning processes.
  • Graph-of-Thought Reasoning: Represents reasoning paths as a graph of connected thoughts (see the serialization sketch after this list).
  • Large Language Models: Integrates state-of-the-art LLMs for enhanced reasoning capabilities.
  • Zero-Shot Learning: Allows models to perform tasks without specific training examples.
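
Because the frontend renders reasoning paths with Vis.js, a graph-of-thought result ultimately has to be expressed as nodes and edges. The sketch below shows one plausible serialization using Vis.js's standard node and edge fields (id, label, from, to); the exact schema this project uses may differ:

    import json

    # Hypothetical graph-of-thought output, shaped for a Vis.js network:
    # "nodes" carry the individual thoughts, "edges" the dependencies between them.
    thought_graph = {
        "nodes": [
            {"id": 1, "label": "Observe: two objects in the image"},
            {"id": 2, "label": "Hypothesis A: they are identical"},
            {"id": 3, "label": "Hypothesis B: they differ in size"},
            {"id": 4, "label": "Conclusion: B is better supported"},
        ],
        "edges": [
            {"from": 1, "to": 2},
            {"from": 1, "to": 3},
            {"from": 3, "to": 4},
        ],
    }

    print(json.dumps(thought_graph, indent=2))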

Installation

To set up the project, follow these steps:

  1. Clone the repository:

    git clone https://github.com/Ahta14/zeroshot-reasoning.git
  2. Navigate to the project directory:

    cd zeroshot-reasoning
  3. Install the required dependencies:

    pip install -r requirements.txt
  4. Start the backend server (make sure a local Ollama server is running; see the check after these steps):

    uvicorn app.main:app --reload
  5. Open your browser and navigate to http://localhost:8000 to access the frontend.
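
Note that the backend relies on a locally running Ollama server. Below is a quick sanity check, assuming Ollama's default address http://localhost:11434, that the server is reachable and which models are available:

    import requests

    # Ollama's /api/tags endpoint lists the models pulled on this machine.
    tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
    print("Available models:", [m["name"] for m in tags.get("models", [])])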

Usage

After installation, you can start using the zero-shot reasoning capabilities. Here’s how:

  1. Access the API: Use the provided endpoints to submit visual inputs for reasoning.
  2. Visualize Results: The frontend will display reasoning paths and outputs using Vis.js.
  3. Experiment: Try different visual inputs to see how the model responds without prior training.

For detailed usage instructions, check the Releases section for example configurations and data formats.
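
As a starting point, an input can be submitted to the backend from Python with the requests library. The route, payload fields, and response shape below are illustrative assumptions; see app/routes.py for the actual endpoints:

    import requests

    payload = {
        "prompt": "How many distinct objects appear in this scene?",  # hypothetical field
        "strategy": "graph-of-thought",                               # hypothetical field
    }
    resp = requests.post("http://localhost:8000/reason", json=payload, timeout=120)  # hypothetical route
    resp.raise_for_status()
    print(resp.json())  # reasoning steps or graph to be rendered by the frontend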

Directory Structure

The repository follows a standard structure for easy navigation:

zeroshot-reasoning/
β”‚
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ main.py         # Entry point for the FastAPI application
β”‚   β”œβ”€β”€ models.py       # Defines data models for input and output
β”‚   β”œβ”€β”€ routes.py       # API route definitions
β”‚   └── utils.py        # Utility functions for processing
β”‚
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ index.html      # Main HTML file for the frontend
β”‚   β”œβ”€β”€ script.js       # JavaScript for frontend logic
β”‚   └── styles.css      # CSS for styling the frontend
β”‚
β”œβ”€β”€ requirements.txt     # List of dependencies
└── README.md            # Project documentation
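
Since app/models.py defines the data models for input and output, the following is a minimal sketch, assuming Pydantic models as FastAPI conventionally uses; the field names are illustrative and not the repository's actual code:

    from typing import List, Optional

    from pydantic import BaseModel

    class ReasoningRequest(BaseModel):
        prompt: str
        strategy: str = "chain-of-thought"  # e.g. chain-, tree-, or graph-of-thought
        image_url: Optional[str] = None     # hypothetical field for the visual input

    class ThoughtNode(BaseModel):
        id: int
        label: str

    class ThoughtEdge(BaseModel):
        source: int
        target: int

    class ReasoningResponse(BaseModel):
        answer: str
        nodes: List[ThoughtNode]
        edges: List[ThoughtEdge]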

Contributing

We welcome contributions to enhance the project. To contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or fix.
  3. Make your changes and commit them.
  4. Push your changes to your fork.
  5. Create a pull request to the main repository.

Please ensure your code follows the existing style and includes appropriate tests.

License

This project is licensed under the MIT License. See the LICENSE file for details.


For further updates and releases, visit the Releases section.
