Commit a15cf05

Add files via upload

1 parent 89e23de commit a15cf05
1 file changed
Lines changed: 134 additions & 0 deletions
@@ -0,0 +1,134 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "441900e0"
      },
      "source": [
        "# 🧠 Mistral v0.3 (7B) Conversational Agent\n",
        "A conceptual demo notebook that illustrates the workflow and output of the Mistral model, even though the model itself is too large to run on the Colab Free Tier."
      ]
    },
13+
{
14+
"cell_type": "markdown",
15+
"metadata": {
16+
"id": "b4ba0b88"
17+
},
18+
"source": [
19+
"## 📄 Description\n",
20+
"This notebook showcases the typical flow of using Mistral v0.3 (7B) for conversational tasks. While the model is too large for Google Colab Free Tier, this version visually explains what the code would do.\n"
21+
]
22+
},
    {
      "cell_type": "markdown",
      "source": [
        "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Mistral_v0.3_Conversational_Demo.ipynb)\n"
      ],
      "metadata": {
        "id": "yzXURvxIlIDD"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 📦 Dependencies"
      ],
      "metadata": {
        "id": "IUQSTocQk7ml"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6f2afc57"
      },
      "outputs": [],
      "source": [
        "!pip install transformers accelerate torch\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9d5dad27"
      },
      "source": [
        "## 🛠 Tools Used\n",
        "- 🤗 Transformers: For loading the model and tokenizer.\n",
        "- PyTorch: For model inference.\n",
        "- Accelerate: To manage device placement (CPU/GPU).\n",
        "\n",
        "A quick environment sanity check follows below.\n"
      ]
    },
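    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sanity-check sketch (not part of the original flow), the cell below verifies that the tools listed above import cleanly and whether a GPU is visible:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Confirm installed versions and whether CUDA is available on this runtime.\n",
        "import torch\n",
        "import transformers\n",
        "import accelerate\n",
        "\n",
        "print(\"transformers:\", transformers.__version__)\n",
        "print(\"accelerate:\", accelerate.__version__)\n",
        "print(\"torch:\", torch.__version__, \"| CUDA available:\", torch.cuda.is_available())\n"
      ]
    },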
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cd936b40"
      },
      "source": [
        "## 🧾 YAML Prompt Format (Conceptual)\n",
        "```yaml\n",
        "task: conversation\n",
        "input: \"What's the capital of France?\"\n",
        "history: []\n",
        "temperature: 0.7\n",
        "top_p: 0.9\n",
        "```\n"
      ]
    },
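    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell below is a minimal sketch of how this conceptual YAML could be mapped onto generation parameters. It assumes PyYAML (preinstalled on Colab); the `task`, `input`, and `history` keys simply mirror the format above and are not part of any Mistral API.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative: parse the conceptual YAML prompt into generate() kwargs.\n",
        "import yaml\n",
        "\n",
        "spec = yaml.safe_load(\"\"\"\n",
        "task: conversation\n",
        "input: \"What's the capital of France?\"\n",
        "history: []\n",
        "temperature: 0.7\n",
        "top_p: 0.9\n",
        "\"\"\")\n",
        "\n",
        "# The sampling fields map directly onto transformers' generate() arguments.\n",
        "gen_kwargs = {\"temperature\": spec[\"temperature\"], \"top_p\": spec[\"top_p\"], \"do_sample\": True}\n",
        "print(spec[\"input\"], gen_kwargs)\n"
      ]
    },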
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8abaaa62"
      },
      "outputs": [],
      "source": [
        "# 🧠 Main Inference Logic (Illustrative)\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
        "import torch\n",
        "\n",
        "model_id = \"mistralai/Mistral-7B-Instruct-v0.3\"\n",
        "\n",
        "# Load the tokenizer and the model in half precision; device_map=\"auto\"\n",
        "# lets Accelerate place the weights across available devices.\n",
        "tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
        "model = AutoModelForCausalLM.from_pretrained(\n",
        "    model_id, torch_dtype=torch.float16, device_map=\"auto\"\n",
        ")\n",
        "\n",
        "# Tokenize the prompt, move it to the model's device, and generate a reply.\n",
        "prompt = \"What's the capital of France?\"\n",
        "inputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\n",
        "outputs = model.generate(**inputs, max_new_tokens=100)\n",
        "response = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
        "print(response)\n"
      ]
    },
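    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Instruct-tuned Mistral checkpoints are trained on a chat format (`[INST] ... [/INST]`), so a raw prompt can yield weaker completions. The cell below is a minimal sketch, assuming a transformers version that provides `tokenizer.apply_chat_template`, of wrapping the same question in that format; the sampling values mirror the conceptual YAML above.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative: format the conversation with the model's own chat template.\n",
        "messages = [\n",
        "    {\"role\": \"user\", \"content\": \"What's the capital of France?\"}\n",
        "]\n",
        "input_ids = tokenizer.apply_chat_template(\n",
        "    messages, add_generation_prompt=True, return_tensors=\"pt\"\n",
        ").to(model.device)\n",
        "\n",
        "outputs = model.generate(\n",
        "    input_ids, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9\n",
        ")\n",
        "# Decode only the newly generated tokens, skipping the prompt itself.\n",
        "response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)\n",
        "print(response)\n"
      ]
    },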
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "def486c6"
      },
      "source": [
        "## 📤 Output (Example)\n",
        "> \"The capital of France is Paris.\"\n",
        "\n",
        "![output](https://upload.wikimedia.org/wikipedia/commons/e/e6/Paris_Night.jpg)\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
