### Model names

- Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
+ Model names follow a `model:tag` format, where `model` can have an optional namespace such as `example/model`. Some examples are `orca-mini:3b-q4_1` and `llama3:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
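
For illustration, a minimal sketch using the `/api/show` endpoint documented later in this file: since an omitted tag defaults to `latest`, these two requests name the same model.

```shell
# Both requests refer to the same model; the tag defaults to "latest" when omitted.
curl http://localhost:11434/api/show -d '{ "name": "llama3" }'
curl http://localhost:11434/api/show -d '{ "name": "llama3:latest" }'
```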

### Durations

@@ -66,7 +66,7 @@ Enable JSON mode by setting the `format` parameter to `json`. This will structur

```shell
curl http://localhost:11434/api/generate -d '{
- "model": "llama2",
+ "model": "llama3",
"prompt": "Why is the sky blue?"
}'
```

@@ -77,7 +77,7 @@ A stream of JSON objects is returned:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"response": "The",
"done": false

@@ -99,7 +99,7 @@ To calculate how fast the response is generated in tokens per second (token/s),

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "",
"done": true,

@@ -121,7 +121,7 @@ A response can be received in one reply when streaming is off.

```shell
curl http://localhost:11434/api/generate -d '{
- "model": "llama2",
+ "model": "llama3",
"prompt": "Why is the sky blue?",
"stream": false
}'

@@ -133,7 +133,7 @@ If `stream` is set to `false`, the response will be a single JSON object:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"done": true,

@@ -155,7 +155,7 @@ If `stream` is set to `false`, the response will be a single JSON object:

```shell
curl http://localhost:11434/api/generate -d '{
- "model": "llama2",
+ "model": "llama3",
"prompt": "What color is the sky at different times of the day? Respond using JSON",
"format": "json",
"stream": false

@@ -166,7 +166,7 @@ curl http://localhost:11434/api/generate -d '{

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-11-09T21:07:55.186497Z",
"response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
"done": true,

@@ -289,7 +289,7 @@ If you want to set custom options for the model at runtime rather than in the Mo

```shell
curl http://localhost:11434/api/generate -d '{
- "model": "llama2",
+ "model": "llama3",
"prompt": "Why is the sky blue?",
"stream": false,
"options": {

@@ -332,7 +332,7 @@ curl http://localhost:11434/api/generate -d '{

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"done": true,

@@ -354,7 +354,7 @@ If an empty prompt is provided, the model will be loaded into memory.

```shell
curl http://localhost:11434/api/generate -d '{
- "model": "llama2"
+ "model": "llama3"
}'
```

@@ -364,7 +364,7 @@ A single JSON object is returned:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-12-18T19:52:07.071755Z",
"response": "",
"done": true

@@ -407,7 +407,7 @@ Send a chat message with a streaming response.

```shell
curl http://localhost:11434/api/chat -d '{
- "model": "llama2",
+ "model": "llama3",
"messages": [
{
"role": "user",

@@ -423,7 +423,7 @@ A stream of JSON objects is returned:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
"role": "assistant",

@@ -438,7 +438,7 @@ Final response:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 4883583458,

@@ -456,7 +456,7 @@ Final response:

```shell
curl http://localhost:11434/api/chat -d '{
- "model": "llama2",
+ "model": "llama3",
"messages": [
{
"role": "user",

@@ -471,7 +471,7 @@ curl http://localhost:11434/api/chat -d '{

```json
{
- "model": "registry.ollama.ai/library/llama2:latest",
+ "model": "registry.ollama.ai/library/llama3:latest",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",

@@ -495,7 +495,7 @@ Send a chat message with a conversation history. You can use this same approach

```shell
curl http://localhost:11434/api/chat -d '{
- "model": "llama2",
+ "model": "llama3",
"messages": [
{
"role": "user",

@@ -519,7 +519,7 @@ A stream of JSON objects is returned:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
"role": "assistant",

@@ -533,7 +533,7 @@ Final response:

```json
{
- "model": "llama2",
+ "model": "llama3",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 8113331500,

@@ -591,7 +591,7 @@ curl http://localhost:11434/api/chat -d '{

```shell
curl http://localhost:11434/api/chat -d '{
- "model": "llama2",
+ "model": "llama3",
"messages": [
{
"role": "user",

@@ -609,7 +609,7 @@ curl http://localhost:11434/api/chat -d '{

```json
{
- "model": "registry.ollama.ai/library/llama2:latest",
+ "model": "registry.ollama.ai/library/llama3:latest",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",

@@ -651,7 +651,7 @@ Create a new model from a `Modelfile`.

```shell
curl http://localhost:11434/api/create -d '{
"name": "mario",
- "modelfile": "FROM llama2\nSYSTEM You are mario from Super Mario Bros."
+ "modelfile": "FROM llama3\nSYSTEM You are mario from Super Mario Bros."
}'
```
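
Once created, the model is addressable by name like any other. A minimal sketch:

```shell
curl http://localhost:11434/api/chat -d '{
  "model": "mario",
  "messages": [
    { "role": "user", "content": "Who are you?" }
  ]
}'
```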

@@ -758,7 +758,7 @@ A single JSON object will be returned.

}
},
{
- "name": "llama2:latest",
+ "name": "llama3:latest",
"modified_at": "2023-12-07T09:32:18.757212583-08:00",
"size": 3825819519,
"digest": "fe938a131f40e6f6d40083c9f0f430a515233eb2edaa6d72eb85c50d64f2300e",

@@ -792,7 +792,7 @@ Show information about a model including details, modelfile, template, parameter

```shell
curl http://localhost:11434/api/show -d '{
- "name": "llama2"
+ "name": "llama3"
}'
```
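
A minimal sketch of extracting a single field from the show response (`modelfile` is among the fields this endpoint returns; assumes `jq`):

```shell
# Print only the Modelfile text for the model.
curl -s http://localhost:11434/api/show -d '{ "name": "llama3" }' | jq -r '.modelfile'
```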

@@ -827,8 +827,8 @@ Copy a model. Creates a model with another name from an existing model.

```shell
curl http://localhost:11434/api/copy -d '{
- "source": "llama2",
- "destination": "llama2-backup"
+ "source": "llama3",
+ "destination": "llama3-backup"
}'
```

@@ -854,7 +854,7 @@ Delete a model and its data.

```shell
curl -X DELETE http://localhost:11434/api/delete -d '{
- "name": "llama2:13b"
+ "name": "llama3:13b"
}'
```

@@ -882,7 +882,7 @@ Download a model from the ollama library. Cancelled pulls are resumed from where

```shell
curl http://localhost:11434/api/pull -d '{
- "name": "llama2"
+ "name": "llama3"
}'
```
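
Pull progress is streamed as JSON objects. A minimal sketch of watching it, assuming each streamed object carries a `status` field as in the full API docs (requires `jq`):

```shell
curl -s http://localhost:11434/api/pull -d '{ "name": "llama3" }' | jq -r '.status'
```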