assemblyai/lemur.py (+8 −8)
@@ -208,7 +208,7 @@ def question(
         Args:
             questions: One or a list of questions to ask.
             context: The context which is shared among all questions. This can be a string or a dictionary.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the answer(s).
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -252,7 +252,7 @@ def question_async(
         Args:
             questions: One or a list of questions to ask.
             context: The context which is shared among all questions. This can be a string or a dictionary.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the answer(s).
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -295,7 +295,7 @@ def summarize(
         Args:
             context: An optional context on the transcript.
             answer_format: The format on how the summary shall be summarized.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the summary.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -334,7 +334,7 @@ def summarize_async(
         Args:
             context: An optional context on the transcript.
             answer_format: The format on how the summary shall be summarized.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the summary.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -375,7 +375,7 @@ def action_items(
         Args:
             context: An optional context on the transcript.
             answer_format: The preferred format for the result action items.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the action items response.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -415,7 +415,7 @@ def action_items_async(
         Args:
             context: An optional context on the transcript.
             answer_format: The preferred format for the result action items.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the action items response.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -451,7 +451,7 @@ def task(
 
         Args:
             prompt: The prompt to use for this task.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the task.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
@@ -485,7 +485,7 @@ def task_async(
 
         Args:
             prompt: The prompt to use for this task.
-            final_model: The model that is used for the final prompt after compression is performed (options: "basic", "default", "assemblyai/mistral-7b", and "anthropic/claude-2-1").
+            final_model: The model that is used for the final prompt after compression is performed.
             max_output_size: Max output size in tokens
             timeout: The timeout in seconds to wait for the task.
             temperature: Change how deterministic the response is, with 0 being the most deterministic and 1 being the least deterministic.
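Because the docstrings no longer hard-code the option list, `final_model` accepts any member of the SDK's model enum (the enum changes follow below). A minimal usage sketch, assuming the SDK's standard `import assemblyai as aai` entry point; the API key and audio URL are placeholders:

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Transcribe an audio file first; transcribe() blocks until processing finishes.
transcript = aai.Transcriber().transcribe("https://example.com/meeting.mp3")

# final_model takes any LemurModel member; since the docstrings no longer
# enumerate the options, newly added models need no further doc changes.
result = transcript.lemur.question(
    questions=[aai.LemurQuestion(question="What topics were discussed?")],
    final_model=aai.LemurModel.claude3_5_sonnet,
    max_output_size=300,
    temperature=0.0,
)

for qa in result.response:
    print(qa.question, "->", qa.answer)
```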
The PR also rewrites the model-mode docstring and adds the Claude 3 family to the model enum:

@@ -1847,28 +1847,51 @@
-    LeMUR features four model modes, Basic, Default, Mistral 7B, and Claude v2.1, that allow you to
-    configure your request to suit your needs. These options tell LeMUR whether to use the more
-    advanced Default model or the cheaper, faster, but simplified Basic model. The implicit setting
-    is Default when no option is explicitly passed in.
+    LeMUR features different model modes that allow you to configure your request to suit your needs.
+    """
+
+    claude3_5_sonnet = "anthropic/claude-3-5-sonnet"
+    """
+    Claude 3.5 Sonnet is the most intelligent model to date, outperforming Claude 3 Opus on a wide range of evaluations, with the speed and cost of Claude 3 Sonnet.
+    """
+
+    claude3_opus = "anthropic/claude-3-opus"
+    """
+    Claude 3 Opus is good at handling complex analysis, longer tasks with many steps, and higher-order math and coding tasks.
+    """
+
+    claude3_haiku = "anthropic/claude-3-haiku"
+    """
+    Claude 3 Haiku is the fastest model that can execute lightweight actions.
+    """
+
+    claude3_sonnet = "anthropic/claude-3-sonnet"
+    """
+    Claude 3 Sonnet is a legacy model with a balanced combination of performance and speed for efficient, high-throughput tasks.
+    """
+
+    claude2_1 = "anthropic/claude-2-1"
+    """
+    Claude 2.1 is a legacy model similar to Claude 2.0. The key difference is that it minimizes model hallucination and system prompts, has a larger context window, and performs better in citations.
+    """
+
+    claude2_0 = "anthropic/claude-2"
+    """
+    Claude 2.0 is a legacy model that has good complex reasoning. It offers more nuanced responses and improved contextual comprehension.
     """
 
     default = "default"
     """
-    LeMUR Default is the standard model to use. It is capable of handling any task as well, or better
-    than LeMUR Basic. Default is capable of more nuanced and complex questions, where Basic would provide poor results.
+    Legacy model. The same as `claude2_0`.
+    """
 
-    Additionally, responses provide more insightful responses with Default.
-    The drawback of this expanded functionality and quality is execution speed and cost. Default is up to 20% slower than
-    Basic and has an increased cost.
+    claude_instant1_2 = "anthropic/claude-instant-1-2"
+    """
+    Claude Instant is a legacy model that is optimized for speed and cost. Claude Instant can complete requests up to 20% faster than Claude 2.0.
     """
 
     basic = "basic"
     """
-    LeMUR Basic is a simplified model optimized for speed and cost allowing you to complete simple requests quickly, and cheaply.
-    LeMUR Basic can complete requests up to 20% faster than Default.
-
-    The best use cases for Basic include summary and simple questions with factual answers. It is not recommended to use Basic
-    for complex/subjective tasks where answers require more nuance to be effective.
+    Legacy model. The same as `claude_instant1_2`.
     """
 
     mistral7b = "assemblyai/mistral-7b"
     """
     Mistral 7B is an open source model that works well for summarization and answering questions.
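With the expanded enum, switching models is a one-line change. A short sketch reusing the `transcript` object from the example above; the prompt text is illustrative:

```python
import assemblyai as aai

# task() sends a free-form prompt to LeMUR; final_model selects the model,
# here the fastest Claude 3 member per the docstring above.
result = transcript.lemur.task(
    prompt="List the action items from this meeting as bullet points.",
    final_model=aai.LemurModel.claude3_haiku,
    timeout=60,
)

print(result.response)  # the task response is free-form text
```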