Replaced 'hallucination' references in other lessons (with 'fabrication')
nitya authored Nov 10, 2023
1 parent 25b21e7 commit a0c2df5
Showing 4 changed files with 4 additions and 4 deletions.
01-introduction-to-genai/README.md (2 changes: 1 addition & 1 deletion)
@@ -106,7 +106,7 @@ The input of a large language model is known as prompt, while the output is know

The examples above are quite simple and don’t want to be an exhaustive demonstration of Large Language Models capabilities. They just want to show the potential of using generative AI, in particular but not limited to educational context.

-Also, the output of a generative AI model is not perfect and sometimes the creativity of the model can work against it, resulting in an output which is a combination of words that the human user can interpret as a mystification of reality, or it can be offensive. Generative AI is not intelligent - at least in the more comprehensive definition of intelligence, including critical and creative reasoning or emotional intelligence; it is not deterministic, and it is not trustworthy, since hallucinations, such as erroneous references, content, and statements, may be combined with correct information, and presented in a persuasive and confident manner. In the following lessons, we’ll be dealing with all these limitations and we’ll see what we can do to mitigate them.
+Also, the output of a generative AI model is not perfect and sometimes the creativity of the model can work against it, resulting in an output which is a combination of words that the human user can interpret as a mystification of reality, or it can be offensive. Generative AI is not intelligent - at least in the more comprehensive definition of intelligence, including critical and creative reasoning or emotional intelligence; it is not deterministic, and it is not trustworthy, since fabrications, such as erroneous references, content, and statements, may be combined with correct information, and presented in a persuasive and confident manner. In the following lessons, we’ll be dealing with all these limitations and we’ll see what we can do to mitigate them.

## Assignment

02-exploring-and-comparing-different-llms/README.md (2 changes: 1 addition & 1 deletion)
@@ -159,7 +159,7 @@ Prompt engineering with context is the most cost-effective approach to kick-off
LLMs have the limitation that they can use only the data that has been used during their training to generate an answer. This means that they don’t know anything about the facts that happened after their training process, and they cannot access non-public information (like company data).
This can be overcome through RAG, a technique that augments prompt with external data in the form of chunks of documents, considering prompt length limits. This is supported by Vector database tools (like [Azure Vector Search](https://learn.microsoft.com/azure/search/vector-search-overview?WT.mc_id=academic-105485-koreyst)) that retrieve the useful chunks from varied pre-defined data sources and add them to the prompt Context.

-This technique is very helpful when a business doesn’t have enough data, enough time, or resources to fine-tune an LLM, but still wishes to improve performance on a specific workload and reduce risks of hallucinations, i.e., mystification of reality or harmful content.
+This technique is very helpful when a business doesn’t have enough data, enough time, or resources to fine-tune an LLM, but still wishes to improve performance on a specific workload and reduce risks of fabrications, i.e., mystification of reality or harmful content.

### Fine-tuned model

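To ground the RAG description in the changed passage above, here is a minimal, self-contained sketch of the pattern: retrieve the chunks most relevant to a question, then prepend them to the prompt as context. The word-overlap scorer and all names are illustrative stand-ins; a production system would use embeddings and a vector store such as Azure AI Search.

```python
def score(chunk: str, question: str) -> int:
    """Toy relevance measure: count question words that appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def build_rag_prompt(question: str, chunks: list[str], top_k: int = 2) -> str:
    """Pick the top_k most relevant chunks and add them to the prompt context."""
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    context = "\n".join(f"- {c}" for c in best)
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Illustrative company data the model could not know from pre-training.
chunks = [
    "Contoso's Q3 2023 revenue was 4.2M USD.",
    "The cafeteria serves lunch from 11:30 to 14:00.",
    "Contoso was founded in 2011 in Redmond.",
]
print(build_rag_prompt("What was Contoso's revenue in Q3 2023?", chunks))
```

Because the prompt instructs the model to answer only from the supplied context, the sketch also reduces the risk of fabrications that the passage above describes.
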
04-prompt-engineering-fundamentals/1-introduction.ipynb (2 changes: 1 addition & 1 deletion)
@@ -126,7 +126,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exercise 3: Hallucinations\n",
"### Exercise 3: Fabrications\n",
"Explore what happens when you ask the LLM to return completions for a prompt about a topic that may not exist, or about topics that it may not know about because it was outside it's pre-trained dataset (more recent). See how the response changes if you try a different prompt, or a different model."
]
},
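
As a companion to the renamed exercise above, here is a minimal sketch of how one might probe for fabrications, assuming the OpenAI Python SDK (v1+), an OPENAI_API_KEY in the environment, and an illustrative model name; the notebook's actual setup may differ.

```python
# Hypothetical probe for fabrications: ask about something that does not
# exist and see whether the model declines or invents a confident answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Summarize the Treaty of Antarctica signed in 2098."  # fictional topic

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; try different models and compare
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```
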
@@ -346,7 +346,7 @@ Now let's look at common best practices that are recommended by [Open AI](https:
| Use cues to jumpstart completions | Nudge it towards a desired outcome by giving it some leading words or phrases that it can use as a starting point for the response.|
|Double Down | Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc. Iterate & validate to see what works.|
| Order Matters | The order in which you present information to the model may impact the output, even in the learning examples, thanks to recency bias. Try different options to see what works best.|
-|Give the model an “out” | Give the model a _fallback_ completion response it can provide if it cannot complete the task for any reason. This can reduce chances of models generating false or hallucinatory responses. |
+|Give the model an “out” | Give the model a _fallback_ completion response it can provide if it cannot complete the task for any reason. This can reduce chances of models generating false or fabricated responses. |
| | |
```

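The changed table row above ("Give the model an out") can be made concrete with a small prompt-construction sketch; the fallback wording and all names here are illustrative, not taken from the lesson.

```python
# Sketch of the "give the model an out" practice: the prompt names an
# explicit fallback response, so the model has a sanctioned alternative
# to fabricating an answer when the task cannot be completed.
instruction = (
    "Answer the question using only the excerpt below. If the excerpt "
    "does not contain the answer, reply exactly: "
    "'I could not find this in the manual.'"
)

excerpt = "To reset the device, hold the power button for 10 seconds."
question = "How do I pair the device with Bluetooth?"  # not covered by the excerpt

prompt = f"{instruction}\n\nExcerpt:\n{excerpt}\n\nQuestion: {question}"
print(prompt)  # send as the user message to any chat-completion model
```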
