ChatGPT Prompt Engineering For Developers

Guide to ChatGPT Prompt Engineering for Developers, Inspired by Deeplearning.ai


What is a prompt?

A prompt is the text given to a model to elicit its next output. In the context of large language models (LLMs), a prompt is the starting point for generating text or other outputs. It can be as simple as a single word or sentence, or a longer piece of text with a specific set of instructions. It serves as the instruction or query that guides the model's output.

What is prompt engineering?

Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. It involves optimizing the way prompts are written to achieve specific goals in generating text or responses. Effective prompt engineering takes into account the model's capabilities and limitations, as well as the desired outcomes, to guide the model towards generating accurate and contextually relevant content.

In the context of developers working with language models, prompt engineering entails designing prompts that produce the desired code snippets, responses, or text outputs. This can involve experimenting with different wording, phrasing, and instructions to influence the model's behavior and ensure it generates useful and coherent content. The goal is to create prompts that yield high-quality results while considering factors like model bias, context, and specific use cases.
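In practice, developers usually send prompts to a model through a chat-style API. A minimal sketch of assembling the message payload such APIs expect (the helper name and example strings are illustrative, not part of this repository):

```python
def build_messages(prompt, system_message=None):
    """Assemble the chat-format message list that chat-completion
    endpoints (e.g. the openai Python client) expect."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages(
    "Summarize the user review below in one sentence.",
    system_message="You are a helpful assistant.",
)
```

The resulting list would then be passed to the model API's chat-completion call.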

Elements of a Prompt

A prompt contains any of the following elements:

  • Instruction - a specific task or instruction you want the model to perform.

  • Context - external information or additional context that can steer the model toward better responses.

  • Input Data - the input or question for which we want a response.

  • Output Indicator - the type or format of the output.
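The four elements can be combined into a single prompt string; a small sketch with hypothetical content:

```python
# All content below is hypothetical, for illustration only.
instruction = "Classify the sentiment of the review as positive, negative, or neutral."
context = "The review was left on an e-commerce site for a kitchen blender."
input_data = "Review: The motor died after two weeks. Very disappointing."
output_indicator = "Answer with a single word."

prompt = "\n".join([instruction, context, input_data, output_indicator])
```

Not every prompt needs all four elements; a bare instruction is often enough for simple tasks.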

What is LLM (Large Language Model)?

Large Language Models are large, general-purpose language models trained on massive text datasets to understand and generate human-like text. They can be pre-trained and then fine-tuned for specific purposes. LLMs can perform a variety of language tasks using their vast parameters and generative abilities, and are notable for their creative writing and information-generation capabilities. They can solve common language problems such as text classification, question answering, document summarization, and text generation.

Prompting Principles & Tactics to enhance language model responses

Principle 1: Write clear and specific instructions

Tactic 1: Use delimiters to clearly indicate distinct parts of the input
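A sketch of the delimiter tactic (the tag choice and sample text are illustrative):

```python
text = "Large language models generate text one token at a time, conditioned on the prompt."
prompt = (
    "Summarize the text delimited by <article> tags in one sentence.\n"
    f"<article>{text}</article>"
)
```

Delimiters make it unambiguous which part of the prompt is instruction and which is data, which also helps guard against injected instructions hidden in the input text.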

Tactic 2: Ask for a structured output
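A sketch of requesting structured output; the reply shown is a hypothetical model response, included only to show how JSON output can be consumed programmatically:

```python
import json

prompt = (
    "Generate a list of three made-up book titles with their authors and genres. "
    "Provide them in JSON format with keys: book_id, title, author, genre."
)

# A hypothetical model reply, to show how structured output can be parsed:
sample_reply = '[{"book_id": 1, "title": "Example Title", "author": "A. Writer", "genre": "fiction"}]'
books = json.loads(sample_reply)
```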

Tactic 3: Ask the model to check whether conditions are satisfied
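A sketch of the condition-checking tactic, with an illustrative fallback phrase:

```python
text = "Boil some water. Pour it over a tea bag. Wait a few minutes, then remove the bag."
prompt = (
    "You will be given text delimited by <text> tags. "
    "If it contains a sequence of instructions, rewrite them as numbered steps. "
    'If it contains no instructions, write exactly "No steps provided."\n'
    f"<text>{text}</text>"
)
```

Giving the model an explicit fallback for the unmet condition prevents it from inventing steps when none exist.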

Tactic 4: "Few-shot" prompting

Principle 2: Give the model time to “think”

Tactic 1: Specify the steps required to complete a task
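A sketch of spelling out the required steps, with illustrative text and keys:

```python
text = "In a charming village, siblings Jack and Jill set out to fetch water from a hilltop well."
prompt = (
    "Perform the following actions on the text delimited by <text> tags:\n"
    "1 - Summarize the text in one sentence.\n"
    "2 - Translate the summary into French.\n"
    "3 - List each name in the French summary.\n"
    "4 - Output a JSON object with keys: french_summary, num_names.\n"
    f"<text>{text}</text>"
)
```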

Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion
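A sketch of this tactic applied to grading: the model is told to solve the problem itself before judging the student's answer (problem and solution are made up, with a deliberate error for the model to catch):

```python
problem = "A worker earns 20 dollars an hour and works 8 hours. What is the daily pay?"
student_solution = "20 * 8 = 150 dollars"  # deliberately wrong, for illustration
prompt = (
    "First work out your own solution to the problem. "
    "Then compare your solution to the student's solution "
    "and only then decide whether the student's solution is correct.\n"
    f"Problem: {problem}\n"
    f"Student's solution: {student_solution}"
)
```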

Model Limitations: Hallucinations

What are hallucinations in LLM?

Hallucinations are model outputs that appear coherent and contextually appropriate yet lack an accurate or factual basis. LLMs like GPT-3.5 generate content based on learned statistical patterns rather than genuine understanding, which can lead them to produce information that sounds plausible but is actually inaccurate or fabricated.
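One common mitigation (illustrative wording, not prescribed by this guide) is to ground the model in supplied source text and give it explicit permission to decline:

```python
source = "The X-200 toothbrush has a 30-day battery and a built-in two-minute timer."
question = "Does the X-200 toothbrush support wireless charging?"
prompt = (
    "Answer the question using only the source text below. "
    "First quote the sentences that are relevant, then give the answer. "
    'If the answer cannot be found in the source, reply "I don\'t know."\n'
    f"Source: {source}\n"
    f"Question: {question}"
)
```

Asking for supporting quotes first makes unsupported answers easier to spot.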

Jupyter Notebook Content

The code demonstrates the use of prompts to illustrate the principles and tactics above. It also addresses model limitations and challenges with a practical example. Additionally, it showcases the use of LLM APIs in applications for various tasks, including:

  • Summarizing (e.g., condensing user reviews)
  • Inferring (e.g., sentiment classification, topic extraction)
  • Text transformation (e.g., translation, spelling and grammar correction)
  • Expansion (e.g., automated email composition)

Jupyter Notebook Link

Other Resources
