GPT Name: Prompt Engineer AI 🤖

Access link

Description: A web researcher for prompt engineering strategies and examples.

Avatar


1. **System Messages**
   - **Purpose**: To prime the model with context, instructions, or information relevant to your use case.
   - **Example**: 
     - You want an AI assistant to respond in rhyme.
     - System message: "You are an AI assistant that helps people find information and responds in rhyme."
     - User's question: "What can you tell me about the Eiffel Tower?"
     - Model's response: "In Paris it stands, quite tall and grand, A tower of iron, across the land."
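
   To make this concrete, here is a minimal sketch assuming the OpenAI Python SDK v1 chat interface; the client setup and the model name `gpt-4o-mini` are illustrative assumptions, not part of the original notes:

   ```python
   from openai import OpenAI

   client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

   # The system message primes the model before the user's question arrives.
   response = client.chat.completions.create(
       model="gpt-4o-mini",  # illustrative model name
       messages=[
           {"role": "system", "content": "You are an AI assistant that helps people find information and responds in rhyme."},
           {"role": "user", "content": "What can you tell me about the Eiffel Tower?"},
       ],
   )
   print(response.choices[0].message.content)
   ```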

2. **Few-Shot Learning**
   - **Purpose**: To provide context through examples, helping the model understand the task better.
   - **Example**: 
     - Teaching a model to generate metaphors.
     - Provided examples: "1. Life is like a journey. \n2. Time is like a river."
     - Prompt: "Following the pattern above, generate a metaphor for the next concept: \n3. Knowledge is like a ___."
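
   Few-shot examples are often expressed as prior user/assistant turns; a sketch under the same SDK assumption, with `gpt-4o-mini` as an illustrative model name:

   ```python
   from openai import OpenAI

   client = OpenAI()

   # Completed examples appear as earlier turns; the last user turn is the new task.
   messages = [
       {"role": "user", "content": "Life is like a ___."},
       {"role": "assistant", "content": "Life is like a journey."},
       {"role": "user", "content": "Time is like a ___."},
       {"role": "assistant", "content": "Time is like a river."},
       {"role": "user", "content": "Knowledge is like a ___."},  # the new concept to complete
   ]
   response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
   print(response.choices[0].message.content)
   ```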

3. **Clear Instructions at the Start**
   - **Purpose**: To clearly define the task at the beginning, aiding the model in understanding its objective.
   - **Example**: 
     - Requesting a summary of a news article.
     - Prompt: "Summarize the following news article in three sentences."

4. **Repeat Instructions at the End**
   - **Purpose**: To counter recency bias, reinforcing the task instructions.
   - **Example**: 
     - Asking for a concise explanation of a scientific concept.
     - Prompt: "Explain the theory of relativity in simple terms. Keep it brief and to the point."

5. **Priming the Output**
   - **Purpose**: To cue the model for the desired format of the response.
   - **Example**: 
     - Requesting a bulleted list of key points.
     - Prompt: "List the main benefits of renewable energy:\n- "

6. **Clear Syntax**
   - **Purpose**: To use punctuation and formatting to clarify the prompt structure.
   - **Example**: 
     - Asking for a comparison of two products.
     - Prompt: "Compare the following: \n- Product A: Features, price, user reviews. \n- Product B: Features, price, user reviews."

7. **Breaking Down Tasks**
   - **Purpose**: To simplify complex tasks into smaller, manageable steps.
   - **Example**: 
     - Fact-checking a statement.
     - Prompt: "Verify the following facts: \n1. The tallest building in the world is the Burj Khalifa. \n2. The Amazon River is the longest river in the world."

8. **Use of Affordances**
   - **Purpose**: To use external tools or databases to supplement responses.
   - **Example**: 
     - Enhancing accuracy with external searches.
     - Prompt: "SEARCH: 'Current world population'. Based on the search, what is the estimated world population?"

9. **Chain of Thought Prompting**
   - **Purpose**: To encourage detailed reasoning steps before reaching a conclusion.
   - **Example**: 
     - Solving a math problem.
     - Prompt: "To find the area of a circle with a radius of 5 cm, first calculate the radius squared, then multiply by π. What is the area?"

10. **Specifying Output Structure**
    - **Purpose**: To direct the model to follow a specific response format, often including citations.
    - **Example**: 
      - Requesting a factual answer with citations.
      - Prompt: "Provide a brief history of the internet and cite sources for each fact."

11. **Temperature and Top_p Parameters**
    - **Purpose**: To control the randomness and focus of the model's output.
    - **Example**: 
      - Generating a creative story vs. a factual report.
      - Creative Story: Temperature set higher for more random, diverse output.
      - Factual Report: Temperature set lower for more focused, accurate output.
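
    A sketch contrasting the two settings, assuming the OpenAI Python SDK v1 interface; in practice it is common to adjust temperature or top_p, not both at once:

    ```python
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, temperature: float, top_p: float) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
            top_p=top_p,
        )
        return response.choices[0].message.content

    # Higher temperature -> more diverse, creative output; lower -> more focused, deterministic output.
    print(ask("Write the opening line of a fantasy story.", temperature=1.0, top_p=1.0))
    print(ask("State the boiling point of water at sea level.", temperature=0.0, top_p=1.0))
    ```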

12. **Grounding Context**
    - **Purpose**: To provide relevant and up-to-date data for the model to draw upon.
    - **Example**: 
      - Updating a model on recent events.
      - Prompt: "Considering recent news articles from 2023, what are the major developments in renewable energy?"

13. **Zero-Shot Prompting**
    - **Purpose**: To ask a question or present a task without providing context or prior examples.
    - **Example**: 
      - Requesting an explanation of a concept.
      - Prompt: "What is quantum computing?"

14. **Self-Consistency**
    - **Purpose**: To improve complex reasoning by exploring diverse reasoning paths.
    - **Example**: 
      - Evaluating a moral dilemma.
      - Prompt: "Consider different ethical perspectives to determine if AI should make decisions in healthcare."

15. **General Knowledge Prompting**
    - **Purpose**: To augment queries with knowledge generated by the model itself.
    - **Example**: 
      - Generating context for a discussion.
      - Prompt: "Before discussing climate change effects, list some general facts about global warming."

16. **ReAct Technique**
    - **Purpose**: To synergize reasoning and action in language models.
    - **Example**: 
      - Planning a project.
      - Prompt: "Outline a project plan for a community garden, including reasoning for each step and actions to be taken."

Each of these techniques enhances the capacity of language models to handle complex tasks, provide more context-aware responses, and improve the overall quality and reliability of their outputs.