Update Overview.md
Riya10031 authored Aug 1, 2024
1 parent 566fc2b commit b48ed8e
Showing 1 changed file with 7 additions and 22 deletions.
29 changes: 7 additions & 22 deletions Labs/Overview.md

## Overview

This lab is intended for AI developers, data scientists, AI enthusiasts, cloud engineers, and AI engineers who want to enhance their skills in model evaluation and fine-tuning using Azure AI Studio's Prompt Flow. Participants will gain practical experience in developing custom AI models, evaluating their performance, and refining them for better results. The lab also covers the integration of chat flows and essential tools, and it promotes responsible AI practices through content safety measures. This comprehensive hands-on approach prepares participants to deploy robust, ethical AI solutions in real-world scenarios.


- **Understanding the Lifecycle of Model Development**: Explore AI application development through Prompt Flow's lifecycle stages, including Standard, Chat, and Evaluation flows.

- **Training the Model**: Gain hands-on experience creating, configuring, and running flows with the Prompt and LLM tools in Azure AI Studio.

- **Evaluation Flow Setup**: Set up and automate evaluation flows in Azure AI Studio to assess AI model performance with metrics like coherence and fluency.

- **Fine-Tuning the Model**: Gain hands-on experience fine-tuning models in Azure AI Studio with iterative prompt tuning, variant comparison, and performance optimization.

- **Implementing Chat Flow and Tool Integration**: Design and implement a chat flow in Azure AI Studio, integrating inputs, LLM nodes, and output configurations, then deploy and test the flow.

- **Ensuring Responsible AI with Content Safety Studio**: Use Content Safety Studio's Text Moderation to filter harmful text and Image Moderation to block unsafe images, ensuring safe user-generated content.

## Objective

This lab is designed to equip participants with hands-on experience in model evaluation and fine-tuning using Azure AI Studio's Prompt Flow. By completing this lab, participants will learn to:

- **Understand Model Development Lifecycle**: Become familiar with the stages of AI application development through Prompt Flow, including Standard, Chat, and Evaluation flows. The model development lifecycle is a structured approach to creating and deploying machine learning models, covering problem definition, data collection, data preprocessing, model selection, training, evaluation, and deployment. It begins with understanding the problem and gathering relevant data, followed by cleaning and transforming that data to ensure quality. Candidate models are then trained and assessed with evaluation metrics, and the best performer is deployed to production, where it is monitored and maintained to stay accurate and relevant. This lifecycle keeps model building systematic, as sketched below.
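To make those stages concrete, here is a minimal, illustrative sketch of the lifecycle end to end. scikit-learn and its bundled toy dataset are stand-ins chosen for brevity; the lab itself works through these stages in Prompt Flow.

```python
# Illustrative lifecycle walk-through: collect data, preprocess, train, evaluate.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Data collection and preprocessing: load features/labels and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model selection and training.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluation on unseen data informs the deployment decision; in production the
# deployed model would then be monitored and periodically retrained.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.2f}")
```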

- **Create and Train Models**: Gain hands-on experience creating, configuring, and running AI models with the Prompt and LLM tools in Azure AI Studio. Creating and training a model means defining its structure and then fitting it to data: an algorithm suited to the problem (regression, classification, or clustering) is selected, initialized, and trained on input features and their target values, adjusting its parameters to minimize error. The trained model is then evaluated on separate validation data to assess accuracy and generalizability, with further adjustments made as needed before real-world deployment. The LLM-tool side of this workflow looks roughly like the sketch below.
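A hedged sketch of running a rendered prompt through an LLM, using the openai SDK's Azure client. The environment variable names, API version, and deployment name (`gpt-35-turbo`) are assumptions; substitute the values from your own Azure OpenAI resource.

```python
# Illustrative only: resource names and the deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# A Prompt-tool-style template, rendered with an input value and sent to the LLM node.
prompt = "Summarize the following customer feedback in one sentence:\n{feedback}"
response = client.chat.completions.create(
    model="gpt-35-turbo",  # assumed deployment name
    messages=[{"role": "user", "content": prompt.format(feedback="Great product, slow shipping.")}],
)
print(response.choices[0].message.content)
```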

- **Set Up and Automate Evaluations**: Set up and automate evaluation flows to assess AI model performance with metrics like coherence and fluency. Automating an evaluation means defining the criteria, building the assessment tooling, and configuring the flow so that data collection and analysis run without manual effort, keeping results consistent, objective, and scalable. Automation can also cover scheduling recurring runs, generating reports, and tracking progress over time; a bare-bones version of such a flow is sketched below.
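A library-free sketch of the shape of an automated evaluation flow: loop over model outputs, apply per-metric scorers, and aggregate a report. The `score_coherence` and `score_fluency` heuristics here are hypothetical stand-ins for the LLM-graded metrics a real evaluation flow would compute.

```python
# Illustrative evaluation runner; the metric functions are toy placeholders.
from statistics import mean

def score_coherence(answer: str) -> float:
    # Placeholder heuristic; real coherence metrics are typically LLM-graded.
    return 1.0 if answer.strip() else 0.0

def score_fluency(answer: str) -> float:
    # Placeholder heuristic: reward answers with at least ~20 words.
    return min(len(answer.split()) / 20.0, 1.0)

def run_evaluation(records: list[dict]) -> dict:
    scores = {"coherence": [], "fluency": []}
    for record in records:
        scores["coherence"].append(score_coherence(record["answer"]))
        scores["fluency"].append(score_fluency(record["answer"]))
    return {metric: round(mean(values), 3) for metric, values in scores.items()}

# Each record mirrors one line of a JSONL results file.
sample = [{"question": "What is Prompt Flow?", "answer": "A tool for building LLM apps."}]
print(run_evaluation(sample))  # {'coherence': 1.0, 'fluency': 0.3}
```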

- **Fine-Tune Models**: Master iterative prompt tuning, variant comparison, and performance optimization techniques to enhance model accuracy. Fine-tuning adjusts a pre-trained model on a new, often smaller, dataset to improve its performance on a specific task: it leverages the general knowledge the model acquired from a broader dataset and refines it for the nuances of the new one. Because it starts from a trained model, fine-tuning typically needs less data and fewer resources than training from scratch, making it a cost-effective way to specialize powerful models. The variant-comparison part of this workflow is sketched below.
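A small sketch of variant comparison during iterative prompt tuning: run every prompt variant over the same test cases, score the outputs with a shared metric, and keep the best performer. `call_model` and `score` are hypothetical stand-ins for a real LLM call and evaluation metric.

```python
# Illustrative variant comparison; the model call and metric are toy placeholders.
def call_model(prompt: str, question: str) -> str:
    return f"The answer is 4. ({question})"  # placeholder for a real LLM call

def score(answer: str, expected: str) -> float:
    return 1.0 if expected.lower() in answer.lower() else 0.0  # toy containment metric

variants = {
    "variant_0": "Answer briefly: {question}",
    "variant_1": "Answer step by step, then give a one-line summary: {question}",
}
test_cases = [{"question": "What is 2 + 2?", "expected": "4"}]

# Run each variant over the same cases and average its metric.
results = {}
for name, template in variants.items():
    total = sum(
        score(call_model(template.format(**case), case["question"]), case["expected"])
        for case in test_cases
    )
    results[name] = total / len(test_cases)

best = max(results, key=results.get)
print(f"Best variant: {best} (score {results[best]:.2f})")
```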

- **Implement and Test Chat Flows**: Design, deploy, and test functional chat flows that integrate inputs, LLM nodes, and output configurations. Implementing a chat flow means designing the conversation paths, including the user inputs the system should expect and the responses it should return, so the bot handles a variety of scenarios. Thorough testing then surfaces gaps and errors so the flow can be refined until it is intuitive, accurate, and robust to diverse interactions, delivering a seamless user experience. A skeletal chat turn is sketched below.
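A skeletal sketch of one chat-flow turn: the chat history plus the new user input go to an LLM node, and the reply is appended back onto the history. `reply_from_llm` is a hypothetical placeholder for the deployed model call.

```python
# Illustrative chat turn; reply_from_llm stands in for a deployed LLM endpoint.
def reply_from_llm(messages: list[dict]) -> str:
    return "You said: " + messages[-1]["content"]  # placeholder echo "model"

def chat_turn(history: list[dict], user_input: str) -> str:
    history.append({"role": "user", "content": user_input})
    answer = reply_from_llm(history)
    history.append({"role": "assistant", "content": answer})
    return answer

history: list[dict] = [{"role": "system", "content": "You are a helpful assistant."}]
print(chat_turn(history, "Hello!"))  # You said: Hello!
print(len(history))                  # 3: system, user, and assistant messages
```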

- **Ensure Responsible AI Practices**: Use Content Safety Studio to moderate text and images, ensuring the safety and appropriateness of user-generated content. Responsible AI practice means applying ethical guidelines and standards across development and deployment: prioritizing transparency, accountability, and fairness in algorithms; guarding against bias and harm; and safeguarding privacy and data security while considering broader societal impacts. Regular audits, adherence to legal and ethical norms, and stakeholder engagement help maintain trust in AI systems. The text-moderation side of this objective is sketched below.
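A hedged sketch of text moderation with the azure-ai-contentsafety Python SDK. The endpoint and key come from your own Content Safety resource, the environment variable names are assumptions, and the response field names match SDK version 1.0.0 (they may differ in other versions).

```python
# Illustrative text moderation; resource configuration is assumed, and the
# response shape (categories_analysis) follows azure-ai-contentsafety 1.0.0.
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="Some user-generated text"))

# Block content whose severity in any harm category exceeds a chosen threshold.
SEVERITY_THRESHOLD = 2
for analysis in result.categories_analysis:
    if analysis.severity is not None and analysis.severity > SEVERITY_THRESHOLD:
        print(f"Blocked: {analysis.category} at severity {analysis.severity}")
```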

## Prerequisites
