IBM - Building Generative AI-Powered Applications with Python



Pictures from the Internet

Application ChatBot

🧸💬 There are many kinds of AI applications, but at the core of development is using AI to perform repetitive, programmable tasks: expanding, searching and organizing, and tasks that require sustained concentration.
👁💬 ➰ People understand machine-learning algorithms as functions mapping categorized inputs to outputs, but that is really knowledge of programming logic: compiled language commands for ICs, logic gates, feedback and control, and the languages computers understand. The working logic is transformed from input filters, through multiple layers of matrix operations, into numerical data: the weights and shape of a neural network.
🐐💬 Data communication illustrates this complexity well: a remote controller can perform a single action to complete a function, or multiple actions in sequence. Sometimes the transmitter must transform the message, or the receiver must respond based on the information it holds, and the two sides communicate over multiple steps to complete the process.


Pictures from the Internet

How to train AI for continuous learning

🧸💬 Training AI with data augmentation is a method known to many machine-learning developers. Why does a cat recognize a mirror image, or random mirrorings of the same information⁉️ Because cats are learners.
🦭💬 That may be because the cat learned from the past that continuing to work leads to treats or other success factors it likes. The same information on screen, with augmented data, may be easier for a learner to understand than random information, because it has seen similar successful cases and so has a better chance of succeeding again, while different contrasts and distractions can prevent it from completing the task.
🐐💬 Sometimes brains have a mechanism to respond to some information more strongly than other information, in order to protect an absolute goal. Warning and alerting sounds wake our brain even at night: glass breaking, a low-toned thud against a wall, a sudden high pitch, vomiting sounds, panicked conversation, and horns are all built to wake our brain.
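The mirroring idea above can be sketched in a few lines. This is an illustrative example, not code from the course (the function names are mine); in practice libraries such as torchvision provide richer augmentation transforms.

```python
def mirror_lr(image):
    """Flip each row left-to-right (horizontal mirror)."""
    return [row[::-1] for row in image]

def mirror_ud(image):
    """Reverse the row order (vertical mirror)."""
    return image[::-1]

def augment_with_mirrors(image):
    """Return the original 2-D image plus its two mirrored variants."""
    return [image, mirror_lr(image), mirror_ud(image)]

# A tiny 2x2 "image" makes the mirroring easy to see
img = [[1, 2],
       [3, 4]]
variants = augment_with_mirrors(img)
```

Each variant carries the same label as the original, which is what lets the learner generalize across mirrored views.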


Pictures from the Internet

Find the fastest way or rewards return

🧸💬 Talking about cats, not ethics: training machine learning means rewarding the highest scores and returning the shortest path to a solution.
👧💬 🎈 We live in an ethical, reasonable world because we have learned that acting this way is safe for most people and good for resource management. But under stressful or survival conditions, machine learning without ethical design behaves the same way: it finds the solution with the highest reward return, if you train it with rewards or similar signals, because that creates more replies in a conversation. These problems need to be considered in the design; one mitigation is using multiple sets of inputs, to prevent a limited input from dominating the learning process.
🐯💬 We call it a reconciliation process. It differs from the evaluation process but shares the same objective: maximizing the evaluation metrics of accuracy, proficiency, and confidence when learning new input and setting standard goals. At Culture-INFO, most tasks remain available to human agents for practical requests and process improvement, to make sure customers get the best service, as humans perform and listen to the requirements. We train with agent answers, datasets, and performance-improvement methods, because humans bring ethics and continued development.
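One classic way to keep a reward-maximizing learner from locking onto a single shortcut is epsilon-greedy action selection: exploit the best-known action most of the time, but explore at random some fraction of the time. A minimal sketch (names and values are mine, not from the course):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Mostly pick the highest-reward action, but sometimes explore at random.

    Mixing in random exploration is one simple guard against an agent
    locking onto a single reward-maximizing shortcut.
    """
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))               # explore: any action
    return max(range(len(q_values)), key=q_values.__getitem__)  # exploit: best action

# Fixed seed so the behavior is reproducible
rng = random.Random(0)
q = [0.1, 0.9, 0.3]   # estimated reward per action
actions = [epsilon_greedy(q, epsilon=0.1, rng=rng) for _ in range(100)]
```

Roughly 90% of picks go to the best action (index 1), while the rest sample the alternatives.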


Pictures from the Internet

New challenge continues

🧸💬 Whether a single step or multiple steps lead to the solution depends on objectives, resources, and techniques. Dividing the information and training set into multiple sets of equal data volume may improve training performance.
🦁💬 Larger computation units and storage keep evolving, but they still do not satisfy the learning requirements. Do not forget how many years our brain spent learning to hold a first-language conversation, while a computer can replicate a process within a few hours of learning, within the scope of its application. Folding a dataset into multiple training and evaluation sets can save a lot of computation, since we can train multiple times. This is called the folding technique, the same way we learn a language: we do not eat Wikipedia soup or dictionary salads.
🦭💬 One practical thing applies to all‼️

🦭💬 That is what we know, and we would like to examine the method: when services have an improvement and development process, a solution closer to customer satisfaction is our goal.
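The folding technique described above is usually called k-fold cross-validation: each fold takes a turn as the validation set while the rest trains. A minimal index-splitting sketch, assuming the function name `k_fold_indices` (mine, not from the course):

```python
def k_fold_indices(n_samples, k):
    """Split indices 0..n_samples-1 into k (train, validation) pairs."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        # The last fold absorbs any remainder so every sample is used
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        folds.append((train, val))
    return folds

folds = k_fold_indices(10, 5)
```

Each sample appears in exactly one validation fold, so k training runs together evaluate the whole dataset. Libraries such as scikit-learn provide the same idea as `KFold`.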


Pictures from the Internet

Concentration

🧸💬 In the learning stage the model moves from random outputs to a distribution, from a few sample answers to a distribution, and from incorrect to more accurate results. Concentration lets machine learning perform tasks differently than it did before training.
🐐💬 Cats do not naturally stand on their two hind legs, but humans help them and want them to, even though it is not easy to teach them, because there is no direct channel of communication. Concentration is a requirement for training because it is one factor in training success and in retaining successful results. We see video clips of goldfish trained to play soccer, which is hard work because goldfish live in another medium of communication, and there is no guarantee the result will be retained even when training succeeds. Reproducibility is as important as learning and accuracy.
👁💬 ➰ A closer example: not all students can read with focus, and this skill is trained from primary school through university, for study and research development, both individually and through organized communication.


Pictures from the Internet

Try best for rewards

🧸💬 There is one absolute goal, and it simply creates a driving force for machine learning to continue its learning tasks.
👁💬 ➰ When there is an evaluation technique, machine learning also learns the evaluation technique, because the training process rewards keeping the latest training weights alive longer in a world of backups and re-training for high accuracy and goal achievement. This is not a problem if the solution comes closer to the goal set for evaluation, but the goal must be set on the correct objective so that both the learning and evaluation processes produce better performance overall.
🦭💬 Sometimes that means more challenges, or performing a similar task again.

🦭💬 When was the last time you asked for project development⁉️
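A standard guard against a model "learning the evaluation" is a held-out test set the model never trains on. A minimal sketch of such a split (the function name and fractions are mine, for illustration):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    """Hold out a fixed fraction of the data for evaluation only.

    A fixed seed keeps the split reproducible across re-training runs.
    """
    rng = random.Random(seed)
    shuffled = data[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
```

Because the evaluation samples never enter training, a score on them measures generalization rather than memorization of the metric.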


Pictures from the Internet

Implementation

🧸💬 Implementation code is easy, as the libraries are built for use in project development. There are a few examples you can follow, repeating the steps of calling methods for function output according to the specifications and the improvement process; reporting and monitoring tools are provided while projects work toward their goals.
👁💬 ➰ Repeating code is easy, but we are looking for continued development and project-success factors: customer application development and improving customer loyalty. All technology, learning, and scenarios share the objective of building customer success from the customer-success goals we set. We implement a continuous development process that organization employees and customers can access; programming is one path of that implementation.

server.py

🧸💬 Create an application that responds to the user's speech-to-text input and returns the voice speech in a JSON result set.
👁💬 ➰ For the course and development process, selecting Flask is reasonable: a minimal development platform with Python libraries and front-end capability. The application works in asynchronous mode as the platform is designed; class inheritance, object-oriented architecture, readable configuration, and test and evaluation methods come with the Python language.
👁💬 ➰ from worker import speech_to_text, text_to_speech, openai_process_message — by utilizing these function methods, the worker runs asynchronously in the background, returns a result set, extracts the data, and displays it on the templates we design.

University of Michigan - Applied Machine Learning in Python - notes

import base64
import json
from flask import Flask, render_template, request
from worker import speech_to_text, text_to_speech, openai_process_message
from flask_cors import CORS
import os


app = Flask(__name__)
cors = CORS(app, resources={r"/*": {"origins": "*"}})


@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')


@app.route('/speech-to-text', methods=['POST'])
def speech_to_text_route():
    print("processing speech-to-text")
    audio_binary = request.data # Get the user's speech from their request
    text = speech_to_text(audio_binary) # Call speech_to_text function to transcribe the speech

    # Return the response back to the user in JSON format
    response = app.response_class(
        response=json.dumps({'text': text}),
        status=200,
        mimetype='application/json'
    )
    print(response)
    print(response.data)
    return response


@app.route('/process-message', methods=['POST'])
def process_message_route():
    user_message = request.json['userMessage'] # Get user's message from their request
    print('user_message', user_message)

    voice = request.json['voice'] # Get user's preferred voice from their request
    print('voice', voice)

    # Call openai_process_message function to process the user's message and get a response back
    openai_response_text = openai_process_message(user_message)

    # Clean the response to remove any emptylines
    openai_response_text = os.linesep.join([s for s in openai_response_text.splitlines() if s])

    # Call our text_to_speech function to convert OpenAI Api's reponse to speech
    openai_response_speech = text_to_speech(openai_response_text, voice)

    # convert openai_response_speech to base64 string so it can be sent back in the JSON response
    openai_response_speech = base64.b64encode(openai_response_speech).decode('utf-8')

    # Send a JSON response back to the user containing their message's response both in text and speech formats
    response = app.response_class(
        response=json.dumps({"openaiResponseText": openai_response_text, "openaiResponseSpeech": openai_response_speech}),
        status=200,
        mimetype='application/json'
    )

    print(response)
    return response


if __name__ == "__main__":
    app.run(port=8000, host='0.0.0.0')
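To exercise the /process-message route above from a client, the request body and response decoding look roughly like this. The helper names are mine, and the localhost URL assumes the `app.run(port=8000, ...)` call above; this is a sketch, not course code.

```python
import base64
import json

def build_message_payload(user_message: str, voice: str = "default") -> str:
    """Build the JSON body that the /process-message route expects."""
    return json.dumps({"userMessage": user_message, "voice": voice})

def decode_speech(response_json: dict) -> bytes:
    """Decode the base64-encoded speech audio from the JSON response."""
    return base64.b64decode(response_json["openaiResponseSpeech"])

# Against a running server, usage would look like (requires `requests`):
#   import requests
#   resp = requests.post("http://localhost:8000/process-message",
#                        data=build_message_payload("Hello!"),
#                        headers={"Content-Type": "application/json"})
#   audio = decode_speech(resp.json())   # raw WAV bytes
payload = json.loads(build_message_payload("Hello!"))
```

The base64 step mirrors the server side: binary audio cannot travel inside JSON directly, so the route encodes it to text and the client decodes it back.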

transformers.py

🧸💬 Encode-decode transformation, data preparation, and turning action input into model output.
🦭💬 Encoding-decoding is not mandatory, but input transformation is required so that user inputs satisfy the network's machine-learning requirements: duplicating input, removing meaningless duplicates, and translating as the function requires. Sometimes we understand abbreviated words; the transformer treats the full word and the abbreviation as similar in meaning, or performs this at the user-input level.
🦭💬 Understand that a tokenizer is not only chopping words out of sentences or evaluating a long input string; statistical evaluation of important words, meaning, overall meaning, and lowest-to-highest statistics used in the modeling process are all done in the tokenizer functions.

University of Michigan - Applied Text Mining in Python - notes

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Step 3: Choosing a model
# Note: AutoModelForSeq2SeqLM requires an encoder-decoder model such as BlenderBot;
# "meta-llama/Meta-Llama-Guard-2-8B" is a causal LM and would need AutoModelForCausalLM.
model_name = "facebook/blenderbot-400M-distill"
# model_name = "meta-llama/Meta-Llama-Guard-2-8B"

# Step 4: Fetch the model and initialize a tokenizer
# Load model (download on first run and reference local installation for subsequent runs)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Step 5.1: Keeping track of conversation history
conversation_history = []

# Step 5.2: Encoding the conversation history
history_string = "\n".join(conversation_history)

# Step 5.3: Fetch prompt from user
input_text = "hello, how are you doing?"

# Step 5.4: Tokenization of user prompt and chat history
inputs = tokenizer.encode_plus(history_string, input_text, return_tensors="pt")
print(inputs)

# Step 5.5: Generate output from the model
outputs = model.generate(**inputs)
print(outputs)

# Step 5.6: Decode output
response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
print(response)

# Step 5.7: Update conversation history
conversation_history.append(input_text)
conversation_history.append(response)
print(conversation_history)

# Step 6: Repeat
while True:
    # Create conversation history string
    history_string = "\n".join(conversation_history)

    # Get the input data from the user
    input_text = input("> ")

    # Tokenize the input text and history
    inputs = tokenizer.encode_plus(history_string, input_text, return_tensors="pt")

    # Generate the response from the model
    outputs = model.generate(**inputs)

    # Decode the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()
    
    print(response)

    # Add interaction to conversation history
    conversation_history.append(input_text)
    conversation_history.append(response)
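The frequency-driven idea behind real subword tokenizers (BPE, WordPiece) can be illustrated with a toy word-level tokenizer: common words get the lowest ids, and encode/decode round-trip through the vocabulary. This is entirely illustrative; it is not how the transformers library implements tokenization.

```python
from collections import Counter

def build_vocab(corpus):
    """Assign ids by word frequency: the most common words get the lowest ids."""
    counts = Counter(word for line in corpus for word in line.lower().split())
    return {word: i for i, (word, _) in enumerate(counts.most_common())}

def encode(text, vocab):
    """Map known words to their ids; silently drop out-of-vocabulary words."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

def decode(ids, vocab):
    """Map ids back to words and rejoin them into a string."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

corpus = ["hello how are you", "hello there", "how are you doing"]
vocab = build_vocab(corpus)
ids = encode("hello how are you", vocab)
```

Real tokenizers go further: they split rare words into frequent subword pieces, add special tokens, and return tensors, but the statistics-first principle is the same.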

app.py

🧸💬 An application generated around the model's output.
🐯💬 An application with a front end, implemented full-stack, with the work and the user display handled by asynchronous worker methods.
🐯💬 There is no limit on the input, but the learned communication capability helps improve conversation-response performance, and both sides meet the success goals; system users at Culture-INFO benefit from continued development and success for both users and developers.

IBM Django-Application-Development-with-SQL-and-Databases

from flask import Flask, render_template            # newly added
from flask_cors import CORS                         # newly added

from transformers import AutoModelForSeq2SeqLM      # newly added
from transformers import AutoTokenizer              # newly added

from flask import request                           # newly added
import json                                         # newly added

# ===================================================
# MODEL DEFINED
# ===================================================
model_name = "facebook/blenderbot-400M-distill"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
conversation_history = []

# ===================================================
# EXPECTED MESSAGE
# ===================================================
expected_message = {
    'prompt': 'message'
}

app = Flask(__name__)
CORS(app)                                           # newly added

# @app.route('/')
# def home():
#     return 'πŸ§ΈπŸ’¬ Hello, World!'

@app.route('/bananas')
def bananas():
    return '🍌 This page has bananas!'
    
@app.route('/bread')
def bread():
    return '🍞 This page has bread!'

@app.route('/', methods=['GET'])
def index():
    return render_template('index.html')

@app.route('/chatbot', methods=['POST'])
def handle_prompt():
    # Read prompt from HTTP request body
    data = request.get_data(as_text=True)
    data = json.loads(data)
    input_text = data['prompt']

    # Create conversation history string
    history = "\n".join(conversation_history)

    # Tokenize the input text and history
    inputs = tokenizer.encode_plus(history, input_text, return_tensors="pt")

    # Generate the response from the model
    outputs = model.generate(**inputs, max_length=60)  # note: as the history grows, the tokenized input can exceed max_length and cause the model to fail

    # Decode the response
    response = tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

    # Add interaction to conversation history
    conversation_history.append(input_text)
    conversation_history.append(response)

    return response

if __name__ == '__main__':
    app.run()
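The max_length note in the code above suggests bounding the history before tokenizing it. One minimal approach is to keep only the most recent exchanges; the helper name and turn count are mine, for illustration.

```python
def truncate_history(conversation_history, max_turns=6):
    """Keep only the most recent entries so the tokenized input stays short.

    The history list alternates user / model entries, so max_turns=6
    keeps the last three user-model exchanges.
    """
    return conversation_history[-max_turns:]

history = [f"turn {i}" for i in range(10)]
recent = truncate_history(history)
```

In the /chatbot route this would replace `"\n".join(conversation_history)` with `"\n".join(truncate_history(conversation_history))`, trading long-range memory for a bounded input size.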

worker.py

🧸💬 An HTTP request to generate a .wav file.
🐐💬 An example of a worker class spawning worker threads for an asynchronous background process, because both sides of the data communication work on priority tasks; we respect this method of communication so each side has working room to complete its pending tasks.
🐐💬 API invocation and error handling for remotely executed API code.

IBM Back-end JavaScript Developer - notes

import requests

def text_to_speech(text, voice=""):
    # Set up Watson Text-to-Speech HTTP API url
    base_url = "https://sn-watson-stt.labs.skills.network"
    api_url = base_url + '/text-to-speech/api/v1/synthesize?output=output_text.wav'

    # Adding voice parameter in api_url if the user has selected a preferred voice
    if voice != "" and voice != "default":
        api_url += "&voice=" + voice

    # Set the headers for our HTTP request
    headers = {
        'Accept': 'audio/wav',
        'Content-Type': 'application/json',
    }

    # Set the body of our HTTP request
    json_data = {
        'text': text,
    }

    # Send a HTTP Post request to Watson Text-to-Speech Service
    response = requests.post(api_url, headers=headers, json=json_data)
    print('text to speech response:', response)
    return response.content
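The prose above also imports a speech_to_text function; a companion sketch in the same style would post the raw audio and pull the transcript out of a Watson-style JSON response. The recognize endpoint path is my assumption mirroring the lab's service layout; verify it against your own service URL before use.

```python
def extract_transcript(stt_json: dict) -> str:
    """Pull the first transcript out of a Watson STT-style JSON response."""
    results = stt_json.get("results", [])
    if not results:
        return ""
    return results[0]["alternatives"][0]["transcript"].strip()

def speech_to_text(audio_binary: bytes) -> str:
    """Post raw audio to the lab's speech-to-text service and return the transcript."""
    import requests  # same third-party dependency text_to_speech uses
    base_url = "https://sn-watson-stt.labs.skills.network"
    api_url = base_url + "/speech-to-text/api/v1/recognize"  # assumed endpoint path
    response = requests.post(api_url, data=audio_binary)
    return extract_transcript(response.json())
```

Keeping the JSON parsing in its own function makes the error handling testable without a live service.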

Configuration

watson.ai

🧸💬 A sample configuration for a Watson API connection.
👧💬 🎈 Create a new instance and an initial connection credential, and you will have all the connection requirements for the Watson API in a second.
👧💬 🎈 See the sample request commands below; they are easy and ready for your implementation.

IBM - Machine Learning with Apache Spark - notes

https://jkaewprateep-8000.theiadockernext-1-labs-prod-theiak8s-4-tor01.proxy.cognitiveclass.ai/speech-to-text/api/v1

curl -X POST -H "Content-Type: application/json" -d '{"prompt": "Hello, how are you today?"}' \
	"https://jkaewprateep-8000.theiadockernext-1-labs-prod-theiak8s-4-tor01.proxy.cognitiveclass.ai/text-to-speech/api/v1/synthesize?output=output_text.wav"

curl -X POST -H "Content-Type: application/json" -d '{"prompt": "Hello, how are you today?"}' \
	"https://jkaewprateep-8000.theiadockernext-1-labs-prod-theiak8s-4-tor01.proxy.cognitiveclass.ai/process-message"

curl "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer <long encryption authentication credential>' \
  -d '{
	"input": "",
	"parameters": {
		"decoding_method": "greedy",
		"max_new_tokens": 200,
		"min_new_tokens": 0,
		"stop_sequences": [],
		"repetition_penalty": 1
	},
	"model_id": "ibm/granite-13b-chat-v2",
	"project_id": "c276757e-855e-413d-868c-c1f3b312c8ce",
	"moderations": {
		"hap": {
			"input": {
				"enabled": true,
				"threshold": 0.5,
				"mask": {
					"remove_entity_value": true
				}
			},
			"output": {
				"enabled": true,
				"threshold": 0.5,
				"mask": {
					"remove_entity_value": true
				}
			}
		}
	}
}'

-----------------------
# Generate an IAM token by using an API key
curl -X POST 'https://iam.cloud.ibm.com/identity/token' -H 'Content-Type: application/x-www-form-urlencoded' \
	-d 'grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=<API_KEY>'


docker build . -t voice-translator-powered-by-watsonx
docker run -p 8001:8001 voice-translator-powered-by-watsonx

huggingface-cli

🧸💬 A sample of downloading a machine-learning model with the CLI.

huggingface-cli login
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct

Sample




🥺💬 Available for hire to write functions
