
PromptFlow Entity Extraction with Azure Functions

This project implements chain-of-thought reasoning and entity extraction using Azure OpenAI GPT-4o, deployed as an Azure Function with observability through OpenTelemetry and Azure Application Insights.

Architecture

Input Text → Chain of Thought Reasoning → Entity Extraction → Structured Output
     ↓                    ↓                      ↓              ↓
Azure Function → PromptFlow → Azure OpenAI → Response + Telemetry
     ↓
Application Insights (Traces, Metrics, Logs)

Features

  • Chain of Thought Reasoning: Step-by-step analysis of input text
  • Entity Extraction: Structured extraction of persons, organizations, locations, dates, products, and other entities
  • Azure Functions: Serverless hosting with HTTP triggers
  • OpenTelemetry: Comprehensive observability with traces, metrics, and logs
  • Azure Application Insights: Centralized monitoring and analytics
  • Error Handling: Robust error handling with detailed logging
  • Health Checks: Built-in health monitoring endpoint

Prerequisites

  1. Azure Subscription with the following resources:

    • Azure OpenAI Service with GPT-4o deployment
    • Azure Functions (Python 3.9+)
    • Azure Application Insights
    • Azure Storage Account
  2. Development Environment:

    • Python 3.9 or higher
    • Azure Functions Core Tools
    • Azure CLI
    • Git

Setup Instructions

1. Clone and Prepare the Project

git clone <your-repo-url>
cd Promptflowex

2. Install Dependencies

pip install -r requirements.txt

3. Configure Environment Variables

Copy the template files and update with your Azure resource details:

cp .env.template .env
cp local.settings.json.template local.settings.json

Update the following in your .env and local.settings.json:

  • AZURE_OPENAI_ENDPOINT: Your Azure OpenAI service endpoint
  • AZURE_OPENAI_API_KEY: Your Azure OpenAI API key
  • AZURE_OPENAI_DEPLOYMENT_NAME: Your GPT-4o deployment name
  • APPLICATIONINSIGHTS_CONNECTION_STRING: Your Application Insights connection string
  • AzureWebJobsStorage: Your Azure Storage connection string
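A common source of startup failures is a missing or empty setting. A small validation helper (a sketch; the variable names match the list above, but the helper itself is not part of the project) can fail fast before the function starts:

```python
import os

# Settings required by the function app (names from the list above).
REQUIRED_SETTINGS = [
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_DEPLOYMENT_NAME",
    "APPLICATIONINSIGHTS_CONNECTION_STRING",
    "AzureWebJobsStorage",
]


def missing_settings(env=None):
    """Return the names of required settings that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_SETTINGS if not env.get(name)]

# Example: call missing_settings() at startup and abort if it is non-empty.
```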

4. Create Azure OpenAI Connection

The application will automatically create the PromptFlow connection, or you can create it manually:

# Via Azure CLI (if using Azure ML workspace)
az ml connection create -f azure_openai_connection.yaml -g <resource-group> -w <workspace-name>

5. Run Locally

# Start the Azure Functions runtime
func start

# The function will be available at:
# POST http://localhost:7071/api/process_text
# GET  http://localhost:7071/api/health

API Usage

Process Text Endpoint

POST /api/process_text

Request Body:

{
  "text": "John Smith from Microsoft will visit our New York office on December 15th, 2024 to discuss the new Azure AI services."
}

Response:

{
  "status": "success",
  "input": "John Smith from Microsoft will visit our New York office on December 15th, 2024 to discuss the new Azure AI services.",
  "reasoning": "Step-by-step analysis of the text...",
  "extracted_entities": {
    "persons": ["John Smith"],
    "organizations": ["Microsoft"],
    "locations": ["New York"],
    "dates": ["December 15th, 2024"],
    "products": ["Azure AI services"],
    "other_entities": {}
  },
  "processing_time_ms": 1250.75
}
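When consuming the response programmatically, it helps to validate its shape before use. A minimal sketch (the field names come from the example above; the helper is illustrative, not part of the project):

```python
# Entity fields expected in a successful process_text response.
EXPECTED_ENTITY_KEYS = {
    "persons", "organizations", "locations",
    "dates", "products", "other_entities",
}


def parse_response(payload: dict) -> dict:
    """Validate a process_text response and return its entities."""
    if payload.get("status") != "success":
        raise ValueError(f"Request failed with status: {payload.get('status')}")
    entities = payload.get("extracted_entities", {})
    missing = EXPECTED_ENTITY_KEYS - entities.keys()
    if missing:
        raise ValueError(f"Response missing entity fields: {sorted(missing)}")
    return entities
```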

Health Check Endpoint

GET /api/health

Response:

{
  "status": "healthy",
  "service": "promptflow-entity-extraction",
  "timestamp": "2024-11-05T10:30:00.000Z",
  "version": "1.0.0"
}
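On the server side, the handler producing this payload can be as small as the sketch below (the service name and version mirror the example response; this is not the project's actual handler):

```python
from datetime import datetime, timezone


def health_payload() -> dict:
    """Build the health-check response body with a UTC timestamp."""
    now = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return {
        "status": "healthy",
        "service": "promptflow-entity-extraction",
        "timestamp": now.replace("+00:00", "Z"),
        "version": "1.0.0",
    }
```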

Testing Examples

Example 1: Business Text

curl -X POST http://localhost:7071/api/process_text \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Sarah Johnson from Contoso Corporation will attend the quarterly review meeting in Seattle on January 20th, 2025 to present the Q4 financial results."
  }'

Example 2: Technical Text

curl -X POST http://localhost:7071/api/process_text \
  -H "Content-Type: application/json" \
  -d '{
    "text": "The deployment of the new microservices architecture on Azure Kubernetes Service is scheduled for March 1st, 2025. The team lead, Alex Chen, will coordinate with the DevOps team in London."
  }'
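The same calls can be made from Python. The sketch below builds the request exactly as the curl commands do, but does not send it, since that requires the function to be running locally (the URL matches the local endpoint above):

```python
import json
import urllib.request


def build_request(text: str,
                  url: str = "http://localhost:7071/api/process_text"):
    """Build a POST request matching the curl examples above."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (with `func start` running):
#   with urllib.request.urlopen(build_request("Sarah Johnson ...")) as resp:
#       print(json.load(resp))
```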

Deployment to Azure

1. Create Azure Resources

# Create resource group
az group create --name rg-promptflow --location eastus

# Create storage account
az storage account create --name stpromptflow --resource-group rg-promptflow --location eastus --sku Standard_LRS

# Create Application Insights
az monitor app-insights component create --app promptflow-insights --location eastus --resource-group rg-promptflow

# Create Function App
az functionapp create --resource-group rg-promptflow --consumption-plan-location eastus --runtime python --runtime-version 3.9 --functions-version 4 --name func-promptflow --storage-account stpromptflow --app-insights promptflow-insights

2. Configure Application Settings

# Set environment variables in Azure Function App
az functionapp config appsettings set --name func-promptflow --resource-group rg-promptflow --settings \
  "AZURE_OPENAI_ENDPOINT=<your-endpoint>" \
  "AZURE_OPENAI_API_KEY=<your-key>" \
  "AZURE_OPENAI_DEPLOYMENT_NAME=gpt-4o" \
  "OTEL_SERVICE_NAME=promptflow-entity-extraction"

3. Deploy the Function

# Deploy using Azure Functions Core Tools
func azure functionapp publish func-promptflow

Observability and Monitoring

OpenTelemetry Features

The application includes comprehensive observability:

  1. Distributed Tracing:

    • Request tracing across all components
    • Custom spans for prompt flow operations
    • Context propagation and baggage
  2. Metrics:

    • Request counters with success/error status
    • Processing duration histograms
    • Custom business metrics
  3. Logging:

    • Structured logging with correlation IDs
    • Error logging with stack traces
    • Performance logging
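The two core metrics above (request counter with success/error status, and a duration histogram) can be illustrated without any OpenTelemetry dependency. The stand-in below records the same signals using only the standard library; it is a sketch of what the instrumentation tracks, not the project's actual OpenTelemetry setup:

```python
import time
from collections import Counter


class Telemetry:
    """Stdlib stand-in for the request counter and duration histogram."""

    def __init__(self):
        self.requests = Counter()   # (operation, status) -> count
        self.durations_ms = []      # raw samples for the histogram

    def record(self, operation, func, *args, **kwargs):
        """Run func, counting success/error and timing the call."""
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            self.requests[(operation, "success")] += 1
            return result
        except Exception:
            self.requests[(operation, "error")] += 1
            raise
        finally:
            self.durations_ms.append((time.perf_counter() - start) * 1000)
```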

Application Insights Dashboard

Monitor your application in Azure Application Insights:

  1. Live Metrics: Real-time request and performance data
  2. Application Map: Service dependencies and call flows
  3. Performance: Request duration and throughput analysis
  4. Failures: Error rates and exception analysis
  5. Logs: Structured log analysis with KQL queries

Sample KQL Queries

// Request performance over time
requests
| where timestamp > ago(1h)
| summarize avg(duration), count() by bin(timestamp, 5m)

// Error analysis
exceptions
| where timestamp > ago(24h)
| summarize count() by type, outerMessage

// Custom metrics
customMetrics
| where name in ("promptflow_requests_total", "promptflow_processing_duration_ms")
| summarize avg(value) by name, bin(timestamp, 1h)

Project Structure

Promptflowex/
├── function_app.py              # Azure Function app with OpenTelemetry
├── flow.dag.yaml               # PromptFlow definition
├── chain_of_thought.jinja2     # Chain of thought prompt template
├── entity_extraction.jinja2    # Entity extraction prompt template
├── requirements.txt            # Python dependencies
├── host.json                   # Azure Functions host configuration
├── azure_openai_connection.yaml # Connection configuration
├── .env.template              # Environment variables template
├── local.settings.json.template # Local Azure Functions settings
├── .gitignore                 # Git ignore rules
└── README.md                  # This documentation

Troubleshooting

Common Issues

  1. Connection Errors:

    • Verify Azure OpenAI endpoint and API key
    • Check network connectivity
    • Ensure proper RBAC permissions
  2. PromptFlow Errors:

    • Verify flow.dag.yaml syntax
    • Check template file paths
    • Ensure connection exists
  3. Observability Issues:

    • Verify Application Insights connection string
    • Check OpenTelemetry configuration
    • Ensure proper instrumentation

Debug Mode

Enable verbose, development-mode behavior when running locally by setting:

export PYTHONPATH="."
export AZURE_FUNCTIONS_ENVIRONMENT="Development"

Best Practices

Security

  • Use Azure Key Vault for sensitive configuration
  • Implement proper authentication and authorization
  • Enable HTTPS only
  • Use managed identities where possible

Performance

  • Monitor cold start times
  • Optimize prompt templates for efficiency
  • Implement proper caching strategies
  • Use appropriate Function App hosting plans

Monitoring

  • Set up alerts for error rates and performance thresholds
  • Create custom dashboards for business metrics
  • Implement proper log retention policies
  • Use structured logging for better analysis
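As an illustration of the structured-logging recommendation, the sketch below emits each record as one JSON object carrying a correlation ID, which keeps logs queryable with KQL (the field names are illustrative, not the project's actual log schema):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""

    def format(self, record):
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        }
        return json.dumps(entry)


logger = logging.getLogger("promptflow")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each line of output is one JSON object, e.g. with a request's correlation ID:
logger.info("request processed", extra={"correlation_id": "abc-123"})
```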

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests and documentation
  5. Submit a pull request

Support

For issues and questions:

  1. Check the troubleshooting section
  2. Review Azure Functions and PromptFlow documentation
  3. Open an issue in the repository
  4. Contact the development team
