Companion materials
Join Datadog and Google Cloud for a webinar on LLM observability, with a focus on monitoring Vertex AI with Gemini. You'll learn how to get the most out of your large language models and about Datadog's pragmatic approach to monitoring and observability.
We’ll discuss generative AI application monitoring topics including:
- Hallucinations and evaluations
- Cost, speed, and quality trade-offs
- LLM application chains and end-to-end observability
- Trust and privacy concerns
In this session, you'll learn about generative AI deployments and how Datadog monitors the health and performance of your Vertex AI and Gemini LLM footprint.
- Datadog LLM Observability instrumentation documentation
- Innovate faster with enterprise-ready AI, enhanced by Gemini models
- Google Cloud TPU
- Monitor your Google Gemini apps with Datadog LLM Observability
- Best practices for monitoring LLM prompt injection attacks to protect sensitive data
- Get granular LLM observability by instrumenting your LLM chains
- Monitor Google Cloud Vertex AI with Datadog
- Try Gemini 2.0 models
- Google AI Studio
- Vertex AI
- Gemini for Google Cloud
- Gemini for Google Workspace
- Datadog DASH
- The CIO’s Guide to Artificial Intelligence
GPU monitoring is in private preview. To learn more or request access to the preview, email gpu-monitoring-product@datadoghq.com.