Join us for two expert talks on building reliable LLM applications and using AI to enhance observability. From testing prompts to monitoring complex systems — we’ll explore how GenAI is powering real-world solutions.
47 RSVP'd
Organized by: GDG Cloud Kraków
Join us for a deep dive into the practical side of AI and large language models (LLMs) in cloud-native environments.
In this session, we’ll explore the challenges and strategies behind building, testing, securing, and monitoring modern AI applications—from smart prompting to system observability with GenAI at the core.
🎤 Agenda:
Lukasz Stanczak - Principal II Software Engineer Sabre
In today's fast-paced digital world, Site Reliability Engineering (SRE) is crucial for maintaining system reliability and performance. Lukasz Stanczak explores how Artificial Intelligence (AI) is revolutionizing SRE practices, making them more efficient and effective.
Key Topics:
Mete Atamel - Senior Developer Advocate Google
When you change prompts or modify the Retrieval-Augmented Generation (RAG) pipeline in your LLM applications, how do you know it’s making a difference? You don’t—until you measure. But what should you measure, and how? Similarly, how can you ensure your LLM app is resilient against prompt injections or avoids providing harmful responses? More robust guardrails on inputs and outputs are needed beyond basic safety settings.
In this talk, we’ll explore various evaluation frameworks such as Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM outputs, understand the types of metrics they offer, and how these metrics are useful. We’ll also dive into testing and security frameworks like LLM Guard to ensure your LLM apps are safe and limited to precisely what you need.
Thursday, May 15, 2025
3:30 PM – 6:00 PM (UTC)
Developer Advocate
Sabre
Principal II Software Engineer
Contact Us