AI in Action: From Prompts to Production

Sabre Poland Sp. z o.o., 8 Księdza Józefa Tischnera, Kraków, 30-418

GDG Cloud Kraków

Join us for two expert talks on building reliable LLM applications and using AI to enhance observability. From testing prompts to monitoring complex systems — we’ll explore how GenAI is powering real-world solutions.

May 15, 3:30 – 6:00 PM (UTC)


Key Themes

Build with AI, Cloud, Duet AI

About this event

Organized by: GDG Cloud Kraków

Join us for a deep dive into the practical side of AI and large language models (LLMs) in cloud-native environments.

In this session, we’ll explore the challenges and strategies behind building, testing, securing, and monitoring modern AI applications—from smart prompting to system observability with GenAI at the core.

🎤 Agenda:


AI in Observability: Discover how AI enhances modern observability

Lukasz Stanczak - Principal II Software Engineer, Sabre

In today's fast-paced digital world, Site Reliability Engineering (SRE) is crucial for maintaining system reliability and performance. Lukasz Stanczak explores how Artificial Intelligence (AI) is revolutionizing SRE practices, making them more efficient and effective.

Key Topics:

  • AI-driven Monitoring and Alerting: Discover how AI enhances monitoring systems, providing real-time insights and predictive analytics to proactively address issues before they escalate.
  • Reducing Mean Time to Resolution (MTTR): Learn how AI accelerates root cause analysis, intelligent triaging, and automated remediation, significantly reducing MTTR and improving system uptime.
  • Improving Sleep Quality: Understand how AI minimizes alert fatigue and ensures system reliability, allowing SREs to rest easy knowing that automated incident management is in place.
  • Boosting Productivity: Explore how AI frees up time for SREs to focus on strategic tasks, enhances collaboration with AI-driven insights, and fosters continuous learning and improvement.


Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications.

Mete Atamel - Senior Developer Advocate, Google

When you change prompts or modify the Retrieval-Augmented Generation (RAG) pipeline in your LLM applications, how do you know it’s making a difference? You don’t—until you measure. But what should you measure, and how? Similarly, how can you ensure your LLM app is resilient against prompt injections or avoids providing harmful responses? More robust guardrails on inputs and outputs are needed beyond basic safety settings.

In this talk, we’ll explore various evaluation frameworks such as Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM outputs, understand the types of metrics they offer, and see how those metrics are useful in practice. We’ll also dive into testing and security frameworks like LLM Guard to ensure your LLM apps are safe and limited to precisely what you need.
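
As a rough illustration of the kind of metric-based evaluation these frameworks provide (this is not material from the talk itself), here is a minimal sketch using DeepEval's answer-relevancy metric. The prompt and response strings are made-up placeholders, and running it assumes an LLM judge is configured (DeepEval uses an OpenAI model by default):

    # Minimal DeepEval sketch: score one LLM response for answer relevancy.
    # The input/output strings below are placeholders, not real app data.
    from deepeval.metrics import AnswerRelevancyMetric
    from deepeval.test_case import LLMTestCase

    test_case = LLMTestCase(
        input="What time does check-in start?",      # prompt sent to the app
        actual_output="Check-in starts at 3 PM.",    # response the LLM produced
    )

    # The metric uses an LLM as a judge; the threshold turns the 0-1 score
    # into a pass/fail signal you can track across prompt or RAG changes.
    metric = AnswerRelevancyMetric(threshold=0.7)
    metric.measure(test_case)
    print(metric.score, metric.reason)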


When

Thursday, May 15, 2025
3:30 PM – 6:00 PM (UTC)

Speakers

  • Mete Atamel

    Google

    Developer Advocate

  • Lukasz Stanczak

    Sabre

    Principal II Software Engineer

Partners

Sabre Polska

GFT Polska

Infogain

Organizers

  • Slawek Kozlowski

    Sabre

    GDG Organizer

  • Maria Zaremba

    GFT

    Marketing Specialist

  • Nadiia Sladkovska

    GDG Organizer

  • Krzysztof Stec

    Infogain Technologies

    Director Software Engineering

  • Michał Misiuda

    GDG Organizer
