Join us for a power-packed night of learning, sharing, and networking at AI Dev Day - Silicon Valley.
We are excited to bring the AI developer community together to learn and discuss the latest trends, practical experiences, and best practices in AI, LLMs, generative AI, and machine learning.
Tech Talk 1: Evaluating LLM-based applications
Speaker: Josh Tobin, Founder @Gantry
Abstract: Evaluating LLM-based applications can feel like more of an art than a science. In this talk, we will give a hands-on introduction to evaluating language models. You will come away with knowledge and tools you can use to evaluate your own applications, and answers to questions like: Where do I get evaluation data from, anyway? Is it possible to evaluate generative models in an automated way? What metrics can I use? What is the role of human evaluation?
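As a small taste of the automated-evaluation question raised in the abstract, here is a minimal sketch (not from the talk) of scoring an LLM application against a hand-labeled evaluation set with a token-overlap F1 metric; the predict() stub and the example data are illustrative assumptions standing in for a real application and dataset.

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common automated metric for Q&A-style outputs."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def predict(question: str) -> str:
    # Placeholder for the LLM-based application under evaluation (hypothetical).
    return "Paris is the capital of France."

eval_set = [
    {"question": "What is the capital of France?", "reference": "Paris"},
]

scores = [token_f1(predict(ex["question"]), ex["reference"]) for ex in eval_set]
print(f"mean token F1: {sum(scores) / len(scores):.2f}")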
Tech Talk 2: Real-Time Training and Scoring in AI/ML
Speaker: Wes Wagner, Solutions Engineer @Redpanda
Abstract: This session will provide a broad understanding of how to train and score a real-time model on streaming data, facilitated by Kafka/Redpanda, in contrast with traditional batch-processing methods. We will discuss the important aspects of time series data and time-aware features in the context of real-time analytics. Additionally, we will cover how to merge multiple data streams for more complex feature creation and scoring.
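To make the contrast with batch processing concrete, here is a minimal sketch of scoring and incrementally updating a model one event at a time; the in-memory generator stands in for a Kafka/Redpanda consumer, and the feature, label, and learning-rate choices are illustrative assumptions rather than anything from the session.

import random
import time

def event_stream():
    """Simulated stream of timestamped events (in production, a Kafka/Redpanda topic)."""
    while True:
        x = random.uniform(0, 10)
        yield {"ts": time.time(), "x": x, "y": 3.0 * x + random.gauss(0, 1)}

# Online linear regression via stochastic gradient descent: score first, then learn.
w, b, lr = 0.0, 0.0, 0.01
for _, event in zip(range(1000), event_stream()):
    x, y = event["x"], event["y"]
    y_hat = w * x + b            # real-time score for the incoming event
    error = y_hat - y
    w -= lr * error * x          # incremental update: no batch retraining needed
    b -= lr * error

print(f"learned weight={w:.2f}, bias={b:.2f}")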
Tech Talk 3: Working with LLMs at Scale
Speaker: Yujian Tang, Developer @Zilliz
Abstract: We’ll introduce LLMs and the two main problems they face in production: high cost and lack of domain knowledge. We then introduce vector databases as a solution to these problems. We cover how a vector database can facilitate data injection and caching through the use of vector embeddings.
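For intuition, here is a minimal sketch of embedding-based caching using a toy in-memory vector store; the hashing "embedding", the similarity threshold, and the placeholder LLM call are illustrative assumptions, whereas a real system would use a proper embedding model and a vector database such as Milvus / Zilliz Cloud.

import hashlib
import math

DIM = 64

def embed(text: str) -> list[float]:
    """Toy feature-hashing embedding (stands in for a real embedding model)."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        token = token.strip("?.!,")
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

cache: list[tuple[list[float], str]] = []   # (embedding, cached LLM response)

def answer(query: str, threshold: float = 0.9) -> str:
    q = embed(query)
    # Cache lookup: reuse a stored response if a semantically similar query exists.
    for vec, response in cache:
        if cosine(q, vec) >= threshold:
            return response + "  (served from cache)"
    response = f"LLM response to: {query}"   # placeholder for a real LLM call
    cache.append((q, response))
    return response

print(answer("What is a vector database?"))
print(answer("what is a vector database"))  # similar query hits the cache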
Lightning Talk 1: From Generic To Genius: Personalize Generative AI
Speaker: Ryan Michael, VP of Engineering @Kaskada/DataStax
Abstract: Generative AI has already demonstrated immense value, but systems like ChatGPT don’t know anything about who we are as individuals. At Kaskada, we have developed a compute engine to help LLMs understand who they’re talking to and what they’re talking about. Kaskada does this by augmenting prompts with real-time contextual information and making it easy to recreate the context of past prompts, significantly accelerating the prompt engineering process. In this talk, we introduce the abstraction that makes this possible: the concept of timelines. Timelines can be interpreted as a history of changes or as snapshots at specific time points.
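To illustrate the two readings of a timeline mentioned in the abstract, here is a minimal sketch of a timeline as an ordered sequence of timestamped changes that can be viewed either as a full history or as a snapshot at a chosen time; the data model and the purchase example are illustrative assumptions, not Kaskada's actual implementation.

from bisect import bisect_right

class Timeline:
    def __init__(self):
        self._times = []    # sorted event timestamps
        self._values = []   # value recorded at each timestamp

    def record(self, ts: float, value) -> None:
        """Append a change; events are assumed to arrive in time order."""
        self._times.append(ts)
        self._values.append(value)

    def history(self):
        """Interpretation 1: the timeline as a history of changes."""
        return list(zip(self._times, self._values))

    def snapshot(self, at: float):
        """Interpretation 2: the latest value as of a specific time point."""
        i = bisect_right(self._times, at)
        return self._values[i - 1] if i else None

purchases = Timeline()
purchases.record(1.0, {"total_spent": 10})
purchases.record(5.0, {"total_spent": 35})

print(purchases.history())        # full change history
print(purchases.snapshot(3.0))    # state as of t=3.0 -> {'total_spent': 10}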
Lightning Talk 2: Practical Data Considerations for Building Production-Ready LLM Applications
Speaker: Simon Suo, Cofounder / CTO @LlamaIndex
Abstract: Building an LLM application is easy, but putting it in production is hard. As an AI engineer, you are starting to ask: how do I better manage and structure my data to improve my Q&A system? In this talk, we will discuss practical data considerations for building production-ready LLM applications. You will walk away with concepts and tools to help you diagnose problems and improve your application.
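As one example of the kind of data consideration the abstract refers to, here is a minimal sketch of chunking documents into overlapping windows before indexing them for a Q&A system; the chunk size and overlap values are illustrative assumptions, not recommendations from the talk.

def chunk(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding and indexing."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

document = "Building an LLM application is easy, but putting it in production is hard. " * 10
pieces = chunk(document)
print(f"{len(pieces)} chunks; first chunk:\n{pieces[0]!r}")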
July 25 – 26, 2023
11:30 PM – 3:00 AM (UTC)
11:30 PM | Check-in, food, drinks, and networking
12:30 AM | Tech talks and panel
1:00 AM | Open discussion and mixer
Hosted by AICamp