Machine Learning Talks

GDG Stuttgart
Sat, Aug 26, 1:00 PM (CEST)

23 RSVP'ed

Discover the forefront of machine learning with GDG Stuttgart! Join us for an enlightening event featuring two captivating talks: (1) Machine Unlearning (by Yuqicheng Zhu) and (2) Bias in Language Models (by Sakshi Shukla).


About this event

Join GDG Stuttgart for our exciting events where we delve into the fascinating world of machine learning! Our group is dedicated to bringing you the latest insights and expertise from leading minds in the field.

Get ready for an insightful evening as we host two compelling talks that explore intriguing facets of machine learning. Our first talk, by Yuqicheng Zhu, will unravel the concept of "machine unlearning". While machine learning has achieved significant success, selectively removing information from a trained model when needed remains challenging. At the same time, a technique known as membership inference has emerged that can determine whether specific data was used to train a model, raising substantial privacy concerns. Machine unlearning was conceived to address these issues. We will explore why it has become a necessity in the ever-evolving landscape of machine learning, then delve into the mechanisms behind it and how it can be implemented effectively to mitigate privacy risks and strengthen the integrity of machine learning processes.
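To give a feel for the membership-inference idea the talk touches on, one of the simplest attacks thresholds the model's per-sample loss: models typically incur lower loss on data they were trained on than on unseen data. A minimal sketch, with made-up loss values purely for illustration (this is not the specific attack the talk will cover):

```python
# Toy loss-threshold membership-inference attack.
# Premise: a model's loss is usually lower on training ("member") samples
# than on unseen ("non-member") samples, so a threshold on the per-sample
# loss can often separate the two groups.
# All loss values below are invented for illustration.

member_losses = [0.05, 0.10, 0.08, 0.20, 0.12]      # samples seen in training
nonmember_losses = [0.90, 1.40, 0.75, 1.10, 0.95]   # samples never seen

def infer_membership(loss, threshold=0.5):
    """Guess 'member' if the model's loss on the sample is below the threshold."""
    return loss < threshold

# Evaluate the attack's accuracy on the two toy groups.
correct = sum(infer_membership(l) for l in member_losses) \
        + sum(not infer_membership(l) for l in nonmember_losses)
total = len(member_losses) + len(nonmember_losses)
accuracy = correct / total
print(f"attack accuracy: {accuracy:.0%}")  # 100% on this cleanly separated toy data
```

That such a crude signal can leak training-set membership is exactly why the ability to make a model "forget" specific data has become a serious research topic.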

In our second talk, Sakshi Shukla will delve into the vital topic of bias in language models. As language models continue to shape interactions and communications across various platforms, biases can arise from skewed training data, from reflecting real-world prejudice, or from the amplification of popular trends, and understanding and mitigating them is of paramount importance. The most common form of bias observed in language models concerns gender roles in language, a property the models inherit from the text they are trained on. To reduce gender bias, debiasing layers can be added to help the model generalize word roles with context and fairness, rather than typecasting words by gender.
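As a small taste of what embedding-level gender debiasing can look like (one well-known family of techniques, not necessarily the method the talk will present), a "gender direction" is estimated from a definitional word pair and the component of a supposedly neutral word along that direction is projected out. A sketch with made-up 4-dimensional toy vectors:

```python
import numpy as np

# Toy embedding-level debiasing sketch (in the spirit of "hard debiasing"):
# estimate a gender direction from a definitional pair, then remove a
# neutral word's component along that direction.
# All vectors below are invented for illustration.
he    = np.array([ 1.0, 0.0, 0.3, 0.2])
she   = np.array([-1.0, 0.0, 0.3, 0.2])
nurse = np.array([-0.4, 0.5, 0.3, 0.1])  # neutral word with a gender skew

# Gender direction: normalized difference of the definitional pair.
g = he - she
g = g / np.linalg.norm(g)

# Project out the gender component: v' = v - (v . g) g
nurse_debiased = nurse - np.dot(nurse, g) * g

print(np.dot(nurse, g))           # before: nonzero gender component
print(np.dot(nurse_debiased, g))  # after: ~0, orthogonal to the gender direction
```

A full pipeline would average over many definitional pairs and decide which words count as gender-neutral, which is where much of the real difficulty lies.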

GDG Stuttgart is proud to provide a platform for machine learning enthusiasts, researchers, and practitioners to come together, share their knowledge, and engage in thought-provoking discussions. Whether you're a seasoned professional or just beginning your journey into the world of machine learning, this event promises to expand your horizons and deepen your understanding of cutting-edge techniques.

Don't miss out on this opportunity to connect with like-minded individuals, expand your network, and stay at the forefront of machine learning advancements. Mark your calendars and join us for an evening of exploration, discovery, and inspiration. We can't wait to see you there!


  • Yuqicheng Zhu
    PhD Student, University of Stuttgart & Bosch Center for Artificial Intelligence
  • Daniel Camacho Corrales
    GDG Organiser, Mercedes-Benz Group AG