Fine-Tune Gemma Models in Keras using LoRA

GDG on Campus Sudan University of Science & Technology - Khartoum, Sudan

Learn how to fine-tune Gemma models in Keras using LoRA (Low-Rank Adaptation) to optimize pre-trained models efficiently while reducing computational costs.

Mar 7, 9:00 – 10:30 PM (UTC)

21 RSVP'd

Key Themes

Build with AI · International Women's Day · Machine Learning · Women Techmakers

About this event

In this session, we'll explore how to fine-tune Gemma models using Keras and LoRA (Low-Rank Adaptation), an efficient method for adapting pre-trained models to specific tasks at a fraction of the usual computational cost. LoRA freezes the original model weights and trains only small low-rank update matrices, so just a tiny fraction of parameters need gradients. You'll learn how to use this to improve task performance without retraining the entire model, making fine-tuning practical even on resource-constrained hardware.
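To give a flavor of the workflow the session covers, here is a minimal sketch of LoRA fine-tuning with the KerasNLP Gemma API. The preset name, LoRA rank, sequence length, optimizer settings, and training data below are illustrative assumptions, not the session's exact code.

```python
import keras
import keras_nlp

# Load a pre-trained Gemma model (preset name is an assumption;
# any Gemma preset available in KerasNLP works the same way).
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Enable LoRA: freezes the original weights and injects small
# trainable low-rank update matrices (rank=4 is illustrative).
gemma_lm.backbone.enable_lora(rank=4)

# Shorter sequences keep memory use low on constrained hardware.
gemma_lm.preprocessor.sequence_length = 256

# Compile with a standard causal language modeling objective.
optimizer = keras.optimizers.AdamW(learning_rate=5e-5, weight_decay=0.01)
optimizer.exclude_from_weight_decay(var_names=["bias", "scale"])
gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=optimizer,
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

# Fine-tune on prompt/response strings (placeholder data).
data = ["Instruction: Say hello.\nResponse: Hello!"]
gemma_lm.fit(data, epochs=1, batch_size=1)
```

Because only the low-rank adapter weights are trainable, the number of updated parameters drops from billions to a few million, which is what makes this approach feasible outside large GPU clusters.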

Join Eman Elrefai, an NLP Engineer, to discover how to get the most out of modern language models while maintaining performance efficiency and cost-effectiveness. This session is ideal for researchers and developers interested in optimizing NLP models with advanced techniques.


Speakers

  • Eman Elrefai

  • Sukaina Asmieda

    GDG Benghazi & Women Techmakers

    Lead and Ambassador

Organizers

  • Hiba Eljozouly

    Organizer

  • Dr. Anwar Dafa-Alla

    Sudanese Researchers Foundation (SRF)

    Chapter Supervisor
