Homoglyph-Based Attacks: Circumventing LLM Detectors (DevFest Surrey 2024 Virtual)

GDG Surrey

As large language models (LLMs) become increasingly skilled at writing human-like text, the ability to detect what they generate is critical. This session explores homoglyph-based attacks, a novel attack vector that effectively bypasses state-of-the-art LLM detectors.

Nov 24, 6:00 – 7:00 PM (UTC)

46 RSVP'd

Key Themes

AI, Build with AI, Cloud, Community Building, DevFest, Gemini, Machine Learning

About this event

We'll begin by explaining the idea behind homoglyphs, characters that look similar but are encoded differently. You'll learn how these can be used to manipulate tokenization and evade detection systems. We'll cover the mechanisms of how homoglyphs alter text representation, discuss their impact on existing LLM detectors, and present a comprehensive evaluation of their effectiveness against various detection methods.
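To make the core idea concrete ahead of the session, here is a minimal sketch of homoglyph substitution. This is an illustration of the general technique, not the speaker's implementation; the character map and function name are our own. Visually identical characters with different Unicode code points produce different byte sequences, which is why a tokenizer (and a detector built on top of it) sees entirely different input.

```python
# Map a few Latin letters to visually similar Cyrillic homoglyphs.
# (Illustrative subset; real attacks draw on much larger confusables tables.)
HOMOGLYPHS = {
    "a": "\u0430",  # CYRILLIC SMALL LETTER A
    "e": "\u0435",  # CYRILLIC SMALL LETTER IE
    "o": "\u043e",  # CYRILLIC SMALL LETTER O
    "p": "\u0440",  # CYRILLIC SMALL LETTER ER
    "c": "\u0441",  # CYRILLIC SMALL LETTER ES
}

def substitute_homoglyphs(text: str) -> str:
    """Replace each mapped Latin character with its Cyrillic look-alike."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

original = "open space"
attacked = substitute_homoglyphs(original)

print(attacked)                  # renders (near-)identically on screen
print(original == attacked)      # False: the code points differ
print(original.encode("utf-8"))  # one byte per Latin letter
print(attacked.encode("utf-8"))  # two bytes per Cyrillic letter
```

Because the substituted string no longer matches the byte sequences a detector's tokenizer was trained on, the statistical signals the detector relies on are disrupted even though a human reader sees the same text.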

Join us for an engaging exploration of this emerging threat and to gain insight into how security researchers can stay ahead of evolving evasion techniques.

You’ll gain valuable insights into: 

🔹 The mechanics of homoglyphs and how they disrupt tokenization. 

🔹 The impact of homoglyphs on current LLM detection systems. 

🔹 Cutting-edge evaluation of these methods against top detectors.

Speaker

  • Aldan Creo

    Accenture Labs

    Technology Research Specialist

Organizers

  • Priti Yadav

    GDG Organizer

  • Yashi Girdhar

    Software Developer, Event Manager

  • Sam Huo

    Senior Software Developer, Co-Organizer

  • Sowndarya Venkateswaran

    Data Scientist, Instructor

  • Riya Eliza

    The University of British Columbia

    Data Scientist, Co-Organizer, GDG Surrey

  • Darshan Parikh

    Autodesk Inc.

    Android Engineer | Co-Organizer

  • Justin Xiao

    British Columbia Institute of Technology

    Outreach Coordinator

  • Bishneet Rekhi

    Volunteer

Contact Us