Join DevFest Scotland Online for a brilliant session on LLM detectors! As large language models (LLMs) become increasingly skilled at writing human-like text, the ability to detect what they generate is critical. This session explores homoglyph-based attacks, a novel attack vector that effectively bypasses state-of-the-art LLM detectors.
15 RSVP'd
We'll begin by explaining the idea behind homoglyphs: characters that look alike but are encoded differently. You'll learn how these can be used to manipulate tokenization and evade detection systems. We'll cover the mechanisms by which homoglyphs alter text representation, discuss their impact on existing LLM detectors, and present a comprehensive evaluation of their effectiveness against various detection methods.
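To give a flavour of the idea ahead of the session, here is a minimal illustrative sketch in Python (not the speakers' actual tooling; the character map is a hypothetical example): swapping a few Latin letters for visually identical Cyrillic look-alikes leaves the text reading the same to a human while changing its underlying encoding.

    # Minimal sketch of a homoglyph substitution (illustrative only).
    import unicodedata

    # Hypothetical homoglyph map: Latin letters -> Cyrillic look-alikes.
    HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

    def apply_homoglyphs(text: str) -> str:
        """Replace mapped characters with visually similar counterparts."""
        return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

    original = "a novel attack vector"
    perturbed = apply_homoglyphs(original)

    print(original, "|", perturbed)              # visually near-identical
    print(original == perturbed)                 # False: different code points
    print(unicodedata.name("a"))                 # LATIN SMALL LETTER A
    print(unicodedata.name("\u0430"))            # CYRILLIC SMALL LETTER A
    print(original.encode("utf-8"))              # single-byte ASCII sequence
    print(perturbed.encode("utf-8"))             # multi-byte sequence -> different tokens

Because the perturbed string encodes to different bytes, a tokenizer splits it into different tokens, so detectors that rely on statistics over normal tokenizations can be thrown off; the session will dig into exactly how far this effect goes.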
Join us for an engaging exploration of this emerging threat and gain insight into how security researchers can stay ahead of evolving evasion techniques.
Accenture Labs
Technology Research Specialist
Charles River Laboratories
Tech Director | AI/ML GDE
Charles River Laboratories
Organizer
JP Morgan Chase & Co
Organizer
JPMorgan Chase & Co.
Organizer
Glasgow Caledonian University
Organiser
Jordanhill School
Youth Team Leader
University of Strathclyde
Team member
University of Glasgow
Team member
Charles River Laboratories
Team member
JP Morgan
Technical BA
PGT Student