Niklas Bunzel and Raphael Antonius Frick
The presentation explores the security challenges and opportunities posed by Generative AI (GenAI). While GenAI offers tremendous potential, it also has a darker side, such as its use in creating deepfakes that can spread misinformation, manipulate political events, or facilitate fraud, as demonstrated in a live deepfake example. Malicious variants of GenAI are used in phishing attacks, social engineering schemes, and the creation of malware. Additionally, GenAI enables more intelligent network attacks through autonomous botnets that decrease the risk of exposure.
Despite these risks, GenAI also provides defensive advantages by enhancing security measures, such as improving threat detection, strengthening access control, and identifying code vulnerabilities. This is exemplified in a live demo showcasing the detection of deepfakes and AI-generated content.
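To illustrate what such a detector can look like, the following minimal sketch builds a binary real-vs-fake image classifier on a pretrained backbone. The architecture, class layout, and use of ResNet-18 are illustrative assumptions; the live demo in the presentation may rely on a different detector.

```python
# Hedged sketch of an AI-generated-content detector: a binary
# real-vs-fake image classifier built on a pretrained backbone.
import torch
import torch.nn as nn
from torchvision import models

def build_fake_image_detector():
    # Reuse ImageNet features; replace the classification head with a
    # 2-class output (class 0 = real, class 1 = AI-generated).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)
    return backbone

# At inference time, a softmax over the logits yields a "fake" probability:
# probs = torch.softmax(detector(image_batch), dim=1); p_fake = probs[:, 1]
```

In practice such a head would be fine-tuned on labeled real and generated images; the sketch only shows the structural idea behind AI-based content detection.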
The presentation also examines the types of attacks to which AI models, including GenAI, are susceptible, regardless of task, model, or modality. These include adversarial attacks, where inputs are specifically crafted to deceive AI systems, as well as prompt injection and visual prompt injection attacks, which manipulate textual or visual inputs to mislead models.
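As a concrete instance of an adversarial attack, the sketch below implements the well-known Fast Gradient Sign Method (FGSM) in PyTorch, which perturbs an input along the sign of the loss gradient. The `model`, inputs, and epsilon value are illustrative assumptions, not artifacts from the presentation.

```python
# Minimal FGSM sketch: a one-step L-infinity adversarial attack,
# assuming a differentiable PyTorch classifier `model` and a
# correctly labeled input batch (x, y) with pixels in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples by stepping along the sign of the
    input gradient of the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift each pixel by +/- epsilon in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```

Even such a simple one-step perturbation, imperceptible to humans at small epsilon, can flip a classifier's prediction, which is what makes adversarial robustness a cross-cutting concern for all model types.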
Beyond these threats, navigating the complex landscape of AI compliance is essential. Organizations must adhere to regulations like the EU AI Act and standards such as ISO 27090, while also following guidelines from bodies like OWASP to ensure the security, transparency, and ethical use of AI systems. The OWASP AI Exchange plays a key role in modeling threats to GenAI, addressing risks and pointing out solutions. To defend against these threats, various detection and mitigation techniques have been developed and are briefly presented.
Licensed to the public under https://creativecommons.org/licenses/by-sa/4.0/