Commit a164415 (1 parent: 4855af5)

Showing 1 changed file with 11 additions and 0 deletions.
@@ -0,0 +1,11 @@
---
type: event
date: 2024-05-02T16:00:00+01:00
speaker: Sahar Abdelnabi
affiliation: Microsoft
title: "On New Security and Safety Challenges Posed by LLMs and How to Evaluate Them"
bio: "Sahar Abdelnabi is an AI security researcher at the Microsoft Security Response Center (Cambridge). Previously, she was a PhD candidate at CISPA Helmholtz Center for Information Security, advised by Prof. Dr. Mario Fritz, and she obtained her MSc degree at Saarland University. Her research interests lie at the broad intersection of machine learning with security, safety, and sociopolitical aspects, including: 1) understanding and mitigating the failure modes of machine learning models, their biases, and their misuse scenarios; 2) how machine learning models can amplify or help counter existing societal and safety problems (e.g., misinformation, biases, stereotypes, and cybersecurity risks); 3) emergent challenges posed by new foundation and large language models."
abstract: "Large Language Models (LLMs) are integrated into many widely used, real-world applications and use-case scenarios. With their capabilities and agent-like adoption, they open new frontiers for assisting with various tasks. However, they also bring new security and safety risks. Unlike previous models with static generation, LLMs’ dynamic, multi-turn, and flexible functionality makes them notoriously hard to evaluate and control robustly. This talk will cover some of these new potential risks posed by LLMs, how to evaluate them, and the challenges of mitigation."
zoom: https://us02web.zoom.us/meeting/register/tZMpceiupz4qH9OpLXTQ4m268hieklVJy1NL
youtube: https://youtube.com/live/gKsiUi3qMiA?feature=share
---
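
For context, here is a minimal sketch of how a build script might consume this front matter. It assumes the repository uses Jekyll-style YAML front matter and that PyYAML is available; the file path `_events/2024-05-02-abdelnabi.md` and the helper `read_front_matter` are hypothetical, not the site's actual build code.

```python
# Minimal sketch: read a Jekyll-style YAML front-matter block and
# sanity-check the event metadata. Assumes PyYAML is installed;
# the file path below is hypothetical.
from datetime import datetime
import yaml

def read_front_matter(path):
    """Return the YAML block between the leading '---' delimiters as a dict."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # The front matter sits between the first two '---' delimiters.
    _, block, _ = text.split("---", 2)
    return yaml.safe_load(block)

meta = read_front_matter("_events/2024-05-02-abdelnabi.md")
print(meta["speaker"], "-", meta["title"])

# PyYAML resolves the unquoted timestamp into a timezone-aware datetime;
# note that strict ISO 8601 parsers require two-digit offset hours
# (e.g. "+01:00").
assert isinstance(meta["date"], datetime)
```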