FormaliSE 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Mon 28 Apr 2025 09:00 - 10:30 at 203 - Invited Speaker 2

Large language models (LLMs) have demonstrated impressive capabilities for coding tasks, including writing and reasoning about code. They improve upon earlier neural network models of code that already achieved competitive results on tasks such as code summarization and identifying code vulnerabilities. However, these pre-LLM code models are known to be vulnerable to adversarial examples, i.e., small syntactic perturbations that do not change the program’s semantics, such as the inclusion of “dead code” through false conditions, the addition of inconsequential print statements, or changes in control flow, all designed to “fool” the models. LLMs can be vulnerable to the same adversarial perturbations. In this talk we discuss the effect of adversarial perturbations on coding tasks with LLMs and propose effective defenses against such adversaries. The coding tasks we consider include both classification (e.g., using LLMs for summarization or vulnerability detection) and code generation (e.g., using LLMs for code completion, based on prompts plus code snippets).
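
For concreteness, the sketch below illustrates what such a semantics-preserving perturbation might look like. It is a hypothetical example, not material from the talk: the function names and the specific edits are assumptions chosen only to show the kinds of transformations mentioned above (dead code behind a false condition, an inconsequential print statement, and an altered control flow) while the program computes the same result.

def original(xs):
    """Return the sum of the positive elements of xs."""
    return sum(x for x in xs if x > 0)

def perturbed(xs):
    """Compute the same result as original(), but with semantics-preserving
    "adversarial" edits of the kind described in the abstract (hypothetical)."""
    if len(xs) < 0:                 # "dead code": this condition is always false
        raise ValueError("unreachable")
    print("debug")                  # inconsequential print statement
    total = 0
    i = 0
    while i < len(xs):              # control-flow change: explicit loop instead of a comprehension
        if xs[i] > 0:
            total += xs[i]
        i += 1
    return total

# Both versions agree on every input, yet a code model may classify or
# summarize them differently.
assert original([1, -2, 3]) == perturbed([1, -2, 3]) == 4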


Mon 28 Apr

Displayed time zone: Eastern Time (US & Canada)

09:00 - 10:30
Invited Speaker 2 (Research Track) at 203
09:00 (90m) Keynote
Adversarial Perturbations and Self-Defenses for Large Language Models on Coding Tasks
Research Track
Keynote speaker: Corina S. Pasareanu (Carnegie Mellon University Silicon Valley, NASA Ames Research Center)