CS-725 Topics in Language-based Software Security

Co-taught by Mathias Payer and Manuel Egele. Fall semester 2021, 2-credit course.

News

Course overview

Unsafe languages like C/C++ are widely used for their performance. Unfortunately, these languages are prone to a wide range of memory and type errors that enable attack vectors such as code reuse, privilege escalation, or information leaks. At a high level, memory and type safety would rule out these problems, and safe languages enforce these properties at (relatively) low cost. Unfortunately, the same guarantees come at a high cost when retrofitted onto existing unsafe languages.

When working with unsafe languages, three fundamental approaches exist to protect against software flaws: formal verification (proving the absence of bugs), software testing (finding bugs), and mitigation (protecting against the exploitation of bugs). In this seminar, we will primarily focus on the latter two approaches. Formal verification, while giving strong guarantees, struggles to scale to large software.

This seminar explores three areas: understanding attack vectors, approaches to software testing, and mitigation strategies. First, you need to understand what kinds of software flaws exist in low-level software and how those flaws can be exploited.

Each student will pick one topic (one specific testing approach, mitigation, or attack vector) from the list of topics below. The student is expected to organize the material and prepare a presentation of the topic for the other students. The main goals of this seminar are:

  1. understanding and defining the security policy and the corresponding guarantees/trade-offs implemented by a given work;
  2. reasoning about the power and effectiveness of different security policies (completeness with regard to the attack vectors covered and strength of the guarantees), and being able to compare them;
  3. reasoning about the computational and resource cost of mechanisms and their possible downsides;
  4. considering alternative implementations of a policy at other levels of abstraction;
  5. developing skills to present a technical topic in computer science to an audience of peers;
  6. learning how to identify possible research topics and articulating differences to existing related work.

Your grade is based on:

  1. the technical presentation of your topic and a one-page summary of the presentation, written after your talk (80%);
  2. active participation in class which includes reading the assigned papers and asking questions/participating in the discussions (20%).

Topic presentations

Presentations of research papers should be around 30 minutes long, followed by 15 minutes of discussion. You can structure the presentation as follows:

  1. Motivation of the paper (1-2 slides, ~5 minutes)
  2. Key research questions (1-2 slides, ~5 minutes)
  3. Presentation of the core design and implementation of the research paper (4-8 slides, ~10 minutes)
  4. Evaluation of the security policy (2-3 slides, ~5 minutes)
  5. Material for discussion: advantages, disadvantages, limitations of the approach (2-3 slides, ~5 minutes)
  6. Summary slide of the paper: policy, defense property (where in the memory model it applies), and implementation level (language, compiler, or runtime)

Once your slides are ready, send a PDF version to Manuel and Mathias before your talk.

Topics

This list is non-exhaustive: it may be adapted during the semester, and students may suggest other recent software security papers they are interested in. The open book Software Security: Principles, Policies, and Protection [1] provides an overview of many of these topics but does not go into depth on each policy.

Software Flaws and Mitigations

Fuzzing

Sanitization

Schedule

The seminar meets Mondays from 10:15 to 11:00 in BC04. A draft of the schedule looks as follows, but remember that no plan survives contact with reality!

Date Topic Presenter(s) Reading Material
9/27 Introduction Mathias Payer [2]
10/04 SoK: Benchmarking Flaws Nicolas [3]
10/11 Diversity (SoK) Andrés [5]
10/18 Shadow Stacks Florian [4]
10/25 Fuzzing Introduction Qiang [8], [9]
11/01 Evaluating Fuzz Testing Tao [18]
11/08 Residual Risk Zhiyao [10]
11/15 ParmeSan Antony [12]
11/22 Constraint-Guided Fuzzing Duo [13]
11/29 GREYONE Hossein [14]
12/06 SoK: Sanitization Marcel [15]
12/13 DangSan Ergys [17]
12/20   ?  

References

[1] Software Security: Principles, Policies, and Protection. Mathias Payer.
[2] SoK: Eternal War in Memory. Laszlo Szekeres, Mathias Payer, Tao Wei, and Dawn Song. In Oakland'13.
[3] SoK: Benchmarking Flaws in Systems Security. Erik van der Kouwe, Gernot Heiser, Dennis Andriesse, Herbert Bos, and Cristiano Giuffrida. In EuroS&P'19.
[4] SoK: Shining Light on Shadow Stacks. Nathan Burow, Xinping Zhang, and Mathias Payer. In Oakland'19.
[5] SoK: Automated Software Diversity. Per Larsen, Andrei Homescu, Stefan Brunthaler, and Michael Franz. In Oakland'14.
[6] Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, and Jay Ligatti. In CCS'05.
[7] Control-Flow Integrity: Precision, Security, and Performance. Nathan Burow, Scott A. Carr, Joseph Nash, Per Larsen, Michael Franz, Stefan Brunthaler, and Mathias Payer. In CSUR'17.
[8] Fuzzing: Hack, Art, and Science. Patrice Godefroid. In CACM'20.
[9] The Art, Science, and Engineering of Fuzzing: A Survey. Valentin J. M. Manes, HyungSeok Han, Choongwoo Han, Sang Kil Cha, Manuel Egele, Edward J. Schwartz, and Maverick Woo. In TSE'21.
[10] Estimating Residual Risk in Greybox Fuzzing. Marcel Boehme, Danushka Liyanage, and Valentin Wuestholz. In FSE'21.
[11] AFL++: Combining Incremental Steps of Fuzzing Research. Andrea Fioraldi, Dominik Maier, Heiko Eissfeldt, and Marc Heuse. In WOOT'20.
[12] ParmeSan: Sanitizer-guided Greybox Fuzzing. Sebastian Oesterlund, Kaveh Razavi, Herbert Bos, and Cristiano Giuffrida. In SEC'20.
[13] Constraint-guided Directed Greybox Fuzzing. Gwangmu Lee, Woochul Shim, and Byoungyoung Lee. In SEC'21.
[14] GREYONE: Data Flow Sensitive Fuzzing. Shuitao Gan, Chao Zhang, Peng Chen, Bodong Zhao, Xiaojun Qin, Dong Wu, and Zuoning Chen. In SEC'20.
[15] SoK: Sanitizing for Security. Dokyung Song, Julian Lettner, Prabhu Rajasekaran, Yeoul Na, Stijn Volckaert, Per Larsen, and Michael Franz. In Oakland'19.
[16] AddressSanitizer: A Fast Address Sanity Checker. Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. In USENIX ATC'12.
[17] DangSan: Scalable Use-after-free Detection. Erik van der Kouwe, Vinod Nigade, and Cristiano Giuffrida. In EuroSys'17.
[18] Evaluating Fuzz Testing. George Klees, Andrew Ruef, Shiyi Wei, and Michael Hicks. In CCS'18.