CS590 LBS Language-based Software Security

(aka The Eternal War in Memory)

Mathias Payer -- Fall semester 2014, 3 credit course


Course overview

Unsafe languages like C/C++ are widely used for their promise of performance. Unfortunately, these languages are prone to a large set of memory errors that enable several attack vectors. A (non-exhaustive) list of low-level memory errors includes (i) buffer overflows, (ii) integer overflows, (iii) off-by-one errors, (iv) improper null termination, (v) unchecked format strings, and (vi) memory allocation errors like double free or use after free. Combinations of one or more of these errors may be used to mount memory attacks like code corruption, control-flow hijacking, data-only attacks, or information leaks.

At a high level, memory safety (and type safety) would solve all of these problems. Safe languages can enforce these properties (somewhat) cheaply. Unfortunately, the same guarantees come at a high cost when retrofitted onto existing languages.

This seminar explores security policies that are applied to low-level languages without type or memory safety to enforce (relatively) stronger protection and guarantees than the status quo. We will evaluate different security policies at different levels of abstraction (language changes, source-code changes, and binary rewriting) that enforce guarantees at different stages of an ongoing exploit. Each student will pick one topic (one specific security policy) from the list of topics below. The student is expected to organize the material and prepare a presentation of the topic for the other students. In addition, students will propose and work on an independent course project. The main goals of this seminar are:

  1. understanding and defining the security policy implemented by a given work;
  2. reasoning about the power and effectiveness of different security policies (completeness with regard to the attack vectors covered, strength of the guarantees) and being able to compare them;
  3. reasoning about the computational and resource cost of security policies and their possible downsides;
  4. discussing alternative implementations of a policy at other abstraction levels.

General goals of a seminar are:

  1. learning how to understand and prepare an overview of a topic for other students through reading the relevant papers, articles, or surveys and through practical investigations;
  2. learning how to reflect on a topic;
  3. presenting a technical topic in computer science to an audience of peers;
  4. learning how to identify possible research topics and articulating differences to existing related work.

Your grade is based on:

  1. design and implementation of the class project (65%) (hard deadlines: 09/23 for the description, 12/01 for the code, 12/10 for the write-up (extended from 12/08); weekly status updates on 11/07, 11/14, 11/21, and 11/28 are mandatory);
  2. a technical presentation of your topic and a 1-page summary of the presentation, written after your topic (deadline: 12/10; extended from 12/08) (30%);
  3. active participation in class (5%).

For academic honesty, refer to the Purdue integrity/code of conduct. Except by prior arrangement or an extension granted by the professor before the deadline, missing or late work will be counted as a zero/fail.

Topic presentations

The length of presentations for research papers should be between 20 and 30 minutes. You can structure the presentation as follows:

  1. Motivation of the paper (1-2 slides, ~3 minutes)
  2. Presentation of the core design and implementation of the research paper (4-8 slides, ~10 minutes)
  3. Evaluation of the security policy (2-3 slides, ~4 minutes)
  4. Material for discussion: advantages, disadvantages, limitations of the approach (2-3 slides, ~5 minutes)
  5. Summary slide of the paper: policy, defense property (at which point in the memory model it intervenes), and implementation (language, compiler, or runtime)

Course project

For the course project you can either (i) design and implement an extension of an existing policy or (ii) develop metrics and implement a benchmark that measures the effectiveness of an existing policy. The project follows these milestones:

  1. Proposal and discussion of the details and scope of the project. Deadline: 09/19 (past)
  2. Design and implementation of the project. Deadline: 12/01 (past, extended from 11/17)
  3. Documentation and presentation of the course project. Deadline: 12/10 (extended from 12/08)

The project will be graded based on the 1-page proposal that you turn in on 09/17 and the documentation that you hand in on 12/10. As documentation you will write a research paper (3 to 5 pages in ACM SIGPLAN format) that includes the motivation, introduction, design, implementation, and evaluation of your research project. The paper will be reviewed according to academic standards, including merit, contribution, feasibility, implementation, and evaluation.


A list of covered and related topics ranges from memory safety [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14] through data and pointer integrity [15], [16], [17], [18], randomization [19], [20], [21], [22], [23], data- and control-flow integrity [24], [25], [26], [27], [28], and dynamic policies [29], [30], [31] to software-based fault isolation [32], [33]. This list is non-exhaustive; it may be adapted during class, and students may suggest other policies they are interested in.


The seminar meets every Monday from 4:30p to 6:20p in REC112 and every Wednesday from 12:30p to 1:20p in REC122. Office hours are Mondays from 3:30p to 4:20p in LWSN 3154M. A draft of the schedule follows, but remember that no plan survives contact with reality!

Date Topic Presenter(s) Material
8/25 Course administration [1] and Eternal War in Memory [2] Mathias Payer [3]
8/27 Project/paper selection    
9/03 Project/paper selection    
9/08 Memory safety (spatial, language-based) Daniele, Terry [4], [5]
9/10 Memory safety (spatial, source-based) Sergei [6],
9/15 -"- Scott, Denis [7], [8]
9/17 Deadline for project proposals    
9/22 Memory safety (temporal, source-based) Xilun [9]
9/22 Memory safety (temporal, library-based) Gregory [10],
9/24 -"- Nathan [11]
9/29 -"- Nathan [12]
9/29 Memory safety (temporal, binary-based) Xilun [13]
10/01 -"- Robert [14]
10/06 no class (OSDI)    
10/08 no class (OSDI)    
10/13 Fall break, no class    
10/15 Project presentations    
10/20 Data Integrity Prachi, Yidan [15], [16]
10/22 Pointer Integrity Denis [17]
10/27 Data Space Randomization Prachi [19], [20] [21]
10/29 No class (workshop)    
11/03 Instruction Set Randomization Michael, Michael [22], [23]
11/05 Data-Flow Integrity Jeff [24]
11/10 Instruction Set Randomization Michael [22]
11/10 Project discussion    
11/12 Control-Flow Integrity Sergei [25]
11/17 -"- Syed, Pinar [26], [27]
11/19 Code Pointer Integrity Syed [28]
11/24 Dynamic policies Shagufta, Pinar [29], [30]
12/01 -"- Daniele [31]
12/01 Software-based Fault Isolation Servio [32]
12/03 -"- Scott [33]
12/08 Project presentations    
12/10 Project presentations    
12/15 Project presentations (backup)  

System security basics

The 10kstudents initiative prepared some material to introduce students to different forms of memory corruption and other security problems. Using this material, students become more aware of the security implications of their code.

If you want more hands-on experience and practical exposure to security problems, then b01lers, the Purdue Capture-The-Flag team, is a great place to start as well. In weekly meetings we discuss security problems and possible attack vectors.

[1]Course administration slides
[2]Eternal War in Memory slides
[3]SoK: Eternal War in Memory. Laszlo Szekeres, Mathias Payer, Tao Wei, and Dawn Song. In Oakland'13: Proc. Int'l Symp. on Security and Privacy, 2013.
[4](1, 2) CCured: type-safe retrofitting of legacy software. George C. Necula, Jeremy Condit, Matthew Harren, Scott McPeak, and Westley Weimer. In ACM POPL'02 extended TOPLAS version
[5](1, 2) Cyclone: A safe dialect of C. Trevor Jim, Greg Morrisett, Dan Grossman, Michael Hicks, James Cheney, and Yanling Wang. In Usenix ATC
[6](1, 2) SoftBound: Highly Compatible and Complete Spatial Memory Safety for C. Santosh Nagarakatte, Jianzhou Zhao, Milo M. K. Martin, Steve Zdancewic. In PLDI'09
[7](1, 2) Baggy Bounds Checking: An Efficient and Backwards-Compatible Defense against Out-of-Bounds Errors. Periklis Akritidis, Manuel Costa, Miguel Castro, Steven Hand. In Usenix Security'09
[8](1, 2) AddressSanitizer: A Fast Address Sanity Checker. Konstantin Serebryany, Derek Bruening, Alexander Potapenko, and Dmitry Vyukov. In Usenix Security'12
[9](1, 2) CETS: Compiler-Enforced Temporal Safety for C. Santosh Nagarakatte, Jianzhou Zhao, Milo M. K. Martin, and Steve Zdancewic. In ISMM'10
[10](1, 2) Cling: A Memory Allocator to Mitigate Dangling Pointers. Periklis Akritidis. In Usenix Security'10
[11](1, 2) DieHard: Probabilistic Memory Safety for Unsafe Languages. Emery D. Berger and Benjamin G. Zorn. In PLDI'06
[12](1, 2) DieHarder: Securing the Heap. Gene Novark and Emery D. Berger. In CCS'10
[13](1, 2) MemCheck: Using Valgrind to detect undefined value errors with bit-precision. Julian Seward and Nicholas Nethercote. In Usenix ATC'05
[14](1, 2) How to Shadow Every Byte of Memory Used by a Program. Nicholas Nethercote and Julian Seward. In VEE'07
[15](1, 2) Preventing memory error exploits with WIT. Periklis Akritidis, Cristian Cadar, Costin Raiciu, Manuel Costa, and Miguel Castro. In IEEE SP'08 (Oakland)
[16](1, 2) Body armor for binaries: preventing buffer overflows without recompilation. Asia Slowinska, Traian Stancescu, and Herbert Bos. In Usenix ATC'12.
[17](1, 2) PointGuard: Protecting Pointers from Buffer Overflow Vulnerabilities. Crispin Cowan, Steve Beattie, John Johansen, and Perry Wagle. Usenix Security'03
[18]Breaking the memory secrecy assumption. Raoul Strackx, Yves Younan, Pieter Philippaerts, Frank Piessens, Sven Lachmund, and Thomas Walter. In Eurosec'09
[19](1, 2) Data Space Randomization. Sandeep Bhatkar and R. Sekar. In DIMVA'08.
[20](1, 2) Address Space Layout Randomization. PaX team.
[21](1, 2) Binary Stirring: Self-randomizing Instruction Addresses of Legacy x86 Binary Code. Richard Wartell, Vishwath Mohan, Kevin W. Hamlen, and Zhiqiang Lin. In CCS'12
[22](1, 2, 3) Countering Code-Injection Attacks with Instruction Set Randomization. Gaurav S. Kc, Angelos D. Keromytis, and Vassilis Prevelakis. In CCS'03.
[23](1, 2) ILR: Where'd my gadgets go? Jason Hiser, Anh Nguyen-Tuong, Michele Co, Matthew Hall, and Jack W. Davidson. In IEEE SP'12 (Oakland).
[24](1, 2) Securing software by enforcing Data-Flow Integrity. Miguel Castro, Manuel Costa, Tim Harris. In OSDI'06.
[25](1, 2) Control-Flow Integrity. Martin Abadi, Mihai Budiu, Ulfar Erlingsson, Jay Ligatti. In CCS'05
[26](1, 2) XFI: Software Guards for System Address Spaces. Ulfar Erlingsson, Martin Abadi, Michael Vrable, Mihai Budiu, and George C. Necula. In OSDI'06.
[27](1, 2) HyperSafe: A Lightweight Approach to Provide Lifetime Hypervisor Control-Flow Integrity. Zhi Wang and Xuxian Jiang. In IEEE SP'10
[28](1, 2) Code-Pointer Integrity. Volodymyr Kuznetsov, Mathias Payer, Laszlo Szekeres, George Candea, R. Sekar, Dawn Song. In OSDI'14.
[29](1, 2) Safe Loading - A Foundation for Secure Execution of Untrusted Programs. Mathias Payer, Tobias Hartmann, and Thomas R. Gross. In IEEE SP'12 (Oakland).
[30](1, 2) Secure Execution Via Program Shepherding. Vladimir Kiriansky, Derek Bruening, and Saman Amarasinghe. In Usenix Security'02.
[31](1, 2) Improving Host Security with System call Policies. Niels Provos. In Usenix Security'03.
[32](1, 2) PittSFIeld: Evaluating SFI for a CISC Architecture. Stephen McCamant and Greg Morrisett. In Usenix Security'06.
[33](1, 2) Native Client: A Sandbox for Portable, Untrusted x86 Native Code. Bennet Yee, David Sehr, Greg Dardyk, Brad Chen, Robert Muth, Tavis Ormandy, Shiki Okasaka, Neha Narula, and Nicholas Fullagar. In IEEE SP'08 (Oakland)