The 9th International "Summer" School on Information Security and Protection will be jointly hosted by the Australian National University and the Defence Science and Technology Group. It will be held in Canberra, Australia, 9-13 July, 2018.
There is an increasing need for Software Security and Protection due to the growing sensitivity of user data coupled with the ubiquitous nature of computing systems ranging from tiny embedded devices to powerful data centres.
The International Summer School on Information Security and Protection (ISSISP) addresses this need by bringing together internationally renowned speakers from diverse fields of Computer Science such as Software Engineering, Formal Methods, Computer Architecture, Operating Systems, and Programming Languages.
In particular, topics to be covered include vulnerability detection and analysis, forensics, language safety, and information protection. The summer school caters to students and professionals from academia, government, and industry. Courses will include both lectures and hands-on sessions.
Scientific advisory board
Christian Collberg, University of Arizona
Jack Davidson, University of Virginia
Bjorn De Sutter, Ghent University
Roberto Giacobazzi, Universitá di Verona
Yuan Xiang Gu, Irdeto
Arun Lakhotia, University of Louisiana
Symbolic analysis of programs was first studied for the purpose of program verification. In recent decades, symbolic execution technology has witnessed renewed interest due to the maturity of Satisfiability Modulo Theories (SMT) solvers. Powerful dynamic analysis tools have leveraged these back-end solvers to systematically navigate large search spaces, leading to applications of symbolic analysis in test generation and vulnerability detection. In this tutorial, we will first study the power of state-of-the-art symbolic-execution-based approaches for various security vulnerability analysis tasks, such as finding zero-day vulnerabilities and reproducing crashes. We compare the symbolic analysis approaches to grey-box fuzz testing approaches (which are routinely employed in industrial settings). Through such comparison, we can try to understand the state of practice of symbolic execution in terms of its ability to detect software vulnerabilities.
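The core idea behind symbolic-execution-based path exploration can be sketched in a few lines of Python. This is an illustrative toy only: the program, all names, and the brute-force "solver" are invented stand-ins for a real engine backed by an SMT solver such as Z3.

```python
# Minimal sketch of symbolic-execution-style path exploration (illustrative
# only; real engines use SMT solvers rather than the brute-force search here).

def explore(program, input_domain):
    """Enumerate feasible paths of `program` and find one input per path."""
    paths = {}  # path (tuple of branch outcomes) -> witness input
    for x in input_domain:            # brute-force stand-in for an SMT solver
        trace = program(x)
        paths.setdefault(trace, x)
    return paths

def target(x):
    """A toy program that records the sequence of branch decisions it took."""
    trace = []
    if x > 10:
        trace.append("x>10")
        if x % 2 == 0:
            trace.append("even")      # e.g. a "vulnerable" path to reach
        else:
            trace.append("odd")
    else:
        trace.append("x<=10")
    return tuple(trace)

witnesses = explore(target, range(-50, 50))
# One concrete input is recovered for each feasible path through `target`.
```

A real symbolic executor would instead collect the branch conditions as logical constraints (e.g. `x > 10 ∧ x mod 2 = 0`) and ask a solver for a satisfying input, which scales far beyond enumerable input domains.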
In the later part of the tutorial, we will conceptualize the use of symbolic execution approaches/tools for purposes beyond guiding search. We will sketch the usage of symbolic execution in inferring specifications of intended program behavior by analyzing a buggy program against selected tests. Such specification inference capability can be combined with program synthesis techniques to automatically repair programs. Automated program repair via symbolic execution goes beyond search-based approaches, which lift patches from elsewhere in the program. Such an approach can construct "imaginative" patches, serve as a test-bed for the grand challenge of automated programming, and contribute to the vision of self-healing software.
Abhik Roychoudhury is a Professor of Computer Science at the National University of Singapore. His research focuses on software testing and analysis, software security, and trustworthy software construction. His research group has built scalable techniques for testing, debugging, and repair of programs using systematic semantic analysis. The capability to automatically repair programs at a large scale contributes to the vision of self-healing software. He has been an ACM Distinguished Speaker (2013–19). He is currently leading the TSUNAMi center, a large five-year targeted research effort on trustworthy software funded by the National Research Foundation. He is also leading the Singapore Cyber-security Consortium, a consortium of over 30 companies in the cyber-security space engaging with academia for research and collaboration.
He has been Program Chair of ACM International Symposium on Software Testing and Analysis (ISSTA) 2016, General Chair of ACM SIGSOFT Symposium on Foundations of Software Engineering (FSE) 2022, and an Editorial Board member of IEEE Transactions on Software Engineering (TSE) since 2014. Abhik received his Ph.D. in Computer Science from the State University of New York at Stony Brook in 2000.
In the first part of this lecture, we will study some of the most powerful reverse engineering attack methods, including methods deployed today by professional hackers and state-of-the-art academic methods such as fuzzing, pointer chaining, pattern matching, symbolic execution, statistical attacks, and generic de-obfuscation techniques.
In the second part, we will present results from experiments with various kinds of attackers that help us understand how the different techniques are combined into full-fledged attacks.
In the third part, we will discuss defenses against some of the advanced attacks, such as more advanced obfuscations, as well as anti-taint, anti-debugging, anti-tracing, and anti-tampering techniques, and code renewability.
Bjorn De Sutter is an associate professor in the Computer Systems Lab at the Faculty of Engineering and Architecture of Ghent University. He received his MSc and PhD from Ghent University in 1997 and 2002, respectively. Among other courses, he teaches Compilers and Software Hacking and Protection, which are also his major research interests. The focus of his compiler technology research is on increasing the productivity of programmers by automating tasks necessary for non-functional requirements, including the protection of software against all kinds of attacks. His more than 80 peer-reviewed publications cover protections against fault injection attacks, tampering attacks, reverse engineering attacks, memory exploit attacks, patch-based attacks, and timing side channel attacks. He coordinated the EU FP7 STREP ASPIRE (Advanced Software Protection: Integration, Research, and Exploitation), which was evaluated by the EC as excellent in Feb. 2017, and has participated in numerous other national and international projects. He received the Annual FWO Barco Award in 1998 for his Master's thesis and is the recipient of a 2013 HiPEAC Technology Transfer Award.
We will look at traditional ways of obfuscating code, i.e. ways to make software harder to understand and analyze. There will be a significant hands-on practical component to this part of the course, where we will obfuscate code using automatic tools.
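As a taste of the hands-on sessions, two classic obfuscations can be sketched in a few lines of Python: encoding a literal so it does not appear in the program text, and guarding code with an opaque predicate (a condition whose outcome the obfuscator knows but an analyzer cannot easily determine). This sketch is illustrative only; all names and the toy check are invented, and real tools apply such transformations automatically at compile time.

```python
# Illustrative sketch of two traditional obfuscations: literal encoding and
# an opaque predicate. Names and the "license check" are hypothetical.

def deobfuscate(s: bytes, key: int = 0x5A) -> bytes:
    """Recover an XOR-encoded literal at run time."""
    return bytes(b ^ key for b in s)

# The sensitive string never appears in cleartext in the program:
SECRET = bytes(c ^ 0x5A for c in b"license-check")

def decoy() -> bool:
    return False   # dead code, present only to mislead static analysis

def check(user_input: bytes) -> bool:
    x = len(user_input)
    if (x * (x + 1)) % 2 == 0:      # opaque predicate: always true
        return user_input == deobfuscate(SECRET)
    else:
        return decoy()              # unreachable, but an analyzer must prove it
```

The predicate `x*(x+1) % 2 == 0` always holds (the product of consecutive integers is even), but a static analyzer that cannot prove this must conservatively treat both branches as live.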
Christian Collberg is a Professor at the Department of Computer Science at the University of Arizona. He got his PhD from Lund University, Sweden and taught at the University of Auckland, New Zealand, before joining Arizona. His current research interests lie in the protection of software from reverse engineering and tampering, as well as in the reproducibility of computer science research (see FindResearch.org). He is the author of "Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection".
The vast majority of code running on computing platforms is written in C/C++ or running on a VM which itself is written in one of these languages. So-called safe languages that employ PL-based security have proven to be less secure than hoped: because it had become a vector for attack, Apple stopped shipping the Java VM with Mac OS X, and another ostensibly safe language, JavaScript, has been the subject of numerous attacks. Defending against exploits in depth requires approaches to harden C and C++ code, ideally without the need for rewriting code and without imposing significant performance penalties.
The original attacks on C/C++ applications depended on "smashing" the stack; because modern compilers include effective and cheap countermeasures to stack-smashing attacks, the focus of this lecture is on the heap, which is much more complex and expensive to protect. In this class, I will explain a range of heap-based attacks and how they interact with existing runtime systems and memory managers. I will talk about the challenges of coping with these issues, and describe our DieHard/DieHarder systems, which leverage randomization and novel allocation algorithms to efficiently and provably reduce the vulnerabilities of C/C++ applications to a wide class of heap-based attacks; this work directly inspired reliability and security enhancements in Microsoft Windows. I will talk about how these techniques enable self-healing from attack via statistical inference, and describe recent advances that further close the performance gap between unprotected and protected code while providing a high degree of protection.
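The randomization idea behind DieHard can be sketched conceptually: the heap is over-provisioned by some factor, and each allocation lands in a uniformly random free slot, so an attacker cannot predict which object (if any) an overflow will corrupt. This is only an illustrative model, not the actual DieHard allocator; all names are invented.

```python
# Conceptual sketch of DieHard-style randomized heap allocation
# (illustrative only; the real allocator works on raw memory in C/C++).

import random

class RandomizedHeap:
    def __init__(self, n_objects, overprovision=2, seed=None):
        # Over-provision: allocate more slots than objects we expect to hold.
        self.slots = [None] * (n_objects * overprovision)
        self.rng = random.Random(seed)

    def malloc(self, obj):
        free = [i for i, s in enumerate(self.slots) if s is None]
        if not free:
            raise MemoryError("heap full")
        i = self.rng.choice(free)     # randomized placement
        self.slots[i] = obj
        return i

    def free(self, i):
        self.slots[i] = None          # (the real system also defers reuse
                                      #  to probabilistically avoid
                                      #  use-after-free errors)

heap = RandomizedHeap(n_objects=4, overprovision=2, seed=1)
addrs = [heap.malloc(f"obj{k}") for k in range(4)]
# With 2x over-provisioning, about half the slots stay empty on average, so
# an overflow past one object is likely to land in an unused slot.
```

Running several independently randomized replicas of the same program and comparing their behavior is what enables the statistical inference mentioned above: an attack that depends on heap layout succeeds in at most a few replicas, and the divergence exposes it.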
Emery Berger is a Professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst, the flagship campus of the UMass system. He graduated with a Ph.D. in Computer Science from the University of Texas at Austin in 2002. Professor Berger has been a Visiting Scientist at Microsoft Research and at the Universitat Politecnica de Catalunya (UPC) / Barcelona Supercomputing Center (BSC). Professor Berger’s research spans programming languages, runtime systems, and operating systems, with a particular focus on systems that transparently improve reliability, security, and performance. He is the creator of a number of influential software systems including Hoard, a fast and scalable memory manager that accelerates multithreaded applications (used by companies including British Telecom, Cisco, Crédit Suisse, Reuters, Royal Bank of Canada, SAP, and Tata, and on which the Mac OS X memory manager is based); DieHard, an error-avoiding memory manager that directly influenced the design of the Windows 7 Fault-Tolerant Heap; and DieHarder, a secure memory manager that was an inspiration for hardening changes made to the Windows 8 heap.
His honors include a Microsoft Research Fellowship, an NSF CAREER Award, a Lilly Teaching Fellowship, a Most Influential Paper Award at PLDI 2016 and OOPSLA 2012, a Google Research Award, and a Microsoft SEIF Award. Professor Berger currently serves as an elected member-at-large of the SIGPLAN Executive Committee. He recently served as Program Chair for PLDI 2016 and spent a decade as Associate Editor of the ACM Transactions on Programming Languages and Systems.
Process-level virtual machines (PVMs) are often used to provide software protection. In this tutorial, we discuss the role and power of PVMs in software protection. Their key advantage is that they support efficient run-time transformations of the code to be protected. When used in combination with other software protection techniques such as white-box cryptography, guard networks, and opaque predicates, PVMs force the attacker to perform resource-intensive dynamic analysis. The tutorial discusses how efficient process-level virtual machines are constructed and how they are used to thwart both static and dynamic analyses by crackers through several novel transformations enabled by PVMs.
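The basic construction can be sketched as follows: the logic to be protected is translated into a custom bytecode and executed by an embedded interpreter, so a static disassembler sees only the interpreter, never the protected logic in native form. The instruction set and names below are invented for illustration; real PVM protections use per-instance randomized ISAs and run-time code transformations.

```python
# Sketch of virtualization-based protection: protected logic is expressed
# in a made-up bytecode run by a tiny stack-machine interpreter (the "PVM").

def run(bytecode):
    """Interpret a list of instructions for a hypothetical stack ISA."""
    stack, pc = [], 0
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == "PUSH":
            pc += 1
            stack.append(bytecode[pc])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        pc += 1
    return stack[-1]

# The "protected" computation 3*x + 5 exists only as bytecode:
def protected(x):
    return run(["PUSH", 3, "PUSH", x, "MUL", "PUSH", 5, "ADD"])

assert protected(4) == 17
```

Because the bytecode encoding can differ per protected instance, an attacker must first reverse engineer the interpreter dynamically before the protected logic becomes legible, which is exactly the resource-intensive analysis the tutorial discusses.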
The second talk will focus on DARPA's Cyber Grand Challenge (CGC), which was announced in November 2013. The challenge was to build an autonomous cyber reasoning system that could analyze software, identify vulnerabilities, formulate patches, and deploy them on a network, all in real time. To test these capabilities, the cyber reasoning systems would face off against each other in a cyber version of capture the flag (CTF), with the winning team receiving a $2M cash prize. 104 teams from across the world entered the competition and began designing and building the supercomputers necessary to compete. In this talk I will describe the CGC competition, present some of the unique challenges faced by the teams, and provide an overview of the technologies used to address these challenges. I will also present a detailed analysis of the telemetry collected during the CGC final event. The talk will conclude by describing a new program spawned by CGC, CHESS (Computers and Humans Exploring Software Security), and some of its unique challenges.
I am a Professor of Computer Science at the University of Virginia. I received my Ph.D. from the University of Arizona in 1981 and joined the University of Virginia in 1982. Over the years, I have worked in many areas including compilers, embedded systems, high-performance computing, and computer security. My current research focus is on software protection and computer security. In the area of software security, my group has developed dynamic methods to prevent adversaries from compromising intellectual property. In the area of computer security, we have developed methods for "hardening" binaries to make them impervious to cyber attack. In 2014–2016, I led a team at the University of Virginia that competed in the DARPA Cyber Grand Challenge (CGC), a contest to build an autonomous supercomputer to automatically analyze vulnerable binaries and patch them, all without human intervention. Our entry, produced in collaboration with GrammaTech, Inc., finished second, winning a $1M prize. I am also leading a DARPA project, "Double Helix: High Assurance N-variant Systems." Double Helix is a binary analysis and transformation system that processes binary applications and produces variants with diverse binary structures that are intended to be deployed within a multi-variant system. A unique aspect of Double Helix is that it employs structured diversity to guarantee that variants behave differently, yielding high-assurance systems. Both CGC and Double Helix rely on high-precision binary analysis and rewriting to reverse engineer and transform binaries.
Applications written in C++ are prone to memory corruption and type confusion vulnerabilities because the C++ programming language enforces neither memory nor type safety. Memory corruption is caused, e.g., by buffer overflows or by using already freed memory. Type confusion happens when the runtime type of an object differs from the statically expected type due to an incorrect type cast. Type casting is a core feature of C++ that enables modularity, but for performance, the majority of casts are checked only statically at compile time, due to the expense of the C++ runtime type checking mechanism.
Due to such low-level errors, applications can be vulnerable to exploits that allow an adversary to escalate privileges, execute arbitrary code, leak information, or simply deny service to legitimate requests. The two fundamental approaches to securing software are to find and fix the underlying bugs, or to mitigate the effects of vulnerabilities.
In the first session, we will focus on bug finding techniques that use compiler-based instrumentation to discover type safety violations and thereby protect against type confusion vulnerabilities. We introduce HexType, a mechanism that enforces type safety for C++. HexType makes all type casts explicit. Upon allocation, metadata is recorded for each newly created object. Each cast (explicit, implicit, or static) results in a runtime check against the recorded metadata, and type violations result in program termination. Our type safety mechanism integrates into existing testing frameworks and can be combined with a fuzzing framework to discover new vulnerabilities.
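The record-at-allocation, check-at-cast idea can be modeled in a few lines. This is a conceptual sketch only (the real HexType instruments C++ casts inside the compiler); the class names and helpers here are invented.

```python
# Conceptual model of HexType-style checked casts: record each object's true
# type at allocation, verify every downcast against that metadata.

class Base: pass
class Derived(Base): pass
class Unrelated(Base): pass

ALLOC_TYPES = {}                    # object id -> type recorded at allocation

def record_alloc(obj):
    """Instrumented allocation: remember the object's true dynamic type."""
    ALLOC_TYPES[id(obj)] = type(obj)
    return obj

def checked_cast(obj, target_type):
    """Instrumented cast: terminate (raise) on a type-confusing cast."""
    if not issubclass(ALLOC_TYPES[id(obj)], target_type):
        raise TypeError(f"type confusion: {ALLOC_TYPES[id(obj)].__name__} "
                        f"is not a {target_type.__name__}")
    return obj

d = record_alloc(Derived())
u = record_alloc(Unrelated())
checked_cast(d, Base)               # consistent with the recorded type: passes
# checked_cast(u, Derived)          # would raise TypeError: type confusion
```

In the C++ setting the interesting cases are `static_cast` downcasts between sibling classes, which the compiler accepts but which corrupt the object's vtable interpretation at run time; the metadata check above is what catches them.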
Even with substantial fuzzing efforts, some bugs will remain undiscovered. In the second session, we will discuss compiler-based mitigations. We have developed a mechanism to protect against control-flow hijacking attacks that leverages the unique properties of C++ to augment existing Control-Flow Integrity (CFI) mechanisms. CFI is a mitigation that is already deployed in Google Chrome and Microsoft Edge and on the verge of wide deployment. CFI computes, for each call site, the set of allowed targets using a static analysis. At runtime, the actual target is compared to the allowed set, and the program is terminated on a mismatch. The static analysis is prone to over-approximation. We propose to make the analysis dynamic by guaranteeing the integrity of object types at runtime, thereby improving the precision of the analysis and protecting against a set of code reuse attacks that CFI would miss.
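The CFI check itself is simple to state: before each indirect call, verify that the target is in the statically computed allowed set. The sketch below is illustrative only; real CFI is enforced by the compiler on indirect branches in machine code, and the allowed set here is hard-coded rather than derived by analysis.

```python
# Minimal model of a Control-Flow Integrity (CFI) check at an indirect
# call site. All function names are hypothetical.

def safe_handler(x):
    return x + 1

def other_handler(x):
    return x * 2

def attacker_gadget(x):           # code the call site must never reach
    return "pwned"

# The set of allowed targets for this call site, as a static analysis
# would compute it from the program's type and points-to information:
ALLOWED = {safe_handler, other_handler}

def indirect_call(fn, arg):
    if fn not in ALLOWED:         # the runtime CFI check
        raise RuntimeError("CFI violation: unexpected indirect call target")
    return fn(arg)

assert indirect_call(safe_handler, 41) == 42
# indirect_call(attacker_gadget, 0) would raise a CFI violation.
```

The over-approximation problem mentioned above is visible even here: if the static analysis cannot distinguish call sites, `ALLOWED` grows to include every address-taken function, and an attacker can still redirect the call within that set; tightening the set with run-time type information is the refinement the session proposes.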
Mathias Payer is a security researcher and an assistant professor in computer science at Purdue University, leading the HexHive group. His research centers on protecting applications in the presence of vulnerabilities, with an emphasis on memory corruption. He is interested in system security, binary exploitation, user-space software-based fault isolation, binary translation/recompilation, and (application) virtualization.
Before joining Purdue in 2014, he spent two years as a postdoc in Dawn Song's BitBlaze group at UC Berkeley. He graduated from ETH Zurich with a Dr. sc. ETH in 2012, focusing on low-level binary translation and security. He analyzed different exploit techniques and investigated how we can enforce integrity for a subset of data (e.g., code pointers). All prototype implementations are open source. In 2014, he founded the b01lers Purdue CTF team.
For the first lecture, I will talk about the benefits of type and spatial safety as a means of ruling out memory-based attacks. I will talk about ongoing work on the Checked C project, which aims to incrementally retrofit C programs to be spatially safe. I will also talk about my prior work on the Cyclone programming language, which aimed to avoid temporal safety violations (e.g., "use after free"). This work is now reflected in many ways in the Rust programming language.
For my second lecture, I will talk about random (or "fuzz") testing, which aims to find bugs in insecure programs. This is a complementary technology to Checked C, for the parts of the program not protected by Checked C's checks. Fuzzers appear to be very successful at finding bugs, so it's important to understand why they work. Unfortunately, the existing research literature does not provide uniformly good scientific evidence on this front. We surveyed the recent research literature and assessed the experimental evaluations carried out by 32 fuzzing papers. We found problems in every evaluation we considered. We then performed our own extensive experimental evaluation using an existing fuzzer. Our results showed that the general problems we found in existing experimental evaluations can indeed translate to actual wrong or misleading assessments. We developed some guidelines, and open problems, that may address these issues.
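The essence of mutational fuzzing can be sketched in a few lines: repeatedly mutate a seed input and record inputs that crash the target. This is a toy for illustration only; the target program and its planted bug are invented, and real fuzzers such as those evaluated in the survey additionally use coverage feedback to guide mutation, which this sketch omits.

```python
# A tiny mutational fuzzer sketch (no coverage feedback; illustrative only).

import random

def target(data: bytes):
    """A toy parser with a planted bug behind magic bytes."""
    if len(data) >= 3 and data[0] == ord('F') and data[1] == ord('U'):
        if data[2] == ord('Z'):
            raise ValueError("crash: buggy code reached")

def fuzz(seed: bytes, trials: int, rng_seed=0):
    """Mutate one random byte of the seed per trial; collect crashing inputs."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(trials):
        data = bytearray(seed)
        data[rng.randrange(len(data))] = rng.randrange(256)
        try:
            target(bytes(data))
        except ValueError:
            crashes.append(bytes(data))
    return crashes

crashes = fuzz(b"FUZZ", trials=200)
# Mutations that leave the b"FUZ" prefix intact still trigger the bug.
```

Even this toy exhibits the evaluation pitfalls the lecture discusses: the crash count depends heavily on the seed input, the number of trials, and the random seed, which is why comparisons between fuzzers need multiple runs and statistical care.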
Michael Hicks is a Professor in the Department of Computer Science at the University of Maryland. His research focuses on using programming languages and automated analyses to improve the security, reliability, and availability of software. He is the current Chair of the ACM Special Interest Group on Programming Languages (SIGPLAN) and blogs for a general computer science audience, both researchers and practitioners, at http://www.pl-enthusiast.net.
Binary translation, analysis and recompilation with McSema.
Peter Goodman is a senior security engineer at Trail of Bits. He is an expert at designing and implementing binary translation and instrumentation systems. In a past life, he was an academic and before that, a competitive ski racer.
S2E is a platform for in-vivo analysis of software systems that combines a virtual machine with symbolic execution. Users install and run in S2E an unmodified software stack, including programs, libraries, the OS kernel, and drivers. Symbolic execution then automatically explores thousands of paths through the system, allowing users to check desired properties even in corner-case situations and at any level of the software stack. S2E works on unmodified binaries and requires neither writing code in special languages nor modeling the environment.
In this tutorial, you will learn how to use S2E to find bugs and vulnerabilities in Windows applications and device drivers. In the first part of the tutorial, you will learn the basics of symbolic execution with S2E with hands-on testing of Windows applications. In the second part of the tutorial, we will cover Windows kernel driver testing and show how S2E can help you build well-tested drivers with robust error recovery code using symbolic fault injection. You will try integrated code coverage tools and profilers to quickly diagnose symbolic execution bottlenecks, which you will then resolve with techniques such as state merging and function modeling. Finally, we will show how to automatically generate proofs of vulnerabilities with S2E.
To follow this tutorial optimally, you will need a Windows installation (either in a VM or native) with Visual Studio 2017 and the Windows Driver Kit version 1709. You will use that to hack and build the sample drivers we provide. We will provide remote access to Linux machines with S2E and guest images pre-installed. You may of course install S2E and build the Windows images on your own machine ahead of time if you wish; the process is fully automated.
Vitaly Chipounov is co-founder and chief architect at Cyberhaven, where he is designing novel security products to keep enterprise data safe from malware and malicious insiders. He got his MSc and PhD at EPFL, Switzerland, at the Dependable Systems Laboratory led by Prof. George Candea. During his PhD, he built S2E – a platform for multi-path analysis of software systems – currently used by dozens of researchers and companies around the world to build tools that find bugs and security vulnerabilities in computer systems. He participated in the DARPA Cyber Grand Challenge as part of the CodeJitsu team, where S2E was one of the fastest at finding vulnerabilities and generating exploits.
Cyber attacks are becoming increasingly targeted and stealthy. For example, advanced targeted attacks (also called advanced persistent threats or APTs) tend to be stealthy, low-and-slow, and sometimes disguised via psychosocial campaigns. Such trends in cyber attacks call not only for proactive technologies that prevent or deter attacks, but also reactive technologies that detect and investigate ongoing or finished attacks.
In this mini-course, we focus on the investigation – or forensics – of cyber attacks, with the goals of (i) understanding the intent, strategy, and detailed steps and targets of an attack, (ii) collecting various types of evidence for possible legal proceedings, and (iii) revealing hidden impacts of an attack to minimize loss and to prevent similar attacks in the future. We will cover two kinds of cyber forensics techniques: (1) temporal forensics, which analyzes log data to identify events related to an attack and their causal relations; and (2) memory forensics, which analyzes a memory snapshot of a system to extract critical information. Students will get a chance to learn about the state of the art and conduct labs on realistic attack forensics.
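The temporal forensics idea can be sketched concretely: build a causal graph from audit log events, then backtrack from a detection point to recover the attack's provenance chain. The log format, entity names, and scenario below are all hypothetical, invented purely to illustrate the technique.

```python
# Sketch of log-based (temporal) forensics: causal graph + backward tracking.

from collections import defaultdict

# Each event: (timestamp, source entity, action, target entity)
LOG = [
    (1, "browser",  "download", "mail.exe"),
    (2, "mail.exe", "spawn",    "dropper"),
    (3, "user",     "edit",     "report.doc"),   # benign, unrelated activity
    (4, "dropper",  "write",    "passwords.db"),
]

def build_graph(log):
    """Causal edges point from cause to effect, in timestamp order."""
    graph = defaultdict(set)
    for _, src, _, dst in sorted(log):
        graph[src].add(dst)
    return graph

def backtrack(graph, detection_point):
    """Return every entity that causally precedes the detection point."""
    reverse = defaultdict(set)
    for src, dsts in graph.items():
        for dst in dsts:
            reverse[dst].add(src)
    seen, stack = set(), [detection_point]
    while stack:
        node = stack.pop()
        for cause in reverse[node]:
            if cause not in seen:
                seen.add(cause)
                stack.append(cause)
    return seen

causes = backtrack(build_graph(LOG), "passwords.db")
# The attack chain browser -> mail.exe -> dropper is recovered, while the
# benign user -> report.doc activity is correctly excluded.
```

Real systems face the same structure at vastly larger scale, where the hard problems are dependency explosion (long-running processes touch everything) and log reduction, which the labs explore on realistic data.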
Xiangyu Zhang is a professor and University Scholar at Purdue University. He works on dynamic and static software analysis and their applications in software security, forensic analysis, software debugging, and testing. He has received the 2006 ACM SIGPLAN Distinguished Doctoral Dissertation Award, an NSF CAREER Award, ACM SIGSOFT Distinguished Paper Awards, the Best Student Paper Award at USENIX Security '14, the Best Paper Award at CCS '15, and Distinguished Paper Awards at NDSS '16 and USENIX Security '17. He has also co-supervised a dissertation that won the ACM SIGSAC Doctoral Dissertation Award in 2017.
While the digital world connects devices, people, and things at a speed beyond human imagination, hostile digital environments have come to dominate everything from consumer devices to home networks, the public Internet, cloud and web services, and the Internet of Things, and traditional security models are inadequate to address the emerging threats and attacks. The anything/anytime/anywhere nature of modern connectivity makes software security and digital asset protection much more challenging and critical.
This lecture discusses trends in threats and security pain points, new challenges to information security, and white-box software attack scenarios. It introduces software security immunity and security patterns (abstracted from many application domains in terms of use cases, vulnerability and threat analysis, and security solutions), as well as the security lifecycle of digital applications, which mandates protection from creation and development, through distribution, and ultimately to consumption as applications are deployed and maintained in the field. Moreover, we introduce homomorphic obfuscation technology and detail certain software protection technologies currently available on the market as a guide to the state of the art. Finally, we will briefly cover AI and ML security problems and the real challenges they pose to cyber-security for connected and intelligent vehicles.
Mr. Gu was a co-founder of Cloakware Corporation, which was acquired by Irdeto in 2007. Since then, as a chief architect and senior research director at Irdeto, Mr. Gu has led the development of the next-generation Cloakware technology as well as research collaborations with communities worldwide. He has visited over 40 universities and research institutes in North America, Europe, and Asia, has organized international security forums (workshops, summer schools, and an association), and is an active speaker at many international conferences and workshops promoting software security and protection. Mr. Gu was invited to be a guest professor of Northwest University in China. As a native of China, he often represents Irdeto in business development and research collaboration activities there. Prior to joining Cloakware, Mr. Gu worked as a senior scientist and architect at Nortel Networks. Before that, he was a visiting professor at the School of Computer Science of McGill University in Montreal, Canada, from 1988 to 1990, and before relocating to Canada he was a professor in the Computer Science Department at Northwest University in China. Mr. Gu received the First Outstanding Young Scientists Foundation Award from the Chinese Academy of Sciences in 1985. He has about four decades of software research and development expertise and has published more than two dozen patents and patent applications.
ISSISP 2018 will be held at University House on the grounds of the Australian National University in Canberra, the nation's capital. Canberra is well served by domestic flights from the major Australian airports, and internationally from New Zealand, Singapore, and Qatar.