Software Archeology

Archeology is the study of a culture by way of its artifacts. Only instead of studying clay pots and bones, I study software development artifacts. The idea is to understand the culture of a software development team by looking at its bug reports, version control histories, source code, communication logs, documentation, code reviews, vulnerability disclosures, and anything else the team produces in addition to the executable software itself. The goal is to apply empirical methods to a given team, so that we ask "what happened?" instead of "what if?".
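
For a flavor of what that kind of dig looks like, here is a minimal sketch (my own illustration, not a published tool) that asks one archeological question of a repository: how many distinct developers have ever touched each file? It assumes git is installed and that the script is pointed at a local repository.

    # A minimal archeological dig: for each file in a git repository,
    # count how many distinct authors have ever touched it.
    # Assumes `git` is on the PATH; run from (or point it at) a repo.
    import subprocess
    from collections import defaultdict

    def authors_per_file(repo_path="."):
        # "@@@%an" marks author lines so they can't be mistaken for
        # file paths; --name-only lists the files each commit modified.
        log = subprocess.run(
            ["git", "-C", repo_path, "log",
             "--pretty=format:@@@%an", "--name-only"],
            capture_output=True, text=True, check=True,
        ).stdout
        touched_by = defaultdict(set)
        author = None
        for line in log.splitlines():
            if line.startswith("@@@"):
                author = line[3:]
            elif line.strip():
                touched_by[line].add(author)
        return touched_by

    if __name__ == "__main__":
        by_file = authors_per_file()
        busiest = sorted(by_file.items(), key=lambda kv: -len(kv[1]))[:10]
        for path, devs in busiest:
            print(f"{len(devs):3d} authors  {path}")

Even a toy question like this one is empirical: the answer comes from what the team actually did, not from what anyone remembers doing.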

Developer Collaboration and Software Security

Behind every piece of software is a team of people. In large software development projects, no single person can possibly know every aspect of the system, so the team must self-organize into various structures of communication and coordination. Lack of team cohesion, miscommunications, and misguided effort can lead to all kinds of problems, including security vulnerabilities. In my research, I focus on examining the statistical relationships between development team structure, developer activity, and security vulnerabilities.

This work has turned up some interesting associations, as well as some useful predictive models. I have published a few papers on the topic, and I highly recommend glancing at the abstracts to see the kinds of results I've come across.
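
To give a concrete (and deliberately toy) sense of what "team structure" means here: one common approximation is to connect two developers whenever they changed the same file, then look at who sits at the center of the resulting network. The sketch below uses made-up data and is not the model from any particular paper.

    # A toy developer collaboration network (not the models from the
    # papers): two developers are connected when they changed the same
    # file. In a real study, touched_by would come from version control
    # history rather than being hard-coded.
    from collections import defaultdict
    from itertools import combinations

    touched_by = {
        "auth/login.c":   {"alice", "bob", "carol"},
        "auth/session.c": {"alice", "bob"},
        "ui/render.c":    {"dave"},
    }

    neighbors = defaultdict(set)
    for devs in touched_by.values():
        for a, b in combinations(sorted(devs), 2):
            neighbors[a].add(b)
            neighbors[b].add(a)

    # A developer with many connections sits at the center of the team;
    # an isolated developer (dave, here) may signal a coordination gap.
    all_devs = set().union(*touched_by.values())
    for dev in sorted(all_devs):
        print(f"{dev}: collaborates with {sorted(neighbors[dev])}")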

Vulnerability of the Day

Our future programmers need to know about the most relevant code-level vulnerabilities in the wild today. To that end, Vulnerability of the Day is a pedagogically-curated collection of vulnerability demonstrations for undergraduate software engineering students. That's a lot of fancy talk for a bunch of neat code demos: brief ones that an instructor can run at the beginning of every day of a software engineering class.
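
To give the flavor of a single demo (this example is my own illustration, not necessarily one from the collection): SQL injection, shown broken and then fixed.

    # In the spirit of a VotD demo: SQL injection, plus the fix.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    db.execute("INSERT INTO users VALUES ('alice', 0)")

    name = "alice' OR '1'='1"  # attacker-controlled input

    # Vulnerable: string concatenation lets the input rewrite the query.
    rows = db.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
    print("injected query returned:", rows)       # returns every row

    # Fixed: a parameterized query treats the input as data, not SQL.
    rows = db.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
    print("parameterized query returned:", rows)  # returns nothing

The whole demo fits in a minute or two of class time, which is the point: small enough to run daily, memorable enough to stick.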

Go check out the open source project on GitHub: votd.github.io

Software Metrics Validation

Does "lines of code" really measure software size? How do we know? Software metrics have long been studied, but are often criticized for not being fully "validated". Yet, we need some form of software measurement to perform sound statistical research with both practical and theoretical implications. One of the focuses of my research has been examining the philsophical underpinnings of software metrics validation, such as software metrics validation criteria. Stay tuned for publications, but until then, here are some thought-provoking questions that I study:

  • What is the purpose of metrics: to tell us about the very nature of software, or to satisfy specific business goals? What happens if those two interests are in conflict?
  • How many empirical case studies need to be performed to declare a metric "valid"?
  • If a metric is shown to be a predictor of expensive, post-release defects, does it tell us how we should develop software?
  • If I am proposing a new software metric, how should I demonstrate that it is a valid metric?
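
To make that first question concrete, here is a small sketch (mine, purely illustrative) that measures the "size" of the same Python file under three defensible definitions of "lines of code". The three numbers rarely agree, which is precisely why a metric's validity has to be argued rather than assumed.

    # Three defensible definitions of "lines of code" for a Python file.
    # They rarely agree, which is exactly why "LOC measures size" is a
    # claim that needs validating rather than asserting.
    import sys

    def loc_three_ways(path):
        with open(path) as f:
            lines = f.readlines()
        physical  = len(lines)                          # every line
        non_blank = sum(1 for l in lines if l.strip())  # skip blanks
        source    = sum(1 for l in lines                # skip blanks
                        if l.strip()                    # and comments
                        and not l.strip().startswith("#"))
        return physical, non_blank, source

    if __name__ == "__main__":
        p, nb, s = loc_three_ways(sys.argv[1])
        print(f"physical={p}  non-blank={nb}  source={s}")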

Protection Poker

One project I've been involved in is a new agile practice called "Protection Poker". The goal of this "game" is to assess the risk of security vulnerabilities in a project while the software is still being developed. More importantly, however, Protection Poker provides a structured way for development teams to have valuable discussions about the security concerns of their product.
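
In simplified form, the arithmetic behind the game works something like the sketch below: for each requirement, the team estimates the ease of attacking it and the value of the assets it touches, and the product yields a relative risk ranking. (The requirements and point values here are made up for illustration; the real benefit is the discussion that produces the estimates.)

    # Protection Poker's arithmetic, stripped to its core: for each
    # requirement the team estimates ease of attack and the value of
    # the assets it exposes; the product gives a relative risk score.
    # (Requirements and point values are made up for illustration.)
    estimates = {
        "password reset":  {"ease": 8, "value": 13},
        "profile avatars": {"ease": 5, "value": 2},
        "payment history": {"ease": 3, "value": 13},
    }

    def risk(est):
        return est["ease"] * est["value"]

    for name, est in sorted(estimates.items(),
                            key=lambda kv: risk(kv[1]), reverse=True):
        print(f"risk={risk(est):4d}  {name}")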

Test-Driven Development

One of my passions is Test-Driven Development, or any form of automated testing. I use it in my everyday development whenever possible, and I try to get my students addicted to it, too!
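
For the uninitiated, the rhythm in miniature looks something like this toy example: the tests are written first and fail ("red"), and then just enough code is written to make them pass ("green").

    # The TDD rhythm in miniature: the tests below were written first
    # and failed ("red"); slugify() was then written to pass them
    # ("green").
    import re
    import unittest

    def slugify(title):
        # Just enough implementation to satisfy the tests, and no more.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    class TestSlugify(unittest.TestCase):
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_punctuation_is_dropped(self):
            self.assertEqual(slugify("TDD: Red, Green!"), "tdd-red-green")

    if __name__ == "__main__":
        unittest.main()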