Interests

Empirical software engineering, software security, collaborative software development, open source development, socio-technical factors, metrics and measurement, applied machine learning and data mining

Bio

Andy has been an assistant professor of Software Engineering at RIT since 2011. Before that, he received his PhD in Computer Science from North Carolina State University in Raleigh, North Carolina, under Laurie Williams. His doctoral dissertation, titled Investigating the Relationship between Developer Collaboration and Software Security, involved formulating metrics to examine the socio-technical structure of software development teams using social network analysis. His research has resulted in many top-tier academic publications. He also earned his Master's at NCSU in 2008. Andy received his Bachelor of Arts from Calvin College in Grand Rapids, MI, where he double-majored in Computer Science and Mathematics.

Recent Publications

Analyzing Security Data Andrew Meneely The Art and Science of Analyzing Software Data pp. 213–227
Security is a challenging and strange property of software. Security is not about understanding how a customer might use the system; security is about ensuring that an attacker cannot abuse the system. Instead of defining what the system should do, security is about ensuring that the system does not do something malicious. As a result, applying traditional software analytics to security leads to some unique challenges and caveats. In this chapter, we will discuss four "gotchas" of analyzing security data, along with vulnerabilities and severity scoring. We will describe a method commonly used for collecting security data in open source projects. We will describe some of the state-of-the-art in analyzing security data today.
 @incollection{MeneelyASD2015,
  author = {Meneely, Andrew},
  title = {Analyzing Security Data},
  booktitle = {The Art and Science of Analyzing Software Data},
  publisher = {Elsevier},
  year = {2015},
  pages = {213--227},
  doi = {},
  abstract = {Security is a challenging and strange property of software. Security is not about understanding how a customer might use the system; security is about ensuring that an attacker cannot abuse the system. Instead of defining what the system should do, security is about ensuring that the system does not do something malicious. As a result, applying traditional software analytics to security leads to some unique challenges and caveats. In this chapter, we will discuss four "gotchas" of analyzing security data, along with vulnerabilities and severity scoring. We will describe a method commonly used for collecting security data in open source projects. We will describe some of the state-of-the-art in analyzing security data today.}
}
 

An Insider Threat Activity in a Software Security Course Daniel E. Krutz, Andrew Meneely, & Samuel A. Malachowsky 2015 IEEE Frontiers in Education Conference (FIE) pp. to appear

Software development teams face a critical threat to the security of their systems: insiders. A malicious insider is a person who violates an authorized level of access in a software system. Unfortunately, when creating software, developers do not typically account for insider threats. Students learning software development are unaware of the impacts of malicious actors and are far too often untrained in prevention methods against them. A few of the defensive mechanisms to protect against insider threats include eliminating system access once an employee leaves an organization, enforcing the principle of least privilege, code reviews, and constant monitoring for suspicious activity. At the Department of Software Engineering at the Rochester Institute of Technology, we require a course titled Engineering of Secure Software and have created an activity designed to prepare students for the problem of insider threats. At the beginning of this activity, student teams are given the task of designing a moderately sized secure software system; unbeknownst to the rest of the team, one member secretly acts as a malicious insider. The goal of this insider is to manipulate the team into creating a flawed system design that would allow attackers to perform malicious activities once the system has been created. When the insider is revealed at the conclusion of the project, students discuss countermeasures regarding the malicious actions the insiders were able to plan or complete, along with methods of prevention that may have been employed by the team to detect the malicious developer. In this paper, we describe the activity along with the results of a survey. We discuss the benefits and challenges of the activity with the goal of giving other instructors the tools they need to conduct this activity at their institution. While many institutions do not offer courses in computer security, this self-contained activity may be used in any computing course to reinforce the importance of protecting against insider threats.
 @inproceedings{KrutzFIE2015,
  author = {Krutz, Daniel E. and Meneely, Andrew and Malachowsky, Samuel A.},
  title = {An Insider Threat Activity in a Software Security Course},
  booktitle = {2015 IEEE Frontiers in Education Conference (FIE)},
  year = {2015},
  pages = {to appear},
  doi = {},
  abstract = {Software development teams face a critical threat to the security of their systems: insiders. A malicious insider is a person who violates an authorized level of access in a software system. Unfortunately, when creating software, developers do not typically account for insider threats. Students learning software development are unaware of the impacts of malicious actors and are far too often untrained in prevention methods against them. A few of the defensive mechanisms to protect against insider threats include eliminating system access once an employee leaves an organization, enforcing the principle of least privilege, code reviews, and constant monitoring for suspicious activity. At the Department of Software Engineering at the Rochester Institute of Technology, we require a course titled Engineering of Secure Software and have created an activity designed to prepare students for the problem of insider threats. At the beginning of this activity, student teams are given the task of designing a moderately sized secure software system; unbeknownst to the rest of the team, one member secretly acts as a malicious insider. The goal of this insider is to manipulate the team into creating a flawed system design that would allow attackers to perform malicious activities once the system has been created. When the insider is revealed at the conclusion of the project, students discuss countermeasures regarding the malicious actions the insiders were able to plan or complete, along with methods of prevention that may have been employed by the team to detect the malicious developer. In this paper, we describe the activity along with the results of a survey. We discuss the benefits and challenges of the activity with the goal of giving other instructors the tools they need to conduct this activity at their institution. While many institutions do not offer courses in computer security, this self-contained activity may be used in any computing course to reinforce the importance of protecting against insider threats.}
}
 

Do Bugs Foreshadow Vulnerabilities? A Study of the Chromium Project Felivel Camilo, Andrew Meneely, & Meiyappan Nagappan 2015 International Working Conference on Mining Software Repositories pp. to appear

ACM Distinguished Paper
Best Paper MSR 2015
As developers face ever-increasing pressure to engineer secure software, researchers are building an understanding of security-sensitive bugs (i.e. vulnerabilities). Research into mining software repositories has greatly increased our understanding of software quality via empirical study of bugs. However, conceptually vulnerabilities are different from bugs: they represent abusive functionality as opposed to wrong or insufficient functionality commonly associated with traditional, non-security bugs. In this study, we performed an in-depth analysis of the Chromium project to empirically examine the relationship between bugs and vulnerabilities. We mined 374,686 bugs and 703 post-release vulnerabilities over five Chromium releases that span six years of development. Using logistic regression analysis, we examined how various categories of pre-release bugs (e.g. stability, compatibility, etc.) are associated with post-release vulnerabilities. While we found statistically significant correlations between pre-release bugs and post-release vulnerabilities, we also found the association to be weak. Number of features, SLOC, and number of pre-release security bugs are, in general, more closely associated with post-release vulnerabilities than any of our non-security bug categories. In a separate analysis, we found that the files with highest defect density did not intersect with the files of highest vulnerability density. These results indicate that bugs and vulnerabilities are empirically dissimilar groups, warranting the need for more research targeting vulnerabilities specifically.
 @inproceedings{CamiloMSR2015,
  author = {Camilo, Felivel and Meneely, Andrew and Nagappan, Meiyappan},
  title = {Do Bugs Foreshadow Vulnerabilities? A Study of the Chromium Project},
  booktitle = {2015 International Working Conference on Mining Software Repositories},
  year = {2015},
  pages = {to appear},
  award = { ACM Distinguished Paper Award, MSR 2015 Best Paper},
  doi = {},
  abstract = {As developers face ever-increasing pressure to engineer secure software, researchers are building an understanding of security-sensitive bugs (i.e. vulnerabilities). Research into mining software repositories has greatly increased our understanding of software quality via empirical study of bugs. However, conceptually vulnerabilities are different from bugs: they represent abusive functionality as opposed to wrong or insufficient functionality commonly associated with traditional, non-security bugs. In this study, we performed an in-depth analysis of the Chromium project to empirically examine the relationship between bugs and vulnerabilities. We mined 374,686 bugs and 703 post-release vulnerabilities over five Chromium releases that span six years of development. Using logistic regression analysis, we examined how various categories of pre-release bugs (e.g. stability, compatibility, etc.) are associated with post-release vulnerabilities. While we found statistically significant correlations between pre-release bugs and post-release vulnerabilities, we also found the association to be weak. Number of features, SLOC, and number of pre-release security bugs are, in general, more closely associated with post-release vulnerabilities than any of our non-security bug categories. In a separate analysis, we found that the files with highest defect density did not intersect with the files of highest vulnerability density. These results indicate that bugs and vulnerabilities are empirically dissimilar groups, warranting the need for more research targeting vulnerabilities specifically.}
}
 

An Empirical Investigation of Socio-technical Code Review Metrics and Security Vulnerabilities Andrew Meneely, Alberto C. Rodriguez Tejeda, Brian Spates, Shannon Trudeau, Danielle Neuberger, Katherine Whitlock, Christopher Ketant, & Kayla Davis Proceedings of the 6th International Workshop on Social Software Engineering pp. 37–44 2014

One of the guiding principles of open source software development is to use crowds of developers to keep a watchful eye on source code. Eric Raymond declared Linus' Law as "many eyes make all bugs shallow", with the socio-technical argument that high quality open source software emerges when developers combine together their collective experience and expertise to review code collaboratively. Vulnerabilities are a particularly nasty set of bugs that can be rare, difficult to reproduce, and require specialized skills to recognize. Does Linus' Law apply to vulnerabilities empirically? In this study, we analyzed 159,254 code reviews, 185,948 Git commits, and 667 post-release vulnerabilities in the Chromium browser project. We formulated, collected, and analyzed various metrics related to Linus' Law to explore the connection between collaborative reviews and vulnerabilities that were missed by the review process. Our statistical association results showed that source code files reviewed by more developers are, counter-intuitively, more likely to be vulnerable (even after accounting for file size). However, files are less likely to be vulnerable if they were reviewed by developers who had experience participating on prior vulnerability-fixing reviews. The results indicate that lack of security experience and lack of collaborator familiarity are key risk factors in considering Linus' Law with vulnerabilities.
 @inproceedings{MeneelySSE2014,
  author = {Meneely, Andrew and Tejeda, Alberto C. Rodriguez and Spates, Brian and Trudeau, Shannon and Neuberger, Danielle and Whitlock, Katherine and Ketant, Christopher and Davis, Kayla},
  title = {An Empirical Investigation of Socio-technical Code Review Metrics and Security Vulnerabilities},
  booktitle = {Proceedings of the 6th International Workshop on Social Software Engineering},
  series = {SSE 2014},
  year = {2014},
  isbn = {978-1-4503-3227-9},
  location = {Hong Kong, China},
  pages = {37--44},
  numpages = {8},
  url = {http://doi.acm.org/10.1145/2661685.2661687},
  doi = {10.1145/2661685.2661687},
  acmid = {2661687},
  publisher = {ACM},
  address = {New York, NY, USA},
  keywords = {code review, socio-technical, vulnerability},
  abstract = {One of the guiding principles of open source software development is to use crowds of developers to keep a watchful eye on source code.  Eric Raymond declared Linus' Law as "many eyes make all bugs shallow", with the socio-technical argument that high quality open source software emerges when developers combine together their collective experience and expertise to review code collaboratively. Vulnerabilities are a particularly nasty set of bugs that can be rare, difficult to reproduce, and require specialized skills to recognize. Does Linus' Law apply to vulnerabilities empirically? In this study, we analyzed 159,254 code reviews, 185,948 Git commits, and 667 post-release vulnerabilities in the Chromium browser project. We formulated, collected, and analyzed various metrics related to Linus' Law to explore the connection between collaborative reviews and vulnerabilities that were missed by the review process. Our statistical association results showed that source code files reviewed by more developers are, counter-intuitively, more likely to be vulnerable (even after accounting for file size). However, files are less likely to be vulnerable if they were reviewed by developers who had experience participating on prior vulnerability-fixing reviews. The results indicate that lack of security experience and lack of collaborator familiarity are key risk factors in considering Linus' Law with vulnerabilities. }
}
 

When a patch goes bad: Exploring the properties of vulnerability-contributing commits Andrew Meneely, Harshavardhan Srinivasan, Ayemi Musa, Alberto Rodriguez Tejeda, Matthew Mokary, & Brian Spates Proceedings of the 2013 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement pp. 65–74 Oct. 2013

Security is a harsh reality for software teams today. Developers must engineer secure software by preventing vulnerabilities, which are design and coding mistakes that have security consequences. Even in open source projects, vulnerable source code can remain unnoticed for years. In this paper, we traced 68 vulnerabilities in the Apache HTTP server back to the version control commits that contributed the vulnerable code originally. We manually found 124 Vulnerability-Contributing Commits (VCCs), spanning 17 years. In this exploratory study, we analyzed these VCCs quantitatively and qualitatively with the over-arching question: "What could developers have looked for to identify security concerns in this commit?" Specifically, we examined the size of the commit via code churn metrics, the amount developers overwrite each other's code via interactive churn metrics, exposure time between VCC and fix, and dissemination of the VCC to the development community via release notes and voting mechanisms. Our results show that VCCs are large: more than twice as much code churn on average than non-VCCs, even when normalized against lines of code. Furthermore, a commit was twice as likely to be a VCC when the author was a new developer to the source code. The insight from this study can help developers understand how vulnerabilities originate in a system so that security-related mistakes can be prevented or caught in the future.
 @inproceedings{MeneelyESEM2013,
  title = {When a patch goes bad: Exploring the properties of vulnerability-contributing commits},
  booktitle = {Proceedings of the 2013 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement},
  series = {ESEM '13},
  location = {Baltimore, MD, USA},
  year = {2013},
  month = oct,
  pages = {65--74},
  numpages = {10},
  doi = {10.1109/ESEM.2013.19},
  author = {Meneely, Andrew and Srinivasan, Harshavardhan and Musa, Ayemi and Tejeda, Alberto Rodriguez and Mokary, Matthew and Spates, Brian},
  keywords = {vulnerability, churn, socio-technical, empirical},
  abstract = {Security is a harsh reality for software teams today. Developers must engineer secure software by preventing vulnerabilities, which are design and coding mistakes that have security consequences. Even in open source projects, vulnerable source code can remain unnoticed for years. In this paper, we traced 68 vulnerabilities in the Apache HTTP server back to the version control commits that contributed the vulnerable code originally. We manually found 124 Vulnerability-Contributing Commits (VCCs), spanning 17 years. In this exploratory study, we analyzed these VCCs quantitatively and qualitatively with the over-arching question: "What could developers have looked for to identify security concerns in this commit?" Specifically, we examined the size of the commit via code churn metrics, the amount developers overwrite each other's code via interactive churn metrics, exposure time between VCC and fix, and dissemination of the VCC to the development community via release notes and voting mechanisms. Our results show that VCCs are large: more than twice as much code churn on average than non-VCCs, even when normalized against lines of code. Furthermore, a commit was twice as likely to be a VCC when the author was a new developer to the source code. The insight from this study can help developers understand how vulnerabilities originate in a system so that security-related mistakes can be prevented or caught in the future.},
  month_numeric = {10}
}