
CERT/CC Blog

Vulnerability Insights

Latest Posts

Vulnonym: Stop the Naming Madness!

Leigh Metcalf

Spectre. Meltdown. Dirty Cow. Heartbleed. All of these are vulnerabilities that were named by humans, sometimes for maximum impact factor or marketing. Consequently, not every named vulnerability is a severe vulnerability despite what some researchers want you to think. Sensational names are often the tool of the discoverers to create more visibility for their work. This is an area of concern for the CERT/CC as we attempt to reduce any fear, uncertainty, and doubt for...

Adversarial ML Threat Matrix: Adversarial Tactics, Techniques, and Common Knowledge of Machine Learning

Jonathan Spring

My colleagues Nathan VanHoudnos, April Galyardt, Allen Householder, and I would like you to know that today Microsoft and MITRE are releasing their Adversarial Machine Learning Threat Matrix. This is a collaborative effort to apply MITRE's ATT&CK framework to securing production machine learning systems. You can read more at Microsoft's blog and MITRE's blog, as well as find a complete copy of the matrix on GitHub. We hope that you will join us in providing...

Snake Ransomware Analysis Updates

Kyle O'Meara

In January 2020, Sentinel Labs published two reports on Snake (also known as Ekans) ransomware.[1][2] The Snake ransomware gained attention due to its ability to terminate specific industrial control system (ICS) processes. After reading the reports, I wanted to expand the corpus of knowledge and provide OT and IT network defenders with increased defense capabilities against Snake. The key takeaways from the Sentinel Labs reports for additional analysis were the hash of the ransomware and...

Bridging the Gap Between Research and Practice

Leigh Metcalf

A fundamental goal for a federally funded research and development center (FFRDC) is to bridge the gap between research and practice for government customers. At the CERT Division of the Software Engineering Institute (SEI), we've taken a step beyond that and decided that, in cybersecurity, we should be bridging the gap for all researchers and practitioners. To help achieve this goal, I decided that a journal would be an important step. The Association for Computing...

Security Automation Begins at the Source Code

Vijay Sarvepalli

Hi, this is Vijay Sarvepalli, Information Security Architect in the CERT Division. On what seemed like a normal day at our vulnerability coordination center, one of my colleagues asked me to look into a vulnerability report for pppd, the open source Point-to-Point Protocol daemon. At first glance, this vulnerability had the potential to affect multiple vendors throughout the world. These widespread cases usually have a prolonged coordination timeline. They typically involve multiple vendors on the one...

Comments on NIST IR 8269: A Taxonomy and Terminology of Adversarial Machine Learning

Jonathan Spring

The U.S. National Institute of Standards and Technology (NIST) recently held a public comment period on its draft report proposing a taxonomy and terminology of Adversarial Machine Learning (AML). AML sits at the intersection of many specialties of the SEI. Resilient engineering of Machine Learning (ML) systems requires good data science, good software engineering, and good cybersecurity. Our colleagues have suggested 11 foundational practices of AI engineering. In applications of ML to cybersecurity, we have...

Machine Learning in Cybersecurity

Jonathan Spring

We recently published a report that outlines the questions decision makers should ask of machine-learning practitioners before adopting artificial intelligence (AI) or machine learning (ML) tools as cybersecurity solutions. My coauthors are Joshua Fallon, April Galyardt, Angela Horneman, Leigh Metcalf, and Edward Stoner. Our goal with the report is chiefly educational, and we hope it can act like an ML-specific Heilmeier catechism and serve as...
