Each January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year's top 10 list highlights the SEI's work in software acquisition, artificial intelligence, large language models, secure coding, insider risk mitigation, and enterprise risk management. The posts, which were published between January 1, 2024, and December 31, 2024, are presented below in reverse order based on the number of visits.
by Ipek Ozkaya and Brigid O’Hearn
The fiscal year 2022 National Defense Authorization Act (NDAA) Section 835, "Independent Study on Technical Debt in Software-Intensive Systems," required the Secretary of Defense to engage a federally funded research and development center (FFRDC) "to study technical debt in software-intensive systems." To fulfill this requirement and lead this work, the Department of Defense (DoD) selected the Carnegie Mellon University (CMU) Software Engineering Institute (SEI), which is a recognized leader in the practice of managing technical debt. In accordance with NDAA Section 835, the goal of the study was to provide, among other things, analyses and recommendations on quantitative measures for assessing technical debt, current and best practices for measuring and managing technical debt and its associated costs, and practices for reducing technical debt.
Our team spent more than a year conducting the independent study. The report we produced describes the conduct of the study, summarizes the technical trends observed, and presents the resulting recommendations. In this SEI Blog post, we summarize several recommendations that apply to the DoD and other development organizations seeking to analyze, manage, and reduce technical debt. You can find a complete discussion of the study methodology, findings, and recommendations in the SEI's Report to the Congressional Defense Committees on National Defense Authorization Act (NDAA) for Fiscal Year 2022 Section 835 Independent Study on Technical Debt in Software-Intensive Systems.
Read the post in its entirety.
by Douglas Schmidt and John E. Robert
There is considerable interest in using generative AI tools, such as large language models (LLMs), to revolutionize industries and create new opportunities in the commercial and government domains. For many Department of Defense (DoD) software acquisition professionals, the promise of LLMs is appealing, but there is also a deep-seated concern that LLMs do not address today's challenges due to privacy concerns, the potential for inaccuracy in the output, and uncertainty about how to use LLMs effectively and responsibly. This blog post is the second in a series dedicated to exploring how generative AI, particularly LLMs such as ChatGPT, Claude, and Gemini, can be applied within the DoD to enhance software acquisition activities.
Our first blog post in this series presented 10 Benefits and 10 Challenges of Applying LLMs to DoD Software Acquisition and suggested specific use cases where generative AI can provide value to software acquisition activities. This second blog post expands on that discussion by showing specific examples of using LLMs for software acquisition in the context of a document summarization experiment, as well as codifying the lessons we learned from this experiment and our related work on applying generative AI to software engineering.
Read the post in its entirety.
by Robin Ruefle
Incident response is a critical need throughout government and industry as cyber threat actors look to compromise critical assets within organizations, with cascading, often catastrophic, effects. In 2021, for example, a hacker allegedly accessed a Florida water treatment plant's computer systems and poisoned the water supply. Across the U.S. critical national infrastructure, 77 percent of organizations have seen a rise in insider-driven cyber threats over the last three years. The 2023 IBM Cost of a Data Breach report highlights the critical role of having a well-tested incident response plan. Companies without a tested plan in place will face 82 percent higher costs in the event of a cyber attack, compared to those that have implemented and tested such a plan.
Researchers in the SEI CERT Division compiled 10 lessons learned from our more than 35 years of creating and working with incident response and security teams throughout the globe. These lessons are relevant to incident response teams contending with an ever-evolving cyber threat landscape. In honor of the CERT Division (also referred to as the CERT Coordination Center in our work with the Forum of Incident Response and Security Teams) celebrating 35 years of operation, in this blog post we look back at some of the lessons learned from our Cyber Security Incident Response Team (CSIRT) capacity-building experiences that also apply to other areas of security operations.
Read the post in its entirety.
by Roger Black
According to a 2023 Ponemon study, the number of reported insider risk incidents and the costs associated with them continue to rise. With more than 7,000 reported cases in 2023, the average insider risk incident cost organizations over $600,000. To help organizations assess their insider risk programs and identify potential vulnerabilities that could lead to insider threats, the SEI CERT Division has released two tools available for download on its website. Previously available only to licensed partners, the Insider Threat Vulnerability Assessment (ITVA) and Insider Threat Program Evaluation (ITPE) toolkits provide practical methods to assess your organization's ability to manage insider risk. This post describes the purpose and use of the toolkits, with a focus on the workbook components of the toolkits, which are the primary methods of program assessment.
Read the post in its entirety.
by David Svoboda
In recent weeks several vulnerabilities have rocked the Rust community, causing many to question the safety of the borrow checker, or of Rust in general. In this post, we examine two such vulnerabilities: the first is CVE-2024-3094, which involves some malicious files in the xz library, and the second is CVE-2024-24576, which involves command-injection vulnerabilities in Windows. How did these vulnerabilities arise, how were they discovered, and how do they involve Rust? More importantly, could Rust be susceptible to more similar vulnerabilities in the future?
Last year we published two blog posts about the security provided by the Rust programming language. We discussed the memory safety and concurrency safety provided by Rust's borrow checker. We also described some of the limitations of Rust's security model, such as its limited ability to prevent various injection attacks, and the unsafe keyword, which allows developers to bypass Rust's security model when necessary. Back then, our conclusion was that no language could be fully secure, yet the borrow checker did provide significant, albeit limited, memory and concurrency safety when not bypassed with the unsafe keyword. We also examined Rust through the lens of source and binary analysis, gauged its stability and maturity, and learned that the constraints and expectations for language maturity have slowly evolved over the decades. Rust is moving in the direction of maturity today, which is distinct from what was considered a mature programming language in 1980. Furthermore, Rust has made some notable stability guarantees, such as promising to deprecate rather than delete any crates in crates.io to avoid repeating the Leftpad fiasco.
Read the post in its entirety.
by Ipek Ozkaya, Douglas Schmidt, and Michael Hilton
The initial surge of excitement and fear surrounding generative artificial intelligence (AI) is gradually evolving into a more realistic perspective. While the jury is still out on the exact return on investment and tangible improvements from generative AI, the rapid pace of change is challenging software engineering education and curricula. Educators have had to adapt to the ongoing developments in generative AI to offer a practical perspective to their students, balancing awareness, healthy skepticism, and curiosity.
In a recent SEI webcast, researchers discussed the impact of generative AI on software engineering education. SEI and Carnegie Mellon University experts spoke about the use of generative AI in the curriculum and the classroom, discussed how faculty and students can most effectively use generative AI, and considered concerns about ethics and equity when using these tools. The panelists took questions from the audience and drew on their experience as educators to speak to the critical questions generative AI raises for software engineering education.
This blog post features an edited transcript of responses from the original webcast. Some questions and answers have been rearranged and revised for clarity.
Read the post in its entirety.
by Jeff Gennari, Shing-hon Lau, and Samuel J. Perl
Large language models (LLMs) have shown a remarkable ability to ingest, synthesize, and summarize knowledge while simultaneously demonstrating significant limitations in completing real-world tasks. One notable domain that presents both opportunities and risks for leveraging LLMs is cybersecurity. LLMs could empower cybersecurity experts to be more efficient or effective at preventing and stopping attacks. However, adversaries could also use generative artificial intelligence (AI) technologies in kind. We have already seen evidence of actors using LLMs to aid in cyber intrusion activities (e.g., WormGPT, FraudGPT, etc.). Such misuse raises many important cybersecurity-capability-related questions, including
- Can an LLM like GPT-4 write novel malware?
- Will LLMs become critical components of large-scale cyberattacks?
- Can we trust LLMs to provide cybersecurity experts with reliable information?
The answer to these questions depends on the analytic methods chosen and the results they provide. Unfortunately, current methods and techniques for evaluating the cybersecurity capabilities of LLMs are not comprehensive. Recently, a team of researchers in the SEI CERT Division worked with OpenAI to develop better approaches for evaluating LLM cybersecurity capabilities. This SEI Blog post, excerpted from a recently published paper that we coauthored with OpenAI researchers Joel Parish and Girish Sastry, summarizes 14 recommendations to help assessors accurately evaluate LLM cybersecurity capabilities.
Read the post in its entirety.
by John E. Robert and Douglas Schmidt
Department of Defense (DoD) software acquisition has long been a complex and document-heavy process. Historically, many software acquisition activities, such as generating Requests for Information (RFIs), summarizing government regulations, identifying relevant commercial standards, and drafting project status updates, have required considerable human-intensive effort. However, the advent of generative artificial intelligence (AI) tools, including large language models (LLMs), presents a promising opportunity to accelerate and streamline certain aspects of the software acquisition process.
Software acquisition is one of many complex mission-critical domains that may benefit from applying generative AI to augment and/or accelerate human efforts. This blog post is the first in a series dedicated to exploring how generative AI, particularly LLMs like ChatGPT-4, can enhance software acquisition activities. In this post we present 10 benefits and 10 challenges of applying LLMs to the software acquisition process and suggest specific use cases where generative AI can provide value. Our focus is on providing timely information to software acquisition professionals, including defense software developers, program managers, systems engineers, cybersecurity analysts, and other key stakeholders, who operate within challenging constraints and prioritize security and accuracy.
Read the post in its entirety.
by Mark Sherman
The average code sample contains 6,000 defects per million lines of code, and the SEI's research has found that 5 percent of these defects become vulnerabilities. This translates to roughly 3 vulnerabilities per 10,000 lines of code. Can ChatGPT help improve this ratio? There has been much speculation about how tools built on top of large language models (LLMs) might affect software development, more specifically, how they might change the way developers write code and evaluate it.
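The ratio above follows directly from the stated figures; the short check below (our own illustration, using the numbers as given in the post) works through the arithmetic.

```rust
fn main() {
    // Figures as stated above: 6,000 defects per million lines of code,
    // of which 5 percent become vulnerabilities.
    let defects_per_million_loc: u32 = 6_000;
    let vulns_per_million_loc = defects_per_million_loc * 5 / 100;

    // A million lines of code is 100 blocks of 10,000 lines.
    let vulns_per_10k_loc = vulns_per_million_loc / 100;

    assert_eq!(vulns_per_million_loc, 300);
    assert_eq!(vulns_per_10k_loc, 3);
    println!("~{} vulnerabilities per 10,000 lines of code", vulns_per_10k_loc);
}
```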
In March 2023 a team of CERT Secure Coding researchers (the team included Robert Schiela, David Svoboda, and myself) used ChatGPT 3.5 to examine the noncompliant software code examples in our CERT Secure Coding standard, specifically the SEI CERT C Coding Standard. In this post, I present our experiment and findings, which show that while ChatGPT 3.5 has promise, there are clear limitations.
Read the post in its entirety.
by Greg Touhill
The role of the chief information security officer (CISO) has never been more critical to organizational success. The present and near future for CISOs will be marked by breathtaking technical advances, particularly those associated with artificial intelligence technologies being integrated into business functions, as well as emergent legal and regulatory challenges. Continued advances in generative artificial intelligence (AI) will accelerate the proliferation of deepfakes designed to erode public trust in online information and public institutions. Moreover, these challenges will be amplified by an unstable global theater in which nefarious actors and nation states chase opportunities to exploit any potential organizational weakness. Some forecasts have already characterized 2024 as a pressure cooker environment for CISOs. In such an environment, skills are essential. In this post I outline the top 10 skills that CISOs need for 2024 and beyond. These recommendations draw upon my experience as the director of the SEI's CERT Division, as well as my service as the first federal chief information security officer of the United States, leading cyber operations at the U.S. Department of Homeland Security, and my extended military service as a communications and cyberspace operations officer.
Read the post in its entirety.
Looking Ahead in 2025
We publish a new post on the SEI Blog weekly. In the coming months, look for posts highlighting the SEI's work in artificial intelligence, machine learning, cybersecurity, software engineering, and more.