CISA Releases Analysis of FY23 Risk and Vulnerability Assessments
The Cybersecurity & Infrastructure Security Agency (CISA) has released an analysis and infographic detailing the findings from the 143 risk…
CSIAC collects and publishes articles related to our technical focus areas on the web to share with the DoD community.
With the growing integration of artificial intelligence (AI) in cybersecurity, this article applies the economic principles of substitution and elasticity of scale to evaluate their impact on the return on security investment. Recognizing both the potential of AI technologies to substitute for human labor and traditional cybersecurity mechanisms and the cost ramifications of scaling AI solutions within cybersecurity frameworks, the study contributes to a theoretical understanding of the financial and operational dynamics of AI in cybersecurity and provides valuable insights for practitioners in the public and private sectors. The analysis highlights ways in which AI technologies can redefine economic outcomes in cybersecurity efforts and offers strategic recommendations for optimizing the economic efficiency and effectiveness of AI in cybersecurity, emphasizing a dynamic, nuanced approach to AI investment and deployment.
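To picture the kind of return-on-security-investment (ROSI) comparison this line of reasoning implies, the minimal Python sketch below contrasts a baseline control set with an AI-assisted alternative whose marginal cost falls with scale. Every figure (annual loss expectancy, mitigation ratios, labor and scaling costs) is a hypothetical placeholder, not a value from the study.

```python
# Illustrative ROSI comparison under substitution and scaling assumptions.
# All inputs are hypothetical; the study itself reports no specific figures.

def rosi(ale: float, mitigation_ratio: float, solution_cost: float) -> float:
    """Return on security investment: (risk reduction - cost) / cost."""
    risk_reduction = ale * mitigation_ratio
    return (risk_reduction - solution_cost) / solution_cost

annual_loss_expectancy = 2_000_000   # hypothetical expected annual loss

# Hypothetical baseline: human analysts plus traditional tooling.
baseline_cost = 500_000              # labor + licensing per year
baseline_mitigation = 0.60           # fraction of expected loss avoided

# Hypothetical AI-assisted alternative: higher fixed cost, but a low
# marginal cost per protected asset, so total cost grows slowly with scale.
assets = 10_000
ai_cost = 300_000 + 15 * assets
ai_mitigation = 0.75

print(f"Baseline ROSI:    {rosi(annual_loss_expectancy, baseline_mitigation, baseline_cost):.2f}")
print(f"AI-assisted ROSI: {rosi(annual_loss_expectancy, ai_mitigation, ai_cost):.2f}")
```

Under these made-up numbers the AI-assisted option yields a higher ROSI once deployment is large enough to amortize its fixed cost, which is the scale effect the abstract points to.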
When performing defense system analysis with simulation models, a great deal of time and effort is expended creating representations of real-world scenarios in U.S. Department of Defense (DoD) simulation tools. Once these models have been created and validated, however, analysts rarely extract all the knowledge and insight they can yield. They are limited to simple explorations because they lack the time and training to perform more complex analyses manually, and they lack software integrated with their simulation tools to automate those analyses.
In the digital age, the cyber domain has become an intricate network of systems and interactions that underpin modern society. Sim2Real techniques, originally developed with notable success in domains such as robotics and autonomous driving, are recognized for their ability to bridge the gap between simulated environments and real-world applications. While Sim2Real has thrived in those domains, its potential implications and applications within the broader cyber domain remain relatively unexplored. This article examines the emerging intersection of Sim2Real techniques and the cyber realm, exploring their challenges, potential applications, and significance in enhancing our understanding of this complex landscape.
Neuromorphic computing systems are desirable for several applications because they achieve accuracy similar to graphics processing unit (GPU)-based systems at a fraction of the size, weight, power, and cost (SWaP-C). Because of this, the feasibility of developing a real-time cybersecurity system for high-performance computing (HPC) environments using full-precision/GPU and reduced-precision/neuromorphic technologies was previously investigated. That work was the first to compare full-precision and neuromorphic computing on the same data and neural network, and to compare Intel's and BrainChip's neuromorphic offerings. Results were promising, with up to 93.7% accuracy in multiclass classification across eight attack types and one benign class.
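The nine-way classification setup described above (eight attack classes plus one benign class) can be pictured with the short PyTorch sketch below. The feature width, layer sizes, and synthetic data are placeholders; the original work's architecture, dataset, and neuromorphic conversion step are not reproduced here.

```python
# Minimal sketch of 9-way traffic classification (8 attack types + benign).
# Feature width, layer sizes, and the synthetic data are illustrative only.
import torch
import torch.nn as nn

NUM_CLASSES = 9      # eight attack types plus one benign class
NUM_FEATURES = 32    # hypothetical flow/telemetry feature vector width

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_CLASSES),
)

# Synthetic stand-in for labeled HPC network telemetry.
x = torch.randn(1024, NUM_FEATURES)
y = torch.randint(0, NUM_CLASSES, (1024,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                 # brief training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Multiclass accuracy, the metric quoted above (93.7% in the study).
with torch.no_grad():
    accuracy = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"accuracy: {accuracy:.1%}")
```

A full-precision model such as this would then be quantized or converted to spikes for deployment on neuromorphic hardware, which is the comparison the study actually performs.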
Zero Trust Architecture (ZTA) has become a mainstream information security philosophy, and many commercial enterprises are in varying stages of adopting and implementing it. Similarly, federal policy has moved toward ZTA, motivated by actions such as Executive Order 14028 and guided by NIST SP 800-207 and CISA's Zero Trust Maturity Model.
GAITHERSBURG, Md. — When we need to show proof of identity, we might reach for our driver’s license — or…
WASHINGTON – The Department of Homeland Security (DHS) Science and Technology Directorate (S&T) announced a new solicitation seeking Software Artifact…
The Defense Information Systems Agency (DISA) is advancing the migration of users to its modernized network, DoDNet, as part of…
Memory safety vulnerabilities are the most prevalent type of disclosed software vulnerability and affect a computer’s memory in two primary…
GAITHERSBURG, Md. — The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has finalized its principal set…
Amid the skyrocketing popularity of large language models (LLMs), researchers at Lawrence Livermore National Laboratory are taking a closer look…