If the idea of following the industry standard of care for your IT projects is offensive to you, there is some good news.
If you’ve ever had to obtain an ‘Authority to Operate’ a Federal Information System under NIST, FISMA, or other security frameworks, you’re painfully aware of the overhead and irrelevancy of creating and auditing your security package.
The US Federal Government operates thousands of FISMA computer systems, and for most of them, at the Moderate classification level, more than 300 controls must be satisfied. Creating a security package manually for each system means assessing those controls across multiple systems, easily producing thousands of pages of documentation that has to be manually created, assessed, and audited every time the system comes up for authorization.
As a security practitioner, this means two things to me.
1. I have to keep staff on hand full time just to satisfy this requirement – staff that could be protecting our systems are really just moving paperwork.
2. The security of the systems that they are reviewing is no longer accurate after the review – and often before the review is even finished. It is truly a point-in-time snapshot that doesn’t represent the dynamic changes a system goes through.
So why do it? Until recently, there really hasn’t been a good way of continuously monitoring the security state of a large, complex system. There are absolutely vendors who promise their product will automate all of this, and some even elevate the tedious work into something more akin to automated chaos; but true continuous authorization has been elusive.
So where is this going?
With the introduction of containers, microservices, and large, scalable, ephemeral infrastructures, we have seen huge structural changes become possible in systems architecture. We used to say that systems should be treated less like pets and more like cattle. But if you consider all of the changes, including the move toward microservice architectures and serverless systems, it’s more deconstructed and ephemeral than ever.
So if the system architecture isolates the code from the infrastructure it runs on, what does that do to our beloved security controls? Even vulnerability management faces an incredibly difficult path when containers have a half-life measured in minutes. Clearly, if we are to keep up with the accelerating, shortened lifecycle of systems, we need to start thinking more declaratively about how we implement security.
This trend was recently fleshed out in systems like Terraform and its proprietary equivalents in AWS, Azure, and GCP, where we can automate the creation of systems as code. This Infrastructure as Code approach allows systems to be rapidly created, maintained, and decommissioned. But it leaves out some of our traditional cybersecurity capabilities, making it confusing, and possibly elusive, to obtain our authority to operate based on strong audits.
Increasingly, we see the patterns of systems becoming a more relevant audit target than the actual products of those patterns. A good example of this approach is the US Air Force Platform One Iron Bank capability. Here, published or ‘known good’ containers can be used within a system without the need for additional authorization. The pattern behind these containers serves as the audit target, as long as the containers are used within the guidelines of their authorization. While refactoring existing capabilities into this architecture is not easy, it provides a great pattern moving forward.
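The ‘known good’ pattern above boils down to a simple rule: a component inherits authorization only if it matches a published, approved artifact. A minimal sketch of that rule in Python follows; the registry names and digests are hypothetical placeholders, not real Iron Bank entries.

```python
# Sketch of the "known good" container pattern: a deployment is only
# authorized if every image it uses matches a published allowlist.
# All image names and digests below are illustrative, not real.

APPROVED_IMAGES = {
    # image name -> approved content digest (placeholder values)
    "registry.example.mil/ironbank/nginx": "sha256:aaa111",
    "registry.example.mil/ironbank/postgres": "sha256:bbb222",
}

def is_authorized(image: str, digest: str) -> bool:
    """An image inherits authorization only if its digest matches the
    published 'known good' entry for that image name."""
    return APPROVED_IMAGES.get(image) == digest

def audit_deployment(images: dict) -> list:
    """Return the images in a deployment that fall outside the
    authorized pattern and would need their own authorization."""
    return [name for name, digest in images.items()
            if not is_authorized(name, digest)]
```

In practice the allowlist itself is the audit target: auditors review the pattern once, and every conforming instance rides on that review.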
So where does this evolve? If you remember the NIST SCAP protocol, some great ideas from that groundbreaking work are coming back around. Most people are familiar with the component for Common Vulnerabilities and Exposures (CVE), but did you know there is also one for Common Configuration Enumeration (CCE)? Like its sibling, CCE is designed to communicate specific technical data at a very detailed level. But rather than vulnerabilities, it focuses on the security configurations that implement controls. NIST SP 800-53, anyone? We used to see this with tools like Symantec’s Control Compliance Suite or SecPod’s Saner, where we could combine compliance checks with nifty things like OCIL and OVAL checklists for collecting information from users to generate audit reports. If you’ve ever worked on an ACAS system, you’re probably already having a triggering moment. Sorry about that.
But SCAP is old news right?
Well, NIST has revealed its second act with OSCAL. The Open Security Controls Assessment Language (OSCAL) is a framework for managing the process of defining, assessing, and remediating system security controls. It essentially turns that thousand-page system security plan into a machine-readable document that can be automated.
With OSCAL you can declare the security controls you’re interested in, how they apply to specific components within your larger system, how they should be assessed, and even get a Plan of Action & Milestones (POA&M) detailing what steps you need to take to bring a system into compliance.
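To make the ‘machine-readable security plan’ idea concrete, here is a heavily trimmed, hand-written JSON fragment in the general shape of an OSCAL catalog (the real NIST SP 800-53 catalog is vastly larger, and the uuid below is a placeholder), plus a few lines of Python that walk it:

```python
import json

# Hand-written fragment in the general shape of an OSCAL catalog:
# a catalog contains groups, which contain controls with ids/titles.
# The uuid and titles are placeholders, not real catalog content.
catalog_json = """
{
  "catalog": {
    "uuid": "00000000-0000-0000-0000-000000000000",
    "metadata": {"title": "Sample Catalog", "version": "1.0"},
    "groups": [
      {"id": "ac", "title": "Access Control",
       "controls": [
         {"id": "ac-1", "title": "Policy and Procedures"},
         {"id": "ac-2", "title": "Account Management"}
       ]}
    ]
  }
}
"""

def control_ids(catalog: dict) -> list:
    """Walk groups -> controls and collect every control id."""
    ids = []
    for group in catalog["catalog"].get("groups", []):
        for control in group.get("controls", []):
            ids.append(control["id"])
    return ids

doc = json.loads(catalog_json)
print(control_ids(doc))  # -> ['ac-1', 'ac-2']
```

Once the plan is data rather than prose, tooling can diff it, validate it, and feed it into assessment pipelines instead of a binder.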
All sounds too good? Well, it is still just a specification, but there is some interesting work going on to make it a reality. The company I work with has already created an open-source API for manipulating large, complex OSCAL files using the NIST framework.
The reason I’m writing this piece is because of what comes next.
Remember our friend SCAP? Well, the CCE protocol within SCAP was never loved as much as it could have been. Even the STIGs out there are more up to date. It could easily be said that the public CCE frameworks are five or more years out of date, where they exist at all. However, the OSCAL framework will rely on something very much like this as its logical next step.
Right now, the primary work being done in OSCAL is on the control selection process. If you remember the NIST Risk Management Framework, you’ll recognize this as the initial part of system development. Once we know how to classify a system and the level of protection it needs, selecting which controls apply to each system is the first step in developing our security plan. OSCAL has this covered. While the tools aren’t great yet, the structure is there, and with our new API we can easily modify, merge, and interpret large sets of OSCAL data files.
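Control selection is, at its core, a filtering step: a profile names the control ids it wants, and resolution pulls exactly those controls out of the catalog. The sketch below captures that idea with simplified stand-ins for the real OSCAL profile model; the catalog entries and the ‘moderate’ selection are illustrative, not the actual NIST baseline.

```python
# Sketch of OSCAL-style control selection: a profile lists control ids,
# and resolution filters the catalog down to that selection.
# Both structures are simplified stand-ins, not real OSCAL documents.

CATALOG = {
    "ac-1": "Policy and Procedures",
    "ac-2": "Account Management",
    "au-2": "Event Logging",
}

# Illustrative selection only -- not the real Moderate baseline.
MODERATE_PROFILE = ["ac-1", "ac-2"]

def resolve(profile: list, catalog: dict) -> dict:
    """Return the selected subset of the catalog.

    A profile that references a control the catalog doesn't define is
    an error, so we fail loudly rather than silently dropping ids.
    """
    missing = [cid for cid in profile if cid not in catalog]
    if missing:
        raise KeyError(f"profile references unknown controls: {missing}")
    return {cid: catalog[cid] for cid in profile}
```

Failing loudly on unknown ids matters here: a silently dropped control is exactly the kind of gap a paper-based review would also miss.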
But when it comes to actually looking at live systems and understanding how controls are applied, this is, well, more difficult. Knowing how to implement an access control on an nginx web server is very different from implementing it on a database, a microservice, or a network router. Every kind of infrastructure has historically had a nearly infinite variety of possible implementations for a given control. It’s as if every asset is a snowflake. But we can see the trend: as infrastructure becomes more ephemeral, standardization is likely to come calling.
For OSCAL, this is good news. Having a reference implementation, or architectural pattern, for a system component makes it more likely that instances of that pattern will follow best practices. Automating system implementation through a managed lifecycle, like Terraform’s infrastructure-as-code approach, means it is now possible to define expected operational controls much more strictly, and security normalization becomes a possible outcome.
For OSCAL, that means it may be possible to create a set of security controls that become embedded into those architectural patterns. Using the concepts of CCE, or codified security controls, we can embed these into a modernized OSCAL framework and create Compliance as Code. This approach lets each modular component within a system incrementally attest to which controls it supplies and, in aggregate, contribute to a larger security posture.
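The aggregation step can be sketched directly: each component declares the controls it claims to satisfy, the system posture is the union of those claims, and anything the baseline requires but no component supplies becomes raw material for a POA&M. Component names and control ids below are illustrative assumptions, not drawn from any real system.

```python
# Sketch of Compliance as Code: components attest to controls, and the
# system-level posture is computed from the aggregate of those claims.
# Component names and control ids are illustrative only.

from collections import defaultdict

COMPONENTS = {
    "nginx-frontend": {"ac-2", "sc-8"},
    "postgres-db": {"ac-2", "au-2"},
}

def aggregate_posture(components: dict) -> dict:
    """Map each claimed control id to the components attesting to it."""
    posture = defaultdict(list)
    for name, controls in components.items():
        for cid in sorted(controls):
            posture[cid].append(name)
    return dict(posture)

def gaps(required: set, components: dict) -> set:
    """Controls the baseline requires that no component attests to --
    the candidates for a Plan of Action & Milestones entry."""
    claimed = set().union(*components.values())
    return required - claimed
```

The useful property is incrementality: swap one component for another and only its attestations change, while the aggregate posture and gap list recompute automatically.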
From a top-level view, this means nested levels of security control design and new complexities: the aggregate may not be fully defined by component-level assertions, and specific control assertions may be obviated by circumstances the system’s use never contemplated. But a real platform for incremental improvement is possible.
In an ideal situation, enterprise architects will begin to inherit the building blocks of CCE-enabled OSCAL approaches and understand how they can be used in aggregate to form meaningful end-to-end security from concept through production.
As we learn more about potential OSCAL implementations, it becomes apparent that the ability to apply declarative security controls at the system-component level, in an automated fashion, can and must drive conversations about creating more resilient, frictionless, and highly accelerated infrastructures.
Can you imagine a FISMA Moderate or High system where all of the security controls are declared, automated, and constantly in compliance?
We can.