
Start manual, move towards automation and tooling

Application and software security is paramount for any organization creating custom applications, especially if these applications are customer-facing and/or process sensitive data. Yet we often see that when an organization is ready to tackle the security of its own applications, the first instinct is to start procuring third-party security software. We believe this is a sub-optimal approach.

Håkon Nikolai Stange Sørum, Principal Security Architect & Partner
10 min read · Jan 15, 2023

Introduction

This post is part of an ongoing blog series addressing application and software security.

Creating an application and software security program should begin with finding the individuals who can start the initiative, identifying the foundational activities the program needs in order to succeed, and determining whether any tools can help.

Knowing the size of the attack surface of the organization's applications greatly helps when scoping an application security program. Different organizations have different needs based on their exposure to in-house development, and understanding one's needs and issues before attempting to rectify them is crucial; otherwise, the program will not result in an adequate security posture. In general, the more an organization depends on software built in-house, the larger and more resource-intensive the program will be.

More specifically, the more mission-critical the applications built in-house are, the more you should be willing to invest. On the other hand, just because a piece of software is built in-house does not make it worthy of extreme investment. For example, if the risk exposure is low due to attributes of the organization or its applications, SW/AppSec does not necessarily need to be at the top of the list of security tasks.

To get started, it might seem alluring to invest in a shiny new tool that uncovers both internally written and externally introduced vulnerabilities and displays this information to developers. The expectation is that, given this information and the tool's guidance, developers will fix the issues, and we can call it AppSec. We believe this is far from the truth. One reason this approach fails lies in the characteristics of the tools: automated scanners, be it SAST, DAST, IAST or SCA, require surprisingly large amounts of tender loving care (TLC).

In a perfect world these scanners would report only true positives and true negatives. If that were the case, removing all vulnerabilities from a code base would be a simple matter of remediating every finding. It is not the case. Findings also include false positives and non-exploitable true positives, while false negatives go unreported entirely. The result is engineering time spent triaging results. Engineers outside security teams are often neither trained for this triage nor given the time to do it, while security engineers cannot assist because of the sheer volume of alerts, on top of being responsible for operating and tuning the tools. This Sisyphean task placed on engineering teams does not scale.
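To make the triage burden concrete, here is a minimal sketch that tallies a scanner's findings by rule. It assumes the scanner emits SARIF (a common interchange format for static analysis results); the file name is illustrative. Every line of the resulting summary still needs a human to decide: true positive, non-exploitable, or false positive.

```python
import json
from collections import Counter

# Hypothetical scanner output in SARIF format; the file name is
# illustrative, not tied to any particular product.
with open("scan-results.sarif") as f:
    sarif = json.load(f)

# SARIF nests results under one or more "runs".
findings = [r for run in sarif.get("runs", []) for r in run.get("results", [])]

# Group by rule to show the volume: each finding still needs a human
# to decide whether it is real, non-exploitable, or a false positive.
by_rule = Counter(r.get("ruleId", "unknown") for r in findings)

print(f"{len(findings)} findings awaiting triage")
for rule, count in by_rule.most_common(10):
    print(f"  {rule}: {count}")
```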

Another challenge is that poor planning and implementation of the tool itself creates short- and long-term problems. For example, rolling out scanning too broadly often creates alert fatigue among users. There will be findings, and probably quite a lot of them. When these findings are of low quality, adoption will be low, and therefore the return on investment will also be low. We find it common for security teams and developers to conclude that they simply picked the wrong tool. When they try again with a different one, the engineers are already skeptical at best, and adoption the second time around is just as poor. In worse cases it breeds security apathy: if a dashboard tells you that no matter how much time is spent on remediation there are always vulnerabilities in the codebase, motivation to keep using the product will sink.

All in all, spending time remediating all these findings may simply be a waste, as the validity of the findings is unknown and the product may need significant tuning, as touched upon earlier. Moreover, spending valuable engineering time squashing one-off bugs may be far less efficient than removing whole classes of vulnerabilities and design flaws. To avoid getting to this point, we suggest beginning a secure development program with threat modeling and peer review as the foundation.
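To illustrate the difference between squashing one-off bugs and removing a class, consider SQL injection. A scanner can flag each vulnerable string-built query individually, and each must then be fixed by hand; adopting parameterized queries removes the entire class. A minimal, self-contained sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # One-off bug: string interpolation invites SQL injection.
    # A scanner flags each occurrence separately.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Class-level fix: parameterized queries treat input as data,
    # no matter how many queries the codebase contains.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("alice' OR '1'='1"))  # returns every row
print(find_user_safe("alice' OR '1'='1"))    # returns nothing
```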

Bridging the gap: how to begin thinking about software security

Software and application security is in essence a people problem. There are no vulnerabilities if we, the engineers, never create them. Instead of trying to fix issues after the fact, another approach is to enable engineers to create fewer vulnerabilities in the first place. There are a variety of ways security teams can enable their engineers to do this, which we will explore throughout this blog series.

For any of these approaches to work, there must be organizational alignment on the intention of the program. Product managers, product owners, project managers and anyone else who influences prioritization will need to allow engineers to spend appropriate time on the activities discussed here. The improvements will not only strengthen the security posture; the overall quality of the applications will increase, with attributes such as robustness and maintainability benefiting as well.

Building the program on people, processes and trust

For the rest of this blog post, we outline a more pragmatic approach that enables security teams and development teams to work together to improve their security posture. It covers three foundational activities and the discussion around them: threat modeling, peer review and training. Operationalization and implementation are left out of this post for brevity, but will be covered in detail in later blogs.

Threat modeling

Threat modeling is a set of processes and techniques to uncover and quantify the threats that are relevant for a system or application. It can be used as a tool to build systems that function while under attack from malicious entities.

For some years now the whole industry, from vendors to trusted experts, has been talking about shifting security left. We wholeheartedly agree, but how far left can one shift? All the way: before any code is written. This is where threat modeling comes in.

The value of threat modeling is undisputed. Security engineering activities done early in the development lifecycle seem to be more impactful on security posture^1. As Michael Howard of Microsoft said: “If we had our hands tied behind our backs (we don’t) and could do only one thing to improve software security... we would do threat modeling every day of the week.”

So what is the value of threat modeling? To understand this we need to distinguish between vulnerabilities and design flaws. A vulnerability is a weakness that can be exploited to cause harm, while a design flaw, in this context, is a weakness in the overall design or architecture of a feature, system or application. If the system is not architected for least-privilege access control, that is a design flaw. When that design is implemented, the flaw becomes one or more vulnerabilities in the application. With this type of design flaw, the resulting application may contain many individual vulnerabilities to remediate, all of which could have been avoided with a change in design. This should be caught at design time, and that is what threat modeling is for.
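A small illustration of the distinction, using hypothetical handler and decorator names rather than any real framework: in the flawed design every handler does its own authorization, so each forgotten check is a separate vulnerability; a deny-by-default gate in the design removes them all at once.

```python
from functools import wraps

# Flawed design: each handler re-implements its own check, so every
# forgotten check is a separate vulnerability to find and fix.
def delete_user_flawed(request, user_id):
    # The author forgot the role check here; the design made that easy.
    return f"deleted {user_id}"

# Better design: one deny-by-default gate applied to every handler.
# requires_role and the request dict are illustrative placeholders.
def requires_role(role):
    def decorator(handler):
        @wraps(handler)
        def wrapper(request, *args, **kwargs):
            if request.get("role") != role:
                raise PermissionError("access denied")
            return handler(request, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def delete_user(request, user_id):
    return f"deleted {user_id}"

print(delete_user({"role": "admin"}, 42))   # allowed
# delete_user({"role": "viewer"}, 42)       # raises PermissionError
```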

Another huge benefit of threat modeling that should not be underestimated is the institutional knowledge it creates. Bringing engineers from different disciplines together with individuals from the business or product side for collaborative work cannot be done enough; it is even part of the Twelve Principles of Agile Software. These sessions educate development-focused engineers about software and application security, while pushing security-focused engineers to understand more about how the applications they are responsible for securing actually work. The individuals from business or product bring context and insight on the system that no one else in the room has, and they can often influence investment and work-allocation decisions.

At some point, the learning and artifacts created through threat modeling may allow the organization to scale down the involvement of security engineers in these sessions, freeing them to improve the security posture in other ways.

We argue that spending both security and development effort on removing design and architecture flaws prior to implementation is a better first step than triaging, analyzing and maybe remediating findings from AppSec tools. Trusting the individuals in the organization to find the best security/feature tradeoff is the best way to start. Once this is in place, tooling and automation will improve the output of the program, as we will discuss in later blogs.

There are countless methodologies, frameworks and blogs on what threat modeling is and how to implement it. One of the (many) great things about threat modeling is its adaptability: an organization can implement it however it wants. We will not dive into that here, and instead point to these resources: Redefining Threat Modeling, a curated list of learning material, the Threat Modeling Manifesto and Threat Modeling - A practical guide for Development Teams.
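As one example of that adaptability, the output of a session does not have to live in a heavyweight tool. Below is a sketch of a threat model recorded as a lightweight, reviewable artifact using the standard STRIDE categories; the structure itself, and the example threats, are just one possible choice.

```python
from dataclasses import dataclass

# An illustrative way to record threat modeling output as a reviewable
# artifact. The STRIDE categories are standard; everything else here
# (fields, example threats) is one possible adaptation, not a standard.
@dataclass
class Threat:
    component: str      # the part of the system under discussion
    stride: str         # Spoofing, Tampering, Repudiation, Information
                        # disclosure, Denial of service, Elevation of privilege
    description: str
    mitigation: str = "unmitigated"
    owner: str = "unassigned"

model = [
    Threat("login endpoint", "Spoofing",
           "credential stuffing against the password form",
           mitigation="rate limiting + MFA", owner="auth team"),
    Threat("report export", "Information disclosure",
           "export job can be pointed at another tenant's data"),
]

for t in model:
    print(f"[{t.stride}] {t.component}: {t.description} -> {t.mitigation}")
```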

Peer review

Peer review ensures that at least two individuals have looked at all code going into production. This is a good approach, but it has to be implemented with care.

Security engineers often put restrictions on the code repositories and pipelines used to build and deploy code. This is a very sane approach: securing the storage, build, delivery and deployment of code is highly valuable. But there is a recurring issue. Some of these controls are put in place as gates to disallow a certain behavior, without the organization enabling the affected individuals to act accordingly. One such control is blocking the integration of unreviewed code, often introduced without accounting for the extra workload it puts on engineers. Allotting time only for feature development, while demanding that every pull request merged into a protected branch be reviewed by someone other than the author, only reinforces bad habits. It nurtures a culture of “LGTM & accept” that benefits neither the overall quality nor the security posture of the applications. Make sure engineers have sufficient time to read and understand the code they are expected to review.
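Note that turning the gate on is the easy part. As a sketch, this is roughly what enabling required reviews looks like via GitHub's branch protection API, assuming the `requests` library is installed; the owner, repository and token are placeholders. Budgeting the review time behind the gate is what takes real organizational effort.

```python
import requests

# Placeholders: substitute your own organization, repository and a
# token with repository administration rights.
OWNER, REPO, BRANCH = "example-org", "example-app", "main"
TOKEN = "ghp_..."

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        # The gate: at least one reviewer other than the author.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "required_status_checks": None,
        "enforce_admins": True,
        "restrictions": None,
    },
)
resp.raise_for_status()
```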

The specific requirements, expectations and implementation of peer review are highly situational. Factors like organizational culture, demographics, workload and velocity must enter the equation. Tread with care, but start somewhere and iterate.

Training

Most engineers tasked with developing software have not been properly trained for today's threat landscape. Few, if any, of those with formal education had enough security coursework in school^2, and those without a formal degree seem to have just as little security training. Very few organizations provide adequate workplace training on software and application security. This is problematic.

A key component of any application and software security program is knowledge. Training is ideally conducted within the context of the organization, so the content can be tailored to its specific technology stack, threat landscape and risk acceptance. The content can include previous incidents and how they could have been avoided, common mistakes in the chosen technology stack, and organizational best practices. Bringing in external help to get this rolling is not a bad idea, but ownership should ultimately sit within the organization. With internal ownership, it is far more likely that threat modeling outputs and the bad patterns surfacing in peer review find their way into the training material.

This training has synergistic effects with the previously discussed foundational activities. A knowledgeable engineer will provide better peer reviews and more value in a threat modeling session. The relationship also works the other way around: an engineer who is used to reviewing code and discussing high-level threats will be able to absorb more complex topics in training.

Conclusion

Application and software security is not only a technical problem. It is just as much about who is dealing with it and how it is being dealt with. Building any security program is about aligning people, processes and technology. Starting with technology is sub-optimal; it equates to slapping duct tape on a broken pipe. It might reduce the leak right then and there, but after a while the surrounding areas will still suffer water damage.

Consider how you could implement some of the activities discussed in this article, and get the ball rolling. If the organization has already invested in technology, that is no showstopper for getting these foundational activities right; treat it as further inspiration. As we will see in later blogs, once the foundation is in place, adding tools will be part of what takes your application and software security program from operational to optimizing.

References

^1 https://www.sciencedirect.com/

^2 https://ntnuopen.ntnu.no/