How to improve dependency management by ‘shifting security left’ and providing developers with a unified CI/CD pipeline

Developers often want to do the “right” thing when it comes to security, but they don’t always know what that is. To help developers keep moving quickly while achieving better security outcomes, organizations are turning to devsecops.

Devsecops is the mindset shift of making everyone involved in the application development lifecycle accountable for the security of the application, by continuously integrating security across your development process. In practice, this means shifting security reviews and testing left—i.e., shifting from auditing or enforcing at deployment time to checking security controls earlier, at build or development time. For code your developers write, that means providing feedback on issues during the development process, so the developer doesn’t lose their flow. But for dependencies your code pulls in as part of your software supply chain, what should you do?

Let’s first define a dependency. A dependency is another binary or library that your software needs in order to run, specified as part of your application. Using a dependency allows you to leverage the power of open source, and to pull in code for functions that aren’t a core part of your application, or where you might not be an expert.

Dependencies often define your software supply chain. GitHub’s 2019 State of the Octoverse Report showed that, on average, each repository has more than 200 dependencies. (Disclosure: I work for GitHub.) An upstream vulnerability in any one of these dependencies likely means you’re affected too. The reality of the software supply chain is that you depend on code you didn’t write, yet that code still requires ongoing upkeep from you. So where should you get started in implementing security controls?

Unify your CI/CD pipeline

Part of the goal of devsecops, and of shifting left, is to provide not only feedback but also consistency and repeatability as part of the development environment. This isn’t unique to your supply chain; it applies to any security control. The sooner you can unify your CI/CD pipeline, the sooner you can implement controls, allowing your security controls to shift left.

You don’t want to apply the same controls multiple times in different systems. Duplicating controls doesn’t scale, spreads your (already thin) security resources even thinner, allows inconsistencies to be introduced via drift or incompatibility between systems, and worst of all, makes it more likely that something will slip through the cracks.

The precursor to shifting left and applying devsecops isn’t a security control at all. It’s improving developer tooling to provide a consistent way to write, build, test, and deploy code. Introducing a centralized system for any one of these steps can help you improve your security. Organizations typically tackle developer tools from the last step and work backwards to the first, adopting a consistent deployment strategy before adopting a consistent build strategy, for example. The exception to this rule is code. Even if you build locally, chances are you’re checking your code in for posterity. You can start applying security controls to your code even before the other steps are unified.

A developer-centric approach means your developers can stay in context and respond to issues as they code, not days later at deployment, or months later from a penetration test report.
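As a concrete illustration, here is a minimal sketch of what a single, shared pipeline entry point could look like. It is not taken from any particular organization’s setup: the script name, the step list, and the tools invoked (flake8, pytest, pip-audit) are placeholder assumptions to be swapped for whatever your teams actually use. The point is that developers, local builds, and CI all run the same checks from one place.

```python
"""
pipeline.py - hypothetical single entry point for build, test, and security checks.

Running the same script locally and in CI keeps controls consistent, so a
check added here shifts left automatically for every developer and every build.
"""
import subprocess
import sys

# Each step is a named command. Security checks (here, a dependency audit)
# sit alongside linting and tests, so they are applied once rather than
# duplicated across systems. Tool names are placeholders; substitute your own.
STEPS = [
    ("lint", ["flake8", "."]),
    ("test", ["pytest"]),
    ("audit dependencies", ["pip-audit"]),
]

def run_pipeline() -> int:
    for name, command in STEPS:
        print(f"==> {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"Step '{name}' failed; stopping.")
            return result.returncode
    print("All steps passed.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

In CI, the job would simply run python pipeline.py, the same command a developer runs before pushing, so adding or tightening a control in this one file changes it everywhere the code is built.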
Building on a unified CI/CD pipeline, here are some tips for how your development team can apply devsecops to secure your software supply chain.

Declare dependencies in code

First things first: in order to maintain your dependencies—for example, applying security patches—you need to know what your dependencies are. Seems straightforward, right? There are many ways to detect your dependencies at different points in your development process: by analyzing the dependencies declared in code (specified by a developer in a manifest file or lockfile), by tracking the dependencies pulled in as part of a build process, or by examining completed build artifacts when they enter your registry. Unfortunately, there is no perfect solution, as each method has its challenges. Pick the approach that best integrates with your existing development pipeline, or use multiple approaches to gain insight into your dependencies at each step of your development process.

However, there are benefits to detecting dependencies in code rather than later. You’re shifting that dependency management step left, allowing developers to immediately perform maintenance on dependencies—applying updates, applying security patches, or removing unnecessary dependencies—without waiting for feedback from a build or deployment step. Even if you don’t have a centralized or consistent build pipeline, and you can’t apply a check later, detecting your dependencies in code means you can still infer this information. The main downside to detecting dependencies in code is that you might miss artifacts pulled in later. For example, Gradle allows dependencies to be resolved as part of a build, meaning build-time detection will contain more complete information.

To accurately detect dependencies in code—and to more easily control which dependencies you use—you’ll want to explicitly specify them in your application’s manifest file or lockfile, rather than vendoring them into a repository (forking a copy of a dependency as part of your project, aka copy-pasting it). Vendoring makes sense if you have a good reason to fork the code—for example, to modify or limit functionality for your organization—or to use it as a step for reviewing dependencies (you know, actually tracking inputs from vendors). Some ecosystems also favor vendoring. However, if you plan to use the upstream version, vendoring makes updating your dependencies harder. By specifying your dependencies explicitly, it’s easier for your development team to update them: an update requires only a single line change in a manifest, rather than re-forking and copying a whole repository. In certain ecosystems, you can also use a lockfile to ensure consistency, so you’re using the same version in your development environment as in your production build, and you can review version changes like any other code change.
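Here is a minimal sketch of the “detect dependencies declared in code” approach, assuming a Python project with a requirements.txt-style manifest; the file name and the exact-pin convention are assumptions, and ecosystems with true lockfiles (package-lock.json, Gemfile.lock, go.sum, and so on) capture this information with more structure. It lists what is declared and flags anything not pinned to a specific version, since unpinned entries are harder to review and update deliberately.

```python
"""
Minimal sketch: detect dependencies declared in code by reading a
requirements.txt-style manifest and flagging entries that aren't pinned
to an exact version. Explicit, pinned declarations are what make one-line
updates and reviewable version changes possible.
"""
import re
import sys
from pathlib import Path

# Matches lines like "requests==2.31.0"; anything else (ranges, bare names,
# VCS URLs) is treated as unpinned for the purposes of this sketch.
PINNED = re.compile(r"^([A-Za-z0-9._-]+)\s*==\s*(\S+)$")

def audit_manifest(path: Path) -> int:
    unpinned = []
    for raw_line in path.read_text().splitlines():
        line = raw_line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        match = PINNED.match(line)
        if match:
            print(f"declared: {match.group(1)} == {match.group(2)}")
        else:
            unpinned.append(line)
    for entry in unpinned:
        print(f"WARNING: not pinned to an exact version: {entry}")
    return 1 if unpinned else 0  # nonzero exit fails a CI step

if __name__ == "__main__":
    manifest = Path(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
    sys.exit(audit_manifest(manifest))
```

A check like this can run as one of the steps in the shared pipeline sketched earlier, so unpinned or undeclared dependencies surface while the developer is still in context.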
Standardize on ‘golden’ packages

You might already be familiar with the concept of “golden” images, which are maintained and sanctioned by your organization and include the latest security patches. This is a common concept for containers: provide developers with a base image on which they can build their containers, without having to worry about the underlying OS. The idea is to maintain only one set of OS images, managed by a central team, that you know have been reviewed for security issues and validated in your environment. Well, why not do that for other artifacts too?

To supplement a unified CI/CD pipeline, you can provide a reference set of maintained artifacts and libraries. Think of this as a pre-emptive security control. Rather than verifying that a package is up to date once it’s been built, give your developers what they need as an input to their build. For example, if multiple teams are using OpenSSL, you shouldn’t need every team to update it. If one team updates it (and there are sufficient tests in place!), then you should be able to change the default for all teams.

This could be implemented as a central internal package registry of your known good artifacts, which have already passed any security requirements and have a clear owner responsible for updates when new versions are released. By providing a single set of packages, you’re ensuring that all teams reference the same artifacts. Keep in mind, the latest you can do this is in the build system, but it could also be done earlier, in code, especially if you’re using a monorepo.

An added benefit of sharing common artifacts and libraries is that it’s easier to tell whether you’re affected by a newly discovered vulnerability. If the corresponding artifact hasn’t been updated, you are! And then it’s just one change to address the issue and let the update flow downstream to all teams. Phew.
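To illustrate the idea (this is not a feature of any particular registry), here is a minimal sketch that compares a project’s declared dependencies against a hypothetical organization-wide golden set. The GOLDEN_PACKAGES contents, the package names and versions, and the requirements.txt location are all placeholders; in practice the golden set would live in your internal registry or a shared, owned repository rather than in the script itself.

```python
"""
Minimal sketch: check declared dependencies against a "golden" set of
organization-approved package versions, maintained by a central team.
"""
from pathlib import Path

# Hypothetical golden set: package name -> approved version. In practice this
# would be fetched from an internal registry or shared repository.
GOLDEN_PACKAGES = {
    "requests": "2.31.0",
    "cryptography": "42.0.5",
}

def check_against_golden(manifest: Path) -> list[str]:
    """Return findings for declared dependencies that drift from the golden set."""
    findings = []
    for raw_line in manifest.read_text().splitlines():
        line = raw_line.split("#", 1)[0].strip()
        if not line or "==" not in line:
            continue  # this sketch only understands exact pins
        name, version = (part.strip() for part in line.split("==", 1))
        approved = GOLDEN_PACKAGES.get(name)
        if approved is None:
            findings.append(f"{name}: not in the golden set; needs review")
        elif version != approved:
            findings.append(f"{name}: {version} declared, golden version is {approved}")
    return findings

if __name__ == "__main__":
    for finding in check_against_golden(Path("requirements.txt")):
        print("WARNING:", finding)
```

Run as another step in the unified pipeline, a check like this catches drift from the golden set at build time, and when a vulnerability is announced, updating the golden entry is the single change that then flows downstream to every team.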
Automate downstream builds and deployments

To make sure that developers’ hard work pays off, their changes actually need to make it to production! In creating a unified CI/CD pipeline, you cleared a path for changes made to code in a development environment to propagate downstream to testing and production environments. The next step is to simplify this with automation. In an ideal world, your development team only makes changes to a development environment, with any changes to that environment automatically pushed to testing, validated, and rolled out (and rolled back, if needed). Rather than applying devops and devsecops by requiring your development team to learn operations tools, you simplify those tools and their feedback to what these teams need to know in order to make changes where they’re most familiar: in code. This should sound familiar—it’s what’s happening with trends like infrastructure as code and GitOps—define things in code, and let your workflow tools handle making the actual change.

If you can automate downstream builds, testing, and deployment of your code, then your developers only need to focus on fixing code. Following devsecops principles, they don’t need to learn tooling for validation testing, phased deployments, or whatever else you might need in your environment. Crucially, for security, your development team doesn’t need to learn how to roll out a fix in order to apply a fix. Fixing a security issue in code and committing it is sufficient to ensure that it (eventually) gets fixed in production. Instead, you can focus on quickly finding and fixing bugs in code.

Creating a unified CI/CD pipeline allows you to shift security controls left, including for supply chain security. Then, to best apply devsecops principles to improve the security of your dependencies, ask your developers to declare their dependencies in code, and in turn provide them with maintained, “golden” artifacts and automated downstream actions so they can focus on code. Because this requires changes not only to security controls but also to your developers’ experience, security tooling alone isn’t sufficient to implement devsecops. In addition to enabling platform-native dependency management features, you’ll also want to take a closer look at your CI/CD pipeline and artifact management.

Altogether, applying devsecops gives you a better understanding of what’s in your supply chain. It should also make your dependencies simpler to manage: a change to a manifest or lockfile easily updates a single artifact in use across multiple teams, and the automation of your CI/CD pipeline ensures that the changes developers make quickly end up in production.

Maya Kaczorowski is a product manager at GitHub overseeing software supply chain security. She was previously in security and privacy at Google, focused on container security, encryption at rest, and encryption key management. Prior to Google, she was an engagement manager at McKinsey & Company, working in IT security for large enterprises. Outside of work, Maya is passionate about ice cream, puzzling, running, and reading nonfiction.