The internet is a great place to share. For software developers, this is both its best quality and its biggest danger.
Every day, thousands of developers contribute to open-source software packages that they upload to online registries. Other developers import these packages into their own programs using nothing more than a single line of code. This free functionality does amazing things for them, from converting currencies to aligning text and processing images.
The downside is that developers now depend on packages that they don’t understand, or don’t even know about. A package that you import might in turn have imported other packages. These are known as transitive dependencies, and they create a digital Matryoshka: programs inside programs inside still more programs.
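To see the Matryoshka for yourself, you can compare the packages you asked for directly with everything that actually got installed. Below is a rough TypeScript sketch that does this for an npm project; it assumes a package-lock.json in the v2/v3 “packages” format, and it is an illustration rather than a production tool.

```ts
// count-transitive.ts: a rough sketch comparing the packages you declared
// yourself with everything that landed in node_modules on your behalf.
// Assumes an npm lockfile in the v2/v3 "packages" format.
import { readFileSync } from "node:fs";

const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

// The root entry ("") lists the dependencies you declared yourself.
const root = lock.packages[""] ?? {};
const direct = new Set(Object.keys({
  ...root.dependencies,
  ...root.devDependencies,
}));

// Every other entry is something npm installed for you, possibly nested.
const installed = Object.keys(lock.packages)
  .filter((p) => p.startsWith("node_modules/"))
  .map((p) => p.split("node_modules/").pop() as string);

const transitive = installed.filter((name) => !direct.has(name));
console.log(`${direct.size} direct dependencies pulled in ` +
  `${transitive.length} transitive ones.`);
```

On most real projects the second number dwarfs the first, which is exactly the visibility problem described here.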
What if one of those third-party packages contains a poison pill: a security flaw that renders the developer’s entire application vulnerable? This is more common than you might think. Veracode, a security company that scans software for security bugs, found in its 2020 State of Software Security report that 71% of applications use at least one open-source library with a security flaw.
Some crooks muddy the waters still further by introducing those flaws on purpose, impersonating popular packages with what amounts to malware. They hope that developers will mistakenly import their packages so that they can steal the developers’ own data or, worse, infect software relied upon by thousands of users.
Cleaning up our act
What can developers do to avoid these problems and keep the software landscape healthy?
The nuclear option is not to use these libraries, but that would make developers far less productive. A better option is diligence.
Developers can avoid malicious packages by double-checking that they’re importing the right ones. A trickier problem is scouring legitimate packages for genuine security flaws.
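One low-effort check is to look up a package’s registry metadata before installing it. The sketch below queries the public npm registry’s metadata endpoint (https://registry.npmjs.org/&lt;name&gt;); the red-flag heuristics in the comments are suggestions, not a definitive vetting procedure.

```ts
// vet-package.ts: a minimal sketch of eyeballing a package before
// installing it, using the public npm registry's metadata endpoint.
async function vet(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  if (!res.ok) {
    // A 404 often means a typo, which is exactly what squatters count on.
    console.log(`"${name}" not found; check the spelling.`);
    return;
  }
  const meta = await res.json();
  // Heuristics, not proof: a very young package whose name is one
  // keystroke away from a popular one deserves a closer look.
  console.log(`name:        ${meta.name}`);
  console.log(`created:     ${meta.time?.created}`);
  console.log(`maintainers: ${(meta.maintainers ?? [])
    .map((m: { name: string }) => m.name).join(", ")}`);
}

vet("lodash"); // swap in the package you actually intend to install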
Developers importing packages typically trade visibility for convenience. They don’t look closely at how those packages work or their transitive dependencies. They simply want to get the job done quickly.
Because the code is open source, the developer might assume that someone else has already checked any underlying code for security risks. This assumption rests on a famous doctrine called Linus’s law: “given enough eyeballs, all bugs are shallow.”
Not so. In reality, bugs and malware can lurk undiscovered for years in open-source packages, as we’ve seen with show-stopping security disasters like Heartbleed and Shellshock.
We’ve already seen some efforts to stop the rot. The Linux Foundation runs the Core Infrastructure Initiative, a project that enlists tech giants to fund security drives in open-source software. However, it can only tackle some of the larger projects. That still leaves thousands of smaller projects without the resources to focus on security.
Developers who use third-party code can help by donating their own time to audit its security. There are plenty of open-source code testing and analysis tools to help, and if even a small proportion of the people who benefit from these libraries participated, it would bolster the entire ecosystem’s health.
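In the npm ecosystem, for instance, the built-in `npm audit` command checks a project’s dependency tree against a database of known advisories. The sketch below wraps it to print a severity summary; it assumes the JSON report shape used by recent npm versions (a metadata.vulnerabilities count object), so treat those field names as an assumption.

```ts
// audit-summary.ts: a small sketch that runs `npm audit` in the current
// project and prints vulnerability counts by severity. Assumes npm 7+,
// whose JSON report includes a metadata.vulnerabilities counts object.
import { execSync } from "node:child_process";

let report: string;
try {
  report = execSync("npm audit --json", { encoding: "utf8" });
} catch (err: any) {
  // npm audit exits non-zero when it finds vulnerabilities,
  // but the JSON report is still written to stdout.
  report = err.stdout;
}

const counts = JSON.parse(report).metadata?.vulnerabilities ?? {};
for (const [severity, n] of Object.entries(counts)) {
  if (severity !== "total") console.log(`${severity.padEnd(8)} ${n}`);
}
console.log(`total    ${counts.total ?? 0}`);
```

Running even a simple report like this as part of a regular build is one concrete way to donate that auditing time.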
The problem? In many cases, those developers would need their companies to carve out time for them to contribute, which is difficult for managers to justify.
The internet already gave us open source. The concept succeeded beyond the dreams of most developers. Now it’s a valuable asset that too few people pay for, in either money or time.
Instead of treating open source security as someone else’s problem, it’s up to the developer community at large to keep code healthy by securing the cyber commons.