Log4J and the Fragility of Modern Infrastructure

January 20, 2022

It’s like watching a slow-motion train crash. XKCD published a comic in August 2020 that effectively predicted an issue just like the Log4j RCE exploit, and we expect it to be the first of many headline-worthy exploitable dependencies.

The cracks in modern digital infrastructure are growing right along with the scope of our dependence on it. And the bad actors aren’t limited to rogue hacking groups looking for a nice payout. There is another group, well-funded and quietly collecting potential cyberweapons: nation-state actors.

In this blog post we’ll cover the following:

  • Weaponizing cyber exploits — how feasible is it?

  • What are low-hanging fruits for cyberweapons?

  • How would these even be used?

  • What exactly is the problem for private organizations?

  • What should DevOps/IT be on the lookout for?

Weaponizing cyber exploits - how feasible is it?

A fundamental question about weapons is: what is the bang for your buck?

OK, there’s more nuance to whether something can be “weaponized” but the guiding principle is the same. Weapons are tools and their primary objective is to damage an adversary, which means the cost should be proportional to said damage. It would make no sense for the military to invest money to arm infantry with pool noodles.

A weapon’s efficiency is defined as being able to do the maximum amount of damage to an adversary with the least amount of resources.

It’s a complicated science but that’s what weapons are evaluated by: damaging things.

So how feasible are cyberweapons? By the guiding question earlier: very.

After all, a military operation for disabling (read: bombing) a facility requires risking personnel’s lives, maybe a $600 million plane, and the intangible political cost when the target country starts asking awkward questions.

But a cyberweapon?

No personnel need to be sent to the target facility to risk their lives, you can probably disable operations with basic computer equipment, and if you do it well enough, the target country has no verifiable way to pin it on you.

If the goal is simply to take a facility out of operational capacity, the cost of deploying a cyberweapon is significantly less than most conventional weapons to achieve the same result.


Disclaimer: the above comparison is simplified for illustrative purposes and is not meant in any way to give a comprehensive overview of all necessary considerations.

Now that it’s established that cyberweapons do give quite the proverbial “bang for the buck,” the next question is: which ones can be weaponized?

What are low-hanging fruits for cyberweapons?

In short: Log4J and everything in the same vein.

To elaborate, we mean: ubiquitous libraries and tools that almost every major company and country uses, but that are open-source and maintained infrequently by unpaid developers.

Put yourself in a hacker’s shoes. The very features that make these libraries and tools attractive to use make them the perfect cyberweapons.

After all, why invest resources discovering exploits in some bespoke, in-house software solution when you can cast a wider net the moment you find an exploit in something like Python or Java libraries? In fact, any list of the most used dependencies can double as a list of dependencies with the most interest from bad actors.

Oh look, a handy short-list of CVEs that may be embedded in your organization’s infrastructure stack. It would be a shame if hackers were using it and you weren’t…

Nation-states are interested for similar reasons. While some may be after money, others are undoubtedly assembling a cache of cyberweapons, either for future deployment or simply to recognize when one has been used against them.

How would these even be used?

In the time since this blog post entered the drafting phase, we’ve had an interesting series of international developments with Russia and Ukraine that have yet to come to a conclusion.

Whether the Ukraine hacks are a signal of more to come is hard to call, but we’ve seen evidence in the past that Russia has initiated a series of cyber-attacks against a target nation before sending in the troops and munitions.

An exploit in an open-source tool could be discovered privately and held in reserve for who-knows-how-long, before being reported and mitigated. While nation-states are undoubtedly collecting cyberweapons, they are also very careful about when to use them. Once a cyberweapon is deployed, it opens up the possibility of other nation-states utilizing the same avenue against the initial aggressor’s infrastructure. The default is that a cyberweapon can be used and re-used by anyone so long as they look into how it worked.

But this has a dark undertone: because of the aforementioned restriction, a cyberweapon is deployed only for the strategic purpose of softening up a target’s infrastructure before a strike — whether that be military or increasingly heavy-handed negotiation tactics.

“The malware, which is designed to look like ransomware but lacking a ransom recovery mechanism, is intended to be destructive and designed to render targeted devices inoperable rather than to obtain a ransom,” Microsoft said in a blog post.

Let’s just say that Russia’s malware is after more than a payday.

What exactly is the problem for private organizations?

Well, this isn’t a scare piece; it was written to surface a problem. After all, the first step is admitting you have a problem.

Oh, we haven’t defined the problem? Right.

As bad actors see incredible rates of return on their exploits, any organization that has its infrastructure built upon open-source, ubiquitous libraries and tools maintained infrequently by unpaid developers is at risk.

Security teams aren’t unaware of this issue, but the needs of the corporate bottom line generally reduce to “Hey, free code is free code!” The chase for higher margins and lower costs has incentivized organizations to stuff their infrastructure stacks full of free open-source APIs and libraries with potentially unaddressed CVEs (Common Vulnerabilities and Exposures). After all, there is no Service Level Agreement (SLA) binding these unpaid developers to address the CVEs immediately.
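As a concrete sketch of what checking for “unaddressed CVEs” looks like in practice: the public OSV database (osv.dev) accepts a simple JSON query per pinned dependency, and tools such as pip-audit automate exactly this lookup. The helper functions below are illustrative names of our own; they only build the request bodies rather than sending them.

```python
import json

# OSV (https://osv.dev) exposes a JSON API for looking up known
# vulnerabilities in a package version. A real scanner POSTs each body
# to https://api.osv.dev/v1/query and inspects the "vulns" key in the reply.

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build an OSV /v1/query request body for one pinned dependency."""
    return {
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }

def parse_pins(requirements: str) -> list:
    """Extract (name, version) pairs from 'pkg==1.2.3' style lines."""
    pins = []
    for line in requirements.splitlines():
        line = line.strip()
        if "==" in line and not line.startswith("#"):
            name, _, version = line.partition("==")
            pins.append((name.strip(), version.strip()))
    return pins

if __name__ == "__main__":
    reqs = "requests==2.25.0\n# pinned for svc-a\nflask==1.1.2\n"
    for name, version in parse_pins(reqs):
        print(json.dumps(osv_query(name, version)))
```

The point is not this particular script but the fact that the lookup is cheap: there is no excuse for not knowing which of your pinned dependencies already have published CVEs.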

And that’s not all. The code integrity of an organization’s foundation is just the tip of the iceberg. We must not forget that these unpaid developers are the true “owners” of that code. Even if a private organization can reasonably depend on the benevolence of these code owners, will auditors be satisfied when your company needs to meet compliance mandates and auditing requirements?

Moreover, these code owners are not beholden to your organization. They might, for example, abandon the project, hand maintainership to an unknown party, or deliberately break it.

Imagine: your organization’s digital infrastructure accumulating all this technical management debt that a disgruntled project maintainer can cash in at any time!

What should DevOps/IT be on the lookout for?

CVEs happen — no matter what you use. The question is whether your organization:

  • Acknowledges the CVEs that are a direct result of using unsupported software.

  • Will devote resources to preempt them, such as establishing SLAs with the developers.

  • Has a plan for when a CVE inevitably compromises something critical.

Because when (not if, when) situations similar to Log4j come out of the woodwork, is there a designated person who has already planned for such a situation? Do they understand how these things work in your stack well enough to limit the damage and replace compromised parts?
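For Log4j specifically, the first step most response teams took was simply locating every copy of log4j-core on disk. A minimal sketch, assuming the December 2021 advisories as the cutoff (2.17.1 resolved the last of the related CVEs) and conventionally named jar files:

```python
import re
from pathlib import Path

# Filenames like log4j-core-2.14.1.jar carry the version; anything below
# 2.17.1 was affected by at least one of the CVEs disclosed around Dec 2021
# (CVE-2021-44228 "Log4Shell" and its follow-ups). This only catches jars
# named conventionally; fat jars and shaded copies need deeper scanning.
PATCHED = (2, 17, 1)
JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_jars(root: str) -> list:
    """Walk `root` and return paths of log4j-core jars older than 2.17.1."""
    hits = []
    for path in Path(root).rglob("log4j-core-*.jar"):
        m = JAR_RE.search(path.name)
        if m and tuple(map(int, m.groups())) < PATCHED:
            hits.append(str(path))
    return sorted(hits)
```

Having even a crude script like this ready before the incident is the difference between answering “are we exposed?” in an hour versus a week.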

Let’s say it falls upon you to understand how exposed your infrastructure is to these CVEs. Set aside time to take stock of every application in your digital infrastructure and understand how bad it would be if that code suddenly became vulnerable.
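One hedged way to start that stocktaking: walk your repositories and collect every dependency manifest into a single inventory, so “how bad would it be” becomes a question you can answer per application. The manifest names below are just common examples; extend the set for your stack.

```python
from pathlib import Path

# Common dependency manifests; extend this set for your stack
# (Gemfile, go.mod, Cargo.toml, build.gradle, ...).
MANIFESTS = {"requirements.txt", "package.json", "pom.xml"}

def inventory(root: str) -> dict:
    """Map each directory under `root` to the dependency manifests it holds."""
    found = {}
    for path in sorted(Path(root).rglob("*")):
        if path.name in MANIFESTS:
            found.setdefault(str(path.parent), []).append(path.name)
    return found
```

The resulting map is the starting point for the harder questions: which of those manifests pin versions at all, and which pull in code nobody on your team has ever read.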

Then ask yourselves: if this application became vulnerable, what’s the remedy? If you have an SLA with the team that oversees said code, you should be fine; it is the developers’ job to maintain the code and provide consistent updates that avoid lingering CVEs.

If you don’t... perhaps it’s time to pay for what you’re using. After all, when these things break and you don’t have an SLA, it’s up to your team to fix it: a team that has probably never worked on the code before, only called the API and never questioned it.

Which brings us to the next questions:

  • What are the dependencies within your stack and how bad is it?

  • How long would it take for your team to fix/replace that code if it was suddenly unusable?


The answers are organization-specific, but we think that once you start asking these questions you might not like the answers.

Additionally, if your company needs to meet compliance requirements, this is a bridge you will eventually need to cross. When auditors start asking these same questions... will they like your answers?

Or will you not want to answer them?

So the problem is open-source?

The problem isn’t open-source per se. The problem is corporations and organizations seeing a very popular open-source project and just assuming someone else is paying to support it when in reality no one is. This requires a cultural mind shift: acknowledging the reality of the situation in order to avoid the aforementioned slow-motion train crash.

Take, for example, Apple audaciously redirecting Apple-specific support requests to Curl’s solo maintainer. Curl is one of the most widely deployed open-source software products and is packaged into Apple’s products. Apple, a trillion-dollar company, pays Curl’s developer nothing for its usage of Curl, yet openly expects him to handle support requests despite the software’s explicit AS IS license.

Who wants to tell DevOps to pivot and put out this giant fire because management wanted to save some $$$?

But wait, isn’t Pomerium an open-source project?

We are! But we are a well-funded company with an enterprise product built on top of our open-source core, providing the obvious incentive to keep it audited and maintained. Unfortunately, not every open-source project has a business built around it to sustain the product, because many open-source projects are difficult to monetize. Typically, monetized OSS projects are either open-core or software as a service. What if a critical piece of software doesn’t easily fit into these models? How do we make sure that our critical infrastructure, public or private, is protected against bad actors looking to exploit unsupported software?

In fact, that’s why we authored this: we fully expect events similar to log4j to become a common occurrence going forward.

Meeting Security Compliance With Zero Trust

There are many enterprise customers using Pomerium to move from perimeter-based to zero trust, identity-based access methods. To learn more about how Pomerium Enterprise can support your organization’s security needs and meet compliance, check out our GitHub and documentation, or reach out to us directly.
