TunnelVision: The end of the VPN as we know it!?
This week we have a long article to help contextualize a new exploit that has a lot of people in the privacy and missionary spaces concerned.
Ars Technica dropped a bomb of an article reporting a vulnerability that’s existed in networking software since 2002 and allows attackers to “neuter [VPNs’] entire purpose.”
It’s a bold assertion. They essentially say that any attacker can just—*boop*—neuter your VPN.
Take five minutes and go read the article here for context: https://arstechnica.com/security/2024/05/novel-attack-against-virtually-all-vpn-apps-neuters-their-entire-purpose/
Now that you’re back, let’s break down this attack:
The attacker must be on the same Local Area Network as you (think coffee shop or hotel Wi-Fi in this case).
The attacker must control a malicious DHCP server—a server which allows them to assign IP addresses to machines on the local network.
Your machine must look to that DHCP server for IP and route information.
The attacker must push a bad route to your network interface card that causes your computer to route internet traffic to their malicious server as the gateway instead of to your VPN app.
The attacker must be in a position to capture traffic and/or decrypt or strip other encryption protecting your communications to learn something of value.
In other words, this isn’t an attack where just any attacker anywhere on the internet can *boop* your VPN into revealing information about you. In fact, this isn’t actually a problem with the VPN software—it’s a problem with how Windows, macOS, iOS, and Linux handle network configurations. That’s why most VPN providers aren’t in a position to fix this.
To reinterpret what the attack requires:
The attacker must already have access to the network you are on and be able to send network traffic to and from your computer (this is most relevant on public networks like coffee shops or hotels, though many now isolate all computers on the network from each other).
The attacker must deploy a malicious DHCP server without crashing the local area network by running a second DHCP server, or must be able to take over the existing DHCP server.
The attacker must be able to force your computer to trust their server, so probably needs to have already compromised the router or the existing DHCP server.
The attacker must push a bad route to your computer’s network interface that causes your computer to route over the normal network interface instead of the VPN’s virtual interface.
The attacker must be able to decrypt HTTPS traffic once they receive your network traffic, and then relay that traffic on to legitimate sites.
In other words, this attack most likely occurs on a pre-compromised public network. That’s not unheard of; Russia is known to attack hotel chains for exactly these kinds of purposes (https://www.reuters.com/article/idUSKBN1AR1IZ/). But even then, your VPN hasn’t been “neutered.”
How Virtual Private Networks Work
This attack works because using a VPN for privacy is effectively an off-label use of networking magic medicine. VPNs were originally designed to connect distant local networks into corporate wide area networks. They’re used for privacy because—by simulating a connection into a massive conglomerate with millions of users—you gain both the cryptography protecting the connection and the anonymity of the crowd.
When you turn on a VPN, it generally creates a virtual network interface on your computer (a TUN or TAP device) and sets up routes that determine which traffic uses that interface. In the corporate wide area network scenario, those routes tell your computer to send business traffic to business servers and send the rest out to the local network or internet as normal.
In the privacy context, VPNs are usually configured to route all network traffic across the VPN before allowing it to go out to the internet.
This makes your traffic appear to come from the conglomerate, not the individual.
In this case, the TunnelVision vulnerability leverages how DHCP servers talk to computers. The server that hands out IP addresses also sends your computer a route that trumps the virtual network interface and says “don’t talk to him, talk to me.”
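To make that concrete, here’s a minimal sketch (mine, not the researchers’ tooling) of how those DHCP-pushed routes are encoded. RFC 3442 defines DHCP option 121, “classless static routes”; the addresses below are made up for illustration:

```python
import ipaddress

def encode_option_121(routes):
    """Encode classless static routes (DHCP option 121, per RFC 3442).

    routes: list of (destination_cidr, gateway) string pairs.
    Returns the raw option payload bytes a DHCP server would send.
    """
    payload = b""
    for dest_cidr, gateway in routes:
        net = ipaddress.ip_network(dest_cidr)
        # Format per route: prefix length, then only the significant
        # octets of the destination, then the 4-byte gateway address.
        significant = (net.prefixlen + 7) // 8
        payload += bytes([net.prefixlen])
        payload += net.network_address.packed[:significant]
        payload += ipaddress.ip_address(gateway).packed
    return payload

# A hypothetical malicious lease: two /1 routes cover the entire internet
# and, being more specific than the VPN's catch-all route, most operating
# systems prefer them -- steering traffic to the attacker's gateway.
evil_payload = encode_option_121([
    ("0.0.0.0/1", "192.168.1.37"),    # made-up attacker gateway
    ("128.0.0.0/1", "192.168.1.37"),
])
print(evil_payload.hex())
```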
At this point, the attacker is able to see your network traffic as if there’s no VPN. But that’s not the end of the story.
What can attackers really get?
One thing to keep in mind with exploits like this is that they're not panopticons. They are targeted at specific networks, and more ubiquitous encryption like HTTPS still provides value.
Just being in control of a route is sufficient to snoop on IP addresses, session lengths, maybe domain names, and other metadata. But specific encrypted information like what you are searching on google.com is probably not visible.
If decrypting traffic were that trivial, any of the random hops between your home Wi-Fi and google.com could do it. And then your VPN wouldn’t matter because every nation-state attacker would sit close to google.com on your route, crack your cryptography, and read the internal details of your traffic to re-identify you.
Don’t get me wrong, it’s doable, but it’s costly to do at large scale and usually involves a mass push to undermine the integrity of cryptography on the internet as a whole (think China’s issuance of fake root certificates: https://en.wikipedia.org/wiki/Root_certificate, or Facebook trying to spy on Snapchat: https://www.malwarebytes.com/blog/news/2024/03/facebook-spied-on-snapchat-users-to-get-analytics-about-the-competition).
These kinds of attacks are made more expensive by modern browser defenses. Browser developers cooperate with web sites that publish their encryption certificates so that both the web site and the user can automatically verify there is no man-in-the-middle before allowing a connection.
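As a rough illustration of that check, here’s a minimal sketch using Python’s standard library: it opens a TLS connection and raises an error if the certificate chain or hostname doesn’t validate, which is roughly the first layer of what browsers do (they add stronger measures like Certificate Transparency on top). The hostname is just an example:

```python
import socket
import ssl

def fetch_validated_cert(hostname: str, port: int = 443) -> dict:
    """Open a TLS connection and return the server's certificate.

    Raises ssl.SSLCertVerificationError if the chain doesn't validate
    or the certificate doesn't match the hostname -- the basic
    man-in-the-middle check every modern browser performs.
    """
    context = ssl.create_default_context()  # system trust store, hostname checks on
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

cert = fetch_validated_cert("example.com")
print(cert["subject"], "valid until", cert["notAfter"])
```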
So, the attacker on your hotel Wi-Fi is probably seeing HTTPS. That means they are seeing mostly metadata, not data. In privacy circles it’s common to quote Retired General Hayden of NSA fame saying “We kill people based on metadata,” (https://www.youtube.com/watch?v=tL8_caB35Pg). But that quote is from 2014, and some things have changed in our favor.
The biggest change is the proliferation of encrypted domain name resolution, which vastly reduces the value of domain metadata. The second is the proliferation of content delivery networks and cloud exchanges (Cloudflare, AWS, Microsoft Azure, etc.), which causes most traffic to appear to route into massive corporate networks.
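For example, an encrypted DNS lookup rides inside ordinary HTTPS, so an on-path observer sees only a TLS session to the resolver, not which name you asked about. Here’s a minimal sketch against Cloudflare’s public DNS-over-HTTPS JSON API (assuming the third-party requests library is installed):

```python
import requests  # third-party: pip install requests

def doh_lookup(name: str, record_type: str = "A") -> list:
    """Resolve a hostname over DNS-over-HTTPS using Cloudflare's JSON API."""
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": record_type},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    # "Answer" is absent when the name doesn't resolve.
    return [answer["data"] for answer in resp.json().get("Answer", [])]

print(doh_lookup("example.com"))
```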
What these changes effectively mean is that, unless I can decrypt HTTPS, I can no longer see the following in large volume:
The domain name of the site you’re going to because it’s probably encrypted
The specific server address of the site you’re going to because it’s probably in a massive shared Microsoft, Amazon, Akamai, or Cloudflare network
The part of the site you’re going to because it’s encrypted within HTTPS
This isn’t universally applicable, but it’s generally true. However, if you’re in a nation where the very internet itself is engineered to allow intercept or where all companies are expected to comply with data and intercept requests, you’ve got bigger problems than your VPN can fix for you.
So, what do we do about this?
First, procedurally. Don’t do sensitive work on untrustworthy public networks unless you actually have to. If someone looking over your shoulder in an airport would get you in trouble, don’t do it in an airport.
Next, when you turn your VPN on, check the IP address it reports. Then go to a leak detection site like https://whatismyipaddress.com/vpn-leaking and see whether the IP address the site reports matches the one your VPN reports. If not, you’re not masked by the VPN. That doesn’t mean you’ve been hit by TunnelVision, just that you’re not masked.
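If you’d rather script that check, here’s a minimal sketch that asks a public IP-echo service (api.ipify.org is one example) what address your traffic appears to come from. Run it once with the VPN off and once with it on; if the answer doesn’t change to your VPN provider’s address, you’re not masked. Note this only checks the basic route, not DNS or WebRTC leaks:

```python
import json
from urllib.request import urlopen

def apparent_public_ip() -> str:
    """Ask an IP-echo service which public address our traffic uses."""
    with urlopen("https://api.ipify.org?format=json", timeout=10) as resp:
        return json.load(resp)["ip"]

# Compare the output with the VPN off versus on.
print(apparent_public_ip())
```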
For those reading this and saying that this advice is untenable because you absolutely have to get work done right now and fast: stop. Your cell phone is lying to you about how fast you have to move. Emergencies are not improved by rushing, nor are fights won in a hurry.
Speed favors the bad guys in cyber. Slow and smooth favors the defender.
Second, technology:
If your VPN has a leak detection tool, turn it on and use it
Enable HTTPS by default in your browser (almost all browsers support this now, https://www.eff.org/https-everywhere)
Encrypt your internal traffic using services like Signal messenger and Pretty Good Privacy (PGP) email encryption so that messages cannot be read without users’ encryption keys, even if intercepted
Use big service providers where tenable so that you gain anonymity from the crowd as a customer
Consider using a Virtual Machine on your main, real machine and connecting to the VPN from there: that way even if your real machine is hit with TunnelVision, the traffic will be encrypted by the VPN before it reaches attackers.
Finally, some VPN-replacement technology like zero trust networking may help you—depending on how that technology interacts with your network card.
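One more technical self-check: you can inspect the routing table directly. Here’s a minimal sketch (Linux-only, shelling out to the `ip` command; “tun0” is a common VPN interface name but yours may differ) that flags the broad /1 routes TunnelVision-style attacks tend to install on the physical interface:

```python
import subprocess

def suspicious_routes(vpn_interface: str = "tun0") -> list:
    """Flag broad routes that bypass the VPN interface (rough heuristic)."""
    output = subprocess.run(
        ["ip", "route", "show"], capture_output=True, text=True, check=True
    ).stdout
    flagged = []
    for line in output.splitlines():
        # Wide /1 prefixes normally belong to the VPN's own catch-all
        # routes; seeing them on any other interface deserves a look.
        if "/1 " in line and vpn_interface not in line:
            flagged.append(line)
    return flagged

for route in suspicious_routes():
    print("check this route:", route)
```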
If TunnelVision is a concern to you and you need help navigating it, please don’t hesitate to reach out.
3-bit Framework for Planning Secure Communications over Internet Protocols
A few days ago, I was talking with a friend and colleague about secure messengers. A lot of times, when a new boutique tool pops up on the market, my first question is “Why not use Signal?” (usually followed by an explanation of how I differentiate secure messaging apps like Signal from messaging apps that are generally secure, like WhatsApp).
In my opinion, “Why not Signal?” is the first litmus test any new internet-based secure messaging system needs to pass in order to be considered for ops. Don’t get me wrong though: Signal is arguably the best secure messenger (probably tied with Threema), but there are no solutions, only tradeoffs.
There are several reasons you might not want to use Signal. The main one, in my opinion, is that Signal is relatively loud on the wire. To surveillance, censors, and network defenders alike, Signal is a glaring light in network traffic. You can’t crack it yet, but you always know when it’s there. In places where the app is very popular and legal, that may not matter much. If it’s in legal gray space, or rare, it produces a signal in the noise for monitors (pun only partially intended).
These tradeoffs lead us into a discussion: How do you build a communications plan leveraging internet protocols (IP)?
Most good communications plans have three or four “lines” or options. Usually, we describe this as a PACE plan: primary, alternate, contingency, and emergency lines of communication. Getting this far is easy, but filling it in with quality procedures and tools is harder…
Primary – Signal!
Alternate – Encrypted email?
Contingency – Um, email? Text message?
Emergency – Runner and/or smoke signal
To be able to make a good PACE plan, we need a way to match tools and procedures to the context we’re working in. One way to do that is to evaluate the threat actors and how they’ll interact with your operations—frequently called Operations Security or OPSEC—but in my personal opinion we’re generally lacking a framework for plugging our IP communication tools into that OPSEC picture.
To begin answering that question, I propose the “3-bit” internet protocol communications framework, or “3-bit framework” going forward. This framework is aimed at helping your team or organization select IP based tools for your PACE plans.
It has three criteria, each of which can be “yes” or “no”—1 or 0. Hence bits. These criteria provide broad, fast, and simple rules of thumb to help you understand the tradeoffs of your internet communications.
Is it fast? – Does the tool allow you to quickly send short messages for rapid back and forth communication, or is it slow and longer form? Speed here is broadly referring to how quick the message is to make, send, receive, and read. E.g. is it Instant Messaging (fast), or email (slow)?
Is it quiet? – How well does the tool blend in with the rest of the noise, how discreet is it, or how observable is it? You might think of this one as “how easy is it to tell that I’m using this tool?” judged in terms of adversary effort. E.g. Signal is easy to spot on the wire (loud) but a boutique secure tool tailored for your team requires statistical analysis to spot (quiet – and we’re assuming a lot about the tool being well built here!)
Is it protected? – Are internal communications encrypted? If someone can snag my message off the wire or camp on a server relaying it, can they read it? E.g. Messages encrypted with end-to-end encryption where you control the keys (protected) vs email sent via stock Gmail (unprotected)
There’s probably more to say about applying these rules and building out decision-making frameworks around them. For example: if it’s loud, you probably want to be the same kind of loud as everyone else. However, I think these three are good enough to get the ball rolling.
3 bits give us 8 combinations. For our given context, we need to pick the four best options given our needs. Slow, loud, and unprotected is probably always our least desirable choice because it takes a while, is obvious what it is, and is readable if intercepted. Fast, quiet, and protected is likely our most desirable choice because it’s quick, discreet, and encrypted if intercepted.
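As a toy illustration (the tool names and ratings below are my own assumptions, not a standard scoring), you can enumerate all eight combinations and see where candidate tools land:

```python
from itertools import product
from typing import NamedTuple

class Tool(NamedTuple):
    name: str
    fast: bool       # quick to send and read short messages?
    quiet: bool      # blends into ordinary network traffic?
    protected: bool  # end-to-end encrypted with keys you control?

# Hypothetical assessments, for illustration only.
tools = [
    Tool("Signal", fast=True, quiet=False, protected=True),
    Tool("Boutique secure messenger", fast=True, quiet=True, protected=True),
    Tool("Business email + encryption", fast=False, quiet=True, protected=True),
    Tool("Plain business email", fast=False, quiet=True, protected=False),
]

# Walk the 8 combinations from best (1,1,1) to worst (0,0,0).
for bits in sorted(product([True, False], repeat=3), reverse=True):
    label = "".join("1" if b else "0" for b in bits)
    matches = [t.name for t in tools if (t.fast, t.quiet, t.protected) == bits]
    print(label, ", ".join(matches) if matches else "-")
```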
With this in mind, we can then build a quick PACE plan. Say for example we’re in a generic central Asian country where secure communications and messengers are allowed but generally spoken against and monitored—or at least readily blocked at the first sign of political turmoil. Because the 3-bit framework focuses on IP based communications, I’m not going to address the tradeoffs associated with device inspections or security, which are valid concerns in the real world (e.g. What do you do when simply having Signal installed at customs or a border checkpoint gets you in trouble?)
In this situation, I’m going to assume that it’s best for me if my team uses cheap, fast, encrypted tools and only drops back to the crème de la crème when there’s pressure or unrest:
Primary – Signal; Fast, loud, protected (and free!)
Alternate – Specialized secure messenger; Fast, quiet, protected (probably $$$$$)
Contingency – Regular business email with encrypted messages; Slow, quiet, protected (reasoning that regular business email blends in and is frequently encrypted for legitimate business reasons)
Emergency – Regular business email; Slow, quiet, unprotected (and reasonably available as long as there’s internet)
Is it perfect? Nope, not at all. Is it good enough to get you started evaluating continuity of communications using internet tools? Probably. At a minimum, I think engaging with these three rules of thumb will guide you through discussing the grey areas.
What do you think? Is this heading in the right direction or missing the mark?
NIST Cybersecurity Framework 2.0
Earlier this year, the National Institute of Standards and Technology released the much-anticipated version 2.0 of its internationally popular Cybersecurity Framework (or “CSF”). The CSF is popular for a reason: it provides outcomes and security controls for developing your cybersecurity program, it’s incredibly thorough, and it’s free. This week, we’ll take an initial look at CSF 2.0’s core document to give you and your team a sense of what it does.
NIST CSF 2.0 does not provide a one-size-fits-all security model, but does aim to help everyone. It can be tailored for organizations of all sizes and all sectors—both for-profit and nonprofit—to help reduce and manage cybersecurity risk. The CSF broadly describes desired outcomes that are mostly universally applicable, and then maps those outcomes down to specific security controls.
The CSF wants organizations to consider cyber-risk in the context of their specific goals and needs. However, it is descriptive rather than prescriptive. The CSF starts with the “core,” which can help organizations understand what they should aspire to. It is then supported by a gigantic suite of online resources freely available to help you figure out how to achieve success. To this extent, it readily pairs with other resources such as security practice models to help your team better manage cyber-risk.
Within the core are six functions: identify, protect, detect, respond, and recover—all familiar from the first version of the CSF—plus the new addition, govern. The core doesn’t specify the sequence, priority, or importance of any one function, so navigating that still requires you to evaluate your organization’s own goals and to rely on other resources like the Cyber Defense Matrix (CDM). Each function has a core pattern:
Govern – Understand your organization’s context and set risk management expectations and strategy; Write policy
Identify – Understand the organization’s assets, suppliers, and cyber-related risks; Create inventories and assess risk
Protect – Safeguard assets to lower the likelihood and/or impact of cybersecurity events; Block, log, and reduce risk
Detect – Detect and analyze attacks and compromises in order to support incident response and recovery actions; Detect attacks
Respond – Contain and mitigate the effects of cybersecurity incidents; Stop attacks
Recover – Restore assets and operations impacted by cyber-attacks to a functional and more secure baseline; Return to normal
While it’s clear from the Cyber Defense Matrix and even the CSF’s own verbiage about these functions that there are at least some strict dependencies between functions (Respond explicitly depends on Detect, for example), the CSF visualizes these functions as a wheel: All functions relate to each other and form a cycle of activities with Govern at the center. Govern informs the implementation of the other five functions and forms a cross-cutting line of effort.
Of the six functions, govern, identify, protect, and detect are all continuous functions, which should happen within a feedback loop of communication with your teams. These are also structural functions that directly apply to the engineering and configuration of your systems.
The last two functions are on-demand and manage situations: respond and recover should be prepared at all times but only active when an incident occurs.
These functions are then broken down into categories (broad outcomes) and subcategories (technical outcomes and management activities), which begin to move you from broad descriptions of activities into achievable goals that can be mapped to controls.
Supporting the map from functions to outcomes and activities are profiles and maturity tiers, which can be used during gap analysis to develop current and desired states and can be compared with communities of similar organizations. We’ll talk about profiles and tiers in a future article.
Cybersecurity and Donor Management Systems
Two of the best reasons to care about cybersecurity are to be a good steward of the money you’ve been given by your donors and to attract new donors.
Your donors want to see you keep control of the money they’ve given you and put it to work making the world a better place. Unfortunately, donor management platforms are juicy targets that often remain low-hanging fruit for thieves.
In July 2020, Blackbaud, which offers software and other products for “social good organizations” including nonprofits, foundations, and educational institutions, was breached.
Attackers stole unencrypted bank account numbers, social security numbers, and login credentials belonging to 13,000 of Blackbaud’s customers and those customers’ own clients. They also stole a variety of data such as demographic information, driver’s license numbers, financial records, employment data, donation history, wealth status, and even protected health information. An identity theft gold mine.
Attackers held the information for ransom and threatened to release the data if not paid. Blackbaud paid but didn’t verify that the attackers actually deleted the information. Blackbaud unfortunately did not handle the breach response and post-breach fallout very well.
Ahead of the breach, the FTC alleges that Blackbaud “failed to monitor attempts by hackers to breach its networks, segment data to prevent hackers from easily accessing its networks and databases, ensure data that is no longer needed is deleted, adequately implement multifactor authentication, and test, review and assess its security controls" and "allowed employees to use default, weak, or identical passwords for their accounts."
During the breach, Blackbaud’s technology and customer relations personnel discovered that bank account numbers and social security numbers had been stolen but didn’t escalate the issue to management because there was no reporting procedure.
Then, when filing its 8-K in September 2020, Blackbaud left out crucial details of the scope of the breach and downplayed how sensitive the stolen information was and “characterized the risk of an attacker obtaining such sensitive donor information as hypothetical.”
Forty-three states’ attorneys general sued Blackbaud post-breach and won a settlement of $49.5 million. The SEC then sued the company for failing to disclose the full impact of the breach and for falsifying its quarterly report. Blackbaud is also dealing with 23 consumer class action lawsuits.
Blackbaud will be dealing with the chronic costs of the 2020 breach for years to come, and not just in court fees and settlements. They’re also now required to implement the security engineering they should have been doing ahead of the breach. Here’s a sample of the fixes they have to build and maintain:
· Implement and maintain a breach response plan
· Assist customers in the event of a breach
· Improve network segmentation, firewalls, and access controls
· Improve patch management
· Improve logging and monitoring
· Provide better employee security training
· Encrypt the entire database storing personally identifiable information
· Delete customer data that is no longer needed
· Accurately portray data retention and protection procedures
If your organization hasn’t yet engaged with cybersecurity, you could do far worse than looking at that list and seeing what you’re doing for each item.
This breach shows us that we need to seriously evaluate ourselves and our vendors, and ask the hard question of whether we’re doing all we can to make sure the money we’re trusted with goes to our teams and our missions, not to paying thieves and fines.
Why cybersecurity and nonprofits?
Ericius Security builds cybersecurity programs where they don’t exist, and we specialize in high-risk nonprofits.
Why cybersecurity and nonprofits? The short answer is that cyberattacks affect people. When computers are under attack, people are under attack.
The long answer is that cybersecurity should be part of your overall risk management and operations security efforts. These efforts are about mission success, good stewardship, and safety.
Cybersecurity contributes to mission success in a variety of ways. The most obvious way is that it allows you to retain control of your digital assets and use them the way they were created to be used. It also supports success in more subtle ways, such as protecting trust between your team members when they are communicating by providing ways to verify people are who they claim to be.
Risk management and security may seem like they distract from the mission—they’re simply overhead. But in reality, they are part of good stewardship. When people donate to a cause, they want to see their money go as far as possible to create a positive impact on the world. You aren’t stewarding money well if you leave what you have unprotected for thieves. Investment in cybersecurity can prevent predictable problems and lessen losses. It’s still technically overhead, but it’s less overhead than the chronic costs and loss of trust stemming from a breach.
Finally, cybersecurity supports member safety and care. Good security efforts can actually help reduce the sense of fear and paranoia your team may face by giving them a calibrated sense of what actually might go wrong (and what probably won’t go wrong). That tangibly reduces stress. It also shows your team you’re resourcing them and are concerned about their safety. It’s also a safety feature because it helps you retain control over information. We tend to think about cybersecurity and critical information in terms of reducing identity theft and fraud, but it can also help protect the locations of safe houses, the names of sources, and the nature of high risk/reward efforts.
People tend to decide cybersecurity is worth considering at different times during the life of their missions and businesses. Most commonly, they’re either preparing to go work in a place particularly well known for cyber-crime, are just recovering from a breach and want to prevent it from recurring, or are reaching their teenage years and starting to formalize and improve their processes and policies. Risk management is generally top of mind for boards at this last stage, especially if it was ignored during the start-up years.