MFA Your Grandma
I was born in 1990 (a young whippersnapper, I know), which means I was too young for commercials asking “It’s 10pm, do you know where your child is?” but old enough to remember PSAs at the end of Saturday morning cartoons and random episodes of TV shows where Spider-man or friends had to fight the personification of drug addiction.
Those were still the days of Scruff McGruff sending you counter-crime comic books if you wrote in, concerns that drug dealers were giving out free drugs at elementary schools, fear of adults putting edibles in Halloween buckets, and paranoia of kidnappers in vans with candy. Not to make too much light of serious fears, but the age in which the internet emerged to dominance was not without its security concerns.
To help deal with stranger danger, my parents developed a challenge and response for the family. The idea was that if someone came to pick us up because “your parents said to come get you” (in other words, “to kidnap you”) we were only to believe them if they knew the password. The password was memorable, silly, and almost never used except when our parents would test us. Trusted family friends had it, as did—I assume—our grandparents and godparents.
The thing that my dad understood then, and that I rely on now professionally, is that control over our own identity and trust is powerful. Why do you use a call-sign when playing with a walkie-talkie as a kid? Because you’re keeping control over your identity so that people can’t abuse it to build trust they don’t deserve, and it makes any information they overhear less valuable.
This applies to secure messaging and security operations, even in the emerging age of AI: Security requires trust requires identity.
Think about a squad of soldiers securing a hill. To have “security,” you need to have control over who has access to that hill and what they are allowed to do when they get there. That security relies on both trust and a boundary, in this case a literal point on the ground that those not trusted must not cross. In computers, that’s usually roles, functions, and information someone may not have. To enforce trust, you need to know someone’s identity. And identity must be authenticated.
In warfare, we’ve learned quickly that identity cannot be authenticated with a single, falsifiable piece of information. “Oh you’re wearing the same uniform as me! You must be on my side” breaks down rapidly.
Nations have solved that problem in a variety of ways over the years; one example is the emergence of the challenge coin (https://en.wikipedia.org/wiki/Challenge_coin#Origins) and its legendary use by a downed WWI pilot who needed to prove he was who he said he was. Another is the classic call and response: if you want to return from “using the woodline” as a bathroom, you better know your platoon’s password to get back through security.
In cybersecurity, identity is usually first proven with a username or user ID. These are easy to falsify, so we add on one factor of authentication, usually a password. But, passwords can be guessed or stolen, so we’ve developed three forms of authentication:
Type 1 – Something you know (e.g. password, pass phrase)
Type 2 – Something you have (e.g. a token, smart card, or code-generating-device)
Type 3 – Something you are (e.g. biometrics like fingerprints)
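To make those factor types concrete, here is a minimal Python sketch that combines something you know (a password) with something you have (a time-based one-time code), loosely following the RFC 6238 TOTP construction. The function names and the toy password hashing are my own illustrative choices, not a production design:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits=6, step=30) -> str:
    """Simplified RFC 6238 time-based one-time password (SHA-1, 30s steps)."""
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def authenticate(password: str, code: str, stored_hash: bytes, secret: bytes) -> bool:
    """Require BOTH factors: Type 1 (password) AND Type 2 (token code)."""
    knows = hmac.compare_digest(hashlib.sha256(password.encode()).digest(), stored_hash)
    has = hmac.compare_digest(code, totp(secret))
    return knows and has
```

In real systems the password would be hashed with a slow, salted function (bcrypt, Argon2) rather than plain SHA-256; the point here is simply that identity rests on two independent proofs, not one.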
In the post-COVID, high-speed, work-from-home environment, we’ve been defaulting to trusting someone’s identity because 1) they say they are John from IT and 2) they look and sound like John from IT on the video call. So-called generative AI is challenging us to re-engage with security, trust, and identity: if you can’t trust what you see and hear, how can you be secure?
Fortunately, security principles are largely more timeless than the technologies that implement them. Can’t trust who they appear to be (Type 3)? Can we marry their audio and video back to something they know or something that they have? Can we slow down and turn to multi-factor authentication?
What’s one thing that you and your grandma both know? What’s a word or phrase you can come up with in advance, first to force a conversation with grandma about security, scams, and deep fakes, and second to prepare her for the day someone calls pretending to be you and asking to be bailed out of jail?
To steal from a panelist I heard last week, how can you “MFA your grandma?” (Sorry, sir! I did not record your name)
Cybersecurity as Stewardship
First, we’ve received three anonymous donations through various workplace giving networks over the last couple months. If you’re behind them THANK YOU for your support.
This week, as I’ll mention in the updates, I had the opportunity to attend the DFW Technology Summit as well as the Faith at Work Summit. I also had the pleasure of talking with several of the excellent folks at Concilium who help Christians with risk management, security training, and safety (https://concilium.us/). All this has me pondering the place of cybersecurity and defense within work and vocation and within the “sacred-secular divide.”
Scott at Concilium has a bunch of great turns of phrase to explain why Christians and missions should care about security. One such phrase: we have a different “why” behind security but the same “how” as the commercial sector.
There are two ways to interpret that phrase: first, the commercial sector sees the “why” of security as preventing loss of money, but the missions sector sees it as about obeying God. Second, and more completely and generously, the commercial sector sees security in terms of something to protect but Christians see security in terms of someone to obey.
Security and defense in the context of missions are about stewardship. It’s about not just accounting for the resources and costs required to build a tower (c.f. Luke 14:28), but ensuring those resources are not squandered by mismanagement or foreseeable setbacks. In other words, cybersecurity is about protecting something so that we can make the best use of the resources given to us to accomplish the work God has asked us to do with him.
Okay, so that’s not all that different than the secular world. Security is protecting an asset so that you can continue to steward it well—same same? The difference lies in the view of work as obedience to and therefore worship of God. If work for Christians is partaking in the restorative work of God, then risk management is making sure we’re faithful stewards of resources entrusted to us and putting ourselves in a posture of resilience and preparedness for stress and setbacks.
Normally, Christians talk about risk under what might be called “theology of suffering,” in which Christians are expected to suffer for the gospel, and security is thereby put at odds with the Biblical certainty of suffering for the Kingdom.
Instead, the stewardship view of security nests under what might be called “theology of work” and takes a different frame than fear, uncertainty, and doubt. Instead of being about “what can go wrong?” it remains about “how can I best be faithful?” Instead of pitting prayer for safety and the act of building defenses against each other, it integrates prayer into the acts of building walls and setting watchmen.
Putting security where it belongs under stewardship, then under work, then under faithfulness, then under worship keeps the proper frame in mind: security is about serving God and people. Yes, it’s about stopping bad guys, but instead of marketing and training with fear in mind, we can teach people with the intent to love them, protect them, and build trust. It also begs us to consider how we think about our opponents, adversaries, and thieves.
“To love is to will the good of the other.” – Thomas Aquinas
Secure Messengers, what are they?
You’ve probably noticed that we end up talking about secure messengers in these articles quite a bit. While selecting and using secure messengers isn’t necessarily about cybersecurity (as opposed to communications security or privacy), cybersecurity has a lot to say about what makes a messenger “secure” and how they all measure up to each other.
Also, missionaries and advocates ask us about secure messaging and VPNs frequently.
There are a lot of topics to cover and no one article can cover them all. Today we’re talking about the very basics.
What are they?
Secure messengers are messaging platforms—usually instant messaging or short-form messaging via a cell phone—that protect your communications from intercept by unwanted third parties, especially service providers and external surveillance.
For the sake of this article, we’ll focus on the usual service providers because external surveillance relies on or mirrors the monitoring conducted by service providers, at least until we have to consider quantum computing.
The service providers we’re normally concerned with comprise the infrastructure(s) our messages ride over. So:
The messaging provider itself and their servers
The cell service provider
The internet service provider
The cell phone or computer’s applications and operating system
Secure messengers prevent one or more of these service providers from being able to read the contents of messages sent between people using the secure service.
With that said, most secure messengers are a privacy tool: they protect what’s being said from snooping. They are not normally anonymity tools because they don’t always hide who is speaking (e.g. Signal and WhatsApp both tie accounts to real phone numbers, though Signal has recently added usernames).
They also may or may not be quiet when broadcasting, as we talked about a bit in the 3-bit framework (https://www.ericiussecurity.org/blog/3-bit-ip-planning). Think of them as encrypted radio signals: people can hear the signal with their own radios, but they need something special to understand what’s being said.
Selecting a secure messenger
We’ll skip over the need to understand the information your team relies on for the moment and we’ll also skip over conversations about classification and need to know. Let’s assume that your team needs a secure messenger to communicate with each other about some form of sensitive information.
The first step to selecting a new tool is determining what it needs to do and why. While discussing that, we should consider at least the following:
Group size – How many people need to communicate at once
Features needed – Do you need text messages? Group calls, video calls, and/or document collaboration?
Security and Privacy features and policies – What are your team’s privacy and security policies? What are the privacy and security policies of the tools available to choose from?
Budget – How much money do you have vs how much do tools cost?
Operating System Support – Do tools need to support cell phones, computers, or both? Which ones?
In other words, the first thing we need to engage with is the business or mission need for the tool and how the tool will interact with the mission’s existing setup and constraints.
Then, we want to engage with privacy and security specific features.
Essential Security Features
Secure messengers aim to prevent service providers and surveillance from monitoring your communications. That means they need to do three primary things:
Protect data in transit – Prevent snooping as the messages travel
Protect data at rest – Prevent snooping when the message is stored and not in use on the phone or computer
Protect accounts/identities from takeover – Prevent other people from successfully pretending to be you to hijack your message storage or send/receive systems
These three goals are usually accomplished with encryption and strong access controls, and they produce this list of essential features:
End to End Encryption with keys under the users’ control – Messages should be encrypted as soon as they are sent and should not be decrypted until received. Only the sender and receiver should be able to decrypt the messages
Forward Secrecy – Keys should change regularly, and old keys should be discarded when they do, so that a future key compromise cannot expose past messages. No one without the right keys should be able to read a message, including the legitimate users. (see also, https://avinetworks.com/glossary/perfect-forward-secrecy/)
Zero Knowledge – The service provider creating the messaging system should have no knowledge of the messages’ contents and as little knowledge about the senders or receivers as possible.
Contact Verification – Users should be able to control their own keys, view their own keys, and use the fingerprints of their keys to ensure they are talking to the person they think they are and that no one is sitting in the middle decrypting then relaying messages.
Support for Multi-factor Authentication – Accounts for the service should be protected from takeover by at least two forms of authentication.
Design or Architecture should be documented – In modern cryptography, it shouldn’t matter if the cryptographic system is known as long as keys remain secure. Similarly, it shouldn’t matter if the service provider publishes the broad overview of their architecture, because it should be secure unless someone has keys.
Independently audited and open about problems – All systems have problems and vulnerabilities. A secure messaging provider should acknowledge this and be open with customers about how frequently they are audited, what problems are found, and what’s done to fix problems
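To see how forward secrecy works in practice, here is a toy Python sketch of a hash ratchet, the symmetric-key idea behind protocols like Signal’s. The cipher below is deliberately simplified and insecure (real messengers use vetted ciphers such as AES or ChaCha20), and all the names are my own:

```python
import hashlib
import hmac

def ratchet(chain_key: bytes):
    """Derive a one-time message key, then advance the chain key.

    Once the old chain key is deleted, past message keys can never be
    re-derived -- that is the heart of forward secrecy."""
    message_key = hmac.new(chain_key, b"msg", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return message_key, next_chain

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher for illustration ONLY -- not secure."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Both parties start from a shared chain key (in practice, from a
# Diffie-Hellman key exchange during session setup).
chain = b"shared-secret-from-key-exchange"
k1, chain = ratchet(chain)           # key for message 1; chain advances
ct = xor_cipher(k1, b"hello bob")    # encrypt, then delete k1
```

Because each message gets its own key and the chain only moves forward, someone who steals today’s chain key can read future traffic but not yesterday’s.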
You might also consider price to be essential: if the tool is free, you are the product. That saying may be overly simple because the tool may have an alternate funding strategy such as freemium subscriptions, nonprofit/donation support, or open-source software (i.e. “you pay with your sweat and time”). It’s important that you know how the messenger makes its money and stays active, so that you know whether they are monetizing your messages. Facebook Messenger, WhatsApp, and Telegram are great examples of free services whose funding models draw their security into question.
Useful Features
Besides essential features, you may also want to consider features that increase anonymity or decrease the impact of any exposure or failure. Namely:
Disappearing messages – Can you set messages to automatically erase so they are not available for exploitation if someone ever does break into the system?
Registration without phone or email – Can you create and secure an account without linking it back to other accounts, even if this means you could become permanently locked out?
Screening, Selection, and the 3-bit Framework
When selecting a tool, plan, or course of action, you generally have two sets of criteria. First, screening criteria establish what you’re willing to consider. Second, selection criteria help you rank your options.
Screening criteria set the table for what options you’re willing to compare to each other. Typically, screening criteria are based on the business or mission requirements for a tool or solution. They can also be based on your constraints and your willingness to use certain features or qualities.
Screening criteria can vary widely, so here are a few examples:
Must allow simultaneous editing and collaboration
Must provide instant messaging
Must not cost more than $1000
Must work with MacOS
Must provide end-to-end encryption
Note the use of the word “must.” Screening criteria lay out the non-negotiables that your options must meet.
Selection criteria, however, are used to qualify your options and rank them against each other. If all solutions get the job done, selection criteria determine which one gets it done best or most cost effectively. Some examples:
Cost
Ease of Use
Setup speed
Selection criteria typically take the form of qualitative scales, and those scales can be subjective. When criteria are subjective, you’d normally just rank all options against each other. The best solution is scored 1, the second best is scored 2, etc. (Though you can use an inverse ranking system if you want high scores to win—the world is your oyster.)
You can also weight selection criteria, so if cost is your most important factor, you can 2x the scores given to each option regarding costs to make options shake out most distinctly based on that criterion.
To borrow US Army language from FM 6-0 Commander and Staff Organization and Operations from 2022, all options must be suitable, feasible, acceptable, distinguishable, and complete. Screening criteria are used to narrow your options down to what’s suitable, feasible, and acceptable. Selection criteria help evaluate the degree to which an option is distinguished from other options and a complete solution.
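Putting the two kinds of criteria together, a minimal sketch might filter options against the “musts” and then rank the survivors with weighted selection scores. The tools, numbers, and weights below are made up for illustration:

```python
# Hypothetical tool data -- names, prices, and scores are invented.
tools = [
    {"name": "Tool A", "e2ee": True,  "cost": 800, "macos": True, "ease": 2},
    {"name": "Tool B", "e2ee": True,  "cost": 900, "macos": True, "ease": 1},
    {"name": "Tool C", "e2ee": False, "cost": 0,   "macos": True, "ease": 3},
]

# Screening: hard "must" filters -- anything failing any one is out entirely.
candidates = [t for t in tools if t["e2ee"] and t["macos"] and t["cost"] <= 1000]

def score(tool, weights={"cost": 2, "ease": 1}):
    """Selection: weighted score where lower is better; cost is weighted 2x."""
    return weights["cost"] * tool["cost"] / 1000 + weights["ease"] * tool["ease"]

ranked = sorted(candidates, key=score)   # best (lowest score) first
```

Note that Tool C never reaches the ranking step: failing a single screening criterion removes it no matter how well it might have scored elsewhere.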
Significant bits and Screening and Selection Criteria
The 3-bit Framework (ref: https://www.ericiussecurity.org/blog/3-bit-ip-planning) can be used for both types of criteria. Remember, the 3-bit framework is specifically built for evaluating your PACE plan: ranking options in order that they will be used. Which means:
Screening Criteria – What options are eligible for inclusion in the PACE plan?
Selection Criteria – Where does the option go in the PACE plan, if anywhere?
First, we’re going to determine whether any of the three categories must be answered a particular way:
Is it fast?
Is it quiet?
Is it protected?
If your options must be protected, then we’re going to force that bit to be “yes” (1) and throw out any option that doesn’t qualify.
In the language of bits and bytes, we can then select our most significant bits. In this case, we’re going to put our most important or significant bits all the way to the left in order of importance. For our screening criteria, we can choose to either make them most significant, or we can choose to drop that bit altogether going forward—that bit no longer helps us distinguish our options.
By ranking bits in order of importance from left to right, we can keep our yes/no options and develop a natural scoring framework atop it using a natural representation of numbers.
Let’s assume that we’ve screened options by some criteria not listed in the 3-bit framework. We then look at our 3 bits and rank them in order of importance. For a contrived example let’s say we choose:
Protected
Speed
Quiet
We assign each option a yes/no score. Using Signal and AOL Instant Messenger as examples:
Signal
Protected? Yes (1)
Speed? Yes (1)
Quiet? No (0)
AIM
Protected? No (0)
Speed? Yes (1)
Quiet? No (0)
Since we have 3 bits, re-write those scores from left to right:
Signal: 110
AIM: 010
Now you get to choose how much of a math nerd you’re going to be. It’s the 3-bit framework, so you can use binary (base 2) if you really want to. But time is valuable and 110 is bigger than 010 in both binary and in decimal (base 10, aka “normal numbers”).
So, in our contrived example, Signal scores higher than AIM because 110 is greater than 10 (I dropped the zero from 010).
Assuming you’ve put your criteria in order from most important to least important left to right, you will have a natural scoring system that can be used for PACE planning.
Primary – Highest score
Alternate – Second place
Contingency – Third place
Emergency – Fourth place
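The whole flow above, from yes/no bits to a PACE ordering, can be sketched in a few lines of Python; the options and answers mirror the contrived Signal/AIM example:

```python
# Most significant bit first, matching the contrived ranking above.
BITS = ["protected", "fast", "quiet"]

options = {
    "Signal": {"protected": 1, "fast": 1, "quiet": 0},   # 110
    "AIM":    {"protected": 0, "fast": 1, "quiet": 0},   # 010
}

def score(answers):
    """Read the yes/no answers left to right as a binary number."""
    return int("".join(str(answers[bit]) for bit in BITS), 2)

# Highest score first, then label the ordering as a PACE plan.
pace = sorted(options, key=lambda name: score(options[name]), reverse=True)
labels = ["Primary", "Alternate", "Contingency", "Emergency"]
plan = dict(zip(labels, pace))
```

Reading the bits as an actual binary number (110 = 6, 010 = 2) gives the same ordering as eyeballing them as decimal strings, which is why the shortcut in the article works.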
Unfortunately, this doesn’t solve tiebreakers for you. You probably then add additional criteria like cost and ease of use to differentiate the tie. If there’s still a tie and you’re a battalion commander, send the operations officer back to the dungeon to develop more distinct options. Otherwise, celebrate having two truly interchangeable options to build resiliency for your team.
NIST Cybersecurity Framework Profiles and Tiers
Continuing from our previous discussion about the NIST CSF (https://www.ericiussecurity.org/blog/nist-cybersecurity-framework-20), the Cybersecurity Framework 2.0 offers two tools called profiles and tiers.
Profiles describe your team’s current or desired cybersecurity posture, usually by describing the outcomes you aim to achieve. Profiles help you understand, prioritize, and communicate how you’re trying to organize your cybersecurity efforts.
The gist of profiles is the creation of a “current profile” and a “target profile,” much like how Ericius describes creating a current and desired state while using the Cyber Defense Matrix (CDM) (https://www.ericiussecurity.org/blog/frameworks-for-cyber-success). The current profile lays out what your team is currently accomplishing and how well it’s going. The target profile explains what your desired state looks like and helps determine priorities and missing resources.
Also, just like how Ericius employs the Cyber Defense Matrix, you can use current and target profiles to identify gaps and create an action plan. The CDM is more consistent in its application of the NIST CSF’s functions and more thorough because it spells out what assets to consider. Ericius uses the CDM first to triage your team during gap assessment, and then more thoroughly to develop a risk registry and plan of action and milestones during roadmapping.
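As a rough illustration of the current-versus-target idea, you can model each profile as a maturity score per CSF function and let the gaps drive your priorities. The six function names come from CSF 2.0; the scores below are invented:

```python
# Illustrative current and target profiles, scored 1-4 per CSF 2.0 function.
current = {"Govern": 1, "Identify": 2, "Protect": 2,
           "Detect": 1, "Respond": 1, "Recover": 1}
target  = {"Govern": 3, "Identify": 3, "Protect": 3,
           "Detect": 2, "Respond": 2, "Recover": 2}

# Gap per function: the bigger the gap, the higher it sits in the action plan.
gaps = {fn: target[fn] - current[fn] for fn in current}
priorities = sorted(gaps, key=gaps.get, reverse=True)
```

Real profiles score outcomes at the category and subcategory level rather than one number per function, but the mechanic is the same: target minus current tells you where to spend effort first.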
The CSF also offers community profiles, which describe the baseline objectives that other teams in your sector aim to achieve. Generally, community profiles can be used as best practice models or can be lifted as target profiles. They can also help with demonstrating prudence because they help you show that you are exerting commensurate effort with your peers.
Besides profiles, NIST CSF 2.0 offers a system of tiers, or categories that describe your progression from informal risk management and ad hoc crisis response to flexible and risk-informed approaches that are constantly learning. Tiers help you take a clear-eyed view of how well you’re doing and “set the overall tone for how an organization will manage its cybersecurity risk.” (CSF 2.0 pg 8)
However, tiers complement or nest with your team’s cybersecurity—and enterprise—risk management planning. Tiers don’t replace those broader efforts because the tiers can’t communicate what level of maturity you should be at. Instead, your team should evaluate costs and benefits associated with moving to a higher tier and exerting more effort.
That said, in my personal opinion most teams should aim for no less than Tier 2: Risk Informed, so that you’re at least engaging with risk as a team and not as a collection of individuals.
The rest of the core NIST CSF 2.0 document is about 10 pages of discussion on supplementary (and complementary) online resources and how they can be used to set and communicate strategy within your organization. We’ll pick up next time with a discussion of the different types of IT and cybersecurity risks that your team should consider.