“One CISO, please!”
If you’re trying to improve your business’s cybersecurity, you’re going to encounter the concept—or rather the role—of the CISO: the Chief Information Security Officer.
There’s a lot of discussion online about what a CISO does and who they (should) report to in an organization. To summarize it all very briefly, the CISO is in charge of leading the cybersecurity team and efforts of the business to ensure the business’s success. That means they aren’t security analysts reviewing alerts from network intrusion detection. Rather, they set the goals for cybersecurity, build the team, get the team rowing in the right direction, write cyber-policy, direct training, etc.
You run into the concept of the CISO early on when trying to build the security function of your business because it needs the leadership CISOs offer: they’re leaders, team builders, and strategists. Without leadership, you end up paying for risk assessments and penetration tests that ultimately don’t serve your business.
So if you need a cyber-leader, how much do they cost? Well, Forbes says they average $584,000 per year in salary, not including bonuses and equity (https://www.forbes.com/sites/forbestechcouncil/2023/02/28/why-hire-a-virtual-ciso-in-2023/). That’s extreme and not normal. According to Salary.com CISO salaries range between $220,000 and $275,000 per year with an average salary sitting in the ballpark of $250,000 (https://www.salary.com/research/salary/benchmark/chief-information-security-officer-salary).
Honestly, that’s a quarter-million dollars a year that your business probably doesn’t have. But cybersecurity as a function (made of people, processes, and tools; not a feature built from software) needs leadership.
The odds are that you don’t need everything a $250K CISO brings to the table, and if you were to hire one, they’d be overkill for your organization. Especially if you’re just getting started, what you need in the short term is a skilled cyber professional who can:
Evaluate risk and shortcomings and present them in terms that non-IT leaders understand
Build plans and strategies to mitigate risk
Set direction, write policy, and design procedures that work for your business
Manage vendors and security training
Set budget
There’s more strategy and documentation in those requirements than leadership. That’s because a new cyber function likely leverages existing personnel and fills gaps by buying services. With a sufficiently well-versed cybersecurity professional, you can manage the function while relying on existing people and business leadership practices. That sounds more like hiring a technical program manager (TPM) who works directly for the COO or CFO.
Returning to Salary.com, TPMs have an average salary of $150,000 per year, which is going to save you at least $100,000 while still sparing you the headache of making your COO manage cybersecurity awareness training or the CFO manage the IT helpdesk vendor (https://www.salary.com/research/salary/posting/technical-program-manager-salary).
Alternatively, you can further lean into buy-over-build while your business continues to grow by bringing in an outside or virtual CISO (vCISO) to accomplish the specific functions you’re missing, pairing them with empowered leaders within your organization to drive change. The vCISO functions as a strategist, coach, and consultant so that you get the cybersecurity help you need now while you lay a proper foundation so you can continue to grow towards better safety and stewardship.
In contrast to any kind of full-time employee, a vCISO or security consultant can flex to meet you where you’re at and produce the change you need to see right now. Some vCISO services can be quite expensive, but for us that normally translates to about $4,000 per month for our for-profit clients—or about one-fifth of the annual cost of a full-time CISO.
MFA Your Grandma
I was born in 1990 (a young whippersnapper, I know), which means I was too young for commercials asking “It’s 10pm, do you know where your child is?” but old enough to remember PSAs at the end of Saturday morning cartoons and random episodes of TV shows where Spider-man or friends had to fight the personification of drug addiction.
Those were still the days of Scruff McGruff sending you counter-crime comic books if you wrote in, concerns that drug dealers were giving out free drugs at elementary schools, fear of adults putting edibles in Halloween buckets, and paranoia of kidnappers in vans with candy. Not to make too much light of serious fears, but the age in which the internet emerged to dominance was not without its security concerns.
To help deal with stranger danger, my parents developed a challenge and response for the family. The idea was that if someone came to pick us up because “your parents said to come get you” (in other words, “to kidnap you”) we were only to believe them if they knew the password. The password was memorable, silly, and almost never used except when our parents would test us. Trusted family friends had it, as did—I assume—our grandparents and godparents.
The thing that my dad understood then, and that I rely on now professionally, is that control over our own identity and trust is powerful. Why did you use a call sign when playing with walkie-talkies as a kid? Because you were keeping control over your identity so that people couldn’t abuse it to build trust they didn’t deserve, and it made any information they overheard less valuable.
This applies to secure messaging and security operations, even in the emerging age of AI: Security requires trust requires identity.
Think about a squad of soldiers securing a hill. To have “security,” you need to have control over who has access to that hill and what they are allowed to do when they get there. That security relies on both trust and a boundary, in this case a literal point on the ground that those not trusted must not cross. In computers, that’s usually roles, functions, and information someone may not have. To enforce trust, you need to know someone’s identity. And identity must be authenticated.
In warfare, we’ve learned quickly that identity cannot be authenticated with a single, falsifiable piece of information. “Oh you’re wearing the same uniform as me! You must be on my side” breaks down rapidly.
Nations have solved that problem in a variety of ways over the years. One example is the emergence of the challenge coin (https://en.wikipedia.org/wiki/Challenge_coin#Origins) and its legendary use by a downed WWI pilot who needed to prove he was who he said he was. Another is the classic challenge and response: if you want to return from “using the woodline” as a bathroom, you had better know your platoon’s password to get back through security.
In cybersecurity, identity is usually first claimed with a username or user ID. These are easy to falsify, so we add on a factor of authentication, usually a password. But passwords can be guessed or stolen, so we recognize three types of authentication:
Type 1 – Something you know (e.g. password, pass phrase)
Type 2 – Something you have (e.g. a token, smart card, or code-generating-device)
Type 3 – Something you are (e.g. biometrics like fingerprints)
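Combining two of those types is what gives multi-factor authentication its strength. As a rough illustration, here is a minimal Python sketch (the function names are hypothetical, but the one-time code follows RFC 4226, the standard behind most code-generating devices) that requires both something you know and something you have:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Something you have (Type 2): an RFC 4226 one-time code from a shared secret."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password: str, otp: str, *, salt: bytes, stored_hash: bytes,
                 secret: bytes, counter: int) -> bool:
    """Require Type 1 (something you know) AND Type 2 (something you have)."""
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash,
    )
    has = hmac.compare_digest(hotp(secret, counter), otp)
    return knows and has
```

Either factor alone fails the check; an attacker who steals the password still needs the token, and vice versa.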
In the post-COVID, high-speed, work-from-home environment, we’ve been defaulting to trusting someone’s identity because 1) they say they are John from IT and 2) they look and sound like John from IT on the video call. So-called generative AI is challenging us to re-engage with security, trust, and identity: if you can’t trust what you see and hear, how can you be secure?
Fortunately, security principles are more timeless than the technologies that implement them. Can’t trust who they appear to be (Type 3)? Can we marry their audio and video back to something they know or something they have? Can we slow down and turn to multi-factor authentication?
What’s one thing that you and your grandma both know? What’s a word or phrase you can come up with in advance first to force the conversation about security, scams, and deep fakes with grandma, but second to be prepared when someone calls her pretending they’re you and asking to be bailed out of jail?
To steal from a panelist I heard last week, how can you “MFA your grandma?” (Sorry, sir! I did not record your name)
Cybersecurity as Stewardship
First, we’ve received three anonymous donations through various workplace giving networks over the last couple months. If you’re behind them THANK YOU for your support.
This week, as I’ll mention in the updates, I had the opportunity to attend the DFW Technology Summit as well as the Faith at Work Summit. I also had the pleasure of talking with several of the excellent folks at Concilium who help Christians with risk management, security training, and safety (https://concilium.us/). All this has me pondering the place of cybersecurity and defense within work and vocation and within the “sacred-secular divide.”
Scott at Concilium has a bunch of great turns of phrase to explain why Christians and missions should care about security. One such phrase: we have a different “why” behind security but the same “how” as the commercial sector.
There are two ways to interpret that phrase: first, the commercial sector sees the “why” of security as preventing loss of money, but the missions sector sees it as about obeying God. Second, and more completely and generously, the commercial sector sees security in terms of something to protect but Christians see security in terms of someone to obey.
Security and defense in the context of missions is about stewardship. It’s about not just accounting for the resources and costs required to build a tower (cf. Luke 14:28), but ensuring those resources are not squandered through mismanagement or foreseeable setbacks. In other words, cybersecurity is about protecting something so that we make the best use of the resources given to us to accomplish the work God has asked us to do with him.
Okay, so that’s not all that different from the secular world. Security is protecting an asset so that you can continue to steward it well—same same? The difference lies in the view of work as obedience to, and therefore worship of, God. If work for Christians is partaking in the restorative work of God, then risk management is making sure we’re faithful stewards of the resources entrusted to us and putting ourselves in a posture of resilience and preparedness for stress and setbacks.
Normally, Christians talk about risk under what might be called “theology of suffering,” in which Christians are expected to suffer for the gospel, and security is thereby put at odds with the Biblical certainty of suffering for the Kingdom.
Instead, the stewardship view of security nests under what might be called “theology of work” and takes a different frame than fear, uncertainty, and doubt. Instead of being about “what can go wrong?” it remains about “how can I best be faithful?” Instead of pitting prayer for safety and the act of building defenses against each other, it integrates prayer into the acts of building walls and setting watchmen.
Putting security where it belongs under stewardship, then under work, then under faithfulness, then under worship keeps the proper frame in mind: security is about serving God and people. Yes, it’s about stopping bad guys, but instead of marketing and training with fear in mind, we can teach people with the intent to love them, protect them, and build trust. It also presses us to consider how we think about our opponents, adversaries, and thieves.
“To love is to will the good of the other.” – Thomas Aquinas
Secure Messengers, what are they?
You’ve probably noticed that we end up talking about secure messengers in these articles quite a bit. While selecting and using secure messengers isn’t necessarily about cybersecurity (as opposed to communications security or privacy), cybersecurity has a lot to say about what makes a messenger “secure” and how they all measure up to each other.
Also, missionaries and advocates ask us about secure messaging and VPNs frequently.
There are a lot of topics to cover and no one article can cover them all. Today we’re talking about the very basics.
What are they?
Secure messengers are messaging platforms—usually instant messaging or short-form messaging via a cell phone—that protect your communications from intercept by unwanted third parties, especially service providers and external surveillance.
For the sake of this article, we’ll focus on the usual service providers because external surveillance relies on or mirrors the monitoring conducted by service providers, at least until we have to consider quantum computing.
The service providers we’re normally concerned with comprise the infrastructure(s) our messages ride over. So:
The messaging provider itself and their servers
The cell service provider
The internet service provider
The cell phone’s or computer’s operating system
Secure messengers prevent one or more of these service providers from being able to read the contents of messages sent between people using the secure service.
With that said, most secure messengers are a privacy tool: they protect what’s being said from snooping. They are not normally anonymity tools because they don’t always hide who you are while speaking (e.g. Signal and WhatsApp both require real phone numbers to register, though Signal now lets you hide yours behind a username).
They also may or may not be quiet when broadcasting, as we talked about a bit in the 3-bit framework (https://www.ericiussecurity.org/blog/3-bit-ip-planning). Think of them as encrypted radio signals: people can hear the signal with their own radios, but they need something special to understand what’s being said.
Selecting a secure messenger
We’ll skip over the need to understand the information your team relies on for the moment and we’ll also skip over conversations about classification and need to know. Let’s assume that your team needs a secure messenger to communicate with each other about some form of sensitive information.
The first step to selecting a new tool is determining what it needs to do and why. While discussing that, we should consider at least the following:
Group size – How many people need to communicate at once?
Features needed – Do you need text messages? Group calls, video calls, and/or document collaboration?
Security and Privacy features and policies – What are your team’s privacy and security policies? What are the privacy and security policies of the tools available to choose from?
Budget – How much money do you have vs how much do tools cost?
Operating System Support – Do tools need to support cell phones, computers, or both? Which ones?
In other words, the first things we need to engage with are the business or mission need for the tool and how the tool will interact with the mission’s existing setup and constraints.
Then, we want to engage with privacy- and security-specific features.
Essential Security Features
Secure messengers aim to prevent service providers and surveillance from monitoring your communications. That means they need to do three primary things:
Protect data in transit – Prevent snooping as the messages travel
Protect data at rest – Prevent snooping when the message is stored and not in use on the phone or computer
Protect accounts/identities from takeover – Prevent other people from successfully pretending to be you to hijack your message storage or send/receive systems
These three goals are usually accomplished with encryption and strong access controls, and they produce this list of essential features:
End-to-End Encryption with keys under the users’ control – Messages should be encrypted as soon as they are sent and should not be decrypted until received. Only the sender and receiver should be able to decrypt the messages.
Forward Secrecy – Keys should change over time, and a key compromised today should not unlock messages from the past. No one without the right keys should be able to read old messages, including the legitimate users. (see also, https://avinetworks.com/glossary/perfect-forward-secrecy/)
Zero Knowledge – The service provider creating the messaging system should have no knowledge of the messages’ contents and as little knowledge about the senders or receivers as possible.
Contact Verification – Users should be able to control their own keys, view their own keys, and use the fingerprints of their keys to ensure they are talking to the person they think they are and that no one is sitting in the middle decrypting then relaying messages.
Support for Multi-factor Authentication – Accounts for the service should be protected from takeover by at least two forms of authentication.
Design or Architecture should be documented – In modern cryptography, it shouldn’t matter if the cryptographic system is known as long as keys remain secure. Similarly, it shouldn’t matter if the service provider publishes the broad overview of their architecture, because it should be secure unless someone has keys.
Independently audited and open about problems – All systems have problems and vulnerabilities. A secure messaging provider should acknowledge this and be open with customers about how frequently they are audited, what problems are found, and what’s done to fix problems
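Contact verification, in particular, is easy to picture in code. A minimal Python sketch (illustrative only: real messengers such as Signal define their own safety-number formats, and the function names here are hypothetical) of comparing key fingerprints out-of-band:

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """A short, human-comparable digest of a key, meant to be read aloud
    on a call or compared in person rather than trusted blindly."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group the first 32 hex characters into eight 4-character chunks.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

def contact_verified(key_my_app_stores: bytes, fingerprint_they_read_aloud: str) -> bool:
    """If these differ, someone may be sitting in the middle decrypting,
    re-encrypting, and relaying messages between you."""
    return fingerprint(key_my_app_stores) == fingerprint_they_read_aloud
```

The point is that each user can check, over a separate channel, that the key their app holds for a contact matches the key that contact actually controls.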
You might also consider price to be essential: if the tool is free, you are the product. That saying may be overly simple because the tool may have an alternate funding strategy such as freemium subscriptions, nonprofit/donation support, or open-source software (i.e. “you pay with your sweat and time”). It’s important that you know how the messenger makes its money and stays active, so that you know whether they are monetizing your messages. Facebook Messenger, WhatsApp, and Telegram are great examples of free services whose funding models draw their security into question.
Useful Features
Besides essential features, you may also want to consider features that increase anonymity or decrease the impact of any exposure or failure. Namely:
Disappearing messages – Can you set messages to automatically erase so they are not available for exploitation if someone ever does break into the system?
Registration without phone or email – Can you create and secure an account without linking it back to other accounts, even if this means you could become permanently locked out?
Screening, Selection, and the 3-bit Framework
When selecting a tool, plan, or course of action, you generally have two sets of criteria. First, screening criteria establish what you’re willing to consider. Second, selection criteria help you rank your options.
Screening criteria set the table for what options you’re willing to compare to each other. Typically, screening criteria are based on the business or mission requirements for a tool or solution. They can also be based on your constraints and your willingness to use certain features or qualities.
Screening criteria can vary widely, so here are a few examples:
Must allow simultaneous editing and collaboration
Must provide instant messaging
Must not cost more than $1000
Must work with MacOS
Must provide end-to-end encryption
Note the use of the word “must.” Screening criteria lay out the non-negotiables that your options must meet.
Selection criteria, however, are used to qualify your options and rank them against each other. If all solutions get the job done, selection criteria determine which one gets it done best or most cost effectively. Some examples:
Cost
Ease of Use
Setup speed
Selection criteria typically take the form of qualitative scales, and those scales can be subjective. When criteria are subjective, you’d normally just rank all options against each other. The best solution is scored 1, the second best is scored 2, etc. (Though you can use an inverse ranking system if you want high scores to win—the world is your oyster.)
You can also weight selection criteria: if cost is your most important factor, you can double the scores given to each option for cost so that options shake out more distinctly on that criterion.
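The two-step process can be sketched in a few lines of Python. The tools, numbers, and screening thresholds below are made up purely for illustration:

```python
# Hypothetical candidate tools; every name and number here is invented.
tools = [
    {"name": "Tool A", "e2ee": True,  "cost": 800, "ease_rank": 2, "setup_rank": 1},
    {"name": "Tool B", "e2ee": True,  "cost": 950, "ease_rank": 1, "setup_rank": 2},
    {"name": "Tool C", "e2ee": False, "cost": 300, "ease_rank": 3, "setup_rank": 3},
]

# Screening: the non-negotiable "musts". Anything that fails is out entirely.
screened = [t for t in tools if t["e2ee"] and t["cost"] <= 1000]

# Selection: rank survivors against each other (1 = best) and total the ranks.
# Cost is weighted 2x because it's this team's most important criterion.
by_cost = sorted(screened, key=lambda t: t["cost"])

def total_score(tool):
    cost_rank = by_cost.index(tool) + 1  # 1 = cheapest surviving option
    return 2 * cost_rank + tool["ease_rank"] + tool["setup_rank"]

ranked = sorted(screened, key=total_score)  # lowest total wins
```

Tool C never gets ranked at all because it failed the screen; the remaining options are compared only on the selection criteria.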
To borrow US Army language from FM 6-0 Commander and Staff Organization and Operations from 2022, all options must be suitable, feasible, acceptable, distinguishable, and complete. Screening criteria are used to narrow your options down to what’s suitable, feasible, and acceptable. Selection criteria help evaluate the degree to which an option is distinguished from other options and a complete solution.
Significant bits and Screening and Selection Criteria
The 3-bit Framework (ref: https://www.ericiussecurity.org/blog/3-bit-ip-planning) can be used for both types of criteria. Remember, the 3-bit framework is specifically built for evaluating your PACE plan: ranking options in the order they will be used. Which means:
Screening Criteria – What options are eligible for inclusion in the PACE plan?
Selection Criteria – Where does the option go in the PACE plan, if anywhere?
First, we’re going to determine whether any of the three categories must be answered a particular way:
Is it fast?
Is it quiet?
Is it protected?
If your options must be protected, then we’re going to force that bit to be “yes” (1) and throw out any option that doesn’t qualify.
In the language of bits and bytes, we can then select our most significant bits. In this case, we’re going to put our most important or significant bits all the way to the left in order of importance. For our screening criteria, we can either make them most significant or drop those bits altogether going forward—they no longer help us distinguish our options.
By ranking bits in order of importance from left to right, we can keep our yes/no options and develop a natural scoring framework atop them using ordinary positional notation.
Let’s assume that we’ve screened options by some criteria not listed in the 3-bit framework. We then look at our 3 bits and rank them in order of importance. For a contrived example let’s say we choose:
Protected
Speed
Quiet
We assign each option a yes/no score. Using Signal and AOL Instant Messenger as examples:
Signal
Protected? Yes (1)
Speed? Yes (1)
Quiet? No (0)
AIM
Protected? No (0)
Speed? Yes (1)
Quiet? No (0)
Since we have 3 bits, rewrite those scores from left to right:
Signal: 110
AIM: 010
Now you get to choose how much of a math nerd you’re going to be. It’s the 3-bit framework, so you can use binary (base 2) if you really want to. But time is valuable and 110 is bigger than 010 in both binary and in decimal (base 10, aka “normal numbers”).
So, in our contrived example, Signal scores higher than AIM because 110 is greater than 10 (I dropped the zero from 010).
Assuming you’ve put your criteria in order from most important to least important left to right, you will have a natural scoring system that can be used for PACE planning.
Primary – Highest score
Alternate – Second place
Contingency – Third place
Emergency – Fourth place
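The whole scoring-to-PACE pipeline fits in a few lines of Python. This is a toy sketch using the same contrived Signal/AIM example and bit ordering as above:

```python
def three_bit_score(protected: bool, fast: bool, quiet: bool) -> int:
    """Most significant bit on the left: Protected > Fast > Quiet,
    matching the contrived ordering chosen above."""
    return (protected << 2) | (fast << 1) | int(quiet)

# Signal scores 0b110 = 6; AIM scores 0b010 = 2.
options = {
    "Signal": three_bit_score(protected=True, fast=True, quiet=False),
    "AIM": three_bit_score(protected=False, fast=True, quiet=False),
}

# Highest score takes Primary, the next Alternate, and so on down the plan.
order = sorted(options, key=options.get, reverse=True)
pace_plan = dict(zip(["Primary", "Alternate", "Contingency", "Emergency"], order))
```

Because the most important criterion occupies the highest bit, an ordinary numeric sort reproduces the left-to-right priority without any extra bookkeeping.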
Unfortunately, this doesn’t solve tiebreakers for you. You’d probably then add additional criteria, like cost and ease of use, to break the tie. If there’s still a tie and you’re a battalion commander, send the operations officer back to the dungeon to develop more distinct options. Otherwise, celebrate having two truly interchangeable options to build resiliency for your team.