Secrets. Everyone has them. Companies don’t tell anyone who their clients are. Military forces don’t tell anyone where their troops are being moved. People don’t want others to know what they have been up to at certain times. Secrets permeate society on every level. Even animals keep secrets: squirrels bury nuts where no other squirrel can find them, and birds hide their nests from predators.
Having secrets is easy. Keeping secrets has steadily become more difficult as time and technology have advanced. In order to keep a secret, the operations surrounding it must be secured, a process called OPSEC. Although the term was coined by the military in the 20th century, it describes something humans have concerned themselves with for as long as they have had secrets.
Secrets always hold a certain degree of power. If a military force knows about enemy troop movements, it can prepare an ambush and gain an edge. If a blackmailer knows a secret about a person, he can demand money or other services in exchange for his silence.
What is presumably the first recorded account of a secret dates back to ancient Greece. It is closely associated with the symbol of the rose. As the legend goes, the goddess Aphrodite gave her son Eros a rose, which he in turn gave to Harpocrates – the God of Silence – who was to ensure that Aphrodite’s various indiscretions would stay secret. Some versions of the story claim that Harpocrates was to keep all the Gods’ indiscretions secret. Thus, the rose became a symbol of secrecy.
Christianity knows conversations sub rosa, under the rose, meaning that secret information is being exchanged and that all parties involved in the conversation are trusted. Confessions are also treated as sub rosa, which is why confessionals often have roses or floral imagery on or around their doors.
Among the first to investigate the abstract nature of secrets was the German sociologist, philosopher and critic Georg Simmel. In his propositions, he outlined the nature of secrets and what they do to the people involved in them. He concluded that the more secrets are organized and shared, the more likely it is that a centralized command structure needs to be established, or establishes itself.
Because of this need for some kind of controlling instance, secrets and the keeping of them are of great value to military forces around the globe. This is why the need-to-know principle is often employed: it is rare that all members of a unit receive all the information that pertains to a mission. They trust their superiors, who have more intelligence, not to abuse that privilege.
Enter strategic Operations Security or what we know as OPSEC today.
OPSEC, while undoubtedly practiced before, first became a major part of the public’s affairs during World War II. Numerous posters, pamphlets and stories reminded people to keep quiet, because they could never know when the enemy was listening. Perhaps the most famous of them all is the American poster Loose Lips Sink Ships, hinting at the fact that the public might inadvertently give enemy spies details about ship movements.
But the Americans were far from the only ones to produce OPSEC posters:
| Country | Slogan | Translation |
|---------|--------|-------------|
| USA | Don’t Kill Her Daddy With Careless Talk | – |
| England | Keep Mum – Loose Talk Costs Lives | – |
| Germany | Schäm dich, Schwätzer | Be Ashamed of Yourself, Blabbermouth |
While spying on each other peaked in its overt form during the Cold War, the need for OPSEC never went away. Within a few years of the Cold War ending, the age of the Internet began in the early 1990s. It was then, at the very latest, that OPSEC turned from something used to protect the interests and activities of a nation state into what is today called an essential survival skill.
Today, society is at a point where every little piece of information about a person or a company can potentially lead to that person or company’s downfall. Even if the repercussions of an OPSEC failure are not catastrophic, they can still be very annoying and lead to public embarrassment or harassment.
One of the most recent examples is the so-called Snowden Cat Facts. An anonymous hacker programmed a bot that searches Twitter posts for publicly posted phone numbers. The bot then sends text messages with random facts about cats to those numbers. The only way to get it to stop is to tweet the following line to NSA whistleblower Edward Snowden:
@Snowden Meow, I <3 catfacts
Edward Snowden himself apparently has nothing to do with the text messages. While facts such as Approximately 40,000 people are bitten by cats in the U.S. annually might range from amusing to annoying, they show that many people fail at basic OPSEC: they posted their phone number publicly, accessible to everyone.
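The bot’s core mechanism, scanning public posts for phone numbers, can be sketched roughly as follows. This is a minimal illustration, not the actual bot’s code; the regular expression, function name and sample posts are all assumptions for the sake of the example.

```python
import re

# Rough North American phone number pattern -- an illustrative
# simplification, not the real bot's detection logic.
PHONE_PATTERN = re.compile(
    r"(?:\+1[\s.-]?)?"       # optional country code
    r"\(?\d{3}\)?[\s.-]?"    # area code, optionally in parentheses
    r"\d{3}[\s.-]?\d{4}"     # subscriber number
)

def find_phone_numbers(post: str) -> list[str]:
    """Return all phone-number-like strings found in a public post."""
    return [m.group(0) for m in PHONE_PATTERN.finditer(post)]

posts = [
    "Lost my phone, text me at 415-555-0137!",   # hypothetical post
    "Great weather today, no numbers here.",
]
for post in posts:
    for number in find_phone_numbers(post):
        print(f"Exposed number found: {number}")
```

The lesson is not in the regular expression itself but in how trivially automatable the search is: anyone can run such a scan against public posts, which is exactly why posting a phone number publicly is an OPSEC failure.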
Before Facebook made public names the norm on the Internet, even something as simple as telling someone a name was considered a breach of OPSEC. In the early days of the Internet, users hid behind screen names and avatars, and whether to reveal one’s real name was a hot topic of debate. Today, people appear to have less of an issue posting, commenting and existing online under their real name, going even as far as to post openly racist and sexist comments under it.

This lax behaviour, among other factors such as the need for business secrecy in all its aspects, has led to OPSEC becoming more and more of an issue that not only companies but also everyday people must tackle. In everyday life, Informational Self-Determination, the domain over all personal data and the terms of its publishing, is of great importance. Yet friends on Facebook like to publish personal data, be it their own or their friends’.
By nature, OPSEC procedures are secrets themselves, because if an attacker knows the procedure, he or she might be able to deduce content. For example, if a company uses code names to describe its clients, an attacker who knows the system behind the assignment of code names can find out who a client is.
Therefore, a lot of companies think long and hard about their OPSEC procedures, train their staff to adhere to the standards, and refine their methods over time. In order to have a baseline procedure that leads into the process of OPSEC – which is not a fixed state but keeps on changing – a kind of framework is needed. For that, the questions known from journalism – Who? What? When? Where? Why? How? – can be applied to figure out what needs to be protected, when, how and by whom.
These are fairly general questions, but they have proven effective at getting to the most important answers. From these, it can be extrapolated what is really important. However, this does not yet yield OPSEC measures, let alone the answer to everything. In fact, answering these questions is just the beginning. Only once this very basic questionnaire is answered can the measures to be implemented be laid out.
The reason for this is not that the questions are insufficient; they are simply meant as a fact-finding exercise. Once answered, they lead the way to a suitable approach for each OPSEC measure.
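As a minimal sketch, the fact-finding questionnaire can be written down as a simple structure. The example answers are hypothetical, loosely anticipating the car-manufacturer scenario used in the examples below:

```python
# A minimal sketch of the fact-finding questionnaire as a data structure.
# The questions follow journalism's Five Ws (and How); the answers are
# hypothetical examples, not prescribed content.
questionnaire = {
    "Who needs to be protected?":    "Engineers and suppliers of the new car",
    "What needs to be protected?":   "Blueprints, prototypes, launch date",
    "When is protection needed?":    "From design phase until market launch",
    "Where are the assets located?": "R&D site, test track, supplier plants",
    "Why is protection needed?":     "A leak would erase the competitive edge",
    "How could information leak?":   "Careless talk, photos, untrusted staff",
}

# Answering the questionnaire is only the beginning: each answer feeds
# the later choice of approach (Enemy First, Project First, Assets First).
for question, answer in questionnaire.items():
    print(f"{question} -> {answer}")
```

Writing the answers down in one place like this also makes it obvious when one of the questions cannot be answered, which, as discussed below, is itself a signal when choosing an approach.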
There are three general approaches to OPSEC, each with its advantages and disadvantages. The main difference between them lies in the starting point for the thought process, although the measures that are ultimately implemented might end up being the same ones that would have been chosen had another approach been taken.
Each of these approaches is followed by a very simple example. In this example, an automobile manufacturer wants to keep a new, revolutionary car a secret until it has been released to the market.
The Enemy First approach is based on a realistic and accurate estimation of the enemy. The most vital question, which absolutely must be answered definitively, is: Who wants to get access to sensitive data? Should this question not be answerable, this approach is probably not the suitable one, as the questions based on the Who require intimate, or at least sophisticated, knowledge of the enemy and his capabilities.
When assessing enemies, it seems reasonable to try to include everyone, including, but not limited to, the Chinese government, scammers from Nigeria, the NSA, competing companies and the neighbours. However, this approach, while seemingly smart, raises the cost of defensive measures immensely. It therefore pays off to be realistic. Maybe the Chinese government or the NSA aren’t all that interested. Maybe the neighbour can’t even use a computer to save his life. The Nigerian scammers are known to go after low-hanging fruit, favouring quantity over quality, so they are out as well.
What remains is the competing company. Naturally, the competition would be interested in a company’s secrets, but they must not have them, seeing as that would eliminate the company’s competitive edge. Knowing the competitor to be the enemy, defensive measures can be taken. It is a hard task to stick to confirmed facts about an agency, person or corporation that friendlies have no inside access to, but the defences – provided they were implemented on solid intelligence – are targeted and therefore arguably more effective.
Example: The car’s main enemies are industrial spies. They are employed by big Chinese industrial conglomerates that are after not only the blueprints of the car but also any snapshot of it they can get. If they could get their hands on manufactured car parts or even an entire car, the market launch and competitive edge would be endangered. Therefore, it makes no sense to have the car manufactured in China.
The Project First approach, which starts from the operation itself, is arguably the most extensive one and probably the most difficult to maintain. However, it also promises a high degree of security, because every operation or project, whether of a company or a person, receives the protection it needs, individually suited to its properties. It can also lead to the establishment of a solid but broadly phrased security baseline, a number of security measures that facilitate the launch of future projects.
The Project First approach analyses each project and identifies potential gaps in OPSEC: points where information can leak and – in a worst-case scenario – lead to catastrophic failure of the project. When choosing this approach, it pays off to err on the side of caution and overestimate seemingly minor and harmless gaps in security.
A project in the sense of the Project First approach can be anything from a business operation to a fake identity that needs maintaining without being compromised. This could even be an IT security audit in a company where the auditors need to act undercover so that employees are unprepared for the staged attack.
The journalistic questions asked earlier can help compile a list of words that are not to be mentioned: involved personnel are not to be named, neither are the assets used during the project, nor locations and times that could give away the project’s nature.
When pursuing the Project First approach, it is vital that the personnel defining the weak points of a project’s security are not only trusted but also thorough and as unbiased as possible. A superior will have to hear some things he or she does not want to hear when shortcomings in security are being discussed, but being overly critical during the early stages of this approach will save a lot of trouble once the project is being executed.
Example: The launch of the new car is imminent. For the secret pre-launch event that only press and other VIPs are to attend, the invitations are sent out individually and accompanied by an NDA. The catering staff are told a cover story as to which event they are catering. The venue where the event is held is also given a cover story. In addition, the windows of the venue will be covered.
The third approach, Assets First, is the most defensive and arguably the one relying most on factual, confirmed knowledge. It is the inversion of the Enemy First approach. When choosing this approach, the data or personnel to be protected are analysed thoroughly and high-risk assets are defined: crucial assets that, should they fall into the wrong hands, would lead to either great damage to the operation or its catastrophic failure.
Defensive measures for high-risk assets can include access restrictions, the use of code, compartmentalization of knowledge among the personnel involved in the operation, and the use of Non-Disclosure Agreements (NDAs). During all of this, it is vital that the involved personnel are informed of the criticality of the data they are handling. NDAs are not to be treated lightly, nor as a mere time-consuming formality.
More than NDAs, education should play a key role. The superior in charge of the operation has the task of making sure that all personnel are aware of the criticality of the data they are handling. Even though this might lead to repetitive statements, it does pay off, seeing as OPSEC need only fail once for an entire operation to be scrapped completely. Once OPSEC fails and information is leaked, there is no going back.
Example: The asset in question is the new, revolutionary car. It is to be protected from prying eyes at all costs. Not even the CEO of the company knows all the secrets that the engineers have built into the car. Off company premises, there is simply no talk of the car. On-site, the car is only discussed using code names and soft language. Even the engine technology and the exhaust are given separate code names. External experts and consultants are put under NDA, and they do not get access to company mail outside the company’s network infrastructure: no VPN, and no mail on their phones.
Finding the right approach is, just like defining the measures that will ultimately be implemented, a matter of discussion among decision makers and experts in a corporation, or among the parties in the inner circle keeping the secret. Answering the journalistic questions above might suggest which approach would be the most suitable, but even that tendency should be questioned.
It pays off to have at least one party involved in the secret continuously play the role of the adversary, poking holes in plans and measures. Bringing in a trusted external group or person might also benefit this process, depending on the scope and size of the operation. As long as a benevolent entity points out flaws that a malevolent attacker could exploit, these can be fixed before the attacker even has a chance to harm the operation.
Secrets are as old as life itself, and keeping them is more difficult than ever. Therefore, the military concept of OPSEC has to be adapted and extended to all aspects of life. There is a variety of approaches to better operational security, very few of which are actually based on technological means. The means are decidedly low-tech: they rely on conceptual groundwork, since the practical measures follow from a lengthy consideration process that has no technological aspect to it.
It is important to note that OPSEC only needs to fail once for the damage to be done. With this in mind, it pays off to take every preventive measure and to think about every aspect of the operation twice. It is vital to remember that all people at the centre of the secret are trusted and pursue the same goal; their criticism is not an attack on a person but a means to reach that shared goal.