It seems that the concept of ‘intelligence’ is a problem. There is no agreed definition, and the industry is peppered with vendors and organizations applying a range of meanings and interpretations. For me – and I come from a military intelligence background – the concept of intelligence was always very straightforward, and in its purest form it is simply the application of the intelligence cycle.

I’ll start with an upfront confession: I think the intelligence cycle is awesome. You could call me an intelligence cycle optimist. If we consider that the purpose of intelligence is to ‘reduce ignorance in decision making’, or simply to make some sense of the confusion and complexity of the world outside an organization, then the intelligence cycle is a great model to help us produce that. In conjunction with the intelligence cycle we can also use the intelligence trinity model, which helps shape our purpose as an intelligence organization by breaking our work into three areas: situational awareness, insight, and forecasting – but more on that in another blog. We have also been developing and expanding the concept of ‘cyber situational awareness’, which is where we can provide some sight of the unknown beyond an organization’s boundary.

Now I don’t want to sound like one of those ex ‘three letter agency’ guys (I’m not, by the way – but I’ll blog in future about why that’s interesting) preaching to the industry about why they and only they can do intelligence, but I think there’s great value in not re-inventing the wheel, and in looking to historical precedent and experience for advice and inspiration. So for those who aren’t familiar with the intelligence cycle, the most widely accepted definition (and there are plenty of others) is: direction; collection; analysis/processing; dissemination; and review. Now don’t get me wrong, I’m not saying that this is perfect – see the work of the academic Arthur Hulnick for a thorough critique of why it isn’t – but it’s a cracking model to get started with. Let me now walk you through how we use the intelligence cycle here at Digital Shadows (now ReliaQuest). Oh, and by the way, I wouldn’t claim that our usage of the intelligence cycle is in any way perfect, but I think that by laying it out here, we can:

a. be transparent

b. invite and benefit from community discussion

c. provide some help and advice to those who are only just dipping a toe into the murky world of intelligence.

[Figure: The intelligence cycle]

Direction

Intelligence does not exist to service itself (aka the ‘self-licking lollipop’) and must get some kind of steer or nudge from somewhere to propel it on the right bearing. In intelligence cycle parlance, this is known as direction. For us, direction comes from two sources. First and foremost, from our clients. We need to know what their concerns are, what keeps them awake at night, and what the board or executives are demanding answers about. This is pivotal. Intelligence must answer the requirements of its customers. But with that customer focus comes a risk. Often, CISOs, CIOs, and threat/SOC managers are focused on the latest problem. They’re fighting fires, spinning plates, and juggling a wealth of complex and overlapping challenges. This can lead to them falling into the ‘information bubble’, whereby the focus of intelligence becomes short term, lacks value, and can quickly expire. Which leads me neatly to our second source of direction: ourselves. We use our knowledge of the threat landscape to provide coverage of those areas which, although not a significant threat today, may become a threat tomorrow. We’re also pushing for more ‘horizon scanning’ in our direction, i.e. what will the threats look like in a year’s time, what impact will they have, how will that affect our clients’ risk posture, and how can we get a head start on covering them?
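
To make the idea of direction concrete, here’s a minimal sketch of how an intelligence requirement might be recorded, with its source and an expiry date to guard against the ‘information bubble’. The field names and values are purely illustrative – this is not our actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IntelligenceRequirement:
    question: str   # what the customer needs answered
    source: str     # "client" or "internal" (e.g. horizon scanning)
    priority: int   # 1 = board-level concern ... 3 = watching brief
    expires: date   # forces periodic review so requirements don't go stale

requirements = [
    IntelligenceRequirement(
        question="Which actors are targeting our payment infrastructure?",
        source="client", priority=1, expires=date(2016, 6, 1)),
    IntelligenceRequirement(
        question="How might extortion-driven DDoS evolve over the next year?",
        source="internal", priority=2, expires=date(2016, 12, 1)),
]
print(len(requirements), "active requirements")
```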

I think that direction is so critical that we’re now building out a team just to handle its application across the intelligence cycle. We have a full-time collection analyst whose sole responsibility is to ensure that our collection tools are doing their job – collecting against our requirements, and providing our analysts with the information they require. I’ve known this previously as Intelligence Requirements Management (IRM), and it operates across the cycle to ensure that requirements are being serviced. I could go on and on about IRM, but I’ll park that for another blog.
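
As a simple illustration of the kind of coverage check an IRM function performs, consider mapping each collection source to the requirements it services and flagging the gaps. The source names and requirement IDs below are hypothetical.

```python
# Map each collection source to the requirement IDs it currently services.
coverage = {
    "dark-web-spider":    {"REQ-001", "REQ-002"},
    "paste-site-monitor": {"REQ-002"},
    "social-media-feed":  set(),  # collecting, but servicing nothing
}
all_requirements = {"REQ-001", "REQ-002", "REQ-003"}

serviced = set().union(*coverage.values())
print("Uncovered requirements:", all_requirements - serviced)  # {'REQ-003'}
print("Idle sources:", [s for s, reqs in coverage.items() if not reqs])
```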

Collection

So, now that we have gathered our client and internal requirements, we need to go and gather a bunch of information to (hopefully) turn into intelligence. The classic phrase ‘garbage in, garbage out’ applies neatly here: it is critical that we collect timely and relevant information for our analysts to use, otherwise we will fail to satisfy our intelligence requirements. To help us do that, we’ve developed the collection cycle (yes, another cycle). Being ex-military, I like to keep things simple, and this certainly isn’t an exception. The cycle is:

1. Gather requirements (see the previous intelligence cycle step).

2. Develop observables, i.e. what will information which can service that requirement look like?

3. Collect information, either using the tools we already have, or building tools which can.

4. Assess information for relevance, credibility, validity, and its ability to answer our requirements.

5. Feed the results back into the cycle and constantly improve.

I’ll write a future blog about the collection cycle, as I find it a fascinating model for understanding and improving intelligence collection.
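
As a rough illustration, the collection cycle can be modeled as a feedback loop in code. Every function here is a trivial stand-in for real tooling, so treat it as a sketch of the shape of the cycle rather than an implementation.

```python
# A toy model of the collection cycle. Only the shape of the loop matters here.

def develop_observables(requirement):
    # Step 2: decide what information servicing this requirement would look like.
    return [f"mention of '{requirement}'"]

def collect(observables):
    # Step 3: run existing tools (or build new ones) against the observables.
    return [{"observable": o, "relevant": True, "credible": True} for o in observables]

def assess(items):
    # Step 4: keep only relevant, credible information.
    return [i for i in items if i["relevant"] and i["credible"]]

def run_cycle(requirements, rounds=2):
    for round_no in range(rounds):  # Step 5: feed back and go around again
        observables = [o for r in requirements for o in develop_observables(r)]
        kept = assess(collect(observables))
        print(f"Round {round_no + 1}: {len(kept)} assessed items "
              f"against {len(requirements)} requirements")

run_cycle(["credential dumps naming client domains"])
```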

But collection isn’t just about collecting information; it’s also about having access to the information we need. We’re very active in this space, and source discovery is a constant task, as is building tools to assist with our collection. A good example of this is the addition of our dark web spider, which at the most recent count has indexed 8 million Tor and I2P pages for our collection engine to work over. As well as this, we’ve built functionality to ensure that our collection tools are working correctly, and that we can identify anomalies, reductions, and increases in volume from our sources (this is often a great indicator that something suspicious is going on). This is useful for identifying when threat actors move to new sources or means of communication, or when a particular topic, grievance, motivation, or TTP has got entities across the threat landscape exercised.
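
A simple way to picture that volume check: compare each day’s collection count for a source against its recent norm, and flag large deviations. The threshold and figures below are illustrative, not how our tooling actually works.

```python
from statistics import mean, stdev

def volume_anomalies(daily_counts, window=7, threshold=3.0):
    """Yield (day, count) where a day's count sits more than `threshold`
    standard deviations from the mean of the preceding `window` days."""
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(daily_counts[i] - mu) > threshold * sigma:
            yield i, daily_counts[i]

# A forum that suddenly goes quiet is often a sign actors have moved elsewhere.
counts = [120, 115, 118, 122, 119, 121, 117, 12]
print(list(volume_anomalies(counts)))  # [(7, 12)]
```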

Analysis

Following the collection of information, we need to analyze/process it into intelligence. Whenever I interview applicants for intelligence positions, I always ask what they think the most important step in the intelligence cycle is, and almost always I get the same answer: analysis. I would agree with this – obviously, as I’m an intelligence analyst by trade – but also because it is often the most challenging part of the intelligence cycle. I read a lot of reports from other vendors, and it’s interesting to see the variation in analytical quality. Particularly interesting – and something which I admire in the industry – is the public, and often brutal, way in which weak analysis is called out (see the April 2015 paper by SANS which questions the accuracy of a vendor report on ICS attacks).

To the untrained, analysis is simply the application of intuitive judgement fed by a selection of information. In contrast to intuition is the use of structured analytical techniques such as link analysis, Analysis of Competing Hypotheses (ACH), and SWOT, which offer much-needed frameworks for ordering and demonstrating your analytical steps. Intuition has got a bad rap from some commentators (see the intuitive vs. structured analysis debates), but I believe both work well hand in hand, and ultimately deliver a stronger, more robust product. Intuition can draw on the power of the brain for processing information, identifying trends and links, and also for imagination, which is critical for generating hypotheses. But intuition can fall victim to biases and heuristics, which can erode the quality of assessments. This is where structured techniques can assist by offering transparency and frameworks. Here at Digital Shadows (now ReliaQuest) we use both, and our goal is to provide intelligence analysis which answers requirements, and is developed with strong argumentation, clear sources and source evaluation, and logical conclusions.
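
To show what a structured technique buys you, here’s a heavily simplified ACH matrix in code: score each piece of evidence against each hypothesis, then favor the hypothesis with the fewest inconsistencies. The evidence and scores are invented for illustration.

```python
# Scores: -1 = inconsistent with the hypothesis, 0 = neutral, +1 = consistent.
evidence = {
    "tooling reused from an earlier campaign":        {"H1: same actor": 1,  "H2: copycat": 0},
    "infrastructure registered after public reports": {"H1: same actor": -1, "H2: copycat": 1},
    "victim profile matches earlier targeting":       {"H1: same actor": 1,  "H2: copycat": 1},
}

hypotheses = ["H1: same actor", "H2: copycat"]
# Classic ACH weighs inconsistencies most heavily: the strongest hypothesis
# is the one with the least evidence against it.
inconsistencies = {h: sum(1 for scores in evidence.values() if scores[h] < 0)
                   for h in hypotheses}
print(min(inconsistencies, key=inconsistencies.get))  # -> H2: copycat
```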

Dissemination

Following analysis comes dissemination; we’ve got to get our product out to our customers in an appropriate and timely way. We use three main methods for this: customizable email alerting (sometimes referred to as ‘push’), our API, and our intelligence portal (sometimes referred to as ‘pull’). Because all of our intelligence is tagged using the STIX schema and given a severity rating, clients can subscribe to combinations of the two. This is good, as it allows users to tune alerting to their specific appetite or requirements, helping to ensure that they receive much-needed ‘signal’ while avoiding the pitfall of ‘noise’. We’re also always more than happy to speak with our clients to talk through the latest intelligence summary report, or to update them on developing threats. On top of all of that, we also have integrations with both Maltego and ThreatConnect, so that clients can further explore and manipulate our intelligence feed.
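
As an illustration of the ‘push’ model, a subscription can be thought of as a set of tags plus a minimum severity, with only matching intelligence generating an alert. The schema below is a simplification for this blog, not our actual STIX mapping.

```python
def matches(item, subscription):
    # Alert only if the item is severe enough AND carries a subscribed tag.
    return (item["severity"] >= subscription["min_severity"]
            and subscription["tags"] & set(item["tags"]))

items = [
    {"title": "New banking trojan campaign", "severity": 4, "tags": ["malware", "finance"]},
    {"title": "Low-level forum chatter", "severity": 1, "tags": ["hacktivism"]},
]
subscription = {"min_severity": 3, "tags": {"finance", "malware"}}

for item in items:
    if matches(item, subscription):
        print("ALERT:", item["title"])  # only the on-topic, high-severity item fires
```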

Unfortunately, dissemination can also be used to cover up shoddy or weak analysis. Sometimes all it takes is a glossy report template, snazzy graphics, and a snappy title to eclipse what is, underneath, poor analytical work. At Digital Shadows (now ReliaQuest), we don’t subscribe to that model. All of our work is transparent (you can see our sources and follow our assessments), and we’re absolutely not in the business of fear, uncertainty, and doubt (FUD).

Review

Lastly in the intelligence cycle (and thanks for reading this far) we have ‘review’. This is where we look back at our direction and ask, "Did we answer that requirement?" We conduct this internally and with our clients, and the results feed straight back into ‘direction’ for the next round of the cycle. By constantly reviewing our performance, we’re able to adjust our direction to satisfy client requirements, which, after all, is the entire point of intelligence.

But review is also about reviewing our analytical performance. We look at whether we were right, as well as how we could have improved our collection, analysis, or dissemination in order to better inform our clients of the threats against them. By taking this more holistic approach, we are constantly improving the quality of our product at each step of the intelligence cycle.
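
A review loop can be as simple as recording whether each requirement was answered and feeding the misses straight back into direction. The requirement IDs and figures below are illustrative.

```python
# Which requirements did the last round of the cycle actually answer?
answered = {"REQ-001": True, "REQ-002": False, "REQ-003": True}

hit_rate = sum(answered.values()) / len(answered)
carry_forward = [req for req, ok in answered.items() if not ok]

print(f"Requirements satisfied: {hit_rate:.0%}")   # 67%
print("Feed back into direction:", carry_forward)  # ['REQ-002']
```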

That was a fairly rapid canter through the five steps of the intelligence cycle according to Digital Shadows (now ReliaQuest). As I mentioned at the beginning, I don’t see the cycle as perfect, but it’s a useful model for separating the various functions of intelligence in order to produce a product which is accurate, well researched, timely, and actionable. I’ll write further blogs about each individual step in the cycle, but I’m hoping this was a useful introduction.

If you want to learn more about how we use the intelligence cycle at Digital Shadows (now ReliaQuest), you can request a demo here.

Further reading

Securing the State by Sir David Omand (former head of GCHQ) is an excellent introduction to intelligence.
FM 3-24, the US counter-insurgency manual, has a chapter dedicated to intelligence.
JDP 2-00 is the UK military intelligence manual, and offers some great insights into the discipline.