Schrödinger's Security Assessment

Can an application or organisation exist in two states at the same time, and why does it often take a security assessment to determine which?

[Image: a young kitten on a lounge peering out from under a shaggy blanket. Caption: Gratuitous kitten photo]

I am told that many great journeys on the Internet start with pictures of kittens, so for the cat lovers out there - you're welcome.


Recent security assessment work for a client led me to ask some questions:

  1. If the client had not engaged us to perform the assessment, would things have stayed the same (or even got worse)?

  2. What specifically about an impending security assessment can lead to security improvements?

  3. If the intention of a security assessment is to proactively identify risk, why are there still pockets of industry that feel embarrassed or resistant when it comes to knowing how things really are?

[Image: a teacher writing mathematical equations on a whiteboard. Caption: WARNING: Science Content]

First, some background. Students of scientific history will remember a thought experiment (Schrödinger's cat) involving a box, a cat and some unpleasant outcomes for the cat. It illustrated the concept of superposition: that something can exist in multiple, undetermined states until an observation is made.


Prior to security assessment, the hope and assumption from most clients is that all is well with their environment.

When a security assessment is planned, there may be a corresponding flurry of activity: patching systems, disabling debugging, removing test data, removing old admin accounts, upgrading ancient middleware applications, disabling unused services and so on.

So, before the assessment has even commenced, the environment may actually become more secure.
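That pre-assessment scramble can be turned into routine hygiene rather than a last-minute panic. As a minimal sketch (the config file, keys and service names below are entirely hypothetical, not drawn from any real client environment), a tiny script that flags common pre-assessment findings in an application config might look like:

```shell
#!/bin/sh
# Hypothetical example: scan an application config for common findings
# that tend to get fixed only once an assessment is booked.
# All paths, keys and values here are illustrative only.

CONF=/tmp/example-app.conf

# Create a sample config to scan (stands in for a real application config).
cat > "$CONF" <<'EOF'
DEBUG=true
ADMIN_USERS=alice,retired_admin
SERVICES=https,telnet
EOF

# Flag debugging left enabled.
grep -q '^DEBUG=true' "$CONF" && echo "WARN: debugging is enabled"

# Flag a legacy cleartext service still configured.
grep -q 'telnet' "$CONF" && echo "WARN: legacy service (telnet) still configured"
```

Run regularly (for example from a scheduled job), checks like these surface the same issues an assessor would, but on your own timetable.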

Why does this happen? Just as the hapless cat (if it were aware of the mad scientist's dastardly plans) would probably take steps to stay out of the box or somehow free itself, taking precautions to avoid negative findings about an environment is a perfectly logical thing to do.

Why does it often happen just before (or sometimes during) an assessment? This usually comes down to overwork, lack of resources or prioritisation issues, whether organisationally or at the team level.

[Image: three people gathered around laptops, pointing at one screen. Caption: A group of developers excited to help improve security]

What can we learn from this behaviour, and how can we harness it for the forces of good? If security and risk management are not given sufficient time, prioritisation and resourcing, along with clear and achievable targets and expectations, the result will usually be a reactive and chaotic approach.

Harnessing this behaviour therefore requires the following foundational things to be in place:

  1. Ensuring there is clarity on the business value and importance of an asset (or service)

  2. Ensuring there is prioritisation of these assets (or services) in much the same way as business continuity plans require

  3. Setting realistic targets for security and risk management outcomes (e.g. patching timeframes)

  4. Measuring performance against these targets to help highlight any challenges and issues

  5. Ensuring that specific time and adequate resourcing is allocated to the management of security and risk to achieve the stated targets

If you would like to talk to us about proactive services to help prioritise and plan your security controls - get in touch.

Author: Clinton Smith
