How Certifications Killed Thinking in Cybersecurity
Why the industry replaced judgement with checklists and called it progress
In The Quiet Death of Common Sense, I argued that something fundamental has gone missing from cybersecurity. It isn’t intelligence, and it isn’t education. It’s judgement. The ability to reason through a situation instead of deferring to a rule, a framework, or an authority.
It didn’t happen by accident. It happened because modern cybersecurity quietly built systems where thinking stopped being necessary, then rewarded people for navigating those systems as efficiently as possible. Certifications didn’t kill cybersecurity. They killed the need to think inside it.
Certifications were never meant to create security engineers. They were designed to create standardisation. They optimise for consistent terminology so everyone uses the same nouns. They enable repeatable assessments so Audit A looks like Audit B. They give HR a scalable way to bin hundreds of CVs. They provide legal defensibility, the familiar “we hired a certified professional, therefore we exercised due diligence.”
These are reasonable business goals. But they are not security goals.
What they don’t optimise for is judgement. They don’t test how someone reasons under uncertainty, how they model failure, how they think adversarially, or how they respond when a system behaves in ways no framework anticipated. Certifications answer the question, “Do you know what the industry agrees to call things?” They do not answer the more important one, “Can you tell when the industry is wrong?”
The real damage begins when certifications stop being a supplement to thinking and quietly become a substitute for it. A proxy replaces the thing it was meant to represent. Instead of asking whether someone can reason about risk, design systems that fail safely, or understand trust boundaries, organisations ask easier questions. Are they certified? Which framework boxes do they tick? Will this pass an audit?
Once that substitution happens, thinking becomes optional. And in organisations optimised for speed, scale, and liability reduction, optional thinking doesn’t survive for long. At scale, this creates a dangerous inversion. Security stops being about reducing real-world risk and becomes about demonstrating alignment with accepted frameworks.
You hear it after every major breach. “We followed best practice.” “We passed the audit.” “We met the standard.” None of these statements describe security. They describe procedural safety. They protect organisations from blame, not systems from failure. This is how fully compliant organisations get breached, and how certified teams miss architectural flaws that a hobbyist could spot in ten minutes.
Accountability becomes collective, procedural, and abstract. No one personally failed; the system was followed. Certification acts as a shield.
But for individuals, the logic inverts.
The most certified person in the room, the one with the longest list of credentials, often becomes the liability endpoint. When reality breaks through the paperwork, the question is no longer “Did the system fail?” but:
“You were the certified one. Why didn’t you catch this?”
In this way, certification simultaneously protects organisations and exposes individuals. Institutions hide behind standards, while certified professionals are expected to have foreseen everything, even when operating inside systems that actively discourage independent judgement.
The irony is brutal. The more certified someone is, the less room they have to challenge bad assumptions, push back on broken architectures, or say “this framework does not fit reality”. Yet they are the first to be blamed when the framework inevitably fails.
Certification becomes a diffusion mechanism for organisational accountability and a concentration point for individual blame.
When everyone followed the rules, no one is responsible for the outcome, except, conveniently, the person whose job title includes the word “certified”. Compliance didn’t fail. It did exactly what it was designed to do. It created a paper trail of consensus.
There’s another uncomfortable truth here. Thinking is expensive. It slows projects down. It raises uncomfortable questions. It challenges authority. It creates friction with procurement, legal, and insurance. Framework-driven culture does the opposite. It creates predictability. Predictability is comforting. It’s budgetable. It’s defensible in court. The system evolves to reward the map-makers, people who can map any problem to an existing control set, while selecting against the engineers who point out that the map itself is wrong.
This leads to a paradox that most people in the industry recognise but rarely say out loud. The more certified an organisation becomes, the less personal responsibility anyone feels when things go wrong. After a breach, no one says, “I should have thought this through differently.” They say, “We followed industry standards.”
Real security competence doesn’t look like that. It’s messy, subjective, and hard to package. It can’t be tested cleanly at scale because it requires accepting subjectivity, and subjectivity terrifies systems built on uniformity. Real competence assumes compromise by default. It designs for blast-radius reduction. It questions standard trust assumptions. It understands that controls interact in non-linear, often broken ways.
Instead of measuring these things, we certify the appearance of competence and hope it correlates with the real thing. Sometimes it does. Mostly, it doesn’t. Certifications are useful when they support thinking. They are a catastrophe when they replace it. The moment a badge becomes safer than judgement, the system has already failed. Much of cybersecurity now operates in a world of comfortable, well-documented, professionally endorsed non-thinking.
This isn’t an argument against education or shared language. It’s an argument against automating human judgement out of the system. Certifications didn’t make people stop thinking. They just made it possible to have a very successful career without ever having to start.
In Part 3, we’ll look at the hiring side of this problem, why the industry keeps hiring for the wrong signals, and how we might start identifying real judgement again.



