The Trouble with AI in Cybersecurity
Part 5: Ethics on Autopilot
Are we building digital guardians… or just smarter surveillance?
Artificial Intelligence is embedded deeper into our cyber defences every day. From anomaly detection to automated threat response, AI promises faster, smarter protection. But here’s the inconvenient truth: we’re rushing to automate cybersecurity before we’ve stopped to ask whether we should—or what “protection” even means in a world where your data, your identity, and your autonomy are constantly being mined and manipulated.
1. When the Guardian Becomes the Gatekeeper
AI-based cybersecurity systems can detect, assess, and respond to threats faster than any human. That’s the pitch. But what’s often left unsaid is how those decisions are made, what data they're fed, and who has oversight when the machine gets it wrong—or right, in a way we never intended.
AI doesn’t just guard the perimeter anymore. It decides what counts as a threat. That should make anyone paying attention to digital rights extremely uncomfortable.
2. The Data Dilemma
AI is only as “smart” as the data it’s trained on. In practice, that means massive data collection, often indiscriminately harvested and questionably governed. This should raise red flags around privacy, consent, and data sovereignty.
When cybersecurity vendors train models on customer data, logs, emails, or endpoint behaviours, what guarantees do you have that:
Your data won’t be reused elsewhere?
Bias or profiling isn’t baked into detection?
The model’s behaviour is actually auditable?
FACT: most vendors won’t give you clear answers, because they can’t.
3. Automation vs. Accountability
Imagine an AI security system that automatically isolates a department from the network over a misclassified file transfer. You lose a day of operations, maybe a client, maybe worse. Who’s responsible? The vendor? The admin? The machine?
The current ecosystem loves to claim “autonomous cybersecurity” until the accountability questions come knocking. Then it becomes a collective shrug.
4. Ethics-as-an-Afterthought, Not Ethics-by-Design
Most cybersecurity AI today is designed to win the war—not protect the civilians. It’s built by engineers focused on performance, not philosophers worried about power.
Summing up:
Privacy-preserving techniques like zero-knowledge proofs (ZKP) and fully homomorphic encryption (FHE)? Rarely baked in.
Governance frameworks? Barely there.
Transparent audit trails? Good luck.
We don’t need smarter AI. We need smarter boundaries.
5. PrivID’s Take: AI with a Conscience
At PrivID, we’re not anti-AI. We’re anti-unaccountable AI.
That’s why we’ve taken a radically different approach: using Zero-Knowledge Proofs and Fully Homomorphic Encryption to ensure AI-based operations can analyse patterns and enforce rules without exposing the underlying data. Think of it as intelligence without intrusion.
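To make the idea of computing on encrypted data concrete, here is a toy sketch using the Paillier scheme, which is additively (not fully) homomorphic. This is an illustration of the general principle, not PrivID’s actual implementation, and the deliberately tiny hardcoded primes are nowhere near secure:

```python
# Toy demo of additively homomorphic encryption (Paillier scheme).
# Illustrative only: tiny hardcoded primes, NOT secure for real use.
import math
import random

p, q = 1_000_003, 1_000_033   # toy primes; real keys use ~1024-bit primes
n = p * q
n_sq = n * n
g = n + 1                                            # standard Paillier choice
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)    # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                 # lambda^-1 mod n

def encrypt(m: int) -> int:
    """Encrypt integer m < n under the public key (n, g)."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """Decrypt with the private key (lam, mu)."""
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n

# A service can sum encrypted event counts without ever seeing them:
# multiplying ciphertexts adds the underlying plaintexts.
c_total = (encrypt(17) * encrypt(25)) % n_sq
print(decrypt(c_total))  # → 42
```

The point is the last two lines: the party holding the ciphertexts learns the aggregate only after the key holder decrypts, and never sees the individual values. Fully homomorphic schemes extend this from addition to arbitrary computation, at a much higher cost.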
We also believe the human element isn’t a weak link: it’s a critical checkpoint. Our platform keeps humans in the loop when it matters, and ensures visibility at every step.
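The human-in-the-loop pattern can be sketched as a simple approval gate: disruptive or low-confidence actions are routed to a person before anything executes. The names here (`Action`, `HIGH_IMPACT`, the 0.9 threshold) are hypothetical illustrations, not PrivID’s actual API:

```python
# Sketch of a human-in-the-loop gate for automated response actions.
# All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    kind: str          # e.g. "isolate_host", "block_ip"
    target: str
    confidence: float  # model confidence in the threat verdict

# Actions disruptive enough to always require a human decision.
HIGH_IMPACT = {"isolate_host", "disable_account"}

def execute(action: Action, approve: Callable[[Action], bool]) -> str:
    """Run low-impact, high-confidence actions automatically; route
    anything disruptive or uncertain through a human approver."""
    if action.kind in HIGH_IMPACT or action.confidence < 0.9:
        if not approve(action):
            return f"DEFERRED: {action.kind} on {action.target} (human rejected)"
        return f"EXECUTED: {action.kind} on {action.target} (human approved)"
    return f"EXECUTED: {action.kind} on {action.target} (auto, conf={action.confidence:.2f})"

# Quarantining a whole department always reaches a human first:
print(execute(Action("isolate_host", "finance-vlan", 0.97), approve=lambda a: False))
# → DEFERRED: isolate_host on finance-vlan (human rejected)
```

The design choice is that automation handles the routine, reversible cases, while anything that could cost a department a day of operations needs explicit sign-off, and every branch produces an auditable record.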
6. The Big Question: Is This the Future We Want?
Cybersecurity is supposed to protect people, not just infrastructure. But when AI becomes the judge, jury, and jailer—without oversight—we risk building a future that’s efficient, but ethically hollow.
We don’t need to fear AI. But we do need to demand better from it.
Because if we don’t build ethical frameworks into these systems now, we’ll spend the next decade unravelling their unintended consequences.
Your call to action: Is your cybersecurity vendor using AI? Ask them how they protect your data during training. Ask whether you can audit its decisions. Ask whether its AI can be overridden. If they can’t answer clearly, walk away.



