AI analyzing security log streams

Using AI to Analyze Log Files for Security Threats

Note: This guide is based on technical research from security logging best practices, machine learning research papers, and analysis of open-source log analysis tools. The techniques described are technically sound and based on documented implementations in production security environments. Code examples use established Python libraries with verified package versions. Readers should adapt these approaches to their specific log formats and security requirements.

Security teams drown in log data. A medium-sized enterprise generates terabytes of logs daily from firewalls, IDS/IPS, endpoints, applications, and cloud services. Traditional log analysis (grep, awk, and manual review) doesn't scale to this volume. ...
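To give a flavor of the approach the post covers, here is a minimal sketch of unsupervised anomaly scoring over raw log lines using scikit-learn's IsolationForest. The sample log lines, the character n-gram features, and the contamination value are illustrative assumptions, not the post's implementation.

```python
# Minimal sketch: flag unusual auth-log lines with an unsupervised detector.
# Feature choices and thresholds here are illustrative, not tuned recommendations.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

log_lines = [
    "Accepted publickey for deploy from 10.0.0.12 port 52514 ssh2",
    "Failed password for root from 203.0.113.7 port 41022 ssh2",
    "Failed password for root from 203.0.113.7 port 41023 ssh2",
    "Accepted publickey for deploy from 10.0.0.12 port 52519 ssh2",
]

# Turn raw log text into numeric features; character n-grams tolerate odd tokens.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
features = vectorizer.fit_transform(log_lines)

# IsolationForest accepts sparse input; contamination is a tunable guess.
detector = IsolationForest(contamination=0.25, random_state=42)
labels = detector.fit_predict(features)  # -1 = anomalous, 1 = normal

for line, label in zip(log_lines, labels):
    print("ANOMALY" if label == -1 else "normal ", line)
```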

November 29, 2025 · 18 min · Scott
AI-powered security automation workflow

AI-Powered Security Automation: Automating Incident Response Workflows

Note: This guide is based on technical research from authoritative security sources, NIST publications, MITRE ATT&CK documentation, and open-source security automation frameworks. The techniques described are technically sound and based on documented production implementations. Readers should adapt these approaches to their specific security requirements and compliance needs.

Security Operations Centers (SOCs) face an overwhelming volume of security alerts. According to the Ponemon Institute’s 2023 Cost of a Data Breach Report, organizations receive an average of 4,484 security alerts per day, with SOC analysts able to investigate only 52% of them. AI-powered automation offers a path to handle this alert fatigue while reducing mean time to respond (MTTR). ...
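As a hedged illustration of the general idea (not the post's framework), here is a toy Python triage step that scores incoming alerts and routes the highest-scoring ones toward automated containment. The Alert fields, weights, thresholds, and routing strings are hypothetical placeholders.

```python
# Toy sketch: score an alert and decide how to route it.
# Field names, weights, and thresholds are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # e.g. "ids", "edr", "cloudtrail"
    severity: int           # 1 (low) .. 10 (critical), as emitted by the SIEM
    mitre_technique: str    # e.g. "T1110" (Brute Force)
    asset_criticality: int  # 1 .. 5 from the asset inventory

def triage_score(alert: Alert) -> float:
    # Weighted blend of alert severity and asset value; weights are illustrative.
    return 0.6 * alert.severity + 0.4 * (2 * alert.asset_criticality)

def handle(alert: Alert) -> str:
    score = triage_score(alert)
    if score >= 8.0:
        return f"auto-contain ({alert.mitre_technique}): open ticket and isolate host"
    if score >= 5.0:
        return "escalate to on-call analyst"
    return "log and suppress"

print(handle(Alert("ids", severity=9, mitre_technique="T1110", asset_criticality=4)))
```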

November 22, 2025 · 16 min · Scott
A balance scale weighing AI speed against human verification

When to Trust (and Verify) AI Output

This is Part 3 of “The Centaur’s Toolkit” series. We’ve covered AI pair programming fundamentals and building AI-assisted security tools. Now we tackle the hardest skill: knowing when to trust what AI tells you.

Last week, I asked an AI to help me understand a library I’d never used. It gave me a confident, detailed explanation of the validateSchema() method, complete with parameter descriptions and example usage. The method doesn’t exist. The AI invented it. The explanation was coherent, the examples looked plausible, and if I hadn’t tried to actually use the code, I might have wasted hours debugging a function call to something that was never real. ...

January 17, 2026 · 10 min · Scott Algatt
A security operations center with AI-assisted threat detection visualization

Building AI-Assisted Security Tools

This is Part 2 of “The Centaur’s Toolkit” series. In Part 1, we covered the four collaboration modes for AI pair programming. Now we apply that framework to higher-stakes territory: security.

You’ve embraced AI pair programming. You’re using Strategist mode for architecture, Editor mode for refinement, and you feel like a genuine Centaur. Then your manager asks you to build a security tool. Suddenly, the stakes feel different. In regular coding, an AI-suggested bug might waste a few hours of debugging. In security, an AI-suggested bug might become a vulnerability that sits in production for months. The cost of being wrong isn’t just time. It’s trust, data, and potentially your users’ safety. ...

January 10, 2026 · 10 min · Scott Algatt
A developer collaborating with AI, represented as a centaur at a computer

AI Pair Programming: Beyond Code Completion

This is Part 1 of “The Centaur’s Toolkit” series, where we explore practical strategies for human-AI collaboration in technical work.

You’ve been using GitHub Copilot for six months. Or maybe it’s Claude, ChatGPT, or Cursor. The tab key has become your best friend. Boilerplate code that used to take twenty minutes now takes two. You feel faster. More productive. Like a coding superhero. But lately, something feels off. You catch yourself accepting suggestions without really reading them. You accept a function completion and realize you’re not entirely sure what it does. Yesterday, you spent an hour debugging code that the AI wrote, code you wouldn’t have written that way yourself. ...

January 3, 2026 · 10 min · Scott Algatt