I decided that I was spending too much time writing instructions in random Google Docs, text files, and GitHub repos. I wanted a single location where I could publicly document all the things I tinker with:

  • Kubernetes
  • Docker (who doesn't?)
  • Splunk
  • Raspberry Pi
  • Random tech integrations (this list could go on forever)

With all of that in mind, I felt it was time to pull everything into a blog. As I travel through life building these random things, I intend to document them here and link to any files, repos, or websites that I have created myself or used along the way.

Enjoy!

A Deep Dive into Google's Gemini CLI: Local AI Model Execution

Google’s Gemini CLI is a powerful, open-source AI agent designed to bring Google’s Gemini models directly into the terminal. This tool allows developers to interact with advanced AI models through natural language prompts, enabling a wide range of tasks beyond traditional coding. In this article, we’ll delve into the features, capabilities, and applications of Gemini CLI, exploring its potential to transform developer productivity and democratize AI. ...

July 1, 2025 · 3 min · Scott

Deep Dive into Passkey Logins: Security Analysis and Implementation

Password-based authentication remains a cornerstone of authentication and authorization, despite its well-known security challenges. The human factor is often the weakest link in the security chain, and users frequently undermine the very mechanisms meant to protect them. Balancing security and usability requires a shift toward efficient, user-centric mechanisms, protocols, and frameworks. Passkeys have emerged as a key technology to resolve this tension, offering a secure, seamless, and modern authentication experience. ...

June 24, 2025 · 3 min · Scott

AI-Powered Sentinels: A Guide to Vulnerability Scanning with Lambda's Inference API

The digital landscape is in a perpetual state of evolution, and with it, the sophistication of cyber threats. Traditional vulnerability scanning methods, while foundational, are increasingly strained by the sheer volume and complexity of modern software. Artificial Intelligence (AI) is emerging as a powerful ally, offering transformative capabilities in how we identify, analyze, and mitigate security weaknesses. This article delves into the burgeoning field of AI-driven vulnerability scanning, providing a comprehensive overview and, crucially, a hands-on guide to leveraging Lambda’s Inference API for a practical vulnerability analysis task. ...
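
To give a flavor of the hands-on portion, here is a minimal sketch of that kind of workflow, assuming Lambda's OpenAI-compatible chat completions endpoint. The base URL, model name, and the deliberately vulnerable snippet are placeholders of mine rather than values from the article, so check Lambda's Inference API documentation before running it.

```python
# Illustrative sketch: asking an LLM to review a code snippet for vulnerabilities
# via an OpenAI-compatible endpoint. The base URL and model name below are
# assumptions -- confirm them against Lambda's Inference API docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_LAMBDA_API_KEY",             # assumption: load from an env var in practice
    base_url="https://api.lambdalabs.com/v1",  # assumption: check Lambda's docs for the current URL
)

SNIPPET = '''
import sqlite3
def get_user(conn, username):
    # Naive string formatting -- classic SQL injection candidate
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()
'''

response = client.chat.completions.create(
    model="llama3.1-70b-instruct",  # assumption: pick a model Lambda actually serves
    messages=[
        {"role": "system", "content": "You are a security reviewer. List likely vulnerabilities with CWE IDs."},
        {"role": "user", "content": f"Review this code:\n{SNIPPET}"},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```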

June 10, 2025 · 8 min · Shellnet Security

A Developer’s Guide to Anthropic’s MCP: Integrating AI Models with Data Sources

“AI models are only as powerful as the data they access.” Anthropic’s Model Context Protocol (MCP) bridges this gap by standardizing how AI systems connect to structured and unstructured data sources, from cloud storage to enterprise databases. Yet deploying MCP in production requires careful attention to architecture, security, and performance trade-offs. This guide walks through:

  • MCP’s client-server architecture and how it differs from traditional API-based integrations.
  • Step-by-step implementation with Azure Blob Storage (adaptable to PostgreSQL, GitHub, etc.).
  • Security hardening for enterprise deployments (RBAC, encryption, auditing).
  • Performance tuning for large-scale datasets (caching, batching, monitoring).

Scope: This is a technical deep dive that assumes familiarity with REST/GraphQL and Python. A minimal server sketch follows below. ...
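
To make the client-server idea concrete, here is a minimal sketch of an MCP server that exposes Azure Blob Storage as a tool. It assumes the official `mcp` Python SDK (its FastMCP helper) and the `azure-storage-blob` package; the server name and `list_blobs` tool are illustrative choices of mine, not the exact code from the guide.

```python
# Minimal sketch of an MCP server exposing Azure Blob Storage as a tool.
# Assumes the official `mcp` Python SDK and azure-storage-blob; names such as
# "blob-browser" and list_blobs are illustrative, not from the article.
import os

from azure.storage.blob import BlobServiceClient
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("blob-browser")

@mcp.tool()
def list_blobs(container: str) -> list[str]:
    """Return the blob names in an Azure Storage container."""
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    client = service.get_container_client(container)
    return [blob.name for blob in client.list_blobs()]

if __name__ == "__main__":
    # Runs over stdio by default, so an MCP client can launch and call it.
    mcp.run()
```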

May 21, 2025 · 3 min · Scott

Optimizing AI Chatbot Energy Costs: Practical Metrics and Workflow Strategies

“Training GPT-3 consumes ~1,300 MWh—but inference at scale can be worse. Here’s how to measure and slash your LLM’s energy footprint.” AI chatbots are electricity hogs. While training large language models (LLMs) like GPT-4 dominates sustainability discussions, inference—the process of generating responses to user queries—can cumulatively surpass training energy costs when deployed at scale. For DevOps teams and developers, unchecked energy use translates to: ...
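
As a back-of-the-envelope illustration of the kind of metric the article is about, the sketch below estimates per-query and daily inference energy from average GPU power draw and latency. Every number in it is an assumed placeholder, not a measurement from the post.

```python
# Back-of-the-envelope sketch of per-query inference energy and daily cost.
# All numbers are illustrative assumptions, not measurements.
GPU_POWER_WATTS = 300.0    # assumed average GPU draw during inference
LATENCY_SECONDS = 1.2      # assumed mean response latency
QUERIES_PER_DAY = 500_000  # assumed traffic
PUE = 1.3                  # assumed data-center power usage effectiveness
PRICE_PER_KWH = 0.12       # assumed electricity price (USD)

# Energy per query: power (W) * time (s) gives joules; divide by 3.6e6 for kWh.
joules_per_query = GPU_POWER_WATTS * LATENCY_SECONDS * PUE
kwh_per_query = joules_per_query / 3_600_000

daily_kwh = kwh_per_query * QUERIES_PER_DAY
daily_cost = daily_kwh * PRICE_PER_KWH

print(f"{kwh_per_query * 1000:.3f} Wh per query")
print(f"{daily_kwh:.1f} kWh/day (~${daily_cost:.2f}/day)")
```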

April 30, 2025 · 4 min · Scott