Deep Learning for Anomaly Detection - Autoencoders and Neural Networks

Research Disclaimer

This tutorial is based on:

- PyTorch v2.0+ (official deep learning framework)
- TensorFlow/Keras v2.15+ (alternative framework examples)
- scikit-learn v1.3+ (preprocessing and metrics)
- Academic research on autoencoder-based anomaly detection (Goodfellow et al., 2016; Kingma & Welling, 2013)
- Production deployment patterns from PyTorch Serve and TensorFlow Serving documentation

All implementation patterns follow documented best practices for neural network-based anomaly detection. Code examples are complete, tested implementations suitable for production adaptation.

Introduction

Looking for classical ML approaches? If you're new to anomaly detection, start with our guide on classical machine learning techniques using scikit-learn. That post covers Isolation Forest, One-Class SVM, and Local Outlier Factor: excellent choices for tabular data and interpretable results. ...
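The post centers on reconstruction-error anomaly detection. As an illustration of that idea (not the post's exact code), the sketch below trains a small PyTorch autoencoder on placeholder tabular data and flags samples whose reconstruction error exceeds a percentile threshold; the layer sizes, epoch count, and 99th-percentile cutoff are assumptions chosen for demonstration.

```python
# Minimal sketch of autoencoder-based anomaly detection in PyTorch.
# Assumes tabular inputs scaled to [0, 1]; model sizes and the
# reconstruction-error threshold are illustrative, not tuned values.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_errors(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error."""
    model.eval()
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

# Train on (mostly) normal data, then flag samples whose error exceeds
# a high percentile of the training-error distribution.
x_train = torch.rand(1024, 20)          # placeholder data
model = Autoencoder(n_features=20)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), x_train)
    loss.backward()
    optimizer.step()

threshold = reconstruction_errors(model, x_train).quantile(0.99)
is_anomaly = reconstruction_errors(model, torch.rand(16, 20)) > threshold
```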

March 28, 2025 · 20 min · Scott

Deep Learning Model Optimization: From Training to Production Deployment

Note: This guide is based on PyTorch quantization documentation (v2.1+), TensorFlow Model Optimization Toolkit documentation, ONNX specification v1.14, and NVIDIA TensorRT best practices. All code examples use production-tested optimization techniques and include performance benchmarks.

Model optimization bridges the gap between research and production. A ResNet-50 trained in FP32 consumes 98MB and runs at 15ms inference on CPU. With INT8 quantization, the same model shrinks to 25MB and runs at 4ms, enabling deployment on edge devices, reducing cloud costs, and improving user experience. ...
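As a rough illustration of the INT8 size reduction the excerpt describes, the hedged sketch below applies PyTorch's post-training dynamic quantization (torch.ao.quantization.quantize_dynamic) to a placeholder linear model and compares serialized sizes. Dynamic quantization only covers Linear/LSTM-style layers; a conv-heavy network such as ResNet-50 would normally use static quantization or quantization-aware training with a calibration set, which this sketch omits.

```python
# Hedged sketch: post-training dynamic INT8 quantization in PyTorch.
# The model below is a placeholder, not ResNet-50; sizes printed here
# are illustrative of the FP32-to-INT8 reduction, not the post's benchmarks.
import os
import torch
import torch.nn as nn

def model_size_mb(model: nn.Module, path: str = "tmp_model.pt") -> float:
    """Serialize the state dict and report its size in MB."""
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

# Placeholder FP32 model standing in for a real network.
fp32_model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 10),
)

# Quantize Linear layers to INT8 weights; activations are quantized
# dynamically at inference time.
int8_model = torch.ao.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

print(f"FP32: {model_size_mb(fp32_model):.1f} MB")
print(f"INT8: {model_size_mb(int8_model):.1f} MB")
```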

February 5, 2025 · 10 min · Scott