The best automation is the script you wrote once and forgot about. Until you realize you’d be miserable without it.

That’s the bar I hold every workflow script to: will this save me more time than it took to write? Most “productivity automation” content you find online fails that test. The scripts look clever in a conference talk, end up in your dotfiles, and collect dust. They solve hypothetical problems, not real ones.

This post is different. Every script here is something I actually run, repeatedly, with concrete time savings I can point to. I’ll also show you two scripts that sounded great and got deleted, because that’s part of the story too.


What Are the Best Dev Workflow Automation Scripts to Use?

The most effective dev workflow automation scripts target high-frequency, low-complexity tasks like project bootstrapping, environment cleanup, and credential management. Focus on scripts that automate “git init” sequences, multi-directory “git pull” updates, and Docker volume pruning. To be truly productive, an automation script should save more time over a year than it costs to write and maintain in that year.


The Only Metric That Matters

Before we get into the scripts, let me be direct about the framework I use to evaluate automation.

Time to write + maintain vs. time saved.

That’s it. A script that takes four hours to write and saves you 30 seconds a week will take 480 weeks (over nine years) to break even. A script that takes 45 minutes to write and saves you 10 minutes a day breaks even in under a week.

Most developers I know underestimate two things: how long they’ll actually use a script (usually much longer than expected once it’s embedded in muscle memory), and how long maintenance takes (bugs, OS updates, tool version changes). My rule is to assume 3x the initial write time for maintenance over a year and see if the numbers still work.

If they do, it’s worth building. If they don’t, it’s worth asking whether you can solve the problem a different way — or just accept that some manual steps are fine.
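The arithmetic is simple enough to script. Here’s a throwaway sketch that folds in the 3x maintenance rule (the function name and the 4x total-cost multiplier are my own shorthand, not anything standard):

```shell
# Back-of-envelope break-even check. Total cost = write time + 3x write time
# for a year of maintenance, i.e. 4x the initial write time.
break_even_weeks() {
    local write_min=$1 saved_min_per_week=$2
    local total_cost=$(( write_min * 4 ))
    # Ceiling division: round partial weeks up
    echo $(( (total_cost + saved_min_per_week - 1) / saved_min_per_week ))
}

break_even_weeks 240 1    # 4 hours to write, ~1 min/week saved: 960 weeks. Skip it.
break_even_weeks 45 50    # 45 min to write, ~50 min/week saved: 4 weeks. Build it.
```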


Project Setup: The Blank Slate Problem

Starting a new project is the most consistent source of time waste in my workflow. Every project involves the same 10-15 steps: initialize git, create the right directory structure, set up a virtual environment, add .gitignore, install pre-commit hooks, create a basic README, and half a dozen other small things. Individually each takes two minutes. Together, it’s 20-30 minutes of context-free busy work before you write a single meaningful line of code.

I automated this completely about two years ago. Here’s the script I actually use:

#!/usr/bin/env bash
# newproject.sh - Bootstrap a new Python project
# Dependencies: git, python3 (3.11+), pre-commit
# Usage: ./newproject.sh <project-name> [--no-venv]

set -euo pipefail

PROJECT_NAME="${1:?Usage: newproject.sh <project-name> [--no-venv]}"
NO_VENV="${2:-}"
PROJECT_DIR="$(pwd)/${PROJECT_NAME}"

echo "Creating project: ${PROJECT_NAME}"

# Create directory structure
mkdir -p "${PROJECT_DIR}"/{src,tests,docs,scripts}
cd "${PROJECT_DIR}"

# Initialize git
git init
git checkout -b main

# Python virtual environment
if [[ "${NO_VENV}" != "--no-venv" ]]; then
    python3 -m venv .venv
    source .venv/bin/activate
    pip install --upgrade pip --quiet
    echo ".venv" >> .gitignore
fi

# Standard .gitignore additions
cat >> .gitignore << 'EOF'
__pycache__/
*.py[cod]
*.egg-info/
dist/
build/
.env
.env.local
*.log
.DS_Store
EOF

# Pre-commit config
cat > .pre-commit-config.yaml << 'EOF'
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
        args: ['--maxkb=500']
      - id: detect-private-key

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
EOF

# Install pre-commit hooks
pre-commit install

# Minimal README
cat > README.md << EOF
# ${PROJECT_NAME}

## Setup

\`\`\`bash
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
pre-commit install
\`\`\`

## Running Tests

\`\`\`bash
pytest
\`\`\`
EOF

# Initial commit
git add -A
git commit -m "chore: initial project scaffold"

echo ""
echo "Project '${PROJECT_NAME}' ready at ${PROJECT_DIR}"
echo "Virtual env: source ${PROJECT_DIR}/.venv/bin/activate"

This script runs in about 15 seconds. It handles git init, directory structure, pre-commit hooks, .gitignore, and a starter README. I installed it as a shell alias pointing to the script path, so I type newproject my-api and walk away.
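The alias wiring is one line in your shell rc file (the path below is illustrative; point it at wherever the script actually lives):

```shell
# In ~/.bashrc or ~/.zshrc — path is illustrative
alias newproject="$HOME/scripts/newproject.sh"

# Then from any directory:
#   newproject my-api
```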

Time saved per use: 20-25 minutes. Time to write and debug: About 90 minutes. Break-even: Roughly 4-5 projects.

I’ve used it for 30+ projects. This one paid off.


Git Workflow Scripts

Git is where I found the most low-effort, high-return automation opportunities. The work is repetitive, the rules are consistent, and the cost of screwing it up is real.

Branch Naming Enforcement

My team uses a branch naming convention: {type}/{ticket-id}-{short-description}. Something like feat/PROJ-123-add-user-auth or fix/PROJ-456-null-pointer-login. The convention is documented. The convention gets ignored when people are moving fast.

This prepare-commit-msg hook enforces it at the source:

#!/usr/bin/env bash
# .git/hooks/prepare-commit-msg
# Enforces branch naming convention at commit time

BRANCH=$(git symbolic-ref --short HEAD 2>/dev/null)
VALID_PATTERN="^(feat|fix|chore|docs|refactor|test|hotfix)\/.+"

if [[ ! "${BRANCH}" =~ ${VALID_PATTERN} ]]; then
    echo ""
    echo "ERROR: Branch name '${BRANCH}' does not match convention."
    echo "Required pattern: {type}/{description}"
    echo "Types: feat, fix, chore, docs, refactor, test, hotfix"
    echo ""
    echo "Example: feat/PROJ-123-add-user-auth"
    echo ""
    exit 1
fi

Save this to .git/hooks/prepare-commit-msg and chmod +x it. Or better, manage it through pre-commit so it’s versioned with the repo.
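A sketch of that versioned route, using pre-commit’s local repo type. The script path scripts/check-branch-name.sh is an assumption — point entry at wherever you keep the hook above — and this append assumes your config already has a top-level repos: key:

```shell
# Append a local-repo hook entry to .pre-commit-config.yaml (sketch; adjust the path)
cat >> .pre-commit-config.yaml << 'EOF'
  - repo: local
    hooks:
      - id: branch-name-check
        name: branch naming convention
        entry: scripts/check-branch-name.sh
        language: script
        stages: [commit-msg]
EOF
```

Install it with pre-commit install --hook-type commit-msg so the stage actually fires.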

The first time someone gets this error, they’re mildly annoyed. After that, they name branches correctly before they start working. That’s the intended behavior.

Commit Message Linting

Inconsistent commit messages are one of those problems that seems minor until you’re trying to generate a changelog or trace a production issue. I added commitlint to projects where this matters:

# .pre-commit-config.yaml addition
  - repo: https://github.com/alessandrojcm/commitlint-pre-commit-hook
    rev: v9.18.0
    hooks:
      - id: commitlint
        stages: [commit-msg]
        additional_dependencies: ['@commitlint/config-conventional']

With a matching commitlint.config.js:

module.exports = {
  extends: ['@commitlint/config-conventional'],
  rules: {
    'type-enum': [
      2,
      'always',
      ['feat', 'fix', 'chore', 'docs', 'refactor', 'test', 'perf', 'ci']
    ],
    'subject-max-length': [2, 'always', 72],
  },
};

This rejects commits like fixed stuff and WIP while accepting fix(auth): resolve null pointer in login handler. Six months of enforcing this and my git logs are actually readable.

Pre-Push Test Runner

Few feedback loops in development are more painful than pushing code that breaks tests and getting a CI failure 8 minutes later. A pre-push hook that runs your test suite locally closes that gap:

#!/usr/bin/env bash
# .git/hooks/pre-push
# Runs tests before pushing to remote

set -euo pipefail

echo "Running pre-push checks..."

# Only run the full suite when pushing protected branches (remove the guard to always run)
PROTECTED_BRANCHES="main master"
CURRENT_BRANCH=$(git symbolic-ref --short HEAD)

for branch in ${PROTECTED_BRANCHES}; do
    if [[ "${CURRENT_BRANCH}" == "${branch}" ]]; then
        echo "Pushing directly to ${branch} — running full test suite..."
        python -m pytest tests/ -x --tb=short -q
        echo "Tests passed."
    fi
done

The -x flag stops on first failure, -q keeps output minimal. For branches other than main/master, this hook does nothing. You can push work-in-progress freely; it only enforces quality when pushing to protected branches.
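If you’d rather version these hooks with the repo instead of copying them into .git/hooks by hand, git’s core.hooksPath setting points git at a tracked directory. A sketch (the demo runs in a throwaway repo; in practice you’d run the last two commands at your project root):

```shell
# Keep hooks in a tracked directory and point git at it.
demo=$(mktemp -d) && cd "$demo" && git init -q   # throwaway repo for the demo
mkdir -p .githooks                               # put pre-push etc. here, chmod +x each
git config core.hooksPath .githooks
```

Everyone who clones the repo runs git config core.hooksPath .githooks once (or you bake it into a setup script), and the hooks stay in version control.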


Local Dev Environment Management

The “is everything running?” problem is real. You sit down, open a project, try to run it, and get a connection error. Is the database up? Did you forget to start the API? Is your .env file missing a variable someone added last week?

This check script runs in about 3 seconds and tells me exactly what’s wrong:

#!/usr/bin/env bash
# devcheck.sh - Verify local dev environment is ready
# Customize the SERVICES and ENV_VARS arrays for your project

set -euo pipefail

PASS="✓"
FAIL="✗"
WARN="!"
ALL_GOOD=true

# Services to check (format: "name:host:port")
SERVICES=(
    "PostgreSQL:localhost:5432"
    "Redis:localhost:6379"
    "API Server:localhost:8000"
)

# Required environment variables
ENV_VARS=(
    "DATABASE_URL"
    "REDIS_URL"
    "SECRET_KEY"
    "API_BASE_URL"
)

echo "=== Dev Environment Check ==="
echo ""

# Check services
echo "Services:"
for service_spec in "${SERVICES[@]}"; do
    IFS=':' read -r name host port <<< "${service_spec}"
    if nc -z "${host}" "${port}" 2>/dev/null; then
        echo "  ${PASS} ${name} (${host}:${port})"
    else
        echo "  ${FAIL} ${name} (${host}:${port}) — NOT RUNNING"
        ALL_GOOD=false
    fi
done

echo ""

# Check environment variables
echo "Environment:"
for var in "${ENV_VARS[@]}"; do
    if [[ -n "${!var:-}" ]]; then
        # Show first 8 chars only for security
        echo "  ${PASS} ${var}=${!var:0:8}..."
    else
        echo "  ${FAIL} ${var} — NOT SET"
        ALL_GOOD=false
    fi
done

echo ""

# Check for .env.example drift
if [[ -f ".env.example" && -f ".env" ]]; then
    MISSING_VARS=()
    while IFS= read -r line; do
        if [[ "${line}" =~ ^[A-Z_]+=.* ]]; then
            var_name="${line%%=*}"
            if [[ -z "${!var_name:-}" ]]; then
                MISSING_VARS+=("${var_name}")
            fi
        fi
    done < ".env.example"

    if [[ ${#MISSING_VARS[@]} -gt 0 ]]; then
        echo "  ${WARN} Variables in .env.example but not in your env:"
        for v in "${MISSING_VARS[@]}"; do
            echo "    - ${v}"
        done
        ALL_GOOD=false
    else
        echo "  ${PASS} .env matches .env.example"
    fi
fi

echo ""

if [[ "${ALL_GOOD}" == true ]]; then
    echo "All checks passed. You're good to go."
else
    echo "Some checks failed. Fix the issues above before running."
    exit 1
fi

The .env.example drift check is the most useful part. When a teammate adds a new required variable, they update .env.example. This script catches anyone who hasn’t updated their local .env to match. It’s caught that problem at least a dozen times in the last year.

I run this script via a make check target at the start of every work session on collaborative projects.
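The make target itself is one line of glue. A sketch that appends it (note the recipe line must start with a literal tab, per Makefile rules):

```shell
cat >> Makefile << 'EOF'
.PHONY: check
check:
	./devcheck.sh
EOF
```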


Release and Deployment Helpers

Release management involves the most manual, error-prone steps in the entire development cycle. Version bumping, changelog updates, tagging, deployment verification — each step is simple in isolation and catastrophic when skipped.

Version Bump Script

This handles the mechanical parts of a semver release:

#!/usr/bin/env bash
# bump-version.sh - Semantic version bump with changelog update
# Usage: ./bump-version.sh [major|minor|patch]
# Requires: git, a VERSION file in project root

set -euo pipefail

BUMP_TYPE="${1:?Usage: bump-version.sh [major|minor|patch]}"
VERSION_FILE="VERSION"

if [[ ! -f "${VERSION_FILE}" ]]; then
    echo "No VERSION file found. Create one with your current version (e.g., '1.0.0')."
    exit 1
fi

CURRENT=$(cat "${VERSION_FILE}")
IFS='.' read -r MAJOR MINOR PATCH <<< "${CURRENT}"

case "${BUMP_TYPE}" in
    major)
        MAJOR=$((MAJOR + 1))
        MINOR=0
        PATCH=0
        ;;
    minor)
        MINOR=$((MINOR + 1))
        PATCH=0
        ;;
    patch)
        PATCH=$((PATCH + 1))
        ;;
    *)
        echo "Invalid bump type. Use: major, minor, or patch"
        exit 1
        ;;
esac

NEW_VERSION="${MAJOR}.${MINOR}.${PATCH}"

echo "Bumping: ${CURRENT}${NEW_VERSION}"

# Update VERSION file
echo "${NEW_VERSION}" > "${VERSION_FILE}"

# Prepend changelog entry
CHANGELOG_ENTRY="## [${NEW_VERSION}] - $(date +%Y-%m-%d)\n\n### Changes\n- (add release notes here)\n\n"
if [[ -f "CHANGELOG.md" ]]; then
    EXISTING=$(cat CHANGELOG.md)
    printf "%s\n%s" "${CHANGELOG_ENTRY}" "${EXISTING}" > CHANGELOG.md
    echo "CHANGELOG.md updated. Open it to add release notes."
else
    printf "# Changelog\n\n%s" "${CHANGELOG_ENTRY}" > CHANGELOG.md
fi

# Stage and commit
git add "${VERSION_FILE}" CHANGELOG.md
git commit -m "chore(release): bump version to ${NEW_VERSION}"
git tag -a "v${NEW_VERSION}" -m "Release v${NEW_VERSION}"

echo ""
echo "Version bumped to ${NEW_VERSION}"
echo "Tag created: v${NEW_VERSION}"
echo "Push with: git push && git push --tags"

What’s Deployed Right Now?

This script answers the “what version is actually running?” question without logging into anything:

#!/usr/bin/env bash
# deployment-status.sh - Check what's deployed across environments
# Customize ENVIRONMENTS to match your setup

ENVIRONMENTS=(
    "staging:https://staging.your-api.com"
    "production:https://your-api.com"
)

echo "=== Deployment Status ==="
echo ""

for env_spec in "${ENVIRONMENTS[@]}"; do
    IFS=':' read -r env_name base_url <<< "${env_spec}"
    # Assumes a /health or /version endpoint that returns JSON with a "version" field
    VERSION=$(curl -sf "${base_url}/health" 2>/dev/null | python3 -c "import sys,json; print(json.load(sys.stdin).get('version','unknown'))" 2>/dev/null || echo "unreachable")
    echo "  ${env_name}: ${VERSION}"
done

echo ""
echo "Local HEAD: $(git describe --tags --always 2>/dev/null || git rev-parse --short HEAD)"

Small script, enormous peace of mind when something’s broken at 11pm.


The Scripts That Got Deleted

Authenticity requires including the failures.

Auto-staging script. I wrote a script that would automatically stage files matching certain patterns before a commit, based on the branch name. The idea was to save the git add step. In practice, it staged things I didn’t intend to commit, twice. The second time it quietly included a credentials file in a test directory I hadn’t noticed. The credentials weren’t sensitive, but the pattern terrified me. Deleted immediately. Manual staging is a safety checkpoint. Don’t automate it.

Slack notification script. I built a script that would ping our team Slack channel whenever I pushed to main. It seemed useful for coordination. What it actually did was train the team to ignore all Slack notifications because “Scott’s just pushing again.” Notifications are valuable when they’re rare. Automating them made them noise. Deleted after two weeks.

The lesson: automation that removes a safety checkpoint or creates notification fatigue is worse than nothing.


Scripts Running Right Now

Here’s my actual daily driver list, with honest time savings:

Script                  Trigger         Time saved per use              Uses per week
newproject.sh           New project     20 min                          0-2
Pre-commit hooks        Every commit    5 min (catching bugs earlier)   10-20
devcheck.sh             Session start   3 min                           5-10
bump-version.sh         Release day     15 min                          0-1
Pre-push test runner    Push to main    8 min (CI feedback loop)        2-5

The pre-commit hooks row is the interesting one. They don’t save time on each commit. They save time by catching issues before they become PRs or CI failures, which would cost 15-30 minutes each to resolve. The return is invisible when they work and very visible when you imagine them not existing.


A Decision Framework

Before you automate anything, answer these questions:

1. How often will I run this? Daily scripts pay off fast. Monthly scripts need to save more time per use to be worth maintaining.

2. Is this step inherently risky? Some manual steps should stay manual. Anything that permanently modifies data, touches credentials, or deploys to production benefits from human confirmation.

3. Will this work on the next machine I set up? If the script depends on 12 locally installed tools you haven’t documented, it’s a maintenance liability. Scripts should document their dependencies or install them.

4. Can I test it safely? Write a --dry-run flag for anything that modifies files or makes network calls. You will want it eventually.

5. What happens when it fails? The script will fail at the worst time. What does failure look like? Can you recover quickly? set -euo pipefail at the top of every bash script is the minimum safety net.
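Point 4 is worth making concrete. The minimal --dry-run pattern I reach for routes every side effect through one wrapper:

```shell
# Minimal --dry-run pattern: all side effects go through run()
DRY_RUN=false
if [[ "${1:-}" == "--dry-run" ]]; then
    DRY_RUN=true
fi

run() {
    if [[ "${DRY_RUN}" == true ]]; then
        echo "[dry-run] $*"
    else
        "$@"
    fi
}

run echo "this actually executes"
DRY_RUN=true
run rm -rf ./build    # printed, not executed: [dry-run] rm -rf ./build
```

Because every destructive command goes through run(), adding dry-run support to a new script is a copy-paste, not a redesign.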


Connecting to the Broader Workflow

If you want to see how these scripts fit into cross-platform environments and shell scripting patterns, I wrote about that in detail in Cross-Platform Shell Scripting: Writing Scripts That Work Everywhere. The principles there complement everything here: POSIX compliance, portability, and defensive scripting patterns.

For CI/CD integration, the pre-commit hooks you install locally should mirror what CI checks. If CI lints with ruff but your local hooks use flake8, you’ll have a mismatch that causes friction. Consistency between local and remote checks is what makes the whole system feel reliable rather than adversarial.
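One way to keep local and CI checks in lock-step is to have CI run pre-commit itself. A sketch assuming GitHub Actions (the workflow filename is my choice; pre-commit/action is the project’s official action):

```shell
# Run the same pre-commit hooks in CI that developers run locally
mkdir -p .github/workflows
cat > .github/workflows/lint.yml << 'EOF'
name: lint
on: [push, pull_request]
jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
      - uses: pre-commit/action@v3.0.1
EOF
```

Now the hook versions pinned in .pre-commit-config.yaml are the single source of truth for both environments.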


What scripts are saving your time right now? I’d genuinely like to know if there’s something you’ve built that I haven’t thought of. Find me on X or LinkedIn.