Claude Code Hooks - Turning AI Suggestions Into Enforced Workflow

AI coding tools are great at one thing: moving fast. They’re less great at stopping before they do something dumb.

That gap matters a lot more when you’re a solo developer. There’s no code review safety net, no teammate to sanity-check a diff, and no patience for fixing the same formatting or test failures for the tenth time. If AI is going to help me ship side projects, it has to follow the same rules I do.

Claude Code hooks are the first feature I’ve used that actually enforces those rules.

Not by being smarter — but by being stricter.


The Problem Hooks Actually Solve

The issue with AI-generated code isn’t correctness. Most of the time, the code works. The problem is that it doesn’t understand which constraints are absolute.

It doesn’t know that some directories are hands-off. It doesn’t know that generated files are radioactive. It doesn’t know that a failing test, even by one assertion, is still a hard stop.

So you end up doing the same cleanup loop over and over: reformat, revert, rerun, retry. That loop is fine when you’re experimenting. It’s a tax when you’re trying to ship.

Hooks exist to kill that tax. They turn repo rules into something executable instead of something you hope the AI remembers.


What Changes Once Hooks Are In Place

Once hooks are wired in, Claude no longer decides when it’s finished. Your workflow does.

That’s the entire mental shift.

Before hooks, the AI produces output and hands it to you for judgment. After hooks, the output goes through the same gauntlet your own code would: formatters, linters, tests, and guardrails. If something fails, the process stops immediately and the failure goes back to Claude, not you.

That one change eliminates a surprising amount of friction. You stop reacting to bad output and start trusting the loop.

This is where Claude stops feeling like autocomplete and starts behaving like a junior dev who actually reads the contributing guide.


How I’m Using Hooks in Real Side Projects

I don’t use hooks everywhere. I use them where mistakes are expensive.

Before Claude touches the repo, I enforce boundaries. If it tries to edit files outside approved directories or touch generated code, the run fails instantly. There’s no review step because there doesn’t need to be one — the rule is absolute.
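A pre-edit boundary check can be very small. Here’s a minimal sketch, assuming the current hook interface where a PreToolUse hook receives the tool call as JSON on stdin with the target path under tool_input.file_path; the function name and approved directories are illustrative:

```shell
# check_edit_scope: read one tool-call JSON object on stdin and block any
# edit that lands outside the approved directories.
check_edit_scope() {
    local file_path
    # Crude extraction of "file_path" — fine for a sketch; use jq in practice.
    file_path=$(sed -n 's/.*"file_path"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
    case "$file_path" in
        *.generated.swift)
            echo "Blocked edit to generated file: $file_path" >&2
            return 2 ;;                  # exit code 2 = hard stop for Claude
        Sources/*|Tests/*)
            return 0 ;;                  # in scope: allow the edit
        *)
            echo "Blocked edit outside Sources/ or Tests/: $file_path" >&2
            return 2 ;;
    esac
}
```

The real hook script just ends with a bare `check_edit_scope` call, so the function’s return status becomes the hook’s exit code.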

After Claude finishes making changes, I run the same checks I’d run myself. Formatting is non-negotiable. Linters get a vote. Targeted tests run automatically. If any of that fails, Claude gets the failure output and has to respond to it.

That detail matters. The AI should experience the consequences of breaking rules, not the developer. Over time, the diffs get cleaner and the retries get smarter. You can feel the system tightening up.

Very …And Justice for All energy. Precise. Unforgiving. No slop.


Error Hooks Are the Underrated Part

This is where hooks stop being guardrails and start feeling like leverage.

Without error hooks, failures tend to spiral. A test fails, Claude guesses why, retries, and often makes things worse. You end up debugging noise instead of intent.

What makes error hooks powerful in Claude Code is a small but important detail: exit code 2.

If a hook exits with code 2, Claude treats it as a blocking failure and ingests the hook’s stderr directly. There’s no silent retry. No guessing. The AI is forced to respond to the failure context you give it.

That’s the magic.

In practice, this lets you turn failures into structured feedback instead of chaos. For example, when a formatter or test fails, I’ll capture just the relevant output, write it to stderr, and exit with 2. Claude sees exactly what broke and why, and the next step is almost always corrective instead of exploratory.
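In script form, that pattern is tiny. Here’s a sketch of the capture-trim-report shape — the wrapper name and the commands in the usage comment are illustrative:

```shell
# block_on_failure: run a check command; if it fails, keep only the output
# lines matching "pattern", send them to stderr, and return 2 so the hook
# becomes a blocking failure Claude has to respond to.
block_on_failure() {
    local pattern=$1; shift
    local out
    if ! out=$("$@" 2>&1); then
        printf '%s\n' "$out" | grep -i -- "$pattern" >&2
        return 2
    fi
}

# In a real hook script (commands illustrative):
#   block_on_failure "error"  swiftlint lint --quiet
#   block_on_failure "failed" swift test
```

Everything noisy gets dropped; only the lines Claude can act on survive the trip back.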

This matters even more in iOS projects, where failure output can be noisy. One of my side projects runs a SwiftLint hook after edits. If SwiftLint flags a violation, the hook trims the output down to the offending rule and line number, exits with 2, and hands that straight back to Claude. Same thing with xcodebuild output piped through xcbeautify — the hook filters out the junk and surfaces only the actionable error.

The difference is night and day. Instead of the AI flailing around, it responds like a teammate who just read the build log.


Why My Hook Config Is Intentionally Boring

This is the part where people usually expect something clever. It isn’t.

Here’s a simplified version of what one of my setups actually looks like — not the full config, just enough to make the idea concrete:

{
   "hooks":{
      "pre_tool":[
         {
            "command":"scripts/verify_edit_scope.sh"
         }
      ],
      "post_tool":[
         {
            "command":"scripts/format_code.sh"
         }
      ]
   }
}

That’s it. No special logic. No Claude-specific behavior.

The pre-tool hook is just a boundary check. It exits non-zero if Claude tries to touch files outside approved directories or edit generated code — the same absolute rule from earlier, now executable.

The post-tool hook runs the same formatter I already trust in CI. If formatting fails, the run stops and Claude gets the failure output.

Everything else flows from that.

If you want something a little closer to what you’d actually drop into settings.json, here’s a more copy‑pasteable version that reflects the current Claude Code hook structure and naming:

{
   "hooks":{
      "PostToolUse":[
         {
            "matcher":"Write|Edit|MultiEdit",
            "hooks":[
               {
                  "type":"command",
                  "command":"scripts/format_code.sh"
               }
            ]
         }
      ]
   }
}

This version does the same thing as the earlier example, but with a few important differences. It uses the canonical PostToolUse event casing from the current documentation, nests the command under a hooks array with an explicit type, and scopes the hook with a matcher so it only runs after write‑style operations, instead of paying the performance tax on every trivial tool invocation.

That matcher detail ends up mattering more than you’d expect once your workflow gets busy.

My hook setup isn’t clever. It doesn’t try to be. I like to keep things simple until I absolutely can’t.

It calls the same scripts my CI runs. The same formatters, the same test commands, the same checks I already trust. There’s no hook-specific logic and no special cases hiding in config files.

If a hook is slow, flaky, or annoying, it gets removed. Solo development has zero tolerance for friction that doesn’t pay for itself.

Hooks aren’t there to impress you. They’re there to quietly keep the repo in line while you focus on building the actual product.

The real secret sauce isn’t the script itself — it’s the exit code.

If my verify_edit_scope script exits with 2, Claude treats it as a hard boundary violation. It doesn’t ask for permission. It doesn’t negotiate. It just stops and fixes its approach.

That’s the difference between a suggestion and a hook.


Pro Tip for iOS Devs

If you’re working in Swift projects, two small hooks go a long way. These aren’t meant to be clever — they’re meant to be decisive.

1. Boundary enforcement (hard stop on invalid edits)

This script prevents Claude from touching anything outside Sources/ or Tests/, or modifying generated Swift files. If it exits with 2, Claude treats it as a blocking boundary violation.

#!/usr/bin/env bash
set -euo pipefail

# Check every file Claude has touched (staged or not) against the allowed scope.
while IFS= read -r file; do
    if [[ ! $file =~ ^(Sources|Tests)/ ]] || [[ $file == *.generated.swift ]]; then
        # stderr + exit 2 = blocking violation Claude must respond to
        echo "Invalid edit detected: $file" >&2
        exit 2
    fi
done < <(git diff --name-only HEAD)

2. SwiftLint enforcement (clean, actionable feedback)

This hook runs SwiftLint and trims the output down to just the offending rule and line. If violations exist, it exits with 2 so Claude is forced to fix them before continuing.

#!/usr/bin/env bash
set -euo pipefail

# --quiet suppresses progress noise, so only the violations remain:
# one "file:line:col: severity: message (rule)" line each.
output=$(swiftlint lint --quiet || true)

if [[ -n "$output" ]]; then
    echo "$output" >&2   # stderr + exit 2 = Claude must fix before continuing
    exit 2
fi

Neither script is complicated. The leverage comes from the exit code. Once Claude learns that 2 means stop and fix, these hooks stop being guardrails and start behaving like real workflow enforcement.
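Wiring both of those scripts in looks something like this in settings.json — a sketch assuming the nested structure and event names from the current Claude Code docs, with illustrative script paths:

```json
{
   "hooks":{
      "PreToolUse":[
         {
            "matcher":"Write|Edit|MultiEdit",
            "hooks":[
               { "type":"command", "command":"scripts/verify_edit_scope.sh" }
            ]
         }
      ],
      "PostToolUse":[
         {
            "matcher":"Write|Edit|MultiEdit",
            "hooks":[
               { "type":"command", "command":"scripts/swiftlint_check.sh" }
            ]
         }
      ]
   }
}
```

Same matcher on both events, so neither hook fires on read-only tool calls.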


What Hooks Don’t Do

Hooks won’t fix bad architecture. They won’t make flaky tests reliable. They won’t replace CI or judgment.

What they do is shift failure left.

Instead of finding out 20 minutes later that CI is unhappy, you find out immediately — while the context is still fresh and the fix is obvious. That alone makes them worth the effort.

My rule is simple: if CI wouldn’t block it, my hooks probably shouldn’t either. Hooks are guardrails, not walls.


Why This Matters When You’re Building Alone

When you’re the only developer, consistency is everything. Hooks preserve standards across weeks, months, and half-finished ideas you come back to later. They remember your rules when you’re tired, distracted, or moving too fast.

Claude still writes code. You still make decisions. Hooks keep the system honest. Not every tool needs to shred.

Sometimes it just needs to stay in time.


Bonus: More Real-World iOS Survival Stories

If you’re into battle-tested workflows, shipping side projects, and tooling that actually earns its place, you’ll probably enjoy the rest of my writing.

You can find more here:

If you’ve wired Claude hooks into your own setup — or found a way to break them — I’d love to hear about it. Let’s keep building things that hold up under pressure and don’t waste our time.