Contracting

Work with me

Capacity and approval

I work full-time at PwC. Anything formal (paid work, a named engagement, something that looks like a side contract) has to clear my employer's approval process. I'm not going to skip that step, so build it into your timeline.

Outside the day job I have on the order of ten hours a week. I want to put most of that toward advisory work: read what you're building, tell you where it's going to break, suggest what to fix first, point you at patterns I've already written up here. That's the fit I'm looking for.

Deeper embeds (pair-build, install-the-system) are still possible when the scope fits that envelope and the paperwork is sorted. If you need someone to own implementation on a crunch deadline during off-hours, I'm not that person.

What I actually do

I build the discipline around AI agents — the rules, review protocols, orchestration patterns, and grounding systems that make them reliable enough to put in front of real users. Most teams have agents. Most teams don't have the ops layer that keeps agents from embarrassing them in production. That's what I build. I've done it publicly on this blog and internally across multiple production codebases.

The post What I actually do goes into more detail if you want the long version. The short version: rules-as-memory, parallel agent orchestration via git worktrees, adversarial review before every push, and grounding for production agents that talk to users.

Problems I typically solve

Our agents hallucinate in production. This is almost always a grounding problem. The agent has no verification step before it presents claims to users. I build the HMAC-fingerprinted response layer and verification tooling that catches fabricated claims before they surface.
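The verification idea can be sketched in a few lines. This is illustrative only, not the actual response layer: the key handling, function names, and wire format (`SECRET_KEY`, `fingerprint_claim`, `verify_claim`, the `|` separator) are assumptions for the sketch. The shape is the point: claims get an HMAC tag server-side, and nothing without a valid tag reaches the user.

```python
import hmac
import hashlib

# Illustrative sketch, not the production layer: the key, names, and
# claim|tag wire format are made up for this example.
SECRET_KEY = b"server-side-secret"  # held server-side, never shown to the model


def fingerprint_claim(claim: str) -> str:
    """Tag a claim that came from a verified source (a database row, an API response)."""
    tag = hmac.new(SECRET_KEY, claim.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{claim}|{tag}"


def verify_claim(signed: str) -> bool:
    """Reject anything whose tag doesn't check out -- e.g. a claim the model invented."""
    claim, _, tag = signed.rpartition("|")
    expected = hmac.new(SECRET_KEY, claim.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The design choice that matters: the model can only echo fingerprinted claims, so a fabricated claim fails verification before it surfaces, regardless of how plausible it sounds.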

Our AI initiative is stuck in demo. Usually missing three things: persistent memory between sessions (rules-as-memory), a way to run agents in parallel without branch collision (worktrees), and a review step that catches failures before users do. I've wired up all three. Multiple times.
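The worktree piece is the simplest to show. A minimal sketch of the branch-per-agent layout, expressed as the git commands an orchestration script would issue; the `agent/` branch prefix and `.worktrees/` directory are illustrative names, not the actual layout.

```python
# Illustrative sketch: one worktree and one branch per agent, so parallel
# agents each get an isolated checkout and never collide on a shared branch.
# The naming scheme below is made up for the example.
def worktree_commands(repo_root: str, agent_ids: list[str]) -> list[str]:
    cmds = []
    for agent in agent_ids:
        branch = f"agent/{agent}"
        path = f"{repo_root}/.worktrees/{agent}"
        # `git worktree add -b <branch> <path>` creates the branch and
        # checks it out in its own directory in one step.
        cmds.append(f"git worktree add -b {branch} {path}")
    return cmds
```

Each agent then commits on its own branch; merging back goes through the review step, so parallelism never means unreviewed code landing on main.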

Our agents keep making the same mistakes over and over. No institutional memory. Every session starts cold. I build the rules corpus — one rule per failure, with the correct pattern baked in, loaded into context on every session. Ninety-odd rules in this codebase right now. Each one represents a mistake we already paid for.
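Mechanically, rules-as-memory is just a corpus loaded into context at session start. A hypothetical loader, assuming one markdown file per rule in a directory like `.cursor/rules/`; the function name and separator are illustrative.

```python
from pathlib import Path

# Hypothetical sketch of a rules-as-memory loader: one file per past
# failure, concatenated and prepended to every session's context.
def load_rules(rules_dir: str) -> str:
    parts = []
    for rule_file in sorted(Path(rules_dir).glob("*.md")):
        parts.append(rule_file.read_text(encoding="utf-8").strip())
    # Every session starts with the full corpus, so no lesson starts cold.
    return "\n\n---\n\n".join(parts)
```

The discipline is in the writing, not the loading: each file captures one failure and the correct pattern, so the corpus only grows when a mistake has already been paid for.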

We can't review AI-generated code fast enough. The review bottleneck is real. I build adversarial review pipelines — structured prompts where one persona attacks the diff (security holes, scope creep, accidental reversions) and one defends it. It's not a replacement for human review. It's the layer that catches the obvious stuff so human review can focus on the hard stuff.
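The structure is roughly this. A minimal sketch of the attack/defend shape with a gate at the end; the persona wording and the pass rule are assumptions for the example, not the documented framework.

```python
# Illustrative sketch of the adversarial-review shape: one persona attacks
# the diff, one defends it, and a gate decides whether the push proceeds.
# Prompt wording and the gate rule are made up for this example.
ATTACKER_PROMPT = (
    "You are a hostile reviewer. Find security holes, scope creep, and "
    "accidental reversions in this diff:\n{diff}"
)
DEFENDER_PROMPT = (
    "You wrote this diff. Answer each objection below, conceding any that "
    "are valid:\n{objections}"
)


def review_verdict(conceded_objections: list[str]) -> str:
    # Simplistic gate: any objection the defender concedes blocks the push
    # and routes the diff to a human.
    return "block" if conceded_objections else "pass"
```

The split matters: the attacker is rewarded for finding problems and the defender for resolving them, which surfaces the obvious failures cheaply and leaves human attention for the hard calls.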

We want to adopt agentic engineering but don't know where to start. I set up the worktree structure, write the first batch of rules, build the review pipeline, and train the team. Leave-behind docs, scripts, and a rules corpus you can build on.

How to engage

Advisory (hourly or retainer) — This is the main thing. I review what you're building, tell you where it's going to break, and explain how to fix it. Best for teams that already have something in motion and want a second opinion before it bites them. Fits the weekly time box and the approval path more cleanly than a big build-out.

Pair-build (weeks) — We ship something real together and I leave behind rules, docs, and scripts. Only when the scope fits my availability and employer sign-off. Not a default; ask if you think you need it.

Install-the-system (fixed scope) — Worktrees, rules corpus, adversarial review pipeline, CI hooks, trained team. Same constraints: narrow enough to fit the hours I have, and approved through my employer.

Proof

Everything I've described is documented and public. This blog is built on the same system I'd install for you.

  • Posts — including Rules that make quality sites easy and What I actually do
  • The rules corpus (~90 rules) is in .cursor/rules/ — each rule has an Origin section explaining the failure that triggered it
  • The skills library covers adversarial review, the blog pipeline, finish-work-merge-ci, and others — under .cursor/skills/
  • The adversarial review framework (personas, debate protocol, evaluation rubric) is documented under docs/adversarial-review/

The multi-version post format (every post has a human version and an AI-generated contrast) is itself an example of how I use the system in practice. The AI version is labeled so you can see the difference.

Contact

Email: jdetle@gmail.com

LinkedIn: linkedin.com/in/jdetle

I respond within a day or two. If you already know you'll need employer-side approval on your end too, say so up front; it helps set expectations. If you want a feel for how I think before reaching out, the posts are the best place to start.