Ask your AI assistant to create a lead in Instantly.
It builds a curl command from scratch. Uses `campaign_id` as the field name. The API returns 200 OK. No error. Your lead just... doesn't show up in the campaign.
You stare at it for 20 minutes. You check Instantly's UI. Nothing. You re-read the API docs. There it is, buried in a paragraph: the field is `campaign`, not `campaign_id`. Using the wrong name doesn't error. It silently drops the assignment.
That's one API call. I run 12 of these daily across my GTM stack.
## Death by Enter Key
Here's what actually happens when an LLM tries to hit a GTM API it hasn't seen before. I watched this exact sequence with the Gamma presentation API:
- It writes a 30-line Python script. Shell quoting breaks. Enter.
- Rewrites the script. Uses Bearer auth. Wrong, Gamma uses `X-API-KEY`. Enter.
- Fixes the auth. Cloudflare blocks with 403. No `User-Agent` header. Enter.
- Adds `User-Agent`. Call succeeds but reads the response wrong. Enter.
- Fixes the response parsing. Polling endpoint fails. Enter.
Five Enter presses. Five permission prompts. Each one a 5-30 second round trip. One API call that should have been:
```bash
gamma.sh generate --text "Your content here" --mode preserve
```
One Enter. Three seconds. Done.
## It's not just single calls
Here's another one. I needed to reactivate an n8n workflow. Deactivate, wait a second, reactivate. Three commands. Here's what actually happened:
- Claude tries to grep the API key from .env using Python. Syntax error. Enter.
- Rewrites as a shell pipeline. Chains grep, xargs, and curl together. Security hook blocks it (data exfiltration pattern). Enter.
- Tries again with a different curl structure. Blocked again. Enter.
- Fourth attempt gets through. Workflow deactivated. Now it tries to reactivate. Wrong Content-Type header. Enter.
- Fixes the header. Reactivation finally works. Enter.
Five Enter presses for a deactivate/reactivate cycle. Should have been:
```bash
n8n.sh deactivate --id WORKFLOW_ID
sleep 1
n8n.sh activate --id WORKFLOW_ID
```
Three commands. The sleep between deactivate and activate is a gotcha baked into the payload template so the LLM knows to include it every time.
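A sketch of what "baked into the template" can look like in practice: a single wrapper command that carries the wait with it, so the LLM never has to remember the gotcha. The function names, host, header, and endpoint paths here are illustrative assumptions, not the real `n8n.sh` internals.

```bash
# Hypothetical sketch: one wrapper command that owns the whole cycle,
# including the wait. Env vars N8N_URL and N8N_API_KEY are assumed.
n8n_call() {
  local method=$1 path=$2
  curl -sS -X "$method" \
    -H "X-N8N-API-KEY: $N8N_API_KEY" \
    "$N8N_URL/api/v1$path"
}

cycle_workflow() {
  local id=$1
  n8n_call POST "/workflows/$id/deactivate" || return 1
  sleep 1   # gotcha baked in: reactivating immediately can race the deactivation
  n8n_call POST "/workflows/$id/activate"
}
```

Because the `sleep` lives inside the function, the LLM can't skip it: it calls `cycle_workflow` and the timing is handled.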
This is the real cost of LLM API hallucination. It's not just wrong answers. It's you, sitting there, pressing Enter every time your AI assistant tries another variation of the same broken curl command. Each failure needs your approval. Each approval costs you focus. A single API integration can eat 10 minutes and 8+ Enter presses before it stumbles onto the right combination.
I call it the Enter Key Tax. And I was paying it dozens of times a day.
## The Test
I ran 8 common GTM operations through Claude Code with no tool layer. Just the raw LLM and whatever API knowledge it had in its training data.
Result: 0 out of 8 correct.
| Operation | Result | What Went Wrong |
|---|---|---|
| Create Instantly lead | FAIL | Used `campaign_id` instead of `campaign`. Lead went nowhere. |
| Instantly SuperSearch | FAIL | Passed campaign ID instead of lead list ID |
| Upsert Attio person | FAIL | Used GET instead of PUT, wrong value format |
| Generate Gamma deck | FAIL | Bearer auth + no `User-Agent` = Cloudflare 403 |
| Slack Connect invite | FAIL | Wrong scope, missing charset in `Content-Type` |
| Bigin stage update | FAIL | Wrong endpoint, missing `{data: [...]}` wrapper |
| Pylon channel link | FAIL | Tried to create a channel. Pylon can only link existing ones. |
| n8n workflow update | FAIL | Missing required `settings` field, got validation error |
These aren't edge cases. They're the primary operations you'd run daily. And the failure modes are the worst kind: silent, or so cryptic you can't tell what went wrong without reading source code.
## What I Built
GTMOps is a deterministic tool layer that sits between the LLM and the APIs.
12 bash scripts. One per API. Each one bakes in the auth pattern, the correct field names, the gotchas, and error handling. When Claude loads `instantly.sh`, it doesn't guess. It calls a function that's already been tested.
With GTMOps:

```bash
# One call. One Enter. Correct every time.
instantly.sh create-lead \
  --email "jane@acme.com" \
  --first-name "Jane" \
  --campaign "YOUR_CAMPAIGN_UUID"
```
The `campaign` vs `campaign_id` gotcha sits on line 22 as a comment, and it's baked into the script's JSON builder so the LLM can't use the wrong field even if it wanted to.
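The idea of "baked into the JSON builder" can be sketched in a few lines: the builder hard-codes the correct key, so no caller can smuggle in `campaign_id`. The function and argument names here are illustrative assumptions, not the real `instantly.sh` internals.

```bash
# Hypothetical sketch of a payload builder that makes the wrong field
# name impossible. Only the values are parameters; the keys are fixed.
build_lead_payload() {
  local email=$1 first_name=$2 campaign=$3
  jq -n \
    --arg email "$email" \
    --arg first_name "$first_name" \
    --arg campaign "$campaign" \
    '{email: $email, first_name: $first_name, campaign: $campaign}'
  # the key is hard-coded right here: "campaign", never "campaign_id"
}
```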
Every tool follows the same contract:
- `--help` prints usage with a GOTCHAS section
- `--dry-run` prints the curl without executing
- JSON to stdout, errors to stderr
- Non-zero exit on API error
There's also a guard hook. It runs after every Bash call in Claude Code. If your AI tries to build a raw curl command to a covered API instead of using the wrapper, the hook catches it and says: use `instantly.sh`, not `curl api.instantly.ai`. If a tool fails, it points you to the gotchas. It's the enforcement layer for the whole approach.
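The core of such a hook is just pattern matching on the command string. A minimal sketch, assuming one covered host; the real hook's host list, messages, and wiring into Claude Code are not shown here.

```bash
# Hypothetical guard check: flag raw curl calls to hosts that already
# have a wrapper. Returns non-zero so the hook can block and redirect.
check_command() {
  case $1 in
    *api.instantly.ai*)
      echo "use instantly.sh, not curl api.instantly.ai"
      return 1 ;;
    *)
      return 0 ;;
  esac
}
```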
## The Gotchas Are the Product
The scripts themselves are simple. Bash, curl, jq. Nothing fancy.
The value is in what's baked into them. Each gotcha represents a real debugging session:
- Instantly: `campaign`, not `campaign_id`. Two hours to find this. The API accepts both field names but only one actually assigns the lead.
- Gamma: `X-API-KEY` header, not Bearer. Plus a `User-Agent` header, or Cloudflare returns 403 with error 1010. Plus the response field is `generationId`, not `id`.
- Attio: Queries use POST, not GET. Values are nested arrays of objects (`[{value: X}]`), not flat values.
- n8n workflow update: The `settings` field is required on PUT. Extra fields like `availableInMCP` cause validation errors. You have to strip the GET response before PUTting it back.
- Pylon: Cannot create Slack channels. Only link existing channel IDs. Every LLM tries to create first.
None of this is in the API docs in a way that LLMs can reliably extract. It's the kind of knowledge that only comes from production failures.
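To make the Attio gotcha concrete: the nested-value shape is mechanical enough to encode as a one-line jq filter. This is an illustrative helper, not code from `attio.sh`.

```bash
# Hypothetical helper: wrap every flat value in the [{value: ...}] shape
# Attio expects, so callers can pass plain key/value JSON.
to_attio_values() {
  jq 'map_values([{value: .}])'
}
```

So `{"name": "Jane"}` becomes `{"name": [{"value": "Jane"}]}` before the payload goes out.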
## The Result
Same 8 operations, GTMOps loaded:
8/8 correct. Eight tool calls. Eight Enter presses.
No retries. No permission prompt chains. No watching curl commands fail and rebuild. Total time for all 8 dropped from "I stopped counting after 15 minutes" to under 2 minutes.
Without GTMOps: 0/8 correct, with 3-8 retry attempts per failure, 30-50 Enter presses across the run. With GTMOps: one press per operation. Done.
## Adding Your Own APIs
The repo includes OnboardAPI.md, a 6-phase workflow for adding new APIs. It also includes 23 payload templates that document the exact JSON shapes, field maps, and gotchas for each API. You don't need them to use the tools, but they're the playbook when you're building new ones.
- Discover - Read docs, identify auth, test one call manually
- Catalog Gotchas - Find the landmines before they blow up
- Build Tool - Copy an existing script, swap in the new API
- Build Payloads - JSON templates with curl commands and field maps
- Register - Add to the SKILL.md tables
- Validate - `--help`, `--dry-run`, one real call, valid JSON
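The contract from the six phases maps to a small skeleton. A hypothetical sketch of what a fresh tool might start from; `newapi.sh`, the host, and the GOTCHAS text are placeholders, not the repo's actual template.

```bash
# Hypothetical skeleton for phase 3: copy, swap in the new API, keep the
# contract (--help with GOTCHAS, --dry-run, errors to stderr, non-zero exit).
usage() {
  cat <<'EOF'
Usage: newapi.sh <command> [--dry-run]

GOTCHAS:
  - list the landmines from phase 2 here
EOF
}

main() {
  local dry_run=0 cmd=""
  for arg in "$@"; do
    case $arg in
      --help) usage; return 0 ;;
      --dry-run) dry_run=1 ;;
      *) cmd=$arg ;;
    esac
  done
  local url="https://api.example.test/v1/$cmd"
  if [ "$dry_run" -eq 1 ]; then
    echo "curl -sS $url"   # --dry-run: print the curl, never execute it
  else
    curl -sS "$url" || { echo "API error" >&2; return 1; }
  fi
}
```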
The whole process takes 15-30 minutes per API. The first three took longer because I was figuring out the pattern. Now it's mechanical.
## v1.1 Update: Battle-Tested in a Real Pipeline
April 10, 2026
Yesterday we shipped an automated Slack Connect onboarding pipeline for a client launching at a conference. 30+ n8n nodes, Clay qualification, Pylon CS tracking, Attio CRM upserts, Zapier sync. The kind of build where you hit every API in the stack within a few hours.
GTMOps caught every gotcha before it became a debugging session. But it also exposed gaps in the tools themselves. Here's what we added:
### Pylon: 3 new commands + 3 new gotchas
`search-account`, `delete-account`, `update-account`. We were cleaning up test data (9 Pylon accounts, 8 Attio records) and needed bulk operations the tool didn't support. Raw curl every time. Now it's:
```bash
pylon.sh search-account --domain acme.com
pylon.sh delete-account --id abc-123
pylon.sh update-account --id abc-123 --json '{"channels": [...]}'
```
The gotchas we learned the hard way:
- Pylon rejects private Slack channel IDs. If you try to link a private channel via API, you get a 400. Public channels only.
- Pylon auto-creates accounts for public `user-*` channels. If the Pylon bot is in a public channel named `user-acmecorp`, it creates the account automatically. You might not need the create-account API at all.
- DELETE returns 200 with just a `request_id`. No body confirmation. Don't parse it expecting account data back.
### n8n: create + export workflows
We built a v2 workflow as a separate pipeline alongside v1. Without create-workflow, every new workflow was manual JSON manipulation and raw curl. Now:
```bash
n8n.sh --instance cloud export-workflow --id YGOJ2 --file backup.json
n8n.sh --instance cloud create-workflow --file new-workflow.json
```
New gotchas baked in: `callerPolicy` in `settings` causes validation errors (strip it). The cloud credentials API returns empty arrays (you can't read values). PUT requires the `name` field or you get a 400.
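The strip step is a natural jq filter. A minimal sketch of the idea; the exact field whitelist is an assumption for illustration, not copied from `n8n.sh`.

```bash
# Hypothetical filter: keep only the fields PUT accepts and drop
# callerPolicy from settings before sending the GET response back.
strip_for_put() {
  jq '{name, nodes, connections, settings: (.settings | del(.callerPolicy))}'
}
```

Pipe the GET response through it and the read-only extras that trigger validation errors never reach the PUT body.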
### Attio: multi-workspace + tasks + members
The pipeline needed to create tasks for specific team members and work across two Attio workspaces. Added a `--workspace` flag, `list-members`, and `create-task`:
```bash
attio.sh --workspace team list-members
attio.sh --workspace team create-task --content "Follow up with lead" --assignee member-uuid
```
Gotcha: `is_completed`, `assignees`, and `linked_records` are ALL required on the Tasks API. Miss any one and you get a validation error with no useful message. Also: domain upsert can match the wrong company if a domain appears on multiple records.
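A payload builder is how this gotcha stays baked in: the three required fields are always present, even when empty. The exact field shapes below are assumptions sketched from the gotcha, not the verbatim `attio.sh` template.

```bash
# Hypothetical builder: is_completed, assignees, and linked_records are
# always emitted, so the Tasks API can never reject a payload for a
# missing required field.
build_task_payload() {
  local content=$1 assignee=$2
  jq -n --arg content "$content" --arg assignee "$assignee" \
    '{data: {
        content: $content,
        format: "plaintext",
        is_completed: false,   # required, even for brand-new tasks
        assignees: [{referenced_actor_type: "workspace-member",
                     referenced_actor_id: $assignee}],
        linked_records: []     # required, even when empty
      }}'
}
```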
## The philosophy behind it
GTMOps runs inside a Personal AI Infrastructure setup heavily inspired by Daniel Miessler's PAI architecture. If you haven't read his work on building persistent AI systems with skills, hooks, and steering rules, start there. A lot of the mental models behind GTMOps come from his framework: the idea that your AI should have domain expertise baked into reusable skills, not rediscovered every session.
One principle from that architecture shaped how GTMOps evolves: the tool IS the memory. When we hit a new API gotcha during a build, the learning goes directly into the bash script's comments and GOTCHAS section. Not into a separate doc. Not into a knowledge base the AI might or might not load. The tool file is the single source of truth.
We enforce this with a guard hook that blocks API learnings from going anywhere except the tool scripts. When the AI tries to write "Pylon rejects private channels" into a memory file, the hook catches it and says: put that in pylon.sh instead. The gotcha ends up exactly where it'll be read next time the tool runs.
That's why every --help output has a GOTCHAS section. Those aren't documentation we wrote after the fact. They're scars from real debugging sessions, captured at the moment of failure, baked directly into the code that prevents the next one.
Daniel's thinking on determinism, skill composition, and continuous learning shaped all of this. GTMOps is one skill in a larger system, but the principle is universal: if your AI keeps making the same mistake, don't train it harder. Give it a tool that makes the mistake impossible.
## Try It
```bash
git clone https://github.com/themoatwriter/gtmops.git
cd gtmops
cp .env.example .env
# Add your API keys
chmod +x src/tools/*.sh

# Test
src/tools/instantly.sh --help
src/tools/attio.sh --help
```
9 tools. Open source, MIT licensed. If you run a GTM stack through Claude Code and you're tired of babysitting API calls, this is what fixed it for me.
github.com/themoatwriter/gtmops
Every gotcha in GTMOps is a scar from a real debugging session. If it saves you from pressing Enter 15 times on one API call, it did its job.