DTCSKILLS

Monday Ad Op: A Weekly DTC Ad Recipe in One Claude Code Chat

Jake Ballard

TL;DR: Once a week, one Claude Code chat runs your full paid ad loop end-to-end - refresh customer intel, pull competitor ads from the Meta Ad Library, diagnose dying creatives, generate on-brand replacements via Higgsfield, animate the winners, apply Meta's AI Content Label, and push live via the Meta CLI. About 30 minutes per ad account. The recipe below is the version we run as our weekly DTC ads playbook. Skip any step, and you either lose the brand voice, ship slower, or risk a policy strike.

Meta and Higgsfield both shipped MCPs within 24 hours of each other last week. Here is the recipe I built on top of them. The workflow used to take a half-day per ad account if you did it manually (and most operators don't, which is why most ad accounts drift). With these new tools wired together, it takes about 30 minutes per account, every Monday morning, one chat in Claude Code at a time. The difference is not the AI. The difference is what the AI is allowed to read before it generates anything.

This post is the recipe. Eight steps, six tools, one chat. If you have already set up the Meta Ads CLI/MCP and Higgsfield MCP, and you have a Brand Brain in place, this should run end to end with one prompt per step.

The full loop in one diagram

Here is what happens in a single Monday Ad Op session:

1. Refresh customer intelligence       (live Shopify + Klaviyo + reviews)
2. Pull competitor ad library          (recurring ads = ROAS proxy)
3. Diagnose dying ads                  (frequency >3.0, CTR down >15%)
4. Generate replacement creative       (4 statics per dying ad, on-brand)
5. Animate the winner                  (Seedance 2.0, 3-5 second clip)
6. Apply Meta's AI Content Label       (compliance gate)
7. Push live via the Meta CLI          (and pause the dying ones)
8. Save the artifact bundle            (so next Monday picks up where this one left off)

Under four minutes per step on average. Total wall-clock time is about 30 minutes per ad account, including review time. Multi-account agencies can parallelize across chats.

What you need installed first

Setup is one-time. About an hour total if you have nothing wired up yet.

  • Claude Code (or Claude Desktop / Codex) - this is where the loop runs
  • Meta Ads CLI - run ads login after install. Browser-based Meta Business OAuth, no Marketing API app review wait. Available to everyone today.
  • Higgsfield MCP - one custom connector at mcp.higgsfield.ai. OAuth into your Higgsfield account, no API keys.
  • A Brand Brain - structured markdown files for your voice, positioning, personas, objections, guardrails, and proven winners. Even a rough four-file version beats none. The DTC Stack ships a 54-file framework if you want the complete one.
  • The DTC Stack skills (or your own equivalent) - specifically competitor-intel for the Meta Ad Library research, dtc-ad-creative for diagnosis and push, and static-ad-prompt-engine for image and video generation. Each skill has pre-run blocks that read the Brand Brain automatically.

Step 1: Refresh customer intelligence (5 minutes)

The first prompt of every Monday Ad Op is a customer intelligence refresh. This pulls live data from Shopify, Klaviyo, your review platform, your CX tool, and any survey tool you use. The agent writes everything to a single file (shared/customer-intelligence/CUSTOMER_INTELLIGENCE.md) that every other skill reads automatically.

Why first: every downstream skill should be working from current data, not last week's snapshot. Conversion rate, AOV, top complaints, top winning email subject lines, top survey objections - all of these change week to week, and the AI cannot reason well from stale numbers.

Prompt: "Refresh my customer intelligence from the connected sources."

What you get back: a single file with seven sections (Store Performance, Email/SMS, Voice of Customer, Product Intelligence, Retention, Trends, Gaps). Anomalies over 15% week-over-week are flagged at the top. Total runtime is usually 3-5 minutes depending on how many sources are connected.
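The anomaly flag reduces to a one-pass comparison against last week's numbers. A minimal sketch, assuming hypothetical metric names; the 15% threshold comes from the prose, and this is not the skill's actual implementation:

```python
# Hypothetical sketch of the week-over-week anomaly flag the refresh step
# applies. Only the 15% threshold comes from the post; field names are made up.

ANOMALY_THRESHOLD = 0.15  # flag any metric that moved more than 15% WoW

def flag_anomalies(current: dict, previous: dict) -> list[str]:
    """Return human-readable flags for metrics that moved >15% week over week."""
    flags = []
    for metric, now in current.items():
        before = previous.get(metric)
        if not before:  # no baseline (missing or zero): nothing to compare
            continue
        change = (now - before) / before
        if abs(change) > ANOMALY_THRESHOLD:
            flags.append(f"{metric}: {change:+.1%} WoW")
    return flags

print(flag_anomalies(
    {"conversion_rate": 0.021, "aov": 62.0},
    {"conversion_rate": 0.026, "aov": 61.0},
))  # conversion rate fell ~19%, so only that metric is flagged
```

A drop from 2.6% to 2.1% conversion is a 19% move, so it surfaces at the top of the file; the 1.6% AOV wiggle does not.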

Step 2: Pull the competitor ad library (5 minutes)

Most DTC operators skip this step because it is tedious to do manually. You would have to open Meta's Ad Library for each competitor, scroll, screenshot, and try to remember what was running last week. By the third competitor you have lost the thread.

The Meta Ad Library is public information. Meta publishes it specifically for ad transparency. Pulling and analyzing it does not require API access, does not violate terms, and does not put your account at risk.

Prompt: "Run the Meta Ad Library research for [competitor list] and save to output/competitor-winners/."

What the agent does:

  1. Pulls each competitor's active ads from the public Ad Library page.
  2. Identifies recurring ads (running 14+ days across multiple ad sets is a strong proxy for ROAS - competitors do not keep dead ads alive).
  3. Categorizes each ad by placement (Reels / Feed / Stories), format (static / video / carousel), and hook angle (problem agitation, contrarian, mechanism, social proof, founder story).
  4. Downloads the top winning creatives to output/competitor-winners/[YYYY-MM-DD]/[competitor]/.
  5. Writes a _summary.md (narrative) and _meta.json (structured data) for downstream skills.
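The recurring-ad filter in step 2 can be sketched in a few lines. The 14-day and multi-ad-set thresholds come from the prose; the field names on the scraped Ad Library entries are assumptions:

```python
from datetime import date

RECURRING_MIN_DAYS = 14  # per the prose: 14+ days running is the ROAS proxy

def recurring_ads(ads: list[dict], today: date) -> list[dict]:
    """Filter scraped Ad Library entries down to likely winners:
    running 14+ days AND present in more than one ad set."""
    return [
        ad for ad in ads
        if (today - ad["started"]).days >= RECURRING_MIN_DAYS
        and ad["ad_set_count"] > 1
    ]

# Hypothetical scraped entries:
ads = [
    {"id": "a1", "started": date(2026, 4, 1), "ad_set_count": 3},  # 33 days live
    {"id": "a2", "started": date(2026, 5, 1), "ad_set_count": 1},  # 3 days, one ad set
]
print([a["id"] for a in recurring_ads(ads, date(2026, 5, 4))])  # ['a1']
```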

The artifact persists. Next Monday's session compares to this Monday's, so you can see what is new, what got discontinued, and what has been running for a month and is clearly working.

Step 3: Diagnose dying ads (3 minutes)

Now the agent looks at your own account. With the Meta CLI installed, it pulls the last 7 days of ad-level performance and flags ads that meet either condition:

  • Frequency over 3.0 AND CTR down more than 15% over the trailing 7 days (creative fatigue)
  • ROAS down more than 25% from the 30-day baseline (audience saturation or seasonal effect)
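The two flag conditions above can be expressed as a few lines of Python. The thresholds are the ones in the bullets; the field names are hypothetical, not the CLI's actual output schema:

```python
def is_dying(ad: dict) -> bool:
    """Flag an ad per the two rules above (hypothetical field names):
    creative fatigue OR ROAS collapse against the 30-day baseline."""
    fatigue = ad["frequency"] > 3.0 and ad["ctr_change_7d"] < -0.15
    saturation = ad["roas_change_30d"] < -0.25
    return fatigue or saturation

ad = {"frequency": 3.4, "ctr_change_7d": -0.22, "roas_change_30d": -0.05}
print(is_dying(ad))  # True - trips the creative-fatigue rule
```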

Prompt: "Run the creative fatigue audit on the last 7 days."

What you get back: a ranked list of dying ads with the trend chart for each, plus a one-line root-cause hypothesis (audience saturation, creative fatigue, copy mismatch, or external factor like a competitor launch). Spend impact is calculated so you know which ads to refresh first.

This is the artifact that drives the rest of the loop. If three ads are flagged, you will generate replacements for those three.

Step 4: Generate replacement creative (5 minutes)

For each flagged ad, the agent generates 4 replacement statics. Not generic ads. The pre-run blocks in the static ad prompt engine load:

  • The Brand Brain (voice, positioning, personas, objections, visual identity)
  • The customer intelligence file (top reviews, top objections, current AOV)
  • The proven winners file (which hooks have already worked)
  • The competitor winners folder (what is running in the category)

The result: prompts for Higgsfield's Nano Banana 2 model that are specifically informed by what is dying (the diagnostic), what your customers care about (the customer intelligence file), what has worked before (proven winners), and what the category looks like (competitor research). The generated images respect your photography style, hit your top objection, and avoid the angles that are already saturated.

Prompt: "Generate 4 replacement statics for each of the flagged ads, using the brand DNA and the competitor research."

Each output gets a _meta.json sidecar with ai_generated: true, requires_meta_disclosure: true so downstream steps know to apply the AI Content Label. This metadata flow is what keeps the workflow compliant. (Full context on why this matters in the compliance follow-up post.)
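Writing that sidecar is a few lines. This sketch assumes a `[stem]_meta.json` naming convention and a `source_model` field, neither of which is confirmed anywhere in the post:

```python
import json
from pathlib import Path

def write_sidecar(asset: Path) -> Path:
    """Write the compliance sidecar next to a generated asset so the
    push step knows to apply the AI Content Label. Naming convention
    and extra fields are assumptions, not the skill's documented format."""
    meta = {
        "ai_generated": True,
        "requires_meta_disclosure": True,
        "source_model": "nano-banana-2",  # assumed identifier
    }
    sidecar = asset.with_name(asset.stem + "_meta.json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar

print(write_sidecar(Path("ad-001.png")).name)  # ad-001_meta.json
```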

You review the four variants, pick a winner per dying ad. Three flagged ads, four variants each, twelve images to evaluate. Usually 5 minutes of skim time.

Step 5: Animate the winner (3 minutes)

For each picked winner, the Animate Winners mode in the static ad prompt engine runs the static through Seedance 2.0. Output is a 3-5 second video clip sized for Reels and TikTok.

The animation is for the opening seconds of the video ad, not the whole thing. The static carries the brand-correct visual; Seedance just adds motion to stop the scroll in a video-first feed.

Prompt: "Animate the winner from each replacement set with Seedance 2.0 at 1080x1920."

Quality gate: the agent verifies that on-product text is still readable in the animated version, that hands and faces did not deform, and that the brand color palette held. If any check fails, you regenerate or fall back to the static for that placement.

This step is optional. If you only run feed ads, skip it. The static alone will run.

Step 6: Apply Meta's AI Content Label (built into step 7)

This is not a separate prompt - it happens automatically in the push step. Worth calling out because it is the most-overlooked compliance gate in any AI ad workflow.

Meta requires AI-generated ad creative to be disclosed via Meta's AI Content Label in Ads Manager. Effective March 2026. Missing the label triggers ad rejection and a policy strike. Repeated strikes lead to permanent ad-account bans. The official Meta CLI does not exempt you from this rule.

The dtc-ad-creative skill reads the _meta.json sidecar from step 4 (which says ai_generated: true) and applies the AI Content Label in the same ads_create_ad call as the push. If the label cannot be applied for any reason, the skill refuses the push and surfaces the issue. Do not push unlabeled AI content. The cost of a delay is zero. The cost of a strike compounds.
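The gate itself reduces to: no sidecar, no push. A sketch of that check, again assuming the hypothetical `[stem]_meta.json` naming rather than the skill's real internals:

```python
import json
from pathlib import Path

def needs_label(asset: Path) -> bool:
    """True when the sidecar says the creative is AI-generated and must
    carry Meta's AI Content Label. Refuse (raise) when the sidecar is
    missing - unprovable provenance is treated the same as unlabeled."""
    sidecar = asset.with_name(asset.stem + "_meta.json")
    if not sidecar.exists():
        raise FileNotFoundError(f"refusing push: no sidecar for {asset}")
    meta = json.loads(sidecar.read_text())
    return bool(meta.get("ai_generated")) and bool(meta.get("requires_meta_disclosure"))

# Demo with a hand-written sidecar:
Path("ad-001_meta.json").write_text(
    '{"ai_generated": true, "requires_meta_disclosure": true}'
)
print(needs_label(Path("ad-001.png")))  # True
```

The important design choice is the raise: a missing sidecar halts the batch instead of silently pushing unlabeled creative.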

Step 7: Push live via the Meta CLI (2 minutes)

For each replacement creative the operator picked, the agent pushes the new ad live and pauses the dying one. All in the same chat, no Ads Manager. The push uses Meta's official Ads AI Connectors (CLI for Claude Code, MCP for Claude Desktop).

Prompt: "Push each picked replacement live, pause the corresponding dying ad, and confirm the AI Content Label was applied."

What you get back: a confirmation per ad with the ad_id, the AI Content Label flag status, and the pause confirmation for the original. Plus the rate-limit pacing rule fires automatically (no more than 5 consecutive mutations without a 5-second pause, watch the X-Business-Use-Case-Usage header, stop on 429).

If anything fails, the agent stops the batch and surfaces the issue. Better to ship 6 of 8 ads cleanly than 8 ads with one banned.
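The pacing rule is simple to sketch. The 5-mutation burst and 5-second pause come from the prose; modeling mutations as callables that return HTTP-style status codes is an assumption for illustration:

```python
import time

MAX_BURST = 5        # per the pacing rule: max 5 consecutive mutations
PAUSE_SECONDS = 5.0  # then a 5-second pause

def paced(mutations, sleep=time.sleep):
    """Yield mutation results with the pacing rule applied; stop the
    whole batch on a 429 instead of hammering the rate limit."""
    for i, call in enumerate(mutations, start=1):
        status = call()
        if status == 429:        # rate-limited: abort, retry later
            raise RuntimeError("429: stopping batch, retry later")
        if i % MAX_BURST == 0:   # breathe after every 5th write
            sleep(PAUSE_SECONDS)
        yield status

print(list(paced([lambda: 200] * 6, sleep=lambda s: None)))  # six clean writes
```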

Step 8: Save the artifact bundle (1 minute)

Final prompt: "Save the Monday Ad Op artifact bundle for this week."

The agent writes a single dated folder with everything from this session:

output/monday-ad-op/[YYYY-MM-DD]/
├── customer-intelligence-snapshot.md
├── competitor-winners/  (symlink to step 2 output)
├── fatigue-audit.md
├── replacement-creative/
│   ├── ad-001.png + _meta.json
│   ├── ad-002.mp4 + _meta.json
│   └── ...
├── push-log.md          (what was pushed, what was paused, label statuses)
└── exec-brief.md        (one-page narrative for stakeholders, in your brand voice)

This is what next week's session reads to compare. It is also what you forward to a client or co-founder if they ask "what changed this week."

What 30 minutes a week buys you

The math: 30 minutes per ad account, weekly. Assume 50 weeks per year. 25 hours of operator time per ad account per year, in exchange for:

  • Continuously refreshed customer intelligence flowing through every campaign
  • Weekly competitor monitoring (versus quarterly, which is the actual norm)
  • Creative fatigue caught at week-1 instead of week-4 when CPA blows up
  • 12+ creative variants generated per week (4 per dying ad, vs the 2-3 most operators ship manually)
  • Full compliance with Meta's AI Content Label rule
  • A persistent artifact trail that any new operator on the team can pick up

The 85% / 15% framing one practitioner used last week captures it: this loop handles the operational and analytical 85% of paid ad work. The 15% that is left is where you actually add value - strategy decisions, creative taste, client conversations, deciding which markets to enter. That 15% does not get cheaper. The 85% gets a lot cheaper.

When to skip steps

The loop is modular. Skip any step that does not fit your situation.

  • Skip step 2 (competitor research) if you have stable, well-known category dynamics and have done a full teardown in the last 60 days. Pick up again the following Monday.
  • Skip step 5 (animate) if you do not run video placements. Static alone is fine for feed.
  • Skip steps 4-7 entirely in weeks where no ads are flagged in step 3. The loop is reactive, not proactive - if nothing is dying, do not change anything.
  • Replace step 4 with real photography when you have a hero brand moment (founder story, brand film, cinematic launch piece). AI generation is for the long tail of variants, not the hero.

The whole point

The recipe is not the moat. Anyone can run this loop. The moat is what the loop reads before it generates anything.

Without a Brand Brain, the agent generates the average DTC ad - "Glow up your routine with our hydrating serum, powered by hyaluronic acid." Generic, forgettable, indistinguishable from every competitor.

With a Brand Brain (voice, positioning, personas, objections, guardrails, proven winners), the same agent generates an ad that mentions the specific objection your customers list in 40% of your reviews ("I am scared of breakouts from any new product"), in the founder-led skeptical voice you have built, with the photography style you have documented.

Same model. Same prompt. Different ad. Different conversion rate.

The recipe is the operational shell. The brand context is the brain. Without the brain, you are running expensive auto-pilot. With the brain, you are compounding a competitive advantage every week.

                        Without Brand Brain        With Brand Brain
Output quality          Average DTC ad             On-brand, distinctive
Hook angle              Most-common in category    White-space angles
Customer objection      Not addressed              Top objection in first 100 words
Photography style       Stock-feel                 Brand-documented
Compliance              Manual gate                Automatic via metadata sidecar
Time per cycle          Same 30 minutes            Same 30 minutes
Year-over-year value    Stays flat                 Compounds

What to do this week

If you have never run this loop:

  1. Install the Meta Ads CLI. Free, open beta. Run ads login for the OAuth flow.
  2. Install the Higgsfield MCP. One custom connector, one OAuth login. Costs ride your existing Higgsfield plan credits.
  3. Build a Brand Brain. Even a four-file version beats none: voice, personas, guardrails, proven winners. Or use the DTC Stack's 54-file framework.
  4. Set up customer intelligence sources. Connect Shopify, Klaviyo, and at least one review platform.
  5. Run the loop once. Pick one ad account, set aside 60 minutes (slower the first time as you tune prompts), and walk through all 8 steps.
  6. Schedule it for next Monday. Same chat, same sequence. By week three, you will be down to 30 minutes.

If you already have the loop running, the only update for May 2026: add the competitor-intel Meta Ad Library Research mode to step 2 if you have not yet. Recurring competitor data makes step 4 substantially better. We shipped that this week.

Frequently Asked Questions

How long does the Monday Ad Op actually take?

The first time you run it, plan for 60-90 minutes per ad account as you tune prompts and learn the artifacts. By the third week of running it, the steady state should be around 30 minutes per account. The biggest time sinks are reviewing creative variants (step 4) and confirming pushes (step 7) - both should be conscious human gates, not rushed.

What if I do not have a Brand Brain?

The loop still runs but the output is generic. The Brand Brain is the difference between "average DTC ad" and "ad that sounds like your brand." If you can only build four files this week, build voice, personas, guardrails, and proven winners - those four cover 80% of what the AI needs to produce on-brand creative.

Do I need both the Meta CLI and Higgsfield MCP, or just one?

Both. The Meta CLI handles the data and execution (steps 1, 3, 7). The Higgsfield MCP handles the asset generation (steps 4 and 5). Without one or the other, you are missing half the loop. The Meta MCP server (rolling out account-by-account at mcp.facebook.com/ads) is an alternative to the CLI for Claude Desktop and ChatGPT users; either path works for the data and execution side.

Can I run this for multiple ad accounts in the same session?

Yes, but pace it. Open a separate chat per ad account so the agent's working context stays clean for each brand. The Meta API rate limits stack across accounts when calls share an app, so the agent paces writes automatically (max 5 consecutive mutations, then a 5-second pause). For agencies running 5+ accounts, parallelizing across two or three chats works better than serial.

What if my account does not have the Meta MCP rolled out yet?

Use the CLI path. The MCP server is rolling out account-by-account, but the meta-ads CLI is available to everyone today. Same 29 tools, same OAuth flow, same compliance behavior. Switch to the MCP later when your account gets enabled.

Will using AI to run this loop get my ad account banned?

The official Meta CLI/MCP retires one specific class of ban risk (third-party developer apps and shared connectors). It does not retire Meta's content policies. The biggest risk in 2026 is undisclosed AI content - if you push AI-generated creative without applying Meta's AI Content Label, you take a policy strike. Repeated strikes lead to permanent bans. The compliance details are in the companion post on using Meta's MCP without getting banned. The Monday Ad Op recipe applies the label automatically at step 6.

Is this a replacement for an agency?

For execution, increasingly yes. For strategy, no. The 85% of paid ad work that is operational and analytical is what this loop automates. The 15% that is strategic - choosing markets, deciding offers, writing brand positioning, evaluating creative taste - is still owned by humans. If your agency relationship is mostly execution work, the loop replaces it. If your agency relationship is strategic, the loop frees them up to do more of the work you actually pay them for.


The DTC Stack ships every skill in this recipe pre-wired - the Brand Brain framework, the competitor-intel Meta Ad Library mode, the static ad prompt engine with Higgsfield MCP integration, and dtc-ad-creative with Meta CLI integration and AI Content Label compliance. $199 one-time. Get the Stack →.

Jake Ballard

Builds AI marketing systems for DTC and Shopify brands doing $1M-$50M. Creator of The DTC Stack.
