Connecting GA4 to Claude or ChatGPT takes five minutes. Knowing what to actually ask it takes longer. The first day after you set up a GA4 MCP server is usually a long pause in front of an empty chat, followed by something vague like "show me my traffic," followed by disappointment.
The prompts below are the ones that actually move the needle. They're grouped by what you're trying to find out, with the exact wording I use and a short note on why each one works. If you haven't connected GA4 yet, the no-code setup guide is a five-minute prerequisite.
What makes a GA4 prompt actually work
Most failed GA4 prompts share three traits: no time window, no segment, no requested output. "Show me my conversions" gives you a table you'll squint at and abandon. "Compare conversion rate by device for the last 7 days versus the previous 7 days, sorted by largest drop" gives you a usable answer.
The first habit to build is to always specify the comparison. Almost every useful analytics question is a delta — this week versus last week, mobile versus desktop, paid versus organic. Give the AI both sides up front; it won't invent the comparison for you.
The second is to ask "where" rather than "what." "What's my conversion rate?" is a one-number answer. "Where is my conversion rate weakest — which device, source, landing page?" is a diagnostic. The first ends the conversation. The second starts it.
The third is to describe the output you want. A daily series, a table sorted by impact, a one-line summary for Slack, three bullet points for a leadership update — the AI picks the format from your wording. Vague input gets vague output.
Weekly audit prompts
Use these once a week, every week. The point isn't to find an emergency — it's to spot the small shifts before they become emergencies.
"Compare the last 7 days to the previous 7 days across sessions, conversions, conversion rate, and revenue. Flag any metric that moved by more than 15% in either direction. Group the answer by traffic source."
This is the standing Monday prompt. Fifteen percent is the threshold I use because anything smaller is usually noise. Adjust based on your traffic volume — lower threshold for high-volume sites, higher for small ones.
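The flagging logic the prompt asks for is simple enough to sketch. The numbers below are invented, not real GA4 output — the point is the relative-change rule with the 15% threshold:

```python
# Hypothetical sketch of the weekly-audit check, with made-up numbers
# standing in for a real GA4 Data API response.

THRESHOLD = 0.15  # flag moves larger than 15% in either direction

# metric -> (previous 7 days, last 7 days); values are illustrative
metrics = {
    "sessions":        (12_400, 13_900),
    "conversions":     (310, 248),
    "conversion_rate": (0.025, 0.0178),
    "revenue":         (18_200.0, 17_950.0),
}

def flag_moves(metrics, threshold=THRESHOLD):
    """Return metrics whose relative change exceeds the threshold."""
    flagged = {}
    for name, (prev, curr) in metrics.items():
        if prev == 0:
            continue  # avoid division by zero; handle separately in practice
        delta = (curr - prev) / prev
        if abs(delta) > threshold:
            flagged[name] = round(delta, 3)
    return flagged

print(flag_moves(metrics))  # conversions and conversion_rate get flagged
```

Sessions moved about 12% here, so they stay quiet; conversions fell 20% and get flagged — which is exactly the kind of asymmetry (traffic up, conversions down) the Monday prompt exists to catch.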
"Show me a 30-day trend of my primary conversion event, daily. Highlight any anomalies or cliffs."
Useful as a sanity check before you trust any week-over-week comparison. If there's a cliff in the middle of the comparison window, the WoW number is meaningless and you'll act on noise.
"What were the three biggest wins and three biggest concerns in my GA4 data this week? Be specific — name the page, channel, or segment."
This is the prompt I use to write a leadership update in under a minute. The "be specific" clause is what stops the AI from returning platitudes.
Funnel and checkout prompts
For ecommerce and SaaS funnels, these prompts are how you find leaks.
"Pull my conversion funnel from product view to purchase. Show me each step, the user count, and the step-to-step drop-off rate. Highlight the step with the largest drop-off."
The classic funnel question, framed correctly. The AI returns a table you can act on — usually with one step that's clearly worse than the others.
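Step-to-step drop-off is the key computation here. A minimal sketch, with invented user counts and assumed event names (your funnel steps may differ):

```python
# Illustrative funnel drop-off computation. Step names and counts are
# made up; a real answer comes from your GA4 event data.

funnel = [
    ("view_item", 50_000),
    ("add_to_cart", 9_000),
    ("begin_checkout", 5_400),
    ("purchase", 2_100),
]

def dropoffs(steps):
    """Return (from_step, to_step, drop-off rate) for each adjacent pair."""
    out = []
    for (a, n_a), (b, n_b) in zip(steps, steps[1:]):
        out.append((a, b, 1 - n_b / n_a))
    return out

rates = dropoffs(funnel)
worst = max(rates, key=lambda r: r[2])
for a, b, d in rates:
    print(f"{a} -> {b}: {d:.0%} drop-off")
print("largest drop-off:", worst[0], "->", worst[1])
```

Note the rates are step-to-step, not cumulative — a step can look fine cumulatively while being the single worst transition in the funnel.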
"Compare my checkout funnel completion rate on mobile versus desktop for the last 14 days. Where is the biggest gap?"
Mobile checkout is the most common single leak in 2026 ecommerce. This prompt isolates it. If you see a 20-point gap or more, jump to the mobile conversion rate fix article.
"Look at my begin_checkout to purchase conversion rate by source. Are any traffic sources converting at less than half the average?"
This finds bad traffic. A paid channel sending lots of begin_checkout but few purchases is usually low-intent traffic, not a checkout problem — different fix.
"Compare my add_to_cart to begin_checkout conversion rate over the last 30 days, weekly. Has it shifted? If yes, when?"
The cart-to-checkout transition is one of the most stable funnel steps in normal times. A sustained shift here is almost always a UX or pricing change that needs investigating.
Anomaly and drop prompts
When something looks off, these prompts get to the diagnosis fast. The full ten-minute version is in the conversion rate drop diagnostic article — what follows are the standalone prompts from that workflow.
"My primary conversion event dropped this week. Compare the seven days before the drop to the seven days after, across device, source, geo, and landing page. Where is the loss concentrated?"
The single most useful drop-diagnosis prompt. Almost every drop concentrates somewhere — this is the prompt that finds the where.
"Has anything new started sending traffic to my site in the last 14 days? List any new referrers, source/medium combinations, or campaigns with significant volume."
This finds the inverse problem — when sessions inflate from a low-quality source and drag your rate down. The drop isn't behavioural, it's a denominator change.
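The denominator effect is worth seeing in plain arithmetic. With invented numbers:

```python
# Denominator dilution: junk sessions drag conversion rate down even
# when conversions hold perfectly steady. Numbers are illustrative.

base_sessions, base_conversions = 10_000, 250   # a 2.5% rate
junk_sessions = 4_000                           # new low-intent referrer

rate_before = base_conversions / base_sessions
rate_after = base_conversions / (base_sessions + junk_sessions)

print(f"before: {rate_before:.2%}, after: {rate_after:.2%}")
```

Nothing about visitor behaviour changed, yet the headline rate fell by nearly a third — which is why this prompt checks for new traffic before anyone starts "fixing" the site.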
"Detect any anomalies in my GA4 data over the last 30 days. For each, tell me which dimension and date it occurred on and how large the deviation was."
Run this every Monday as a complement to the weekly audit. It catches the things you didn't think to look for.
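One simple way to approximate what "detect anomalies" means under the hood is a z-score check on the daily series. The actual MCP tool may use something more sophisticated; the series below is fabricated:

```python
# A z-score anomaly sketch: flag days more than two standard deviations
# from the 30-day mean. The daily values are invented.

import statistics

daily = [210, 198, 225, 215, 202, 220, 95, 208, 217, 211]  # day index 6 is a cliff

mean = statistics.mean(daily)
stdev = statistics.stdev(daily)

anomalies = [
    (i, v, round((v - mean) / stdev, 2))
    for i, v in enumerate(daily)
    if abs(v - mean) > 2 * stdev
]
print(anomalies)  # the single cliff day is flagged
```

A two-sigma rule is deliberately loose — on noisy low-volume data you may want three sigma, for the same reason the weekly audit uses a 15% threshold rather than 5%.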
Mobile versus desktop prompts
Mobile is usually the underperformer. These prompts isolate the gap and start the investigation.
"Compare conversion rate, average session duration, and pages per session across device categories for the last 30 days. Where is mobile weakest relative to desktop?"
Sets the baseline. If mobile conversion rate is half of desktop, that's typical. If it's a quarter, something is broken.
"Show me time on page, scroll depth, and engagement rate for my checkout pages on mobile. Flag any page with engagement below 40%."
Mobile UX issues show up in engagement metrics before they show up in conversion. This is the early warning.
"Compare my mobile checkout funnel for iOS Safari versus Chrome. Is the drop-off different?"
Browser-specific bugs are common and almost invisible in aggregate metrics. This prompt surfaces them.
Landing page prompts
The landing page is the make-or-break moment. These prompts find which ones are working and which aren't.
"Rank my top 20 landing pages by sessions over the last 30 days. For each, show me bounce rate, engagement rate, and conversion rate. Highlight pages with high traffic but low conversion."
The classic landing page audit. Pages with high traffic and low conversion are where small fixes have the biggest dollar impact.
"For my top three landing pages by traffic, compare conversion rate by source of arrival. Are some sources converting much better than others on the same page?"
If organic visitors convert at 4% and paid visitors convert at 0.8% on the same landing page, the landing page isn't the problem — the paid traffic match is.
"Which landing pages had the biggest week-over-week drop in conversion rate? Limit to pages with at least 200 sessions per week."
The volume threshold is critical. Without it, the AI returns small-sample noise.
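The floor-then-rank pattern looks like this in miniature. Page paths and rates are made up:

```python
# Volume floor plus ranking: exclude thin pages first, then sort by
# week-over-week conversion-rate drop. All figures are illustrative.

MIN_SESSIONS = 200

pages = [
    # (page, sessions this week, rate last week, rate this week)
    ("/pricing", 1_800, 0.041, 0.022),
    ("/blog/ga4-prompts", 3_200, 0.012, 0.011),
    ("/landing/promo", 90, 0.100, 0.000),  # too few sessions: excluded
]

eligible = [p for p in pages if p[1] >= MIN_SESSIONS]
ranked = sorted(eligible, key=lambda p: p[3] - p[2])  # biggest drop first
print([p[0] for p in ranked])
```

Without the floor, the 90-session page "dropped" from 10% to 0% and would top the list — exactly the small-sample noise the threshold exists to filter out.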
Channel and traffic source prompts
For multi-channel diagnosis and budget allocation discussions.
"Show me sessions, conversion rate, and revenue per session by traffic source/medium for the last 30 days. Sort by revenue per session."
Revenue per session is the right ranking metric for budget decisions, not raw revenue or conversion rate alone.
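The ranking itself is one division per channel. Channel figures here are invented:

```python
# Ranking channels by revenue per session rather than raw revenue.
# The channel data is illustrative.

channels = {
    "google / organic": {"sessions": 22_000, "revenue": 30_800.0},
    "google / cpc": {"sessions": 8_000, "revenue": 14_400.0},
    "facebook / paid": {"sessions": 9_500, "revenue": 7_600.0},
}

ranked = sorted(
    channels.items(),
    key=lambda kv: kv[1]["revenue"] / kv[1]["sessions"],
    reverse=True,
)
for name, d in ranked:
    print(f"{name}: {d['revenue'] / d['sessions']:.2f} revenue/session")
```

In this toy data, organic has the most raw revenue but paid search earns more per session — the distinction that matters when the question is where the next dollar of budget goes.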
"Compare paid search and paid social over the last 60 days. Which channel is improving and which is degrading on a per-session basis?"
Trend matters more than snapshot. A degrading channel will continue degrading until something changes.
"Has my AI Assistant channel grown in the last 30 days? If yes, by how much, and what's its conversion rate compared to organic search?"
In 2026 the AI Assistant channel in GA4 — sessions from ChatGPT, Claude, Perplexity, and similar — is a fast-growing source for most sites. Worth tracking specifically because it behaves differently from organic.
"Flag any source/medium that sent more than 5% of my traffic but converted at less than 25% of the site average."
This is the bad-traffic prompt. Anything in this bucket is a candidate for exclusion, audience refinement, or a budget cut.
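Both conditions in the prompt are explicit thresholds, so the rule translates directly to code. Source names and numbers are fabricated:

```python
# The bad-traffic rule as code: sources with more than 5% of traffic
# that convert at less than 25% of the site average. Data is invented.

sources = {
    "google / organic": {"sessions": 20_000, "conversions": 600},
    "spammy-site / referral": {"sessions": 3_000, "conversions": 6},
    "newsletter / email": {"sessions": 1_000, "conversions": 80},
}

total_sessions = sum(s["sessions"] for s in sources.values())
total_conversions = sum(s["conversions"] for s in sources.values())
site_rate = total_conversions / total_sessions

flagged = [
    name
    for name, s in sources.items()
    if s["sessions"] / total_sessions > 0.05
    and s["conversions"] / s["sessions"] < 0.25 * site_rate
]
print(flagged)
```

The 5% traffic-share condition matters as much as the conversion one: a terrible source sending 40 sessions a month isn't worth the meeting it takes to exclude it.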
FAQ
What's the best way to write a GA4 prompt?
Three things make a prompt work. A clear time window with a comparison ("last 7 days versus previous 7"). At least one segment dimension ("by device" or "by source"). And the format you want back ("a table sorted by impact" or "a one-line summary"). Without those, the answer is usually vague.
Do GA4 prompts work the same in Claude and ChatGPT?
Mostly yes. Both call the same GA4 MCP tools, so the data and computation are identical. Claude tends to be more verbose by default and more willing to add commentary; ChatGPT tends to be more compact. Identical prompts produce slightly different framing but the same underlying analysis — full breakdown in Claude vs ChatGPT for analytics.
Why isn't my GA4 prompt returning the data I expected?
Three common reasons: the comparison window is ambiguous (say "last 7 days" explicitly), the metric name isn't matching GA4's naming exactly (conversions and key events are sometimes interchangeable, sometimes not, depending on your property age), or the segment you asked for has too little data to be meaningful. Add a volume floor like "with at least 100 sessions" to filter noise.
Can I save GA4 prompts as reusable templates?
Yes — both Claude and ChatGPT now support saved prompts and projects. Save the weekly audit as a template and run it every Monday. ConvRadar also exposes preset prompts (/full_audit, /weekly_checkup, /diagnose_drop) that bundle the patterns above with the diagnostic tools.
What are the most common GA4 prompt mistakes?
Asking "what" instead of "where" — a number ends the conversation, a diagnostic starts one. Skipping the comparison window. Skipping the segment. Asking for one giant report instead of chaining narrower questions. The AI works better as a series of focused queries than as a single mega-prompt.
Do I need to know SQL to write good GA4 prompts?
No. GA4 MCP servers wrap the Data API into named tools, and a good prompt describes the question in plain English. SQL knowledge helps when you fall back to the BigQuery + AI route described in our GA4 MCP server overview, but for the prompts above you don't need it.
Save these. Try one. The weekly audit is the prompt that gets most people to stop ignoring their GA4 — once it takes a minute instead of half an hour, you'll actually run it. Start your free trial if you don't have GA4 connected yet.