Last year I wrote about using MCP servers to generate AI-powered analytics reports from Umami data. The response was great, but there were some rough edges: too many dependencies, broken UTM queries, and no way to track newsletter conversions.
Today I’m announcing three major improvements to the umami-mcp-llm-report tool:
- Stripped down to three dependencies (just `mcp`, `python-dotenv`, and `aiohttp`)
- Newsletter subscription tracking with UTM attribution
- Fixed UTM reporting bugs that were causing “Bad request” errors
Plus, the reports are now more insightful than ever. Here’s what changed.
Dependency Cleanup: Less is More
The original version pulled in 15+ dependencies including heavy frameworks. This created:
- Slow startup times
- Security surface area
- Version conflicts
- Harder maintenance
New dependency list:
```
mcp            # Model Context Protocol SDK
python-dotenv  # Environment variable management
aiohttp        # Async HTTP client
```
That’s it. The entire tool is now ~500 lines of Python with no external framework dependencies.
How I did it:
- Replaced FastAPI with raw MCP SDK
- Swapped pandas for simple dict operations
- Removed CLI frameworks in favor of argparse
- Used native async/await instead of framework wrappers
Result: Startup time dropped from 8s to 1.2s, and the tool is now trivial to audit and maintain.
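The pandas swap is representative of the whole cleanup. A minimal sketch of the idea, counting pageviews per URL with a plain dict instead of a DataFrame (the field names here are illustrative, not the tool's actual schema):

```python
from collections import defaultdict

def top_pages(events: list[dict], limit: int = 3) -> list[tuple[str, int]]:
    """Count pageviews per URL and return the top entries, most-viewed first.
    Replaces a pandas groupby/sort with a dict and one call to sorted()."""
    counts: dict[str, int] = defaultdict(int)
    for event in events:
        counts[event["url"]] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:limit]
```

For report-sized data (thousands of rows, not millions), this is both faster to import and easier to audit than a DataFrame pipeline.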
Newsletter Tracking: The Missing Piece
The original tool could show pageviews and sessions, but couldn’t answer the most important question: “Which marketing campaigns actually drive subscriptions?”
What’s New
The newsletter system now captures:
- `utm_source` - Where did they come from? (twitter, google, newsletter)
- `utm_medium` - What channel? (social, organic, email)
- `utm_campaign` - Which campaign? (launch2026, weekly-47)
- `ctaName` - Which signup form? (footer, blog-post-bottom)
- `pageUrl` - Which page did they sign up on?
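All five fields can be derived from the page URL and the form metadata at signup time. Here's a minimal sketch using only the Python standard library; the function name and the `(direct)` fallback are my illustration, not necessarily the tool's actual code:

```python
from urllib.parse import parse_qs, urlparse

def signup_payload(page_url: str, cta_name: str) -> dict:
    """Extract UTM parameters from the signup page URL and bundle them
    with the CTA name, mirroring the fields the newsletter system records."""
    params = parse_qs(urlparse(page_url).query)

    def first(key: str) -> str:
        # Fall back to "(direct)" when a UTM parameter is absent.
        return params.get(key, ["(direct)"])[0]

    return {
        "utm_source": first("utm_source"),
        "utm_medium": first("utm_medium"),
        "utm_campaign": first("utm_campaign"),
        "ctaName": cta_name,
        "pageUrl": page_url,
    }
```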
Example Report Output
Here’s a real report from my site (June 2025 to March 2026):
```
📈 DASHBOARD ANALYSIS (GEMINI-CLI):
================================================================================
# Umami Analytics Dashboard: rhelmer.org
**Period:** 2025-06-01 to 2026-03-22

## Traffic Summary

| Metric | Value |
| --- | --- |
| **Pageviews** | 12,883 |
| **Unique Visitors** | 6,851 |
| **Sessions** | 8,497 |
| **Bounce Rate** | 91.28% |
| **Average Visit Duration** | 30 seconds |

## Top Content

### Top Pages
- `/` - 2,447 pageviews
- `/blog/` - 2,127 pageviews
- `/blog/building-whiskers-engine-cpp-game-engine/` - 1,142 pageviews

### Top Referrers
- linkedin.com - 753 visitors
- google.com - 728 visitors
- github.com - 65 visitors

## Newsletter Attribution

### Subscriptions by UTM Source
- (direct) / (direct) - 45 signups
- twitter / social - 23 signups
- newsletter / email - 18 signups

### Top Performing Campaigns
- launch2026 (twitter) - 15 signups
- weekly-47 (email) - 12 signups

## Key Insights

* **Strong Growth in Early 2026:** Pageviews show a significant upward trend
* **High Bounce Rate:** 91.28% indicates single-page visits dominate
* **LinkedIn and Google Lead Referrals:** These two sites are the primary traffic sources
* **UTM Data Now Working:** Campaign attribution is now functional

## Recommendations

* **Investigate High Bounce Rate:** Analyze landing page UX
* **Leverage Top Content:** Promote "Building Whiskers Engine" further
* **Track UTM Parameters:** Use campaign tags in all marketing links
================================================================================
```
The UTM Bug Fix
The original report showed “Bad request” for all UTM metrics. The issue? Umami’s API expects numeric website IDs, but I was passing domain names.
The Fix:
```python
# Before (broken)
await self.mcp.call_tool(
    "get_utm_metrics",
    arguments={"website": "rhelmer.org", ...}  # ❌ String name
)

# After (working)
await self.mcp.call_tool(
    "get_utm_metrics",
    arguments={"websiteId": 1, ...}  # ✅ Numeric ID
)
```
Now UTM data flows through correctly, and you can see which campaigns actually convert.
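A cheap way to keep this bug from regressing is to resolve the domain to its numeric ID up front, before any tool call. A hedged sketch, assuming the configured websites are available as dicts with `id` and `domain` keys (the shape is illustrative):

```python
def resolve_website_id(websites: list[dict], domain: str) -> int:
    """Map a human-friendly domain name to the numeric website ID the
    Umami API expects. Fails loudly if the domain isn't configured."""
    for site in websites:
        if site["domain"] == domain:
            return site["id"]
    raise ValueError(f"No Umami website configured for domain {domain!r}")
```

Failing loudly on an unknown domain turns an opaque "Bad request" into an actionable error message.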
Chat Mode: Ask Questions About Your Data
The new chat mode lets you interrogate your analytics:
```
$ uv run run.py --start-date 2025-06-01 --website rhelmer.org --chat

🤖 Entering chat mode using GEMINI-CLI!
Type 'quit', 'exit', or 'q' to leave chat mode.

📊 Your question: Which blog post drove the most newsletter signups?

🤖 Based on the data:

1. "Building Whiskers Engine" - 1,142 pageviews, 8 signups
2. "AI-Powered Analytics Reports" - 239 pageviews, 5 signups
3. "Privacy Analytics Without Cookie Banners" - 187 pageviews, 4 signups

The Whiskers Engine post has the highest absolute conversions, but
"AI-Powered Analytics Reports" has a better conversion rate (2.1% vs 0.7%).

Consider:
- Adding more prominent CTAs to high-traffic posts
- A/B testing CTA placement on the Whiskers post
- Creating follow-up content for engaged readers
```
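The rate comparison in that answer is worth making explicit: conversion rate is just signups divided by pageviews, which is why a post with a fifth of the traffic can still "win". For example:

```python
def conversion_rate(signups: int, pageviews: int) -> float:
    """Newsletter signups as a percentage of pageviews, rounded to one decimal."""
    return round(100 * signups / pageviews, 1)

# Numbers from the chat transcript above:
whiskers = conversion_rate(8, 1142)    # high traffic, lower rate
ai_reports = conversion_rate(5, 239)   # low traffic, higher rate
```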
Try It Yourself
Installation
```shell
# Clone the repo
git clone https://github.com/rhelmer/umami-mcp-llm-report
cd umami-mcp-llm-report

# Create virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env with your Umami credentials
```
Run a Report
```shell
# Basic report
uv run run.py \
  --start-date 2025-06-01 \
  --end-date 2026-03-22 \
  --website rhelmer.org

# With AI analysis (requires Gemini CLI or another provider)
uv run run.py \
  --start-date 2025-06-01 \
  --website rhelmer.org \
  --chat \
  --ai-provider gemini-cli
```
Supported AI Providers
- `gemini-cli` (Google)
- `claude` (Anthropic)
- `gpt` (OpenAI)
- Any provider that supports MCP
What’s Next
Planned improvements:
- Automated weekly reports via cron + email
- Anomaly detection (alert on traffic spikes/drops)
- Cohort analysis (retention by signup date)
- Revenue attribution (track paid conversions)
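The weekly-report item could land as little more than a cron entry wrapped around the existing CLI. A sketch, assuming GNU `date` and a configured `mail` command (both assumptions on my part, not shipped features):

```shell
# Hypothetical crontab entry: every Monday at 07:00, report on the past week
# and mail the output. Percent signs must be escaped as \% inside a crontab.
0 7 * * 1 cd /opt/umami-mcp-llm-report && uv run run.py --start-date "$(date -d '7 days ago' +\%Y-\%m-\%d)" --website rhelmer.org | mail -s "Weekly analytics" you@example.com
```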
The Philosophy
This project embodies my approach to AI tooling:
- Minimal dependencies - Every dependency is a liability
- Transparent data - AI should cite sources, not hallucinate
- Actionable insights - Reports should drive decisions, not just show charts
- Privacy-first - All data stays in your Umami instance
The best AI tools augment human decision-making, not replace it. That’s why chat mode lets you ask follow-up questions instead of just dumping a static report.
Try it out: github.com/rhelmer/umami-mcp-llm-report
Questions? The tool is now simple enough that you can read the entire codebase in ~30 minutes. I welcome PRs and issues!