Continuous profiling for production applications.
Find slow functions • Detect N+1 queries • Compare time windows • AI-powered insights
Fast Profiling
Keep production fast & stable.
Compare Time Windows
Spot what changed instantly.
AI-Powered Insights
Find issues & get fixes.
Your AI Agent’s Performance Expert
FlameOn exposes an MCP server that your coding agent can query directly. It finds the slow code — your agent fixes it — all in one loop.
>
My /api/orders endpoint feels slow in production. Can you check FlameOn for any issues?
I'll query FlameOn's MCP server to check for anomalies.
▸
flameon.detect_anomalies
{"since": "24h"}
✓
[
{
"type": "n_plus_one",
"event_name": "web.request",
"event_identifier": "GET /api/orders",
"trace_id": "a1b2c3d4-...",
"call_count": 47,
"message": "Elixir.MyApp.Repo.all/1 called 47 times in a single trace"
},
{
"type": "slow_event",
"event_name": "web.request",
"event_identifier": "GET /api/orders",
"p95_us": 2340000,
"message": "p95 latency 2340ms exceeds threshold"
}
]
Found it: GET /api/orders has an N+1 query. Repo.all/1 is called 47 times per request. Let me get the call tree.
▸
flameon.get_call_tree
{"event_name": "web.request", "event_identifier": "GET /api/orders"}
✓
{
"name": "root", "self_us": 0, "total_us": 2340000,
"children": [{
"name": "OrderController.index/2",
"self_us": 1200, "total_us": 2338000,
"children": [
{"name": "Repo.all/1", "self_us": 45000, "total_us": 45000},
{"name": "Enum.map -> Repo.all/1 (line_items)",
"self_us": 2115000, "total_us": 2115000}
]
}],
"pruned_below_pct": 1
}
The controller loops over each order, calling Repo.all/1 for its line items. Fixing with a preload.
✎
lib/my_app/controllers/order_controller.ex
- orders = Repo.all(Order)
- orders_with_items = Enum.map(orders, fn order ->
- Map.put(order, :line_items, Repo.all(from li in LineItem, where: li.order_id == ^order.id))
- end)
+ orders = Order |> preload(:line_items) |> Repo.all()
Replaced the N+1 loop with a preload that batch-loads all line items in one query. p95 should drop from 2.3s to under 100ms.
>
Users report the dashboard feels slow this hour. Compare it against the previous hour.
Comparing time windows on FlameOn.
▸
flameon.compare_periods
{"event_name": "web.request", "event_identifier": "GET /api/dashboard", "period_a_start": "2026-03-13T08:00:00Z", "period_a_end": "2026-03-13T09:00:00Z", "period_b_start": "2026-03-13T09:00:00Z", "period_b_end": "2026-03-13T10:00:00Z"}
✓
{
"period_a_stats": {
"count": 1420,
"p50": 120000, "p95": 280000, "p99": 410000, "avg_us": 145000
},
"period_b_stats": {
"count": 830,
"p50": 890000, "p95": 2100000, "p99": 3200000, "avg_us": 950000
},
"change_pct": 650.0
}
Confirmed regression — p95 jumped from 280ms to 2.1s (+650%).
Checking what got hot in the current window.
▸
flameon.get_profile
{"event_name": "web.request", "event_identifier": "GET /api/dashboard"}
✓
{
"functions": [
{"function": "MyApp.Stats.compute_daily/1", "self_us": 1850000,
"total_us": 1900000, "call_count": 830, "source_hint": "app"},
{"function": "Enum.reduce/3", "self_us": 720000,
"total_us": 720000, "call_count": 830, "source_hint": "otp"},
{"function": "MyApp.Repo.all/1", "self_us": 45000,
"total_us": 45000, "call_count": 830, "source_hint": "database"}
],
"summary": {"shown": 3, "total": 12, "coverage_pct": 94.2}
}
Stats.compute_daily/1 consumes 1.9s, doing an in-memory reduce over raw rows instead of aggregating in the database.
✎
lib/my_app/stats.ex
- events = Repo.all(from e in Event, where: e.date >= ^start)
- Enum.reduce(events, %{}, fn e, acc ->
- Map.update(acc, e.date, e.count, &(&1 + e.count))
- end)
+ Repo.all(
+ from e in Event,
+ where: e.date >= ^start,
+ group_by: e.date,
+ select: {e.date, sum(e.count)}
+ ) |> Map.new()
Replaced the in-memory Enum.reduce with a GROUP BY query that aggregates in the database.
>
Some users see 5+ second load times on the search page. Most requests are fine. Why?
Pulling the slowest traces from FlameOn.
▸
flameon.get_slow_traces
{"event_name": "web.request", "event_identifier": "GET /search"}
✓
[
{"trace_id": "a1b2c3d4-...", "total_us": 8200000,
"captured_at": "2026-03-13T10:23:41Z"},
{"trace_id": "e5f6a7b8-...", "total_us": 6100000,
"captured_at": "2026-03-13T09:14:22Z"},
{"trace_id": "c9d0e1f2-...", "total_us": 5400000,
"captured_at": "2026-03-13T08:55:03Z"}
]
Normal p50 is 340ms but outliers hit 5–8s. Let me check the common culprit.
▸
flameon.get_function_detail
{"function": "Elixir.MyApp.Search.query/2"}
✓
{
"function": "Elixir.MyApp.Search.query/2",
"total_self_us": 24500000,
"total_occurrences": 15,
"source_hint": "app",
"events": [
{"event_name": "web.request", "event_identifier": "GET /search",
"count": 3, "total_self_us": 22100000, "avg_self_us": 7366666},
{"event_name": "web.request", "event_identifier": "GET /api/suggest",
"count": 12, "total_self_us": 2400000, "avg_self_us": 200000}
]
}
Search.query/2 accounts for 22s of self-time across the 3 slow traces. The query is unbounded for certain search terms. Adding a limit.
✎
lib/my_app/search.ex
- Repo.all(from p in Product, where: ilike(p.name, ^term))
+ Product
+ |> where([p], ilike(p.name, ^term))
+ |> limit(100)
+ |> Repo.all()
Added LIMIT 100. Consider a GIN trigram index on products.name for faster ILIKE.
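If you go that route, a migration along these lines should work (the module and index names here are only examples):

defmodule MyApp.Repo.Migrations.AddProductsNameTrgmIndex do
  use Ecto.Migration

  def up do
    # pg_trgm adds trigram matching so Postgres can index ILIKE '%term%' lookups
    execute "CREATE EXTENSION IF NOT EXISTS pg_trgm"
    # illustrative index name; gin_trgm_ops is the trigram operator class
    execute "CREATE INDEX products_name_trgm_idx ON products USING gin (name gin_trgm_ops)"
  end

  def down do
    execute "DROP INDEX products_name_trgm_idx"
  end
end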
Ready to give your AI agent superpowers?
Add FlameOn's MCP server to your agent's tool config and start finding performance issues automatically.
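Most MCP-capable agents register the server with a JSON entry like the one below; the command and key shown are placeholders, so substitute the values from FlameOn's setup docs:

{
  "mcpServers": {
    "flameon": {
      "command": "flameon-mcp",
      "args": ["--api-key", "<your FlameOn API key>"]
    }
  }
}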
Get Started →