TapQuality

The buy-side platform for
inbound phone calls.


© 2026 TapQuality. All rights reserved.

SOC2 Type II Compliant

Running every 15 minutes

Autonomous
optimization

Q is an AI agent that observes your campaign performance, proposes bid experiments, evaluates results against locked scoring functions, and applies winning changes — autonomously, every 15 minutes, 24/7.

Get started · See how it works
Observe → Plan → Experiment → Evaluate → Decide · 15-minute cycle
Autonomous optimization

The idea

“What if an AI agent could run the same optimization loop a senior media buyer runs — but do it every 15 minutes instead of once a week?”

Inspired by Karpathy's autoresearch pattern. Q observes campaign performance, plans bid experiments, evaluates results against locked scoring functions, and applies winning changes — autonomously, 24/7, with guardrails the agent can't override.

The optimization loop

Every 15 minutes, Q runs a complete observe → plan → experiment → evaluate → decide cycle for each active campaign.

01

Observe

Q reads segment performance data — calls, conversions, CPA by geo, hour, source, and subID. It reviews the last 50 experiments to learn what worked and what didn't.

Reads last 50 experiments + live segment data
02

Plan

Based on observed patterns, Q identifies high-opportunity segments. "Florida converts at 73% vs 31% national — the geo modifier should be higher."

AI reasoning over performance patterns
03

Experiment

Q proposes a bid modifier change — e.g., raising the FL geo modifier by 20%. The system validates the proposal against 5 hard constraints before applying it.

e.g. FL geo modifier +20%
04

Evaluate

After at least 50 calls have accumulated, Q evaluates the experiment. The locked evaluator compares treatment vs. baseline on proxy score and CPA.

Locked scoring — agent cannot modify
05

Decide

Q recommends commit or revert. The system checks the locked evaluator independently — if the evaluator disagrees, it rejects the agent's recommendation. Safety first.

Commit winner or revert to baseline
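The five steps above can be sketched as a single decision function. This is a minimal illustration in Python, not TapQuality's actual implementation; every name here (`Experiment`, `locked_evaluator`, `run_cycle`, `MIN_CALLS`) is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One bid-modifier experiment (hypothetical structure)."""
    dimension: str          # e.g. "geo_state:FL"
    delta: float            # e.g. 0.20 for a +20% modifier change
    calls: int = 0
    baseline_cpa: float = 0.0
    treatment_cpa: float = 0.0

MIN_CALLS = 50  # minimum call threshold before evaluation

def locked_evaluator(exp: Experiment) -> bool:
    """Immutable scoring the agent can't see or modify:
    commit only if treatment CPA beats baseline CPA."""
    return exp.treatment_cpa < exp.baseline_cpa

def run_cycle(history, live_segments, propose, agent_recommends_commit):
    """One observe → plan → experiment → evaluate → decide pass."""
    # Observe: last 50 experiments plus live segment data
    context = (history[-50:], live_segments)
    # Plan + Experiment: the agent proposes a bid modifier change
    exp = propose(context)
    # Evaluate: wait until enough calls have accumulated
    if exp.calls < MIN_CALLS:
        return "running"
    # Decide: the locked evaluator independently checks the agent's call
    if agent_recommends_commit(exp) and locked_evaluator(exp):
        return "committed"
    return "reverted"
```

In this sketch the evaluator has veto power: even when the agent recommends a commit, a failing score reverts the experiment, mirroring the "safety first" decide step above.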

What Q actually does

Real experiment log from a single campaign. Q ran 47 experiments over 2 weeks — 31 committed, 12 reverted, 4 expired.

Medicare AEP — National · autonomous
Last cycle: 2 min ago
committed · geo_state:FL · +20% · 312 calls

FL converts at 73% vs 31% national. Increasing modifier to capture more volume.

$47.80 → $42.10
-12% CPA
running · hour_of_day:10-14 · +15% · 89 calls

Peak conversion window 10am-2pm ET. 62% of conversions occur here.

$44.20 → $41.50
-6% CPA
reverted · source:mediaalpha · -25% · 156 calls

Source quality degraded. Reducing modifier to limit exposure. Evaluator confirmed no improvement.

$38.90 → $41.20
+6% CPA
running · geo_state:TX · +10% · 34 calls

TX shows improving conversion trend. Testing moderate bid increase.

$36.40 → —
committed · hour_of_day:22-06 · -30% · 201 calls

Overnight calls convert poorly. Reducing bids to preserve budget for peak hours.

$62.10 → $58.40
-6% CPA
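The before/after bids in the log suggest segment modifiers scale a base bid. Here is a minimal sketch, assuming modifiers are additive percentage deltas applied multiplicatively and clamped to a bid floor; the formula, the function name, and the $5.00 floor are all assumptions, not documented behavior:

```python
def effective_bid(base_bid: float, modifiers: dict[str, float],
                  floor: float = 5.00) -> float:
    """Hypothetical bid math: sum modifier deltas, apply once, clamp to floor."""
    multiplier = 1.0 + sum(modifiers.values())
    return round(max(base_bid * multiplier, floor), 2)

# A $40 base bid with an FL geo +20% and a peak-hour +15% modifier:
bid = effective_bid(40.00, {"geo_state:FL": 0.20, "hour_of_day:10-14": 0.15})
# → 54.0
```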

Marketing Autopilot

Most experiments fail — that's by design. Q runs dozens per campaign, reverts what doesn't work, and compounds the improvements that do. Your CPA drops while you sleep.

[Animated chart: Q Optimizer · Medicare AEP — National · CPA per experiment across 67 experiments, with live counters for current CPA, CPA drop, and experiments kept]
Safety architecture

The locked evaluator

Q proposes changes. But a separate scoring function — invisible to the agent, immutable by the agent — validates every decision before it touches your campaigns.

The creative function (plan, experiment) is separated from the evaluation function (score, validate). The agent can't game what it can't see.

Hard limits (agent cannot bypass)

  • Scoring thresholds — proxy score and CPA limits
  • Modifier delta limits — max ±30% per experiment
  • Bid floor — effective bid never drops below minimum
  • Concurrent experiment cap — max 3 per campaign
  • Minimum call threshold — 50+ calls before evaluation
  • Dimension allowlist — only approved segment types

What the agent decides (creative judgment)

  • Which segments to experiment on
  • How much to adjust modifiers (within limits)
  • When to commit or revert experiments
  • Pattern recognition across experiment history
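Taken together, the hard limits above amount to a pre-flight check on every proposal. A sketch assuming the limits listed; the $5.00 floor value and the allowlisted dimension names are illustrative guesses, not TapQuality's actual configuration:

```python
MAX_DELTA = 0.30            # max ±30% per experiment
MAX_CONCURRENT = 3          # concurrent experiment cap per campaign
BID_FLOOR = 5.00            # assumed floor value (not specified above)
ALLOWED_DIMENSIONS = {"geo_state", "hour_of_day", "source", "sub_id"}

def validate_proposal(dimension: str, delta: float,
                      projected_bid: float, active_experiments: int) -> bool:
    """Hard limits the agent cannot bypass, checked before any change applies."""
    return (
        dimension.split(":")[0] in ALLOWED_DIMENSIONS  # dimension allowlist
        and abs(delta) <= MAX_DELTA                    # modifier delta limit
        and projected_bid >= BID_FLOOR                 # bid floor
        and active_experiments < MAX_CONCURRENT        # concurrency cap
    )
```

The separation matters: this check lives outside the agent, so Q proposes, but only proposals that pass every limit reach a live campaign.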

A fully automated marketing team

Q doesn't just optimize silently. It keeps you in the loop — Slack messages, email digests, or SMS alerts with what changed, why, and what it plans to do next.

Most teams go from shadow to autonomous within a week.

Shadow · Advisor

Q runs the full optimization loop but doesn't apply changes. See what Q would have done — build confidence before going autonomous.

  • Full experiment proposals logged
  • No modifiers changed
  • Review "what would have happened" in dashboard
  • Most teams go autonomous within a week
Supervised · Copilot

Q proposes experiments, but you approve before they run. See everything Q is thinking — same AI, your final call.

  • Proposals require your approval
  • Commit/revert recommendations visible
  • Override any decision
  • Available if your compliance requires it
Autonomous · Autopilot

Q runs independently, 24/7. The locked evaluator validates every decision. Changes are applied automatically. Q messages you via Slack, email, or SMS.

  • Locked evaluator validates every decision
  • Auto-revert on timeout (14 days)
  • Campaign-level kill switch always available
  • Performance updates via Slack, email, or SMS

Autoresearch, adapted for performance marketing

Karpathy's autoresearch pattern lets an AI agent iteratively improve code. We adapted it for bid optimization — same architecture, with guardrails built for real ad spend.

Concept    | Autoresearch              | TapQuality
Playbook   | Locked operational manual | Campaign optimization policy — your rules, Q's boundaries
Evaluator  | Locked scoring function   | Locked scoring — proxy score + CPA thresholds the agent can't see
Experiment | Mutable code changes      | Bid modifier adjustments — the levers Q is allowed to pull
History    | Append-only results log   | Full experiment audit trail — every proposal, result, and decision
Commit     | Keep what worked          | Winner applied — modifier stays, baseline updated
Revert     | Undo what didn't          | Loser rolled back — modifier restored to pre-experiment value

The numbers

  • 96 experiments/day · per campaign at 15-min cycles
  • 14% avg CPA reduction · within first 2 weeks
  • < 60s cycle time · observe → decide in under a minute
  • 0 manual interventions · in autonomous mode

Let Q manage your campaigns

Start in shadow mode. Watch Q work. Move to autonomous when you're ready.

Get started