Intempt
Experiment · Ship winners

Test fast. Ship winners faster.

Experiments share profiles and segments with journeys. Winners ship as personalized experiences in one click.

Experiments · Live
Pricing Page CTA Test · Running

Control

4.2%

Variant A

5.8%

Variant B

5.1%

Confidence: 94.2% · CUPED active · -31% variance

12

Active tests

68%

Win rate

+$184k

Revenue lift

Blu

Variant A reached 94% confidence. Ship as personalization rule for high-fit users in one click.

Program ROI in one view

One dashboard for the entire experimentation program

See every active experiment, its status, primary metric, and estimated revenue impact in one view. No spreadsheet, no vendor dashboard per team — one program overview.

Ask Blu

Experimentation program showing strong ROI

Running 12 active experiments with 4 showing statistical significance. Shipped experiments contributed an estimated +$34K/month in incremental revenue. Video creative tests outperforming static by 2.1x on conversion rate.

Analyze results · Optimization ideas · Impact assessment
Last updated: 2 min ago

$139,142.86

Total revenue ⓘ

↑ 1.5% vs. previous period

Mar 16Apr 1Apr 14

$48,700.00

Intempt Attributed Revenue (35.00% of total) ⓘ

↑ 0.9% vs. previous period

Mar 16Apr 1Apr 14

INTEMPT ATTRIBUTED REVENUE

Per experience ⓘ

$6,957.14

All Experiences ⓘ

$48.7K

100.00%

Personalization ⓘ

$21.8K

44.76%

Experiments ⓘ

$26.9K

55.24%

10 experiences
Name ⓘ · Status ⓘ · Duration ⓘ · Type ⓘ · Created by ⓘ · Last updated ↓

Homepage Hero A/B Test

95% CI · CUPED · Sequential Testing · Benjamini-Hochberg
Active · 13 days · Experiment
Sarah Johnson
Jun 11, 2025
Experience Optimizer

Every experiment, every team, one program view.

Program-level ROI dashboard

See the aggregate revenue lift, experiments shipped, and win rate across the entire program — not just the test you're currently running.

Active test monitoring

Every running experiment shows current sample size, primary metric trend, guardrail status, and estimated time to significance.

Experiment history and learnings

Winning and losing results are stored with the hypothesis, audience, and outcome — searchable across the full program history.

Team and product area views

Filter the program dashboard by team, feature area, or experiment type so every stakeholder sees only what's relevant to them.

Most teams see 3–5x more experiments shipped per quarter when the full program is visible in one dashboard.

Live, always-valid stats

Real-time results with statistical confidence

MSPRT always-valid statistics let you check results at any time without inflating false-positive rates. CUPED variance reduction means you reach significance faster with the same traffic.

HYPOTHESIS

Revamping the Product Detail Page to improve layout, visual hierarchy, and highlight key information (e.g., price, reviews, shipping) will increase user engagement and drive higher conversion rates compared to the existing PDP design.

BLU SUMMARY

Mixed results with significant CVR gains

Video Ads Creative shows +43.4% lift in Conversion Rate (p < 0.001) with statistical significance. However, this comes with a -10.1% decrease in Visitors. The net impact on revenue is estimated at +$12.4K/month if shipped.

KEY TAKEAWAYS

Significant CVR lift (+43.4%) but visitor count declined by -10.1%

Net revenue impact estimated at +$12.4K/month if shipped

Statistical significance achieved (p < 0.001) on primary metric


Cumulative Users & Impressions

6,450

Cumulative Users ⓘ

Nov 11Nov 14Nov 17
Static Ads Creative (3,225) · Video Ads Creative (3,225)

15,800

Total Impressions ⓘ

Nov 11Nov 14Nov 17
Experience Optimizer

Always-valid stats. No peeking penalty.

MSPRT always-valid testing

Sequential testing means you can check results at any time without inflating false-positive rates. No fixed sample size required upfront.
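The core of the mixture SPRT can be sketched in a few lines. This is a textbook illustration for a normally distributed metric with known variance, not Intempt's actual engine; the mixing-prior variance `tau2` and null mean `mu0` are assumptions:

```python
import math

def msprt_p_values(xs, sigma2=1.0, tau2=1.0, mu0=0.0):
    """Always-valid p-values from the mixture SPRT (normal data, known
    variance sigma2, N(mu0, tau2) mixing prior over the true mean).
    p_n = min(p_{n-1}, 1 / Lambda_n) stays valid no matter when you peek."""
    p, s, ps = 1.0, 0.0, []
    for n, x in enumerate(xs, start=1):
        s += x - mu0                      # running sum, centered at the null
        lam = math.sqrt(sigma2 / (sigma2 + n * tau2)) * math.exp(
            tau2 * s * s / (2 * sigma2 * (sigma2 + n * tau2))
        )
        p = min(p, 1.0 / lam)             # monotone: peeking never "un-rejects"
        ps.append(p)
    return ps
```

Because the p-value sequence only ever decreases, you can check after every observation and stop the moment it crosses your alpha, and the false-positive rate stays controlled.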

CUPED variance reduction

Pre-experiment covariate adjustment reduces variance by 20–50%, meaning you reach significance faster with the same traffic volume.

Guardrail metrics

Define guardrail metrics alongside your primary goal. If a guardrail is violated, the experiment is flagged automatically before it ships.
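In spirit, a guardrail check is a bounded comparison against control. This toy version ignores statistical uncertainty, which a real engine would account for; the metric names and threshold shape are illustrative:

```python
def check_guardrails(control, variant, limits):
    """Flag guardrail violations. `limits` maps metric name to the maximum
    allowed relative increase, e.g. {"bounce_rate": 0.05} tolerates up to a
    +5% rise. Returns the names of violated guardrails."""
    violations = []
    for metric, max_increase in limits.items():
        rel_change = (variant[metric] - control[metric]) / control[metric]
        if rel_change > max_increase:
            violations.append(metric)
    return violations
```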

Multi-metric dashboards

See primary metric, secondary metrics, and guardrails in one chart with confidence intervals and significance indicators.

Teams using CUPED + MSPRT typically ship winners 30–40% faster than with fixed-horizon testing setups.

Decisions in 5 minutes

From experiment summary to ship decision in one click

When an experiment reaches significance, Blu summarizes the result, flags guardrail violations, and lets you ship the winner as a personalization rule or journey branch in one click.

Experiences / Spring Promo Experiment

SETUP & STATUS

Decision: In Progress
Targeting: North America Targeting
Duration: 7/9/2025 – Present (223d / 90d target)
Static Ads Creative (Control): 50% · 74.95K (50.1%)
Video Ads Creative: 50% · 74.6K (49.9%)

HYPOTHESIS

Revamping the Product Detail Page to improve layout, visual hierarchy, and highlight key information (e.g., price, reviews, shipping) will increase user engagement and drive higher conversion rates compared to the existing PDP design.

BLU SUMMARY

Ship Video Ads Creative

Significant lift on 5/7 metrics, including the primary metric [purchase_event (event_count)]: +9.8%. Shipping to 100% of traffic projects a +9.8% lift. Estimated ~2,100 additional purchases/month based on current traffic of ~4,800/month.

KEY TAKEAWAYS

Significant lift on 5/7 metrics including the primary metric

Net revenue impact estimated at +$12.4K/month if shipped

Statistical significance achieved (p < 0.001) on primary metric

Compare relative to
CI 95%: α = 0.05 · CUPED ⓘ: Yes · Sequential Testing ⓘ: Yes · BH ⓘ: On

Primary Metrics

1. purchase_event (event_count)
Video Ads Creative · Best
+9.8% · +1.0%
2. purchase_event (event_dau)
Experience Optimizer

From winner to shipped personalization, without engineering.

Plain-language result summary

Blu explains what happened, why, and whether the result is reliable — in language your whole team can act on.

Guardrail violation flags

If the winning variant violated a guardrail metric, Blu surfaces it before you ship so you don't accidentally harm a secondary goal.

One-click ship as personalization

Ship the winner as a personalization rule targeting the same audience — no engineering ticket, no separate tool.

Automatic holdout group

A holdout is maintained automatically so you can measure the true long-term lift of the shipped winner over time.

Most teams go from experiment summary to shipped personalization in under 5 minutes.

Ask Blu anything about your experiments

Type a question or invoke a skill. Blu reads results, scores opportunities, and recommends what to ship.

Experience Optimizer
Online · ready to run skills

▎How it works

From hypothesis to shipped winner, in days not weeks

Add Integration

SOURCES

JavaScript

Track events from web applications

Node JS

Server-side event tracking

iOS

Mobile analytics for iOS

Android

Mobile analytics for Android

Step 01

Connect your sources

Server-side and client-side events land in the same unified profile. No separate instrumentation for experimentation — use the events you already track.
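As an illustration of what "the events you already track" look like, here is a generic CDP-style payload. The field names (`user_id`, `event`, `properties`) follow common tracker conventions and are assumptions, not the real Intempt SDK schema; the point is that client- and server-side events share one identity key:

```python
import json
import time
import uuid

def build_event(user_id, name, properties=None):
    """Hypothetical tracking payload in the shape most CDP-style SDKs use.
    The same user_id on web, server, iOS, and Android events is what lets
    them land in one unified profile."""
    return {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,            # the key that unifies profiles across sources
        "event": name,
        "properties": properties or {},
        "timestamp": int(time.time() * 1000),
    }

payload = build_event("user_42", "goal_completed_in_journey", {"plan": "trial"})
body = json.dumps(payload)  # what a server-side tracker would send
```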

Primary metric · conversion
goal_completed_in_journey
Guardrail · bounce rate
< 5% increase allowed
Audience · trial users
Shared segment · 4,200 users
CUPED · enabled
Covariate: sessions_7d

Step 02

Set primary metric, guardrails, audience, and CUPED

Configure your experiment in one form: primary metric, guardrail metrics, target audience from shared segments, and CUPED covariate for faster results.

Winner: variant B (+12.4%)
Significance: 97.3%
Guardrail: no violation
Shipped as personalization
Holdout: maintained

Step 03

Read the winner, ship as personalization

Results update live. When significance is reached, Blu summarizes the result, flags guardrails, and lets you ship the winner as a personalization rule in one click.

Real results, not just tech

We drive measurable outcomes in the first 90 days. Beyond the platform.

Jim Stromberg
StockInvest
01 / 03
We were losing visitors before they signed up. Intempt's personalized experiences changed that - we started meeting people where they were instead of guessing. Once they're in, Intempt's automated email takes over and keeps the relationship moving. Acquisition and retention finally feel like one connected motion instead of two separate problems.

Jim Stromberg, CEO

StockInvest

Case Study

StockInvest needed to turn anonymous traffic into registered users before any retention strategy could work. With Intempt's Experiences, they personalized the anonymous visitor flow, surfacing the right content and CTAs to boost signup conversion. Once users signed up, automated Journeys nurtured them through onboarding and deeper engagement, steadily increasing lifetime value.

▎Why teams switch

Intempt vs the testing patchwork

Most teams stitch Optimizely or VWO with a CDP. Here's the side-by-side.

Setup time
Other tools
Days/weeks
Intempt
Minutes
Client-side + server-side
Other tools
Separate products
Intempt
One platform
Statistical engine
Other tools
Frequentist only
Intempt
MSPRT + CUPED
Audience targeting
Other tools
Manual sync
Intempt
Shared segments
Ship winner as personalization
Other tools
Engineering handoff
Intempt
One click
Multi-variant + multi-page
Other tools
Limited
Intempt
Real-time results
Other tools
Daily refresh
Intempt
Live always-valid
Holdout group support
Other tools
Manual
Intempt
Built-in
Per-experiment fees
Other tools
Common
Intempt
None

▎Pricing

MTU-based pricing. No per-experiment fees.

No per-experiment fees. No traffic caps on Pro and above. One platform replaces 3–4 tools.

Stop debating winners. Start shipping them.

Connect your sources in minutes. Run your first always-valid experiment by tomorrow.

Experiment questions, answered

Frequently asked questions

Everything teams ask before replacing their testing stack.

What types of experiments can I run?

Intempt supports A/B tests, multivariate tests, multi-page experiments, Champion/Challenger tests, and server-side experiments. You can test UI changes, copy, pricing, feature flags, and backend logic from the same platform.

What statistical methods does Intempt use?

Intempt uses MSPRT (mixture sequential probability ratio test) for always-valid testing, meaning you can check results at any time without inflating false-positive rates. CUPED variance reduction helps you reach significance faster with the same traffic.

Can experiments target the same audiences as my journeys?

Yes. Experiment audiences use the same shared segments as your journeys and personalizations. There is no audience sync required and targeting updates apply in real time as profile data changes.

What happens when an experiment wins?

When an experiment reaches significance, you can ship the winner as a personalization rule targeting the same audience in one click. No engineering ticket or separate tool required.

How is experimentation different from personalization?

Experimentation tests whether a change improves a metric. Personalization applies the winning change to specific audiences permanently. In Intempt, the two are connected — you ship an experiment winner directly as a personalization rule.

Do I need engineers to launch an experiment?

No. Client-side experiments use the Intempt visual editor or a snippet. Server-side experiments use the Intempt SDK. Most teams launch their first experiment within a day of setup without a dedicated engineering sprint.

Can I run multiple experiments at the same time?

Yes. Intempt handles mutual exclusion and collision detection automatically. You can also group experiments into a program and define priority order.

Can I check results before the experiment ends?

MSPRT always-valid statistics allow you to check results at any time without the peeking problem that inflates false-positive rates in fixed-horizon tests. You do not need to commit to a fixed sample size or wait until a predetermined end date.

What metrics can I use as goals and guardrails?

You can use any event tracked in Intempt as a primary or guardrail metric. Common primary metrics include conversion rate, revenue per user, and feature adoption. Common guardrails include bounce rate, support volume, and page performance.

How is experiment data secured?

All experiment data, variant assignments, and results are encrypted at rest and in transit. Intempt is SOC 2 Type II certified and GDPR compliant. No experiment data leaves your contracted data region.

Still have questions?
Our team can walk you through your first experiment setup in 20 minutes.
Talk to sales
Experimentation Platform for Growth Teams