Cut the debate. Test it.

Target Test is built for results. I ship experiments that lift conversion, reduce drop-off, and pay for themselves. Adobe Target at the core; GTM, WordPress, Framer, and Squarespace supported.

What I Do

Well-built experiments, decisions without guesswork

Sound hypotheses

Prioritized by impact and feasibility—no “let’s just try things.”

Clean implementation

Client-side and, when useful, server-side. Events defined up front.

QA that prevents bad reads

Targeting, flicker, performance, and tracking verified.

Responsible stats

Power, MDE, and significance respected—no “early wins.”

Decisions, not decks

Keep / iterate / retire with expected impact and risk notes.

Build, test, learn. Then repeat.

A reliable six-step cycle: measure the baseline, craft the hypothesis, implement in Adobe Target, pass QA, run under guardrails, and ship a decision backed by effect size, confidence, and risk notes. Designed for speed without shortcuts, so wins compound quarter over quarter.

01

Discovery & metrics

Align on why we’re testing and what “good” looks like. We define the business goal, choose a primary KPI (and guardrails), audit the current data layer, and capture constraints (traffic, tech, timelines). This prevents vanity tests and sets an honest baseline for impact.
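
To make that concrete, the output of this step is essentially a short test brief. A minimal sketch (every name and number below is an illustrative placeholder, not a locked template):

```js
// Illustrative test brief captured during discovery; all values are placeholders.
const testBrief = {
  businessGoal: 'Increase demo requests from the pricing page',
  primaryKpi: 'demo_request_submit',            // one KPI drives the decision
  guardrails: ['bounce_rate', 'page_load_time', 'refund_rate'],
  baseline: { conversionRate: 0.03, weeklyEligibleVisitors: 12000 },
  constraints: {
    traffic: 'pricing page only',
    tech: 'Adobe Target client-side, GTM for events',
    timeline: 'decision needed within one quarter'
  }
};
```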

02

Hypothesis & prioritization

Write a falsifiable hypothesis (“If we do X for Y audience, Z metric will change because…”), then prioritize by impact × confidence ÷ effort. We lock the primary/secondary metrics, MDE, required sample size, estimated duration, targeting, and the exact variant spec (copy/layout/logic). This guards against mid-test scope creep and peeking.
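
For a sense of how the sample size and duration estimates fall out of the MDE, here is a minimal sketch using the standard two-proportion normal approximation (the baseline rate and lift below are illustrative, not recommendations):

```js
// Rough per-variant sample size for a two-sided two-proportion test,
// normal approximation, alpha = 0.05 (z = 1.96), power = 0.80 (z = 0.84).
function sampleSizePerVariant(baselineRate, relativeMde) {
  const zAlpha = 1.96;
  const zBeta = 0.84;
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeMde);       // expected variant rate
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Example: 3% baseline conversion, detect a +10% relative lift.
const perVariant = sampleSizePerVariant(0.03, 0.10); // ≈ 53,000 visitors per arm
// Duration ≈ (perVariant × number of variants) / weekly eligible traffic.
```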

03

Development

Build the variants cleanly—HTML/JS/CSS in Adobe Target (or your tool), with instrumentation via GTM/SDK and events to GA4 (or your analytics). We suppress flicker, protect Core Web Vitals, and keep changes encapsulated and reversible. Accessibility and cross-browser parity are non-negotiable.
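
On the instrumentation side, the exposure event is typically one small, reversible push to the data layer that a GA4 tag picks up. A minimal sketch, with hypothetical event and parameter names:

```js
// Push an experiment-exposure event to the GTM data layer so a GA4 event tag
// can forward it. Event and parameter names below are placeholders.
window.dataLayer = window.dataLayer || [];

function trackExposure(experimentId, variantId) {
  window.dataLayer.push({
    event: 'experiment_exposure',
    experiment_id: experimentId,   // e.g. 'pricing_hero_test'
    variant_id: variantId          // e.g. 'control' or 'v1'
  });
}

// Fired once from the Adobe Target offer (or your tool's variant code),
// so every downstream metric can be segmented by variant.
trackExposure('pricing_hero_test', 'v1');
```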

04

QA & controlled launch

Run a strict checklist: visual parity, edge cases (Safari/iOS, low bandwidth, ad-blockers), event firing, audience eligibility, and privacy. Launch to a small % first, watch for anomalies, then ramp deliberately. Define abort criteria (SRM, broken UX, severe performance hit).
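
The SRM abort criterion, for instance, is just a chi-square goodness-of-fit test on the observed traffic split. A rough sketch (the threshold is a common convention, not a universal rule):

```js
// Chi-square goodness-of-fit check for sample-ratio mismatch (SRM).
function srmChiSquare(observedCounts, expectedRatios) {
  const total = observedCounts.reduce((sum, n) => sum + n, 0);
  return observedCounts.reduce((chi2, observed, i) => {
    const expected = total * expectedRatios[i];
    return chi2 + ((observed - expected) ** 2) / expected;
  }, 0);
}

// Example: a 50/50 test that bucketed 10,210 vs. 9,790 visitors.
const chi2 = srmChiSquare([10210, 9790], [0.5, 0.5]);  // ≈ 8.82
// With 1 degree of freedom, chi2 above ~10.83 (p < 0.001) is a common
// trigger to pause the test and audit targeting or redirect logic.
```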

05

Monitoring & readout

Track progress without “p-hacking.” I monitor SRM, variance, bots/spam, segment skews, and guardrails. When power and significance thresholds are met, I calculate effect size and confidence intervals, then sanity-check them against secondary metrics and qualitative context. No victory laps mid-game.
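
The readout itself boils down to a lift estimate with an interval around it. A minimal sketch for conversion-rate metrics (normal approximation, illustrative numbers):

```js
// Relative lift and a 95% confidence interval for the absolute difference
// in conversion rates between control (A) and variant (B).
function readout(convA, nA, convB, nB) {
  const z = 1.96;                        // 95% two-sided
  const pA = convA / nA;
  const pB = convB / nB;
  const diff = pB - pA;
  const se = Math.sqrt((pA * (1 - pA)) / nA + (pB * (1 - pB)) / nB);
  return {
    relativeLift: diff / pA,
    ci95: [diff - z * se, diff + z * se]  // absolute difference
  };
}

// Example: 540/18,000 vs. 610/18,000 conversions (illustrative).
const result = readout(540, 18000, 610, 18000);
// relativeLift ≈ 0.13, ci95 ≈ [0.0003, 0.0075]: the interval excludes zero,
// which supports a "keep", sized against the guardrail metrics.
```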

06

Report & next steps

Make the call: Keep / Iterate / Retire—with quantified impact, risk notes, and exactly what to ship. Successful variants get a rollout plan; near-misses become follow-ups; losses go to the “do-not-repeat” library. We update the backlog and the win library so gains compound.

Frequently Asked Questions

What is “Target Test”?

How long does an A/B test run?

Do I need big traffic?

Will testing hurt SEO or performance?

Can you work with VWO or Optimizely?

What do I deliver at the end?

Turn uncertainty into leads

If you want design judgment and technical execution in one place, let’s talk.

JOSUE SB

Building digital things that actually make sense

2025 - All rights reserved
