What brands mess up with growth experimentation

Which of these would you prefer from a growth test?

Experiment 1:

Month 1: +421 leads

Month 2: +8 leads

Month 3: +12 leads

Experiment 2:

Month 1: +8 leads

Month 2: +28 leads

Month 3: +87 leads

To me, Experiment 1 shows the capacity to launch, which matters, but after that I don't see real traction or fit.

Experiment 2 shows velocity. I'd take Experiment 2 all day. In a year, it's going to be ahead 9 times out of 10.

You want a channel you can scale and systemize.

I think this is where teams, especially startups, tend to screw up.

They have target goals, whether MQLs, leads, or qualified pipeline, and they turn off projects too early because those projects can't hit absolute numbers.

OR

They stick with projects because they "take time," even when the percentage growth is below 10% in the first few months of a channel test.

Both of these lead to bad outcomes.

I first heard about LVR (Lead Velocity Rate) from Jason Lemkin ten years ago, and then read about it in T2D3 by Stijn Hendrikse.

Whether it's leads, trials, or whatever funnel metric you optimize for, the idea is to test experiments against velocity; that leads to better outcomes.
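As a rough sketch of the math (my illustration, not from the post): LVR is commonly computed as the month-over-month percentage change in leads. Applied to the two experiments above, it makes the difference obvious, with Experiment 1 collapsing after launch while Experiment 2 compounds:

```python
def lvr(prev: int, curr: int) -> float:
    """Lead Velocity Rate: percent change in leads, month over month."""
    return (curr - prev) / prev * 100

# Monthly lead counts from the two experiments above
exp1 = [421, 8, 12]
exp2 = [8, 28, 87]

for name, leads in [("Experiment 1", exp1), ("Experiment 2", exp2)]:
    # Pair each month with the next to get month-over-month rates
    rates = [lvr(a, b) for a, b in zip(leads, leads[1:])]
    print(name, [f"{r:+.0f}%" for r in rates])
# Experiment 1 ['-98%', '+50%']
# Experiment 2 ['+250%', '+211%']
```

Judged on raw totals, Experiment 1 wins month one; judged on velocity, Experiment 2 is the clear keeper.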
 
