Firaz Zakariya

Most experimentation tooling is built for companies with millions of daily active users. If you have hundreds, the standard advice — “just wait for statistical significance” — means waiting months, or running tests so underpowered that you’re mostly measuring noise. microexp is a Python package built for that situation.

Problem

Running experiments on Cawosh meant working with small weekly traffic. Classical fixed-horizon tests required sample sizes I couldn't reach before product and traffic conditions changed underneath the experiment. I needed methods that let you peek at results without inflating the false positive rate, and that extract more signal from the data I had.

Approach

Sequential testing (mSPRT). The mixture Sequential Probability Ratio Test lets you test continuously and stop as soon as you have enough evidence — in either direction. Unlike naive peeking, the Type I error stays controlled at the chosen level.
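A minimal sketch of the idea, using the standard normal-mixture mSPRT statistic for a difference of means with known variance (function names, defaults, and the peeking schedule are illustrative assumptions, not microexp's actual API):

```python
import numpy as np

def msprt_lambda(z, v, tau2):
    """mSPRT statistic for an estimate z ~ N(theta, v) under H0: theta = 0,
    with a N(0, tau2) mixing distribution over the alternative effect."""
    return np.sqrt(v / (v + tau2)) * np.exp(z**2 * tau2 / (2 * v * (v + tau2)))

def sequential_test(x, y, sigma2=1.0, tau2=1.0, alpha=0.05):
    """Peek after every new observation pair; stop once Lambda >= 1/alpha.
    Returns the stopping sample size, or None if the data ran out first."""
    for n in range(2, min(len(x), len(y)) + 1):
        z = x[:n].mean() - y[:n].mean()   # estimated lift
        v = 2 * sigma2 / n                # variance of the difference of means
        if msprt_lambda(z, v, tau2) >= 1 / alpha:
            return n                      # evidence of a nonzero effect
    return None                           # no decision yet; keep collecting
```

The point is the stopping rule: because the mixture likelihood ratio is a martingale under the null, checking it after every observation still keeps the Type I error at alpha, unlike repeatedly re-running a fixed-horizon t-test.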

Bayesian A/B testing. Models the conversion rate as a Beta-distributed random variable and updates it as data arrives. Gives you a probability that variant B beats control, rather than a p-value, which is more useful for small-traffic decisions.
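For conversion data the Beta-Binomial model makes this a few lines. A sketch (the uniform Beta(1, 1) prior, Monte Carlo comparison, and function name are illustrative assumptions, not microexp's actual API):

```python
import numpy as np

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=0):
    """P(rate_B > rate_A) under independent Beta(1, 1) priors,
    estimated by sampling both posteriors and comparing the draws."""
    rng = np.random.default_rng(seed)
    a = rng.beta(1 + conv_a, 1 + n_a - conv_a, samples)
    b = rng.beta(1 + conv_b, 1 + n_b - conv_b, samples)
    return (b > a).mean()
```

A result like "B beats control with probability 0.93" is something you can act on at a given risk tolerance, whereas a small-sample p-value of 0.11 mostly tells you the test was underpowered.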

CUPED variance reduction. Uses pre-experiment covariates (e.g. prior week’s behaviour) to reduce outcome variance, tightening confidence intervals without collecting more data. Often equivalent to running the experiment with 20–40% more users.
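The adjustment itself is one line of algebra: subtract the part of the outcome predicted by the pre-experiment covariate. A sketch under the standard CUPED formulation (names are illustrative):

```python
import numpy as np

def cuped_adjust(y, x):
    """CUPED-adjusted metric: y_cv = y - theta * (x - mean(x)),
    where theta = cov(y, x) / var(x). The mean of y is preserved,
    but its variance shrinks by the squared correlation with x."""
    theta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())
```

Because the adjustment only re-centres using pre-experiment data, it cannot bias the treatment effect estimate; it just tightens the confidence interval around it.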

Package design. [Describe the API — e.g., a Test class you instantiate with a metric and alpha, then call .update(data) and .result(). Any CLI or notebook integration.]

Results

[Fill in: did it reach PyPI? How does it compare to a naive fixed-horizon test on your own traffic? Any simulation results showing Type I error control?]

Code & package

pip install microexp — [PyPI URL]

Source: [GitHub URL]