Why MVT-Based Response Testing Breaks Down When Direct Mail Drop Frequency Becomes the Decision
This is Post 2 of a Series on Multivariate Testing and Economic Truth in Direct Mail
Multivariate testing (MVT) plays a central role in modern direct mail programs, particularly in Medicare Advantage. Every mailing is tested, scored, and reported using MVT‑based response frameworks. Creative elements are ranked, quintiles are compared, and performance is evaluated across monthly drops. Over time, this produces a detailed and reliable picture of how response behaves by message and by timing.
As the December 7 Medicare Advantage enrollment deadline approaches, response predictably increases. MVT captures this timing effect with precision. It shows how creative performance varies across mailings and how timing influences response throughout the year. Used for these purposes, MVT is a valuable interpretive tool.
The problem begins when drop frequency becomes the decision rather than the variable being observed. At that point, response optimization gives way to economic judgment. The distinction is neither subtle nor optional.
Mailing more often almost always increases total response, especially as urgency builds closer to the enrollment deadline. MVT reinforces this conclusion by showing a stronger response in later drops and a continued lift from repeated exposure. When viewed strictly through a response lens, the data appears to argue for higher and higher frequency.
What MVT cannot show is whether additional mailings were financially justified on the basis of a quantifiable KPI, such as cost per sale (CPS) or cost per lead (CPL). It cannot determine whether the fourth, fifth, or sixth drop reduced cost per sale or merely increased volume at a higher marginal cost. More importantly, it cannot reveal when incremental mailings begin to push CPS to unacceptable levels, even as response continues to rise. When every name gets the same number of touches, response accumulates in the data—but the ROI of each added mailing is impossible to isolate.
This distinction becomes clearer when timing and frequency collide. It is entirely possible for the lowest‑performing creative segment of a May mailing to outperform the weakest segment of an early December drop. Timing alone can overwhelm creative differences. MVT captures this precisely, so deciding when to mail a frequently mailed list is relatively straightforward. What remains unresolved is the cumulative economic impact of repeatedly mailing the same names throughout the year.
At that point, frequency ceases to be a testing question and becomes a business decision. Response rates alone cannot support that decision. Without CPS, or at least CPL, explicitly overlaid on response results, there is no way to evaluate whether the increased frequency improved performance enough to justify the incremental spend. A higher response rate can look like success while margins quietly erode.
What MVT cannot do is support decisions about how often to mail. Mailing frequency cannot be evaluated using response rates alone, even when those rates are measured rigorously, because frequency is a capital-allocation decision, not a response-optimization problem. Determining whether three mailings are economically superior to six requires financial KPIs such as CPS or CPL and outcome-level comparisons that response inference alone cannot supply.
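Framed as a capital-allocation question, the three-versus-six comparison reduces to a simple check against an allowable cost per sale. The sketch below uses invented figures (per-piece cost, list size, per-drop sales, and the allowable CPS threshold are all assumptions, not program data) to show how a six-drop plan can win on raw response while losing on economics.

```python
# Hypothetical sketch: compare a 3-drop plan to a 6-drop plan as a
# capital-allocation decision rather than a response-optimization one.
# All numbers are invented assumptions for illustration.

COST_PER_PIECE = 0.55
PIECES_PER_DROP = 100_000
ALLOWABLE_CPS = 750.0    # assumed maximum acceptable cost per sale

# Assumed incremental sales from each successive drop to the same list.
incremental_sales = [120, 95, 70, 45, 25, 12]

def plan_economics(n_drops):
    """Return (total spend, total sales, cost per sale) for an n-drop plan."""
    spend = n_drops * COST_PER_PIECE * PIECES_PER_DROP
    sales = sum(incremental_sales[:n_drops])
    return spend, sales, spend / sales

for n in (3, 6):
    spend, sales, cps = plan_economics(n)
    verdict = "within" if cps <= ALLOWABLE_CPS else "over"
    print(f"{n} drops: {sales} sales, CPS ${cps:,.0f} ({verdict} allowable)")
```

Under these assumptions the six-drop plan produces more total sales, which is what a response-only view rewards, yet its CPS breaches the allowable threshold while the three-drop plan stays inside it.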
Related Insight
Multivariate testing can reveal useful signals, but it cannot determine how much capital should ultimately be deployed for customer acquisition. That decision depends on the allowable Cost-Per-Sale and the economics of profitable growth.
Explore that question in How Much Should We Spend, in Total, to Acquire Customers This Year?