John Ruljancich – Head of Product
We’ve said a few times this quarter that we’ve made meaningful improvements to our contact data — specifically to phone and email accuracy and match rates. We don’t love vague claims, so this post is about the specifics: what we actually tested, how we measured it, and what the numbers show.
Why we test this way
Contact data quality is notoriously hard to talk about honestly. “High accuracy” and “verified data” are phrases that get thrown around without much substance behind them. Our approach is to test against ground truth — taking a known dataset where we can independently verify whether a phone number or email address is correct — and measure how often our data matches that reality.
We ran two types of tests. The first used a 1,000-record dataset, where the phone numbers were confirmed to be associated with a specific name and address combination through actual phone call dial-outs. The second used a larger 2,382-record dataset of consumer-provided names, addresses, email addresses, and phone numbers. We call this "Truth Data". Tests were run against both lists before and after we introduced new data sources into production to determine the improvement to both match rate and accuracy.
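To make the two metrics concrete, here is a minimal sketch of how match rate and accuracy can be scored against a truth dataset. The field names and sample records are hypothetical, not our production schema: match rate is the share of input records that return a phone at all, and accuracy is the share of returned phones that agree with the verified truth value.

```python
def score(truth, appended):
    """Score an append run against verified truth data.

    truth:    dict of record_id -> verified phone number
    appended: dict of record_id -> appended phone (None if no match)
    Returns (match_rate, accuracy).
    """
    # Records where the append returned any phone at all
    matched = {rid: p for rid, p in appended.items() if p is not None}
    match_rate = len(matched) / len(truth)
    # Of the returned phones, how many agree with the truth data
    correct = sum(1 for rid, p in matched.items() if truth.get(rid) == p)
    accuracy = correct / len(matched) if matched else 0.0
    return match_rate, accuracy

# Tiny illustrative example: 4 truth records, 3 appended phones, 2 correct
truth = {"r1": "555-0101", "r2": "555-0102", "r3": "555-0103", "r4": "555-0104"}
appended = {"r1": "555-0101", "r2": "555-0199", "r3": None, "r4": "555-0104"}
rate, acc = score(truth, appended)
print(rate, round(acc, 3))  # 0.75 0.667
```

The key point of scoring this way is that the two metrics move independently: a vendor can raise match rate while accuracy falls, which is why we report both.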
What changed in the data
We added multiple new data sources — including expanded cell phone coverage and updated email sources — to our contact enrichment pipeline. The goal was to increase both the quantity of matches returned and the accuracy of the records we deliver. Here’s what the testing showed.
Phone accuracy results
The headline: phone accuracy improved meaningfully across both test sets.
Here’s what the averages across both test sets showed:
| Metric | Improvement |
| --- | --- |
| Phone append accuracy | +12% |
| Multi-phone append accuracy | +37% |
There’s an important nuance worth being upfront about: for our multi-phone append product, the new data sources now return fewer total phone numbers per record. That’s intentional. We made a deliberate choice to prioritize accuracy over volume — filtering out low-confidence numbers rather than returning everything we have.
Multi-phone append – more connections on fewer calls:
- More correct phones: +7.9% more correct first phones and +5.4% more correct phone numbers across all phones provided.
- Fewer incorrect phones (fewer calls to make): -5.6% fewer first phones provided, and -22.9% fewer total phones provided.
For outreach teams, this is a meaningful trade-off in your favor. You’re dialing a tighter list where a higher percentage of numbers actually connect. Less time on dead ends, more time in real conversations. If you’re running high-volume sequences, the efficiency gains compound quickly.
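The trade-off above is easy to quantify. This sketch uses the two multi-phone percentages from the bullets (-22.9% total phones, +5.4% correct phones) applied to an illustrative baseline of 1,000 returned numbers — the baseline counts are hypothetical, not from the tests — to show how connect rate per dial improves even though the list shrinks:

```python
# Illustrative baseline (hypothetical): 1,000 phones returned, 400 correct
old_phones, old_correct = 1000, 400

# Apply the measured multi-phone deltas from the test results
new_phones = old_phones * (1 - 0.229)    # -22.9% total phones provided
new_correct = old_correct * (1 + 0.054)  # +5.4% more correct phones

old_connect_rate = old_correct / old_phones
new_connect_rate = new_correct / new_phones
print(f"{old_connect_rate:.1%} -> {new_connect_rate:.1%}")  # 40.0% -> 54.7%
```

In other words, a team dials roughly a quarter fewer numbers and still reaches more real people, so the correct-connections-per-dial rate rises substantially.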
Email results
The headline: email match rate is up approximately 40% in relative terms — many more records now return an email address than before.
In production, we’re seeing email match rates improve from approximately 25% to approximately 35% on average. That means from the same input file, you’re finding email addresses for significantly more contacts than before.
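Since "up 40%" and "from 25% to 35%" can look inconsistent at first glance, here is the arithmetic: a 10-point absolute gain on a 25% baseline is a 40% relative increase in matched records. The 10,000-record input file below is purely illustrative:

```python
before, after = 0.25, 0.35  # approximate production match rates

# Relative gain: the "+40%" figure is (0.35 - 0.25) / 0.25
relative_gain = (after - before) / before
print(f"{relative_gain:.0%}")  # 40%

# On a hypothetical 10,000-record input file, that is ~1,000 more emails
input_records = 10_000
extra_emails = round((after - before) * input_records)
print(extra_emails)  # 1000
```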
It’s also worth noting: this improvement is likely to grow. We’re in the process of adding a substantial new set of emails to our directory later this month or early next, which should push match rates higher still.
For marketing and demand gen teams, that match rate improvement directly expands your addressable audience. You’re not leaving reachable contacts behind just because the data couldn’t find them before.
What this means in practice
If you’re running outbound email campaigns, more matched records means broader reach from the same input file — without any change to your workflow.
If you’re running phone outreach, the accuracy improvement reduces wasted effort. A higher percentage of the numbers we return are the right numbers. Combined with the reduction in total numbers returned, your team works more efficiently with a tighter, higher-quality list.
Neither of these improvements is magic. Contact data still decays. People change emails and phone numbers. But the baseline is meaningfully better than it was, and we have the testing to back that up.
Want to test it yourself?
The best way to evaluate this for your specific use case is to run it against a segment you know. We’re happy to set up a test on a real file — whether you’re a current customer, an active prospect, or someone who tested us before and walked away unimpressed.
Versium provides AI-ready marketing data for global brands. Learn more at versium.com.