We Tested AI Transparency on 600+ Calls — Here’s What Actually Happened

We recently ran a live split test across 600+ real inbound calls for a home-service brand. The only variable was the first sentence the caller heard — one version sounded like a normal receptionist, while the other disclosed that the caller was speaking with a “digital assistant.”

The results were eye-opening. When callers were told upfront they were talking to AI, hang-ups nearly tripled. Almost one in four disconnected before saying a word. Engagement dropped across the board, and call times shortened because conversations never really started.

What surprised us most was what didn’t change. For callers who stayed on the line, booking rates were essentially the same — even slightly higher with the AI disclosure. The technology worked just as well. There were simply far fewer people willing to engage once they knew it was AI.
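For readers who want to sanity-check results like these on their own call data, the comparison above is a standard two-proportion test. The post doesn't publish the raw counts, so the figures below are hypothetical, chosen only to match the rough proportions described (about 9% vs. 25% hang-ups across ~300 calls per arm); swap in your own numbers.

```python
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for the difference between two proportions.

    x1/n1: successes and trials in group 1 (e.g. hang-ups, control greeting)
    x2/n2: successes and trials in group 2 (e.g. hang-ups, AI disclosure)
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis that both rates are equal
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts consistent with the article's proportions,
# NOT the study's actual data: 27/300 hang-ups without disclosure,
# 75/300 with the "digital assistant" disclosure.
z, p = two_proportion_z_test(27, 300, 75, 300)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With these illustrative counts the gap is far outside chance (|z| > 5), which is the kind of check worth running before concluding that a first-sentence change, rather than noise, drove the difference. The same function applied to the two arms' booking rates would show whether "essentially the same" holds up statistically.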

This raises an important question for any business adopting AI: Are we optimizing for what customers say they want, or for how they actually behave? People say they want transparency. In practice, they seem to want speed, competence, and resolution. If the experience feels helpful, most don’t care what powers it.

In home services — where someone’s roof is leaking or their fence blew down — the caller wants help immediately. If they hang up because of an AI disclosure, they don’t call back. They call the next company on Google.

We’re not hiding technology. We’re removing friction at the moment when intent is highest. That distinction matters.
