For years now there has been one thing that pretty much all paid search marketers could agree on: all your keywords should be in exact match and broad match modifier. It has become conventional wisdom, as taken for granted as keeping budgets uncapped.

That all changed late last year with Google’s introduction of close match variants for exact match. Here is how Google described the change:

Close variants include searches for keywords with the same meaning as the exact keywords, regardless of spelling or grammar similarities between the query and the keyword.

Whether someone is searching for “running shoes” or “shoes for running,” what they want remains the same; they’re looking for running shoes. Close variants of exact match keywords help you connect with people who are looking for your business—despite slight variations in the way they search—and reduces the need to build out exhaustive keyword lists to reach these customers.

This definition is disingenuous and euphemistic. Google buries the lede, only later citing examples of what counts as a close variant – reordering of words (something we once relied on broad match modifier to handle), synonyms ([bathing suit] matching against “swimming suits”) and even completely different terms based on search intent ([images royalty free] matching against “free copyright images”). In our own accounts, we’ve even seen terms match against foreign-language equivalents.
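To make the word-reordering case concrete, here is a toy sketch – emphatically not Google’s actual matching logic, which is proprietary and semantic – of the kind of normalisation that treats “running shoes” and “shoes for running” as the same query. The stopword list and function names are illustrative assumptions; real close-variant matching also covers synonyms and intent, which no token comparison can capture.

```python
# Toy illustration only: NOT Google's matching algorithm.
# Treats a query as a "close variant" of a keyword when their
# meaningful tokens are identical after lowercasing, dropping
# stopwords, and ignoring word order.

STOPWORDS = {"for", "the", "a", "an", "of"}  # illustrative list

def normalize(text):
    """Lowercase, drop stopwords, ignore word order."""
    return frozenset(
        word for word in text.lower().split() if word not in STOPWORDS
    )

def is_reordered_variant(keyword, query):
    """True when keyword and query differ only by stopwords/order."""
    return normalize(keyword) == normalize(query)

print(is_reordered_variant("running shoes", "shoes for running"))  # True
print(is_reordered_variant("bathing suit", "swimming suits"))      # False
```

The second example is the point of the article: a synonym match like [bathing suit] against “swimming suits” cannot be reproduced by any mechanical token rule, which is why close variants now behave more like a semantic model than like the old exact match.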

At this point it is tempting to ask how this is different from pure broad match, and the answer is: it is difficult to say. It does seem the “cloud” of acceptable matching terms around exact match keywords remains a lot tighter than for pure broad, but this is based on anecdotal evidence. So why the shift?

Smart bidding is most likely the answer. If Google is optimising bids based on user demographic and location data, times of day and myriad other signals, does it make sense to limit matching options to only the exact terms built out in a given account? For a rigorously deployed structure where the long-tail is covered by DSA campaigns or a thorough keyword build out, perhaps it does. But increasingly these detailed account structures are becoming overkill. The time has not yet come to abandon commonly accepted best practices, but the trajectory is clear to see. There is a reason Google’s keyword match type defaults to broad match, and why there is a big 2019 push behind the adoption of Responsive Search Ads. As machine learning continues to improve, and results match or better the old way of doing things, sacrificing granular control over account behaviour will be worth the trade-off.

So, how do we quantify this impact? The question is probably moot. Our own testing with smart bidding has shown that in the majority of cases it exceeds manual bidding capabilities. Expertise is shifting from keyword bidding methodology to smart bidding analysis and management. Understanding how smart bidding reacts to changes and factoring in seasonality are the new keys to successful performance – and it is not as easy as it sounds. Responsive Search Ads are fascinating, but there is not yet a clear way to prove incrementality, and Google’s available reporting tools are limited. Undoubtedly these tools will improve over time, but the days of knowing exactly what ad you are showing to which customer are going away.

Knowing exactly what terms you’re targeting with your media spend has always been reassuring. Particularly in an agency environment, providing clients with that level of reassurance is paramount. But clinging to it also ignores a sea change in search behaviour, with users placing more and more trust in Google to understand not just what they are looking for, but why they are looking for it. Understanding context is becoming the new expertise – the era of granular keyword build outs and detailed negatives harvesting is slowly but surely coming to an end. And if that helps to improve performance, who are we to argue?