
Written by: Pindrop

Contact Center Fraud & Authentication Expert

There’s been interesting news recently about conversational AI bots offered by platforms like DoNotPay, Ideta, or the sassier Jolly Roger Telephone Co. These services can call into a contact center to dispute bills and charges, engage with customers, or even frustrate telemarketers on your behalf. That’s exciting for everyday folks who don’t want to sit on hold to argue with a customer service provider, or who relish the idea of raising a fraudster’s blood pressure when they call to phish for personal information. For the business world, though, and especially for banks, it’s frightening. Why? Because while these new tools are often used for good, they can also be used with malicious intent, and fraudsters are never far behind (and usually ahead) when it comes to adopting new techniques and technologies to do more with less.

What does this mean for contact centers?

The convergence of conversational AI, deepfake and synthetic voice technology, and the processing power for real-time interaction is finally here. While this trifecta seems to have been assembled first for legitimate purposes, like customer advocacy in the form of a “robot lawyer,” its existence signals a new age of fraudulent activity. Instead of spending time calling in themselves, fraudsters can use synthetic voices and conversational bots to interact with IVRs and even human agents. That means more activity and more chances of success, with less effort on their part.

What can contact centers do about it?

Contact centers can take comfort in knowing there are ways to get ahead of these activities. Here is a quick checklist of simple strategies that can help:

  1. Agent education – Make sure your agents know they may encounter digital voices that sound more realistic than anything they have heard before. 
  2. Utilize callbacks – If a caller’s voice is suspicious and sounds synthetic, consider a callback process: end the call and place an outbound call to the account owner’s number on file for direct confirmation.
  3. Leverage multifactor fraud and authentication solutions – Combine factors like call metadata for caller ID verification, digital tone analysis for device detection, and keypress analysis for behavior detection, or even OTPs (although we know those aren’t as secure these days); see the sketch after this list for one way these factors might work together.
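To make the multifactor idea in item 3 concrete, here is a minimal sketch of how several independent signals might be combined into a single risk score that drives an action, including the callback strategy from item 2. Everything here is an illustrative assumption: the signal names, weights, and thresholds are hypothetical placeholders, not Pindrop’s actual scoring logic, and a real deployment would source these signals from its own fraud detection stack and tune them against labeled call data.

```python
from dataclasses import dataclass

@dataclass
class CallSignals:
    """Per-call fraud signals, each normalized to 0 (benign) through 1 (suspicious).
    These names are hypothetical placeholders for a contact center's own detectors."""
    metadata_score: float         # caller ID / carrier metadata looks spoofed
    device_score: float           # tone analysis suggests an unexpected device type
    behavior_score: float         # keypress timing looks scripted rather than human
    synthetic_voice_score: float  # likelihood the voice is machine-generated

# Illustrative weights; a real deployment would tune these against labeled fraud data.
WEIGHTS = {
    "metadata_score": 0.25,
    "device_score": 0.20,
    "behavior_score": 0.25,
    "synthetic_voice_score": 0.30,
}

def risk_score(signals: CallSignals) -> float:
    """Combine the independent signals into a single 0-1 risk score."""
    return sum(getattr(signals, name) * weight for name, weight in WEIGHTS.items())

def route_call(signals: CallSignals) -> str:
    """Map the combined score to an action. The thresholds are hypothetical."""
    score = risk_score(signals)
    if score >= 0.7:
        # High risk: end the inbound call and dial the number on file (item 2).
        return "end_call_and_callback_number_on_file"
    if score >= 0.4:
        # Medium risk: step up authentication, e.g. an OTP or knowledge questions.
        return "step_up_authentication"
    return "proceed_normally"

# Example: a caller whose voice sounds synthetic and whose keypresses look scripted.
suspicious = CallSignals(
    metadata_score=0.6,
    device_score=0.5,
    behavior_score=0.9,
    synthetic_voice_score=0.95,
)
print(route_call(suspicious))  # -> end_call_and_callback_number_on_file
```

The design point is simply that no single factor decides the outcome: a synthetic-sounding voice alone might only trigger step-up authentication, while several correlated signals together push the call into the end-and-callback path.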

If you’re already a Pindrop customer, you’re in luck! Leveraging negative voice profiling, Phoneprinting® technology, and voice mismatch detection in your fraud and authentication policies is a great way to get ahead of bots trying to take advantage of your contact center. Reach out to your customer success representative, who can help evaluate your implementation and enable the right features for cases like this.

Pindrop is also acutely aware of the growing problem of synthetic and deepfake voices. The availability of conversational AI and ever-improving deepfakes, combined with abundant processing capacity, is a dangerous mix in the hands of fraudsters pursuing nefarious purposes, especially account takeovers. Our research team is already building deepfake detection, and we can’t wait to bring this kind of technology into our portfolio of innovations.

Interested in learning more? Contact us today.
