Press Release

Gillibrand Calls For Investigation Of AI-Related Senior Scams

May 19, 2023

Today, U.S. Senator Kirsten Gillibrand is calling on the Federal Trade Commission (FTC) to assess the prevalence of artificial intelligence (AI)-related scams targeting older adults. Recent reports suggest that scams using AI-powered technology, including voice clones, chatbots, and “deep fake” videos, are a growing problem and may be used to target vulnerable populations, particularly older Americans.

“As artificial intelligence becomes increasingly widespread, older adults are at particular risk of becoming victims of AI-powered scams,” said Senator Gillibrand. “The FTC must take this threat seriously and provide Congress with a thorough assessment of the prevalence of these scams and its plan to fight them.”

Scammers can use AI-powered technology to create deceptive emails, phone calls, and images. Chatbots can be used to mimic a writing style, find personal information, and generate more convincing fake documents, while voice-cloning technology provides scammers with another avenue for upgraded impersonation. In one recent case, a scammer posing as a kidnapper used voice-cloning technology to duplicate the sounds of a mother’s crying daughter and demand ransom.

Senator Gillibrand is asking the FTC to provide answers to the following questions: 

  1. What is the FTC’s understanding of recent developments in AI-related scams?  
  2. What information and data does the FTC have on the prevalence of AI-related scams and accompanying risks?  
  3. What steps is the FTC taking or preparing to protect older Americans from AI-related scams?  
  4. Is the FTC preparing to update its counter-scam educational and awareness materials, including the “Pass It On” campaign’s materials directly intended for older Americans, to account for the rising risks of AI-related scams?  

The full text of Senator Gillibrand’s letter to FTC Chair Lina Khan can be found below:

Dear Chair Khan:

As members of the Special Committee on Aging, we write to request information on your efforts to protect older Americans from increasing threats posed by artificial intelligence (AI)-related frauds and scams. Combating frauds and scams has been a longstanding priority for the Committee across annual hearings, the Committee’s fraud hotline, and its fraud book. You recently noted how “generative AI risks turbocharging fraud.” While AI contains significant promise as an innovative technology, it can also be manipulated by malicious actors targeting vulnerable populations, particularly older Americans.

Federal Trade Commission (FTC) warnings have noted that scammers can use AI-powered technology, including voice clones and chatbots, to create deceptive emails, phone calls, and images in order to take advantage of consumers and targeted populations. Recent reports suggest that such scams are a growing problem. Voice-cloning technology in particular may facilitate imposter scams by allowing scammers to closely replicate an individual’s voice using just a short audio sample. In one case, a scammer used this approach to convince an older couple that the scammer was their grandson in desperate need of money to make bail, and the couple almost lost $9,400 before a bank official alerted them to the potential fraud. Similarly, in Arizona, a scammer posing as a kidnapper used voice-cloning technology to duplicate the sounds of a mother’s crying daughter and demand ransom.

Chatbots can also be used to mimic a writing style, find personal information, and generate more convincing fake documents, while “deep fake” videos and other AI-generated images can provide scammers with another avenue for upgraded impersonation. For older Americans, targeted by countless scams every year that result in multimillion-dollar financial losses, anxiety, and even anguish, this threat of powerful, newly enhanced fraud is acute.

As the FTC considers reasonable strategies to safeguard older Americans from frauds and scams, we request that you provide the following information by June 20, 2023:

  1. What is the FTC’s understanding of recent developments in AI-related scams?
    • What, if any, analyses are available to help policymakers better respond to them?
    • How is the pace of innovation and improvements in AI technology likely to influence the incidence of these scams?
    • Are there any specific policy suggestions that the FTC can highlight to protect older Americans from these scams?
  2. What information and data does the FTC have on the prevalence of AI-related scams and accompanying risks?
    • How do these scams affect older Americans?
    • Are older Americans at higher risk of being targeted?
  3. What steps is the FTC taking or preparing to protect older Americans from AI-related scams?
    • To what extent is the FTC working with other agencies and state and local governments to identify and combat the unique threat and challenge these scams pose to older Americans?
    • To what extent is the FTC partnering with private sector actors to develop options that could better protect older Americans from these scams?
  4. Is the FTC preparing to update its counter-scam educational and awareness materials, including the “Pass It On” campaign’s materials directly intended for older Americans, to account for the rising risks of AI-related scams?
    • What other resources can the FTC make available on these scams to help inform and protect older Americans?

Thank you for your attention to this important issue. We look forward to your response.