Vithanage Erandi Kawshalya Madhushani, Jade Times Staff
V.E.K. Madhushani is a Jadetimes news reporter covering Innovation.
Effort to Combat Fraudulent Ads and Protect Public Figures Gains Momentum
Meta, the parent company of Facebook and Instagram, is ramping up its efforts to crack down on scammers who misuse the likeness of celebrities to promote fraudulent schemes. In a significant step forward, the tech giant will introduce facial recognition technology to detect and remove fake ads that falsely feature public figures. Celebrities such as Elon Musk and finance expert Martin Lewis have frequently been targeted by such scams, which often promote dubious investment schemes and cryptocurrencies without their consent.
Rising Problem of Celebrity Scam Ads
Celebrity scam ads have plagued social media platforms for years, creating an urgent need for a more effective solution. Many of these fake advertisements leverage the credibility and fame of well-known figures to lure unsuspecting users into financial traps. Martin Lewis, a personal finance guru, has spoken publicly about the distress these ads have caused him, stating that he receives numerous daily reports of his name and face being used in scam promotions.
Meta has faced increasing pressure to address this issue, especially as these scams have become more sophisticated through the use of deepfake technology—AI-generated videos or images that create lifelike portrayals of celebrities endorsing products or services they have no affiliation with.
Meta's New Approach: Facial Recognition
Meta currently employs an artificial intelligence-based ad review system that scans for fake celebrity endorsements. However, the company is now enhancing this system with facial recognition technology. The new method will compare images in flagged ads with the profile pictures of celebrities on Facebook and Instagram. If a match is confirmed and the ad is deemed a scam, it will be removed automatically.
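The workflow Meta describes can be pictured with a brief, simplified sketch. The Python code below is purely illustrative and is not Meta's implementation: the embed_face placeholder, the cosine-similarity comparison, and the 0.8 match threshold are all assumptions chosen to show the general idea of matching a flagged ad's image against a public figure's profile photos before removal.

# Illustrative sketch only, not Meta's actual system. Function names,
# the embedding model, and the 0.8 threshold are assumptions.
import numpy as np

def embed_face(image_bytes: bytes) -> np.ndarray:
    """Placeholder for a real face-embedding model that maps a face
    crop to a fixed-length vector (hypothetical stub)."""
    rng = np.random.default_rng(hash(image_bytes) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors are already unit-normalised, so the dot product suffices.
    return float(np.dot(a, b))

def review_flagged_ad(ad_face: bytes, profile_faces: list[bytes],
                      looks_like_scam: bool, threshold: float = 0.8) -> str:
    """Compare the face in a flagged ad against a public figure's
    profile photos; remove the ad only if both checks agree."""
    ad_vec = embed_face(ad_face)
    best = max(cosine_similarity(ad_vec, embed_face(p)) for p in profile_faces)
    if best >= threshold and looks_like_scam:
        return "remove_ad"
    return "keep_for_human_review"

In practice, a production system would also weigh ad metadata and advertiser history before removal; the sketch only captures the image-matching step described above.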
Meta has reported promising results from early testing of the technology. Based on those results, the company is expanding its use to a broader group of public figures who have been affected by these so-called "celeb bait" scams. In-app notifications will alert impacted individuals, allowing them to take quicker action.
The Role of Deepfakes in Scam Ads
The rise of deepfake technology has made celebrity scams even more convincing and difficult to detect. Unlike traditional photo manipulation, deepfakes can create realistic videos or images of public figures, further deceiving users into thinking the endorsements are genuine. The problem grew so severe that in 2018 Martin Lewis took legal action against Facebook, pushing for stronger regulations. Though the case was eventually dropped after Facebook introduced a reporting feature for scam ads and made a significant donation to Citizens Advice, the problem persisted.
As scams evolve and become more complex, so must the solutions. Meta's facial recognition technology aims to address this by offering a more robust layer of protection against fraudulent use of celebrity images.
The Government’s Role and Calls for Action
Meta’s new initiative has garnered attention, especially after a recent fake interview featuring UK Chancellor Rachel Reeves was used in an elaborate scam to steal users' bank details. Following the incident, Lewis urged the UK government to give regulatory bodies like Ofcom greater powers to crack down on such scam ads.
Meta has acknowledged the difficulty in staying ahead of scammers, stating, "Scammers are relentless and continuously evolve their tactics to try to evade detection." The company hopes its approach can help guide other tech platforms in defending against online scams, which are becoming a growing concern globally.
Expanding the Use of Facial Recognition for Account Security
In addition to tackling celebrity scams, Meta has announced plans to use facial recognition technology to help users regain access to locked social media accounts. Traditionally, unlocking an account required users to upload official identification, which could be a time-consuming process. Now, video selfies and facial recognition are being tested as a quicker way for users to verify their identity and regain access to their accounts.
Meta has promised that the data generated from facial recognition will be securely encrypted and deleted after the identity verification process. However, the technology will not be immediately available in regions where regulatory approval has not been obtained, such as the UK and the European Union.
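To make that account-recovery flow concrete, here is a minimal illustrative sketch, again not Meta's actual service. It assumes a per-session encryption key (using the open-source cryptography library's Fernet scheme) and a hypothetical faces_match helper; the point is the sequence the article describes, in which the selfie is encrypted, the identity is verified, and the facial data is then deleted.

# Illustrative sketch only, not Meta's verification service. The flow
# (encrypt the selfie, compare, then delete) mirrors the article's
# description; all names and the matching step are assumptions.
from cryptography.fernet import Fernet

def faces_match(selfie: bytes, profile_photo: bytes) -> bool:
    """Stand-in for a real face-comparison model (assumption)."""
    return len(selfie) > 0 and len(profile_photo) > 0

def verify_and_unlock(selfie_frames: bytes, profile_photo: bytes) -> bool:
    key = Fernet.generate_key()                   # per-session key
    box = Fernet(key)
    encrypted_selfie = box.encrypt(selfie_frames)  # selfie held only in encrypted form
    try:
        decrypted = box.decrypt(encrypted_selfie)
        # Hypothetical comparison of the selfie against the account's profile photo.
        return faces_match(decrypted, profile_photo)
    finally:
        # Per the article, facial data is deleted once verification completes.
        del encrypted_selfie, key, box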
Privacy Concerns and Future Considerations
Despite the potential benefits of using facial recognition technology, concerns over privacy and data security remain. Meta had previously used facial recognition on Facebook but discontinued its use in 2021 due to growing concerns about privacy, accuracy, and potential biases in the system. Now, with the reintroduction of the technology in limited capacities, Meta is keen to assure users that steps will be taken to safeguard personal information. The company has committed to encrypting video selfies and ensuring facial data is deleted once the verification process is complete.
As Meta continues to refine its facial recognition tools, the broader implications of the technology, both for user privacy and for its potential to combat online scams, will be closely watched by regulators and privacy advocates alike.
A Step Toward Safer Social Media
Meta’s decision to deploy facial recognition technology represents a significant development in the fight against online scams, particularly those targeting celebrities. While challenges remain, this move marks a promising step toward creating a safer and more transparent online environment, not only for public figures but for everyday users as well. The initiative also serves as a reminder of the delicate balance between innovation and privacy that tech companies must navigate as they adopt new tools to protect users from emerging threats.