AI Tools Face Scrutiny After Steering Users to Illegal Gambling Sites

A new investigation by The Guardian has found that major AI chatbots can lead users to illegal online casinos. The analysis revealed that AI tools sometimes show people how to get around UK gambling laws.

The Guardian tested five popular AI tools: ChatGPT, Microsoft Copilot, Meta AI, Grok, and Google Gemini. Each was asked a series of questions about unlicensed casinos, including which ones were best, how to avoid source-of-wealth checks, and how to access casinos outside the GamStop self-exclusion scheme.

According to the analysis, all five chatbots could be prompted to recommend offshore gambling sites that were not licensed to operate in the UK. Advertising or directing UK consumers to such operators is itself unlawful under UK gambling rules, which is what puts these AI responses under scrutiny.

Some Bots Went Further Than Listing Sites

The responses did not stop at simple lists. Some of the tools went as far as comparing bonuses, payout speeds, payment methods, and crypto options: the same commercial signals used to attract people seeking fewer restrictions and faster access to funds.

Another serious concern was that some systems also appeared willing to help users work around consumer protection rules. According to the research findings:

  • Meta AI described compliance measures in a disparaging way and offered suggestions on how to avoid them;
  • Gemini reportedly offered similar information in one of the tests;
  • Grok suggested cryptocurrency as a way of reducing ties to personal banking information;
  • ChatGPT offered a side-by-side comparison of non-GamStop casinos.

Only two of the five tools opened any of their answers with a health warning, and two of the five included information about support services for people worried about gambling harm.

What Companies Say & What May Come Next

The companies behind the AI systems all say they have safety precautions in place or are rolling out updates to handle risky behavior better. Google, in particular, says that Gemini is designed to return helpful answers and flag possible dangers where they exist. Microsoft says Copilot uses several layers of protection, including automated controls and human oversight. OpenAI says ChatGPT is trained to reject prompts that might lead to risky behavior and to suggest healthier alternatives instead.

Nevertheless, the tests suggest that those protections don't always work consistently. Regulators may therefore want to take a closer look at AI recommendations as a possible channel through which unlicensed operators reach UK consumers.
