The Malta Gaming Authority is turning broader AI policy discussions into sector-specific guidance for the gaming industry. On 8 May 2026, the regulator opened a targeted consultation on a draft AI Gaming Charter addressing the ethical and responsible use of AI.
The charter was developed in collaboration with the Malta Digital Innovation Authority. It is not designed to be a new code of rules and regulations. Rather, the regulator wants a principles-based reference point for licensed operators, suppliers, and other companies using AI in daily work.
AI Use Moves Into Compliance
AI has moved beyond product experimentation. Within the gaming industry, the technology is now applied in customer support, fraud prevention, marketing, risk scoring, and safer gambling processes.
This complicates governance, because chatbot misuse is only one concern. A risk-scoring system that wrongly flags a player, fails to recognize harm, or relies on poor data can create a far more serious compliance failure.
This move by the MGA puts emphasis on how AI-powered decisions are made and evaluated. The regulator tied the charter to the responsible and transparent usage of AI. Also, it mentioned that the document should complement other regulations, including the EU Artificial Intelligence Act.
Voluntary Rules With Practical Pressure
“Voluntary” is the operative word here, but it does not make the charter any less consequential. Malta remains a leading jurisdiction for online gambling licensing, and MGA guidance could shape internal compliance standards, vendor agreements, and auditing processes.
For operators, the main concern is proof. If AI affects customer treatment, player protection, or anti-fraud measures, a business may need to justify how the technology works and keep records showing who reviewed the system’s output and how it was safeguarded.
The latter point is critical when using third-party technology. While the operator can outsource a model or even a whole platform, it cannot outsource accountability.
Player Protection Sets the Boundary
The charter highlights another, even broader industry issue. AI implementation could make risk detection faster. At the same time, the lack of proper oversight could lead to overly aggressive personalization.
For this reason, human involvement remains essential. Automated tools can support decision-making, but high-impact decisions require human scrutiny. For mature operators, bias testing, data protection, and clear escalation policies are no longer nice-to-haves.
Bottom Line
This consultation suggests that Malta expects the industry to prepare before regulations become stricter. AI governance is set to shift from innovation units to compliance departments. Operators that have already begun mapping their tools could be better off than those waiting for implementation.