Artificial intelligence (AI) is transforming the way societies operate, affecting communication, commerce, governance, and education. Across the globe, governments are introducing laws and policies to balance the opportunities AI presents with the risks it poses.
In The Bahamas, it is both timely and necessary to develop a framework that ensures AI is integrated ethically and safely into national life. This article proposes a hybrid approach to governance: empowering regulators to act within their mandates now, while laying the groundwork for a comprehensive legislative framework.
International approaches to AI regulation
The European Union (EU)
The EU passed its Artificial Intelligence Act in 2024. This legislation categorizes AI systems into four risk levels: prohibited (e.g. systems that manipulate behavior, exploit vulnerable groups, or enable real-time biometric surveillance in public spaces without safeguards), high-risk (e.g. law enforcement or hiring), limited-risk (e.g. chatbots), and minimal-risk (e.g. spam filters or AI used in video games).
High-risk AI applications must comply with stringent safety and transparency requirements, and violations can result in significant financial penalties.
The United States
The United States does not yet have a single, comprehensive national AI statute. Instead, individual states like California and Colorado have introduced their own rules, while federal authorities have issued executive orders offering broader guidance.
Notably, the US supports “regulatory sandboxes,” which are structured environments in which developers and companies can test AI systems under regulatory oversight. These initiatives help balance innovation with public safety, ensuring ethical standards are maintained throughout the development process.
Regulators can act now
In The Bahamas, regulators need not wait for national legislation to begin managing AI within their sectors. For instance, the Utilities Regulation and Competition Authority (URCA) has recognized AI as a regulatory priority in its 2025 Draft Annual Plan.
The plan requires that licensees notify URCA within 30 days of deploying any AI technology in public communications networks, and mandates compliance with data protection and cybersecurity laws.
URCA also affirms that AI deployments must align with ethical principles such as transparency, fairness, and accountability. These provisions highlight the importance of each regulator adopting measures that are appropriate to the nature and needs of their specific sector.
AI oversight should not follow a one-size-fits-all model, but instead reflect the distinct risks, opportunities, and operational realities within each regulatory domain.
Legal and education sectors
Given my role as a legal educator, this article places particular emphasis on the legal industry and the education sector as critical points for sectoral AI governance.
In the legal industry, the judiciary should look to the Caribbean Court of Justice (CCJ), which issued Practice Direction No. 1 of 2025, titled “The Use of Generative Artificial Intelligence Tools in Court Proceedings,” on February 14, 2025. The practice direction emphasizes transparency, fairness, and human oversight.
They provide guidance on areas such as the responsible use of AI-assisted tools in case management and decision support systems, while maintaining judicial accountability.
The Bahamian judiciary can seek to adapt these principles to its local context to ensure that the use of AI upholds the integrity and independence of the legal process.
The Ministry of Education should also play a proactive role in developing AI policy. As a leading agency responsible for shaping national knowledge and skills, the ministry’s stance on AI will set the tone for how future generations engage with this technology.
Education is a foundational sector where awareness, understanding, and ethical use of AI must be cultivated early. By leading the development of AI policies, the ministry will help build long-term national capacity in digital and technological literacy.
This includes ensuring that schools and educators have the guidance and resources needed to equip students with the skills to responsibly interact with AI.
A strong precedent exists globally for such leadership. For example, effective September 2025, China is making AI education compulsory in primary and secondary schools, requiring students to receive at least eight hours of AI instruction per year. This move demonstrates how early education policy can serve as a strategic tool for national readiness in the face of rapid technological change.
Therefore, the ministry should clearly encourage the ethical use of AI within its own directives and promote similar policies across all educational institutions.
These policies should focus on fostering digital literacy, encouraging innovation, and ensuring that AI is used in a manner that is inclusive, transparent, and aligned with broader educational goals.
The need for AI legislation
While sectoral regulation is an important first step, The Bahamas also requires a national law to address AI systems that fall outside the scope of existing regulatory bodies. Such legislation should:
• Reflect domestic priorities, particularly in sectors such as financial services, education, and tourism;
• Employ a risk-tier framework similar to the EU’s, allowing different rules for different categories of AI systems; and
• Establish meaningful penalties for the unauthorized or harmful use of AI technologies.
Planning for the future
The government of The Bahamas is currently developing a national AI policy. According to The Tribune, Minister of Economic Affairs Michael Halkitis stated in June 2025 that a white paper is being prepared for Cabinet review as part of efforts to establish a comprehensive framework for AI governance.
As part of this process, the introduction of a regulatory sandbox should be considered. These controlled environments would allow both public institutions and private entities to test AI systems in real-world conditions, under close regulatory supervision. This approach would help detect and address potential harms before wider deployment.
Broad consultation is key
For an AI governance framework to be effective and legitimate, it must be informed by broad-based consultation. Stakeholders from across the public and private sectors, as well as civil society and academia, should be engaged.
Inclusive dialogue will ensure the resulting legal and policy instruments reflect shared national values and practical realities. In addition, individual regulators should engage directly with their licensees before issuing any AI-related directives or guidance notes. This will help ensure that regulatory approaches are practical, context-sensitive, and reflective of the needs and concerns of those most affected.
Conclusion
Developing a sound AI governance framework for The Bahamas requires prompt action and thoughtful planning. Regulators should begin crafting sector-specific guidelines now, while national lawmakers work toward comprehensive legislation. This hybrid approach balances immediate oversight with long-term preparedness, ensuring that AI serves the public good while safeguarding fundamental rights.
• Keenan Johnson is an attorney-at-law and legal educator.