Wednesday, November 26, 2025
By EARYEL BOWLEG
Tribune Staff Reporter
ebowleg@tribunemedia.net
CHIEF Justice Ian Winder warned yesterday that artificial intelligence must never “distort or diminish justice,” urging the Industrial Tribunal to adopt stronger protections as it expands its use of digital tools.
He delivered the caution during the opening of the tribunal’s legal year, saying technological progress can improve fairness and efficiency only when paired with strict safeguards. While he acknowledged the value of modern systems in reducing backlogs and improving transparency, he cautioned that innovation cannot come at the expense of judicial integrity.
He said many tribunal users represent themselves, making digital tools helpful when deployed responsibly.
“I readily accept that [in] an environment heavily utilised by unrepresented litigants, such a tool represents a very useful aid,” he said. “I therefore encourage the industrial tribunal to consider the guidance recently issued by the Supreme Court on the responsible use of artificial intelligence and see how this could assist its continued work.”
He added: “I therefore encourage the president to take deliberate steps to strengthen the Tribunal's processes to guard against misuse of technology and to ensure that efficiency is pursued without sacrificing thoroughness.”
Industrial Tribunal President Indira Demeritte-Francis used her remarks to announce that the tribunal has already launched an artificial intelligence pilot project. The pilot is designed to assist with legal research, judgment formatting, and tracking case timelines, forming part of a broader modernisation push that will be evaluated for potential expansion in 2026.
She said these upgrades reflect the tribunal’s ongoing shift toward full digital transformation.
Her comments followed a high-profile directive from Sir Ian earlier this month, reported by Tribune Business, outlining the Supreme Court’s own standards for AI after an attorney submitted three “fake cases” generated by an AI tool in support of her legal argument. The Supreme Court referred the matter to the Bahamas Bar Council’s ethics committee.
The report noted that the courts “will be fully accountable to the public for any use of AI in decision-making functions”, ensuring its deployment never threatens a person’s right to a fair hearing “before an impartial tribunal”.
“Generative AI may be used to assist judicial officers and court users with material for court proceedings, including the drafting of documents, summarising information and drafting decisions,” the directive states.
It makes clear, however, that responsibility remains with human users.
“Judicial officers and court users assume full responsibility for the accuracy, appropriateness and relevance of any material filed.
“The court may require a user to disclose whether a Generative AI tool was employed in the preparation of any document or evidence. Court users should be prepared to identify specific portions of their submissions influenced by Generative AI and explain the steps taken to ensure accuracy.”
The directive warns that generative tools can produce “hallucinations and may be inaccurate”, generating results that “may be biased, incomplete or in breach of copyright” and which cannot guarantee confidentiality.
“Court users shall not input privileged or sensitive information into unsecured AI platforms as Chat Bots prompts, requests or interactions within the programme may be automatically added to the large language model database, and used to respond to queries from other users which would make confidential information potentially available to others unless it is disabled,” Sir Ian added.
“Sensitive material and information to which professional privilege may attach may not be inputted by court users in a public chatbot.”
The chief justice said these standards will be essential as more judicial bodies adopt AI tools, stressing that modernisation must never weaken public trust in the courts.