Generative AI tools such as ChatGPT are prone to “hallucinating,” producing made-up answers, a consultant warns.
They also tend to provide inaccurate, albeit “superficially plausible,” information, which may be the most common problem associated with using these tools, Rob Friedman, senior analyst in Stamford, Connecticut-based Gartner Inc.’s legal and compliance practice, said in a statement.
“Legal and compliance leaders should issue guidance requiring employees to review all output generated by ChatGPT for accuracy, appropriateness and actual usefulness before accepting it,” he said.
Other issues include bias, intellectual property and copyright risks, cyber fraud and consumer protection risks, the statement said.
Few organizations have corporate policies on the use of AI, which could expose them to the loss of confidential information if employees enter it into ChatGPT as part of a work project, said Stephanie Snyder Frenier, senior vice president and business development leader, professional and cyber solutions, at CAC Specialty in Chicago. She spoke at the Risk & Insurance Management Society Inc.’s Riskworld annual conference in Atlanta earlier this month.