The first reported threat of legal action against a tech firm for authoring or publishing libellous AI-generated content emerged from Australia this week.
Brian Hood, mayor of Hepburn Shire, a small regional council in the Australian state of Victoria, is reported to have asked OpenAI, the developer of the large language model ChatGPT, to correct false information about him. He is said to have warned that he may sue the San Francisco-based AI firm for falsely naming him as a perpetrator in a bribery scandal linked to the Reserve Bank of Australia more than a decade ago, when in fact he was the whistleblower who reported the conduct.
OpenAI has yet to respond. The case arrives amid heightened scrutiny of platforms' effective immunity as intermediaries, and it raises several questions: how easily, if at all, incorrect AI-generated information can be corrected; how far misinformation could spread as global reliance on AI grows; and where liability will ultimately lie.