Artificial Intelligence & Robotics
Latest 'Bluebook' has 'bonkers' rule on citing to artificial intelligence
(Photo by Howchou, PD ineligible (books), via Wikimedia Commons)
Updated: If you're unsure about how and when to cite content generated by artificial intelligence, a new citation rule is unlikely to clear up the confusion, according to experts who spoke with LawSites.
The 22nd edition of The Bluebook: A Uniform System of Citation, released in May, includes a new Rule 18.3 for citing output from generative AI. Critics argue that the new rule "is fundamentally flawed in both conception and execution," LawSites reports.
Critics include Susan Tanner, a professor at the University of Louisville's Louis D. Brandeis School of Law, who called the new rule "bonkers" in a post at Medium.
The rule requires that authors citing output from generative AI, such as ChatGPT conversations or Google search results, save a screenshot of that output as a PDF. The rule has three sections, covering large language models, search results and AI-generated content, with slightly different citation rules for each.
One problem, Tanner said, is that the rule treats AI as a citable authority, rather than a research tool.
"What would a sensible approach to AI citation look like?" Tanner wrote. "First, recognize that in 99% of cases, we shouldn't be citing AI at all. We should cite the verified sources AI helped us find."
In the rare case in which an AI output should be cited, the author should remember that the citation documents what was said by generative AI, not the truth of what was said, Tanner said. She gives this example: "OpenAI, ChatGPT-4, 'Explain the hearsay rule in Kentucky' (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content)."
Jessica R. Gunder, an assistant professor at the University of Idaho College of Law, provided another example of an appropriate citation to generative AI in her critique of Rule 18.3 posted to SSRN.
"If an author wanted to highlight the unreliability of a generative AI tool by pointing to the fact that the tool crafted a pizza recipe that included glue as an ingredient to keep the cheese from falling off the slice, a citation, and preservation of the generative AI output, would be appropriate," she wrote.
Cullen O'Keefe, the director of research at the Institute for Law & AI, sees another problem. The rule differentiates between large language models and "AI-generated content," but content generated by large language models is itself a type of AI-generated content.
In an article on the Substack blog Jural Networks, he suggested that one interpretation of the rule governing AI-generated content is that it applies to things such as images, audio recordings and sound.
He also sees inconsistencies about whether to use company names along with model names and when to require the date of the generation and the prompt used.
"I don't mean to be too harsh on the editors, whom I commend for tackling this issue head-on," O'Keefe wrote. "But this rule lacks the typical precision for which The Bluebook is (in)famous."
Updated Sept. 25 at 2:34 p.m. to accurately cite Cullen O'Keefe's point about large language models. Updated Sept. 27 at 8:02 a.m. to correct Jessica R. Gunder's title.