Like those of any genAI model, Google Gemini's responses can sometimes be inaccurate, but in this case it might be because testers don't have the expertise to fact-check them.
According to TechCrunch, the firm hired to improve Gemini's accuracy is now requiring its testers to evaluate responses even when they lack the relevant "domain knowledge."
The report raises questions about the rigor and standards Google says it applies to testing Gemini for accuracy. In the “Building responsibly” section of the Gemini 2.0 announcement, Google said it is “working with trusted testers and external experts and performing extensive risk assessments and safety and assurance evaluations.” There’s a reasonable focus on evaluating responses for sensitive and harmful content, but less attention is paid to responses that aren’t necessarily dangerous but just inaccurate.
Google seems to disregard the hallucination and error problem by simply adding a disclaimer that “Gemini can make mistakes, so double-check it,” which …