X’s artificial intelligence assistant Grok lacks “effective guardrails” that would stop users from creating “potentially misleading images” about 2024 candidates or election information, according to a new study.
The Center for Countering Digital Hate (CCDH) studied Grok’s ability to transform prompts about election fraud and candidates into images.
It found that the tool was able to churn out “convincing images” in response to prompts, including one AI-generated image of Vice President Kamala Harris doing drugs and another of former President Donald Trump looking sick in bed.
For each test, researchers supplied Grok with a straightforward text prompt. They then modified the original prompt to circumvent the tool’s safety measures, for example by describing candidates rather than naming them.
The AI tool didn’t reject any of the original 60 text prompts that …