New capabilities for building safe models include watermarking, prompt refining, and prompt debugging, and they work with any large language model.
Google has expanded its Responsible Generative AI Toolkit for building and evaluating open generative AI models, adding watermarking for AI-generated content along with prompt refinement and debugging features. The new features are designed to work with any large language model (LLM), Google said.
Announced October 23, the new capabilities support Google’s Gemma and Gemini models as well as any other LLM. Among the capabilities added is SynthID watermarking for text, which allows AI application developers to watermark and detect text generated by their generative AI product. SynthID Text embeds digital watermarks directly into AI-generated text. It is accessible through Hugging Face and the Responsible Generative AI Toolkit.
Also featured is a Model Alignment library that helps developers refine prompts with support from LLMs. Developers provide feedback regarding how they would like their model’s outputs to change, …