Nvidia has released an open-source toolkit designed to make AI chat applications more secure. Called “NeMo Guardrails”, it is intended to place “guard rails” around large language models (LLMs). According to Nvidia, this lets developers build “secure and trustworthy LLM conversational systems”. The toolkit works with all LLMs, including ChatGPT.

With NeMo Guardrails, developers can set up rules that define how a chatbot reacts to user input. There are currently three categories of rules: “topical guardrails”, which keep the conversation from drifting off topic; “safety guardrails”, which are meant to prevent misinformation; and “security guardrails”, which protect against malicious code.
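To give a concrete idea of how such rules look, here is a minimal sketch of a topical guardrail, written as Colang rules and loaded through the toolkit’s Python API. It follows the patterns in Nvidia’s documentation; the example utterances, the bot reply, and the model selection are illustrative assumptions, not details from the article.

```python
# Minimal sketch of a topical guardrail with NeMo Guardrails.
# Requires: pip install nemoguardrails, plus an OPENAI_API_KEY
# in the environment for the model configured below.
from nemoguardrails import LLMRails, RailsConfig

# Colang rules: recognize off-topic questions and steer the bot back.
# The utterances and the canned reply are illustrative assumptions.
colang_content = """
define user ask about politics
  "What do you think about the government?"
  "Which party should I vote for?"

define bot refuse to answer politics
  "I'm a support assistant, so I'd rather not discuss politics."

define flow politics
  user ask about politics
  bot refuse to answer politics
"""

# YAML config selecting the underlying LLM (model choice is an assumption).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# Off-topic input is intercepted by the "politics" flow instead of
# being passed straight through to the model.
reply = rails.generate(
    messages=[{"role": "user", "content": "Which party should I vote for?"}]
)
print(reply["content"])
```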

NeMo Guardrails builds on community-developed toolkits such as LangChain, Nvidia writes in a blog post. LangChain is a framework specialized in developing applications on top of language models. In addition, the new toolkit uses Colang, Nvidia’s own modeling language, which, according to Nvidia, can be used to describe chatbot behavior in commands close to natural language.
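Because the toolkit builds on LangChain, an existing LangChain model wrapper can be handed to the guardrails runtime. The following sketch assumes the `langchain-openai` package and the optional `llm` parameter of `LLMRails`; neither is mentioned in the article, and both may change between library versions.

```python
# Sketch of plugging a LangChain-wrapped model into NeMo Guardrails.
# Requires: pip install nemoguardrails langchain-openai
from langchain_openai import ChatOpenAI
from nemoguardrails import LLMRails, RailsConfig

# A folder containing the Colang and YAML files (path is an assumption).
config = RailsConfig.from_path("./config")

# Any LangChain chat model can serve as the underlying LLM here.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0)

# The guardrails runtime wraps every call to the LangChain model,
# applying the configured rails before and after generation.
rails = LLMRails(config, llm=llm)
reply = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(reply["content"])
```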

At the end of November 2022, OpenAI released the chatbot ChatGPT. Since then, the safe use of language models has been discussed increasingly worldwide. In an interview, the data protection officer of the German state of Schleswig-Holstein explains how German and European authorities are scrutinizing OpenAI.

