A team of researchers from artificial intelligence (AI) firm AutoGPT, Northeastern University and Microsoft Research has developed a tool that monitors large language models (LLMs) for potentially harmful outputs and prevents them from executing.
The agent is described in a preprint research paper titled "Testing Language Model Agents Safely in the Wild." According to the research, the agent is flexible enough to monitor existing LLMs and can stop harmful outputs, such as code attacks, before they occur.