Prominent AI models have fallen short of EU regulations governing cybersecurity resilience and discriminatory output, according to data reviewed by Reuters. The publication obtained results from a tool developed by Swiss startup LatticeFlow AI and its research partners, ETH Zurich and Bulgaria's INSAIT, which scored AI models on a scale from 0 to 1 across multiple categories, such as technical capabilities and safety. Multiple models scored 0.46 or below in tests for discriminatory output and "prompt hijacking."
