Giskard Unveils Phare: A New Benchmark for Evaluating AI Models
Giskard has launched Phare, an open and independent benchmark that assesses AI models on safety dimensions such as hallucination and bias, with Google DeepMind as a research partner.
Former OpenAI CTO Mira Murati has launched a new AI startup, Thinking Machines Lab, with a team of top researchers and engineers, many of them from OpenAI.
Pangea has announced the availability of AI Guard and Prompt Guard to enhance AI security, alongside a $10,000 jailbreak competition to highlight AI vulnerabilities.
Ilya Sutskever, co-founder of OpenAI, is raising over $1 billion for his startup Safe Superintelligence, which is now valued at more than $30 billion.
Caseware's AI digital assistant, AiDA, has received a positive safety evaluation via Holistic AI's Governance Platform, supporting data security and compliance for accounting professionals.
ArisGlobal has signed the EU AI Pact, reinforcing its commitment to ethical AI practices and preparing for the EU AI Act.
AppSOC's testing reveals significant security vulnerabilities in DeepSeek's AI model, raising concerns over its use in enterprise applications.
The ROOST initiative, launched at the AI Action Summit in Paris, aims to provide open-source safety tools for AI, focusing on child safety and leveraging large language models.
OpenAI CEO Sam Altman proposes a 'compute budget' to ensure AI benefits are widely distributed, while addressing the challenges of AGI development.
G42 and Microsoft have launched the Responsible AI Foundation in Abu Dhabi, focusing on promoting responsible AI standards in the Middle East and Global South.