AI Safety

Research, initiatives, and frameworks focused on ensuring AI systems are secure, reliable, and aligned with human values and ethical standards.

IntelePeer Earns TX-RAMP Level 2 Certification for AI Communication Solutions

IntelePeer has achieved TX-RAMP Level 2 certification, confirming its compliance with Texas state security standards for cloud-based AI communication systems used in regulated sectors such as healthcare and government.

March 20, 2026

Resume.org Survey: 21% of U.S. Companies Halt Entry-Level Hiring Due to AI

A new Resume.org survey finds that 21% of U.S. companies have stopped hiring entry-level workers because of AI, with nearly half of companies surveyed expecting to do so by 2027.

March 10, 2026

OpenAI Hardware Leader Caitlin Kalinowski Resigns Over Pentagon Deal

Caitlin Kalinowski has resigned from OpenAI, citing concerns about the company's agreement with the U.S. Department of Defense and the lack of safeguards around surveillance and autonomous weapons.

March 08, 2026

Kinetech Launches GovShield AI Platform for Law Enforcement

Kinetech has launched GovShield, a secure AI platform designed for law enforcement and government agencies, built on the Mendix low-code platform to meet CJIS compliance standards.

March 05, 2026

OpenCxMS Files 15 Patents for Hardware Safety Standard in AI Robotics

OpenCxMS Technologies has filed 15 patents within 13 days for the Standardized Autonomous Safety Module, an open hardware and software standard designed to enforce safety in AI-controlled robots.

February 22, 2026

AnChain.AI Partners with NUVA to Enhance Blockchain Security and Compliance

AnChain.AI has announced a partnership with NUVA to integrate its AI-driven blockchain security and compliance tools into NUVA’s marketplace for tokenized real-world assets, strengthening institutional trust and regulatory readiness.

February 22, 2026

CHAI Reports $68 Million ARR and Expands AI Safety Measures

CHAI announced it has reached $68 million in annual recurring revenue and a $1.4 billion valuation while introducing new AI safety protocols aligned with EU and NIST standards.

February 22, 2026

Google Threat Intelligence Group Reports Surge in AI Misuse for Cyber Operations

The Google Threat Intelligence Group (GTIG) has released a report detailing how threat actors are increasingly using AI for phishing, reconnaissance, and malware development, while also conducting model extraction attacks targeting proprietary AI systems.

February 16, 2026

Astrix Security Releases OpenClaw Scanner to Detect AI Agent Deployments

Astrix Security has launched the OpenClaw Scanner, a free tool designed to detect instances of the open-source AI assistant OpenClaw across enterprise environments, addressing security concerns over autonomous AI agents.

February 13, 2026

Eve Security Files Patent for 'Interrogation-as-a-Service' to Manage AI Agent Risks

Eve Security has filed a patent for its new 'Interrogation-as-a-Service' technology, designed to control and audit AI agent actions in real time. The system introduces a reasoning-before-execution approach to enhance safety and compliance in enterprise AI operations.

February 13, 2026

OpenAI Disbands Mission Alignment Team, Appoints Chief Futurist

OpenAI has disbanded its mission alignment team, with members reassigned across the company, while former team head Josh Achiam becomes chief futurist.

February 12, 2026

Skan AI Launches Agentic Ontology of Work for Enterprise Automation

Skan AI has introduced the Agentic Ontology of Work (AOW), a standardized framework designed to unify how humans and AI agents collaborate within enterprise systems. The ontology defines key elements like agents, skills, intents, and policies to improve interoperability and governance in agentic automation.

February 11, 2026

Alice Launches Caterpillar to Detect Malicious OpenClaw Skills

Alice has released Caterpillar, a free open-source security tool designed to identify malicious behaviors in AI agent skills within OpenClaw. The release follows an incident in which several malicious skills were found active among more than 6,000 users.

February 08, 2026

Bounteous and Anthropic Launch Claude Code Lab Series for Enterprise AI Adoption

Bounteous has announced a new series of Claude Code Labs in partnership with Anthropic, offering hands-on workshops for enterprise teams to integrate Claude Code responsibly into their environments.

February 08, 2026

Aura to Acquire Qoria and List on ASX

Aura has announced plans to acquire Qoria through an Australian scheme of arrangement, with the combined company set to trade on the ASX under the ticker AXQ. The deal, expected to close in the second quarter of 2026, will create a global leader in online safety and wellbeing solutions.

February 04, 2026

DC Capital Partners Takes Majority Stake in Knexus to Expand Government AI Services

DC Capital Partners has acquired a majority stake in Knexus, an applied AI company serving U.S. government agencies. The partnership aims to accelerate Knexus's growth and expand its AI solutions for defense and civilian missions.

February 04, 2026

2026 International AI Safety Report Highlights Rapid Advances and Rising Risks

The 2026 International AI Safety Report, chaired by Yoshua Bengio, details major advances in general-purpose AI capabilities and growing safety concerns, including misuse in cybersecurity and biological research.

February 04, 2026

Perplexity AI Offers Free Public Safety Platform for Law Enforcement

Perplexity AI has introduced a new initiative providing law enforcement agencies with free access to its Enterprise Pro platform for one year, enabling officers to use multimodal AI tools for field and administrative tasks.

January 16, 2026

Indonesia Blocks xAI’s Grok Over Non-Consensual Deepfakes

Indonesia has temporarily blocked access to xAI’s Grok chatbot after reports that it generated sexualized AI images, including depictions of minors. The government called the content a violation of human rights and summoned X officials to address the issue.

January 10, 2026

OpenAI Seeks Head of Preparedness to Oversee AI Safety and Risk Mitigation

OpenAI CEO Sam Altman announced that the company is hiring a Head of Preparedness, a new executive role dedicated to managing AI safety and risk mitigation as its models grow more capable. The position will pay $555,000 annually and focus on evaluating and mitigating cybersecurity and biological risks associated with advanced AI systems.

December 28, 2025
