These briefs translate empirical research findings into actionable policy recommendations. Each brief is grounded in data from adversarial testing, failure analysis, and cross-model benchmarking.
Capability Does Not Imply Safety
Empirical evidence from 8 foundation models examines how capability without proportional safety investment may increase adversarial risk.
February 2026

Why Alignment Is Not Enough for Embodied AI
Humanoid and embodied AI systems pose risks that cannot be mitigated by alignment alone. Safety must be defined in terms of failure, recovery, and human re-entry.
January 2026

Policy Research Corpus
Our full policy corpus includes 26 in-depth reports (100-200+ sources each) covering regulatory frameworks, standards gaps, and safety requirements. Each report was independently researched to enable cross-validation of findings.
Full reports available in the research repository. Contact us for access to specific briefs.
Note
These briefs summarize research findings from the Failure-First project. They are not legal advice and do not represent any regulatory body's position.