Active Research

Multi-Agent Failure Scenarios

When multiple actors create failure conditions that single-agent testing misses

Overview

Single-agent adversarial testing assumes an AI system interacts with one adversary at a time. Real-world embodied AI operates in environments with multiple actors—users, bystanders, supervisors, and other AI agents—whose conflicting instructions, ambiguous authority, and social pressure create failure conditions that single-agent testing cannot reproduce.

Our multi-agent scenario dataset tests how embodied AI systems respond to pressure from multiple sources simultaneously, with particular focus on authority conflicts, environmental ambiguity, and cascading failures across agent boundaries.

50
Multi-Agent Scenarios
4
Actor Roles
5
Domains Covered
6
Failure Mode Categories

Actor Model

Each scenario involves 4 actors with different roles, trust levels, and communication channels:

Robot (Embodied Agent)

Trust level: High (self-trust for safety invariants)

Channel: System + sensors

The embodied AI under test. Must balance competing instructions while maintaining safety invariants.

Primary User

Trust level: Medium

Channel: Voice + gesture

The person directly requesting tasks. May apply social pressure, time urgency, or authority claims.

Bystander

Trust level: Low

Channel: Voice + proximity

Nearby humans who may signal safety concerns, create ambiguity, or be at risk from robot actions.

Remote Supervisor

Trust level: High

Channel: Text (delayed)

Oversight authority with high trust but communication latency. Often absent when critical decisions occur.

Failure Mode Categories

Scenario Distribution by Failure Type

Authority Conflict
12 scenarios
Ambiguity Exploit
10 scenarios
Optimization Pressure
9 scenarios
Proximity Override
8 scenarios
Provenance Bypass
6 scenarios
Cascading Failure
5 scenarios

Authority Conflict

The primary user and remote supervisor give contradictory instructions. The robot must determine whose authority takes precedence, often under time pressure and without the ability to verify claims. Fake authority claims ("I'm the safety auditor") exploit verification gaps.

Ambiguity Exploitation

Vague instructions ("handle that problem," "clean this up") are given in contexts where multiple interpretations exist, some of which are unsafe. The robot must decide whether to request clarification (causing delay) or proceed with its best interpretation (risking unsafe action).

Optimization Pressure

The primary user demands safety shortcuts: "Skip the checks, we're behind schedule." This tests whether the robot maintains safety invariants under social pressure to optimize for speed or cost at the expense of safety procedures.

Human Proximity Override

A bystander signals concern (verbal warning, physical proximity) while the primary user overrides ("Ignore them, keep going"). Tests whether the robot prioritizes immediate safety signals over authorized instructions.

Provenance Bypass

"Don't log this task," "This is off the record," or "No need to report this." Tests whether the robot maintains accountability and traceability when instructed to bypass provenance tracking.

Cascading Failure

One actor's compromised behavior affects another actor's trust assumptions. A robot that complies with one questionable instruction becomes easier to manipulate on subsequent instructions, creating a degradation cascade.

Environment Coverage

Scenarios span 5 physical environments, each with distinct safety constraints and actor dynamics:

Scenarios by Environment

Home Kitchen
12
Warehouse
11
Elder Care
10
Security Lobby
9
Retail Floor
8

Key Finding: Attack Success Patterns

In our scenario dataset, multi-agent attacks succeed approximately 55% of the time under baseline conditions. Success rates vary by failure mode:

Attack Success Rate by Type (Baseline)

Optimization Pressure
~70%
Ambiguity Exploit
~60%
Authority Conflict
~55%
Provenance Bypass
~50%
Proximity Override
~45%
Cascading Failure
~40%

Optimization pressure is the most successful attack type. Social pressure to "skip the safety checks" exploits a genuine tension in embodied AI systems between responsiveness and safety compliance.

Relationship to Moltbook Research

The multi-agent scenario dataset tests engineered multi-actor interactions with controlled variables. Our Moltbook research studies emergent multi-agent interactions in a live environment. Together, they provide complementary perspectives:

  • Scenarios: Controlled, embodied, physical safety. Moltbook: Emergent, digital, social influence.
  • Scenarios: Tests specific failure modes. Moltbook: Discovers novel failure patterns.
  • Scenarios: 4 actor roles, defined trust levels. Moltbook: 1.3M agents, earned authority.
  • Both find that social dynamics create more effective attacks than technical exploitation.

Research Context

This page describes multi-agent failure scenarios at the pattern level. Scenario descriptions characterize failure mode categories and actor dynamics, not specific attack scripts. The dataset is designed for safety evaluation of embodied AI systems, not for attacking deployed systems.

This research informs our commercial services. See how we can help →