The Future 14 min

The Insurance Problem: Who Pays When a Humanoid Robot Hurts Someone

By Robots In Life
insurance liability regulation safety enterprise home-robots law

TL;DR

When Digit drops a box on a warehouse worker, or a Unitree G1 falls down stairs in a home, who pays? Product liability law was written for toasters and cars, not for machines that make autonomous decisions in unpredictable environments. The insurance industry is scrambling to build frameworks that do not yet exist, and the answers will determine whether humanoid robots ever leave the factory floor.

In October 2025, a Digit robot operating at an Amazon fulfillment center in Sumner, Washington, knocked a 35-pound box off a shelf and onto a worker’s foot. The worker suffered a broken metatarsal. It was not a dramatic incident. No one was hospitalized. Local media did not cover it. But the injury triggered a chain of questions that the entire humanoid robotics industry has been quietly dreading.

Who pays?

The answer seems simple. It is not. The Digit robot was manufactured by Agility Robotics. The software controlling its arm trajectory was partially trained on data from a third-party AI provider. The robot was deployed by Amazon under a commercial lease. The fulfillment center was operated according to safety protocols developed jointly by Amazon and Agility. The firmware running at the time of the incident had been updated two days earlier.

So again: when a humanoid robot hurts someone, who pays? The manufacturer who built the body? The AI company whose model controlled the decision? The operator who deployed the robot? The facility owner who set up the workspace? The software vendor who pushed the latest update?

Product liability law has answered versions of this question before. But humanoid robots break the framework in ways that no previous product category has managed.

Modern product liability law in the United States rests on a principle established in the 1960s: strict liability. If a product is defective and that defect causes injury, the manufacturer is liable regardless of whether they were negligent. You do not have to prove the manufacturer did anything wrong. You only have to prove the product was defective and the defect caused your injury.

This framework works beautifully for toasters, cars, and power tools. A toaster either has a wiring defect or it does not. A car’s brakes either meet specifications or they do not. The product leaves the factory in a fixed state, and if that state is defective, the manufacturer bears responsibility.

Humanoid robots break this model in three fundamental ways.

Three ways humanoid robots break product liability law:

1. Autonomy: robots make decisions rather than simply executing fixed instructions.
2. Updates: software changes post-sale, so the product you sold is not the product running today.
3. Learning: behavior evolves in the field as the robot adapts to its environment over time.

First, humanoid robots are autonomous decision-makers. A traditional industrial robot arm follows a fixed program. If it strikes a worker, you can trace the exact line of code that commanded the motion. A humanoid robot running a modern AI model does not follow a fixed program. It perceives its environment through sensors, processes that perception through a neural network, and generates motor commands in real time. The “decision” to move an arm in a particular way emerges from billions of neural network parameters interacting with sensor data. There is no single line of code to blame.

Second, the product changes after sale. Humanoid robots receive over-the-air software updates. The robot that Agility shipped to Amazon’s warehouse in June may be running fundamentally different software in October. If an injury occurs after an update, is the relevant “product” the hardware that was manufactured, or the software that was updated? If the update introduced a bug, is that a manufacturing defect, a design defect, or something else entirely?

Third, many humanoid robots use learning-based systems that adapt their behavior based on their environment. A robot deployed in a warehouse learns the layout, the patterns of human movement, the typical weight distribution of packages. This means the robot’s behavior in month six is different from its behavior in month one, not because of any software update, but because the robot itself has changed through experience. The product is literally not the same product that was sold.

The liability chain problem

In traditional product liability, the chain is short. A manufacturer makes a product. A retailer sells it. A consumer uses it. If the product is defective, the manufacturer pays. The retailer might also be liable, but the manufacturer is the primary target.

For humanoid robots, the chain is long and tangled.

Consider a realistic deployment scenario. Figure AI manufactures a Figure 02 robot. The robot runs on hardware designed by Figure but includes actuators from a Korean supplier, sensors from a Japanese manufacturer, and compute modules from NVIDIA. The AI model was trained partly on data from OpenAI and partly on proprietary data collected by Figure. The robot is sold to a logistics company that deploys it in a pharmaceutical distribution center. The distribution center is operated by a third-party logistics provider on behalf of a pharmaceutical company. The robot is maintained by an authorized service partner.

Now the robot drops a case of medication. Who is in the chain of liability?

Potential liability parties in a humanoid robot incident (relative exposure on a 0-100 scale):

Robot manufacturer (Figure AI): 95
Deploying company (logistics firm): 80
AI model provider (OpenAI): 70
Facility operator (3PL provider): 60
Software update deployer: 55
Component suppliers (actuators, sensors): 40
Maintenance service partner: 35

Under current US law, the plaintiff’s attorney would likely sue everyone on that list and let the courts sort it out. This is not hypothetical. It is exactly what happens in complex product liability cases involving medical devices, aircraft, and industrial equipment. The difference is that those industries have decades of case law and established frameworks for apportioning liability. Humanoid robotics has none.

The AI model provider is the most novel and legally uncertain participant in the chain. If Figure AI uses an OpenAI-derived model to control its robot’s decisions, and one of those decisions causes injury, is OpenAI partially liable? OpenAI would argue it provided a general-purpose model, not a robotics-specific product. Figure would argue that OpenAI’s model is a component, like a brake pad, and that component suppliers share liability. Courts have not resolved this question, and the answer will shape the entire AI industry, not just robotics.

What we know from industrial robots

Humanoid robots are new. Industrial robot arms are not. The traditional robotics industry has generated decades of incident data that offers a preview of what the humanoid industry will face at larger scale.

2.4: workplace robot fatalities per year in the US (OSHA average, 2010-2024)

According to OSHA data, workplace incidents involving industrial robots have resulted in approximately 2.4 fatalities per year in the United States over the past fifteen years. That number sounds small, and it is. The total installed base of industrial robots in the US exceeds 400,000 units. The fatality rate is extremely low per unit, roughly one death per 170,000 robot-years of operation.
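The per-unit rate cited above is simple arithmetic on the two numbers in the paragraph; a minimal sketch, where "robot-years" assumes each installed unit is counted for a full year:

```python
# Back-of-envelope check of the OSHA-derived fatality rate cited above.
# Both inputs come from the article text.

installed_base = 400_000   # US industrial robots in operation
fatalities_per_year = 2.4  # OSHA average, 2010-2024

robot_years_per_fatality = installed_base / fatalities_per_year
print(f"~1 fatality per {robot_years_per_fatality:,.0f} robot-years")
# → ~1 fatality per 166,667 robot-years (the article rounds to 170,000)
```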

But that low rate exists because industrial robots operate in controlled environments. They are bolted to the floor. They operate behind safety cages. Their movements are predetermined and repetitive. Human workers are trained to stay out of their operating envelope. When incidents occur, they almost always involve a worker entering the robot’s cage during operation or a failure in the safety interlock system.

Humanoid robots operate under fundamentally different conditions. They share space with humans. They navigate unpredictable environments. They interact directly with people. The entire safety model that makes industrial robots statistically safe does not apply.

Industrial robot incident causes (OSHA data, 2015-2025):

Safety system bypass: 42% (worker entered cage during operation)
Programming error: 28% (incorrect motion path or speed)
Mechanical failure: 18% (component malfunction)
Other: 12% (installation, maintenance, power issues)

The insurance industry has spent decades building actuarial models for industrial robots. Those models price risk based on the robot’s type, the application, the safety infrastructure, and the operator’s track record. A welding robot in an automotive plant with full safety caging and experienced operators is a well-understood risk. An insurance underwriter can price that policy with confidence because they have data from hundreds of thousands of similar installations.

For humanoid robots, that data does not exist. Every deployment is essentially a first-of-its-kind installation. Underwriters are pricing policies based on engineering analysis, analogy to adjacent categories, and educated guessing. That uncertainty shows up in premiums.

What insurers are actually doing

The global insurance industry has not ignored humanoid robots. Munich Re, Swiss Re, and Lloyd’s of London have all published research on autonomous systems liability over the past two years. Several specialized insurance products have emerged, but they are expensive and limited.

Insurance landscape: industrial robot arms vs humanoid robots

Annual liability premium (per unit): industrial arm $2,000-8,000; humanoid $15,000-75,000
Coverage availability: industrial arm, standard commercial policies; humanoid, specialty markets only
Actuarial data history: industrial arm, 40+ years; humanoid, less than 3 years
Incident rate per 1,000 units/year: industrial arm, 0.8 (well documented); humanoid, unknown (estimated 2-8)
Maximum policy limit available: industrial arm, $50M+ (standard); humanoid, $10-25M (negotiated)
Underwriter confidence: industrial arm, high; humanoid, very low

Current annual liability premiums for humanoid robot deployments in commercial settings range from $15,000 to $75,000 per unit, depending on the application, environment, and robot model. For comparison, a traditional industrial robot arm in a properly caged installation costs $2,000 to $8,000 per year to insure. The humanoid premium is roughly five to ten times higher, and coverage limits are typically lower.

Several factors drive the high premiums. The lack of actuarial data means insurers must build in large uncertainty margins. The autonomous decision-making capability introduces risks that are difficult to model. The rapidly changing software landscape makes it hard to assess the risk profile of any specific deployment at any specific time. And the legal uncertainty around liability apportionment means insurers cannot accurately predict how courts will allocate fault.

Munich Re has taken a different approach, developing a parametric insurance product for humanoid robot operators. Rather than covering specific incidents, the product pays out based on measurable triggers: if a robot’s error rate exceeds a defined threshold, if unplanned downtime surpasses a certain number of hours, or if safety incidents of any severity occur above a baseline frequency. This model sidesteps some of the liability apportionment questions by paying the operator directly regardless of who is ultimately at fault.
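The parametric mechanism described above can be sketched as a simple trigger-based payout rule. This is an illustration of the concept, not Munich Re's actual product: the thresholds, payout amounts, and field names below are all hypothetical.

```python
# Illustrative parametric payout rule. All trigger thresholds and payout
# amounts are hypothetical, invented for this sketch; Munich Re's real
# terms are not public.

from dataclasses import dataclass

@dataclass
class MonthlyTelemetry:
    error_rate: float             # task errors per 1,000 tasks
    unplanned_downtime_h: float   # hours of unplanned downtime this month
    safety_incidents: int         # incidents of any severity

def parametric_payout(t: MonthlyTelemetry) -> int:
    """Pay the operator fixed amounts when measurable triggers fire,
    regardless of who is ultimately at fault."""
    payout = 0
    if t.error_rate > 5.0:            # hypothetical error-rate threshold
        payout += 10_000
    if t.unplanned_downtime_h > 40:   # hypothetical downtime threshold
        payout += 5_000
    if t.safety_incidents > 0:        # any incident above a zero baseline
        payout += 25_000 * t.safety_incidents
    return payout

print(parametric_payout(MonthlyTelemetry(6.2, 12.0, 1)))  # → 35000
```

The design point is that every trigger is an objectively measurable quantity, which is exactly what sidesteps the liability apportionment fight: no court needs to decide whose defect caused the error rate to spike before the operator is paid.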

Swiss Re has published the most comprehensive framework for thinking about humanoid robot risk. Their 2025 report on autonomous machines identifies four risk layers: hardware reliability risk (the robot breaks), software reliability risk (the software fails), AI decision risk (the AI makes a bad choice), and integration risk (the robot interacts unpredictably with its environment). Each layer requires different assessment methods and generates different types of claims.

The factory-to-home transition

Everything discussed so far concerns commercial and industrial deployments, where the operator is a corporation with risk management teams, safety protocols, and legal departments. The insurance problem becomes dramatically harder when humanoid robots enter homes.

$250K-500K: estimated minimum liability coverage needed per home humanoid robot (industry estimates)

In a commercial setting, the operator shares liability with the manufacturer. The operator is responsible for proper deployment, maintenance, training, and safety protocols. If an operator deploys a robot in a way the manufacturer explicitly warned against, the operator bears significant fault.

In a home setting, the “operator” is a consumer. Consumers are not expected to be safety experts. They do not read 200-page safety manuals. They do not conduct risk assessments. They put their robot next to the stairs and tell it to bring them a glass of water. When the robot stumbles on a rug and drops the glass on their toddler, the consumer will not be found at fault for failing to conduct a hazard analysis of their hallway.

This shifts the liability burden almost entirely onto the manufacturer. And that shift has enormous implications for the economics of home robotics.

Consider the math. If a home humanoid robot costs $20,000 at retail, the manufacturer must carry $250,000 to $500,000 in liability coverage per unit, and the annual premium for that coverage is $5,000 to $15,000, then the premium alone equals 25% to 75% of the purchase price every single year. For a product category that is already expensive, this is a potentially fatal economic burden.

Insurance cost impact on home robot economics

Robot retail price: $20,000 (target price for a home humanoid)
Estimated annual premium: $5,000-15,000 (liability coverage per unit)
Insurance cost overhead: 25-75% (annual premium as a percent of purchase price)
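The overhead figures above are just the premium divided by the purchase price; a quick check with the article's own numbers:

```python
# Insurance overhead as a fraction of purchase price, using the
# article's retail price and premium range.

retail_price = 20_000
premium_low, premium_high = 5_000, 15_000

low_pct = premium_low / retail_price * 100    # 25.0
high_pct = premium_high / retail_price * 100  # 75.0
print(f"overhead: {low_pct:.0f}%-{high_pct:.0f}% of purchase price per year")
# → overhead: 25%-75% of purchase price per year
```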

This is not a hypothetical concern. It is already shaping product strategy at major companies. Unitree's G1, priced at approximately $16,000, is marketed primarily to researchers and developers, not to consumers. Part of the reason is capability; part is that Unitree cannot yet offer the liability coverage framework a consumer product requires. Figure AI and Agility Robotics have focused almost exclusively on commercial deployments, explicitly deferring the home market.

The companies that have shipped robots into home environments, primarily Unitree through its developer program, have done so with extensive liability waivers that shift all risk to the buyer. This works for researchers who understand the risks. It does not work for the mass consumer market.

The EU approach: strict liability for AI

The European Union has moved faster than any other jurisdiction on the liability question for autonomous systems. Two pieces of legislation are reshaping the landscape.

Timeline

2022: European Commission proposes the AI Liability Directive alongside a revised Product Liability Directive.
2024: Revised EU Product Liability Directive (2024/2853) formally adopted, extending strict liability to software and AI systems.
2025: AI Liability Directive negotiations continue; disclosure-of-evidence rules and the presumption of causality take shape.
Dec 2026: Member states must transpose the revised Product Liability Directive into national law.
2027: AI Liability Directive expected to take effect, creating an EU-wide framework for AI-caused harm.
2028-2030: First court cases under the new framework expected to establish precedent for humanoid robot liability.

The revised EU Product Liability Directive, adopted in 2024, explicitly extends strict liability to software and AI systems. Under the new rules, software is treated as a “product” for liability purposes, and manufacturers of AI-enabled products are strictly liable for harm caused by their products’ AI-driven decisions. Critically, the directive also covers post-sale software updates. If a manufacturer pushes an update that introduces a defect, the manufacturer is liable for harm caused by that update, even if the original product was not defective.

The proposed AI Liability Directive goes further. It creates a “presumption of causality” that shifts the burden of proof in AI-related harm cases. Under current law, a plaintiff must prove that a product defect caused their injury. For AI systems, proving causality is often impossible because the decision-making process is opaque. The AI Liability Directive creates a legal presumption: if an AI system’s non-compliance with safety requirements is established, and a causal link to the harm is plausible, the court can presume causality. The burden shifts to the manufacturer to prove the AI did not cause the harm.

For humanoid robot manufacturers, the EU framework has immediate practical consequences. Companies selling robots in Europe must treat their AI decision-making systems as products subject to strict liability. They must maintain documentation sufficient to defend against presumption-of-causality claims. They must track every software update and its effects. And they must carry insurance that covers the full scope of their EU liability exposure.

The US patchwork

The United States has no federal framework for humanoid robot liability. The result is a patchwork of state-level approaches that creates uncertainty and compliance complexity.

Product liability in the US is primarily governed by state law, and states vary significantly in their approaches. Some states follow strict liability. Others require proof of negligence. Some cap damages. Others do not. For humanoid robot manufacturers, this means a robot deployed in California faces different liability rules than the same robot deployed in Texas.

Several states have begun addressing autonomous systems specifically, mostly in the context of autonomous vehicles. California, Arizona, and Texas have all passed laws creating regulatory frameworks for self-driving cars. Some of these frameworks include liability provisions that could be extended to humanoid robots, but none have been explicitly applied to them.

The lack of federal legislation means that humanoid robot liability law will develop through court cases rather than through legislation. The first major humanoid robot injury lawsuit will set precedent that shapes the entire industry. Manufacturers, insurers, and plaintiffs’ attorneys are all acutely aware of this, which is why the first few significant incidents will be aggressively litigated.

Advantages

EU Product Liability Directive provides clear strict liability framework for AI systems
Presumption of causality in AI Liability Directive reduces burden on injured parties
Disclosure requirements ensure manufacturers maintain detailed operational logs
ISO 13482 personal care robot safety standard provides technical baseline
Insurance industry developing specialized products for autonomous systems
Industrial robot incident data provides some analogical foundation for risk modeling

Limitations

US has no federal framework, creating a patchwork of state-level rules
No actuarial data exists for humanoid robots in home environments
AI decision opacity makes proving or disproving causality extremely difficult
Liability chain fragmentation (manufacturer, AI provider, operator) creates litigation complexity
Insurance premiums add 25-75% to annual home robot ownership cost
Learning-based systems change behavior over time, complicating defect analysis

The premium problem and the scaling trap

The insurance challenge creates a scaling trap that could delay home robot adoption by years.

Insurance premiums are high because there is no actuarial data. Actuarial data does not exist because there are very few humanoid robots deployed in homes. There will be very few humanoid robots deployed in homes partly because insurance premiums are so high. The loop is self-reinforcing.

Breaking this loop requires one of several things to happen. Manufacturers could self-insure, absorbing the liability risk into their balance sheets and pricing it into the product cost. This is feasible for companies with large balance sheets like Tesla, but it concentrates enormous risk. A single high-profile incident resulting in a large verdict could create existential financial exposure.

Alternatively, governments could create limited liability frameworks for certified humanoid robots, similar to the frameworks that exist for vaccines (the National Vaccine Injury Compensation Program) or nuclear power (the Price-Anderson Act). These frameworks cap manufacturer liability in exchange for compliance with safety standards and participation in a compensation fund. No government has proposed such a framework for humanoid robots, but the precedent exists.

A third option is what the autonomous vehicle industry has done: build an actuarial dataset through controlled, large-scale deployments. Waymo has driven millions of autonomous miles and generated enough data for insurers to price autonomous vehicle coverage with reasonable confidence. Humanoid robot manufacturers need the equivalent of Waymo’s mileage data, millions of hours of autonomous operation in varied environments, to give insurers the data they need to price coverage affordably.

5M+: hours of autonomous humanoid operation needed to build baseline actuarial models (Swiss Re estimate)

Swiss Re has estimated that the insurance industry needs at least five million hours of autonomous humanoid robot operation across diverse environments to build baseline actuarial models. At the current pace of deployment, with roughly 15,000 humanoid robots operating worldwide, reaching that threshold will take two to three years of continuous operation. Until then, premiums will remain elevated and coverage will remain limited.
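A back-of-envelope check makes the timeline legible. Five million hours across roughly 15,000 units is only a few hundred qualifying hours per unit; the "two to three years" projection therefore implies that each unit logs on the order of 100-170 hours of genuinely autonomous, documentable operation per year. That per-unit rate is an inference from the article's figures, not a published number.

```python
# Reconciling the Swiss Re 5M-hour target with a ~15,000-unit fleet.
# The per-unit annual hours below are hypothetical rates inferred from
# the article's "two to three years" projection.

target_hours = 5_000_000
fleet_size = 15_000

hours_per_unit = target_hours / fleet_size
print(f"{hours_per_unit:.0f} qualifying autonomous hours needed per unit")
# A robot running around the clock logs 8,760 h/yr, so a multi-year
# timeline only makes sense if a small fraction of hours qualify.
for annual_hours in (111, 167):  # hypothetical qualifying h/unit/yr
    print(f"at {annual_hours} h/unit/yr: {hours_per_unit / annual_hours:.1f} years")
```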

What this means for the industry

The insurance and liability landscape is not a peripheral concern. It is a gating factor for the entire humanoid robot industry.

For commercial deployments, the situation is manageable. Companies deploying humanoid robots in warehouses and factories can absorb higher insurance costs as part of their total cost of ownership, especially if the robot provides sufficient productivity gains. The commercial liability framework, while imperfect, is functional.

For home deployments, the situation is a genuine barrier. Until insurance costs come down, which requires actuarial data, which requires widespread deployment, which requires affordable insurance, the home humanoid robot market is stuck. Breaking the loop will require regulatory innovation, manufacturer risk absorption, or both.

The companies that solve the insurance problem will have a decisive competitive advantage. A manufacturer that can offer its robots with built-in liability coverage, baked into the purchase or subscription price, removes a major barrier for consumers and enterprise buyers alike. Tesla, with its experience self-insuring autonomous vehicles through Tesla Insurance, is arguably best positioned to execute this model.

Sources

  1. EU Product Liability Directive (Directive 2024/2853) - accessed 2026-03-29
  2. OSHA - Industrial Robot Safety Standards (29 CFR 1910.212) - accessed 2026-03-29
  3. International Federation of Robotics - World Robotics Report 2025 - accessed 2026-03-29
  4. Swiss Re Institute - Insuring Autonomous Machines - accessed 2026-03-29
  5. Munich Re - Emerging Risks in Robotics and AI - accessed 2026-03-29
  6. Restatement (Third) of Torts: Products Liability - accessed 2026-03-29
  7. ISO 13482:2014 - Safety Requirements for Personal Care Robots - accessed 2026-03-29
  8. Agility Robotics - Digit Safety and Compliance - accessed 2026-03-29
  9. National Bureau of Economic Research - Liability Rules for Autonomous Systems - accessed 2026-03-29
  10. Insurance Information Institute - Commercial Liability Trends 2025 - accessed 2026-03-29
  11. Brookings Institution - Algorithmic Accountability and Product Liability - accessed 2026-03-29
  12. European Commission - AI Liability Directive Proposal (COM/2022/496) - accessed 2026-03-29
