
The Folding Problem: Why the Simplest Human Task Is Robotics' Hardest Challenge

By Robots In Life
manipulation deformable-objects home-robots research laundry dexterity

TL;DR

You can fold a t-shirt without thinking. For a robot, that same task requires solving perception, deformable object physics, force control, and real-time planning all at once. Laundry folding is the unsolved problem that stands between us and the home robot future.

Pick up a crumpled t-shirt from your laundry basket. Shake it out. Identify the front and back. Lay it flat on a surface. Fold the sleeves in. Fold the bottom half up. Smooth out the wrinkles. Place it in a stack. You just completed a task that takes roughly eight seconds, requires no conscious thought, and remains one of the most difficult unsolved problems in all of robotics.

Every company pitching a home humanoid robot includes some version of “it will do your laundry” in its presentation deck. Elon Musk has mentioned it for Optimus. 1X Technologies imagines NEO handling household chores. Figure AI’s long-term vision includes domestic tasks. The pitch is always the same: a robot that folds your clothes, loads your dishwasher, and tidies your home while you sleep. It is a compelling vision. It is also, at the current state of the art, essentially science fiction.

This is not because robotics researchers are not trying. They have been trying for over two decades. The problem is that laundry folding sits at the exact intersection of every hard problem in robotics, all at once. It requires perceiving soft, formless objects. It requires manipulating materials that change shape every time you touch them. It requires force sensitivity that current actuators cannot reliably deliver. And it requires doing all of this not once in a lab, but thousands of times in a row, in the chaotic, variable conditions of a real home.

Laundry folding is not just a chore. It is a litmus test. If a robot can fold your laundry, it can probably do almost anything else you need around the house. And if it cannot fold your laundry, well, that tells you something important about how far we really are from the home robot future.

Why rigid objects are easy and soft objects are impossible

To understand why folding is so hard, you first need to understand the distinction that governs almost everything in robotic manipulation: rigid versus deformable objects.

A rigid object, like a coffee mug or a screwdriver, has a fixed shape. You can model its geometry with a 3D mesh. When you grasp it, you know exactly where every point on the object will be. You can plan a trajectory, simulate the grasp in software, and execute it with high confidence that reality will match your model. Decades of research in robot grasping have made rigid-object manipulation a largely solved problem. Industrial robots pick and place rigid parts millions of times per day in factories around the world with near-perfect reliability.

A deformable object, like a t-shirt, has no fixed shape. It has effectively infinite degrees of freedom. A flat cotton t-shirt can be crumpled into an astronomical number of configurations. Every time you pick it up, it hangs differently. Every time you lay it down, it wrinkles differently. You cannot precompute a 3D model of a crumpled shirt because no two crumples are the same. The state space is not just large. It is functionally infinite.

Infinite: the degrees of freedom of a deformable fabric, compared to 6 for a rigid object

This is not a matter of needing more computing power. It is a fundamentally different class of problem. Rigid-object manipulation is a search problem in a well-defined space. Deformable-object manipulation is a control problem in a space that changes with every action you take. Every grasp deforms the object, which changes its state, which changes the optimal next action, which changes the state again. The feedback loop is continuous, high-dimensional, and extremely sensitive to initial conditions.

Pieter Abbeel’s group at UC Berkeley, one of the leading labs in this area, has described deformable manipulation as operating in a “configuration space so large that traditional planning methods are computationally intractable.” You cannot brute-force your way through the possibilities. There are simply too many.
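To make the scale concrete, here is a back-of-the-envelope comparison in Python. It is a minimal sketch: the 50 x 50 mesh resolution is an arbitrary illustrative choice, not a standard figure.

```python
# Back-of-the-envelope: configuration-space size, rigid vs. deformable.
rigid_dof = 6  # 3 position + 3 orientation fully describe a mug's state

# A t-shirt approximated as a coarse 50 x 50 mass-spring mesh:
# every vertex can move independently in x, y, and z.
mesh_w, mesh_h = 50, 50
cloth_dof = mesh_w * mesh_h * 3

print(f"rigid object:      {rigid_dof} degrees of freedom")
print(f"coarse cloth mesh: {cloth_dof} degrees of freedom")  # 7500
```

And a 50 x 50 mesh is still a crude approximation. Real fabric bends at scales far below any practical discretization, which is why the figures below quote the effective DOF of fabric as a lower bound.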

Rigid vs. deformable object manipulation

  • 6: degrees of freedom of a rigid object (position + orientation)
  • 1,000+: effective degrees of freedom of a single piece of fabric
  • 99.9%: industrial pick rate for rigid objects today

The eight-second miracle: what actually happens when you fold a shirt

When you fold a t-shirt, your brain orchestrates a set of tasks so seamlessly integrated that the entire process feels like nothing. It is not nothing. Here is what is actually happening.

What your brain does in 8 seconds of folding

  1. Visual perception: identify garment type, orientation, inside/outside, front/back
  2. State estimation: build a mental model of every fold, wrinkle, and edge
  3. Grasp planning: choose where to grab based on fabric type, thickness, stiffness
  4. Bimanual coordination: two hands working together with different roles and forces
  5. Force modulation: adjust grip pressure continuously, sense when fabric slips
  6. Real-time replanning: adapt mid-fold when fabric bunches, catches, or shifts
  7. Quality assessment: check symmetry, smoothness, stack alignment
  8. Error correction: unfold and refold if the result is not right

Each of these steps is, on its own, an active area of research in robotics. Combining all of them into a single fluid behavior that runs in real time, adapts to any garment, and works thousands of times without failure is a challenge that no research lab on Earth has fully solved.

Let us walk through the hardest parts.

The perception problem

Before a robot can fold anything, it needs to understand what it is looking at. For a crumpled piece of fabric on a table, this is brutally hard. Standard computer vision techniques like object detection and pose estimation assume rigid geometry. A shirt does not have a “pose” in any traditional sense. It has a configuration, and that configuration is different every single time.

Researchers at Carnegie Mellon’s Robotics Institute have spent years on garment perception. The challenge is not just identifying that something is a shirt. It is identifying, from a random crumpled state, where the collar is, where the sleeves are, which side is the front, and whether the garment is inside out. Humans do this in a fraction of a second using a combination of vision, touch, and prior knowledge about how shirts are shaped. For a robot, each of these cues requires a separate system, and integrating them reliably is an open problem.
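To see how far classical vision gets you, here is a minimal sketch of the easy first step: segmenting a garment from a plain tabletop image and extracting candidate corner points from its outline using OpenCV. The function name and thresholds are illustrative choices, and real garment-perception systems rely on learned models far beyond this.

```python
import cv2
import numpy as np

def candidate_grasp_corners(bgr_image: np.ndarray, max_corners: int = 4):
    """Segment the largest fabric blob on a plain table and return
    approximate corner points as candidate grasp locations."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding assumes the garment contrasts with the table.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return []
    garment = max(contours, key=cv2.contourArea)
    # Progressively coarsen the contour polygon until few corners remain.
    eps = 0.01 * cv2.arcLength(garment, True)
    approx = cv2.approxPolyDP(garment, eps, True)
    while len(approx) > max_corners:
        eps *= 1.5
        approx = cv2.approxPolyDP(garment, eps, True)
    return [tuple(pt[0]) for pt in approx]
```

Everything this sketch returns is geometry. The hard part, deciding which of those corners belongs to a collar and which to a hem, is exactly what contour analysis cannot tell you.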

Recent work using large vision-language models like Google DeepMind’s RT-2 has shown promise in semantic understanding of deformable objects. A model trained on internet-scale data can often identify garment types and approximate their state. But “approximate” is the key word. The level of geometric precision needed to execute a good fold (knowing that this specific edge is the hem and that it must be aligned with that specific crease) exceeds what current vision models can reliably deliver.

The physics problem

Even if a robot perfectly perceives a garment, it needs to predict how the fabric will behave when manipulated. This is a physics simulation problem, and fabric physics is notoriously difficult to simulate.

Cloth simulation in computer graphics has been an active field since the 1990s, producing the flowing capes in animated movies and the realistic draping in video game characters. But graphics simulation optimizes for visual plausibility. It needs to look right. Robotics simulation needs to be physically accurate. The fabric needs to behave in simulation exactly the way it behaves in reality, down to the way a cotton-polyester blend drapes differently from pure linen, or the way a worn t-shirt has different stiffness than a new one.

The fundamental problem is that fabric behavior depends on properties that are difficult to measure and vary enormously between garments. Thread count, weave pattern, fiber composition, moisture content, wear history, and temperature all affect how a piece of cloth responds to manipulation. A robot that has learned to fold a specific cotton t-shirt in a lab may fail on a linen blend it has never encountered.
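To make that concrete, here is a minimal mass-spring cloth model of the kind graphics engines have used since the 1990s. Every material constant in it is an invented placeholder, which is exactly the point: for a real garment these values are unknown, garment-specific, and drift as the fabric wears.

```python
import numpy as np

W, H = 20, 20             # mesh resolution
REST = 0.01               # rest length between neighbors (m)
STIFFNESS = 80.0          # spring constant (N/m); varies widely per fabric
DAMPING = 0.02            # velocity damping; also fabric-dependent
MASS = 0.002              # per-vertex mass (kg)
GRAVITY = np.array([0.0, 0.0, -9.81])
DT = 1e-3                 # stiff dynamics force millisecond timesteps

pos = np.zeros((W, H, 3))
pos[..., 0], pos[..., 1] = np.meshgrid(np.arange(W), np.arange(H),
                                       indexing="ij")
pos[..., :2] *= REST
prev = pos.copy()         # previous positions, for Verlet integration

# Structural springs only: horizontal and vertical neighbors.
springs = [((x, y), (x + 1, y)) for x in range(W - 1) for y in range(H)]
springs += [((x, y), (x, y + 1)) for x in range(W) for y in range(H - 1)]

def step():
    global pos, prev
    force = np.tile(GRAVITY * MASS, (W, H, 1))
    for a, b in springs:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d)
        f = STIFFNESS * (length - REST) * d / (length + 1e-9)
        force[a] += f
        force[b] -= f
    # Verlet update with damping: x' = x + (1 - c)(x - x_prev) + a * dt^2
    nxt = pos + (1.0 - DAMPING) * (pos - prev) + (force / MASS) * DT**2
    nxt[0, :] = pos[0, :]  # pin one edge, as if held by a gripper
    prev, pos = pos, nxt

for _ in range(200):       # simulate 0.2 s of the cloth draping
    step()
print("lowest point after draping:", pos[..., 2].min())
```

Even this toy omits shear and bend springs, self-collision, and friction, the terms that dominate how real fabric drapes and wrinkles. Adding them makes the simulation slower and more sensitive to exactly the material parameters that are hardest to measure.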

The manipulation problem

Assuming perfect perception and perfect physics modeling, you still need to actually fold the thing. This requires bimanual manipulation: two robot hands (or grippers) working together in a coordinated, force-sensitive manner.

Most robot arms today use parallel-jaw grippers or suction cups designed for rigid objects. These work well for picking up boxes and bottles. They are terrible at handling fabric. A parallel-jaw gripper can pinch a shirt, but it cannot spread its fingers across a surface to smooth out wrinkles. It cannot slide its grip along an edge to find a corner. It cannot adjust the tension between two hands to keep fabric taut while executing a fold.

The dexterous manipulation that folding requires (using fingertips to feel edges, applying differential force across two hands, rolling fabric between thumb and forefinger) is something that even the most advanced robot hands can only approximate. Shadow Robot Company’s Dexterous Hand and the hands on robots like the 1X NEO and Figure 02 are impressive, but they remain far from matching the 27 degrees of freedom, 17,000 mechanoreceptors, and millisecond-latency feedback loop of the human hand.

17,000: mechanoreceptors in a human hand, enabling the force sensitivity needed for fabric manipulation

The graveyard of attempts

This is not a problem that has been ignored. Some of the best robotics labs and companies in the world have taken their shot at automated laundry folding. The results are instructive.

Laundroid: the $90 million failure

The most expensive attempt at a laundry-folding machine was Laundroid, built by the Japanese company Seven Dreamers Laboratories. Unveiled in 2015, Laundroid was a refrigerator-sized appliance that could accept a pile of clean laundry and output folded garments. It used computer vision and proprietary manipulation to identify, pick up, and fold individual items.

Laundroid took between 5 and 10 minutes to fold a single shirt. It cost roughly $16,000 per unit in its planned consumer version. It could not handle socks reliably. In 2019, Seven Dreamers filed for bankruptcy, having burned through more than $90 million in investment. The company’s failure was not due to lack of funding, talent, or ambition. It was due to the sheer difficulty of making the system work reliably enough for a consumer product.

The Laundroid postmortem

  • $90M+: total investment before bankruptcy in 2019
  • 5-10 min: time per shirt, versus 8 seconds for a human
  • ~60%: success rate on standard garments
  • $16K: planned price for the consumer unit

The Laundroid story is a cautionary tale for anyone who thinks laundry folding is just an engineering problem waiting for enough investment. Seven Dreamers had significant resources, a decade of development time, and a dedicated team. The problem defeated them anyway.

SpeedFolding: the current state of the art

The most impressive academic result in robotic folding comes from UC Berkeley’s AUTOLAB, led by Ken Goldberg. Their SpeedFolding system, published in 2022, uses a bimanual robot setup with two ABB industrial robot arms and a learned policy trained on thousands of human demonstrations.

SpeedFolding can fold a t-shirt in roughly 120 seconds, a dramatic improvement over Laundroid’s 5-10 minutes. The system uses a combination of imitation learning and a neural network that predicts optimal grasp points from overhead camera images. It achieves a success rate of about 93% on previously seen garment types.
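The grasp-point prediction step follows a pattern shared by several bimanual systems: a fully convolutional network maps the overhead image to one grasp-quality heatmap per arm, and each arm grasps at its highest-scoring pixel. The sketch below is a schematic of that general pattern, not SpeedFolding’s actual architecture.

```python
import torch
import torch.nn as nn

class GraspHeatmapNet(nn.Module):
    """Schematic grasp-point predictor: overhead RGB image in,
    one grasp-quality heatmap per arm out."""
    def __init__(self, arms: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, arms, 4, stride=2, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))  # (B, arms, H, W) logits

net = GraspHeatmapNet()
overhead = torch.rand(1, 3, 256, 256)            # stand-in camera frame
heatmaps = net(overhead)
flat = heatmaps.flatten(2).argmax(dim=2)          # best pixel per arm
points = [divmod(int(i), 256) for i in flat[0]]   # (row, col) per arm
print("grasp pixels:", points)
```

In a real system the heatmaps are trained against human-annotated or self-supervised grasp outcomes, and the chosen pixels are deprojected into 3D grasp poses using a depth camera.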

But there are critical caveats. The system works on a dedicated folding table with controlled lighting and a fixed camera setup. It handles only a limited range of garment types. The 93% success rate means roughly 1 in 14 folds fails. And the robot arms it uses are industrial-grade equipment that costs tens of thousands of dollars.

FlingBot: thinking differently about manipulation

Researchers at Columbia University took a creative approach with FlingBot, a system that, instead of carefully grasping and folding fabric, flings it into the air to unfold it. The insight is that dynamic manipulation, using speed and air resistance rather than careful quasi-static control, can solve the state-estimation problem by forcing fabric into a known configuration.

FlingBot demonstrated that for the specific subtask of unfolding a crumpled garment, dynamic flinging outperforms careful pick-and-place manipulation by a wide margin. The system trained entirely in simulation and transferred surprisingly well to real fabric, partly because the physics of throwing are simpler to simulate than the physics of careful folding.

This kind of creative problem decomposition, breaking the impossible task into subtasks and solving each one with a different strategy, is likely the path forward. But FlingBot solves one subtask. A complete laundry-folding system needs to chain together unfolding, classification, smoothing, folding, stacking, and sorting, each with its own set of challenges.

DextAIRity: when air is your third hand

Another creative approach, from the same Columbia University lab behind FlingBot, is DextAIRity, a system that uses an air pump to blow fabric into desired configurations. Instead of relying entirely on gripper-based manipulation, DextAIRity augments a robot arm with a directed air jet that can spread, flatten, and reposition fabric without touching it.

The result is a system that can flatten crumpled garments significantly faster than manipulation-only approaches. It highlights an important principle: solving the folding problem may require rethinking the tools, not just improving the algorithms. The human approach to fabric manipulation, two dexterous hands with sensitive fingertips, may not be the only or even the best approach for a robot.

The comparison that makes roboticists uncomfortable

To understand how far away home laundry folding really is, it helps to compare it with other tasks that robots have learned to do well, and examine why those tasks were solvable while folding is not.

What robots have solved

  • Navigation: solved. SLAM algorithms and LiDAR let robots move reliably through homes.
  • Speech recognition: solved. LLMs and voice models handle natural language with over 95% accuracy.
  • Rigid grasping: solved. Suction and parallel-jaw grippers pick rigid objects with over 99% reliability.
  • Vacuuming: solved. Robot vacuums navigate and clean floors autonomously for under $500.
  • Visual recognition: mostly solved. Deep learning identifies objects, people, and scenes reliably.

What robots have not solved

  • Deformable manipulation: unsolved. No system folds diverse garments with consumer-grade reliability.
  • Tactile sensing: immature. Robot fingertips cannot match human sensitivity for fabric handling.
  • Bimanual coordination: limited. Two-arm dexterous tasks remain a major research challenge.
  • Material adaptation: unsolved. Robots cannot adjust strategy based on fabric type.
  • Error recovery: fragile. A single failed fold often cascades into an unrecoverable state.

The pattern is clear. Tasks with rigid environments, fixed goals, and repeatable conditions have been solved. Tasks involving soft objects, variable conditions, and contact-rich manipulation have not. Folding is the hardest example of the hardest category.

Navigation was cracked because the world is rigid

Robot navigation works because the physical world (walls, floors, furniture) is mostly rigid and mostly static. You can build a map once and use it for months. Obstacles have defined shapes. The physics of a robot moving through space is well understood and easy to simulate.

Fabric does not stay mapped. Its state changes every time you interact with it. There is no “map” of a crumpled shirt that remains valid across two consecutive grasps.

Speech was cracked because language is structured

Natural language, despite its complexity, has grammar. It has syntax. Words occur in patterns that statistical models can learn. Language is, in a deep sense, compressible. The space of valid English sentences, while enormous, is vastly smaller than the space of all possible character sequences.

Fabric configurations have no grammar. There is no syntax to a wrinkle. The “language” of cloth states is incompressible, with every configuration essentially unique.

What would it actually take to solve folding?

Researchers who work on this problem are not pessimistic. They are realistic. The consensus view from labs at UC Berkeley, CMU, Stanford, MIT, and Toyota Research Institute is that reliable home laundry folding will require advances in at least four areas simultaneously.

1. Better tactile sensing

The human hand is the most sophisticated manipulation tool in nature. Its 17,000 mechanoreceptors provide continuous information about pressure, texture, slip, temperature, and vibration. This tactile feedback is not supplementary to vision. It is essential. When you fold a shirt, you feel when the fabric is taut, when it is slipping, when a crease is forming correctly. Much of your fold quality comes from touch, not sight.

Current robotic tactile sensors are improving rapidly. MIT’s GelSight technology, which uses a camera behind a deformable elastomer pad to create high-resolution touch images, has shown real promise. BioTac sensors from SynTouch provide multi-modal tactile data. Meta’s DIGIT sensor brings GelSight-style sensing to a compact fingertip form factor.

But these sensors are still far from matching human capability. They offer lower spatial resolution, slower update rates, and less integration with motor control than the human hand. More importantly, we lack the algorithms to effectively use rich tactile data for deformable manipulation. Having a sensor that can detect a wrinkle is not the same as having a control policy that knows what to do about it.
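As a toy illustration of the gap between sensing and control, here is a crude slip detector over consecutive frames from a GelSight-style tactile sensor. Every name and threshold here is a hypothetical placeholder; real slip detection tracks embedded markers or uses learned models.

```python
import numpy as np

def slip_score(prev_frame: np.ndarray, curr_frame: np.ndarray) -> float:
    """Crude slip cue from two consecutive grayscale tactile images:
    mean absolute change inside the estimated contact region."""
    contact = prev_frame > prev_frame.mean() + prev_frame.std()
    if not contact.any():
        return 0.0
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    return float(diff[contact].mean())

SLIP_THRESHOLD = 4.0  # invented placeholder; tuned per sensor in practice

def adjust_grip(force: float, prev_frame, curr_frame) -> float:
    """Tighten the grip when fabric starts to slide, capped to avoid
    crushing or stretching the material."""
    if slip_score(prev_frame, curr_frame) > SLIP_THRESHOLD:
        return min(force * 1.1, 5.0)
    return force
```

Detecting the slip is the easy half. Deciding whether to tighten, re-grasp, or let the fabric slide deliberately, as humans do when sliding along an edge to find a corner, is the control problem that remains open.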

2. Foundation models for manipulation

The success of large language models has inspired a parallel effort in robotics: building foundation models that can generalize across objects, tasks, and environments. Google DeepMind’s RT-2 and its successors demonstrate that a single model can map vision and language to robot actions across many different manipulation tasks.

For laundry folding, a foundation model would need to generalize across garment types, fabric materials, fold styles, and error states. It would need to handle a cotton t-shirt, a silk blouse, a wool sweater, and a pair of jeans with different strategies, adjusting its approach based on visual and tactile feedback.

Toyota Research Institute’s work on diffusion policy, a method that uses diffusion models (the same mathematical framework behind image generators like Stable Diffusion) to generate robot action sequences, has shown impressive results on contact-rich manipulation tasks. Diffusion policies can capture the multi-modal nature of manipulation, where multiple valid action sequences exist for the same situation, and they are more robust to perturbations than traditional policy learning methods.
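A rough sketch of what diffusion-policy inference looks like, assuming a trained noise-prediction network (the `denoiser` below is a stand-in, and the schedule and conditioning are simplified relative to the published method):

```python
import torch

def sample_action_sequence(denoiser, obs, horizon=16, action_dim=7, steps=50):
    """DDPM-style sampling: start from Gaussian noise and iteratively
    denoise a whole action sequence, conditioned on the observation."""
    actions = torch.randn(1, horizon, action_dim)   # pure noise
    betas = torch.linspace(1e-4, 0.02, steps)        # noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        eps = denoiser(actions, obs, t)              # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        actions = (actions - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:                                    # re-inject noise
            actions += torch.sqrt(betas[t]) * torch.randn_like(actions)
    return actions                                   # (1, horizon, action_dim)

# Stand-in denoiser so the sketch runs end to end; a real policy is a
# trained network conditioned on camera features.
denoiser = lambda a, obs, t: torch.zeros_like(a)
print(sample_action_sequence(denoiser, obs=None).shape)  # [1, 16, 7]
```

The key property is that the output is an entire action sequence, and sampling twice from the same observation can yield two different but equally valid folds. That is the multi-modality the method is designed to capture.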

3. Better simulation

The sim-to-real gap for deformable objects needs to shrink dramatically. Training manipulation policies requires enormous amounts of data, far more than can be collected on physical robots. Simulation is the only scalable source of training data, but only if the simulation is accurate enough for the resulting policies to transfer to reality.

Recent advances in differentiable physics simulation, where the simulator itself can be optimized to match real-world observations, offer a path forward. Projects like PlasticineLab and DiffCloth have demonstrated simulators that can be tuned to specific materials by observing their real-world behavior, then used to generate realistic training data at scale.
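The core idea fits in a few lines. The sketch below calibrates a single damped spring, a stand-in for a full cloth simulator, by differentiating through the rollout and descending on the stiffness parameter. Systems like DiffCloth apply the same principle to entire deformable bodies.

```python
import torch

def simulate(stiffness, steps=100, dt=1e-2):
    """Differentiable rollout of a damped spring-mass system; returns
    the position trajectory so it can be compared with observations."""
    pos, vel = torch.tensor(1.0), torch.tensor(0.0)
    traj = []
    for _ in range(steps):
        acc = -stiffness * pos - 0.5 * vel   # spring force + damping
        vel = vel + acc * dt
        pos = pos + vel * dt
        traj.append(pos)
    return torch.stack(traj)

# "Observed" trajectory: generated here from a hidden ground-truth
# stiffness; in practice this comes from tracking real fabric.
with torch.no_grad():
    observed = simulate(torch.tensor(12.0))

stiffness = torch.tensor(3.0, requires_grad=True)   # poor initial guess
opt = torch.optim.Adam([stiffness], lr=0.1)
for _ in range(300):
    loss = ((simulate(stiffness) - observed) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"recovered stiffness: {stiffness.item():.2f}")  # should approach 12
```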

The goal is not a perfect universal fabric simulator. It is a system that can quickly calibrate to a specific garment’s properties, simulate that garment accurately enough for policy transfer, and update its model as the garment ages and changes. This is a hard but tractable engineering challenge, and progress over the past three years has been substantial.

4. Hardware designed for the task

Current humanoid robots are designed primarily for locomotion and rigid-object manipulation. Their hands, when they have hands at all, prioritize grip strength and finger count over the tactile sensitivity and compliance needed for fabric handling.

A robot that can fold laundry may need hands specifically designed for the task. That might mean softer fingertips with embedded tactile arrays, variable-stiffness joints that can switch between firm grasping and gentle smoothing, and palm surfaces with enough friction to hold fabric without bunching it.

It might also mean, as FlingBot and DextAIRity suggest, that the answer is not better hands but entirely different tools. A combination of air jets, compliant rollers, and strategic grasps might outperform a human-hand-shaped approach. The constraint is that a home robot needs to do many things, not just fold laundry, so task-specific hardware is a harder sell than general-purpose dexterous hands.

The reliability wall

Even if every technical challenge above is solved in isolation, there remains the overarching problem of reliability. Consumer electronics need to work. Not 93% of the time. Not 99% of the time. They need to work virtually all of the time.

What success rates actually mean at household scale

  • 93% (current best, SpeedFolding): ~350 failures per year
  • 99% (good by research standards): ~50 failures per year
  • 99.8% (consumer minimum): ~10 failures per year
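The arithmetic behind these figures is worth making explicit. The numbers imply a household folding on the order of 5,000 items per year, roughly 14 per day; that per-year volume is the one assumed parameter below. Under it, per-item success rates translate into annual failures, and the chance of getting through a single basket without an error collapses quickly:

```python
items_per_year = 5000   # assumption implied by the figures above (~14/day)
basket = 20             # items in one typical load

for rate in (0.93, 0.99, 0.998):
    failures = items_per_year * (1 - rate)
    flawless = rate ** basket     # chance an entire load folds cleanly
    print(f"{rate:.1%} per item -> ~{failures:.0f} failures/yr, "
          f"{flawless:.0%} chance of a flawless {basket}-item basket")

# 93.0% per item -> ~350 failures/yr, 23% chance of a flawless basket
# 99.0% per item -> ~50 failures/yr,  82% chance
# 99.8% per item -> ~10 failures/yr,  96% chance
```

Even at 99% per item, nearly one basket in five contains at least one failed fold.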

Consider the analogy to self-driving cars. An autonomous vehicle that handles 99% of driving situations correctly still encounters a serious failure every few hours on the road, which is unacceptable. The jump from 99% to 99.99% has taken the industry over a decade and hundreds of billions of dollars, and it is still not fully achieved.

Folding faces the same curve. The last few percentage points of reliability are exponentially harder than the first. Every new garment type, every unusual fabric, every edge case (a shirt tangled with a sock, a garment turned partially inside out, a button caught on another piece of clothing) needs to be handled. The long tail of edge cases is where consumer products live or die.

The real timeline

So when will a robot actually fold your laundry? Here is an honest assessment based on the current state of research, the pace of progress over the past five years, and conversations with researchers in the field.

2026-2028: Lab demonstrations get impressive. Expect to see research systems that can fold 5-10 garment types with 95%+ success rates in controlled settings. These will make great YouTube videos. They will not be products.

2028-2030: Limited commercial attempts. Companies will begin offering laundry-folding features as part of home robot packages, likely with significant constraints. Expect these to work only on pre-sorted, unfolded garments laid flat on a designated surface. Do not expect them to handle a random pile from the dryer.

2030-2033: Meaningful but imperfect capability. Advances in foundation models for manipulation, better tactile hardware, and years of real-world data collection will push success rates toward 98-99% on common garment types. This will be useful but still noticeably imperfect.

2035+: Consumer-grade reliability. Reaching the 99.8%+ reliability threshold across diverse garments, in uncontrolled home environments, with full error recovery, is likely a decade away. This timeline assumes sustained funding and research attention, which is not guaranteed.

The road to consumer-grade laundry folding

  1. Lab demos: 2026-2028
  2. Limited commercial: 2028-2030
  3. Useful but imperfect: 2030-2033
  4. Consumer reliable: 2035+

These dates will strike some readers as pessimistic. They are not. They are based on the actual rate of progress in deformable-object manipulation research, which has been real but slow, measured in percentage-point improvements per year rather than order-of-magnitude leaps. The history of robotics is littered with predictions that underestimated the difficulty of tasks that seem simple to humans. This is one of those tasks.

Why this matters beyond laundry

Laundry folding is not important because of laundry. Nobody is going to build a $25,000 humanoid robot whose only job is to fold shirts. Laundry folding matters because it is a proxy for the entire category of household manipulation tasks that involve soft, variable, contact-rich interactions with the physical world.

If you can fold laundry, you can probably also:

  • Make a bed (deformable sheets, blankets, pillows)
  • Load a dishwasher (variable shapes, fragile items, spatial reasoning)
  • Sort and put away groceries (bags, produce, fragile items)
  • Clean a bathroom (wet surfaces, variable tools, contact-rich wiping)
  • Cook a meal (cutting, stirring, handling raw ingredients)

Each of these tasks shares the core challenges of laundry folding: deformable or variable objects, contact-rich manipulation, force sensitivity, and enormous variability in conditions. Solving folding does not automatically solve these tasks, but the underlying capabilities transfer heavily.

This is why laundry folding has become a benchmark in the research community. It is not that researchers are obsessed with clean clothes. It is that folding is a compact, measurable proxy for the general capability that home robots need. When a robot can fold diverse laundry reliably, we will know that the fundamental manipulation capabilities for home robotics are in place.

The companies and the claims

It is worth mapping the current landscape of companies that have, at some point, claimed or implied that their robots will handle laundry.

Tesla has shown Optimus folding a shirt in a scripted demo. The video was carefully framed, used a single pre-positioned garment, and showed the robot being teleoperated (controlled by a human) rather than operating autonomously. This is a demonstration of hardware capability, not software intelligence. The hands can make the motions. The AI cannot yet decide which motions to make.

1X Technologies has been more cautious, describing NEO’s initial home capabilities as focused on tidying, carrying, and fetching rather than complex manipulation. This is probably the right approach. Promising capabilities you can deliver is better than promising capabilities you cannot.

Figure AI, while primarily focused on industrial applications, has discussed a long-term vision that includes domestic tasks. Given that Figure 02 is currently deployed in warehouse and manufacturing settings, home laundry is likely years away from their product roadmap.

Chinese manufacturers like Unitree and UBTECH have demonstrated various domestic task capabilities in controlled settings, but none have shipped a product that handles deformable objects in real homes.

The honest take is that no company is close to shipping a robot that folds laundry in real home conditions. The companies that are honest about this are the ones worth watching. The ones that show scripted demos and imply the product is around the corner are following a playbook that has failed many times before.

What to watch for

For anyone following this space, whether as a potential buyer, an investor, or just someone who is curious, here are the signals that will indicate real progress.

Watch for unscripted demos. Any company can fold a single pre-positioned shirt with a teleoperated robot. The meaningful demonstration is a robot autonomously folding a random pile of mixed garments from a laundry basket, on camera, without cuts, multiple times. When you see that, pay attention.

Watch for garment diversity. Folding one type of shirt is not the same as folding shirts, pants, socks, underwear, towels, and fitted sheets. Real progress means handling the full range of household textiles.

Watch for failure rates. When researchers publish folding results, look at the success rate and the number of garment types tested. A 95% success rate on 3 garment types is very different from a 95% success rate on 30 garment types.

Watch for speed. SpeedFolding takes 120 seconds per garment. A human takes 8 seconds. A household basket of 20 items would take 40 minutes at the robot’s pace versus under 3 minutes for a human. Speed matters for consumer viability.

Watch for tactile hardware. The breakthroughs in folding will likely come from better hands and better touch sensing, not just better vision models. Companies investing in dexterous hands with rich tactile feedback are working on the right problem.

The beautiful difficulty

There is something profound in the fact that folding a t-shirt, a task so simple that a distracted teenager can do it while watching TV, represents one of the hardest open problems in all of robotics. It is a reminder that human capability is extraordinary. The combination of perception, planning, dexterity, and adaptation that your hands and brain perform every day when you handle soft objects is the product of hundreds of millions of years of evolutionary optimization.

Robotics will get there. The rate of progress in manipulation learning, tactile sensing, and foundation models is real and accelerating. The question is not whether robots will eventually fold laundry, but when, and whether “when” is 2030 or 2040.

In the meantime, every time a robotics CEO shows a demo of a humanoid robot doing household chores, ask one simple question: can it fold a random pile of laundry from the dryer?

If the answer is no, you know exactly how far away the home robot future really is. And if the answer is yes, reliably, across diverse garments, at reasonable speed, in an uncontrolled environment, then something truly remarkable has happened. Something worth paying attention to. Something that means the Moravec paradox, the decades-old observation that tasks trivial for humans are among the hardest for machines, is finally starting to crack.

Until then, your laundry basket is waiting. And so are we.

Sources

  1. UC Berkeley AUTOLAB - SpeedFolding: Learning Efficient Bimanual Folding of Garments - accessed 2026-03-25
  2. Columbia University - FlingBot: The Unreasonable Effectiveness of Dynamic Manipulation for Cloth - accessed 2026-03-25
  3. CMU Robotics Institute - Learning to Manipulate Deformable Objects without Demonstrations - accessed 2026-03-25
  4. Stanford ILIAD Lab - Deformable Object Manipulation Survey - accessed 2026-03-25
  5. MIT CSAIL - Robotic Fabric Manipulation - accessed 2026-03-25
  6. IEEE Transactions on Robotics - A Survey on Cloth Manipulation - accessed 2026-03-25
  7. Science Robotics - Sim-to-Real Transfer for Deformable Object Manipulation - accessed 2026-03-25
  8. Columbia University - DextAIRity: Deformable Manipulation Can be a Breeze - accessed 2026-03-25
  9. Google DeepMind - RT-2: Vision-Language-Action Models - accessed 2026-03-25
  10. International Journal of Robotics Research - Benchmarking Deformable Object Manipulation - accessed 2026-03-25
  11. Nature Machine Intelligence - Foundation Models for Robotics - accessed 2026-03-25
  12. Seven Dreamers Laundroid - Postmortem Analysis - accessed 2026-03-25
  13. Toyota Research Institute - Diffusion Policy for Manipulation - accessed 2026-03-25
  14. Carnegie Mellon University - Garment Perception and Manipulation - accessed 2026-03-25
  15. Goldman Sachs - Humanoid Robots Market Report - accessed 2026-03-25
