ACM Fellow Daniela Rus has been dreaming of robots since she was a child, imagining mechanical shoes to help her jump higher. As director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT), Rus has done pioneering work in modular robots, soft robotics, novel neural networks, and more. Her talk on the future of robotics and AI was featured at a recent TED conference, and this year she released a pair of books for the general public, including The Mind’s Mirror: Risk and Reward in the Age of AI.
Throughout her career, Rus has maintained a dual focus on improving both the bodies and the brains of intelligent machines. This traces back to her Ph.D. thesis, when she discovered the algorithms she’d developed for dexterous manipulation were too advanced for the robotic hands of the day. Here, Rus talks about the breadth of her work, the current fascination with humanoids, and why we need fresh ideas in artificial intelligence (AI).
Let’s start with manipulation. Are we any closer to building capable robotic hands today?
We are getting closer, but we are not there. We still lack good hardware and sufficiently capable algorithms. My group is working on skin-like touch sensors that can provide dense feedback, more capable vision sensors, and algorithms to process the data and compute hand control. Advancing robot dexterity is about improving the body and the brain together, so robots can manipulate a wide range of objects and perform a wide range of grasping and in-hand tasks.
Speaking of a wide range, you’ve worked on soft robotic fish, modular robots, grippers that look like flowers, and self-driving cars.
When most people think of robots, they think of metallic humanoids, but robots don’t have to be inspired by the human form. They don’t have to be boxes on wheels, either. We’ve created soft robots that look like fish and sea turtles, and these help us understand the mysteries of the ocean. We’ve built micro-robots that will be able to do incision-free surgery. My colleagues are even designing robots out of biological cells.
Yet there has also been a resurgence of interest in humanoids recently. Why?
Our dream to create a machine in our own image that is smart and obedient has been a constant throughout the ages, and what we create depends on the available technology.
But it’s not just about technological improvements.
Societal and economic factors also play a crucial role. Many countries face an aging population, increasing the demand for robots that can assist with elder care and provide companionship. Labor shortages in certain industries, particularly in roles that are dangerous, repetitive, or otherwise undesirable, further drive the need. Additionally, the COVID-19 pandemic underscored the value of robots that can operate in environments where human presence poses a risk, such as in healthcare and sanitation.
Why would humanoids in particular be suited to solving these problems?
The human-like form of these robots would make it easier for them to operate in environments designed for humans, using tools and interfaces meant for human use. This versatility is crucial for their integration into everyday settings and enhances their functionality.
So the need is there. What are the biggest engineering roadblocks?
There are several significant engineering challenges that span various domains. Achieving stable and efficient bipedal locomotion on arbitrary terrain, for example, requires innovations in sensors, actuators, and algorithms for balance and dynamic movement control. We also need to innovate in adaptive manipulation and whole-body control algorithms, as well as algorithms for high-level reasoning, since humanoids must make sense of their complex human-centered environments and interact with people using language and abstractions that people understand.
We also need robust sensing, environmental understanding, real-time data processing and decision making, durability, reliability, and energy efficiency, both in battery technology and in more energy-efficient actuators. Humanoid robots consume significant power, especially when performing dynamic tasks. Then we have to integrate all these advances into a compact, aesthetically pleasing form, while addressing ethical considerations and ensuring social acceptance.
In other words, there’s some work to do. Will we see them first on the factory floor?
They are not necessarily the best option for factory floors. Task-specific robots often have simpler designs and mechanisms, and are also simpler to control, which increases robustness, reduces the risk of mechanical failure, and simplifies maintenance.
Let’s move from the factory to the home. If you look ahead a decade or more, do you envision our homes including multiple task-specific robots and AIs, or one or two versatile, capable autonomous machines that manage dozens of tasks?
A combination of both. We already have robotic vacuum cleaners, robotic lawnmowers, and robotic pool cleaners. I can imagine humanoid robots that could learn from people how to perform certain tasks, cleaning up the yard or assisting inside the home. But I suspect we will have a specialized laundry folding robot before a robotic humanoid Iron Chef. To me, the question is whether we can build these machines at a price point that is affordable.
We’ve been focused on robots here, but much of the interest in your work today centers on liquid networks, the novel neural network architecture you developed with several students-turned-colleagues. What makes liquid neural networks different?
There are multiple factors, but for one, they mimic the brain’s natural adaptability. In traditional AI networks, the models are fixed. They cannot improve after training; we just wait for the next release. Liquid neural networks (LNNs) can change based on the inputs that they see. This flexibility allows LNNs to adapt to new information or altered environments without needing retraining from scratch.
How do they do that?
The nature of the computation inside liquid networks enables adaptation. Another important aspect is the wiring between the artificial neurons: their architecture. In looped or recurrent architectures, information cycles through the network, allowing it to consider the current input along with a form of ‘memory’ of what was processed previously. This structure is important for reinforcement learning, where an agent learns to make sequences of decisions by receiving feedback in the form of rewards or punishments.
The dynamic nature of this looped architecture enables systems to refine their strategies based on past successes and failures, effectively learning from experience. This is in stark contrast to more static, feedforward architectures where inputs move only forward and the system cannot naturally adapt to changes without retraining or restructuring.
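For readers who want to see the distinction Rus draws, here is a minimal sketch in Python/NumPy, with illustrative, hypothetical weights: a feedforward layer maps each input independently, while a recurrent ("looped") layer carries a hidden state forward so earlier inputs shape later outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feedforward: each input is mapped independently; there is no memory of the past.
W_ff = rng.standard_normal((4, 3))

def feedforward_step(x):
    return np.tanh(W_ff @ x)

# Recurrent ("looped"): a hidden state h carries information from earlier inputs,
# so the same input can yield different outputs depending on what came before.
W_in = rng.standard_normal((4, 3))
W_rec = rng.standard_normal((4, 4))

def recurrent_step(x, h):
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(4)
for x in rng.standard_normal((5, 3)):     # a short sequence of toy inputs
    y_ff = feedforward_step(x)            # depends only on the current input
    h = recurrent_step(x, h)              # depends on the current input and on history
```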
Why are they called “liquid” networks?
We call them liquid networks because the time constant of the differential equation that defines the computation of the artificial neuron depends on the state x(t). This translates into more adaptive computation: the dynamics of each neuron change with its inputs.
So they’re liquid because they are fluid and adaptable.
Yes.
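A minimal sketch of the idea Rus describes, assuming the liquid time-constant formulation her group has published; the function name and constants here are illustrative rather than the exact parameterization used in her work. The effective time constant of the neuron's differential equation changes with the state and input, so the dynamics adapt as the input changes.

```python
import numpy as np

def ltc_neuron_step(x, i_t, dt=0.01, tau=1.0, w_in=1.0, w_rec=0.5, b=0.0, a=1.0):
    """One forward-Euler step of a single liquid time-constant neuron.

    The gate f couples the current state x and the input i_t, so the
    effective time constant tau / (1 + tau * f) varies with what the
    neuron is seeing -- the "liquid" behavior described above.
    (Illustrative constants, not a faithful reproduction of any paper.)
    """
    f = np.tanh(w_in * i_t + w_rec * x + b)   # state- and input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * a       # liquid time-constant dynamics
    return x + dt * dxdt

# Drive the neuron with a toy sinusoidal input and watch the state evolve.
x = 0.0
for i_t in np.sin(np.linspace(0.0, 6.28, 100)):
    x = ltc_neuron_step(x, i_t)
```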
Switching subjects, the Stanford AI Index notes that most of the innovation in AI these days is happening in the private sector. What should the university’s role be?
It’s important to recognize the distinct and complementary roles that academia and industry play. Academia is fundamentally oriented towards pushing the boundaries of knowledge and understanding and solving the fundamental issues with today’s AI systems, rather than immediate commercial applications. Academia also serves as a fertile ground for generating new ideas and nurturing innovative thinking. Universities provide an environment where researchers can explore unconventional approaches and take intellectual risks that might be less feasible in a profit-driven corporate setting. This freedom to experiment is crucial for fostering the kind of radical innovation that can lead to paradigm shifts in AI.
And we need new ideas in AI.
What do you mean by that?
The current advances in AI are based on ideas invented decades ago, enhanced by huge data and compute. I believe these statistical methods have a ceiling. Without new ideas, everybody will, in time, be doing the same thing and the results will be increasingly incremental.
At a broad level, you are generally an optimistic technologist, which isn’t a popular viewpoint right now. How would you respond to those who say your visions are somewhat utopian?
I understand the skepticism surrounding techno-optimism, especially given the complex challenges we face today. However, it’s important to recognize that visionary thinking is essential for progress. My aim is not to present a utopian ideal, but to set ambitious goals that drive innovation and positive change. By addressing the ethical, social, and technical challenges head-on, we can harness technology’s potential to improve lives and ensure the well-being of our planet and all the species we share it with, while being mindful of its implications. I believe in maintaining a hopeful outlook while staying grounded in the practical realities and responsibilities that come with technological advancement.
Okay, so if we’re being optimistic, which is going to happen first, our cars driving us to work or a humanoid making us dinner?
Well, given how much people hate their morning commute, I’d bet on our cars driving us to work first. But don’t worry, once you’re home, a humanoid chef will be there to whip up dinner just in time and ask how your self-driving car handled the traffic!