Communications of the ACM

ACM Careers

Using Virtual Assistants To Tackle Emergencies In Space



The Texas A&M team will develop a virtual assistant with three main skills: detection, diagnosis, and recommendation.

Credit: Getty Images

Future space exploration missions will call for a higher degree of autonomy for astronauts, due to longer communication delays and mission durations. If an emergency arises, a communications delay might prevent ground control intervention. Providing the crew with onboard support to identify and resolve issues in a timely manner is crucial to ensure their safety. A team of researchers at Texas A&M University is proposing the use of virtual assistants (VAs) to provide such support.

The team, comprising Daniel Selva, Ana Diaz Artiles, and Bonnie J. Dunbar from the Department of Aerospace Engineering, along with Raymond Wong from the Department of Statistics, has been awarded over $1 million from NASA to study the impact of using VAs to support crew members in the context of spacecraft anomaly treatment during long duration exploration missions (LDEM).

During LDEM such as to Mars, ground support from Earth will be less effective for urgent tasks, such as identifying and resolving time-sensitive anomalies that appear during flight operations. Should the spacecraft develop a leaky fuel line or valve, for example, the crew might not have enough time to wait for instructions to come through due to a communication delay. "Imagine having a space-related Siri or Alexa specifically for astronauts to call on, that is essentially what this would be," says Selva, assistant professor in the department.

The team will develop a VA called Daphne-AT, based on similar software developed by Selva. Daphne-AT will be designed with three main skills: detection, diagnosis, and recommendation. The detection skill monitors spacecraft telemetry and answers questions about whether an anomaly is present. The diagnosis skill helps the user characterize the anomaly and identify its root cause, answering questions such as "What do you think is causing the increase in temperature?" or "What have been root causes of similar anomalies in the past?" Finally, the recommendation skill helps the user devise a course of action to deal with the anomaly, answering questions such as "How can I stop the leakage on the water line?" or "How do I replace the temperature sensor on the battery?" The VA uses various machine learning and artificial intelligence techniques, as well as databases of past anomalies and procedures, to answer those questions.
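The detection/diagnosis/recommendation split could be sketched in code purely as an illustration. Everything below is invented for the sake of example (the function names, telemetry fields, and lookup tables are not part of Daphne-AT, whose actual design is not described in detail here): detection flags out-of-range parameters, while diagnosis and recommendation consult databases of past anomalies and procedures.

```python
# Hypothetical sketch of a three-skill anomaly-treatment assistant.
# All names and data below are illustrative, not Daphne-AT's real design.

def detection_skill(telemetry):
    """Flag any telemetry parameter outside its nominal range."""
    return [name for name, (value, low, high) in telemetry.items()
            if not low <= value <= high]

def diagnosis_skill(anomaly, anomaly_db):
    """Return root causes of similar past anomalies, if any are on record."""
    return anomaly_db.get(anomaly, ["no similar anomaly on record"])

def recommendation_skill(anomaly, procedure_db):
    """Suggest a procedure for resolving the anomaly."""
    return procedure_db.get(anomaly, "wait for ground-control guidance")

# Invented example data: (current value, nominal low, nominal high)
telemetry = {"battery_temp_C": (41.0, 0.0, 35.0),
             "cabin_pressure_kPa": (101.3, 95.0, 105.0)}
anomaly_db = {"battery_temp_C": ["failed temperature sensor",
                                 "coolant loop blockage"]}
procedure_db = {"battery_temp_C": "run procedure EPS-07: replace temperature sensor"}

for anomaly in detection_skill(telemetry):        # battery temperature is out of range
    causes = diagnosis_skill(anomaly, anomaly_db)
    action = recommendation_skill(anomaly, procedure_db)
```

In a real system each skill would be backed by learned models and validated flight procedures rather than static dictionaries; the sketch only shows how the three skills hand off from one to the next.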

Once developed, the impact of Daphne-AT on performance, cognitive workload, situational awareness, and trust will be assessed by Diaz Artiles through a set of three experiments with human subjects in a laboratory environment. Twelve subjects, each holding a degree in engineering or the sciences, will participate in each experiment.

The first experiment will measure the impact of a baseline VA that can answer questions but cannot engage in dialogue with the user or take initiative. "Our hypothesis is that the VA increases human performance in the treatment of anomalies, while also reducing cognitive workload and increasing situational awareness," says Diaz Artiles, assistant professor in the department.

The next experiment will measure the impact of adding the VA's capabilities to provide explanations and take initiative. "Here, the main idea is to show that the initiative and explanations lead to a significant improvement in trust, and hopefully an additional improvement in performance," Selva says.

Finally, the system will be deployed in NASA's Human Exploration Research Analog (HERA) environment, where the 12 subjects will be tested over three 45-day missions. HERA is a high-fidelity space analog that simulates some of the aspects of living in a space station, and is commonly used to perform scientific experiments and validate technologies before they go into space.

Selva is principal investigator with the overall responsibility for the project and its associated objectives, including overseeing the development of the virtual assistant. Diaz Artiles will be in charge of designing and conducting the experiments with the human subjects. Dunbar, TEES Eminent Professor in the department, has extensive flight experience as an astronaut and will provide valuable insight into the design of the VA as well as the experiments. Wong, assistant professor in the Department of Statistics, will provide valuable input for experimental design and ensure the soundness of the statistical approach.

This project is part of a consortium of seven grants funded under the NASA Human Capabilities Assessment for Autonomous Missions (HCAAM) Virtualized NASA Specialized Center of Research (VNSCOR), which includes teams led by the University of California, Davis, NASA Ames, TRACLabs, Space Research Company, Massachusetts Institute of Technology, and the University of Wisconsin-Madison. The goal of the HCAAM VNSCOR project as a whole is to provide standards and guidelines related to autonomy that will be used to inform the design of future missions to the moon and Mars.
