Computing Profession Viewpoint

Intelligent Futures in Task Assistance

Applying lessons learned developing and deploying task management software.

Tasks are the primary unit of personal and professional productivity. They describe activity toward an objective. Tasks can be explicitly specified, for example, in a to-do list, or inferred from user behavior and context, for example, during search engine interaction. Task management software, including to-do applications such as Todoist, Google Tasks, and Microsoft To Do, is a multibillion-dollar industry, with large gains forecast over the next few years. We can improve individual and team task productivity through a mixture of education on best practices and intelligent task assistance. There are many strategies for the former—Getting Things Done (GTD)1 is one example—and there are plentiful opportunities for the computer science community to spearhead the latter, for example, by investing more in task-related research and engineering in areas such as natural language processing (NLP), machine learning (ML), and human-computer interaction (HCI). Among other things, intelligent task assistance can help people prioritize their task backlogs, plan their daily agendas, decompose their complex tasks into manageable steps, ensure their commitments to others are met, and help automate common activities.

Helping people make progress toward task completion is the primary objective of task assistance. Few intelligent systems offer end-to-end completion support, with some exceptions, for example, task-oriented dialog systems for, say, ticket reservations. Search engines provide result lists that serve as starting points for post-query navigation. Digital assistants such as Alexa or Google Assistant provide situational reminders for people to perform tasks at an appropriate time or in the appropriate location. Dedicated task management applications provide a means of storing and organizing tasks but rely on users to drive activities such as scheduling and completion. These challenges are not restricted to individuals. Teams may also track tasks carefully, via dedicated project management tools, for example, Trello, Asana, and monday.com. However, task management requires significant human intervention, with limited system intelligence. Experienced professionals, using established project management protocols (for example, Kanban boards, Gantt charts), are often required to manage team tasks.


Task Intelligence

Putting the task management burden on humans alone is problematic considering challenges they may face, for example, cognitive overload and cognitive biases such as overoptimism. Only a fraction of people use task management software. Others prefer alternatives (for example, sticky notes, email sent to self), and some do not explicitly track their tasks at all. Artificial intelligence (AI) can assist with many aspects of the task life cycle, including: capturing task intentions from communications (including email and meeting transcripts), content (such as photos, notes), and activities (such as searching, browsing); organizing tasks to find time to complete them given scheduling constraints and task attributes such as priority; and assisting with task completion activities, for example, contextual reminders and task automation. There is a significant opportunity to enhance people’s personal and professional productivity and give them more time for other things in their lives, such as leisure activities, via more focus on task intelligence. The need for intelligent task support may be amplified in the global pandemic and its aftermath, where home and work tasks are more intertwined, and people may require even more task assistance from AI systems.

Examples of task intelligence. There has been prior work on intelligent task assistance, although more is required to make these research investments production ready and assist users across the full task life cycle. At Microsoft Research, we have been developing a task intelligence platform called Titanium (Ti), comprising ML models spanning several applications and many task scenarios. Examples of recent task-related research, grouped by some of the GTD themes, include:

Capture: This involves collecting, extracting, and understanding tasks from a variety of digital and analog sources. Studies have shown that many people track tasks explicitly via to-do lists that can assume many forms (notepads, sticky notes, email).2 To help everyone, intelligent systems must reliably extract tasks from both analog and digital sources, for example, via email intent understanding (such as identifying commitments made to others), observing application usage, converting handwritten task lists to digital form, and mining “tasklets” (that is, interface automation scripts) from websites.7 Semantic representations, including task embeddings,8 can enrich system understanding of tasks and enable better matching for applications such as task ranking and task recommendation.
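To make the idea of task representations concrete, here is a minimal sketch (in no way the Titanium implementation) that matches a newly captured task against a backlog using bag-of-words vectors and cosine similarity; real task embeddings would use learned representations, and all task strings here are invented for illustration:

```python
from collections import Counter
from math import sqrt

def embed(task: str) -> Counter:
    """Toy task 'embedding': lowercase bag-of-words counts."""
    return Counter(task.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

backlog = ["book flight to Seattle", "renew passport", "book hotel in Seattle"]
query = embed("book Seattle trip")
# Rank backlog tasks by similarity to the new task (stable sort breaks ties).
ranked = sorted(backlog, key=lambda t: cosine(query, embed(t)), reverse=True)
```

Even this crude matching supports task ranking and recommendation in miniature; the gap between it and a learned embedding space illustrates why richer semantic representations matter.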

Organize: This involves arranging and scheduling task execution, which can be challenging for systems given many constraints on users’ time (some latent) and uncertainty about user priorities. Working in tandem with users, systems can offer support including task duration estimation,11 to help schedule time; task completion detection,12 to minimize redundant notifications; task prioritization,13 to help people determine their next actions; and complex task decomposition,14 dividing tasks into more manageable steps (microtasks), to help people make progress on their complex tasks over time and sometimes assign them to crowd workers to get help in doing so.4
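As a toy illustration of how such signals might combine (the scoring function, weights, and tasks below are invented for exposition, not drawn from the cited systems), a priority score can blend a user-stated importance with urgency derived from the due date:

```python
from datetime import date

def priority(due: date, importance: int, today: date) -> float:
    """Toy priority score: importance (1-5) scaled by urgency, where
    urgency rises as the due date approaches (capped 30 days out)."""
    days_left = max((due - today).days, 0)
    urgency = 1.0 - min(days_left, 30) / 30.0  # 1.0 means due today
    return importance * (1.0 + urgency)

today = date(2022, 6, 1)
tasks = [("file expense report", date(2022, 6, 2), 2),
         ("prepare keynote", date(2022, 6, 20), 5),
         ("water plants", date(2022, 6, 1), 1)]
ranked = sorted(tasks, key=lambda t: priority(t[1], t[2], today), reverse=True)
```

Note how the high-importance keynote outranks the nearly due expense report; tuning that trade-off per user is exactly where learned prioritization models earn their keep.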


Engage: This involves supporting the completion of tasks via system recommendations acted on by users or systems completing tasks automatically on users’ behalf, with consent. Progress on robotic process automation has enabled systems to extract action graphs from websites to automatically execute multistep tasks, such as food delivery or restaurant reservations, on users’ behalf.10 Mixed initiative task assistance capitalizes on complementary human and machine capabilities, for example, assisting with task completion by suggesting resources for review. Task-based recommendation systems suggest the most appropriate tasks for specific situations, personalized to an individual’s task habits6 or based on population data about situations where people typically perform tasks.3
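A minimal sketch of executing an action graph, assuming the steps and their dependencies have already been mined (the restaurant-reservation graph here is hypothetical, and a production system would additionally handle failures, branching, and cycle detection):

```python
def execute(graph: dict, actions: dict) -> list:
    """Run the steps of a task's action graph in dependency order.
    graph maps each step to the steps it depends on; assumes no cycles."""
    done, order = set(), []
    def run(step):
        if step in done:
            return
        for dep in graph.get(step, []):
            run(dep)
        actions[step]()          # perform the step (e.g., a UI action)
        done.add(step)
        order.append(step)
    for step in graph:
        run(step)
    return order

log = []
graph = {"search restaurant": [],
         "pick time": ["search restaurant"],
         "reserve table": ["search restaurant", "pick time"]}
actions = {s: (lambda s=s: log.append(s)) for s in graph}
steps = execute(graph, actions)
```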

To be effective in practice and build and maintain user trust, intelligent task assistance must seek consent to access and use task and observational data, and work together with users, offering them agency over system support while providing explanations for their task assistance.

Evaluating task intelligence. Beyond designing and developing task intelligence technologies, there are also considerable challenges in evaluating systems that offer task assistance. These systems can be highly complex, containing many components chained together, making the attribution of good or bad performance to specific system components challenging. For example, the architecture for the Project Execution Assistant (PExA) for task and time management9 lists at least eight different system components (managers, coordinators, predictors, explainers, and so forth). We need both holistic metrics (including satisfaction and success) and per-component metrics (such as relevance and efficiency) to fully understand system performance, and even then, experimenters might face challenges with metric overload (that is, which metric(s) to focus on?). Integrated metrics may help capture multiple facets simultaneously, but the challenge is in how to interpret these combined metrics and compile a story of task performance around them. For many task-related applications, tasks data is highly confidential and not accessible for third-party human review, making it difficult to debug task intelligence in operation or to develop new AI-powered features. A lack of interpretable benchmarks, and the strong effects of situational and individual factors, mean it is difficult to amass sizeable and comprehensive tasks datasets, making it challenging to measure progress in research and development over time.
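One simple form of integrated metric is a weighted combination of normalized per-component scores. The components, values, and weights below are purely illustrative; choosing and interpreting such weights is precisely the open challenge noted above:

```python
def composite_score(metrics: dict, weights: dict) -> float:
    """Combine per-component metrics (each normalized to [0, 1]) into
    one holistic score; the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[m] * metrics[m] for m in weights)

# Hypothetical component scores for a task assistance pipeline.
metrics = {"extraction_f1": 0.82, "ranking_ndcg": 0.71, "user_satisfaction": 0.90}
weights = {"extraction_f1": 0.3, "ranking_ndcg": 0.3, "user_satisfaction": 0.4}
score = composite_score(metrics, weights)
```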


Lessons So Far

We have learned much from our experiences in developing and fielding task support across a variety of different applications in Microsoft products. While users appreciate the additional assistance, how they accomplish tasks has been shown to be highly personal.5 We therefore need to offer help selectively and be prepared to adapt to individual user styles. Beyond the challenges with evaluating task intelligence surfaced earlier, we have also learned several additional lessons:

- Lightweight task capture is essential, since many people do not use task management software and some do not explicitly track their tasks at all.
- Integration with existing applications is important, since dedicated task management application usage is rare.
- Information on the task completion process is often limited; quite often we do not observe completion events or actions in the data used to train ML models, making support for task completion difficult to develop.
- Tasks span application, device, and session boundaries, making it challenging to reliably track task progress through a limited lens on user activity; other mechanisms such as in-situ surveys might be required.
- Along similar lines, multiple devices may be needed to perform tasks most effectively, for example, a tablet plus a smart speaker for recipe preparation.
- Many factors contribute to complex task attributes such as priority (comprising both urgency and importance), and these must be accurately modeled for systems to provide optimal task assistance.
- Tasks can be highly context dependent, for example, relevant to particular times and/or locations, with plenty of other situational aspects (for example, the application context) that need to be understood.
- Task feasibility is an important dimension in offering intelligent support that has not been a focus to date, including whether users have the tools, resources, and data required to complete the current task.
- Task repetition is common, both within specific users and in general in the population; intelligent systems could do more to assist with this repeat behavior (for example, by surfacing previous solutions, decompositions, and so forth).
- Tasks can reveal potentially sensitive information explicitly (such as text resembling passwords in to-do lists) or implicitly (for example, n-grams mined from activity streams), so methods to mitigate privacy risks are paramount.


The Road Ahead

Despite the large market opportunity and significant need for this type of task support, there is still much to do in task intelligence. Computer science researchers and practitioners are well placed to develop and apply advances in technology to help people focus on what matters to them and get more done with less effort. In what follows, I offer eight examples of future directions for task intelligence research (five grouped by GTD theme, plus three foundational directions that span all themes), each with general computer science research topics that would enable progress, and task-specific research topics to accompany them (see the accompanying table).

Table. Future directions in task intelligence, with examples of research topics, both general topics in computer science (CS) (especially ML, NLP, and HCI) and topics focused on task management.

Better understand tasks to enable more intelligent assistance (Capture). Tasks can be expressed in natural language in many forms, such as commitments and requests in interpersonal communications or as short (few-word) task entries in to-do lists. To be actionable by intelligent systems, we need consistent task representations (for example, a “task2vec” mapping). Research on language understanding can help here, but more focus on tasks is needed given the differences in meanings and user intents from words and phrases appearing in standard prose. Research on semantic understanding for extremely short texts such as search queries or social media postings is also relevant. In addition, more research is needed on intent understanding and task understanding from observational activity data, including application usage and screen understanding (for example, a “pixel2vec” representation for tasks and activities).
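For extremely short task texts, even surface representations can provide useful overlap signals. A sketch using character trigrams, a deliberately crude stand-in for the richer “task2vec”-style representations discussed above (the task strings are invented):

```python
def char_ngrams(text: str, n: int = 3) -> set:
    """Character trigram 'shingles' for a short task string; the '#'
    padding marks boundaries so very short texts still overlap."""
    padded = f"#{text.lower().strip()}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

sim = jaccard(char_ngrams("buy groceries"), char_ngrams("buy the groceries"))
```

Trigram overlap tolerates the small spelling and wording variations common in few-word to-do entries, which whole-word matching handles poorly.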


Enhance task planning and support goal attainment (Organize). More research on task and time management and methods to model high-level user goals (such as planning a successful birthday party) would help intelligent systems better reason over activities and how they can be optimally organized with respect to goals. We must also explore more sophisticated task-related inferences, for example, task feasibility determination (that is, can this task be actioned in this context?), more granular task progress estimation (not just binary completion prediction12), task repetition (that is, has this task been done before and if so, how?) and task decomposition (breaking down not only complex tasks,14 but goals too).
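A toy version of granular progress estimation, assuming a goal has already been decomposed into effort-weighted subtasks (the decomposition, effort values, and completion states below are hypothetical):

```python
def progress(subtasks: list) -> float:
    """Effort-weighted share of completed subtasks: a more granular
    progress estimate than a single binary done/not-done signal."""
    total = sum(effort for _, effort, _ in subtasks)
    done = sum(effort for _, effort, completed in subtasks if completed)
    return done / total if total else 0.0

# Hypothetical decomposition of "plan a birthday party": (name, effort, done).
plan = [("choose venue", 3, True), ("send invitations", 2, True),
        ("order cake", 1, False), ("arrange music", 2, False)]
p = progress(plan)
```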

Improve task outcomes via human-AI cooperation (Organize and Engage). Additional research on human-AI teaming, in general and in the context of intelligent task assistance, could help improve task outcomes (both ongoing and future). We can develop systems that coach and guide users on best practices (for example, hints and tips for more effective task management), nudge them toward positive task outcomes (such as proactive task reminders), improve user understanding and build user trust through system explanations (including clarifying why tasks were recommended and prioritized), and help users complete their tasks by cooperating with them (for example, via conversations in task-oriented dialog systems or by converting natural language instructions into action sequences).

Boost task productivity for individuals and teams (Engage). The new future of work emerging from the global pandemic will create an even more pressing need for intelligent task assistance across work and life boundaries. As part of this, understanding hybrid work and work life balance is ever more important, as are gig work mechanisms such as task marketplaces (for example, TaskRabbit), where tasks are performed on behalf of users, and task automation, where systems complete tasks automatically (for example, emerging work on action transformers). Further research on “microproductivity”4 will help people make incremental progress on their tasks by first identifying microtasks and then matching them with short snippets (5–10 minutes) of available time. More research on teamwork, and specifically project management (learning from roles, task delegation, and task dependencies), will enable intelligent systems to one day improve team productivity via automatic task assignment, task sequencing, and cross-team load balancing.
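The microtask-to-time-snippet matching idea can be sketched as greedy packing of estimated durations into an available gap; the task names and duration estimates are invented for illustration, and a real scheduler would also weigh priority and context:

```python
def fill_slot(slot_minutes: int, microtasks: list) -> list:
    """Greedily pick microtasks (name, estimated minutes) that fit an
    available time snippet, longest-first, without overfilling it."""
    chosen, remaining = [], slot_minutes
    for name, minutes in sorted(microtasks, key=lambda t: -t[1]):
        if minutes <= remaining:
            chosen.append(name)
            remaining -= minutes
    return chosen

microtasks = [("reply to Dana", 5), ("outline report intro", 8),
              ("file receipt", 2), ("review slide deck", 25)]
picked = fill_slot(10, microtasks)  # a 10-minute gap between meetings
```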


Provide holistic support to facilitate task completion (Engage). Intelligent support should be integrated within existing applications to foster uptake and span different applications and/or devices, for example, as part of a task AI fabric such as Titanium (mentioned earlier), to support more comprehensive task understanding and end-to-end task completion. Tasks can persist over time, may require multiple applications (such as connecting a to-do list with a web search engine for actionable, search-friendly tasks) or need multiple devices (such as connecting a smartphone or tablet with a smart speaker for complex, multistep tasks), hence cross-session, cross-application, and cross-device workflows become important. More research on intelligent assistance in general, irrespective of tasks, would also help build algorithms and experiences that can still assist users with tasks and inform further task-related research and development.

Refine assistance over time via feedback loops (Foundations). The first of several key future research directions that span all stages of the task life cycle is feedback loops, that is, collecting implicit and explicit feedback from users and refining/adapting intelligent systems accordingly. General research on personalization and contextualization is relevant here, as are task-related variants, for example, personal task prioritization and contextual task recommendation. Beyond the individual and their current situation, more research is needed on cohort modeling, that is, learning from others performing similar tasks about, for example, which resources are typically used for the current task and in what situations tasks are typically feasible.
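A feedback loop can be as simple as an exponential moving average that nudges a preference weight as users accept or dismiss recommendations. The update rule, learning rate, and scenario below are illustrative, not drawn from any deployed system:

```python
def update_preference(weight: float, accepted: bool, lr: float = 0.2) -> float:
    """Nudge a per-task-type preference weight toward 1 when the user
    acts on a recommendation and toward 0 when they dismiss it."""
    target = 1.0 if accepted else 0.0
    return (1 - lr) * weight + lr * target

w = 0.5  # start neutral about, say, a class of proactive reminders
for accepted in [True, True, False, True]:
    w = update_preference(w, accepted)
```

Because recent feedback is weighted more heavily than old feedback, the same mechanism also adapts when a user's habits change over time.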

Expedite progress via new datasets and evaluation methods (Foundations). One reason that there has not been more interest in task intelligence in general is a lack of large-scale tasks data to train, validate, and test ML models. As is often the case with nascent ML application domains, labeled datasets are challenging to obtain. It requires some creative thinking to make progress on building such datasets: in some of our research on task duration estimation, we had to collect our own large dataset of task titles and associated task durations directly from crowd workers. Shared tasks datasets, task-related competitions (for example, the Alexa TaskBot challenge), benchmarking studies, and living laboratories, will all help drive further advances in task intelligence. Deeper study of evaluation metrics and community agreement on primary metrics would also expedite progress.

Build and maintain user trust (Foundations). Trust must be a cornerstone of anything we do involving user data. User consent is critical and advances in areas such as privacy-preserving machine learning, FATE (fairness, accountability, transparency, and ethics), and client-side (and hybrid) training and inference are clearly relevant and will make tasks applications more robust. Within these well-explored areas there are task-related research challenges, such as removing and securing potentially sensitive information (passwords, bank details, medicines, and so forth) from to-do tasks datasets, reminding users of the risks of including such sensitive information in their task entries, de-biasing team task auto-assignments (which may be learned from skewed historic data), and performing private and secure inference on observational data.
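One concrete piece of the trust puzzle, securing tasks datasets, can be sketched as rule-based redaction of to-do entries. The two patterns shown are illustrative only; production coverage would need many more formats and likely model-based detection of sensitive content:

```python
import re

# Illustrative patterns only; real systems need far broader coverage.
PATTERNS = [
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),  # card-like digit runs
    (re.compile(r"\bpassword\s*[:=]?\s*\S+", re.IGNORECASE), "[PASSWORD]"),
]

def redact(entry: str) -> str:
    """Replace likely-sensitive spans in a to-do entry with labels."""
    for pattern, label in PATTERNS:
        entry = pattern.sub(label, entry)
    return entry

clean = redact("pay card 4111 1111 1111 1111, password: hunter2")
```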

Overall, given the existing capabilities and the many avenues for additional research, I believe the future is bright for task intelligence. There are considerable commercial and scientific opportunities in this area and the computer science community should do more to lead the way.

    1. Allen, D. Getting Things Done: The Art of Stress-Free Productivity. Penguin, 2015.

    2. Bellotti, V. et al. What a to-do: Studies of task management towards the design of a personal task list manager. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (2004), 735–742.

    3. Benetka, J.R. et al. Understanding context for tasks and activities. In Proceedings of the 2019 ACM SIGIR Conference on Human Information Interaction and Retrieval (2019), 133–142.

    4. Cheng, J. et al. Break it down: A comparison of macro- and microtasks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (2015), 4061–4064.

    5. Haraty, M. et al. How personal task management differs across individuals. International Journal of Human-Computer Studies, 88, (2016), 13–37.

    6. Kessell, A. and Chan, C. Castaway: A context-aware task management system. In Proceedings of the ACM SIGCHI Extended Abstracts on Human Factors in Computing Systems (2006), 941–946.

    7. Li, Y., and Riva, O. Glider: A reinforcement learning approach to extract UI scripts from websites. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (2021), 1420–1430.

    8. Mehrotra, R. and Yilmaz, E. Task embeddings: Learning query embeddings using task context. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management (2017), 2199–2202.

    9. Myers, K. et al. An intelligent personal assistant for task and time management. AI Magazine 28, 2 (2007), 47–61.

    10. Riva, O. and Kace, J. Etna: Harvesting action graphs from websites. In Proceedings of the 34th Annual ACM Symposium on User Interface Software and Technology (2021), 312–331.

    11. White, R.W. and Hassan Awadallah, A. Task duration estimation. In Proceedings of the 12th ACM International Conference on Web Search and Data Mining (2019), 636–644.

    12. White, R.W. et al. Task completion detection: A study in the context of intelligent systems. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (2019), 405–414.

    13. Zhang, C. et al. Grounded task prioritization with context-aware sequential ranking. ACM Transactions on Information Systems 40, 4 (2021), 68.

    14. Zhang, Y. et al. Learning to decompose and organize complex tasks. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (2021), 2726–2735.
