
Communications of the ACM


Research on Development Differences and Influencing Factors of Inbound Tourism in the Six Provinces of Central China

This study explores the development differences of inbound tourism in the six provinces of central China and uncovers their influencing factors. Based on inbound tourism data for the six provinces during 2009-2018, we apply one-way ANOVA and clustering analysis to estimate scale and trend differences in inbound tourism among the provinces, and use factor analysis to identify the factors behind those differences. The results show that the development of inbound tourism in the six provinces differs greatly in both scale and trend. The influencing factors are traffic location and the local economy, tourism size, resources and service capacity, the foreign economy, and the natural environment.
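The ANOVA step named above can be illustrated with a minimal sketch. All province names and arrivals figures below are invented stand-ins (the paper's real data are not reproduced here), and three groups stand in for all six provinces.

```python
# Hypothetical one-way ANOVA on synthetic inbound-arrivals data:
# a large F statistic signals that mean scale differs across provinces.
import numpy as np

rng = np.random.default_rng(0)
# Ten years of hypothetical annual arrivals (arbitrary units) per province,
# with deliberately different means to mimic large scale differences.
groups = [
    rng.normal(260, 20, 10),   # province A
    rng.normal(320, 25, 10),   # province B
    rng.normal(60, 10, 10),    # province C
]

k = len(groups)                          # number of groups
n = sum(len(g) for g in groups)          # total observations
grand_mean = np.mean(np.concatenate(groups))

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.1f}")
```

The same F would be produced by a library routine such as `scipy.stats.f_oneway`; the explicit sums of squares are shown only to make the computation visible.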


Rethinking Consumer Email: The Research Process for Yahoo Mail 6

This case study follows the research process of rethinking the design and functionality of a personal email client, Yahoo Mail. Over three years, we changed the focus of the product from composing emails towards automatically organizing specific categories of business to consumer email (such as deals, receipts, and travel) and creating experiences unique to each category. To achieve this, we employed iterative user research with over 1,500 in-person interviews in six countries and surveys to many thousands of people around the world. This research process culminated in the launch of Yahoo Mail 6.0 for iOS and Android devices in the fall of 2019.


Design is (A)live: An Environment Integrating Ideation and Assessment

Design coursework is iterative and continuously evolving. The separation among the digital tools used in design courses detracts from instructors' and students' experience of this iterative process.

We present a system that integrates support for design ideation with a learning analytics dashboard. A preliminary study deployed the system in two courses, each with ~15 students and 1 instructor, for three months. We conducted semi-structured interviews to understand user experiences.

Findings indicate benefits when systems contextualize creative work with assessment by integrating support for ideation with a learning analytics dashboard. Instructors are better able to track students and their work. Students are supported in reflecting on relationships among deliverables. We derive implications for contextualizing design with feedback to support creativity, learning, and teaching.


CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems


We are excited to welcome you to CHI 2020 in beautiful Honolulu, Hawai'i!

Although CHI has strong origins in the USA, it has never been to Hawai'i. We see this rather "unusual" location for a conference as both an acknowledgement of the role underrepresented regions play in the field of Human-Computer Interaction as well as a symbol for more outreach to the rest of the world.

The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference of Human-Computer Interaction. CHI - pronounced "kai" - is a place where researchers and practitioners gather from across the world to discuss the latest in interactive technology. We are a multicultural community from highly diverse backgrounds who together investigate new and creative ways for people to interact.

CHI has a rich history of bringing together people from different disciplines, cultures, sectors, communities and backgrounds. Through CHI, designers, researchers and practitioners come together with the common purpose of creating technology that works for people and society.

We are increasingly realizing how our technology use is changing how we delineate work and pleasure, advancing our productivity while at the same time threatening our wellbeing. In choosing a beautiful location like Hawai'i, we hope to highlight the importance of work-life balance and to elicit new discussions on such critical perspectives about the future of interactive technology.

Thanks to our many volunteers and help from ACM and its SIGCHI members, we are excited to present a vibrant technical and social programme for you to experience. Over six days, participants can engage with the CHI community, explore world-class research and technology, and join discussions with designers, researchers, students, and practitioners!

Ho'omaluō means "to conserve; to use or manage wisely" in the Hawaiian language. One of our goals for CHI 2020 is to make more sustainable choices wherever we can, recognising, of course, that any travel, especially to locations like ours, has a significant impact on the environment.

Working with the Sustainability Chairs, we have chosen recycled, biodegradable or eco-friendly products and engaged with local suppliers wherever possible. We have implemented options to reduce travel related to the conference organisation by using videoconference meetings as much as possible. We have worked with the CHI Steering and Executive Committee to identify future opportunities to reduce travel and to reduce the number of meetings. We have removed the conference bag and gifts by default and encouraged the selection of more sustainable food choices (including the decision not to serve red meat). We have also chosen reusable or compostable crockery and cutlery where possible and are donating any remaining food to a homeless shelter to avoid food waste.

Furthermore, we have located all activities in or near the Convention Centre and negotiated deals with nearby hotels to reduce the need for transportation. The Convention Centre itself is the first and only public assembly convention centre in the United States to earn LEED v4 O+M Gold Certification. In the spirit of Ho'omaluō, we have also decided to set the default temperature in the venue higher to reduce air-conditioning energy usage.

A particular highlight is the Interactivity programme, which will be launched at the Reception on Monday evening, giving a live glimpse into the future with hands-on prototypes, design experiences, and inspirational technologies.

We are also excited to continue the commitment to making CHI, and CHI content, more widely accessible. We will be live-streaming even more paper sessions. We also provide a nursing room, all-gender bathrooms, badge pronouns, a desensitization room and a prayer room.


Considering Wake Gestures for Smart Assistant Use

Smart speakers have become an almost ubiquitous technology, as they give users easy access to conversational agents. Yet the agents can only be activated using specific voice commands, i.e., a wake word. This, in turn, requires the device to constantly listen to and process sound, which some users perceive as a privacy issue. Further, using the agent's trigger word in a conversation with another human may lead to accidental triggers. Here, we propose using gestural triggers for conversational agents. We conducted a gesture elicitation study to identify five candidate gestures. We then conducted a user study to investigate the acceptability of the gestures and the effort required to perform them. Initial results indicate that the snap gesture shows the most potential. Our work contributes initial insights on using smart speakers with ubiquitous sensing.


Interactive Parallel Coordinates for Parametric Design Space Exploration

We present an interactive visualization based on parallel coordinates that enables comparison, generation, and modification of multiple parametric design alternatives. Such capabilities are lacking in existing tools. Initial evaluation suggests that our proposal improves usability over existing tools, has novel parameter space exploration capabilities, and also reveals a space for designing direct interactions with visualizations to support parametric exploration.
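As a rough illustration of the visualization technique named above (not the authors' system), the core transform behind a parallel-coordinates view can be sketched: each design alternative becomes a polyline whose vertex on each vertical axis is that parameter's min-max-normalized value. All parameter names and numbers below are invented.

```python
# Map parametric design alternatives to parallel-coordinates polylines.
import numpy as np

params = ["height", "width", "cost"]        # one vertical axis per parameter
alternatives = np.array([
    [3.0, 1.5, 120.0],    # design A
    [4.5, 1.0, 200.0],    # design B
    [3.5, 2.0, 160.0],    # design C
])

# Min-max normalize each parameter column to [0, 1]; row i then gives the
# axis-by-axis vertex heights of design i's polyline.
lo = alternatives.min(axis=0)
hi = alternatives.max(axis=0)
polylines = (alternatives - lo) / (hi - lo)
print(polylines.round(2))
```

A plotting layer (e.g. `pandas.plotting.parallel_coordinates`) would then draw one line per row across the axes; interactive brushing on an axis amounts to filtering rows by their value in that column.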


RCEA: Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels

Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or allow real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled, indoor (N=12) and later outdoor (N=20) study. Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation fusion method that are suitable for collecting fine-grained emotion annotations while users watch mobile videos.


"All in the Same Boat": Tradeoffs of Voice Assistant Ownership for Mixed-Visual-Ability Families

A growing body of evidence suggests Voice Assistants (VAs) are highly valued by people with vision impairments (PWVI) and much less so by sighted users. Yet, many are deployed in homes where both PWVI and sighted family members reside. Researchers have yet to study whether VA use and perceived benefits are affected in settings where one person has a visual impairment and others do not. We conducted six in-depth interviews with partners to understand patterns of domestic VA use in mixed-visual-ability families. Although PWVI were more motivated to acquire VAs, used them more frequently, and learned more proactively about their features, sighted partners identified similar benefits and disadvantages of having VAs in their home. We found that the universal usability of VAs both equalizes experience across abilities and presents complex tradeoffs for families (regarding interpersonal relationships, domestic labor, and physical safety), which are weighed against accessibility benefits for PWVI and complicate the decision to fully integrate VAs in the home.


Social Sensing: Assessing Social Functioning of Patients Living with Schizophrenia using Mobile Phone Sensing

Impaired social functioning is a symptom of mental illness (e.g., depression, schizophrenia) and a wide range of other conditions (e.g., cognitive decline in the elderly, dementia). Today, assessing social functioning relies on subjective evaluations and self-assessments. We propose a different approach and collect detailed social functioning measures and objective mobile sensing data from N=55 outpatients living with schizophrenia to study new methods of passively assessing social functioning. We identify a number of behavioral patterns from sensing data, and discuss important correlations between social functioning sub-scales and mobile sensing features. We show we can accurately predict the social functioning of outpatients in our study on the following sub-scales: prosocial activities (MAE = 7.79, r = 0.53), which indicates engagement in common social activities; interpersonal behavior (MAE = 3.39, r = 0.57), which represents the number of friends and quality of communications; and employment/occupation (MAE = 2.17, r = 0.62), which relates to engagement in productive employment or a structured program of daily activity. Our work on automatically inferring social functioning opens the way to new forms of assessment and intervention across a number of areas, including mental health and aging in place.
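The two metrics reported above, mean absolute error (MAE) and Pearson's r between predicted and observed sub-scale scores, can be computed as in the following sketch. The score values here are made up for illustration, not the study's data.

```python
# MAE measures average prediction error in the sub-scale's own units;
# Pearson's r measures how well predictions track observed rank/trend.
import numpy as np

observed  = np.array([12.0, 30.0, 25.0, 8.0, 18.0])   # hypothetical sub-scale scores
predicted = np.array([15.0, 24.0, 28.0, 12.0, 14.0])  # hypothetical model outputs

mae = np.mean(np.abs(predicted - observed))
r = np.corrcoef(observed, predicted)[0, 1]
print(f"MAE = {mae:.2f}, r = {r:.2f}")
```

Reporting both is useful because they are complementary: a model can have low MAE while tracking trends poorly, or a high r while being systematically biased.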


WATouCH: Enabling Direct Input on Non-touchscreen Using Smartwatch's Photoplethysmogram and IMU Sensor Fusion

Interacting with non-touchscreens such as TVs or public displays can be difficult and inefficient. We propose WATouCH, a novel method that localizes a smartwatch on a display and allows direct input by turning the smartwatch into a tangible controller. This low-cost solution leverages sensor fusion of the built-in inertial measurement unit (IMU) and the photoplethysmogram (PPG) sensor on a smartwatch that is normally used for heart rate monitoring. Specifically, WATouCH tracks the smartwatch movement using IMU data and corrects its location error caused by drift using the PPG responses to a dynamic visual pattern on the display. We conducted a user study on two tasks, a point-and-click task and a line-tracing task, to evaluate system usability and user performance. Evaluation results suggested that our sensor fusion mechanism effectively bounded IMU-based localization error, achieved encouraging targeting and tracing precision, and was well received by the participants, opening up new opportunities for interaction.
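The general idea behind this kind of drift correction (not WATouCH's actual algorithm, whose details the abstract does not give) can be shown with a toy 1-D sketch: dead reckoning from IMU-style motion deltas accumulates error steadily, and periodic absolute fixes, here standing in for the PPG response to the on-screen pattern, keep that error bounded. All numbers are invented.

```python
# Toy dead reckoning with periodic absolute-position fixes.
true_pos = 0.0
est_corrected = 0.0
est_uncorrected = 0.0
DRIFT_PER_STEP = 0.05            # systematic integration error per step

for step in range(1, 101):
    true_pos += 1.0                          # actual watch motion this step
    est_corrected += 1.0 + DRIFT_PER_STEP    # dead reckoning drifts...
    est_uncorrected += 1.0 + DRIFT_PER_STEP
    if step % 30 == 0:                       # periodic absolute fix arrives
        est_corrected = true_pos             # ...and is snapped back to it

err_corrected = abs(est_corrected - true_pos)      # only drift since last fix
err_uncorrected = abs(est_uncorrected - true_pos)  # drift over the whole run
print(err_corrected, err_uncorrected)
```

With fixes, the error can never exceed the drift accumulated between two fixes; without them, it grows without bound.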