Building a Multimodal Human-Robot Interface

Total Pages: 7
Release: 2001

No one claims that people must interact with machines in the same way that they interact with other humans. Certainly, people do not carry on conversations with their toasters in the morning, unless they have a serious problem. However, the situation becomes a bit more complex when we begin to build and interact with machines or robots that either look like humans or have humanlike functionalities and capabilities. Then, people might well interact with their humanlike machines in ways that mimic human-human communication. For example, if a robot has a face, a human might interact with it similarly to how humans interact with other creatures with faces. Specifically, a human might talk to it, gesture to it, smile at it, and so on. If a human interacts with a computer or a machine that understands spoken commands, the human might converse with the machine, expecting it to have competence in spoken language. In our research on a multimodal interface to mobile robots, we have assumed a model of communication and interaction that, in a sense, mimics how people communicate. Our interface therefore incorporates both natural language understanding and gesture recognition as communication modes. We limited the interface to these two modes to simplify integrating them and to make our research more tractable. We believe that with an integrated system, the user is less concerned with how to communicate (which interactive mode to employ for a task) and is therefore free to concentrate on the tasks and goals at hand. Because we integrate all our system's components, users can choose any combination of our interface's modalities. The onus is on our interface to integrate the input, process it, and produce the desired results.

Human-Robot Interaction

Author: Céline Jost
Publisher: Springer Nature
Total Pages: 418
Release: 2020-05-13
Genre: Social Science
ISBN: 3030423077

This book offers the first comprehensive yet critical overview of methods used to evaluate interaction between humans and social robots. It reviews commonly used evaluation methods and shows that they are not always suitable for this purpose. Using representative case studies, the book identifies good and bad practices for evaluating human-robot interactions and proposes new standardized processes as well as recommendations, carefully developed on the basis of intensive discussions between specialists in various HRI-related disciplines, e.g. psychology, ethology, ergonomics, sociology, ethnography, robotics, and computer science. The book is the result of a close, long-standing collaboration between the editors and the invited contributors, including, but not limited to, their inspiring discussions at the workshop on Evaluation Methods Standardization for Human-Robot Interaction (EMSHRI), which has been organized yearly since 2015. By highlighting and weighing good and bad practices in evaluation design for HRI, the book will stimulate the scientific community to search for better solutions, take advantage of interdisciplinary collaboration, and encourage the development of new standards to accommodate the growing presence of robots in the day-to-day and social lives of human beings.

Multimodal Interfaces for Human-Robot Interaction

Author: Shokoofeh Pourmehr
Total Pages: 132
Release: 2016

Robots are becoming more popular in domestic human environments, from service applications to entertainment and education, where they share the workspace and interact directly with the general public in everyday life. One long-term goal of human-robot interaction (HRI) research is to have robots work with and around people, taking instructions via simple, intuitive interfaces. For a successful, natural interaction, robots are expected to be observant of the humans present, recognize what they are doing, and respond appropriately to their attention-drawing behaviors such as gaze, body posture, or gestures. We call a system by which a robot can take notice of someone or something and consider it interesting or relevant an attention system. Such systems enable robots to shift their focus of attention to the part of the available information that is relevant and meaningful in a given situation, based on the motivational and behavioral state of the robot. This awareness comes from interpreting the information exchanged between humans and robots. Exchanging information through a combination of different modalities is anticipated to be of most benefit: multimodal interfaces can exploit the existing strengths of each constituent modality and overcome individual weaknesses. It has also been argued [1] that multimodal interfaces facilitate more natural communication: with an integrated system, users are less concerned about how to communicate the intended commands or which modality to use, and are therefore free to focus on the tasks and goals at hand. This PhD thesis presents our contributions to designing and implementing multimodal, sensor-mediated attention systems that enable users to interact directly with physically co-located robots using natural and intuitive communication methods. We focus on scenarios in which there are multiple people or multiple robots in the environment.
First, we introduce two multimodal human multi-robot interaction systems for selecting and commanding an individual robot or a group of robots from a population. In this context, we study how the spatial configuration of the user and the robots may affect the efficiency of these interfaces in real-world settings. Next, we present a probabilistic approach for identifying attention-drawing signals from an interested party and directing a mobile robot's attention toward the most promising interaction partner among a group of people. Finally, we report on a user study designed to assess the performance and usability of this proposed system for finding HRI partners in a crowd when used by non-robotics experts, and compare it to manual control.
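The blurb does not spell out the probabilistic model, but the general idea (scoring each person's attention-drawing cues and selecting the most promising interaction partner) can be sketched as follows. The cue names, weights, and scores here are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: rank candidate interaction partners by weighted
# attention-drawing cues. Cue names and weights are assumed for illustration.

def attention_score(cues, weights=None):
    """Combine per-cue likelihoods (each in [0, 1]) into a single score."""
    weights = weights or {"gaze": 0.5, "wave": 0.3, "proximity": 0.2}
    return sum(weights[k] * cues.get(k, 0.0) for k in weights)

def most_promising(candidates):
    """Return the id of the person with the highest attention score."""
    return max(candidates, key=lambda pid: attention_score(candidates[pid]))

people = {
    "A": {"gaze": 0.9, "wave": 1.0, "proximity": 0.4},  # waving, looking at robot
    "B": {"gaze": 0.2, "wave": 0.0, "proximity": 0.8},  # close but disengaged
}
print(most_promising(people))  # A
```

A real system would estimate these cue likelihoods from perception modules and update them over time; the point here is only the fuse-and-argmax structure.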

Human-Robot Interactions in Future Military Operations

Author: Florian Jentsch
Publisher: CRC Press
Total Pages: 434
Release: 2016-05-23
Genre: Computers
ISBN: 1317119460

Soldier-robot teams will be an important component of future battle spaces, creating a complex but potentially more survivable and effective combat force. The complexity of the battlefield of the future presents its own problems. The variety of robotic systems and the almost infinite number of possible military missions create a dilemma for researchers who wish to predict human-robot interactions (HRI) performance in future environments. Human-Robot Interactions in Future Military Operations provides an opportunity for scientists investigating military issues related to HRI to present their results cohesively within a single volume. The issues range from operators interacting with small ground robots and aerial vehicles to supervising large, near-autonomous vehicles capable of intelligent battlefield behaviors. The ability of the human to 'team' with intelligent unmanned systems in such environments is the focus of the volume. As such, chapters are written by recognized leaders within their disciplines and they discuss their research in the context of a broad-based approach. Therefore the book allows researchers from differing disciplines to be brought up to date on both theoretical and methodological issues surrounding human-robot interaction in military environments. The overall objective of this volume is to illuminate the challenges and potential solutions for military HRI through discussion of the many approaches that have been utilized in order to converge on a better understanding of this relatively complex concept. It should be noted that many of these issues will generalize to civilian applications as robotic technology matures. An important outcome is the focus on developing general human-robot teaming principles and guidelines to help both the human factors design and training community develop a better understanding of this nascent but revolutionary technology. 
Much of the research within the book is based on the Human Research and Engineering Directorate (HRED), U.S. Army Research Laboratory (ARL) 5-year Army Technology Objective (ATO) research program. The program addressed HRI and teaming for both aerial and ground robotic assets in conjunction with the U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) and the Aviation and Missile Research, Development and Engineering Center (AMRDEC). The purpose of the program was to understand HRI issues in order to develop and evaluate technologies to improve HRI battlefield performance for Future Combat Systems (FCS). The work within this volume goes beyond the research results to encapsulate the ATO's findings and discuss them in a broader context in order to understand both their military and civilian implications. For this reason, scientists conducting related research have contributed additional chapters to widen the scope of the original research boundaries.

Multimodal Human-robot Interaction in an Assistive Technology Environment

Author: Zhi Li
Total Pages: 350
Release: 2012

The research work presented in this thesis is motivated by the increasing demand for care for the elderly. A domestic assistive robot has the potential to supplement humans in the provision of assistance for the elderly with simple daily tasks, such as retrieving small objects from various places, switching lights on and off, and opening and closing doors. The proposed assistive robot possesses both transactional intelligence and spatial intelligence. This thesis concentrates on the realization of the transactional intelligence, which enables the robot to naturally and effectively interact with human users. The ultimate goal of this research is to develop a system for the robot to perceive multiple modalities used by humans during face-to-face communication, including speech, eye gaze and gestures, so that the robot is able to understand the user's intention and make appropriate responses. Some important features in the design and implementation of the system are as follows.
1. Naturalness and effectiveness are the fundamental principles in the design of the interaction interface. Therefore, only cameras are used as non-contact sensing devices.
2. The user is observed only from the robot's view, so that the interaction can take place anywhere rather than being confined to a particular room.
3. The behavioural differences between individuals are emphasized, enabling the robot to give appropriate responses to different users. This is achieved by a user identification method and a profile built for each individual user, which stores several characteristics of that user.
4. The proposed hand gesture recognition system recognizes both dynamic motion patterns and static hand postures. The 3D particle-filter-based hand tracking approach combines colour, motion and depth information. It robustly tracks the hands even when the person wears a short-sleeved shirt exposing the forearm.
5. Different sources of information conveyed by speech, eye gaze and gestures are aligned and then combined by the proposed multimodal interaction system. The approach takes into account that each sub-system may generate incomplete or erroneous results.
6. Mutual interaction is realised by a dialogue manager. Based on the perceived information, the robot decides either to perform a required task or to negotiate with the user when the command is ambiguous or not feasible.
7. The robot's ability to infer the user's emotional states as a social companion is also explored.
The technical contributions of this thesis have been validated with a series of experiments in typical indoor environments.
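The abstract does not give the fusion algorithm itself. Purely as an assumed illustration of the fuse-then-negotiate behaviour it describes (combining confidence-rated hypotheses from the speech and gesture recognizers, then asking for clarification when the result is ambiguous), a minimal Python sketch might look like this; the thresholds, bonus, and command strings are invented for the example.

```python
# Hypothetical sketch: each recognizer returns a (target, confidence)
# hypothesis; the dialogue manager acts only when the fused command is
# confident enough, otherwise it negotiates with the user.

def fuse(speech, gesture, agree_bonus=0.2):
    """Combine two (target, confidence) hypotheses into one."""
    if speech[0] == gesture[0]:
        # Modalities agree: boost confidence, capped at 1.0.
        return speech[0], min(1.0, max(speech[1], gesture[1]) + agree_bonus)
    # Disagreement: keep the more confident target, but penalize it.
    best = max((speech, gesture), key=lambda h: h[1])
    return best[0], best[1] * 0.5

def decide(speech, gesture, threshold=0.6):
    """Perform the task if confident, otherwise ask for clarification."""
    target, conf = fuse(speech, gesture)
    return f"fetch {target}" if conf >= threshold else f"clarify {target}?"

print(decide(("cup", 0.7), ("cup", 0.6)))    # modalities agree  -> fetch cup
print(decide(("cup", 0.7), ("book", 0.65)))  # conflict          -> clarify cup?
```

The penalty-on-disagreement rule is one simple way to honour the thesis's point that each sub-system may produce incomplete or erroneous results.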

Multimodal Communication for Embodied Human-Robot Interaction with Natural Gestures

Author: Qi Wu
Total Pages: 96
Release: 2021

Communication takes place in various forms and is an essential part of human-human interaction. Researchers have carried out a plethora of studies to understand it biologically and computationally. Likewise, it plays a significant role in Human-Robot Interaction (HRI), helping to endow Artificial Intelligence (AI) systems with humanlike cognition and sociality. With the advancement of realistic simulators, multimodal HRI with embodied agents in simulation has also become a hotspot. Human users should be able to manipulate or collaborate with embodied agents in multimodal ways, including verbal and non-verbal methods. Up to now, most prior work on embodied AI has focused on addressing embodied agent tasks using verbal cues along with visual perception, e.g., using human languages as natural language instructions to assist embodied visual navigation tasks. Nonetheless, non-verbal means of communication like gestures, which are rooted in human communication, are rarely examined in embodied agent tasks. In this dissertation, I reflect on existing research topics in embodied AI and propose to tackle embodied visual navigation tasks with natural human gestures, filling the deficiency of non-verbal communicative interfaces in embodied HRI. It makes the following contributions:
- I first develop a 3D photo-realistic simulation environment, Gesture-based THOR (GesTHOR). In this simulator, the human user can wear a Virtual Reality (VR) Head-Mounted Display (HMD) to be immersed as a humanoid agent in the same environment as the robot agent and communicate with the robot interactively using instructional gestures, via sensory devices that track body and hand motions. I provide data collection tools so that users can generate their own gesture data in our simulation environment.
- I created the Gesture ObjectNav Dataset (GOND) and standardized benchmarks to evaluate how gestures contribute to the embodied object navigation task. This dataset contains natural gestures collected from human users and object navigation tasks defined in GesTHOR.
- To demonstrate the effectiveness of gestures for embodied navigation, I build an end-to-end Reinforcement Learning (RL) model that integrates multimodal perceptions so the robot can learn optimal navigation policies. Through case studies with GOND in GesTHOR, I show that the robot agent can perform the navigation task successfully and efficiently with gestures instead of natural languages. I also show that the navigation agent can learn the underlying semantics of gestures that were not predefined, which benefits its navigation.
By introducing GesTHOR and GOND, together with the related experimental results, I aim to spur growing interest in embodied HRI with non-verbal communicative interfaces toward building cognitive AI systems.
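The end-to-end RL model is not described in the blurb. As a rough, assumed illustration of the core multimodal idea (a gesture embedding and visual features fused into one observation before action selection), one might sketch it as follows; the dimensions, toy weights, and action names are invented for the example and are not the dissertation's actual model.

```python
# Hypothetical sketch: concatenate per-modality features into one
# observation, then pick an action with a linear scoring policy.

ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight", "Stop"]

def fuse_observation(visual_feat, gesture_feat):
    """Concatenate per-modality feature vectors into one observation."""
    return visual_feat + gesture_feat

def policy(obs, weights):
    """Score each action with a linear layer and pick the argmax."""
    scores = {a: sum(w * x for w, x in zip(weights[a], obs)) for a in ACTIONS}
    return max(scores, key=scores.get)

# Toy weights: each action attends to one observation dimension.
weights = {
    "MoveAhead":   [1, 0, 0, 0],
    "RotateLeft":  [0, 1, 0, 0],
    "RotateRight": [0, 0, 1, 0],  # fires on the first gesture dimension
    "Stop":        [0, 0, 0, 1],
}
obs = fuse_observation([0.2, 0.7], [1.0, 0.0])  # 2-D visual + 2-D gesture
print(policy(obs, weights))  # RotateRight
```

In an actual RL setup the weights would be learned from reward, and the features would come from perception networks; the sketch shows only the fuse-then-act structure.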

Cooperating Robots for Flexible Manufacturing

Author: Sotiris Makris
Publisher: Springer
Total Pages: 409
Release: 2021-10-02
Genre: Technology & Engineering
ISBN: 9783030515935

This book consolidates the current state of knowledge on implementing cooperating robot-based systems to increase the flexibility of manufacturing systems. It is based on the concrete experiences of experts, practitioners, and engineers in implementing cooperating robot systems for more flexible manufacturing systems. Thanks to the great variety of manufacturing systems that we had the opportunity to study, a remarkable collection of methods and tools has emerged. The aim of the book is to share this experience with academia and industry practitioners seeking to improve manufacturing practice. While there are various books on teaching principles for robotics, this book offers a unique opportunity to dive into the practical aspects of implementing complex real-world robotic applications. As it is used in this book, the term “cooperating robots” refers to robots that either cooperate with one another or with people. The book investigates various aspects of cooperation in the context of implementing flexible manufacturing systems. Accordingly, manufacturing systems are the main focus in the discussion on implementing such robotic systems. The book begins with a brief introduction to the concept of manufacturing systems, followed by a discussion of flexibility. Aspects of designing such systems, e.g. material flow, logistics, processing times, shop floor footprint, and design of flexible handling systems, are subsequently covered. In closing, the book addresses key issues in operating such systems, which concern e.g. decision-making, autonomy, cooperation, communication, task scheduling, motion generation, and distribution of control between different devices. Reviewing the state of the art and presenting the latest innovations, the book offers a valuable asset for a broad readership.

Human-Robot Interaction

Author: Gholamreza Anbarjafari
Publisher: BoD – Books on Demand
Total Pages: 186
Release: 2018-07-04
Genre: Computers
ISBN: 178923316X

This book considers vocal and visual modalities in human-robot interaction applications through three main aspects, namely social and affective robotics, robot navigation, and risk event recognition. It can serve as a very good starting point for scientists who are about to begin research in the field of human-robot interaction.