Human-robot collaboration

Human-Robot Collaboration is the study of collaborative processes in which human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include robots for homes, hospitals, and offices, as well as for space exploration and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, design, cognitive sciences and psychology.[1]

Industrial applications of human-robot collaboration involve Collaborative Robots, or cobots, that physically interact with humans in a shared workspace to complete tasks such as collaborative manipulation or object handovers.[2]


Collaborative Activity

Collaboration is defined as a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal.[3] The process typically involves shared plans, shared norms and mutually beneficial interactions.[4] Although collaboration and cooperation are often used interchangeably, collaboration differs from cooperation in that it involves a shared goal and joint action, where the success of each party depends on the other.[5]

For effective human-robot collaboration, it is imperative that the robot is capable of understanding and interpreting several communication mechanisms similar to the mechanisms involved in human-human interaction.[6] The robot must also communicate its own set of intents and goals to establish and maintain a set of shared beliefs and to coordinate its actions to execute the shared plan.[3][7] In addition, all team members demonstrate commitment to doing their own part, to the others doing theirs, and to the success of the overall task.[7][8]

Theories Informing Human-Robot Collaboration

Human-human collaborative activities are studied in depth in order to identify the characteristics that enable humans to work together successfully.[9] These activity models usually aim to understand how people work together in teams, how they form intentions, and how they achieve a joint goal. Theories of collaboration inform human-robot collaboration research aimed at developing efficient and fluent collaborative agents.[10]

Belief Desire Intention Model

The belief-desire-intention (BDI) model is a model of human practical reasoning that was originally developed by Michael Bratman.[11] The approach is used in intelligent agents research to describe and model intelligent agents.[12] The BDI model is characterized by the implementation of an agent's beliefs (its knowledge of the state of the world), desires (the objectives to accomplish, or desired end states) and intentions (the courses of action currently under execution to achieve its desires), which together drive the agent's deliberation and decision-making.[13] BDI agents are able to deliberate about plans, select plans and execute plans.
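The deliberation cycle described above can be sketched in code. The following is a minimal, illustrative Python sketch of a BDI-style loop; the class and attribute names are assumptions for this example, and real BDI systems (e.g. PRS or AgentSpeak interpreters) are far richer.

```python
class BDIAgent:
    """Toy BDI agent. Beliefs are a dict describing the world state,
    desires are goal names, and the plan library maps each goal to a
    list of actions (all hypothetical structures for illustration)."""

    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = dict(beliefs)
        self.desires = list(desires)
        self.plan_library = plan_library  # goal -> list of actions
        self.intentions = []              # plans currently under execution

    def deliberate(self):
        # Intention formation: commit to a plan for each desire
        # not already satisfied by the current beliefs.
        for goal in self.desires:
            if not self.beliefs.get(goal) and goal in self.plan_library:
                self.intentions.append(list(self.plan_library[goal]))

    def step(self):
        # Execute one action of the first active intention;
        # actions update the agent's beliefs about the world.
        if self.intentions:
            plan = self.intentions[0]
            action = plan.pop(0)
            action(self.beliefs)
            if not plan:
                self.intentions.pop(0)  # plan completed


# Usage: a robot whose desire is a cleared table.
def pick_up_cup(beliefs):
    beliefs["table_clear"] = True

agent = BDIAgent(
    beliefs={"table_clear": False},
    desires=["table_clear"],
    plan_library={"table_clear": [pick_up_cup]},
)
agent.deliberate()
agent.step()
```

After one deliberation and one execution step, the agent's belief `table_clear` reflects the effect of its committed plan.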

Shared Cooperative Activity

Shared Cooperative Activity defines certain prerequisites for an activity to be considered shared and cooperative: mutual responsiveness, commitment to the joint activity and commitment to mutual support.[7][14] Consider, as an example, a collaborative activity in which two agents move a table through a doorway: mutual responsiveness ensures that the agents' movements are synchronized; a commitment to the joint activity reassures each team member that the other will not at some point drop their side; and a commitment to mutual support deals with possible breakdowns due to one team member's inability to perform part of the plan.[7]

Joint Intention Theory

Joint Intention Theory proposes that for joint action to emerge, team members must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan.[15] In collaborative work, agents should be able to count on the commitment of other members, therefore each agent should inform the others when they reach the conclusion that a goal is achievable, impossible, or irrelevant.[7]
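The commitment to inform teammates can be made concrete with a small message-passing sketch. This is an illustrative Python example, not an implementation from the cited work; the class, goal names and status labels are assumptions.

```python
class TeamAgent:
    """Toy teammate that honors the joint-intention communication
    obligation: on privately concluding that a goal is achieved,
    impossible, or irrelevant, it informs every other team member."""

    def __init__(self, name, team):
        self.name = name
        self.team = team
        self.team.append(self)
        self.inbox = []  # messages received from teammates

    def conclude(self, goal, status):
        # status is one of: "achieved", "impossible", "irrelevant".
        # A private conclusion obliges the agent to broadcast it,
        # keeping the team's beliefs about the goal mutual.
        for mate in self.team:
            if mate is not self:
                mate.inbox.append((self.name, goal, status))


# Usage: the robot concludes the shared goal is achieved and
# informs its human teammate.
team = []
robot = TeamAgent("robot", team)
human = TeamAgent("human", team)
robot.conclude("move_table", "achieved")
```

The human's inbox now records the robot's conclusion, so both agents share the belief that the goal has been reached.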

Approaches to Human-Robot Collaboration

The approaches to human-robot collaboration include human emulation (HE) and human complementary (HC) approaches. Although these approaches have differences, there are research efforts to develop a unified approach stemming from potential convergences such as Collaborative Control.[16][17]

Human Emulation

The human emulation approach aims to enable computers to act like humans or have human-like abilities in order to collaborate with humans. It focuses on developing formal models of human-human collaboration and applying these models to human-computer collaboration. In this approach, humans are viewed as rational agents who form and execute plans for achieving their goals and infer other people's plans. Agents are required to infer the goals and plans of other agents, and collaborative behavior consists of helping other agents to achieve their goals.[16]

Human Complementary

The human complementary approach seeks to improve human-computer interaction by making the computer a more intelligent partner that complements and collaborates with humans. The premise is that the computer and humans have fundamentally asymmetric abilities. Therefore, researchers invent interaction paradigms that divide responsibility between human users and computer systems by assigning distinct roles that exploit the strengths and overcome the weaknesses of both partners.[16]

Key Aspects

Specialization of Roles: Based on the level of autonomy and intervention, there are several human-robot relationships including master–slave, supervisor–subordinate, partner–partner, teacher–learner and fully autonomous robot. In addition to these roles, homotopy (a weighting function that allows a continuous change between leader and follower behaviors) was introduced as a flexible role distribution.[18]
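One simple way to realize such a continuous role change is a weighted blend of the leader and follower commands. The linear-blend form and names below are assumptions for illustration, not the formulation of the cited work.

```python
def blend_command(leader_cmd, follower_cmd, alpha):
    """Blend leader and follower motion commands (e.g. force or
    velocity vectors) with weight alpha in [0, 1]:
    alpha = 1 -> pure leader behavior, alpha = 0 -> pure follower.
    Varying alpha smoothly over time shifts the robot's role
    continuously between the two extremes."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return [alpha * l + (1.0 - alpha) * f
            for l, f in zip(leader_cmd, follower_cmd)]


# Usage: a 75% leader / 25% follower mixture of two 2-D commands.
mixed = blend_command([1.0, 0.0], [0.0, 1.0], 0.75)
```

The same function covers both pure roles as limit cases, so a controller can move along the weighting continuously rather than switching discretely.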

Establishing shared goal(s): Through direct discussion about goals or inference from statements and actions, agents must determine the shared goals they are trying to achieve.[16]

Allocation of Responsibility and Coordination: Agents must decide how to achieve their goals, determine what actions will be done by each agent, and how to coordinate the actions of individual agents and integrate their results.[16]

Shared context: Agents must be able to track progress toward their goals. They must keep track of what has been achieved and what remains to be done. They must evaluate the effects of actions and determine whether an acceptable solution has been achieved.[16]

Communication: Any collaboration requires communication to define goals, negotiate over how to proceed and who will do what, and evaluate progress and results.[16]

Adaptation and learning: Collaboration over time requires partners to adapt to each other and to learn from one another, both directly and indirectly.[16]

Time and space: The time-space taxonomy divides human-robot interaction into four categories based on whether the humans and robots are using computing systems at the same time (synchronous) or different times (asynchronous) and while in the same place (collocated) or in different places (non-collocated).[19][20]
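The four categories of this taxonomy can be expressed as a pair of two-valued dimensions. A minimal sketch in Python (the enum and function names are illustrative):

```python
from enum import Enum

class Time(Enum):
    SYNCHRONOUS = "same time"
    ASYNCHRONOUS = "different time"

class Space(Enum):
    COLLOCATED = "same place"
    NON_COLLOCATED = "different place"

def classify_interaction(same_time, same_place):
    """Map an interaction onto the time-space taxonomy:
    the cross product of the two dimensions yields the
    four categories described above."""
    t = Time.SYNCHRONOUS if same_time else Time.ASYNCHRONOUS
    s = Space.COLLOCATED if same_place else Space.NON_COLLOCATED
    return (t, s)


# Usage: a collaborative handover at a shared workbench is
# synchronous and collocated; teleoperation across sites is
# synchronous but non-collocated.
handover = classify_interaction(same_time=True, same_place=True)
teleop = classify_interaction(same_time=True, same_place=False)
```

Encoding the taxonomy this way makes the four categories exhaustive and mutually exclusive by construction.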


References

  1. Bauer, Andrea; Wollherr, Dirk; Buss, Martin (2008). "Human–Robot Collaboration: A Survey". International Journal of Humanoid Robotics. 05: 47–66. doi:10.1142/S0219843608001303.
  2. Cakmak, Maya; Hoffman, Guy; Thomaz, Andrea (2016). "Computational Human-Robot Interaction". Foundations and Trends in Robotics. 4 (2–3): 104–223. doi:10.1561/2300000049.
  3. Grosz, Barbara J.; Kraus, Sarit (1996). "Collaborative plans for complex group action". Artificial Intelligence. 86 (2): 269–357. doi:10.1016/0004-3702(95)00103-4.
  4. Thomson, A. M.; Perry, J. L.; Miller, T. K. (2007). "Conceptualizing and Measuring Collaboration". Journal of Public Administration Research and Theory. 19: 23–56. doi:10.1093/jopart/mum036.
  5. Hord, S. M. (1981). Working Together: Cooperation or Collaboration? Communications Services, Research and Development Center for Teacher Education, Education Annex 3.203, University of Texas, Austin, TX 78712-1288
  6. Chandrasekaran, Balasubramaniyan; Conrad, James M. (2015). "Human-robot collaboration: A survey". Southeast Con 2015. pp. 1–8. doi:10.1109/SECON.2015.7132964. ISBN 978-1-4673-7300-5.
  7. Hoffman, Guy; Breazeal, Cynthia (2004). "Collaboration in Human-Robot Teams". AIAA 1st Intelligent Systems Technical Conference. doi:10.2514/6.2004-6434. ISBN 978-1-62410-080-2.
  8. Levesque, Hector J.; Cohen, Philip R.; Nunes, José H. T. (1990). "On acting together". Proceedings of the eighth National conference on Artificial intelligence - Volume 1 (AAAI'90). 1. AAAI. pp. 94–99. ISBN 978-0-262-51057-8.
  9. Roy, Someshwar; Edan, Yael (2018-03-27). "Investigating Joint-Action in Short-Cycle Repetitive Handover Tasks: The Role of Giver Versus Receiver and its Implications for Human-Robot Collaborative System Design". International Journal of Social Robotics. doi:10.1007/s12369-017-0424-9. ISSN 1875-4805.
  10. Someshwar, Roy; Edan, Yael (2017-08-30). "Givers & Receivers perceive handover tasks differently: Implications for Human-Robot collaborative system design". arXiv:1708.06207 [cs].
  11. Bratman, Michael (1987). Intention, Plans, and Practical Reason. Center for the Study of Language and Information.
  12. Georgeff, Michael; Pell, Barney; Pollack, Martha; Tambe, Milind; Wooldridge, Michael (1999). "The Belief-Desire-Intention Model of Agency". Intelligent Agents V: Agents Theories, Architectures, and Languages. Lecture Notes in Computer Science. 1555. pp. 1–10. doi:10.1007/3-540-49057-4_1. ISBN 978-3-540-65713-2.
  13. Mascardi, V., Demergasso, D., & Ancona, D. (2005). Languages for Programming BDI-style Agents: an Overview. WOA.
  14. Bratman, Michael E. (1992). "Shared Cooperative Activity". The Philosophical Review. 101 (2): 327–341. doi:10.2307/2185537. JSTOR 2185537.
  15. Cohen, Philip R.; Levesque, Hector J. (1991). "Teamwork". Noûs. 25 (4): 487. doi:10.2307/2216075. JSTOR 2216075.
  16. Terveen, Loren G. (1995). "Overview of human-computer collaboration". Knowledge-Based Systems. 8 (2–3): 67–81. doi:10.1016/0950-7051(95)98369-H.
  17. Fong, Terrence; Thorpe, Charles; Baur, Charles (2003). "Collaboration, Dialogue, Human-Robot Interaction". Robotics Research. Springer Tracts in Advanced Robotics. 6. pp. 255–266. doi:10.1007/3-540-36460-9_17. ISBN 978-3-540-00550-6.
  18. Jarrassé, Nathanaël; Sanguineti, Vittorio; Burdet, Etienne (2014). "Slaves no longer: Review on role assignment for human–robot joint motor action" (PDF). Adaptive Behavior. 22: 70–82. doi:10.1177/1059712313481044.
  19. Ellis, Clarence A.; Gibbs, Simon J.; Rein, Gail (1991). "Groupware: Some issues and experiences". Communications of the ACM. 34: 39–58. doi:10.1145/99977.99987.
  20. Yanco, H.A.; Drury, J. (2004). "Classifying human-robot interaction: An updated taxonomy". 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583). 3. pp. 2841–2846. doi:10.1109/ICSMC.2004.1400763. ISBN 978-0-7803-8567-2.