How can one define joint action with robots?
humans and robots
cooperate (mentally and physically)
in a joint workspace
by synchronising their actions and mental states
What are the individual aspects of joint action?
joint attention
knowing the other's perspective
action observation
knowing what others will do
task-sharing
knowing what others should do
action coordination
cooperating for a common cause
confusions about agency
who is responsible for what
What are the immediate goals of joint action between humans and robots?
work together without barriers
supersede the need for (textual) expert programming
supplement each other's skills and competences
help the robot adapt to new situations
e.g. changes in environmental conditions
objects to be manipulated
etc.
What are the long-term goals of joint action between humans and robots?
seamless cooperation and mutual understanding between robot systems and humans
development of a certain degree of autonomy on the part of the robot
extend the scenarios to N>1 robots and M>1 humans
e.g. medical robots
What are general difficulties in human-human JA?
representations that can be shared
abilities to perceive and predict the other's actions, state of mind, and intentions
mechanisms that predict the effects of one's own and others' actions
“foresight” allowing us to integrate the predicted outcome of others’ actions into the planning of our own
What are general difficulties in human-robot JA?
representations must be communicable
-> how can the robot know about the human's background and physical (bodily) experience?
state recognition: how does the robot know about the state of the human
and through which (varying, combined) sensing modalities?
prediction and intention detection
how does the robot develop the capability to reason about its human partner's intentions, beliefs, and desires?
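One common way to approach such reasoning (a minimal sketch, not from the slides) is to keep a belief over a small set of hypothesised partner intentions and update it with Bayes' rule as actions are observed. The intention labels, observable actions, and likelihood values below are all invented for illustration:

```python
# Minimal sketch: Bayesian update of a belief over hypothesised partner
# intentions. All intentions, actions, and likelihoods are illustrative.

INTENTIONS = ["fetch_screw", "fetch_slat", "idle"]

# P(observed action | intention), hypothetical values
LIKELIHOOD = {
    "reach_toward_screws": {"fetch_screw": 0.8, "fetch_slat": 0.1, "idle": 0.1},
    "reach_toward_slats":  {"fetch_screw": 0.1, "fetch_slat": 0.8, "idle": 0.1},
    "hands_still":         {"fetch_screw": 0.1, "fetch_slat": 0.1, "idle": 0.8},
}

def update_belief(belief, observed_action):
    """One Bayes step: posterior is proportional to likelihood * prior."""
    posterior = {i: LIKELIHOOD[observed_action][i] * belief[i] for i in INTENTIONS}
    total = sum(posterior.values())
    return {i: p / total for i, p in posterior.items()}

belief = {i: 1.0 / len(INTENTIONS) for i in INTENTIONS}  # uniform prior
for action in ["reach_toward_screws", "reach_toward_screws"]:
    belief = update_belief(belief, action)
print(max(belief, key=belief.get))  # -> fetch_screw
```

The update step is the same regardless of which sensing modality produced the observation; combining modalities mainly changes the likelihood model, not the inference.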
What is another major difficulty (especially until a few years ago)?
voice recognition is still not 100% correct…
=> voice does not capture facial expressions, gestures, etc. and contains random errors…
=> e.g. “this one is blue” (which object is meant?)
computer vision
-> varying lighting and env. conditions…
frame rates too low to recognise gestures with enough detail
fault detection
robots should be capable of recognising their own errors
as well as errors and inconsistencies in the behaviour of the interaction partner
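One simple way to operationalise the self-monitoring side of fault detection (a hedged sketch, not a description of any concrete system): predict the effect of each executed action with a forward model and flag a fault when the sensed outcome diverges. The state representation, action names, and threshold below are invented:

```python
# Sketch: detect own execution faults by comparing predicted and sensed
# effects of an action. Representation and thresholds are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class GripperState:
    holding: Optional[str]  # object id the robot believes it holds
    force: float            # measured grip force in newtons

def predicted_effect(action):
    """Hypothetical forward model: what should hold after the action."""
    if action == "grasp_screw":
        return GripperState(holding="screw", force=2.0)
    return GripperState(holding=None, force=0.0)

def detect_fault(action, sensed, tol=0.5):
    expected = predicted_effect(action)
    # Fault if the wrong object (or nothing) is held, or force deviates.
    return (sensed.holding != expected.holding
            or abs(sensed.force - expected.force) > tol)

# The grasp closed on air: nothing held, zero force -> fault detected.
print(detect_fault("grasp_screw", GripperState(holding=None, force=0.0)))
```

Inconsistencies in the partner's behaviour can be flagged analogously, by checking observed partner actions against expectations derived from the shared task model.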
What are some issues in dialogue control that were observed in the Bielefeld Baufix system?
semantic inconsistency
-> leads to inquiry (i.e. a request for more detail) or an attempt to infer the missing information (see the sketch at the end of this card)
=> “screw the slat onto the cube” (how?) -> “screw the slat onto the cube using the red screw”
ambiguity in the linguistic expression
lexical ambiguity -> “screw” is both a noun and an imperative…
attachment of constituents
-> “schraub die Leiste mit der roten Mutter auf den Flügel” = “screw the slat with the red nut onto the wing”
-> attribute reading: the slat that carries the red nut
-> instrument reading: fasten using the red nut
situational ambiguity
even when successfully interpreted -> does not necessarily result in an action, e.g. “take a cube” (there is more than one…)
intervention
leads to modification of the action (e.g. “stop, not this one”) or a complete reset of the action (“now you have screwed it into the wrong hole”)
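The inquiry-or-infer behaviour and the handling of situational ambiguity described above can be sketched as slot filling over a parsed instruction. The slot names, vocabulary, and scene representation below are invented for illustration and are not the actual Baufix implementation:

```python
# Sketch: resolve an underspecified assembly command, either by inferring
# the missing slot from the scene or by asking back. Vocabulary and scene
# representation are invented for illustration.

def resolve_command(command, scene):
    """command: parsed slots, e.g. {"action": "screw", "object": "slat",
    "target": "cube", "instrument": None}; scene: visible candidates."""
    if command["instrument"] is None:
        screws = scene.get("screws", [])
        if len(screws) == 1:
            command["instrument"] = screws[0]  # unambiguous: infer it
        else:
            # situational ambiguity or missing detail: inquire
            return f"Which screw should I use? I can see {len(screws)}."
    return (f'Executing: {command["action"]} the {command["object"]} onto '
            f'the {command["target"]} using the {command["instrument"]}.')

cmd = {"action": "screw", "object": "slat", "target": "cube", "instrument": None}
print(resolve_command(dict(cmd), {"screws": ["red screw"]}))
print(resolve_command(dict(cmd), {"screws": ["red screw", "yellow screw"]}))
```

An intervention such as “stop, not this one” would then patch or reset the filled slots before the action is re-executed.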