
Peer Evaluation activity

Downloads 2
Views 4

Total impact ?


    Kristinn has...

    Trusted 0
    Reviewed 0
    Emailed 0
    Shared/re-used 0
    Discussed 0
    Invited 0
    Collected 0

     

    This was brought to you by:

    Kristinn Thórisson, Trusted member

    Associate Professor

    School of Computer Science, Reykjavik University, Reykjavik, Iceland

    Fluid semantic back-channel feedback in dialogue: Challenges and progress


    Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles from around 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 90s. Real-time backchannel feedback related to the content of a dialogue has been more difficult to achieve. In this paper we describe our progress in allowing virtual humans to give rapid within-utterance content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, where results show that content feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec from the phrase's onset, 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data of this agent in human-agent interaction experiments and discuss technical challenges in light of the observed human-subject data. © Springer-Verlag Berlin Heidelberg 2007.
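    As a rough illustration only (not the authors' system), the sketch below shows one way an agent could gate a candidate content backchannel so that it is emitted inside the 560-2500 msec window after a phrase's onset reported in the human-subject data above; the function and constant names are hypothetical.

        import time

        # Timing bounds taken from the human-subject data in the abstract: content
        # feedback arrives 560-2500 msec after the onset of the phrase it reacts to
        # (about 1 second on average). Everything else here is a hypothetical sketch.
        MIN_DELAY_MS = 560
        MAX_DELAY_MS = 2500

        def feedback_is_timely(phrase_onset_s: float, now_s: float) -> bool:
            """True if emitting content feedback now falls inside the human-like window."""
            elapsed_ms = (now_s - phrase_onset_s) * 1000.0
            return MIN_DELAY_MS <= elapsed_ms <= MAX_DELAY_MS

        if __name__ == "__main__":
            onset = time.monotonic()
            time.sleep(1.0)  # stand-in for recognition and feedback selection (~1 s average)
            print(feedback_is_timely(onset, time.monotonic()))  # expected: True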




    Description

    Title : Fluid semantic back-channel feedback in dialogue: Challenges and progress
    Author(s) : Gudny Ragna Jonsdottir, Jonathan Gratch, Edward Fast, Kristinn R Thórisson
    Keywords : backchannel, content feedback, envelope feedback, face-to-face dialogue, interactive virtual agent, real time

    Subject : unspecified
    Area : Other
    Language : English
    Year : 2007

    Affiliations : School of Computer Science, Reykjavik University, Reykjavik, Iceland
    Journal : Computer
    Url : http://www.scopus.com/inward/record.url?eid=2-s2.0-37249074953&partnerID=40&md5=95626c9abff17f94df42b8d1a291f63f


    This contribution has not been reviewed yet.

    You may receive the Trusted member label after:

    • Reviewing 10 uploads, whatever the media type.
    • Being trusted by 10 peers.

    If you are blocked by 10 peers, the Trust label will be suspended from your page. We encourage you to contact the administrator to contest the suspension.

    Does this seem fair to you? Please make your suggestions.
