
Aliaksandr's Peer Evaluation activity

Trusted by 1
Downloads 10488
Views 53
Full text requests 1
Collected by 1

    Aliaksandr has...

    Trusted 0
    Reviewed 1
    Emailed 0
    Shared/re-used 0
    Discussed 0
    Invited 0
    Collected 1

     

    This was brought to you by:

    Aliaksandr Birukou, Trusted member

    Post Doctorate

    DISI, University of Trento, Trento

    LiquidPub D3.3. Simulation and validation of the behavioral models

    In many systems, objects from a given set (be it movies in the Internet Movie Database or books on Amazon) can be rated by individual users. A similar situation occurs in Liquid Journals, where readers may be allowed to rate papers and journals. A sophisticated algorithm that takes user ability or reputation into account may produce a better aggregation of ratings than the simple arithmetic average. Various co-determination algorithms are available to this end, in which user and object reputations are iteratively refined together, yielding improved measures of both, derived directly from the rating data. However, none of the proposed algorithms has been studied on real data. We use several distinct real datasets to test the ranking algorithms, compare their results, and identify the advantages and limits of each algorithm.
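
    The co-determination idea mentioned in the abstract can be illustrated with a short sketch. The Python code below is not taken from the deliverable; it is a minimal, hypothetical example of one common scheme in which object quality is estimated as a reputation-weighted average of ratings, a user's reputation is derived from how closely their ratings match those estimates, and the two quantities are refined in alternation. The function name, update rules, and convergence test are all assumptions made for illustration.

    import numpy as np

    def co_determine(ratings, n_users, n_objects, tol=1e-6, max_iter=200):
        """Jointly estimate object quality and user reputation from ratings.

        ratings: iterable of (user_index, object_index, value) triples.
        Returns (quality, reputation) arrays.

        Illustrative sketch only; not the specific algorithms evaluated
        in the deliverable.
        """
        quality = np.zeros(n_objects)
        reputation = np.ones(n_users)  # start by trusting every user equally

        # Group the ratings once for fast per-object and per-user access.
        by_object = [[] for _ in range(n_objects)]
        by_user = [[] for _ in range(n_users)]
        for u, o, r in ratings:
            by_object[o].append((u, r))
            by_user[u].append((o, r))

        for _ in range(max_iter):
            # Step 1: object quality = reputation-weighted average of its ratings.
            new_quality = quality.copy()
            for o, votes in enumerate(by_object):
                if votes:
                    w = np.array([reputation[u] for u, _ in votes])
                    vals = np.array([val for _, val in votes])
                    new_quality[o] = np.dot(w, vals) / max(w.sum(), 1e-12)

            # Step 2: user reputation = inverse mean squared deviation of the
            # user's ratings from the current quality estimates.
            for u, votes in enumerate(by_user):
                if votes:
                    err = np.mean([(val - new_quality[o]) ** 2 for o, val in votes])
                    reputation[u] = 1.0 / (err + 1e-3)

            converged = np.max(np.abs(new_quality - quality)) < tol
            quality = new_quality
            if converged:
                break

        return quality, reputation

    The simple arithmetic average that the abstract compares against is recovered by holding every reputation fixed at 1; the algorithms actually studied in the deliverable would replace the two update steps with their own rules.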


    Description

    Title : LiquidPub D3.3. Simulation and validation of the behavioral models
    Author(s) : Matus Medo, Luo-Luo Jiang, Tao Zhou, Joseph Wakeling, Katsiaryna Mirylenka, Azzurra Ragone
    Keywords : LiquidPub, data mining, reputation, reputation systems, ranking

    Subject : unspecified
    Area : Computer Science
    Language : English
    Year : 2010

    Affiliations : DISI, University of Trento, Trento
    Editors : Matus Medo
    Reviewers : Azzurra Ragone, Jordi Sabater Mir

    This contribution has not been reviewed yet.

    You may receive the Trusted member label after:

    • Reviewing 10 uploads, whatever the media type.
    • Being trusted by 10 peers.

    If you are blocked by 10 peers, the "Trust label" will be suspended from your page. We encourage you to contact the administrator to contest the suspension. These rules are sketched below.

    Does this seem fair to you? Please make your suggestions.
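
    Read literally, the policy above is a simple threshold rule. The snippet below is not Peer Evaluation's actual code; it is a hypothetical reading of the stated criteria, and it assumes that either of the first two conditions is sufficient on its own, since the text does not say how they combine.

    TRUST_THRESHOLD = 10  # the "10 uploads / 10 peers" figure quoted above

    def trusted_member_status(reviews_written: int,
                              trusted_by: int,
                              blocked_by: int) -> str:
        """Hypothetical reading of the Trusted member policy described above."""
        if blocked_by >= TRUST_THRESHOLD:
            # Label suspended; the user may contact the administrator to contest it.
            return "suspended"
        if reviews_written >= TRUST_THRESHOLD or trusted_by >= TRUST_THRESHOLD:
            return "trusted"
        return "regular member"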
