Expertise classification: Collaborative classification vs. automatic extraction

Persistent Link:
http://hdl.handle.net/10150/105709
Title:
Expertise classification: Collaborative classification vs. automatic extraction
Author:
Bogers, Toine; Thoonen, Willem; van den Bosch, Antal
Editors:
Furner, Jonathan; Tennis, Joseph T.
Citation:
Expertise classification: Collaborative classification vs. automatic extraction 2006, 17
Publisher:
dLIST
Issue Date:
2006
URI:
http://hdl.handle.net/10150/105709
Submitted date:
2007-03-27
Abstract:
Social classification is the process in which a community of users categorizes the resources in that community for their own use. Given enough users and categorizations, any given resource comes to be represented by a set of labels or descriptors shared throughout the community (Mathes, 2004). Social classification has become an extremely popular way of structuring online communities in recent years. Well-known examples of such communities are the bookmarking websites Furl (http://www.furl.net/) and del.icio.us (http://del.icio.us/), and Flickr (http://www.flickr.com/), where users can post their own photos and tag them. Social classification, however, is not limited to tagging resources: another possibility is to tag people. Examples are Consumating (http://www.consumating.com/), a collaborative tag-based personals website, and Kevo (http://www.kevo.com/), a website that lets users tag and contribute media and information on celebrities. Another application of people tagging is expertise classification, an emerging subfield of social classification. Here, members of a group or community are classified and ranked based on the expertise they possess on a particular topic. Expertise classification essentially comprises two components: expertise tagging and expert ranking. Expertise tagging focuses on describing one person at a time by assigning tags that capture that person's topical expertise, such as 'speech recognition' or 'small-world networks'. Expert ranking, in contrast, focuses on ranking community members with respect to a specific information request, such as, for instance, a query submitted to a search engine. Methods are developed to combine the information about individual members' expertise (tags) to provide on-the-fly, query-driven rankings of community members. Expertise classification can be done in two principal ways.
The simplest option follows the principle of social bookmarking websites: members are asked to supply tags that describe their own expertise and to rank the other community members with regard to a specific request for information. Alternatively, automatic expertise classification extracts expertise terms from a user's documents and e-mails by looking for terms that are representative of that user. These terms are then matched against the information request to produce an expert ranking of all community members. In this paper we describe such an automatic method of expertise classification and evaluate it using human expertise classification judgments. In the next section we describe related work on expertise classification, after which we describe our automatic method of expertise classification and our evaluation of it in sections 3 and 4. Sections 5.1 and 5.2 describe our findings on expertise tagging and expert rankings, followed by discussion and our conclusions in section 6 and recommendations for future work in section 7.
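The automatic approach the abstract outlines (extract terms representative of each user from their documents, then match those terms against an information request to rank community members) can be illustrated with a minimal sketch. This is not the authors' actual method; the tf-idf-style weighting, the function names, and the toy data are assumptions made purely for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split text into simple word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def expertise_terms(docs_by_user, top_k=5):
    """Extract up to top_k representative terms per user, weighting each
    term by its frequency in the user's documents and penalizing terms
    common to many users (a tf-idf-like score)."""
    user_terms = {user: Counter(t for doc in docs for t in tokenize(doc))
                  for user, docs in docs_by_user.items()}
    # "document frequency": in how many users' profiles a term occurs
    df = Counter()
    for counts in user_terms.values():
        df.update(counts.keys())
    n_users = len(user_terms)
    profiles = {}
    for user, counts in user_terms.items():
        scored = {t: tf * math.log((1 + n_users) / (1 + df[t]))
                  for t, tf in counts.items()}
        profiles[user] = dict(sorted(scored.items(),
                                     key=lambda kv: kv[1],
                                     reverse=True)[:top_k])
    return profiles

def rank_experts(profiles, query):
    """Rank users by the summed weight of the query terms in each
    user's expertise profile (highest score first)."""
    q_terms = tokenize(query)
    scores = {user: sum(profile.get(t, 0.0) for t in q_terms)
              for user, profile in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Given toy document collections for two users, `rank_experts(expertise_terms(docs), "speech recognition")` would place the user whose documents are about speech recognition first, which is the on-the-fly, query-driven ranking the abstract describes.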
Type:
Conference Paper
Language:
en
Keywords:
Classification; Computer Science; Knowledge Organization
Local subject classification:
expert classification; social tagging

All Items in UA Campus Repository are protected by copyright, with all rights reserved, unless otherwise indicated.