Evaluation Activities
Background reading on ontology evaluation:
[http://bioontology.org/literature_on_evaluation.html See Articles]
Evaluation
We need criteria on which to base an evaluative statement about an ontology; otherwise the evaluation may be dismissed as a 'biased' opinion. Likewise, saying that every ontology is good (which they certainly are not) is not productive either.
[Barry's suggestions]
Does it have a clear name?
Does it have clear documentation?
Does it have a clear subject matter?
Are its assertions universally true (e.g., if it says A part_of B, is it true of all instances of A that they are part of some instance of B)? This 'all-some' reading is written out after the list.
Is it used by other independent groups?
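As a point of reference for the part_of question above, the intended 'all-some' reading can be made explicit. The first-order rendering below is our formalization of the prose in that question, not a formula taken from the original notes; part_of on the right-hand side denotes the instance-level relation:
<math>
A\ \mathrm{part\_of}\ B \;\equiv\; \forall x\,\bigl(\mathrm{instance\_of}(x,A) \rightarrow \exists y\,\bigl(\mathrm{instance\_of}(y,B) \land \mathrm{part\_of}(x,y)\bigr)\bigr)
</math>
Checking this question for a given relation then amounts to looking for counterexamples: instances of A that are not part of any instance of B.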
[Nigam's suggestions]
There are four main axes on which an ontology ought to be reviewed:
1 - extent to which it satisfies the purpose for which it was built
2 - ability to express what a user might want to express (use case tests)
3 - ease with which one can express nonsense while using it (e.g. take a few hundred use-case instances and see how many were actually meaningful)
[2 and 3 will be at odds with each other, much as sensitivity and specificity are; a sketch of how they might be scored follows this list]
4 - consistency checking (i.e. is the ontology formally consistent?)
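Before turning to point 4 in detail, here is a minimal Python sketch of how a batch of use-case tests could be scored against axes 2 and 3. The field names and counts are hypothetical, invented purely for illustration, and are not drawn from any actual review:
<pre>
# Hypothetical scoring of axes 2 and 3 over a batch of use-case tests.
# The field names and counts below are invented for illustration only.

def score_use_case_tests(tests):
    """Each test is a dict with two booleans:
    'expressible' - could the user's statement be written in the ontology? (axis 2)
    'meaningful'  - if it was expressible, did the result make sense?      (axis 3)
    """
    expressible = [t for t in tests if t["expressible"]]
    coverage = len(expressible) / len(tests) if tests else 0.0
    meaningful_rate = (sum(t["meaningful"] for t in expressible) / len(expressible)
                       if expressible else 0.0)
    return coverage, meaningful_rate

# A permissive ontology: high coverage (axis 2) but more nonsense slips
# through (lower meaningful_rate, axis 3), mirroring sensitivity vs. specificity.
tests = ([{"expressible": True,  "meaningful": True}]  * 70 +
         [{"expressible": True,  "meaningful": False}] * 20 +
         [{"expressible": False, "meaningful": False}] * 10)
print(score_use_case_tests(tests))  # (0.9, 0.777...)
</pre>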
[Barry's input on 4]
There are two related questions here: are there tools or a methodology for checking, and what is the result of such checking? (If no tools are available, what is the result of a quick manual check?)
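As a partial answer to the tools question, here is a minimal sketch of an automated consistency check using the owlready2 Python library, which bundles the HermiT reasoner (Protégé and ROBOT are common alternatives). The ontology IRI is a placeholder, and the snippet is illustrative rather than a prescribed methodology:
<pre>
# Minimal sketch of an automated consistency check using owlready2, which
# bundles the HermiT reasoner (a Java runtime is required). The ontology
# IRI below is a placeholder, not a real resource.
from owlready2 import (get_ontology, sync_reasoner, default_world,
                       OwlReadyInconsistentOntologyError)

onto = get_ontology("file://ontology-under-review.owl").load()
try:
    with onto:
        sync_reasoner()  # raises if the ontology as a whole is inconsistent
except OwlReadyInconsistentOntologyError:
    print("Ontology is formally inconsistent.")
else:
    # Classes inferred to be equivalent to owl:Nothing are unsatisfiable
    # even when the ontology as a whole is consistent.
    print("Unsatisfiable classes:", list(default_world.inconsistent_classes()))
</pre>
Even when such a check passes, it only answers point 4; the other axes above still require manual review against use cases.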