This is the companion web page for our paper titled "Predicting Reasoning Performance Using Ontology Metrics", accepted at the 11th International Semantic Web Conference (ISWC 2012, research track). If you have any questions regarding the research work or software, please feel free to contact us.
On this page we provide the dataset used in the paper. The dataset contains 358 ontologies collected from the public domain. They are packaged in a compressed directory and can be downloaded here (106 MB compressed, 1.8 GB uncompressed). More details on these ontologies and how we use them can be found in the paper.
The performance of 4 reasoners is evaluated in our paper: FaCT++ (version 1.5.3), HermiT (version 1.3.5), Pellet (version 2.3.0) and TrOWL (version 0.8). For the classification task, each reasoner's user time on each ontology is averaged over 10 runs. The following CSV files record the metric values and runtimes for all the ontologies and all 4 reasoners.
- FaCT++: fact_cls_data.csv
- HermiT: hermit_cls_data.csv
- Pellet: pellet_cls_data.csv
- TrOWL: trowl_cls_data.csv
Note that a cut-off time of 50,000 seconds is applied to all reasoners.
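As a minimal sketch of how one might work with these CSV files, the snippet below parses a miniature in-memory sample and averages the classification times. The column names (`ontology`, `axiom_count`, `class_count`, `runtime_seconds`) and the sample rows are purely illustrative assumptions; inspect the header row of the actual files (e.g. fact_cls_data.csv) for the real metric columns.

```python
import csv
import io
import statistics

# Hypothetical miniature of one reasoner's CSV file. The real files'
# column names and metrics may differ; this only illustrates the idea
# of one row per ontology with metric columns plus a runtime column.
sample = """ontology,axiom_count,class_count,runtime_seconds
ont_a.owl,1200,300,4.2
ont_b.owl,54000,9100,130.5
ont_c.owl,800,150,0.9
"""

rows = list(csv.DictReader(io.StringIO(sample)))

CUTOFF = 50_000.0  # cut-off time (seconds) applied to all reasoners

# Keep only runs that finished within the cut-off, then average the
# classification times across ontologies.
runtimes = [
    float(r["runtime_seconds"])
    for r in rows
    if float(r["runtime_seconds"]) < CUTOFF
]
mean_runtime = statistics.mean(runtimes)
print(f"mean classification time: {mean_runtime:.2f} s")
```

When analysing the real data, ontologies that hit the 50,000-second cut-off should be handled explicitly (filtered out or treated as censored), since averaging them in as ordinary runtimes would bias the results.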