Week 7: Continuation of evaluation task

Last week I was able to combine several ontologies into a single OWL file and load it into AgreementMakerLight. This week, I continued to work on evaluation. The specific tasks I focused on are listed below:

1. For evaluation, we need to compare the matches obtained from our algorithm with the manually annotated results. The manually annotated results contain the package id, the id of the class, and other additional information. I read this file, obtained the corresponding URIs from the class ids, and stored them keyed by the package id.
2. Instead of directly computing values for precision, recall, and F-score, I wrote code that keeps a record of all the matchings that are made. This record contains the source and target URIs, along with their labels and the similarity score obtained during matching. Once we have this, we can perform different analyses by altering the threshold without having to rerun the entire program. It will also help us analyze which contents the algorithm is able to match successfully and which ones it is not (a sketch of this record and the threshold-based evaluation follows this list).
3. Out of all the matches obtained, we are only interested in those source-target pairs whose classes are subclasses of “oboe:measurementType”. Hence I filtered the matched pairs by enumerating all the superclasses of the source and target classes and checking whether “measurementType” appears among them (see the superclass-check sketch after this list).
4. One class from the source ontology might match several classes of the target ontology, each with a different similarity score. Hence, for each class from the source ontology, I check all its candidate pairs from the target ontology and keep only the one with the maximum similarity score (see the last sketch after this list).
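
To make items 1 and 2 concrete, below is a minimal sketch of how the stored match record and the threshold-based evaluation could look in Python. The tab-separated file layout, the column names (`package_id`, `source_uri`, `target_uri`), and the `MatchRecord` fields are assumptions for illustration, not the actual format used in the project.

```python
import csv
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class MatchRecord:
    source_uri: str
    target_uri: str
    source_label: str
    target_label: str
    score: float

def load_gold(path: str) -> Dict[str, Tuple[str, str]]:
    """Load the manual annotations, keyed by package id.

    Assumes a tab-separated file whose columns include a package id and the
    source/target class URIs (already resolved from the class ids)."""
    gold = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            gold[row["package_id"]] = (row["source_uri"], row["target_uri"])
    return gold

def evaluate(records: List[MatchRecord],
             gold: Dict[str, Tuple[str, str]],
             threshold: float) -> Tuple[float, float, float]:
    """Compute precision, recall, and F-score at a given similarity threshold."""
    predicted = {(r.source_uri, r.target_uri) for r in records if r.score >= threshold}
    reference = set(gold.values())
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score
```

Because the record keeps every candidate pair with its raw score, `evaluate` can be rerun with different thresholds without repeating the matching step.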
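
For item 3, one way to perform the superclass check is to walk the rdfs:subClassOf closure of each matched class. The sketch below uses rdflib; the OBOE namespace URI and the capitalisation of the class name are assumptions and should be adjusted to the ontology actually loaded.

```python
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

# Assumed OBOE namespace and class name; adjust to the ontology being used.
MEASUREMENT_TYPE = URIRef(
    "http://ecoinformatics.org/oboe/oboe.1.2/oboe-core.owl#MeasurementType")

def is_measurement_type(graph: Graph, cls: URIRef) -> bool:
    """Return True if cls is (transitively) a subclass of the measurement-type class."""
    # transitive_objects follows rdfs:subClassOf links upwards from cls,
    # yielding cls itself and every superclass reachable from it.
    return any(sup == MEASUREMENT_TYPE
               for sup in graph.transitive_objects(cls, RDFS.subClassOf))

def filter_measurement_matches(graph: Graph, records):
    """Keep only pairs whose source and target fall under the measurement-type class."""
    return [r for r in records
            if is_measurement_type(graph, URIRef(r.source_uri))
            and is_measurement_type(graph, URIRef(r.target_uri))]
```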
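
Item 4 then reduces to keeping, for each source class, only the highest-scoring target. A minimal sketch, reusing `MatchRecord` from the first snippet:

```python
def best_match_per_source(records):
    """For each source class, keep only the target with the highest similarity score."""
    best = {}
    for r in records:
        current = best.get(r.source_uri)
        if current is None or r.score > current.score:
            best[r.source_uri] = r
    return list(best.values())
```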

After performing these tasks, I ran the algorithm on the entire dataset. However, since the ontologies are huge, matching took a very long time; only 35 ontologies were matched in 24 hours. Even after dropping some of the matching techniques that were being used, there was no noticeable improvement in speed. For the next week, I plan to identify the main cause of this delay and optimize the algorithm. Once that is done, I can proceed with comparing the results obtained from this matching algorithm with the manual annotation results.
