{"id":2275,"date":"2014-06-23T04:05:30","date_gmt":"2014-06-23T04:05:30","guid":{"rendered":"https:\/\/notebooks.dataone.org\/?p=2275"},"modified":"2014-06-23T04:09:23","modified_gmt":"2014-06-23T04:09:23","slug":"some-analysis-from-the-coding-and-new-paper-collection-from-google-scholar-week-5","status":"publish","type":"post","link":"https:\/\/notebooks.dataone.org\/citsci-data\/some-analysis-from-the-coding-and-new-paper-collection-from-google-scholar-week-5\/","title":{"rendered":"Some analysis from the coding and new paper collection from Google Scholar – Week 5"},"content":{"rendered":"
Last week I completed the first round of coding for the meta-analysis, and this week I reviewed that coding. For the last two weeks, our team members have discussed the criteria for paper acceptance and the mechanisms for data quality assurance. While reviewing the previous coding results, I focused on applying the coding scheme that was newly established through those discussions.
According to the coding results, data quality assurance mechanisms were used most frequently in biology, ecology, computer science, and geographic information science: 9 papers were categorized under biology, 8 under ecology, 7 under computer science, and 8 under geographic information science. Most of the papers from biology, ecology, and ornithology were retrieved with the keyword "citizen science", and most of the papers from geographic information science were retrieved with the keyword "volunteered geographic information". Papers from computer science and design science were retrieved by both keywords.
The most dominant mechanism for data quality assurance was participant training: researchers in 17 papers trained participants to increase data quality. Participant training was used mostly in citizen science; only two papers in the field of volunteered geographic information (VGI) trained participants, although our paper collection may not be extensive enough to generalize. Data normalization was the second most frequent mechanism. In 13 papers, data collected by different participants or with different methods were adjusted to a common scale, or unusual reports were filtered out, using standard and advanced statistical techniques. Data normalization showed no bias in usage: it was applied about evenly in citizen science and VGI. Expert review was also used often, with 10 papers drawing on experts' knowledge and experience. These are only a few examples of statistical analysis from the coding results. More extensive analysis will be done within the next few weeks, and the results will be shared.
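To make the normalization step concrete, here is a minimal sketch of the two operations those 13 papers describe: rescaling each participant's reports to a common scale and filtering unusual reports with a robust statistic. It is not drawn from any paper in the sample; the column names ("participant", "count") and the cutoff value are hypothetical.

```python
import pandas as pd

def normalize_and_filter(df, value_col="count", group_col="participant", cutoff=3.5):
    grouped = df.groupby(group_col)[value_col]
    # Rescale to a common scale: z-score within each participant's reports.
    scaled = (df[value_col] - grouped.transform("mean")) / grouped.transform("std")
    # Filter unusual reports with a robust median/MAD score: unlike a plain
    # z-score, it is not distorted by the very outliers it is meant to catch.
    med = grouped.transform("median")
    mad = (df[value_col] - med).abs().groupby(df[group_col]).transform("median")
    robust_z = 0.6745 * (df[value_col] - med) / mad  # assumes MAD > 0
    return df.assign(scaled=scaled)[robust_z.abs() <= cutoff]

# Hypothetical reports from two volunteers; the count of 500 is implausible.
reports = pd.DataFrame({
    "participant": ["a"] * 5 + ["b"] * 5,
    "count": [10, 12, 11, 10, 13, 5, 6, 5, 7, 500],
})
clean = normalize_and_filter(reports)  # the report of 500 is dropped
```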
I have also collected a new list of papers from Google Scholar. I tested existing codes and programs that extract citation information from Google Scholar, and found a program called "Publish or Perish". This program performed the best in terms of usability and collected the largest amount of data. It can be downloaded from http://www.harzing.com/pop.htm. It is known that the quality of citation information from Scopus and Web of Science (WOS) is higher than that from Google Scholar. However, we could collect only 67 papers from Scopus. With Publish or Perish, we could collect 1,000 papers with the keywords "citizen science" and "data quality", and 550 papers with the keywords "volunteered geographic information" and "data quality". We may need to clean the data, as some redundant papers can be included in the list. However, this extensive coverage of citation information will give us a far better understanding of the terrain of citizen science and its efforts to assure data quality. Next week, we will review the list from Google Scholar to see how we can clean the data, and we will investigate how to integrate the results from Google Scholar with those from Scopus.
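As a starting point for that cleaning step, here is a hedged sketch of one way redundant entries could be collapsed: deduplicating on a normalized title key. The file name and the "Title" and "Year" columns are assumptions for illustration, not a description of the actual Publish or Perish export format.

```python
import csv
import re

def title_key(title):
    """Lowercase, strip punctuation, and collapse whitespace for matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def deduplicate(rows):
    """Keep the first occurrence of each (normalized title, year) pair."""
    seen, unique = set(), []
    for row in rows:
        key = (title_key(row["Title"]), row.get("Year", ""))
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

# Hypothetical CSV export of the "citizen science" + "data quality" search.
with open("citizen_science_data_quality.csv", newline="", encoding="utf-8") as f:
    papers = deduplicate(csv.DictReader(f))
print(len(papers), "papers after removing redundant entries")
```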