Week 4: Evaluate Instrument for Education – Evaluation Tool Design/Development Part 2

For this week, I continued refining the questionnaire/survey evaluation tool that I started last week. During last week, I constructed the outline for the questionnaire/survey, compiled all the applicable evaluation questions, and categorized the questions based on their corresponding evaluation areas. However, since there were a total of 90 questions, it would be impractical for a usable questionnaire/survey to include all of them. As a result, this week's work focused on defining various filtering categories that can be used to streamline the selection of the evaluation questions.

The categories that I currently have for selecting the evaluation questions are as follows:

  • Question Obligation (i.e. Mandatory, Recommended, or Optional)
  • Applicability to Pre-Training Evaluation
  • Applicability to During-Training Evaluation
  • Possible Question Type
  • Possible Answers for the Question
  • Evaluation Area per Kirkpatrick (i.e. Reaction, Learning, Behavior, Results)
  • Applicability by Training Duration:
    • 1-Hour
    • Multi-Hour
    • Multi-Day
    • Multi-Week
  • Applicability to:
    • In-Person Training/Education
    • Online/Web-based Training/Education
    • Text-Based Training/Education

Using these categories, a user of the evaluation tool should be able to review the evaluation questions that apply to their desired scenario and customize the questions for their final questionnaire/survey accordingly; the sketch below illustrates this filtering idea.
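As a rough illustration of how this category-based selection could work in practice, here is a minimal sketch in Python. The field names, tag values, and the sample question are all hypothetical stand-ins for illustration only; the actual tool is a questionnaire/survey document rather than software.

    from dataclasses import dataclass, field

    @dataclass
    class EvaluationQuestion:
        """One entry in the question bank, tagged with the filtering categories."""
        text: str
        obligation: str            # "Mandatory", "Recommended", or "Optional"
        kirkpatrick_area: str      # "Reaction", "Learning", "Behavior", or "Results"
        phases: set[str] = field(default_factory=set)     # e.g. {"pre", "during", "post"}
        durations: set[str] = field(default_factory=set)  # e.g. {"1-hour", "multi-day"}
        delivery: set[str] = field(default_factory=set)   # e.g. {"in-person", "online"}

    def filter_questions(bank, *, phase=None, duration=None,
                         delivery=None, obligation=None):
        """Return the questions whose tags match every requested category."""
        results = []
        for q in bank:
            if phase and phase not in q.phases:
                continue
            if duration and duration not in q.durations:
                continue
            if delivery and delivery not in q.delivery:
                continue
            if obligation and q.obligation != obligation:
                continue
            results.append(q)
        return results

    # Hypothetical example: narrow the 90-question bank down to the mandatory
    # questions that fit a one-hour, in-person training session.
    bank = [
        EvaluationQuestion(
            text="How satisfied were you with the training overall?",
            obligation="Mandatory",
            kirkpatrick_area="Reaction",
            phases={"post"},
            durations={"1-hour", "multi-hour", "multi-day", "multi-week"},
            delivery={"in-person", "online", "text-based"},
        ),
    ]
    selected = filter_questions(bank, duration="1-hour",
                                delivery="in-person", obligation="Mandatory")
    for q in selected:
        print(q.text)

In this sketch, each category from the list above becomes a tag on a question, and the user's scenario simply intersects those tags, which mirrors how the evaluation tool is meant to narrow the full question set down to a practical subset.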

In addition to the evaluation tool, I also worked on creating a presentation about my project that will be shared next week during the American Geophysical Union's (AGU) Data Science Credentialing Editorial Board Meeting. I am very excited that I was invited to give two different presentations relating to data management training at the Meeting, which will take place at AGU's headquarters in Washington, D.C. next Wednesday, June 15th. Along with this project, I will also present the Data Management Training Survey project that I completed last fall through sponsorship from the Federation of Earth Science Information Partners (ESIP). Additional information about my ESIP project can be found at the following links: http://commons.esipfed.org/node/8755 and http://commons.esipfed.org/node/8763

For next week, I am looking forward to attending AGU's Data Science Credentialing Editorial Board Meeting and presenting to the Board members. I will also report on the Meeting in my reflection post. Additionally, I will continue to improve the evaluation tool by learning more about Likert scales, comparing the characteristics of six different tools/platforms for implementing questionnaires and surveys (SurveyMonkey, Google Forms, Zendesk, Desk, Qualtrics, and LimeSurvey), and weighing the pros and cons of several survey variations that were collected during Week 3's literature review (e.g. short surveys, social listening, live chat, etc.).
