Screencast style guide – Screencast Tutorials, Week #2

Begin with data, where possible

While it’s easy to watch a video tutorial and agree or disagree with its design decisions, as a scientist my goal in prescribing stylistic guidelines is to ensure that each recommendation has an empirical basis. Unfortunately, there aren’t as many data-driven studies out there as I’d like. Much of the data I’ve reviewed comes from surveys or pre/post-tutorial quizzing of undergraduate students, administered through the libraries of public universities. The upside is that some of the DataONE tools that will eventually get tutorial screencasts have similarities to library search engines, websites, and databases. The downside is that our target audience is, generally speaking, not 20-something undergraduates. So, we’re pursuing IRB approval to do a screening of several options ourselves at the DUG meeting, particularly for visual options (like mouse cursor highlighting and animated callouts) that were not as rigorously reviewed in the literature I found.


Static (frame-by-frame) vs. Dynamic (animated) Tutorials

Tutorial videos come in two basic formats: static or dynamic. Static videos show no animation; instead, they pair vocal instructions with a series of still images captured from the screen. An example static tutorial: http://youtu.be/m6UTAzMenNA?t=42s

Advantages of the static or frame-based style generally include much smaller file sizes, easier translation into other languages, and easier updates if the software tool changes, because you can just replace the images of affected areas rather than re-record an entire video. Static tutorials are also easier to export into static formats, like a webpage or a printable PDF (http://www.indoition.com/screencasting-tool-choosing.htm).

Dynamic or animated tutorials are what typically come to mind when hearing “screencast”; they display all mouse cursor movement, sometimes in addition to other animation, such as callouts. An example dynamic tutorial: https://www.youtube.com/watch?v=rlK-5joOlyU

Most screencast tutorials tend to be dynamic (Sugar, Brown, & Luterbach, 2010).


Guidelines
The recommendations of Plaisant and Shneiderman (2005) neatly categorize the goals for nearly all aspects of a screencast tutorial:

1) Be faithful to the actual user interface

Record the tool itself as the user will view and interact with it. Do not include distractions. Use an example dataset that is relevant to the user.

2) Keep segments short

How short? Short. If the audience is only looking for how to use one aspect of a tool, or to understand the steps to complete one action, the bare minimum is recommended: 10 to 25 seconds (Grossman and Fitzmaurice, 2010). If videos run into the 2-3 minute range, viewers generally prefer to skim text documents for help (Bowles-Terry et al., 2010; Mestre, 2012), and they generally learn better from text documents than from videos, presumably because longer videos tax memory (Grabler et al., 2009).

…which leads to:
3) Ensure that tasks are clear and simple

Break down complex procedures into meaningful building blocks and provide a visual label or callout to tell viewers what they are about to learn. Ertelt (2007) found that, compared to learning-by-doing, this method of labeling significantly increases declarative knowledge, but practice is needed to encode it as procedural knowledge. Adding a momentary roadblock that the viewer must click through to control the pacing further improved outcomes, especially when practice was not directly or immediately an option.

Minimize extraneous mouse movements, and leave the mouse cursor in a position where it won’t cover information or distract. Storyboarding will help with this. With high-end software, extraneous mouse movements can be smoothed or edited out.

Minimize distractions by recording the smallest section of the screen that shows what’s necessary to provide context. Don’t have other tabs open in a browser-based tool, for example, or icons cluttering the desktop.

4) Coordinate demonstrations with text documentation

In addition to visual style guidelines, vocabulary in the tutorial scripts must also be controlled. For example, is a button “clicked,” “selected,” or “chosen?” In some contexts, each of these words carries a slightly different meaning. Being consistent with vocabulary, as well as imagery, is important to keep viewers from being distracted from the task at hand (learning the tool) because they’re trying to decipher the method of instructional delivery (Mayer, 2006).

5) Use spoken narration

While delivering information in multiple modes (visual, auditory) can make learning more effective (Mayer, 2003; Alessi & Trollip, 2001, pp. 21-22), presenting words that are too similar in multiple modes simultaneously increases memory load and can distract from learning (Mayer, 2006). So, rather than the tutor speaking aloud the same or similar instructions as are written on the screen, such instructions should be presented serially (Mayer, 2006).

Pros and cons of professional voice talent are reviewed in Weiss (2005, pp. 75-76). Essentially, paid voice professionals are generally more efficient and consistent, but cost more and may not be familiar with technical or scientific jargon. Regardless, attitude, tone, inflection and intonation, energy, and demeanor should reflect well on the project and its audience: people learn better if dynamic visualizations are accompanied by narration presented in a standard accent rather than a foreign accent, and with a human voice rather than a machine-synthesized voice (Mayer, 2005).

6) Provide procedural or instructional information rather than conceptual information

The utility of this guideline depends on the audience. For novice students with little to no experience with the tool or the field, some authors recommend having an outline at the beginning, with an introduction of basic concepts (Clark and Lyons, 2004; Mayer, 2006; Oud, 2009). But advanced students are often frustrated with this pacing and want to skip straight ahead to the instruction; summaries become counter-productive to their learning (Kalyuga, 2005; Oud, 2009).

One solution, if conceptual background is to be given, might be to provide that information after the step-by-step instruction (Bowles-Terry et al., 2010).

7) Use highlighting to guide attention

I’ve found little specific information as to the effectiveness of different types of highlighting: circles and boxes around areas of the screen, dimming the rest of the screen, pointing with the mouse cursor, drawing a large arrow on the screen and more are all possible. One author reports that, as an alternative to using a cursor to focus the user’s attention on different parts of the screen, using coloured semi-transparent highlights is effective (Weiss, 2005). I’d like to test this, and look more into literature on the subject.

8) Ensure user control

Inability to control tutorial pacing is the most frequently cited weakness of the medium, so the ability to pause and go back in a tutorial is vital. Taking control of pacing to the extreme, a choose-your-own-adventure type of path, where users pick the next short video to explore the options they’re interested in, is possible and makes the experience very personalized, but it also requires the most structured pre-planning.

9) Keep file sizes small

Brevity will help here.

10) Strive for universal usability

I am on the fence about closed captioning. Most software and video-serving platforms (YouTube, Vimeo) seem to make closed captioning a fairly straightforward option, but if the animations and callouts are clear enough and the video structure simple and clear enough, it feels unnecessary. That might even be a good test of the minimalism of the video. Some tutorials instead use callouts with a summary of each snippet of dialogue, so it feels like the callout is being read aloud (e.g., https://www.youtube.com/watch?v=dKhRaYINX9M).

As for file formats, Google has detailed recommendations for YouTube here: https://support.google.com/youtube/answer/1722171?hl=en and Vimeo lists theirs here: http://vimeo.com/help/compression

Videos can also be exported as SWF files for direct embedding, rather than hosted by a third party. This would allow us to customize the player look and feel at least somewhat, and would avoid the possibility of sending users over to YouTube (from whence they may never come back, having been distracted by an avalanche of kitten videos or the like). However, some mobile devices can’t natively play Flash videos.
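If we do end up serving videos ourselves, an HTML5 video element with an optional captions track is one alternative to SWF that also bears on the closed-captioning question above. The TypeScript sketch below is only a rough illustration of that idea, not a settled plan; the function name and file URLs are hypothetical.

// Minimal sketch: embed a self-hosted tutorial video with an optional
// captions track (an assumed alternative to SWF, not a decided approach).
function embedTutorial(container: HTMLElement, videoUrl: string, captionsUrl?: string): HTMLVideoElement {
  const video = document.createElement("video");
  video.src = videoUrl;          // e.g., a hypothetical "merge-datasets.mp4" hosted with the tool
  video.controls = true;         // guideline 8: viewers must be able to pause and go back
  video.preload = "metadata";    // keep the page light until the user actually asks for help

  if (captionsUrl) {
    const track = document.createElement("track");
    track.kind = "captions";
    track.src = captionsUrl;     // e.g., a hypothetical "merge-datasets.vtt" caption file
    track.srclang = "en";
    track.label = "English";
    video.appendChild(track);
  }

  container.appendChild(video);
  return video;
}

Keeping the player on our own page would let us control its look and feel, and captions remain a per-video text file that is easy to translate or update.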

This leads to the question of how the videos are to be presented to users. Contextual “show me how” links located on the webpage close to where users may stumble and look for help have been shown to increase video viewership in similar circumstances (Lindsay et al., 2006; Bowles-Terry et al., 2010). Help systems are most often used when they are context-specific, useful, obvious to invoke, non-intrusive, and easily available (Grayling, 2002).
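As a rough illustration of what a contextual “show me how” link could look like in a browser-based tool, here is a small TypeScript sketch; the function name, placement, and video URL are all hypothetical and would depend on the tool in question.

// Minimal sketch: a "Show me how" link placed next to a spot where users tend
// to stumble; clicking it reveals the relevant short video inline instead of
// sending the user off to a video site.
function addShowMeHowLink(anchor: HTMLElement, videoUrl: string): void {
  const link = document.createElement("a");
  link.href = "#";
  link.textContent = "Show me how";

  link.addEventListener("click", (event) => {
    event.preventDefault();
    const video = document.createElement("video");
    video.src = videoUrl;        // one short, task-specific video per link
    video.controls = true;
    video.autoplay = true;
    anchor.insertAdjacentElement("afterend", video);
    link.remove();               // avoid stacking duplicate players on repeat clicks
  });

  anchor.insertAdjacentElement("afterend", link);
}

Keeping the player inline and on demand fits Grayling’s (2002) criteria: context-specific, obvious to invoke, non-intrusive, and only loaded when someone actually asks for help.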

Keeping videos very short and limited to one specific task, while presenting users with a list of all available videos, helps them find exactly the instruction they need; ease of pinpointing content is one of the reasons students have cited for preferring static tutorials over videos (Mestre, 2012). Bulleted lists or otherwise easily scannable pages, with bold type drawing attention to key terms, are recommended (ibid.).


Next week, I’ll be examining the DataONE tools and prioritizing them for screencast tutorials. Then it’s on to making trial videos before the DUG meeting!


References

Alessi, S. & Trollip, S. (2001). Multimedia for learning: Methods and Development (3rd ed.). Boston, MA: Allyn and Bacon.

Bowles-Terry, M., Hensley, M.K., and Hinchliffe, L.J. (2010). Best practices for online video tutorials: A study of student preferences and understanding. Communications in Information Literacy, 4(1): 17-28.

Clark, R.C. and Lyons, C. (2004), Graphics for Learning: Proven Guidelines for Planning, Designing and Evaluating Visuals in Training Materials, Pfeiffer/Wiley, San Francisco, CA.

Ertelt, A. (2007). On-screen videos as an effective learning tool: The effect of instructional design variants and practice on learning achievements, retention, transfer, and motivation (Doctoral dissertation, Universitätsbibliothek Freiburg http://www.freidok.uni-freiburg.de/volltexte/3095/pdf/Dissertation_Ertelt_end.pdf).

Grabler, F., Agrawala, M., Li, W., Dontcheva, M. and Igarashi, T. (2009). Generating photo manipulation tutorials by demonstration. In Proc. ACM SIGGRAPH, pages 1–9.

Grayling, T. (2002). If we build it, will they come? A usability test of two browser-based embedded help systems. Technical Communication, 49(2), 193-209.

Grossman, T. and Fitzmaurice, G. (2010). Toolclips: an investigation of contextual video assistance for functionality understanding. In Proc. SIGCHI, pages 1515–1524.

Kalyuga, S. (2005), “Prior knowledge principle in multimedia learning”, in Mayer, R.E. (Ed.), The Cambridge Handbook of Multimedia Learning, Cambridge University Press, Cambridge, pp. 325-37.

Mayer, R.E. (2003). “Elements of a Science of E-Learning.” Journal of Educational Computing Research, 29(3): 297–313.

Mayer, R.E. (2006), “Ten research-based principles of multimedia learning”, in O’Neil, H.F. and Perez, R.S. (Eds), Web-based Learning: Theory, Research, and Practice, Lawrence Erlbaum Associates, Mahwah, NJ, pp. 371-90.

Mestre, L. (2012). Student preference for tutorial design: a usability study. Reference Services Review, 40(2): 258-276.

Oud, J. (2009). Guidelines for effective online instruction using multimedia screencasts. Reference Services Review, 37(2): 164-177.

Plaisant, C. and Shneiderman, B. (2005), “Show Me! Guidelines for Producing Recorded Demonstrations”, Proceedings of the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC’05), Dallas, Texas, September 21–24, 2005.

Sugar, W., Brown, A. & Luterbach, K. (2010). Examining the Anatomy of a Screencast: Uncovering Common Elements and Instructional Strategies. International Review of Research in Open and Distance Learning, 11(3).

Weiss, A. (2005). The iTour project: a study of the design and testing of effective online animated tours as a form of interactive online documentation. PhD Dissertation. RMIT University, Melbourne, Australia.
