Abstract
Background Quality assurance concerns about social media platforms used for education have arisen within the medical education community. As more trainees and clinicians use resources such as blogs and podcasts for learning, we aimed to identify quality indicators for these resources. A previous study identified 151 potentially relevant quality indicators for these social media resources.
Objective To identify quality markers for blogs and podcasts using an international cohort of health professions educators.
Methods A self-selected group of 44 health professions educators at the 2014 International Conference on Residency Education participated in a Social Media Summit during which a modified Delphi consensus study was conducted to determine which of the 151 quality indicators met the a priori ≥90% inclusion threshold.
Results Thirteen quality indicators classified into the domains of credibility (n=8), content (n=4) and design (n=1) met the inclusion threshold.
Conclusions The quality indicators that were identified may serve as a foundation for further research on quality indicators of social media-based medical education resources and prompt discussion of their legitimacy as a form of educational scholarship.
Introduction
The last decade has seen an explosion of social media-based medical education resources including blogs, microblogs (eg, Twitter), networking websites (eg, Facebook) and podcasts.1,2 They are increasingly being used for medical education3–5 and integrated into formal curricula.6–10 Despite this growth, quality standards for social media-based resources have not been defined or standardised.11–15 While the research on the quality of eLearning resources is informative,16 social media resources differ in that they are openly available (ie, not behind a paywall or login), unstructured (generally not part of a course or curriculum) and unregulated (most are not formally affiliated with institutions or formally appointed instructors). Early attempts at ensuring the quality of social media resources have included introducing prepublication expert peer review of individual blog posts,17 curation of online content by expert panels6 and identification of quality resources by quantifying impact.18,19 However, these methods lack validation and are subject to some of the drawbacks of traditional peer review, such as reviewer bias.20–22
While the ultimate goal of social media-based medical education resources is to benefit patient care through enhanced knowledge translation, four stakeholder groups would benefit from a standardised assessment of quality.23–25 First, learners may not have the expertise to discern true from erroneous and important from less important content; quality standards would help them identify the highest quality resources. Second, educators who are unfamiliar with blogs and podcasts could benefit from quality standards that assess resources, allowing them to appropriately recommend resources to their learners. Third, content producers could improve the design and delivery of their content by adhering to metrics of quality. Finally, academic leaders, particularly those on promotions committees attempting to quantify the impact of social media-based medical education resources, could use these quality standards to better adjudicate a faculty member's education scholarship.19,23–26
The purpose of this study was to identify the most important quality indicators for blogs and podcasts from the perspectives of health professions educators using a modified Delphi consensus process.
Methods
Participants
A self-selected group of health professions educators attending the 2014 International Conference on Residency Education (ICRE) participated in a pre-conference Social Media Summit. A modified Delphi consensus study was conducted to assess quality markers for social media educational resources. Participants were randomly assigned to two groups (Group A or Group B).
Quality indicators
A previous study identified 151 potential quality indicators for blogs and podcasts through a multiphase methodology that included a literature search for publications describing quality indicators for secondary resources, extraction and qualitative analysis of those quality indicators, and four focus groups held to ensure that no important quality indicators were missed.27 The qualitative analysis divided the quality indicators into three major domains: credibility (n=53), content (n=44) and design (n=54). Each domain had multiple subthemes. These quality indicators were subsequently pilot tested internally by the research team for clarity and content validity. This modified Delphi process was similar to a previous one conducted with expert bloggers and podcasters in emergency medicine and critical care, which elicited the priorities of the producers of these resources.28 The current study was conducted to determine whether medical educators would prioritise the same quality indicators as content producers.
Delphi survey
Following a modified Delphi methodology,29–33 participants completed two sequential real-time web-based surveys, hosted on SurveyMonkey.com, during a 2 h session as outlined in figure 1.
Figure 1 Flowchart demonstrating the modified Delphi consensus process used to identify quality indicators (QI) for blogs and podcasts.
Survey 1 assessed each of the previously identified 151 quality indicators during the first half of the session. For each indicator, individual participants anonymously rated its importance as a measure of quality for blogs and then for podcasts. A 7-point Likert scale was used with 1 labelled ‘strongly disagree’ and 7 labelled ‘strongly agree’. Basic participant demographic data were also captured. To prevent rater fatigue, participants in Group A answered questions 1–73 and those in Group B answered questions 74–151.
The results of Survey 1 were immediately compiled and used to develop Survey 2, which was completed during the second half of the session. Survey 2 comprised all quality indicators from Survey 1 with a mean score of ≥5 (out of 7), with the mean scores listed next to each survey item. In Survey 2, instead of a Likert scale, participants indicated whether each item should be included in the final list of quality indicators by selecting ‘include’ or ‘do not include’.
Data analysis
Descriptive statistics of the participant demographics and survey data were calculated. While consensus can be achieved through a variety of techniques,34 it was determined a priori that ≥90% consensus from Survey 2 would provide a concise but meaningful list of quality indicators, based on the previously conducted modified Delphi consensus study of bloggers and podcasters.28
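To make the two-stage filter concrete, the following is a minimal sketch in Python of how the consensus computation could be carried out. The data structures, indicator names and function names are illustrative assumptions for exposition, not the authors' actual analysis code; only the thresholds (mean Likert rating ≥5 in Survey 1, ≥90% endorsement in Survey 2) come from the methods described above.

```python
# Illustrative sketch of the two-stage modified Delphi filter described above.
# Data structures and indicator names are hypothetical; only the thresholds
# (mean rating >= 5 in Survey 1, >= 90% endorsement in Survey 2) are from the study.

def advance_to_survey_2(survey1_ratings):
    """Survey 1: retain indicators whose mean 7-point Likert rating is >= 5."""
    advanced = {}
    for indicator, ratings in survey1_ratings.items():
        mean_score = sum(ratings) / len(ratings)
        if mean_score >= 5:
            # Mean scores were displayed beside each Survey 2 item.
            advanced[indicator] = round(mean_score, 2)
    return advanced

def meets_consensus(survey2_votes, threshold=0.90):
    """Survey 2: retain indicators endorsed ('include') by >= 90% of participants."""
    return {
        indicator: sum(votes) / len(votes)
        for indicator, votes in survey2_votes.items()
        if sum(votes) / len(votes) >= threshold
    }

# Worked example with made-up ratings for two hypothetical indicators.
survey1 = {
    "discloses conflicts of interest": [7, 6, 7, 5, 6],  # mean 6.2 -> advances
    "uses an entertaining tone": [3, 4, 2, 5, 3],        # mean 3.4 -> dropped
}
print(advance_to_survey_2(survey1))

survey2 = {"discloses conflicts of interest": [True] * 20 + [False] * 1}
print(meets_consensus(survey2))  # 20/21 (~95%) -> included in the final list
```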
Results
Participants
A total of 44 participants completed both surveys. Table 1 lists their demographic information. There was a preponderance of educators from Canada (77%) specialising in emergency medicine (43%) and holding a Doctor of Medicine (64%) and/or a Master's degree (57%).
Table 1 Participant demographic information for health professions educators in Groups A and B (n=22 in each)
Top quality indicators
Of the 151 initially abstracted quality indicators, 13 items achieved ≥90% agreement on their importance (table 2). Figure 1 outlines how these 13 items were derived through the modified Delphi methodology. Nine quality indicators were applicable to both blogs and podcasts, with an additional three specific to blogs and one specific to podcasts. The only indicator that reached 100% consensus for both blogs and podcasts was transparency by the authorities who created the resource (eg, author, editor, publisher) regarding conflicts of interest. In total, there were eight quality indicators in the domain of credibility (transparency and trustworthiness of authorities), four in content (subject matter) and one in design (presentation, aesthetics and functionality).
Table 2 Quality indicators for blogs and podcasts with ≥90% consensus among medical education experts within the three domains of credibility, content and design
The online supplementary appendix lists all of the 151 surveyed quality indicators and the consensus results from Surveys 1 and 2 for both blogs and podcasts.
Credibility
The majority of the identified quality indicators were from the credibility domain (8 of 13 indicators). The only item to garner 100% agreement involved the transparency of the authorities (author, editor, publisher) in disclosing conflicts of interest. Other identified items included transparency around the material's creation and its intent (eg, advertisement vs content, or fact vs opinion), as well as clear attribution of materials. The positive reputation of the author, editor or publisher was deemed far less important as a marker of quality. These results suggest that, for credibility on blogs and podcasts, it is most important to be transparent by having identifiable authors, disclosing conflicts of interest and using referenced citations.
Content
Educators consistently valued high-quality, professionally presented and accurate content that was relevant to its intended audience (4 of 13 indicators). In contrast, a conversational tone and an entertaining approach both scored poorly. Despite the nature of social media platforms, which are designed for open conversation, participants did not value interaction between the authority (eg, author or publisher) and the readers or listeners. Notably, peer review was not determined to be a priority.
Design
Only one of the design quality indicators achieved ≥90% consensus among educators, suggesting that high-quality content is valued largely independently of aesthetics and presentation design. The single indicator that achieved consensus reinforced the importance of podcasts using technology that is functional for all learners. Mobile-responsive design, an intuitive user interface, customisability and high-quality images and audio were less valued by educators.28
Discussion
A diverse self-selected group of health professions educators from the 2014 ICRE Social Media Summit identified 13 quality indicators within the domains of credibility, content and design with ≥90% consensus for educational blogs and podcasts using a modified Delphi methodology. These quality indicators provide a foundation for future scholarship to identify quality and critically appraise social media educational resources.
This study builds on previous work in this field27 to identify the quality indicators felt to be most important by a group of expert health professions educators. A modified Delphi consensus process conducted with expert emergency medicine bloggers and podcasters endorsed substantially more quality indicators at the ≥90% level (14 for bloggers and 26 for podcasters).28 This difference may reflect content producers' greater fluency with the operational nuances and pitfalls of publishing educational material on social media.
In our study, health professions educators identified four items that were deemed quality indicators specific to blogs (n=3) or podcasts (n=1), but not both. Interestingly, the educators found it important for bloggers to be content experts on the topics they wrote about, but this was not a requirement for podcasters. This may reflect how these two social media modalities are commonly used, with blogs often serving as reference tools and podcasts used to provoke discussion and transmit tacit knowledge. Furthermore, citations, references and coherence of content were important for blogs but were not criteria for podcasts. This may represent a difference in expectations of each medium, as it may be unwieldy to accurately list or mention full citations in audio format (even though most podcasts have a companion website or blog where references can more easily be listed). Specifically for podcasts, educators valued compatibility across all platforms. Because learners often listen to podcasts on mobile devices, which may run different operating systems (eg, iOS, Android), compatibility across devices is perceived as important by educators. In contrast, compatibility is less critical for blogs, presumably because blogs typically exist on universally accessible and often mobile-responsive website platforms.
Traditional prepublication peer review has been the gold standard for quality in scholarship and print journal publications. The absence of peer review is often cited as one of the major weaknesses of digital self-publishing platforms such as blogs and podcasts.13–15 However, the peer review process has been faulted as an imperfect and unproven approach to quality assurance, with major limitations including reviewer bias, inconsistent quality of reviews and an inability to reliably identify academic fraud.20–22,35,36 We speculate that these drawbacks, in addition to the time and resources required to implement peer review, may account for the failure of our consensus findings to endorse peer review as a quality marker for blogs and podcasts. Only 69% (blogs) and 53% (podcasts) of the participants endorsed an editorial or peer review process, a finding similar to that of a survey of Canadian emergency medicine residents and programme directors.36 Also, only 47% (blogs) and 40% (podcasts) of participants endorsed the inclusion of peer-reviewed citations as references.
These consensus recommendations have several limitations. First, a number of participants had contributed to or owned blogs (11/44) or podcasts (6/44), and all self-selected to attend the Social Media Summit. While this may impart a level of fluency and expertise to our panellists, it introduces bias and may limit the generalisability of our findings to the broader population of health professions educators. Second, the participants were predominantly Canadian and included a significant number of emergency physicians. This uneven distribution of countries and specialties probably reflects the location of the meeting (Toronto, Canada) and the popularity of social media-based education in the field of emergency medicine.1,3,4,37 The high consensus threshold of ≥90% agreement may attenuate these biases, as substantial agreement across study participants was needed to endorse a quality marker.
The next steps should include assessing the views of the other stakeholders, such as different learner groups, a broad range of content producers and a diverse network of academic leaders. Ultimately, the data resulting from these consultations should contribute to the development of practical tools to help stakeholders assess the quality of such resources.
In conclusion, by identifying the quality indicators most important to health professions educators, this modified Delphi study provides 13 quality indicators that may help set standards for, guide the development of and improve the identification of high-quality medical education blogs and podcasts.
Main messages
Thirteen quality indicators achieved high consensus agreement (≥90%) among health professions educators.
Health professions educators value credibility, in the form of transparency and trustworthiness, as the most important domain in assessing the quality of blogs and podcasts.
As with health professions education resources of all forms, education experts value accurate, professional and audience-specific content in blogs and podcasts.
The incorporation of a traditional peer review process did not reach consensus as a quality indicator among health professions educators.
Current research questions
Quality indicators for blogs and podcasts have been identified by this international group of health professions educators. What do other stakeholders (eg, learners, content producers, academic leaders) value as markers of high quality, and how should differences be resolved?
Can stakeholders convert the identified quality indicators into a format that consistently facilitates accurate and timely assessment of quality?
Can these quality indicators be used to help academic leaders assess the value of digital scholarship?
Key references
Cadogan M, Thoma B, Chan TM, et al. Free Open Access Meducation (FOAM): the rise of emergency medicine and critical care blogs and podcasts (2002–2013). Emerg Med J 2014;31(e1):e76–7.
Thoma B, Chan TM, Paterson QS, et al. Emergency medicine and critical care blogs and podcasts: establishing an international consensus on quality. Ann Emerg Med. Published Online First: 25 March 2015.
Smith R. Scrap peer review and beware of “top journals.” BMJ Blogs 2010. http://blogs.bmj.com/bmj/2010/03/22/richard-smith-scrap-peer-review-and-beware-of-“top-journals”/ (accessed 8 Dec 2014).
Thoma B, Chan T, Desouza N, Lin M. Implementing peer review at an emergency medicine blog: bridging the gap between educators and clinical experts. CJEM 2015;17:188–91.
Brabazon T. The Google effect: googling, blogging, wikis and the flattening of expertise. Libri 2006;56:157–67.
Footnotes
Twitter Follow Michelle Lin at @M_Lin, Brent Thoma at @Brent_Thoma, N Seth Trueger at @MDaware, Felix Ankel at @FelixAnkel, Jonathan Sherbino at @Sherbino and Teresa Chan at @TChanMD.
Contributors All authors contributed by interpreting the collected data, drafting and revising the manuscript for important intellectual content, approving the final published version, and agreeing to be accountable for all aspects of the work. Additionally, ML, BT and TC made significant contributions to the conception and design of the work. ML, TC and NST contributed by acquiring the data.
Competing interests NST receives a stipend for his work as the Social Media Editor for Emergency Physicians Monthly (news magazine).
Provenance and peer review Commissioned; externally peer reviewed.