Tackling Big Data Assessment in Online Programs
Concurrent Session 1
Programs engaging in continuous and comprehensive assessment of program outcomes often find that available solutions are in their infancy, requiring manual review and extraction of data for reporting. This conversation will dive deeper into innovative possibilities for examining and reporting assessment data, incorporating strategic planning best practices and automation.
Direct assessments are disrupting higher education, particularly in innovative programs that want to leverage technology to gather, assess, visualize, and utilize learner data for quality improvement and evidence-based decision making (Hansen, 2016). This is critically important for larger institutions and programs with large and diverse enrollments dispersed across multiple delivery systems. Regardless of program composition, these entities remain accountable for reporting transparent outcomes for all learners to regulatory or accrediting bodies. As a result, many institutions are moving toward online data collection mechanisms such as e-portfolios (Gruppen, ten Cate, Lingard, Teunissen & Kogan, 2018), which have historically been considered useful for collecting, organizing, and managing large volumes of data to support educational outcomes (McEwen, Griffiths & Schultz, 2015). What programs find is that collecting, organizing, and managing the data is relatively easy if they have started with a strategically aligned curriculum map and assessment plan. The challenges they encounter often arise in the data analysis stage of the continuous improvement cycle. Despite the rapid expansion of interest in innovative assessment modalities, few assessment systems that can effectively and efficiently produce the desired data analysis have been described (Dannefer & Henson, 2007). This is likely a result of available solutions existing in an infancy stage of the technological life cycle (McEwen et al., 2015), or perhaps lacking foresight into today's big data needs.
There is a recognized need for a solution that automates both formative and summative analysis of large volumes of data from multiple sources (e.g., student reflections, faculty assessments, individualized and holistic plans of learning) that are content specific, related to professional outcomes, or competency based. Understanding the related challenges will enable participants to better evaluate available solutions for generating evidence of transparent learner achievement of academic and professional goals. Collaboration and conversation are needed to address current challenges and opportunities for improvement.
This conversation session will center on tools and solutions currently employed in member programs, structured around three guiding questions: How do you currently collect and report large summative data? What would be an ideal solution (one addressing current challenges)? What technologies have been employed to overcome these challenges, and what lessons have been learned? Attendees will be interactive contributors throughout the session. Information will be collected and organized around these questions and shared with attendees at the end of the session.