Tackling Big Data Assessment in Online Programs

Concurrent Session 1
Streamed Session

Brief Abstract

Programs engaged in continuous, comprehensive assessment of program outcomes often find that available solutions are in their infancy, requiring manual review and extraction of data for reporting. This conversation will dive deeper into innovative possibilities for examining and reporting assessment data, incorporating strategic-planning best practices and automation.

Presenters

Dr. Richardson is a higher education leader with more than 13 years of experience teaching and developing curriculum for traditional and online learning environments focused on applied learning. He has held several leadership roles during his tenure, managing diverse teams across the university. His research interests focus on healthcare, human resources, competency-based education, and curriculum design. Dr. Richardson brings to UNCW extraordinary experience with market analyses, program and course design, and the implementation of new academic programs and specializations. He has documented proficiency in delivering results related to applied learning, accreditation initiatives, and regulatory analyses. He was appointed a CAHME Fellow in 2018.

Additional Authors

Sheri Conklin is the Director of e-Learning at the University of North Carolina Wilmington. She collaborates with colleagues to design faculty professional development for online and web-enhanced course design and delivery. She also disseminates information on pedagogy for online and web-enhanced courses to faculty through print and web media, as well as by hosting socials. Sheri has taught both web-enhanced and fully online courses at UNCW for the last eight years. Her prior experience includes roles as an instructional designer, an e-Learning instructional support specialist, and a special education teacher and department chair. Sheri earned a Bachelor of Arts in Art History and a teaching certificate in special education. She recently completed her Ed.D. at Boise State University.

Extended Abstract

Direct assessments are disrupting higher education, particularly in innovative programs that want to leverage technology to gather, assess, visualize, and utilize learner data for quality improvement and evidence-based decision making (Hansen, 2016). This is critically important for larger institutions and programs with large, diverse enrollments dispersed across multiple delivery systems. Regardless of program composition, these entities remain accountable for reporting transparent outcomes for all learners to regulatory or accrediting bodies. As a result, many institutions are moving toward online data collection mechanisms such as e-portfolios (Gruppen, ten Cate, Lingard, Teunissen & Kogan, 2018), which have historically been considered useful for collecting, organizing, and managing large volumes of data to support educational outcomes (McEwen, Griffiths & Schultz, 2015). What programs find is that collecting, organizing, and managing the data is relatively easy if they have started with a strategically aligned curriculum map and assessment plan. The challenges they encounter arise most often in the data analysis stage of the continuous improvement cycle. Despite the rapid expansion of interest in innovative assessment modalities, few assessment systems that can effectively and efficiently produce the desired data analysis have been described (Dannefer & Henson, 2007). This is likely because available solutions remain in the infancy stage of the technological life cycle (McEwen et al., 2015), or perhaps because they lack foresight into today's big data needs.
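To illustrate the strategic alignment described above, the sketch below shows how a curriculum map might be represented as tabular data so that every score collected later is already tagged to a program outcome. This is a minimal sketch only: the course, assessment, and outcome identifiers are hypothetical, and pandas is assumed purely as a convenient analysis library.

```python
# A minimal, hypothetical curriculum map: each row links one assessment
# in one course to a program outcome, so every score collected later is
# already tagged for reporting.
import pandas as pd

curriculum_map = pd.DataFrame(
    [
        {"course": "HCA 501", "assessment": "case_study_1", "outcome": "PO1"},
        {"course": "HCA 501", "assessment": "reflection_1", "outcome": "PO2"},
        {"course": "HCA 620", "assessment": "capstone_rubric", "outcome": "PO1"},
        {"course": "HCA 620", "assessment": "capstone_rubric", "outcome": "PO3"},
    ]
)

# With the map in place, coverage checks become simple queries:
# how many distinct assessments provide evidence for each outcome?
coverage = curriculum_map.groupby("outcome")["assessment"].nunique()
print(coverage)
```

Keeping the map as plain tabular data is one reason collection and organization stay easy: the alignment travels with every record, so reporting does not require manual re-mapping later.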

There is a recognized need for a solution that automates both formative and summative analysis of large volumes of data from multiple sources (e.g., student reflections, faculty assessments, and individualized and holistic plans of learning) that are content specific, tied to professional outcomes, or competency based. Understanding the related challenges will enable participants to better evaluate available solutions for generating evidence of transparent learner achievement of academic and professional goals. Collaboration and conversation are needed to address current challenges and identify opportunities for improvement.
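To make the idea of automated analysis concrete, here is a hedged sketch of a summative roll-up across two of the sources named above (faculty assessments and student reflections). The student identifiers, competencies, and scores are invented for illustration; a real pipeline would read e-portfolio exports (e.g., CSV files via pd.read_csv) rather than in-memory tables.

```python
# A hedged sketch of automated summative analysis across multiple
# sources; all identifiers and scores here are invented for illustration.
import pandas as pd

# Faculty rubric scores, tagged with the competency each evidences.
faculty_scores = pd.DataFrame(
    [
        {"student": "S1", "competency": "C1", "score": 3, "source": "faculty"},
        {"student": "S1", "competency": "C2", "score": 4, "source": "faculty"},
        {"student": "S2", "competency": "C1", "score": 2, "source": "faculty"},
    ]
)

# Student self-assessment scores drawn from reflections.
student_reflections = pd.DataFrame(
    [
        {"student": "S1", "competency": "C1", "score": 4, "source": "self"},
        {"student": "S2", "competency": "C1", "score": 3, "source": "self"},
    ]
)

# Combine the sources and roll up to a per-competency summative view,
# keeping faculty and self-assessment evidence side by side.
combined = pd.concat([faculty_scores, student_reflections], ignore_index=True)
summary = (
    combined.groupby(["competency", "source"])["score"]
    .agg(["mean", "count"])
    .round(2)
)
print(summary)
```

The same grouping logic extends to formative use: filtering to a single student before aggregating yields an individualized progress view rather than a program-level one.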

This conversation session will center on tools and solutions currently employed in participants' programs. The conversation will be structured around three guiding questions: How do you currently collect and report large-scale summative data? What would an ideal solution look like, given current challenges? What technologies have been employed to overcome these challenges, and what lessons have been learned? Attendees will be interactive contributors throughout the session. Information will be collected, organized around the questions, and shared with attendees at the end of the session.