A New Tool for Online STEM Assessments

Concurrent Session 1


Brief Abstract

As STEM enrollments, grading burdens, and assessment integrity needs increase, an opportunity exists for a new product that helps address these concerns and aids instructors in creating high-quality assessments. Learn more about why we created a new tool for STEM assessments, featuring custom randomized problems that are auto-graded!

Presenters

Gavin Brown is an Application Developer at Purdue University. He is part of the Studio by Purdue team, which creates innovative teaching and learning applications to enhance college students' learning. The team's work has been featured by the New York Times, USA Today, CNET, and The Chronicle of Higher Education, and has won awards from the Wharton-QS Stars Awards 2014: Reimagine Education, the Center for Digital Education, the TechPoint Mira Technology Awards, and Campus Technology.
Leah supports and enhances teaching and learning at Purdue University through the effective pedagogical use of technology that is usable by all instructors and students. She works with faculty, staff, and students to identify, analyze, and explore the university's teaching and learning technology needs and to identify instructional gaps.

Extended Abstract

The existence of crowdsourced cheating sites, together with the increased adoption of online courses, has increased the need for solutions that strengthen assessment integrity. Integrity solutions for formative assessments are often lacking, or are cumbersome and time-consuming to implement, leading instructors to deweight formative assessments. By providing unique problems for each student, our product strengthens assessment integrity for both formative and summative assessments. Because each problem is unique, students' ability to engage in collaborative or crowdsourced cheating is reduced. While proctoring solutions are still needed for high-stakes summative assessments, unique problems can decrease the level of assessment security required.


Assessment integrity is one reason instructors may want to create their own custom problems. Though creating custom problems can be time-consuming and inconvenient compared to adopting publisher content, there are additional reasons instructors and institutions may want to do so. The first is OER: interest in open educational resources and OER initiatives is a growing trend in higher education, as instructors and institutions look to make education more affordable and improve student success rates. One piece of the OER puzzle that is often missing is assessment, and we believe our product can be that piece for STEM domains. Second, while publisher content may be considered good enough for formative assessments, instructors often want to create their own summative assessments, for reasons including assessment integrity and quality.


A third trend is the national increase in STEM enrollments paired with local decreases in TA funding, which intensifies instructors' grading burdens. While solutions exist that can help here, instructors will often substitute multiple-choice questions for numeric-response questions to reduce their grading load. By auto-scoring numeric responses, our product can reduce grading burdens without a reduction in assessment quality.
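To illustrate how per-student randomization and numeric auto-scoring can work together, here is a minimal sketch in Python. The seeding scheme, the kinematics problem, the parameter ranges, and the 1% grading tolerance are all illustrative assumptions, not the product's actual algorithm:

```python
import random

def generate_problem(student_id: str, seed: int = 2024):
    """Generate a unique numeric-response problem per student.

    Seeding the RNG with the student's ID makes each student's
    problem different yet reproducible for re-grading.
    (Hypothetical scheme for illustration only.)
    """
    rng = random.Random(f"{seed}:{student_id}")
    v0 = rng.randint(5, 25)          # initial velocity, m/s
    t = rng.randint(2, 10)           # elapsed time, s
    a = rng.choice([1.5, 2.0, 2.5])  # acceleration, m/s^2
    prompt = (f"A car starts at {v0} m/s and accelerates at {a} m/s^2 "
              f"for {t} s. How far does it travel, in meters?")
    answer = v0 * t + 0.5 * a * t**2  # d = v0*t + (1/2)*a*t^2
    return prompt, answer

def auto_grade(submitted: float, answer: float, rel_tol: float = 0.01) -> bool:
    """Score a numeric response, accepting values within 1% of the key."""
    return abs(submitted - answer) <= rel_tol * abs(answer)
```

Because the generator is deterministic in the student ID, the same student always sees the same problem, while different students see different parameters, which is what undermines answer-sharing without complicating grading.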


By developing a product that lets instructors create custom numeric-response problems that are unique for every student and auto-scored, we hope to meet these challenges and help students and instructors be more successful.


During this session, we will work with attendees to create, test, and analyze exemplary STEM problems using our product. Attendees will be guided through the creation of a high-quality problem. With the help of handouts and hands-on guidance, attendees will use randomization, combinations, and sample equations to create their problems. As a result of engaging with our session, attendees will:

  • Understand the assessment integrity benefits that our product provides

  • Identify the different options for problem randomization that our product supports

  • Establish paths for adoption of our product or similar products in their programs