Using the USA2 Framework to Make Wise Educational Technology Decisions
Concurrent Session 10
Many institutions want to adopt innovative tools and technologies, but often overlook key criteria in making their decisions. Whether you are an administrator, instructional designer, information technology specialist, or instructor, join us to explore our USA2 framework to help you evaluate the critical characteristics of possible technology-driven innovations.
Instructional technology has tremendous potential to provide value in higher education. Reasons given for exploring instructional technology include the ability to better engage students in the learning process, as well as to help faculty carry out their teaching responsibilities more effectively, including creating content, interacting with students, and assessing student mastery (Chuntao, 2011). There is a plethora of products and technologies on the market, with new options constantly becoming available. New tools and technologies may be adopted for a number of reasons. An institution may decide that a new tool is needed to meet a new initiative or to address a desire for innovation. An institution may seek out alternatives when stakeholders become dissatisfied with a currently used product, or when that product is no longer available. Sometimes a new tool is adopted because "everyone is doing it" or "it's cool," or because it came to a stakeholder's attention at a conference or vendor demonstration, not because it actually meets a current need. It can be easy to forget that, to be effective, instructional technology must provide benefits to teaching and/or learning; instructional technology should not be seen as an end in itself (Kershaw, 1996).
The 2016 National Survey of eLearning and Information Technology in US Higher Education (Green, 2016) reported that top IT priorities related to instructional technology include the ability to successfully assist faculty with integrating technology into their teaching, data security, and the provision of adequate user support. According to the results of this survey, these concerns have not changed much in recent years, and the level of success in addressing these priorities has not always been high. For example, in 2016, only 23% of faculty rated available training as excellent. Similarly, only 10% of students rated support as excellent. These figures may reflect another finding of the survey: only 25% of institutions indicated that they assessed the impact of instructional technology on the quality of instruction (Green, 2016). This seems to confirm that technology acquisition and support decisions may not properly reflect the needs of higher education students and faculty.
There may be many stakeholders when instructional technology decisions are made: Information Technology and finance departments, administration, faculty, students, instructional designers, and possibly others. Each of these groups has different concerns and priorities. The key questions are: Are all of their concerns being heard? Are their priorities being considered? At many institutions, technology acquisition and support decisions may be made by just a few individuals, without a process for seeking and evaluating input from the full range of potential users of the technology.
Many products that are initially adopted never achieve high levels of dissemination within the institution. Others are adopted, but are fairly quickly abandoned. A large proportion of technology adoptions fail to meet their intended outcomes. According to Kirschner, Hendriks, Paas, Wopereis, and Cordewener (2004), "While 87% of the projects' leaders noted 'improved quality of learning' to be an intended outcome of the project, only 30% reported this as an actual outcome." The reasons instructional technology fails to meet the needs of an institution are myriad, but we believe that many failures of dissemination, and many subsequent abandonments, stem from a failure to consider the proper criteria when making adoption decisions. It was through analysis of such failures of dissemination and abandonment that the USA2 framework was developed.
The USA2 framework can provide guidance to ensure that important factors are considered when making technology adoption decisions. These criteria cover the most important considerations that will influence whether adoption will be successful, both initially and over time. These criteria are: Utility, Security, Accessibility, Usability, Scalability, and Affordability. Descriptions of each of these criteria will be given during the presentation, as well as critical questions to ask related to each of the criteria.
Through case studies, we will illustrate both unsuccessful and successful technology adoptions, using the USA2 framework to evaluate the tools involved. During the presentation, we will show how the six criteria can be prioritized according to current needs, as well as how the degree to which each criterion has been met can be assessed. A weighted decision matrix will be used to show how the framework can help evaluate the overall likelihood that a tool or technology will meet the needs of faculty and students.
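To make the mechanics of a weighted decision matrix concrete, the sketch below scores two hypothetical tools against the six USA2 criteria. The weights, tool names, and scores are illustrative assumptions for this example only, not the presenters' actual data; an institution would substitute its own priorities and ratings.

```python
# Hypothetical weighted decision matrix for the six USA2 criteria.
# All weights and scores below are illustrative assumptions, not real data.

# Institutional priority for each criterion (weights sum to 1.0).
weights = {
    "Utility": 0.25, "Security": 0.20, "Accessibility": 0.15,
    "Usability": 0.15, "Scalability": 0.10, "Affordability": 0.15,
}

# How well each candidate tool meets each criterion, rated 1 (poor) to 5 (excellent).
scores = {
    "Tool A": {"Utility": 4, "Security": 3, "Accessibility": 5,
               "Usability": 4, "Scalability": 2, "Affordability": 3},
    "Tool B": {"Utility": 3, "Security": 5, "Accessibility": 3,
               "Usability": 3, "Scalability": 4, "Affordability": 4},
}

def weighted_score(tool_scores, weights):
    """Sum of each criterion rating multiplied by its weight."""
    return sum(weights[c] * tool_scores[c] for c in weights)

for tool, ratings in scores.items():
    print(f"{tool}: {weighted_score(ratings, weights):.2f}")
```

Changing the weights to reflect a different set of institutional priorities (for example, raising Security after a data breach) can change which tool comes out ahead, which is exactly the prioritization discussion the framework is meant to support.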
Presenters will also discuss the general topic of instructional technology adoption, focusing on how information from the USA2 evaluation can be used to help promote successful adoption of the technology. Knowledge of strengths and limitations of the tool, as determined through the USA2 evaluation, can provide helpful guidance when strategizing adoption and support.
Following the presentation, session attendees will be asked to spend five minutes in individual reflection, thinking about their own prioritization of the six criteria, as well as how the USA2 framework can help them.
This individual reflection period will be followed by a question-and-answer session. If attendees have questions, they will be addressed first. If not, presenters will guide the participants through evaluation of a tool, using the USA2 framework, with participants sharing their perspectives on the relative importance of each of the criteria, as well as their perception of how well the tool meets those criteria.