Tech Zero to Tech Hero: Creating a Faculty Self-Efficacy Instrument for Measuring Impact in eLearning Interventions

Concurrent Session 1
Streamed Session


Brief Abstract

This presentation will focus on the development of a pilot self-efficacy instrument for a faculty eLearning intervention. We will discuss the results of our implementation and demonstrate its application in professional development contexts. Additionally, we will engage the audience through interactive activities to brainstorm uses at their home institutions.


Jannath received her doctorate in Communication from the University of California, Davis, where she designed and implemented the department’s first hybrid introductory course. She holds an M.Ed. in Curriculum and Instruction from Lesley University. While earning her M.Ed., she served in AmeriCorps for two years teaching adolescents. Her role involved training, supervising, and supporting volunteers and teaching staff in effective instructional practices for the apprenticeship curriculum. Currently, Jannath provides instructional design and eLearning support at the Faculty Technology Center at California State University, Northridge.

Extended Abstract

Tech Zero to Tech Hero: Creating a Faculty Self-Efficacy Instrument for Measuring Impact in eLearning Interventions

Authors: Hillary Kaplowitz, Jannath Ghaznavi, & Krishna Narayanamurti


The goal of this project was to develop an instrument to assess evidence of impact of the design and implementation of the Faculty Technology Center’s eLearning program at California State University, Northridge (CSUN). The primary component of this program is an intensive week-long intervention that focuses on applying research-based, discipline-appropriate pedagogies and eLearning tools and strategies to improve teaching and student learning in technology-enhanced, fully online, and hybrid courses.

We know from informal data that our eLearning interventions help prepare faculty to incorporate technology into their teaching. However, our challenge was to provide a more formal assessment of the extent to which our eLearning interventions provided faculty with the training and means to successfully implement eLearning tools and strategies in the classroom.

Palmer and colleagues (2016) designed a faculty development intervention that challenged faculty to reduce reliance on less-effective didactic, content-centered approaches and to adopt active, learning-centered pedagogies and assessment methods, which have been found most effective at promoting student engagement and learning. To assess evidence of impact and increases in self-efficacy among instructors who participated in their intervention, they designed their own instrument (Palmer et al., 2016). They found that this type of faculty support dramatically shifted teacher beliefs along the learning-centered continuum and began to alter teacher practices, thereby improving student learning (Palmer et al., 2016).

Based on these findings, we decided to focus on self-efficacy as a measure for assessing our programming. Self-efficacy is concerned with perceived capability, or one’s judgment of one’s own capability (Bandura, 2006). Research shows that teachers’ efficacy beliefs affect how much effort they invest in teaching and the goals they aspire to (Woolfolk Hoy & Spero, 2005). Teachers with higher self-efficacy judgments also tend to be more open to new ideas and more willing to experiment with new methods to meet their students’ needs (e.g., Cousins & Walker, 2000). Consequently, we determined that a self-efficacy instrument could demonstrate the influence of our eLearning interventions directly on faculty self-efficacy and, therefore, indirectly on student achievement.


Following the standard approach to measuring self-efficacy in a particular domain of interest, we developed our own measure using Bandura’s (2006) recommendations on constructing self-efficacy scales. Based on the literature on teaching efficacy and student achievement, we developed a 14-item pilot pre- and post-survey in which participants were asked to rate their confidence in implementing several eLearning practices on a four-point scale (1 = Not confident, 4 = Very confident). The survey items were based on the learning objectives of the eLearning Institute (see Appendix A). The four-point rating scale represents the Strength of Efficacy (Bandura, 2006). Additionally, we consulted a CSUN faculty member with expertise in self-efficacy, who suggested ordering the items from easiest to most challenging to determine whether there is a drop-off pattern, or a breaking point at which confidence declines, corresponding to Bandura’s (2006) Levels of Efficacy.
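The drop-off idea described above can be made concrete: with items ordered from easiest to most challenging, a "breaking point" is simply the first item whose mean confidence rating falls below some chosen cutoff. The sketch below illustrates this; the mean ratings and the 2.5 threshold are hypothetical and are not taken from the actual survey data.

```python
# Sketch of locating a "breaking point" in items ordered easiest -> hardest,
# in the spirit of Bandura's (2006) Levels of Efficacy. All values invented.

def breaking_point(ordered_means, threshold=2.5):
    """Return the index of the first item whose mean confidence rating
    (on the 1-4 scale) falls below the threshold, or None if none does."""
    for i, mean in enumerate(ordered_means):
        if mean < threshold:
            return i
    return None

# Hypothetical mean ratings for items ordered easiest to most challenging
means = [3.8, 3.6, 3.4, 3.1, 2.4, 2.2]
print(breaking_point(means))  # → 4 (confidence first drops below 2.5 here)
```

The threshold is a judgment call; a sharper alternative is to look for the largest gap between adjacent means rather than an absolute cutoff.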

Faculty participants completed the pre-survey prior to the first day of the eLearning Institute and the post-survey as an activity on the last day of the Institute. The final sample consisted of 42 faculty participants who had applied for and received eLearning grants from the Faculty Technology Center.


The data allowed us to measure whether participation in the eLearning Institute affected faculty participants’ confidence in implementing eLearning tools and strategies. For each of the 14 items, the mean rating across all participants increased from the pre-survey to the post-survey (see Appendix B).
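The per-item comparison amounts to averaging the four-point ratings across participants for each item and taking the post-minus-pre difference. A minimal sketch follows; the item labels and ratings are invented for illustration and are not the actual survey data.

```python
# Hypothetical sketch of the pre/post comparison: for each survey item,
# average the 4-point confidence ratings across participants, then take
# the post-survey mean minus the pre-survey mean. Ratings are invented.

def item_gains(pre, post):
    """Return {item: post_mean - pre_mean} for matched rating lists."""
    gains = {}
    for item in pre:
        pre_mean = sum(pre[item]) / len(pre[item])
        post_mean = sum(post[item]) / len(post[item])
        gains[item] = post_mean - pre_mean
    return gains

# Invented ratings (1 = Not confident ... 4 = Very confident)
pre = {
    "Create digital educational content": [1, 2, 2, 1],
    "Ask for help from the Faculty Technology Center": [3, 4, 3, 4],
}
post = {
    "Create digital educational content": [3, 4, 3, 3],
    "Ask for help from the Faculty Technology Center": [4, 4, 3, 4],
}

# Largest gains first, mirroring the reporting in the text
for item, gain in sorted(item_gains(pre, post).items(), key=lambda kv: -kv[1]):
    print(f"{item}: {gain:+.2f}")
```

A negative gain for an item would surface in the same output, flagging the overconfidence scenario discussed below.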

The data revealed several key findings that can be used to improve our future programming. The following items showed the largest increases in average efficacy: “create digital educational content, such as videos, for my courses” and “match eLearning solutions to instructional challenges in my courses.” This suggests that faculty, on average, gained the most confidence in these two tasks, which can be interpreted as strengths of the programming. The smallest increase in average confidence related to seeking help from our Faculty Technology Center when designing eLearning, which suggests that faculty were either already familiar with the resources for seeking help or found it relatively easy to figure out.

Finally, two items showed an increase in average confidence but remained among the lowest rated overall: “understand my students’ familiarity and experience level learning with technology” and “help students who come to me with technical issues or problems with eLearning in my courses.” These findings point to areas of programming that could be strengthened by improving the parts of the Institute that address these objectives related to faculty connecting with students.

If the mean rating of an item were to decrease from the pre-survey to the post-survey, this might indicate either faculty overconfidence or an area of programming that needs to be strengthened. For example, if faculty initially felt confident about a task but their confidence decreased after completing the Institute, this could indicate that they had not realized the scope of what was involved.


Through this presentation we hope to share a strategy that others can adopt for assessing the impact of their own faculty development programming by measuring self-efficacy changes over time. During the session, we plan to invite participants to join an interactive brainstorming activity in which we can discuss potential uses and further developments. We found the results informative: they gave us insight into the areas where faculty gained confidence, indicated whether certain aspects of our programming needed to be strengthened or improved, and conveyed a sense of the overall impact of our eLearning intervention.


Bandura, A. (2006). Guide to the construction of self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (Vol. 5, pp. 307-337). Greenwich, CT: Information Age.

Cousins, J. B., & Walker, C. A. (2000). Predictors of educators' valuing of systematic inquiry in schools. Canadian Journal of Program Evaluation, 25-52.

Woolfolk Hoy, A., & Spero, R. B. (2005). Changes in teacher efficacy during the early years of teaching: A comparison of four measures. Teaching and Teacher Education, 21(4), 343-356.

Palmer, M. S., Streifer, A. C., & Williams-Duncan, S. (2016). Systematic assessment of a high-impact course design institute. To Improve the Academy: A Journal of Educational Development, 35(2), 339-361.

APPENDIX A: Survey Instrument

On a scale from 1 to 4, rate your confidence in implementing each of the following practices.

(1 - Not confident, 2 - A little confident, 3 - Somewhat confident, 4 - Very confident)

Survey Items:

  1. Learn new eLearning technology tools on my own
  2. Formulate a plan for integrating new eLearning tools/strategies in my courses
  3. Ask for help from an experienced colleague when designing eLearning
  4. Ask for help from the Faculty Technology Center when designing eLearning*
  5. Understand my students’ familiarity and experience level learning with technology***
  6. Help students who come to me with technical issues or problems with eLearning in my courses***
  7. Deliver course materials in an online format for my students to download
  8. Create digital educational content, such as videos, for my courses**
  9. Design eLearning activities and assignments for my courses
  10. Try a new eLearning tool/strategy in a face to face environment
  11. Try a new eLearning tool/strategy in an online environment**
  12. Overcome technical issues or problems related to eLearning tools/strategies in my courses
  13. Identify the appropriate eLearning tools/strategies to achieve my instructional goals
  14. Match eLearning solutions to instructional challenges in my courses**

Smallest average increase*

Largest average increase**

Lowest rated overall***

APPENDIX B: Graph of eLearning Self-efficacy Results


[Figure: eLearning self-efficacy pre- and post-survey results]