Jonathan Everett

Episode VIII: T-CAP Conclusion

Updated: Dec 4, 2019


Welcome to Digital Tea with Mr. E., where we will discuss what’s brewing in the world of educational theory. This blog is the eighth and final episode of a series exploring T-CAP: A Student Learning Model and its fit for the modern digital classroom. As I write this blog, the first snow flurries of the year are falling in the wooded background beyond the window pane in the photo. The foreground features a cup of Tetley Classic USA Black Tea in a Grand Canyon mug with an American eagle in flight. Surrounding the mug are iconic stained-glass pilgrims and a cornucopia, creating a scene of American heritage at the time of Thanksgiving. At this point in the internship I would like to give thanks to my wife Stephanie, my principal Michele Dubiach, the members of the validity and reliability teams, my physics students and my graduate professor Dr. Josh DeSantis for the many discussions, encouragement and support I received while assessing the T-CAP model and assessment instrument.


This internship began with a research question: Can a model and assessment tool similar to TPACK, but focused on students, be developed to measure student technology and content integration with both validity and reliability? Throughout the internship, a thesis was tested: The T-CAP model and assessment tool will demonstrate both validity and reliability in defining student learning in the areas of content, technology and artifact production. Along the way there were enthusiastic student and staff support for participating in the research, numerous professional discussions and debates about theory and how to measure student performance, two experiments, and modifications to improve the assessment instrument. This internship now ends with a reflection on what was learned while exploring T-CAP and assessing its fit for the modern technology-supported classroom.


In the first internship activity, I introduced the T-CAP student learning model and research plan to the Greenwood middle and high school faculty. A memorable moment from the presentation was when I was interrupted with applause for including student choice elements in the artifact production dimension. It was at that moment I realized T-CAP could be something that other teachers would want to use in their classes regardless of the content they teach. Over the next week, 12 volunteers from all curricular departments expressed their interest in participating in the T-CAP validity and reliability studies. That level of participation represented over 25% of the 40-member teaching staff. During that same time frame, I explained the T-CAP model to my physics students and asked for their support in allowing their quests (end-of-unit differentiated assessments) to be used in the reliability study. With enthusiastic support, 16 of 17 physics students agreed to allow their work to be evaluated in the reliability study. Collectively, the T-CAP validity and reliability experiments were made possible through the enthusiastic contributions of both the Greenwood faculty and students.


Figure 1: T-CAP: A Student Learning Model

In the validity study, nine teachers assessed the T-CAP assessment instrument on a Likert scale of 1-5 for both dimensional alignment and student achievement alignment (Allen & Seaman, 2007). Statistical standards were put in place prior to the study to measure the degree of validity and hold the assessment instrument accountable for weak elements. A rubric item would only be deemed valid if it scored at least a 4.0 mean and a standard deviation no greater than 0.75 (Data Star, 2019). The validity study results revealed that all 15 assessment items on the instrument had mean scores of 4.56 or higher and standard deviations of 0.527 or less. Statistically speaking, all 15 items passed the validity test. Participant comments helped provide guidance to improve a few of the instrument items, which led to the creation of a revised T-CAP assessment instrument.
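For readers curious about the arithmetic behind those standards, here is a minimal sketch of how a single rubric item could be checked against the pre-set thresholds. The nine ratings below are invented for illustration; only the 4.0 mean and 0.75 standard deviation cutoffs come from the study.

```python
# Minimal sketch of the pre-set validity check for one rubric item.
# The nine Likert ratings below are invented; only the thresholds are from the study.
import statistics

MEAN_THRESHOLD = 4.0   # item must average at least 4.0 on the 1-5 scale
SD_THRESHOLD = 0.75    # raters must not disagree by more than 0.75 (standard deviation)

item_ratings = [5, 4, 5, 5, 4, 5, 4, 5, 5]  # hypothetical scores from nine teachers

mean_score = statistics.mean(item_ratings)
sd_score = statistics.stdev(item_ratings)   # sample standard deviation

item_is_valid = mean_score >= MEAN_THRESHOLD and sd_score <= SD_THRESHOLD
print(f"mean = {mean_score:.2f}, sd = {sd_score:.3f}, valid = {item_is_valid}")
```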


The reliability study featured staff participation from four high school science teachers, one elementary science teacher and one high school English language arts teacher. Inter-rater reliability is a research method designed to assess the degree to which multiple raters agree in their assessments using a common instrument (Allen & Seaman, 2007). In this case the T-CAP rubric was used by the six-member rater team to assess six physics quests. The reliability study was conducted to assess the inter-rater agreement among the six raters using a Cronbach's alpha and standard deviation based analysis. Cronbach's alpha is a statistical measure that determines the degree of inter-rater reliability on a scale of -1 to 1 (Zaiontz, 2015). The standard deviation analysis provided a second opinion by looking at the reverse statistic of rater variance. The physics quests represented a variety of end-of-unit projects to provide evidence of student learning. This allowed the T-CAP rubric to be applied under diverse combinations of content learning, technology learning and artifact production.
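As an illustration of the alpha calculation (not the actual study computation), the sketch below follows the standard Cronbach's alpha formula, treating each rater as an "item" and each physics quest as a "subject." The six-by-six score matrix is made up.

```python
# Illustrative Cronbach's alpha for inter-rater reliability.
# Rows are quests, columns are raters; all scores below are invented.
import statistics

def cronbach_alpha(scores):
    """Compute Cronbach's alpha treating each rater (column) as an item."""
    k = len(scores[0])                                  # number of raters
    rater_columns = list(zip(*scores))                  # one tuple of scores per rater
    rater_variances = [statistics.variance(col) for col in rater_columns]
    quest_totals = [sum(row) for row in scores]         # summed score per quest
    total_variance = statistics.variance(quest_totals)
    return (k / (k - 1)) * (1 - sum(rater_variances) / total_variance)

# Six raters scoring six quests on a 1-5 scale (hypothetical data)
example_scores = [
    [4, 5, 4, 4, 5, 4],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [2, 3, 2, 2, 3, 2],
    [4, 4, 4, 5, 4, 4],
    [3, 4, 3, 3, 3, 4],
]

print(f"Cronbach's alpha = {cronbach_alpha(example_scores):.3f}")
```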


The inter-rater reliability analysis was first performed on the rater-assessed placement of student achievement on the T-CAP model. Placement of student achievement scored an alpha of 0.804. This score places inter-rater reliability for student achievement placement on T-CAP in the very good range according to statistician Dr. Charles Zaiontz (2015). Furthermore, the individual T-CAP dimensions of content learning, technology learning, and artifact production were each assessed for inter-rater reliability. Content learning had an alpha score of 0.864, and artifact production had an alpha score of 0.743. These alpha scores fall in the very good and good ranges for inter-rater reliability, respectively. The only outlier alpha score was the 0.461 earned by the technology learning dimension. The secondary statistic for technology learning showed a ±0.800 standard deviation on a 1-5 Likert scale.
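To show what that "second opinion" looks like in practice, here is a hedged sketch that averages how widely the six raters' scores spread on each quest for a single dimension. The scores are invented; only the reported ±0.800 figure for technology learning comes from the study.

```python
# Hypothetical "second opinion" spread check for one T-CAP dimension.
# Rows are quests, columns are the six raters' scores; all values are invented.
import statistics

technology_scores = [
    [3, 4, 2, 4, 3, 5],
    [4, 5, 3, 4, 4, 3],
    [2, 3, 4, 2, 3, 3],
    [5, 4, 3, 5, 4, 4],
    [3, 2, 4, 3, 2, 3],
    [4, 3, 5, 4, 3, 4],
]

# Rater disagreement (standard deviation) per quest, then averaged across quests
per_quest_spread = [statistics.stdev(row) for row in technology_scores]
average_spread = statistics.mean(per_quest_spread)
print(f"average rater spread = ±{average_spread:.3f} on the 1-5 Likert scale")
```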


In a meeting after the reliability study, the reliability raters concluded that variance in technology skills and usage among the raters led to different interpretations of the students’ technology use. Put another way, veteran teachers are comfortable assessing content learning and artifact production through years of experience. However, the data shows there is less agreement when it comes to assessing technology learning. The reliability team hypothesizes that this is because the age of 1:1 student devices in the technology-supported classroom is a recent development. Over the past three years the teachers at Greenwood have been exposed to a rapid increase in new digital technology with the adoption of 1:1 student devices, a mandate to move all course materials to the Schoology learning management system, and the integration of the Google and Microsoft cloud-based application suites. This has left the staff at many different degrees of technology proficiency, as best described by the SAMR model of substitution, augmentation, modification and redefinition (Puentedura, 2015).


Much has been learned about T-CAP’s fit for the modern technology-supported classroom in this internship. First, the T-CAP model and assessment instrument passed both the validity standards and the inter-rater reliability standards. Statistically speaking, T-CAP is ready for classroom adoption in all core content areas at the middle and high school levels. This is interesting when you consider that content, technology and student projects have all been staples of education for many years. While content learned can confidently be measured through various assessments, the evaluation of projects with rubrics is less certain, and the determination of technology learning is even more nebulous because it tends to be an integrated feature. The T-CAP model and assessment instrument serves as a lens that can focus student learning at the intersection of content, technology and artifact production in a measurable way. T-CAP is grounded solidly in Bloom’s cognitive thinking model (Armstrong, 2017). The T-CAP model is further organized and structured using Shulman’s model of interconnected domains (1986). This allows T-CAP to be a useful assessment instrument for student learning in a technology-supported classroom in much the same way the TPACK model developed by Mishra and Koehler (2006) is useful for measuring teacher proficiencies in educational practice.


The results of the research allow me to accept my thesis: The T-CAP model and assessment tool will demonstrate both validity and reliability in defining student learning in the areas of content, technology and artifact production. Interestingly, though, the benefits of T-CAP adoption go far beyond the model’s ability to assess the interconnected domains of student achievement. The greatest benefit of T-CAP lies in providing a fertile assessment structure among familiar ideas to promote the genesis of new educational creations and best practices. Personally, I followed the reverse process. First I co-created the quest model of differentiated physics assessments, and then searched the research literature for a model to assess its worth. Upon finding no such model, I was inspired by Mishra and Koehler’s TPACK model of teacher integrated proficiencies (2006) to create a student-focused model that similarly measured student integrated learning. This inspiration laid the foundation for what ultimately became T-CAP. Going forward, I believe the T-CAP model and assessment instrument provides the structure to inspire other professional educators to engage in project-based learning, blended instruction, or the creation of new activities to facilitate differentiated learning in their technology-supported classrooms.


For the readers of this internship blog series, thank you for following the T-CAP research activities and the learning revealed through the research analysis. Going forward, my plan is to complete a professional research paper with the findings outlined above. I hope to present the T-CAP and Quest models at a professional conference and to seek publication of the models. One of the greatest elements of a technology-supported classroom is that it removes isolation by bringing down the traditional walls of the classroom, allowing good ideas to move through cyberspace to influence and inspire the creativity of others and to positively impact learning opportunities for students. Feel free to leave comments below. You may also contact me privately at jeverett@greenwoodsd.org.


References

Allen, I. E., & Seaman, C. A. (2007). Likert scales and data analyses. Quality Progress, 40(7), 64-65.


Armstrong, P. (2017). Bloom’s taxonomy. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/.


Data Star, Inc. (2019, November 2). How to interpret standard deviation and standard error in survey research. Retrieved from http://www.surveystar.com/startips/std_dev.pdf.


Forehand, M. (2005). Bloom's taxonomy: Original and revised. Emerging Perspectives on Learning, Teaching, and Technology, 8. Retrieved from https://www.d41.org/cms/lib/IL01904672/Centricity/Domain/422/BloomsTaxonomy.pdf.


Li, Y. (2016, November 28). How to determine the validity and reliability of an instrument. Retrieved from https://blogs.miamioh.edu/discovery-center/2016/11/how-to-determine-the-validity-and-reliability-of-an-instrument/.


Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: a framework for integrating technology in teachers’ knowledge. Teachers College Record, 108(6), 1017–1054.

Puentedura, R. (2015, October 14). SAMR: A brief introduction. Retrieved from http://hippasus.com/rrpweblog/archives/2015/10/SAMR_ABriefIntro.pdf.


Shulman, L. S. (1986). Those who understand: knowledge growth in teaching. Educational Researcher, 15(2), 4-14.


Zaiontz, C. (2015, April 3). Real statistics using Excel: Cronbach’s alpha. Retrieved from http://www.real-statistics.com/reliability/cronbachs-alpha/comment-page-1/.


1 Comment


Joshua DeSantis
Nov 25, 2019

" T-CAP is ready for classroom adoption in all core content areas at the middle and high school levels." I could not agree more. You have done an outstanding job of confirming the existence of TCAP and of designing and perfecting the rubric used to assess it. The key now will be the who, what, when, where, and why of how it diffuses. As we have discussed, I highly encourage you to share the model at conferences and in publication. Believe it or not, not every useful model diffuses. The ones that do have a champion (Think Bergman and Sams, Wiggins and McTige, etc.). You will need to be the champion for this in order to maximize the impact.


