Welcome to Digital Tea with Mr. E., where we discuss what's brewing in the world of educational theory. This blog is the seventh of an eight-episode series exploring T-CAP: A Student Learning Model, and its fit for the modern digital classroom. As I prepare to write this post I am indoors in my basement, and excitement is brewing for the upcoming conclusion to a Star Wars saga that started in 1977. Forty-two years after A New Hope, the final chapter, The Rise of Skywalker, debuts in December 2019. Speaking of brewing, in the center of the photo is a cup of Twinings Lady Grey Tea in an iridescent Star Wars mug. Remembering back to my childhood, it was my mother who got me into Star Wars, quite literally, as she took me to see the original Star Wars the week after my first birthday. My favorite memory from third grade was being called unexpectedly out of school for a doctor's appointment, only to find out my mom was taking me to the opening show of Return of the Jedi. In a tribute to Mom's favorite movie series, I put her favorite Star Wars characters, Padme Amidala, Princess Leia Organa, C-3PO and R2-D2, on the left side of the mug. Mom was a big fan of the female spunk and the comedic droids in the Star Wars movies. The right side features a continuation of that theme with the most recent Star Wars heroes that I am sure Mom would enjoy: Jyn Erso, Rose, Rey and BB-8. In loving tribute to my mother, Joann Everett, English teacher, author of poetry books, founder of the Bensalem Women's Writing Guild, and Star Wars fan (1950-2013).
In episode six of this blog series I shared the results and analysis of the inter-rater reliability study. In Figure 2, you can see that the internal consistency of the T-CAP instrument scored very well, with a Cronbach's alpha of 0.804. Also scoring well, with good alphas, were the T-CAP domains of content learning (0.864) and artifact production (0.743). The one outlier was the technology learning domain, which had a lower internal consistency alpha of 0.461 on the -1 to +1 scale (Zaiontz, 2015). You can also see that the technology learning domain had the largest variance, with a standard deviation of plus or minus 0.800 on the 5-point Likert scale (Data Star, 2019).
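For readers who want to see how those alpha values are actually calculated, here is a minimal sketch of Cronbach's alpha in Python. To be clear, the function name, the NumPy dependency, and the toy ratings matrix are my own illustrative assumptions for this post; they are not the actual T-CAP study data or analysis code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a ratings matrix.

    scores: 2-D array with one row per rated artifact/observation and
    one column per rubric item (e.g., the items in a T-CAP domain).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: five artifacts rated on three items of one domain (1-5 Likert)
ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 3, 3],
    [4, 4, 5],
])
print(round(cronbach_alpha(ratings), 3))
```

As a rule of thumb, values above roughly 0.7 are generally read as acceptable internal consistency, which is why the 0.461 for the technology learning domain stood out.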
With the results of the inter-rater reliability and validity studies now in hand, I was prepared to add the procedures, data and analysis sections to the research paper. I must admit that after five months of development and planning it was exciting to finally write this portion of the paper. The scientist in me allowed me to craft these sections with verbal and mathematical clarity and evidence. T-CAP is now starting to feel like a real theory. Dr. Desantis, my graduate advisor, had a chance to review these new sections; he liked the development of the research section and indicated his belief that the paper has taken additional steps on the road to getting published. Not surprisingly, most of the recommendations to fix something had to do with APA expectations. I continue to learn the ways of professional APA formatting in this project, and I do understand that the APA way is a necessity in the publication world.
I then shared the reliability study results with the reliability team and asked if they would be interested in meeting to discuss the technology domain and hold counsel on whether it needed to be adjusted. Five of the six members of the reliability team convened on November 11 during professional development time. Three of the members indicated that the technology domain was the hardest part of the T-CAP instrument to use. One member said he thought this was because technology changes rapidly and is therefore harder to judge from experience. Another member said that what she thought of as a good use of technology in her class five years ago is now the expected use of technology in classes five grades below the one she teaches. Another member pointed out that he had to infer the proficient use of coded equations in a spreadsheet because I provided the student work in PDF format. The others then admitted they didn't even realize the student had coded equations in the spreadsheet due to the static nature of the PDF. The technology learning specialist on the team recommended that providing exemplars showing different ratings, with explanations, would be helpful if I repeat this study in the future.
Our team discussion then transitioned to investigating whether the technology learning rubric needs language updates to be clearer. This led to a group realization that we are all at different levels of proficiency in both our use and recognition of technology skills. The lightbulb moment then illuminated the whole team: this is about the SAMR model. In 2009 Dr. Ruben Puentedura introduced the SAMR model for assessment of technology integration. SAMR is an acronym that stands for Substitution, Augmentation, Modification and Redefinition (Figure 3). The substitution and augmentation levels serve to enhance learning with technology. Conversely, the modification and redefinition levels revolve around using technology for transformative applications (Puentedura, 2015). This led the group to reject the notion that the technology learning instrument needed to be reworded. The decision was founded on two points: first, all parts of the technology learning instrument passed the validity study; second, the SAMR connection seemed to be a compelling reason for the greater variance in the technology learning instrument.
You have got to love the authentic scientific method at work in a room of science and technology professionals. In the scientific method, when considering the conclusion, a new question to investigate often arises from what you have learned in the initial experiment. The reliability team discussion inspired me to propose a new investigative question: "Do raters at the same level on the SAMR model show statistically greater internal consistency when using the T-CAP technology learning instrument than all users of the rubric across all levels of SAMR?" I may have to investigate this question at some point. Personally, this feels like a very compelling investigation to pursue.
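If I do pursue that question, a first pass at the analysis could be as simple as grouping the rating rows by each rater's SAMR level and computing the technology-learning alpha within each group, then comparing those values to the alpha across all raters. The sketch below is purely hypothetical: the SAMR labels, the ratings, and the subgroup sizes are invented for illustration, and a proper comparison would also call for a formal test of whether the alphas differ.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha (same calculation as the earlier sketch)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return (k / (k - 1)) * (1 - scores.var(axis=0, ddof=1).sum()
                            / scores.sum(axis=1).var(ddof=1))

# Hypothetical data: each row is one rater scoring one artifact on three
# technology-learning items; samr_levels holds that rater's SAMR level.
samr_levels = np.array(["substitution", "substitution",
                        "modification", "modification",
                        "redefinition", "redefinition"])
ratings = np.array([
    [4, 3, 4],
    [2, 2, 3],
    [5, 4, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 3],
])

print("all raters:", round(cronbach_alpha(ratings), 3))
for level in np.unique(samr_levels):
    subset = ratings[samr_levels == level]
    print(level, round(cronbach_alpha(subset), 3))
```

A meaningful comparison would also need enough rated artifacts per subgroup for the alpha estimates to be stable; the two-row groups here are only to keep the example short.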
Episode VIII of this series will conclude the journal of my fall 2019 internship experiences, covering November 17 through December 10. In that time frame I will be writing the conclusion section of my T-CAP research paper. I also plan to present the T-CAP internship experience to my graduate cohort, and a modified version to my colleagues at a Greenwood faculty meeting. The eighth post will also feature a new brew of tea in a special Christmas mug. I look forward to sharing the progression of my T-CAP research throughout this internship. Feel free to leave comments below. You may also contact me privately at jeverett@greenwoodsd.org.
References
Data Star, Inc. (2019, November 2). How to interpret standard deviation and standard error in survey research. Retrieved from http://www.surveystar.com/startips/std_dev.pdf
Puentedura, R. (2015, October 14). SAMR: A brief introduction. Retrieved from http://hippasus.com/rrpweblog/archives/2015/10/SAMR_ABriefIntro.pdf
Zaiontz, C. (2015, April 3). Real statistics using Excel: Cronbach's alpha. Retrieved from http://www.real-statistics.com/reliability/cronbachs-alpha/comment-page-1/
Another exceptional reflection on your research process. I enjoyed reading about the 'meta' analysis of your instrument reliability with your fellow science teachers. Not only are you hitting above your weight by involving your students in your research but you are connecting other science teachers as well. You did a nice job of pragmatic analysis. Raters have varying interpretations of instructions. Some may not read the instructions carefully. Solid instruments include instructions that are close to fool-proof. This is hard to do. You are gleaning as much as you can from your participants. This will maximize the reliability of your instrument.