Welcome to Digital Tea with Mr. E., where we discuss what’s brewing in the world of educational theory. This blog is the fourth in an eight-episode series exploring T-CAP: A Student Learning Model, and its fit for the modern digital classroom. As I write this post on the shores of the Juniata River on a crisp 60 °F morning, I am sipping a cup of Twinings Earl Grey tea in my “Lighthouses of the Chesapeake” mug. The mug features drawings of the six Maryland lighthouses that light the way on foggy evenings along the shores of the Chesapeake Bay. The photo was taken on the southern third of the Juniata River in Millerstown, PA. The Juniata flows southeast until its waters join the Susquehanna River in Duncannon, PA. The Susquehanna continues southeast until it enters the northern mouth of the Chesapeake Bay at Havre de Grace, MD. The Chesapeake Bay flows south past the Cove Point and Drum Point Lighthouses, near where I owned my first house at the start of my teaching career in southern Calvert County, MD. I have illustrated this water journey on the map below:
In this fourth episode of exploring the T-CAP student learning model, I have the results of the validity study to share. A validity study measures the degree to which an instrument, such as the T-CAP rubric, aligns with the theory and model behind it (Li, 2016). As mentioned in the last blog post, I presented the T-CAP student learning model to the Greenwood faculty on September 10th. That same day I sent an email requesting participation in the validity and inter-rater reliability studies. Much to my amazement, 12 volunteers signed up to participate in the T-CAP research. On September 24th, I met with the validity team during professional development time.
Eight members of the validity team came to the meeting. Two volunteers were unable to attend, one due to a prior commitment and one due to a family emergency. Present were one ELA teacher, one math teacher, one social studies teacher, and four science teachers, along with the technology instructional coach. I treated the team to a tea bar set up with various tea options, hot water, sugar, and honey. The coffee drinkers brought their own warm brew, but the tea drinkers appreciated partaking in a tea of their choice.
During the meeting I discussed the T-CAP model and the requirements of the validity study. I went over the definitions of content learning, technology learning, and artifact production as they relate to T-CAP. We also discussed what each value on the 1-5 Likert scale represents for rating T-CAP validity (Allen & Seaman, 2007). I asked the raters to consider each element of the rubric in two ways: 1) Is there good alignment with the content, technology, or artifact domain? 2) Is there good alignment with the domain rating (distinguished, proficient, striving, basic, or novice)? We then had a good discussion about applying T-CAP across different content areas and grade levels. This discussion seemed to excite some of the validity raters, as they began to see how T-CAP could be implemented to improve student learning in their own classrooms. The meeting finished with each participant checking to make sure they could access the Google Form for the validity study.
I left the study open for one week. On October 2nd, I saw that 9 of 10 surveys had been completed. The only missing survey was from the staff member who had the family emergency and was still not back at school. I sent that staff member an email expressing support and let them know not to worry about the validity survey. The validity study results came out better than I could reasonably have expected. I have summarized the numerical data in three tables, broken down by domain. The values in the tables represent the mean and standard deviation of the ratings for each rubric narrative.
I am pleased to report that all 18 categories passed the statistical validity requirements of a mean of at least 4.0 and a standard deviation of 0.75 or less. There were no outlier categories, as every measure on the T-CAP rubric had a mean between 4.56 and 4.78. The standard deviations were also tightly grouped, between 0.441 and 0.527. In total, nine validity raters completed the survey. Collectively, this indicates that none of the 18 T-CAP measures need to be removed from the rubric.
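For readers curious how the numbers in the tables are produced, here is a minimal sketch of the mean and standard deviation check for individual rubric categories. The category names and ratings below are hypothetical; only the nine-rater panel, the 1-5 Likert scale, and the 4.0 mean and 0.75 standard deviation thresholds come from the study design (the sketch uses the sample standard deviation, though the study may have used the population formula).

```python
from statistics import mean, stdev

# Validity thresholds from the study design: a rubric category is retained
# when its mean rating is at least 4.0 and its standard deviation is 0.75 or less.
MIN_MEAN = 4.0
MAX_SD = 0.75

# Hypothetical ratings from the nine-member panel for two rubric categories
# on the 1-5 Likert scale; the real study covered all 18 T-CAP categories.
ratings = {
    "Content Learning - Distinguished": [5, 5, 4, 5, 4, 5, 5, 4, 5],
    "Technology Learning - Proficient": [4, 5, 5, 4, 5, 5, 4, 5, 4],
}

for category, scores in ratings.items():
    m = mean(scores)
    sd = stdev(scores)  # sample standard deviation (n - 1 in the denominator)
    keep = m >= MIN_MEAN and sd <= MAX_SD
    print(f"{category}: mean = {m:.2f}, SD = {sd:.3f}, "
          f"{'retained' if keep else 'flagged for removal'}")
```

With these hypothetical scores, both categories would print means near 4.6 with standard deviations around 0.5, comfortably inside the thresholds, which mirrors how tightly grouped the actual panel results were.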
In addition to the numerical data, 32 comments were made: 12 on content learning, 11 on technology learning, and 9 on artifact production. I used the inspiration and/or suggestions from 12 of the comments to improve the rubric. The improvements included better syntax, sharper word choice, consistent word choice across domains, and better placement of the Bloom’s Taxonomy verbs in the appropriate categories (Krathwohl & Anderson, 2009). I am very grateful to the survey participants for taking the time to study the rubric and leave thoughtful comments that helped me improve the T-CAP rubric.
The comments are summarized in tables below. The comments highlighted in gold were used to improve the T-CAP rubric.
I am now proud to present the revised and improved T-CAP Assessment Rubric, version 2.0. This is the form of the rubric that will be used in the inter-rater reliability study with student quest assessments from Physics:
Episode V of this eight-part series will journal my internship experiences from October 1st to 10th. In that time frame, I will present the T-CAP student learning model to my Physics students, discuss the research plan with them, and ask for their support in getting the student and parent waivers signed. This is a critical task in the internship, as having student quests to assess is necessary for the reliability study. I will also assign the first quest, on Kinematic Motion, to my Physics students. The blog post will share details on the quest model of assessment as it pertains to Physics. The fifth post will also feature a new brew of tea in a special mug. I look forward to sharing the progression of my T-CAP research throughout this internship. Feel free to leave comments below. You may also contact me privately at jeverett@greenwoodsd.org.
References
Allen, I. E., & Seaman, C. A. (2007). Likert scales and data analyses. Quality Progress, 40(7), 64-65.
Krathwohl, D. R., & Anderson, L. W. (2009). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. Longman. Retrieved from https://doubledumplings.com/a-taxonomy-for-learning-teaching-and-assessing-a-revision-of-blooms-taxonomy-of-educational-obje-ebooks-small-project-lorin-w-anderson-david-r-krathwohl.pdf
Li, Y. (2016, November 28). How to determine the validity and reliability of an instrument. Retrieved from https://blogs.miamioh.edu/discovery-center/2016/11/how-to-determine-the-validity-and-reliability-of-an-instrument/