At the end of 2018, I posted an upbeat blog post on Team-Based Learning (TBL). Having now delivered two major units in TBL style, it is high time to post some of my reflections on the topic. Quick summary: I am still upbeat, but nothing ever goes perfectly first time around, and there are quite a few lessons I learnt which will hopefully be of use to anyone else introducing TBL to their teaching.
I should probably start by plugging two books – one by Jim Sibley and colleagues (I just discovered there is an associated website), and of course the original book by Larry Michaelsen, Arletta Knight and Dee Fink. Both provoked quite a few moments of “Oh – so that’s why that didn’t go so well!”.
I should also say that this blog post assumes the reader knows the basic structure of TBL – if not, there is a quick overview here, or, for those who prefer video, try this.
I summarise below my top 10 lessons learnt.
1. Don’t make the RATs too hard!
I spent some time creating some (I thought) wonderful multiple choice questions (MCQs) for my RATs. However, the average iRAT score was sometimes as low as 40%, which really demotivated the students. You should really be aiming for an average of AT LEAST 60% (and up to 75%).
In fact my questions were good (in my opinion!), but in hindsight many were actually application activities and would have been better placed there (remember that application activities can be presented as short – five minutes or less (!) – decision tasks phrased as MCQs, where the teams hold up coloured cards to indicate their answer). In particular, RAT questions should be:
- Unambiguous (1 right answer) – questions requiring subjective judgement would be better as application activities
- Clearly answerable from the pre-reading – again, questions requiring application of the knowledge are usually better as application activities (though a few are OK at the top end of a RAT).
- A RAT should be about 1/3 simple ‘have you done the reading’, 1/3 more complex ‘have you understood the reading’ and 1/3 ‘are you ready to apply the reading’.
- Remember that a RAT is about readiness for the application activities – so it does not have to cover all the pre-reading; it is there to ensure a ‘coding scheme’ (in the Bruner sense – more details in the Sibley et al book) is firmly in place.
2. Don’t make the RATs too frequent!
We had 11 weeks, 3 hours per week. We had an iRAT every week, which was just too much. You should do one RAT per TBL module, which is 6-10 hours of teaching – so 5 RATs would be about right (one every other week), leaving plenty of time for application activities, which is where the deep learning happens.
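For what it’s worth, here is the back-of-the-envelope arithmetic spelled out – a rough sketch in Python using our course numbers, and assuming a 7-hour module length simply because it sits in the middle of the 6-10 hour range:

```python
# Rough scheduling arithmetic for RAT frequency (illustrative numbers only)
weeks, hours_per_week = 11, 3
total_contact_hours = weeks * hours_per_week          # 33 hours in our case

module_length = 7                                      # assumed: mid-point of the 6-10 hour range
modules = round(total_contact_hours / module_length)   # ~5 TBL modules

print(modules)  # -> 5, i.e. about 5 RATs, roughly one every other week
```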
3. Don’t argue out appeals in class
I have discovered that appeals are a really important part of TBL – the teams have to justify their appeal by referring to the pre-reading. Appeals can be about justifying an alternate answer, OR challenging the question itself (as ambiguous). In both cases, the TEAM submits an appeal, and only those teams that appeal get the credit (if the appeal is accepted).
Appeals are submitted at the end of class (or shortly afterwards). We are considering having appeals submitted at the end of the RAP, before the application activities start.
In our case, we didn’t do any of this, and as you can imagine, having appeal-type discussions in class leads to an unnecessarily confrontational situation.
We are still figuring out how to facilitate a discussion to differentiate appeals from clarifications and will take advice from the TBL community on this.
4. Don’t have a class discussion on RAT questions
As a corollary of our RAT questions being too hard and too ambiguous, we ended up with class discussions on the RAT questions – in fact, team-to-team discussions are best saved for the application activities. The post-appeal activity (the mini lecture) should focus on identifying and clarifying problematic concepts. You can get teams to explain the right answer, but we found that having lengthy class conversations about RAT answers sapped the energy in the room and left some students disengaged.
5. 4S activities – specific choice
We made a classic ‘beginner’s error’ in that the outputs of our application activities were too complex – flipcharts, diagrams, flowcharts, lists and so on. In fact the ‘Specific’ bit of the 4S framework mandates a CHOICE made by the team – eg which is the highest priority risk, how much will this project cost, where should the factory be located, who is the key stakeholder…
These sorts of decisions can be communicated and compared very easily, and motivate the students to defend their choice.
More complex outputs can be incorporated into team or individual submissions (and we are still figuring out how to do this). Having said that, there is some place for the ‘gallery walk’ model, where students give feedback on each other’s more complex team outputs – eg by attaching post-it notes with commentary to a flipchart.
6. 4S activities – significant problem
In my mind, this is one of the hardest bits about TBL – creating activities that by their design involve the whole team.
In theory, a shift to making specific decisions should by its nature be more inclusive. Certainly this is what happened when I introduced (a few) application activities of this form (eg “what is the cost for this project phase?” or “what is the most significant risk?”).
Students tended to prefer working with a spreadsheet or other electronic output. However, we found this lends itself to splitting up the task, or to ‘take-over’ by a dominant team member – both of which we seek to avoid.
Obviously having a decision output will circumvent some of this. For complex outputs (which were the majority of ours; they should be the minority) we found that flipcharts involved the whole team more effectively, even though students didn’t always appreciate this way of working.
7. Grading application activities
There is an interesting debate about this in the TBL community.
We graded the activities. We felt this would reward students for their efforts ‘on task’ and would generate buy-in for application activities.
On the other hand, some TBL practitioners (including Jim Sibley) advocate keeping application activities formative, as it allows rich discussion of complex ambiguous cases – I would imagine it also encourages students to ‘take risks’ with creative solutions that they might not have ventured otherwise.
Of course it is quite difficult to grade a ‘decision’. So if these application activities are graded, some way of marking the decision process is required (verbal or written). In our case we collected in the (complex) team outputs, but one could imagine collecting worksheets or a short 100-word justification of a team decision. Post-class assessment outputs would be possible, but it seems to me this would go against the TBL principle of keeping team work in-class to eliminate scheduling conflicts.
We are still considering the right approach here.
8. Peer Evaluation is important
Both the TBL books mentioned are insistent about the importance of accountability, and strongly recommend summative peer marking. There are, broadly speaking, two approaches – the ‘absolute’ mark, where peer evaluation contributes say 10% of the mark, and the ‘multiplier’, where peer evaluation raises (or lowers) the group’s mark accordingly. The absolute method requires a decision to be made about the ‘benchmark’ score (the score given when all team members contribute equally).
In my first TBL course, I used the multiplier method, with an additional mark for the quality of the feedback (the Koles method) – which worked well, but was quite a lot of work. In addition, I needed to cap the multiplier (otherwise students with very good feedback could get over 100%).
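To make the two approaches concrete, here is a minimal sketch in Python – not the exact Koles scheme, just an illustration of the arithmetic, where the 10% weighting, the 10-point benchmark and the 100% cap are all assumed numbers for the example:

```python
# Illustrative sketch of the two peer-evaluation approaches described above.
# The 10% weighting, 10-point benchmark and 100% cap are assumptions for the
# example; this is not the exact Koles scheme.

def absolute_method(team_mark, peer_score, peer_weight=0.10, benchmark=10):
    """Peer evaluation contributes a fixed slice (here 10%) of the final mark."""
    return (1 - peer_weight) * team_mark + peer_weight * 100 * (peer_score / benchmark)

def multiplier_method(team_mark, peer_score, benchmark=10, cap=100):
    """Peer evaluation scales the team mark up or down around the benchmark score,
    capped so that a very generous peer score cannot push a student over 100%."""
    return min(team_mark * (peer_score / benchmark), cap)

# For a team mark of 70% and a peer score of 11 (slightly above the benchmark of 10):
print(round(absolute_method(70, 11), 1))    # -> 74.0
print(round(multiplier_method(70, 11), 1))  # -> 77.0
```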
The books strongly recommend a formative round, which I implemented, and it certainly seemed useful. There were quite a few surprises among the student cohort…
In the second TBL course, I just used the formative round – but will upgrade this to both formative and summative next year.
It seems to me important that (i) students learn to give constructive feedback and (ii) students are given a chance to reflect and improve following formative feedback.
These benefits add value to the more obvious (and important) accountability effects of the quantitative scores.
9. Get feedback from the whole cohort
I implemented a weekly survey for feedback on the course – although I got some very useful feedback this way, it was not representative of the whole cohort. So we had some surprises in the end-of-term evaluations – for example, students requesting a mid-point “comfort break” in the application activities, a request that would have been easy to accommodate had we known about it earlier.
So my take-home message is to create a more inclusive way of collecting feedback – perhaps in team folders? – at some suitable stage in the course, and to reflect this back to the students.
To reiterate – it was not enough just to give students an opportunity to give feedback; we need to think about how they give feedback and make sure that all students do so.
10. Take the students on the journey
Finally, one piece of advice I got in our second TBL course was that the course team had ‘been on a TBL journey’ but the (new) cohort had not. Therefore we needed to support them through the transition to the new and unfamiliar way of working.
One way of doing this is to finish each ‘module’ (6-10 hours of teaching) with a wrap-up that reinforces the rationale for teaching this way and demonstrates to students how much they have learnt along the way. I also encouraged (well, actually mandated, as it was graded) students to reflect on their learning. Team folders (containing team labels, voting cards, appeals forms etc) are also a subtle reminder to students (who have worked hard on their pre-reading) that you have also worked hard to prepare for class!
In summary:
- RATs should ‘test at the table of contents level’: unambiguous, not too hard and not too frequent.
- Save class discussion for the applications
- Use appeals as part of the Readiness Assurance Process (pre-reading, iRAT, tRAT, appeals, clarification/mini lecture).
- Application activities should be
- ‘specific’ (choice)
- ‘significant’ (whole team)
- aligned with assessment, but not necessarily summatively graded
- Peer evaluation is key for both accountability and skills development
- Take students on the journey – and make sure you listen to them
Hope this helps with your own TBL!
A great reflection on your experience implementing TBL in your courses. Many of these same issues I also experienced and still do. Pitching the RATs right is difficult and changes with every cohort it seems to me. Approaching them as a reading quiz helps, but then two or three RAT questions do need to be constructed such that some team discussion is required. Almost App-like, but not quite. Otherwise, the team phase of the two-stage test becomes perfunctory. Ditto with the Apps – there is a Goldilocks position for Apps. I try to approach Apps as suggested by Vygotsky. They need to be in students’ zone of proximal development – too difficult to solve alone, but possible to be solved with the wisdom of the team. One of the mistakes I initially made with Apps was having the answer be a written response. It really needs to be a specific response: A or B. Otherwise members of the team are twiddling their thumbs while the one scribe writes out the team’s answer.
Keep up the good fight!
Many thanks for your insightful comments, Neil. Particularly on the RATs. And in terms of ZPD, well, Vygotsky was one of the first learning theorists I read about, and his ideas instantly chimed with my experience. In terms of the applications – yes, this has been my experience. I am hoping that a *very short* written response (a few words) would be OK, but anything more than that gets too complex to compare and discuss. That’s not to say there isn’t a place for complex, reasoned, well presented WRITTEN outputs – but they will form part of the summative, individual assessment.