Corrective Feedback at Modern Educational Institutions
05 January, 2021 - 12 min read
- 1. Introduction
- 2.1 Feedback systems at CODE University of Applied Sciences
- 2.1.1 Current Implementations of sub-assessment at CODE
- 2.1.2 Current Feedback via text communication at CODE
- 2.2 Educational Feedback Systems at Lambda School, Flatiron School, and Minerva School at KGI
- 2.2.1 Lambda School
- 2.2.2 Flatiron School
- 2.2.3 Minerva School at KGI
- 3. Recommendations for improved Educational Feedback Systems
- 3.1 Integration of sub-assessment
- 3.2 Resource allocation to Feedback Systems
Feedback loops are one of the most critical drivers of learning. Despite their importance, they receive little attention, especially in environments where learning happens through projects, without a rigid top-down curriculum of knowledge requirements and performance benchmarks.
Corrective feedback involves a learner, such as a university student, receiving informal feedback on their understanding or performance of various tasks from an agent such as a professor or peer1.
According to researchers, formative feedback should be non-evaluative, supportive, timely, and specific2. Feedback should be non-evaluative so that the student does not dismiss it as overly critical. It should also be supportive so as not to erode the learner’s motivation, instead signaling that improvement is possible and supporting them in the process.
The educational environment that provides such a feedback system is herein also referred to as a didactic feedback system, corrective feedback system, or simply a feedback system.
Several aspects of didactic feedback systems will be analyzed and compared with each other. The focus will be mostly on the rate, or timeliness, of feedback and less on the other three aspects mentioned above.
The following comparison will point out the lack of corrective feedback at modern educational institutions using the example of CODE University of Applied Sciences (CODE), argue for a better feedback system, and lastly point out recommendations for improving the system at CODE.
In his book How We Learn: The New Science of Education and the Brain3, the cognitive neuroscientist Stanislas Dehaene (2020, p. 184) wrote of his personal experience learning how to program:
I had been tinkering all this time without understanding the deep, logical structure of programs, nor the proper practices that made them clear and legible. And this is perhaps the worst effect of discovery learning: it leaves students under the illusion that they have mastered a certain topic, without ever giving them the means to access the deeper concepts of a discipline.
As a student of software engineering, I can relate to this anecdote. I can also add a memorable personal anecdote from my childhood. In my first and second school years, our teacher never corrected our spelling mistakes. She did this to encourage writing, but since we memorized the wrong spellings at such a formative age, many of my classmates and I struggle with some spelling mistakes to this day.
Even when the upsides of a self-directed4 learning system are clear to us, e.g., students benefiting from increased autonomy and intrinsic motivation, designers of such educational systems have to consider the downsides described here. Those downsides might also be why self-directed learning is not yet the dominant didactic system.
The proposition is that those downsides can be mitigated through better feedback loops for self-directed learning projects.
Researchers have explored the benefits of feedback as a critical component of learning5. When comparing different feedback system implementations, it is helpful to have a measure for comparison. One way to quantify feedback might be a “performance-to-feedback time ratio,” or P2FTR: the time from when a student makes an error, for example in an essay or project, to when someone gives feedback on that error so the student can improve the quality of the work or reduce the number of errors. With this measurement, institutional staff can evaluate the rate at which students receive feedback on their work and thereby quantify the corrective feedback provided to them. We will look at the implementation of different feedback systems and use this as a coarse measure for our comparison.
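To make the measure concrete, here is a minimal sketch of how a P2FTR could be computed from logged error and feedback timestamps. The function name and data layout are my own illustration, not an existing tool:

```python
from datetime import datetime

def average_p2ftr(events):
    """Average performance-to-feedback time, in days.

    `events` is a list of (error_time, feedback_time) pairs:
    when the student made the error vs. when feedback arrived.
    """
    delays = [(fb - err).total_seconds() / 86400 for err, fb in events]
    return sum(delays) / len(delays)

events = [
    (datetime(2021, 1, 4), datetime(2021, 1, 8)),   # office hour 4 days later
    (datetime(2021, 1, 5), datetime(2021, 1, 11)),  # mentor meeting 6 days later
]
print(average_p2ftr(events))  # 5.0 days on average
```

A real system would need to log when an error was actually made, which is the hard part in practice; assessments and reviews only approximate that moment.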
Besides the formal assessments at each semester's end, CODE University of Applied Sciences offers several implicit ways to get feedback on the work one produced or the knowledge one applied.
The students currently have two ways to get feedback. If they take the initiative, they can:
- Book a meeting with their mentor or project sponsor
- Book an office hour with a professor
The student has to take the initiative and contact the prospective feedback-giver via a communication channel. In most cases, this means waiting several days for the actual feedback, i.e., learning with a P2FTR of 1–7 days. It is also relatively unclear what the limit on the usage of office and mentor hours is and what would happen if everyone suddenly started booking lots of them.
Secondly, conversations with students about end-of-semester module assessments made clear that the written feedback students receive for those assessments on CODE's internal platform is not very detailed. In a survey of 15 CODE students (05–10.01.2021), only half answered that this feedback “helps them a lot to learn more,” and about 95% stated they would like to receive more feedback from professors.
These results suggest that students suffer from a high P2FTR and need to receive more feedback to reach their learning goals.
One can also assume that multiple small assessments accomplish a much lower P2FTR than a single assessment. In this scenario, the student does not work for several months and receive feedback only at the semester's end, but works a few weeks, receives feedback from the first sub-assessment, and subsequently improves his understanding. The survey shows that about one third of students would prefer this method.
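The intuition can be put into rough numbers. Assuming, purely for illustration, a 15-week semester with evenly spaced assessments, the worst-case feedback delay shrinks in proportion to the number of assessments:

```python
def worst_case_p2ftr(semester_weeks, num_assessments):
    """Worst-case feedback delay in weeks: an error made right after
    one assessment waits until the next evenly spaced one."""
    return semester_weeks / num_assessments

print(worst_case_p2ftr(15, 1))  # 15.0 — one final assessment
print(worst_case_p2ftr(15, 5))  # 3.0 — five sub-assessments
```

Five sub-assessments cut the worst-case P2FTR from a whole semester down to a few weeks, which is exactly the improvement described above.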
These suggestions are already partly implemented in a more open format by one professor (Frank Troll). He lets students cross off module requirements during office hours: if a student can demonstrate a requirement in an office hour, he will not require the student to showcase it again in the final assessment. Unfortunately, this opportunity is still not known to many students.
Additionally, an alternative assessment format could allow students to schedule their assessments along the feedback cycle described above. However, this is contingent on whether the module offers alternative assessments and whether the professor accepts this proposal.
Some professors are fast at giving feedback and suggestions via digital text communication; CODE uses the software product Slack for this purpose. Other professors are not willing to give feedback via text communication and instead ask for a formal office hour to review students' work or answer questions about the module. The extreme example of booking a 30-minute office hour to ask a 3-minute question makes this inefficiency clear. CODE should make sure professors feel comfortable giving feedback via text communication since, in many cases, it is faster and more time-efficient than a meeting. Text communication at CODE thus shows a clear way to improve CODE’s feedback system by reducing friction between students and feedback providers.
Other institutions with a similar cost structure have implemented better feedback loops than those at CODE.
Lambda School has a formal process for code reviews; students perform these reviews daily with their mentors and teachers6. In an email exchange (07.01.2021), a representative of the company described the rate at which students get feedback: “students are in Slack and Zoom all day, so they can ping others for help (their mentor, their learning group, etc.).”
She also described the array of options students have for seeking feedback: “they get feedback on their projects from mentors, instructors, and other students.”
Flatiron School has also put in place a sophisticated system for fast feedback loops, described in their podcast Pursuing Mastery7. A quote from their website8 explains their check-ins:
Because our curriculum builds cumulatively, instructors assess students at the end of each module to ensure students have a strong understanding of the concepts before moving forward to the next module. Reviews and check-ins occur throughout the program to ensure students have touchpoints with instructors and can ask questions in a one-to-one setting.
Note that Lambda School and Flatiron School can be categorized as programming bootcamps, which means they are not obligated to follow the rigid regulations a German university of applied sciences has to.
Minerva School at KGI has introduced a fast feedback system grounded in learning science. Minerva’s system is described, among other places, in the book Building the Intentional University: Minerva and the Future of Higher Education by Stephen M. Kosslyn and Ben Nelson9, and is presented in the following.
Minerva’s assessment system allows faculty to follow each individual student’s progress in applying concepts (p. 52). This structure makes their feedback highly personalized because faculty, i.e., the feedback-givers, know each student's progress and what the student is struggling with.
Furthermore, they expand on the type of feedback they provide through formative assessments:
We focus on formative rather than summative assessment (i.e., we prioritize low-stakes feedback provided in the early and middle stages of learning over high-stakes exam scores arriving at the end) and are intensely concerned with transparency and reliability in this process. (p. 59).
Moreover, students perform about 4–5 (arts and humanities, natural sciences, business, social sciences) or 6–13 (computer science) of these low-stakes formative assignments throughout the semester. This assessment schedule stands in stark contrast with CODE students, who perform only one assessment per module at the semester's end.
Finally, they also describe the frequency of feedback on students' performance as occurring at “unusually frequent intervals” (p. 132): students are graded after every class on their performance during it. The students benefit from this unusually frequent feedback, even if one student criticized that the feedback sometimes lacks quality because teachers provide the same feedback statement to multiple students (J. P., personal interview, December 19, 2020).
An example of the goal and process of such sub-assessments follows. Instead of letting a student program an app with hundreds of bad practices and anti-patterns and then submit it for assessment at the end of the semester, the assessor or feedback-giver would point out the anti-patterns in the first sub- or mini-assessment, at a point where the student has written about 20% of the application.
When implementing such a module schedule, preserving students’ flexibility might be an obstacle. With such a system, the student has to plan the time investment for studying and preparing for an assessment multiple times rather than only once.
In summary, sub-assessments appear to be a better didactic feedback system than a single assessment; even if preserving the flexibility of students' learning schedules could pose a constraint, they are a rich source of feedback and reduce tinkering.
The educational feedback systems examined in section 2.2 show that the feedback-giver does not have to be a professor, which might be more cost-effective. Feedback-givers can also be research or academic assistants, freelancers, or similar. In other words, investing resources in employing more feedback-givers does not have to increase the price of the institution's offering.
Secondly, feedback-givers may use a note-taking or CRM-like system to keep track of students' projects, skill levels, and progress. This minimizes the time required to recall the necessary details of a student's project and situation, further increasing the efficiency of the feedback system.
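As a sketch of what one entry in such a tracking system could look like (the class and field names are hypothetical, not an existing CODE tool):

```python
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    """One entry in a hypothetical feedback-tracking system."""
    name: str
    project: str
    skill_level: str  # e.g. "beginner", "intermediate", "advanced"
    notes: list = field(default_factory=list)  # dated feedback notes

    def add_note(self, date: str, text: str) -> None:
        self.notes.append((date, text))

record = StudentRecord("J. P.", "Mobile app project", "intermediate")
record.add_note("2021-01-08", "Anti-patterns in state handling; improving")
# Before the next office hour, the feedback-giver skims record.notes
# instead of reconstructing the student's situation from memory.
```

Even a lightweight structure like this lets a feedback-giver resume a conversation where it left off, which is what makes the frequent, personalized feedback described above affordable.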
Those combined implementations could help reduce the workload of professors and improve the feedback systems.
The introduction demonstrated the importance of a robust feedback system for an educational institution. The comparison of modern educational institutions and their feedback systems pointed out the weaknesses in the feedback system of CODE. Exploring other institutions like Lambda School, Flatiron School, and Minerva School at KGI made clear that they have implemented sophisticated feedback systems. Comparing CODE's feedback system with those analogous systems pointed out improvement opportunities for the didactic feedback system at CODE University.
Hattie, J. and Timperley, H. (2007). "The Power of Feedback," Review of Educational Research, 77(1). Available at: https://journals.sagepub.com/doi/10.3102/003465430298487 (Accessed: 10 January 2021).↩
Shute, V. J. (2008). "Focus on Formative Feedback," Review of Educational Research, 78(1). Available at: https://journals.sagepub.com/doi/10.3102/0034654307313795 (Accessed: 10 January 2021).↩
Dehaene, S. (2020). How We Learn: The New Science of Education and the Brain. Penguin Books Limited.↩
Knowles, M. (1975). Self-Directed Learning: A Guide for Learners and Teachers. New York: Association Press. Available at: https://eric.ed.gov/?id=ED114653 (Accessed: 17 December 2020).↩
Askew, S. (ed.) (2000). Feedback for Learning. Routledge/Falmer.↩
Allred, A. (2018). A Day in the Life of a Lambda School Student. Available at: https://lambdaschool.com/the-commons/a-day-in-the-life-of-a-lambda-school-student (Accessed: 18 December 2020).↩
Dagony-Clark, S. (2020). Pursuing Mastery [podcast]. Available at: https://anchor.fm/pursuing-mastery/ (Accessed: 18 December 2020).↩
Flatiron School (2020). Coding Bootcamp. Available at: https://flatironschool.com/career-courses/coding-bootcamp (Accessed: 18 December 2020).↩
Kosslyn, S. M. and Nelson, B. (eds.) (2017). Building the Intentional University: Minerva and the Future of Higher Education. MIT Press.↩