Part A required me to draw up a guideline on sound assessment principles for colleagues without HPE training. I set out ten principles, combining established criteria (validity, reliability, feasibility, fairness, educational and catalytic effect, transparency, and acceptability) with DSR-inspired principles such as iterative development and dual contribution. I also added implementation strategies and examples to make the framework more practical. On first submission, I believed the guideline was comprehensive and well balanced.
Part B asked for a critique of a module’s assessment plan. I chose the Evidence and Information in Health Management module, on which I had recently taught. This was challenging because critiquing a colleague’s work did not feel natural, and I initially worried that my comments might come across as harsh or uncollegial. As I worked through it, however, I realised that honest, constructive critique required me to raise my own level of engagement. The exercise pushed me to read more broadly, apply the principles more systematically, and present recommendations that were rigorous but fair.
The peer feedback I received on Part A was affirming and constructive. My colleague wrote that the guideline was “well-crafted and synthesised” and noted that I had reached out effectively to a reader unfamiliar with DSR. She appreciated my introduction and the way I linked it to the discussion of principles. She observed, however, that more could have been said about assessment in the traditional sense before moving into DSR. She also highlighted that my first five principles were well explained and that my diagrams and examples made the work accessible. Her feedback reassured me that the structure and clarity were strong, while pointing out a gap I had overlooked.
Prof Archer’s comments as moderator provided a complementary perspective. She confirmed that the assignment was well researched and accessible but emphasised that I needed to slow down and define the principles themselves before moving into application. She reminded me that the brief was to provide a guideline for colleagues without HPE training, which meant that definitions and clear explanations were essential. She also advised me to cite my sources more explicitly to show how the argument was built from the literature. Finally, she observed that some of my recommendations and applications would be better suited to Part B than to Part A.
Another memorable aspect of Part B was creating a diagram to represent the assessment design and timeline of the module. Developing this infographic was unexpectedly enjoyable: it allowed me to tap into my creative side while producing a resource that clarified the structure of the module at a glance. It showed the contact period, formative assessment, post-contact period, and summative submission in a way that was far easier to interpret than text alone.
Reflecting on this process, I can identify several important lessons:
The first was recognising, once again, my tendency to assume familiarity. Both my peer and Prof Archer drew attention to this. My colleague encouraged me to expand on traditional assessment principles before applying them to DSR, while Prof Archer was even more direct, asking me to provide clear descriptions of the principles so that non-HPE colleagues could follow them. I realised that I often move too quickly into application because the concepts feel obvious to me, forgetting that others may not share that background. This habit, already evident in my teaching and curriculum reflections, became even clearer in the assessment module.
The second was the value of peer feedback. At first, I was uneasy about being critiqued by a colleague, but the feedback I received was both encouraging and precise. It affirmed the strengths of my work while identifying one or two important gaps. This balance helped me take the comments seriously without feeling defensive. I saw how constructive peer review can be both supportive and developmental, especially when delivered with clarity and respect.
The third was the formative role of Prof Archer’s moderator comments. Her insistence on definitions and explicit citation reminded me that accessibility and scholarly rigour must go hand in hand. I had written as though my audience shared my context, but the assignment required me to imagine readers without that grounding. By pointing this out, she made me reconsider not only the assignment but also how I communicate in professional settings. Her reminder about citation also sharpened my awareness of the importance of showing how my argument is rooted in the literature, rather than relying too heavily on personal context and ideas.
A further lesson came from the discomfort of critiquing a colleague’s module in Part B. Initially, I struggled with the idea of analysing and pointing out shortcomings in another person’s design. Yet doing so pushed me to adopt higher standards in my own critique. It made me more deliberate in how I framed feedback, more attentive to the evidence I used, and more committed to constructive recommendations. This discomfort turned into motivation. It helped me strengthen my capacity to engage critically yet respectfully with the work of others, a skill that is essential in academic and professional practice.
The creation of the infographic (Figure 1) provided another insight. It reminded me that assessment critique is not only textual. Visual representations can clarify relationships, expose misalignments, and make assessment plans more accessible. The act of designing the infographic also helped me refine my own thinking, because distilling the content into a single diagram forced me to identify the core elements and their relationships. This creative process connected strongly to earlier insights about accessibility. Where my written work risked assuming familiarity, the visual representation made the assessment plan clearer and easier to interpret for all readers.
Figure 1: Overview of Module 7 Assessment Plan - Evidence and Information in Health Management
Finally, the iterative use of feedback showed me the potential of structured tools such as the response table. Rather than treating examiner comments as hurdles, I began to see them as scaffolding for improvement. This aligns with Nicol’s (2021) description of internal feedback, where comparison and adjustment drive learning. By systematically recording how I responded to feedback, I made the process transparent to myself and could trace my growth across both parts of the assignment.
Looking ahead, I plan to carry these lessons into my professional development:
I will start by ensuring clarity and accessibility. Whether writing or teaching, I will avoid assuming shared knowledge. This means defining key terms and principles before moving into application, and ensuring that explanations are pitched at the level of my actual audience rather than at my own level of familiarity.
I will also adopt a more balanced approach to critique. My natural inclination is to identify weaknesses, but I have seen how important it is to recognise strengths as well. Both my peer and Prof Archer modelled this balance in their feedback. Going forward, I want to provide feedback in the same way: constructive, encouraging, and specific.
Another commitment is to strengthen constructive alignment in my own teaching. Part B reminded me how easily misalignments can creep in when outcomes emphasise higher-order competencies but assessments focus on recall. I will review my own modules carefully to ensure coherence between outcomes, activities, and assessments.
I intend to embrace collegial critique more actively. Rather than avoiding the discomfort of analysing a colleague’s work, I will treat these opportunities as professional dialogues that benefit everyone involved. I have already experienced how much I learned by engaging critically with another module, and I want to make this part of my ongoing development.
Feedback literacy will also remain a priority. Using response tables helped me to engage systematically with feedback. I would like to model this practice for students, encouraging them to record how they respond to comments and to see feedback as part of learning rather than as a judgement.
Finally, I will continue to apply a DSR lens to assessment design. Just as artefacts are refined through cycles of iteration, assessment frameworks can also be piloted, critiqued, and improved. This perspective has shifted assessment from something that felt intimidating to something I can engage with as an ongoing process of refinement. Incorporating creative visual artefacts such as infographics (Figure 2) will be part of this approach. They can make complex designs easier to understand and can also provide an enjoyable way of engaging with otherwise technical material.
Figure 2: Roadmap to improving Module 7 - Evidence and Information in Health Management
Boud, D. & Molloy, E. (2013). Feedback in Higher and Professional Education: Understanding it and doing it well. Routledge.
Carless, D. & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.
Harden, R. & Lilley, P. (2025). A traditional or an innovative approach to assessment: The Assessment PROFILE. Medical Teacher, 47(1), 50–57.
Nicol, D. (2021). The power of internal feedback: exploiting natural comparison processes. Assessment & Evaluation in Higher Education, 46(5), 756–778.
Norcini, J., Anderson, B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., … Roberts, T. (2011). Criteria for good assessment: consensus statement and recommendations from the Ottawa 2010 Conference. Medical Teacher, 33(3), 206–214.
Schuwirth, L. W. T. & Van der Vleuten, C. P. M. (2011). General overview of the theories used in assessment: AMEE Guide No. 57. Medical Teacher, 33(10), 783–797.
Tai, J., Ajjawi, R., Boud, D., Dawson, P. & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76, 467–481.