The Research Methodology module provided an opportunity to bring together the theoretical foundations established in earlier modules. Having completed research degrees before, I was familiar with protocol development, but this module offered something different: a confluence of my previous work in Design Science Research, my growing interest in AI's role in education, and my developing understanding of Health Professions Education scholarship. I chose to focus on Design Science Research faculty experiences with artificial intelligence integration in South African higher education, a topic that sits at the intersection of my professional interests in public health, digital health, educational technology, and Health Professions Education.
The module included two assignments: a literature review as the formative assessment and a research protocol as the summative assessment. Contributions to the online discussion were also assessed, which I found valuable for testing my thinking.
Early in the module, I posted my provisional approach to the online forum, seeking feedback on whether I was on the right track. I used the SPIDER framework for qualitative evidence synthesis to structure my research question and Saunders' research onion to map my methodological choices, creating a diagram of my positioning within the onion (Figure 1). Working through these frameworks publicly forced me to be explicit about decisions I might otherwise have left vague, and I was pleased to learn that these contributions were viewed positively.
Figure 1: The ‘research onion’ for this research project, selections underlined. Source: adapted from Saunders et al. (2019)
Looking back, I am relieved that I had intentionally maintained a consistent theme throughout my coursework. This decision meant I could reuse artefacts such as diagrams and concept mapping tables, which not only saved time but also allowed me to refine my thinking on a relatively new topic. The feedback ecosystem diagram I had developed during the Teaching and Learning module (Figure 2) found its way into both assignments and became a visual anchor for explaining how AI disrupts traditional DSR feedback processes.
Figure 2: Feedback ecosystem for learning within the Design Science Research paradigm. Source: Student (Teaching & Learning Module)
This reuse of visual artefacts reflects my inclination towards design thinking, though I recognise that I have not yet grounded this approach in educational design literature.
This was also the first module where the university's AI guidelines were strictly applied. We were required to be transparent about our LLM prompts and outputs. I found this challenging, though not for the reasons one might expect. My workflow involves iterative refinement, using AI to ensure grammatical consistency and maintain academic writing style while building content. I am confident that all ideas are my own, and I remain vigilant about bias and apply ethical AI standards throughout. Yet documenting this process felt like it constituted a separate assignment altogether.
What complicates AI transparency further is the stochastic nature of large language models. In this context "stochastic" means that these systems incorporate randomness in generating responses, so the same prompt can yield different outputs each time. Added to this, recent enhancements mean that LLMs increasingly incorporate specific users' context and interaction history into their responses. The practical implication is that an examiner testing a student's prompt may receive a quite different response from what the student originally saw. This makes verification difficult and raises questions about how we assess AI-assisted work fairly. There was a tension between wanting to demonstrate thoughtful, responsible AI use and not wanting to appear defensive, particularly given that AI forms part of my teaching practice and will be the focus of my research.
The examiner feedback on both assignments was constructive and helped strengthen my work. For the formative assessment (the literature review), feedback highlighted that I had not included sufficient Health Professions Education literature and that the link to HPE needed to be more explicit. There was also a question about whether I was targeting a specific group of faculty and what would happen if they did not adapt their teaching. These observations cut to the heart of my disciplinary positioning, and I worked to address them in the summative assessment.
For the protocol (the summative assessment), the feedback affirmed several aspects of my work: the study definitions table was noted as very useful, the research gap was described as well articulated, the choice of snowball sampling was endorsed, and the sample size justification was considered well argued. The methodological feedback was particularly instructive. There was a recommendation to include Mezirow as the primary author of transformative learning theory, a significant oversight on my part given that transformative learning underpins much of my theoretical framework. The feedback also expanded my understanding of member checking, noting that beyond sending transcripts to participants for verification, there is a second level at which coded and themed versions should be shared so participants can confirm whether their contributions have been interpreted faithfully. I had not considered this deeper form of interpretive validation. There was also a suggestion that interview transcripts be randomly sampled by another authorised party to verify their accuracy, adding a layer of trustworthiness I had not built into the protocol.
Other feedback addressed structural and practical matters, including that I spoke of "DSR faculty" as if it were an identity rather than describing "faculty making use of DSR", a subtle but meaningful distinction in how I frame my participants. The examiner also noted repetition between my data generation and study setting sections, and advised removing conference and publication costs from the budget since these occur after the research phase. These comments revealed that while my conceptual work was sound, my protocol writing needed tightening.
One output from this period that I am particularly proud of is my methods paper on Design Science Research, published in the African Journal of Primary Health Care & Family Medicine (https://phcfm.org/index.php/phcfm/article/view/5194). While this was not a course requirement, I saw value in consolidating my thoughts on DSR's application in public health as part of the peer-reviewed literature. I will certainly cite this in my final assignment.
Due to time and word count constraints, I was unable to include appendices with sample questionnaires and consent forms. In hindsight, this was a missed opportunity to receive feedback and refine these instruments before the formal research commences.
Reflecting on this module, several insights have emerged, many of which connect to patterns I have observed across my MPhil journey.
The feedback on the literature review revealed a recurring pattern in my work: I tend to assume that readers will understand the connections I am making between disciplines. The observation about the HPE literature gap was not simply about citation counts. It was about making my disciplinary positioning explicit. I am studying DSR faculty from Built Environment disciplines, yet my ultimate aim is to inform HPE's adoption of DSR methodology. Without sufficient HPE literature anchoring this argument, the transdisciplinary leap remains implicit rather than justified. This connects directly to feedback I received in the Assessment module about not assuming shared understanding, and in the Curriculum Development module about defining terms that seem self-evident to me. The pattern is now undeniable: I consistently overestimate the extent to which readers share my frame of reference.
The requests for clarification on terminology reinforced this insight. Questions about what I meant by "Built Environment disciplines" and what "mature" meant in an educational context highlighted terms I use fluently in my own thinking but that require definition for readers who do not share my background. The HPE scholarly approach has certainly challenged me in this regard. The rigour of argumentation pushes me to keep asking "why" and to defend my views with evidence, even when I feel the logic is self-evident.
The omission of Mezirow troubled me on reflection. I had referenced transformative learning theory through secondary sources without engaging with the foundational work. This is not merely a citation gap; it reflects a tendency to work with concepts at surface level rather than tracing them to their theoretical roots. Mezirow's framework for perspective transformation is directly relevant to understanding how faculty might experience AI integration as a disorienting dilemma that challenges their assumptions about teaching. By not engaging with this primary source, I weakened my theoretical foundation.
The expanded understanding of member checking has implications for my research practice. I had thought of member checking as a verification exercise: did I transcribe your words correctly? But the suggestion to share coded and themed versions points to something more profound: did I interpret your meaning faithfully? This interpretive dimension aligns with the reflexive thematic analysis approach I adopted from Braun and Clarke, which emphasises the researcher's active role in constructing themes rather than merely discovering them in the data. Member checking at the interpretive level acknowledges that participants should have input into how their experiences are framed, not just recorded.
The AI transparency requirement prompted deeper reflection on my own practice. I use AI iteratively, as a thinking partner rather than a content generator. Yet the requirement to document this process made me realise how difficult it is to capture iterative refinement in a linear declaration. The prompts I use evolve as my thinking develops; they are not discrete inputs yielding discrete outputs. The stochastic and personalised nature of current LLMs adds another layer of complexity to this documentation challenge. This experience has given me empathy for students navigating similar requirements and has shaped how I might approach AI transparency in my own teaching.
I also discovered that multi-institutional research will require ethics approval or permission from each participating institution. This had not occurred to me, perhaps because my previous research experience was in health services research governed by the National Health Act, where health research ethics committees provide oversight for studies involving patients, healthcare workers, or health records. Educational research with faculty participants operates under different governance arrangements. This realisation is somewhat daunting, as it presents logistical challenges for a masters-level study. I will need to make pragmatic decisions about scope, keeping in mind that this is not a PhD, nor, as I joked with colleagues, a Nobel Prize submission.
Interestingly, the theoretical framework I developed for the protocol, drawing on Bandura's social cognitive theory and the concept of self-efficacy, resonates with my own experience as a developing researcher. Self-efficacy beliefs influence how people approach challenges and persist through difficulties. Throughout this module, I noticed my own confidence fluctuating: buoyed when the gap articulation was praised, uncertain when foundational sources were found missing. This parallel between my research framework and my lived experience as a learner is worth noting, though I am cautious about over-interpreting it.
The value of the online discussion contributions also became clear. Taking time to articulate my methodological choices using the SPIDER framework and research onion forced me to be explicit about decisions I might otherwise have left implicit. The positive feedback on these contributions reinforced the importance of making thinking visible, both for my own clarity and for others who might engage with my work.
The feedback ecosystem diagram itself underwent further development during this module. I had originally created it as a static representation of the multiple feedback channels within DSR learning: the teacher, peers, authenticity checks, co-creation cycles with users, and interaction with the design artefact itself. For the summative assessment, I evolved this into a video loop that better captures the dynamic, cyclical nature of these feedback processes. The animation (Figure 3) allowed me to show how feedback flows and accumulates over time, rather than presenting it as a frozen snapshot. This iteration on my own artefact exemplifies the very approach I am researching: using design thinking to refine conceptual tools through successive cycles of development. It also demonstrates how maintaining a consistent theme across the programme creates opportunities for deepening and extending earlier work rather than starting afresh each time.
Figure 3: Animated illustration of the feedback ecosystem for learning within the Design Science Research paradigm, using a Fantasy visual genre to convey the dynamic nature of feedback flows
Moving forward, I have identified several areas requiring attention, though I recognise these will need to be balanced against the practical constraints of completing a masters dissertation.
I am grateful for the feedback I received on both assignments, as it has directly shaped how I approached the summative assessment and will continue to inform my dissertation work. Acting on the formative feedback, I ensured that HPE literature was better represented in the protocol, explicitly connecting DSR faculty experiences to HPE scholarship on technology integration, faculty development, and pedagogical transformation. I drew on literature around feedback literacy to anchor my discussion of how AI disrupts feedback processes. The connection between DSR and HPE is now more explicit, though I will continue to strengthen this throughout the dissertation.
The Mezirow omission has prompted me to return to primary texts rather than relying on secondary summaries. For transformative learning specifically, I will read Mezirow's work directly and trace how subsequent scholars have applied it in HPE contexts. This deeper engagement will strengthen my theoretical framework.
The feedback about terminology reminds me that definitions must be clear for readers outside my immediate disciplinary context. The study definitions table provides a template; I will expand this in my dissertation and ensure that all discipline-specific terms are explained. This connects to the broader pattern of not assuming shared understanding that I need to actively counter.
On the methodological side, I will build deeper member checking into my protocol. Beyond transcript verification, I plan to share preliminary themes with participants, inviting them to confirm or challenge my interpretations. This adds time and complexity, but it strengthens the trustworthiness of my findings. I also need to prepare the instruments I omitted from the protocol, including the interview schedule, consent forms, and information sheets.
The multi-institutional ethics question requires a strategic conversation with my supervisor. We will need to determine when and how to seek clearance from participating institutions. This may require narrowing my participant pool or accepting that the ethics process will extend my timeline. I am learning to be comfortable with pragmatic compromises that maintain research integrity while acknowledging the constraints of a masters programme.
The experience of documenting my AI use has made me more reflective about how I integrate these tools into my practice. The challenges of capturing stochastic, context-sensitive AI interactions in a transparent way are not unique to me; they reflect broader questions about how academia will assess AI-assisted work. I will consider how I might model this transparency for my own students and contribute to the emerging conversation about responsible AI use in scholarly work.
The Research Methodology module has challenged me to articulate what I often take for granted. The process of developing a protocol forced me to make my assumptions visible and defend them with evidence. While this was at times uncomfortable, it has strengthened my research design and prepared me for the scrutiny that lies ahead. I am grateful to my examiners for their thoughtful engagement with my work and the time they invested in providing feedback that has genuinely improved my thinking. I look forward to carrying these lessons into the dissertation phase.
Bandura, A. (1997). Self-efficacy: The exercise of control. W.H. Freeman.
Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325.
Cooke, A., Smith, D., & Booth, A. (2012). Beyond PICO: The SPIDER tool for qualitative evidence synthesis. Qualitative Health Research, 22(10), 1435–1443.
Mezirow, J. (1991). Transformative dimensions of adult learning. Jossey-Bass.
Saunders, M. N. K., Bristow, A., Thornhill, A., & Lewis, P. (2019). Understanding research philosophy and approaches to theory development. In M. N. K. Saunders, P. Lewis, & A. Thornhill (Eds.), Research Methods for Business Students (8th ed.). Pearson Education.
Schön, D. A. (1983). The reflective practitioner: How professionals think in action. Basic Books.
Editorial support:
Anthropic's Claude Opus 4.5
Video generation:
Google Gemini's Veo 3.1