Supervision 010525

Overview:

  • plans for upcoming/in progress design interventions
  • annotated bibliography and brainstorm of personal motivations for the research
  • two ethics forms in progress

Plans for upcoming / in-progress projects:

Workshops about AI Literacy + Learning Futures

Inspired by the podcast episode from IBM about AI in education (AI in education: Safety, literacy, and predictions, 2024), I want to explore how we teach students to critically interact with AI when they use it. I want to encourage a movement away from large corporations such as OpenAI and empower students to work with Large Language Models in a local and more personalised way. I also want to engage students with the idea of actively and imaginatively speculating on the future of AI in education. Throughout the workshop I want to offer visual/narrative prompts that can act as catalysts for discussion or participatory ideation.

[These workshops could act as an onboarding to a further research project, in which participating students would make their own DIY LLM to work with over the duration of the course.] I'm not 100% sure about this yet.

Project started: Not yet
Ethics draft begun: Yes - Post-Registration Ethics Approval Form.docx

CCI Staff Interviews

To prepare for the workshops, I would like to have a series of conversations with members of staff across CCI about the concept of AI literacy and how they convey it to students. Currently I get a sense (from general comments and casual conversation) that AI use is mostly frowned upon, and I would like to get a clearer understanding of this - specifically, to dig into what staff members think our responsibility is in regard to the ever-growing prevalence of AI in industry (and society more broadly).

Project started: Not yet
Ethics draft begun: No

AI Supervisor ‘Supervisor Bot’

Embracing the fear of ‘AI taking our jobs,’ I aim to directly explore what an ‘AI supervisor’ could contribute to my PhD learning process. To do this, I’ll assess the expected traits of a human supervisor, as per UAL guidance, and evaluate whether a locally hosted Large Language Model can meet these needs. Given ethical concerns surrounding mega corporations like OpenAI and their privacy implications, it’s important to imagine alternative ways of interacting with AI technology. Currently, advising or promoting the use of services like ChatGPT in this context raises red flags. This investigation leads me to consider running local Large Language Models, with a vision to empower students (and myself) to create custom models tailored to our specific needs.

Currently I am building a v1 prototype to start testing what is possible with my current hardware and technical provisions. A brief description of the prototype:
This prototype explores the potential of an LLM to act as a supervisor by adopting a 'persona' and listening to conversations between the student (me) and my supervisors, acting as an additional supervisor offering feedback.
It will allow a local LLM to observe conversations and respond with helpful context, using text and (maybe) voice. It does not aim to have an interface and won't be a fully robust, usable application at this stage.
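As a rough illustration of the mechanism (not the final design), the sketch below shows the core loop under some assumptions: a local Ollama server is running, the ollama Python client is installed, and the model name and persona wording are placeholders I have chosen for the example.

```python
# Minimal sketch: a locally hosted LLM adopting a 'supervisor' persona and
# commenting on a supervision-meeting transcript.
# Assumptions: a local Ollama server is running, the `ollama` Python client
# is installed (pip install ollama), and "llama3" stands in for whichever
# local model the hardware can actually support.
import ollama

SUPERVISOR_PERSONA = (
    "You are an additional PhD supervisor sitting in on a supervision meeting. "
    "Listen to the conversation and offer brief, constructive feedback: "
    "note unanswered questions, suggest next steps, and flag possible risks."
)

def supervisor_feedback(transcript: str, model: str = "llama3") -> str:
    """Send a meeting transcript to the local model and return its feedback."""
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": SUPERVISOR_PERSONA},
            {"role": "user", "content": f"Meeting transcript:\n{transcript}"},
        ],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    sample = (
        "Student: I want to run workshops on AI literacy next term.\n"
        "Supervisor: How will you recruit participants and handle consent?"
    )
    print(supervisor_feedback(sample))
```

Voice input and live transcription are deliberately out of scope here; the transcript is just a string, which keeps v1 testable on my current hardware.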

Project started: Yes - I would like to get v1 done before the next meeting
Ethics draft begun: Yes - Post-Registration Ethics Approval Form.docx

DIY Bot

As an extrapolation from the 'Supervisor Bot', I am thinking about how to extend this project by taking what I will have learnt and designing a framework and workshop for my students on how to build their own custom LLM bots. Within this context, the students would have agency in building their supervisor/tutor and the characteristics it would embody. The bot would be personal to each student, and because they customise it themselves, they would need a deeper level of understanding of how it works, which will hopefully lead to a more critically engaged use of the tool.
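To make the 'agency in building' idea concrete, here is a small hypothetical sketch of what a student-facing persona spec could look like; the field names and the build_system_prompt helper are illustrative assumptions rather than a finalised framework.

```python
# Hypothetical sketch of the 'DIY bot' framework: each student fills in a
# small persona spec, and the framework turns it into the system prompt that
# drives their own locally hosted model. Field names are assumptions made for
# illustration, not a finalised design.
from dataclasses import dataclass

@dataclass
class PersonaSpec:
    name: str    # what the student calls their bot
    role: str    # e.g. "critical tutor", "encouraging peer"
    tone: str    # e.g. "direct", "gentle"
    focus: str   # the kind of feedback the student wants foregrounded

def build_system_prompt(spec: PersonaSpec) -> str:
    """Turn the student's persona choices into a reusable system prompt."""
    return (
        f"You are {spec.name}, acting as a {spec.role} for a university student. "
        f"Keep your tone {spec.tone} and focus your feedback on {spec.focus}."
    )

# Example: the choices are the student's own, which is where the deeper
# understanding (and hopefully the critical engagement) comes from.
my_bot = PersonaSpec(
    name="Ada",
    role="critical tutor",
    tone="direct but supportive",
    focus="research design and unexamined assumptions",
)
print(build_system_prompt(my_bot))
```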

Project started: Not yet - more work needs to be done on the AI Supervisor Bot first
Ethics draft begun: No


  • Here is the link to the annotated bibliography of books (and other content) that I have engaged with recently: annotated-bib-150425.docx

  • I have begun to try to express some of the underlying motivations and drive behind this research and to start linking it to my positionality. Here is a link to a very rough draft of me collating ideas and thoughts together (it might not be that legible at the moment): personal-motivation-brainstorm.pdf. The following summary of a PgCert reading is also connected to these thoughts: garrett-2024-imagined-futures-of-racialised-phds


Notes from meeting

Staff interviews:

  • Consider doing interviews more broadly across UAL - disciplinary differences
  • Research design: surveys first, then leading to interviews, to get some early insight
  • Conversations rather than interviews?
  • Surveys can inform discussion guides to scaffold the interview/conversation

Do I highlight participants' positionality? Is it important for my research? How do I deal with this ethically?

->

Ethics

  • For the main ethics doc: when will I need to move out of UAL infrastructure to other infrastructure?
  • Zoom out a little from project-based plans to a broader research design
  • Speak with Tom Lynch?

One main ethics form, then separate consent forms for each project

Deadline: September

->

**Things to look at**

  • Atomic Human
  • The Near Future Laboratory

->

**Admin & ToDos**

  • Supervision meeting form -> send to Silke and Eva

To do for next meeting:
  • Research design for next meeting
  • Ethics form complete for review
  • Check with Tim - is there a break in the ethics review before summer?