Sussex Neuroscience public engagement activities


News from the Open Research Technologies Hub
Posted on behalf of: Sussex Digital Humanities Lab (SHL Digital)
Last updated: Friday, 16 January 2026
Members of SHL Digital are organising a symposium on the Ethics of Generative-AI Assisted Authoring at . The Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) is the largest Artificial Intelligence society in the United Kingdom.
Whilst there is no shortage of discussion of AI ethics, the recent widespread uptake of Gen-AI tools to support authoring raises new questions (Formosa et al., 2025; Murray, 2024). The primarily deontological frameworks that attempt to provide general principles for AI ethics tend to be aimed at decision-making contexts, and to redefine ethics as checklists for privacy protection, safety, accountability and fairness (Hagendorff, 2020).
By authoring, we mean the production of written texts, multimodal texts and software. It is easy to think of case studies of Gen-AI assisted authoring that most would find ethical (such as a dyslexic writer using a Gen-AI tool to correct grammatical errors and improve the clarity of a draft) and others that most would find unethical (such as a student submitting, as assessed coursework, code generated from a prompt based on the assignment brief).
We invite discussions and explorations of how various factors influence perceptions of ethicality, including the purpose and context of the authoring, the extent to which the output communicates the author’s own ideas, the impact on the author’s own cognitive abilities, the impact on the author’s wellbeing at work, whether the original task was appropriate, the environmental and resource costs of the use, and the specific models and servers used for the generation.
Contributions reflecting on these topics and questions are very welcome.
Submission information: We invite extended abstracts of up to 1,500 words, to be presented during the symposium through panels and short talks (to be curated by the programme committee after reading submissions). See the submission link below for more information.
Deadline: 27 February
Symposium format: 1-day symposium
Submission link:
Programme Committee
The symposium programme committee is led by members of the Sussex Digital Humanities Lab, in collaboration with colleagues from across the disciplines.
References
Formosa, P., Bankins, S., Matulionyte, R., & Ghasemi, O. (2025). Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. AI & Society, 40(5), 3405-3417.
Glazko, K. S., Huh, M., Johnson, J., Pavel, A., & Mankoff, J. (2025). Generative AI and Accessibility Workshop: Surfacing Opportunities and Risks. In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1-6).
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120.
Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI conference on human factors in computing systems (pp. 1-22).
Murray, M. D. (2024). Tools do not create: human authorship in the use of generative artificial intelligence. Case W. Res. JL Tech. & Internet, 15, 76.
Villegas-Galaviz, C., & Martin, K. (2024). Moral distance, AI, and the ethics of care. AI & Society, 39, 1695–1706.