Yemi Gabriel


Peer Response (Md Aminur Rahman)

Thank you for an insightful discussion. You highlighted the risk of Artificial Intelligence (AI) writers perpetuating biases present in their training datasets. This reminds me of the incident with Microsoft’s Tay chatbot in 2016. Tay, an AI chatbot, was designed to engage in conversation and produce written content based on user interactions. However, within 24 hours, it began to post racist, sexist, and antisemitic tweets after negative interactions with users (Wolf et al., 2017). The incident emphasised the importance of robust moderation and filtering in AI systems to prevent the dissemination of harmful, bigoted ideas, and it serves as a reminder of the risks of deploying AI in public environments without sufficient safeguards.

You also highlighted the risk of over-reliance on AI tools. There are concerns about AI tools causing cognitive atrophy (Sætra, 2023): as we delegate mentally challenging tasks to AI, we risk losing the ability to perform those tasks ourselves in the long run (Sætra, 2019). While I understand these concerns, I think the mental space that AI frees up can lead to even more innovative advances in society. For example, the invention of the calculator meant that humans could compute faster and accomplish more, freeing up mental space for other problem-solving tasks.

Ultimately, there has to be a balance between all that AI tools do for us as a society and the safeguards we implement to ensure responsible usage.

References

Sætra, H. S. (2019) ‘The Ghost in the Machine’, Human Arenas, 2(1), pp. 60–78.

Sætra, H. S. (2023) ‘Generative AI: Here to stay, but for good?’, Technology in Society, 75, p. 102372.

Wolf, M. J., Miller, K. and Grodzinsky, F. S. (2017) ‘Why we should have seen that coming’, ACM SIGCAS Computers and Society, 47(3), pp. 54–64.