Research Methods & Professional Practice, Week 1
E-Portfolio Activity, Reflective Activity 1
Ethics in Computing in the age of Generative AI
In the research article "Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance", Nicholas Kluge Corrêa and colleagues present a comprehensive analysis of global ethical standards for Artificial Intelligence (AI) and highlight the issues surrounding them. The study examines 200 documents from 37 countries across six continents, published between 2014 and 2022, including recommendations, policy frameworks, and codes of conduct from public bodies, academic institutions, private companies, and civil society organizations (Corrêa et al., 2023). It identifies 17 key ethical principles prevalent across these guidelines, the most common being transparency, security, justice, privacy, and accountability, while noting that others, such as truthfulness, intellectual property, and rights, receive far less emphasis. The authors observe that although these guidelines often articulate ethical values, they frequently lack practical implementation strategies and legally binding regulations (Press, 2023). The key challenge identified in the research is the difficulty of establishing universal consensus on ethical principles for AI governance, given the diverse perspectives and interests of the stakeholders involved, including public bodies, private companies, and civil society organizations (Corrêa et al., 2023).
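To make the scale of this kind of review concrete, the short sketch below shows how the prevalence of principles might be tallied once each guideline document has been tagged with the principles it mentions. The miniature dataset and the simple tagging approach are illustrative assumptions only; they do not reproduce the authors' actual coding scheme or data.

```python
from collections import Counter

# Hypothetical, toy-scale input: each guideline document tagged with the
# ethical principles it mentions (Corrêa et al. coded 200 real documents
# against 17 principles; only an invented subset appears here).
tagged_documents = {
    "EU policy framework": ["transparency", "accountability", "privacy"],
    "Corporate code of conduct": ["transparency", "security", "justice"],
    "Civil society recommendation": ["privacy", "justice", "transparency"],
}

# Count how many documents cite each principle, then rank by prevalence.
principle_counts = Counter(
    principle
    for principles in tagged_documents.values()
    for principle in principles
)

total = len(tagged_documents)
for principle, count in principle_counts.most_common():
    print(f"{principle}: cited in {count} of {total} documents ({count / total:.0%})")
```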
Reflecting on the evolution of AI through the lens of Corrêa et al. (2023), and on its uses and implementations on a global scale, it is evident that AI has transformed from a theoretical notion into a pervasive force shaping many aspects of society. This transformation is marked by significant milestones, such as the development of neural networks in the 1950s, the introduction of machine learning algorithms, and the recent emergence of advanced models like GPT-4 and ChatGPT (Malik, 2023). As AI technologies proliferate, they offer substantial advantages alongside serious concerns, so it is important to consider carefully how different nations regulate their use. Countries' legislative frameworks, cultural views, and degrees of technological innovation have all shaped AI's uneven development and governance. For instance, the European Union (EU) has taken a proactive stance on AI governance with its General Data Protection Regulation (GDPR) and proposed AI Act, which emphasize accountability and transparency in AI systems (European Commission, 2021). In contrast, countries such as China have adopted a more centralized approach, focusing on rapid deployment and innovation while prioritizing state control over ethical considerations (Zeng, 2020). This disparity in governance reflects differing societal values regarding privacy, security, and innovation.
The benefits of AI are vast, ranging from enhanced productivity and efficiency to breakthroughs in healthcare and environmental sustainability. However, these advantages come with significant risks: ethical concerns such as algorithmic bias, privacy violations, and potential job displacement pose challenges that require careful consideration (Corrêa et al., 2023). While there is no single global standard for AI governance, several frameworks aim to address these concerns. The OECD's Principles on Artificial Intelligence promote inclusive growth and well-being while ensuring that AI systems are transparent and accountable (OECD, 2019). Similarly, the UNESCO Recommendation on the Ethics of Artificial Intelligence provides guidelines for member states to foster responsible AI development (UNESCO, 2021). These frameworks signify a growing recognition of the need for ethical oversight in AI deployment.

In the UK, the Government has implemented a ten-year plan, the National AI Strategy, which aims to position the UK as a global leader in AI. It focuses on three pillars: investing in AI research, enabling an AI-driven economy, and ensuring effective governance. The governance framework emphasizes a pro-innovation regulatory approach that balances ethical considerations and public safety while fostering growth, reflecting the UK's commitment to responsible and innovative AI development (UK Government, 2021). In financial terms, the UK already has an extensive history of AI investment and is currently ranked third in the world for private venture capital backing of AI companies, with 2019 investment reaching almost £2.5 billion (UK Government, 2021).
Companies that wish to take full advantage of AI must recognize the risks that accompany its use and be ready to accept accountability. This entails recognizing AI's potential and assuming a leadership role to prevent its abuse. AI does not replace human judgement about how it is used and what effects it has; a responsible approach allows businesses, their clients, and stakeholders to benefit from available technical advances. Bias is a particular problem, as AI development always reflects choices made by the researchers involved, making it difficult to design a completely impartial system; this is why AI ethics is such a crucial field. Businesses that use AI must remain responsible, safe, and compliant, and regulations should be fair and open (Bechkoum, 2022).
Moving forward, to address the challenges posed by AI effectively while maximizing its benefits, several actions could be considered:
- Implement Ethical Impact Assessments: Businesses should adopt these assessments as standard practice to evaluate AI systems against established ethical guidelines, ensuring alignment with societal values and enhancing accountability (Fujitsu, n.d.); a simplified sketch of such a checklist follows this list.
- Foster International Collaboration: Countries should cooperate to create harmonized regulatory frameworks that address the cross-border concerns posed by AI technologies. This might include sharing best practices and developing worldwide standards for ethical AI use.
- Enhance Public Engagement: Involving a broad range of stakeholders in discussions about AI governance, including technologists, engineers, ethicists, policymakers, and communities, can lead to more inclusive decision-making procedures that reflect societal values.
- Invest in Education and Training: Training on ethical issues in AI development should be provided to tech industry professionals, especially developers and researchers. This will support the development of a workforce that values responsible innovation.
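To illustrate how the first of these actions might be put into practice, the sketch below outlines a lightweight ethical impact assessment checklist. The five principles are the most frequently cited ones reported by Corrêa et al. (2023); the scoring scale, the `EthicalImpactAssessment` class, and the example system are hypothetical and do not reproduce UNESCO's or Fujitsu's actual assessment templates.

```python
from dataclasses import dataclass, field

# The five most frequently cited principles reported by Corrêa et al. (2023);
# real assessments (e.g. UNESCO's tool or Fujitsu's casebook) cover many more
# dimensions and are far more detailed.
PRINCIPLES = ["transparency", "security", "justice", "privacy", "accountability"]

@dataclass
class EthicalImpactAssessment:
    """Minimal, hypothetical checklist: score each principle from 0 to 5."""
    system_name: str
    scores: dict = field(default_factory=dict)

    def record(self, principle: str, score: int, evidence: str) -> None:
        # Reject principles outside the checklist and out-of-range scores.
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        if not 0 <= score <= 5:
            raise ValueError("Score must be between 0 and 5")
        self.scores[principle] = {"score": score, "evidence": evidence}

    def gaps(self, threshold: int = 3) -> list:
        """Principles that are unscored or fall below the threshold."""
        return [p for p in PRINCIPLES
                if self.scores.get(p, {"score": 0})["score"] < threshold]

# Example usage with an entirely hypothetical system.
eia = EthicalImpactAssessment("loan-approval-model")
eia.record("transparency", 4, "Model documentation published for each release")
eia.record("privacy", 2, "Data retention policy still in draft")
print("Principles needing attention:", eia.gaps())
# -> ['security', 'justice', 'privacy', 'accountability']
```

In practice, a checklist of this kind would only be the starting point for the qualitative evidence-gathering and stakeholder consultation that published assessment frameworks require.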
The journey of artificial intelligence from its inception to its current state underscores both its transformative potential and the complexities it introduces into legal, social, and professional realms. By implementing robust ethical frameworks and fostering international cooperation, stakeholders can navigate the challenges posed by AI while promoting its benefits for society at large. As highlighted by Corrêa et al. (2023), establishing a consensus on ethical principles is crucial for ensuring that AI serves humanity positively.
References:
Press, C. (2023). Researchers measure global consensus over the ethical use of AI. [online] Techxplore.com. Available from: https://techxplore.com/news/2023-10-global-consensus-ethical-ai.html [Accessed 4 February 2025].
Corrêa, N. K., Galvão, C., Santos, J. W., Del Pino, C., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & de Oliveira, N. (2023). Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns (New York, N.Y.), 4(10), 100857. Available from: https://doi.org/10.1016/j.patter.2023.100857 [Accessed 4 February 2025].
UNESCO (2023). Ethical Impact Assessment: A Tool of the Recommendation on the Ethics of Artificial Intelligence. [online] Available from: https://www.unesco.org/en/articles/ethical-impact-assessment-tool-recommendation-ethics-artificial-intelligence [Accessed 5 February 2025].
Malik, E. (2023). Artificial Intelligence (AI) and ChatGPT: history and timelines. [online] Office Timeline. Available from: https://www.officetimeline.com/blog/artificial-intelligence-ai-and-chatgpt-history-and-timelines. [Accessed 5 February 2025].
European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. [online] EUR-Lex. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 [Accessed 5 February 2025].
Zeng, J. (2020). Artificial intelligence and China’s authoritarian governance. International Affairs, 96(6). Available from: https://doi.org/10.1093/ia/iiaa172 [Accessed 5 February 2025].
OECD (2024). Artificial intelligence. [online] OECD. Available from: https://www.oecd.org/en/topics/policy-issues/artificial-intelligence.html. [Accessed 5 February 2025].
UK Government (2021). National AI Strategy. [online] GOV.UK. Available from: https://www.gov.uk/government/publications/national-ai-strategy. [Accessed 5 February 2025].
Bechkoum, K. (2022). AI offers business huge potential but ethics still count. [online] BCS. Available from: https://www.bcs.org/articles-opinion-and-research/ai-offers-business-huge-potential-but-ethics-still-count/ [Accessed 5 February 2025].
Fujitsu (n.d.). AI Ethics Impact Assessment Casebook. [online] Available from: https://www.fujitsu.com/global/documents/about/research/technology/aiethics/fujitsu-AIethics-case_en.pdf [Accessed 5 February 2025].