Ethical Implications of Using AI in eLearning
As artificial intelligence (AI) and deep learning tools become increasingly complex and autonomous, it will become more difficult to understand how they work and make decisions. This means businesses will face some tough ethical questions and be subject to regulatory consequences when things go wrong.
Nevertheless, these tools can improve our ability to evaluate, learn and adopt dynamic learning strategies. "How Will Artificial Intelligence Impact Learning" clearly outlines how AI fits into eLearning (think marking and grading, accessibility, tutoring, and content creation), but it's equally important to acknowledge and be aware of the ethical implications of AI adoption.
In this article, I will identify some of the questions that should be considered when adopting AI-based eLearning strategies. I will also look at the ethical implications surrounding accuracy, transparency and privacy.
AI Adoption and the Ethics-Based Questions You Should Ask
The practical application of AI may seem way down the road because the time and cost investment is prohibitive; however, there's no question that adoption is inevitable. Regardless of when adoption occurs in your organization, you should be thinking about its impact and the ethical implications that follow.
Some of the questions you should ask include:
- Why are ethics a key issue for industries thinking of adopting AI into their business processes?
- How can human values be embedded in AI, especially when it comes to learning, as education is inherently human-driven?
- What steps should be taken to minimize the risk of bias?
- How can transparency be achieved?
- What steps are required to achieve accountability?
- How do we keep humans in the loop in a process that uses algorithms and machine-generated knowledge?
To date, AI experts don't have complete answers to these questions. Therefore, it is our responsibility to keep exploring these ethical questions and the effects they have on eLearning.
Other Ethical Implications
Now that we've identified some of the questions you should be asking, I'm going to switch gears and discuss three challenges that developers are grappling with today: accuracy of content, transparency and privacy.
1. Who is responsible for the accuracy of content?
With AI, it's important to remember that the creators of the algorithms are not the creators of the content their algorithms produce. In other words, if something goes wrong, it's not clear who is held accountable. One of the struggles that AI adopters face is that machines cannot be held accountable in the same way that humans can. For example, in less than 24 hours, a Twitter chatbot went from tweeting "humans are cool" to "Hitler was right". In an eLearning context, this could have disastrous implications because the information that people are acquiring could be incorrect. Human involvement in setting objectives and ensuring that AI doesn't distort content will be critical for the future use of AI in education.
Similarly, AI mirrors the biases of its creators. eLearning developers will need to take this into consideration, as it will affect the type and accuracy of the content that is produced.
How can AI technology be created such that it doesn't have prejudiced beliefs? First, there will need to be many individuals involved in its development who have various beliefs and backgrounds. Second, as the technology evolves, users need to be critical about the information they receive. Cathy O'Neil wrote in the MIT Technology Review that "Algorithms replace human processes, but they're not held to the same standards. People trust them too much." Consequently, educators will need to emphasize critical thinking as a key developmental skill in all learning programs.
2. How can transparency be achieved?
In an article written for The New York Times Magazine, Cliff Kuang explores the limited transparency offered on decisions made through artificial intelligence systems. He explains, "it's a more profound version of what's often called the 'black box' problem - the inability to discern exactly what machines are doing when they're teaching themselves novel skills." This is becoming a very real concern now that the European Union has introduced the General Data Protection Regulation (GDPR), which includes provisions requiring that automated decisions affecting individuals be explainable. For L&D this might relate to issues like:
- AI-based assessment - will AI be making decisions that affect promotions?
- AI-based learning - will AI decide what people should learn? How will this affect their career development?
- Are individuals being given the information they need to become competent in their current role?
- Are individuals being presented with appropriate material for extending their skills and capabilities?
- Is it fair? How does AI make decisions about who is led into new areas and what new areas are recommended?
Answering these questions requires unpacking an extensive set of algorithms that likely no single individual fully understands. This opacity creates an ethical dilemma for both developers of AI and adopters of AI in eLearning. My recommendation, therefore, is to ensure that these questions can be answered before adopting such technology.
3. How can privacy be upheld?
One of the most popular learning approaches today is personalized eLearning. Education is a personal experience and learners not only learn in diverse ways, they also have different goals and expectations. To address these factors, instructional designers began introducing personalized functions into eLearning through intelligent tutoring systems and adaptive applications. Despite their benefits, a number of ethical issues arise from the use of personalization in eLearning systems. One of the most obvious is that of personal privacy, because personalization systems can personalize content only by collecting information about users to calculate what their interests or requirements may be.
Furthermore, AI draws inferences from collected personal information, and those inferences can be wrong. Not only does this erode individual capability through an increased dependence on personalization to make decisions for the learner, but it can also mean that personal information is collected without the individual's permission, resulting, in some cases, in the abuse of this information.
When it comes to privacy and AI adoption in eLearning, instructional designers will need to be hyper-aware of where their data is coming from and ensure that learners consent to the information they provide to these systems.
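To make the consent recommendation concrete, here is a minimal sketch of how a personalization system could gate data collection behind explicit learner opt-in. The `LearnerProfile` and `record_interaction` names are hypothetical illustrations, not part of any real eLearning platform; a production system would also need consent records, revocation handling and data retention policies.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Hypothetical learner record for a personalization engine."""
    learner_id: str
    consented: bool = False  # explicit opt-in; defaults to no consent
    interactions: list = field(default_factory=list)

def record_interaction(profile: LearnerProfile, event: dict) -> bool:
    """Store a personalization signal only if the learner has opted in.

    Returns True if the event was stored, False if it was discarded
    because consent has not been given.
    """
    if not profile.consented:
        return False  # no consent: discard the event rather than store it
    profile.interactions.append(event)
    return True
```

The key design choice is that consent is off by default and checked at the point of collection, so data is never stored first and justified later.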
This article identified some of the questions that should be considered when adopting AI-based eLearning strategies. It also discussed the ethical implications surrounding accuracy, transparency and privacy.
AI technologies promise great benefits for eLearning, but there are also pitfalls to address before such technologies become mainstream. A central ethical challenge is evaluating these technologies early enough to usefully influence fundamental learning concepts and design. Nonetheless, I don't want to scare you away from using AI, because I truly believe it is the future of eLearning. As the technology evolves, so too will our design and learning strategies, as long as we continue to ask the questions identified above and ensure that the end goal is the achievement of established learning objectives.
Sarah is an Instructional Designer at BaseCorp Learning Systems and is currently completing a PhD in Educational Technology. Her research focuses on implementing competency-based learning systems in all types of organizations. When she doesn't have her nose in a book you can find her at the gym, on the ice, on the ski hill, drinking wine or in a coffee shop … with her nose in a book.