Generative AI and its Impact on eLearning

Delaney Caulfield

Generative AI is revolutionizing industries worldwide, and eLearning is no exception.

A subset of artificial intelligence, generative AI applications use learning models and algorithms to create new content. These systems learn patterns from existing data to produce new text, images, audio, code, simulations, or video.

By creating unique content, providing real-time feedback, and personalizing learning experiences, tools like ChatGPT, DALL-E, and others have the potential to transform education. But while generative AI offers exciting possibilities, it also brings significant challenges related to ethics, environmental sustainability, and companies’ return on investment (ROI).

Let’s take a deeper look into the role of artificial intelligence in eLearning.

Ethical Concerns

When you think about artificial intelligence, it’s hard not to wonder about the legality and ethics of training machine learning algorithms on writers’, artists’, and content creators’ existing material. Generative AI in eLearning raises several serious ethical concerns, including:

  • Privacy and data security

Privacy remains a paramount concern in generative AI. To produce personalized results, many AI systems require large datasets, which often include personal and sensitive information about users. In education, this data may include information on students' academic performance, behavior patterns, and even psychological insights based on their responses. The potential for misuse or unauthorized sharing of this data is a critical issue. Some AI companies have faced backlash for unclear data privacy policies, which can lead to trust issues among learners, instructors, and institutions alike.

A recent scandal surrounding this issue involved OpenAI’s ChatGPT, which experienced a data leak in early 2023, exposing sensitive user data, including chat history and payment information. Breaches like this illustrate how vulnerabilities in AI systems can lead to significant privacy risks, particularly for minors or vulnerable populations in an educational setting.

  • Deepfakes and misinformation

Generative AI tools can create hyper-realistic images, videos, and audio, known as deepfakes. While deepfakes can be beneficial in creating immersive eLearning content, they also raise concerns about authenticity and misinformation. In eLearning, deepfakes could be used to impersonate educators or other trusted figures, leading to credibility issues. In recent years, a series of deepfake incidents have highlighted the potential harm that hyper-realistic fake content can have on public trust and safety. As eLearning continues to adopt generative AI, institutions need clear guidelines to ensure these tools are used ethically and responsibly.

  • Copyright infringement

Generative AI models rely on vast datasets, often sourced from publicly available content, to generate new materials. These datasets can include copyrighted images, text, and other media, raising questions about intellectual property infringement. In eLearning, this becomes a concern when content generated by AI closely resembles existing copyrighted material without permission or proper attribution. Companies like OpenAI, Stability AI, and others have faced legal scrutiny and backlash from creators whose work may be inadvertently or intentionally repurposed by AI. As a result, educational institutions using generative AI for content creation may need to carefully monitor AI output to avoid potential copyright violations.

Environmental Impact

The environmental impact of generative AI tools is another growing concern. Training large language models, such as GPT-4, requires massive computational power and energy, contributing to high carbon emissions. Research from the University of Massachusetts, Amherst, found that training a single large AI model can emit over 626,000 pounds of CO₂—equivalent to the emissions of five cars over their entire lifespans. Additionally, running AI models like ChatGPT daily across multiple platforms demands substantial power, often sourced from non-renewable resources, thereby adding to the carbon footprint.
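To put those figures in perspective, here is a rough back-of-envelope sketch. The 126,000-pound per-car lifetime figure is an assumed round number consistent with the "five cars" comparison above, not a value stated in this article:

```python
# Back-of-envelope check of the emissions comparison above.
LBS_PER_METRIC_TON = 2204.62

training_emissions_lbs = 626_000  # one large-model training run (cited estimate)
car_lifetime_lbs = 126_000        # assumed lifetime emissions of one average car

training_tons = training_emissions_lbs / LBS_PER_METRIC_TON
cars_equivalent = training_emissions_lbs / car_lifetime_lbs

print(f"Training run: ~{training_tons:.0f} metric tons of CO2")
print(f"Roughly {cars_equivalent:.1f} car lifetimes' worth of emissions")
```

Under these assumptions, a single training run works out to roughly 284 metric tons of CO₂, on the order of five car lifetimes, which is why the scale of these models draws so much scrutiny.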

For eLearning platforms, which may use AI-based tools to personalize learning or generate content at scale, this environmental cost poses a significant ethical dilemma. As organizations aim to meet sustainability goals, they must weigh the benefits of generative AI against the environmental impact of their energy consumption. Some companies have begun to address this issue by investing in renewable energy or optimizing models to be less resource-intensive, but challenges remain in making these technologies environmentally sustainable.

Cost-Effectiveness and ROI

Despite the promise of generative AI, many companies are finding that the return on investment (ROI) in eLearning applications isn’t as high as anticipated. The high cost of developing, implementing, and maintaining these AI systems, coupled with the need for ongoing technical support, is challenging the economic viability of generative AI for many eLearning providers. Let’s break that down:

  • High initial and maintenance costs

Building and maintaining generative AI systems is expensive. Companies must often purchase high-end computational resources, secure large datasets, and employ teams of data scientists and engineers. Moreover, the rapid pace of AI technology advancements means that frequent updates and retraining are required to keep systems accurate and relevant, adding to ongoing costs. Many companies have found that the cost savings they expected through automation are outweighed by these maintenance expenses.

  • Underwhelming results and user disengagement

The novelty of generative AI often draws initial excitement, but sustaining learner engagement has proven challenging. Some eLearning companies report that learners lose interest in AI-generated content more quickly than in traditional, instructor-led formats. Additionally, AI systems can sometimes provide generalized feedback that lacks the depth and nuance of human instructors, making the learning experience less valuable for students. As a result, companies are re-evaluating the role of generative AI, particularly when learner outcomes don’t meet expectations.

  • The hidden costs of reputational damage

Scandals involving privacy, copyright infringement, or environmental concerns can have lasting impacts on an organization’s reputation. For educational institutions, ethical missteps in AI use can lead to a loss of trust among students and stakeholders. When learners and educators see AI as a tool that sacrifices privacy or environmental responsibility, they may be less likely to engage with AI-powered eLearning content, ultimately affecting the platform’s value.

What’s Next

Generative AI holds tremendous potential for the future of eLearning, but its ethical, environmental, and economic challenges must be addressed for it to be a viable long-term solution. Educational institutions and eLearning providers have a responsibility to use AI in ways that protect user privacy, minimize environmental impact, and deliver real value to learners. This may involve adopting policies that restrict the use of deepfakes, investing in sustainable energy sources, or focusing on hybrid learning models that integrate AI as a complement rather than a replacement for human educators.

The path forward requires collaboration between AI developers, educational institutions, and policymakers to ensure that AI technologies are used responsibly, transparently, and sustainably.

As the landscape of eLearning continues to evolve, SkillBuilder® is working to strike a balance between harnessing the power of generative AI and preserving the ethical standards, environmental health, and economic viability that sustain quality education. Contact us today to find out how we can turn your boring online learning course into an engaging masterpiece of education.

Download the Free eBook


Delaney Caulfield

Delaney graduated from McMaster University with a Bachelor of Arts degree in English and Cultural Studies. After working in an assortment of industries, she spent nearly a decade sharpening her writing and editing skills in the fast-paced field of journalism. Now she works as an Instructional Designer with BaseCorp where she enjoys flexing her passion for learning and creativity.