How, Why, and Why Not to Use AI for Learning
Once upon a time, Artificial Intelligence (AI) was the stuff of science fiction, but now it is here.
In 1968, Stanley Kubrick directed the epic science fiction film 2001: A Space Odyssey, in which humans were pitted against HAL, a computer with a human personality. HAL was the operating system (OS) of the spaceship until conflict arose, people died, and (spoiler alert) the humans pulled the plug.
In 1991, James Cameron directed the hit science fiction film Terminator 2: Judgment Day. The film focuses on the conflict between AI and humanity. As the story goes, Cyberdyne Systems creates an automated national defense OS called Skynet. In 1997, Skynet takes control of all US strategic defense systems.
A Scene from the Film
The Terminator: The Skynet Funding Bill was passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
Spoiler alert: Humans were virtually eliminated, and the machines took over the Earth, until time travel enabled a few individuals to change history and save humanity.
Six Terminator films were made between 1984 and 2019. Clearly, conflict between machines and humankind has been a subject of concern for decades, if not centuries.
Luddite Movement – Opposing the Automation of Labor
Consider the Luddite movement that began in England in the 19th century. It was a secret organization of textile workers strongly opposed to the automation of labor. Members feared the loss of their livelihoods as weaving machines, which had only recently been introduced, could produce textiles at a much lower cost than hand looms and other standard labor practices of the day. The movement culminated in a region-wide rebellion from 1811 to 1816 that was suppressed at the cost of many protesters’ lives.
Over time, the term ‘luddite’ has come to mean one opposed to industrialization, automation, computerization, or technology in general.
Concerns About the Rapid Rise of AI, and Current Actions
Recently, Sam Altman, CEO of OpenAI, the maker of ChatGPT and GPT-4, two generative AI systems very much in the news these days, told the US Congress that government intervention would be critical to mitigating the risks of increasingly powerful AI systems.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Altman said. His comments reflect a growing concern that generative AI tools could be used to mislead people, spread falsehoods, violate copyright protections, and cause job losses.
Altman proposed that a new regulatory agency be created to impose safeguards that would block AI models capable of “self-replication and self-exfiltration into the wild.”
Blueprint for an AI Bill of Rights
Altman’s testimony before the US Congress dovetails with the Biden administration’s publication of a Blueprint for an AI Bill of Rights, intended as a call to action for the US government to safeguard digital and civil rights in an AI-fueled world. Rather than focusing on specific enforcement actions, the plan calls for the government and the private sector to work together to design new rules regulating how the new technologies impact business and society in general.
The Blueprint defines core principles to be incorporated into AI systems: limit the impacts of algorithmic bias, give users control of their data, and ensure that automated systems are used safely and transparently.
The White House whitepaper closes with the following call to action: “Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values.”
Worldwide, authorities are racing to draw up rules for artificial intelligence. The European Parliament has been working for several years to establish guardrails for AI.
How Do the AI Act Rules Work?
The AI Act, first proposed in 2021, would govern any product or service that uses an artificial intelligence system. The act would classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications would face tougher requirements, including transparency, accountability, and accuracy of data.
A Balance
The opportunity to automate mundane tasks, increase efficiency, and grow productivity is certainly alluring. In practice, we are only beginning to scratch the surface of what may be possible.
Until now, animation has been the easiest, lowest-cost solution for creating training videos. Unfortunately, while animation saves time and money, it is also less engaging than content featuring human presenters. Enter the many advantages of AI-driven content creation.
uQualio’s point of view on AI in Learning
At uQualio – Video4Learning, the video eLearning and communication platform, we are busy being early, thoughtful adopters of AI, carefully rolling out features to help our customers.
Currently, we have not integrated ChatGPT or GPT-4 as an app on our platform, but we do offer integrations for the following awesome AI video platforms.
Click to see how easy it is!
If you are excited about the possibilities of using AI – like we are at uQualio – check out this example of using ChatGPT for eLearning scripts and video generation (sample from Elai.io):
https://uqualiodemoaccount.uqualio.com/target/b163792801a7420f8cda8333b4dc92fa
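If you would like to experiment with the first step of that workflow yourself, here is a minimal sketch of prompting a GPT model for an eLearning video script using the OpenAI Python library. The model name, prompt wording, and training topic are illustrative assumptions, not a uQualio feature or the exact setup behind the demo above.

```python
# Minimal sketch: generating an eLearning video script with the OpenAI API.
# The model name, prompt, and topic below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

topic = "onboarding new employees to workplace safety rules"  # hypothetical topic

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model should work here
    messages=[
        {
            "role": "system",
            "content": "You write concise, friendly eLearning video scripts.",
        },
        {
            "role": "user",
            "content": (
                f"Write a 60-second training video script about {topic}, "
                "with a short intro, three key points, and a recap."
            ),
        },
    ],
)

# The script text, ready to paste into an AI video tool.
print(response.choices[0].message.content)
```

The generated script could then be fed into an AI video tool such as Elai.io to produce the finished training video.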
Want to try making video eLearning yourself? Sign up for a free uQualio trial.

Achieve Effective & Affordable Video Training
– uQualio is the award-winning, easy-to-use, all-in-one NextGen LMS software for any type of online video learning