How, Why, and Why Not to Use AI for Learning
Gone are the days when Artificial Intelligence (AI) was the stuff of science fiction. Today, it is very much part of our day-to-day life.
In 1968, Stanley Kubrick directed the epic science fiction film “2001: A Space Odyssey,” in which humans were pitted against HAL, a computer with a human personality. HAL controlled the spaceship’s systems until conflict arose, people died, and (spoiler alert) the humans pulled the plug.
In 1991, James Cameron directed the hit science fiction film “Terminator 2: Judgment Day.” The film focuses on the conflict between AI and humanity. As the story goes, Cyberdyne Systems creates an automated national defense system called Skynet, which, in 1997, takes control of all US strategic defense systems.
A Scene from the Film
The Terminator: The Skynet Funding Bill was passed. The system went online on August 4, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29. In a panic, they try to pull the plug.
Sarah Connor: Skynet fights back.
Spoiler alert: Humans were virtually eliminated, and the machines took over the Earth, until time travel enabled individuals to change history, and humanity was saved.
Six Terminator films were made between 1984 and 2019. Clearly, the conflict between machines and humankind has been a source of concern for decades, if not centuries.
Luddite Movement – Opposing Automating Labor
Consider the Luddite movement that began in England in the early 19th century. It was a secret organization of textile workers strongly opposed to the automation of their labor. Members feared the loss of their livelihoods, as the recently introduced weaving machines could produce textiles at a much lower cost than hand looms and other standard practices of the day. The movement culminated in a region-wide rebellion from 1811 to 1816 that was suppressed at the cost of many protesters’ lives.
Over time, the term ‘Luddite’ has come to mean one opposed to industrialization, automation, computerization, or technology in general.
Concerns about the Rapid Rise of AI and Current Actions
Recently, Sam Altman, CEO of OpenAI, the maker of ChatGPT and GPT-4, two generative AI systems very much in the news these days, told the US Congress that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too!” His comments reflect a growing concern that generative AI tools could be used to mislead people, spread falsehoods, violate copyright protections, and cause job losses.
Altman proposed that a new regulatory agency is needed to impose safeguards that would block AI models capable of “self-replication and self-exfiltration into the wild.”
Blueprint for an AI Bill of Rights
Altman’s testimony before the US Congress dovetails with the Biden administration’s publication of a Blueprint for an AI Bill of Rights, intended as a call to action for the US government to safeguard digital and civil rights in an AI-fueled world. Rather than focusing on specific enforcement actions, the plan calls for the government and the private sector to work together to design new rules regulating how the new technologies impact business and society in general.
The Blueprint defines vital core principles to be incorporated into AI systems: limit the impacts of algorithmic bias, give users control of their data, and ensure that automated systems are used safely and transparently.
The White House white paper closes with the following call to action: “Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values.”
Worldwide, authorities are racing to draw up rules for artificial intelligence. The European Parliament has been working for several years to draw up guardrails for AI.
How do the AI Act Rules Work?
The AI Act, first proposed in 2021, would govern any product or service that uses an artificial intelligence system. The act would classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications would face tougher requirements, including transparency, accountability, and accuracy of data.
The opportunity to automate mundane tasks, increase efficiency, and improve productivity is certainly alluring. In practice, we are only beginning to scratch the surface of what may be possible.
Until now, animation has been the easiest, lowest-cost solution for creating training videos. Unfortunately, while this saves time and money, it is also less engaging than content featuring human presenters. Enter the many advantages of AI-driven content creation.
uQualio’s Point of View on AI in Learning and Development
At uQualio video4learning, one of the leading digital training platforms, we are busy being early and thoughtful adopters of AI, carefully rolling out features to help our customers. Currently, we have not integrated ChatGPT or GPT-4 into our platform, but we do offer integrations with AI video platforms that allow you to create your own uQualio AI videos.
If you are excited about the possibilities of using AI (as we are at uQualio about the application of generative AI in learning and development), check out the example of using ChatGPT for eLearning scripts in video generation (sample from Elai.io).
Additional Useful Posts
- uQualio makes it easy to import AI-generated videos
- Turn any text into AI-generated training videos
Want to try making video eLearning content? Sign up for a free uQualio trial.
Achieve Effective & Affordable Video Training
uQualio is an award-winning, easy-to-use, all-in-one NextGen LMS software for any type of online video training.