
AI in Learning – Good or Evil?



How and why – or why not – use AI in learning.

Gone are the days when artificial intelligence (AI) was the stuff of science fiction. Today, it is very much part of our day-to-day life.

In 1968, Stanley Kubrick directed the epic science fiction film 2001: A Space Odyssey. In the film, humans were pitted against HAL, a computer with a human personality. HAL was the operating system (OS) of the spaceship until conflict arose, people died, and (spoiler alert) the humans pulled the plug.

In 1991, James Cameron directed the hit science fiction film “Terminator 2: Judgment Day.” The film focuses on the conflict between AI and humanity. As the story goes, Cyberdyne Systems creates Skynet, an automated national defense system. In the film’s timeline, Skynet takes control of all US strategic defense systems in 1997.

A Scene from the Film 

The Terminator: The Skynet Funding Bill is passed. The system goes online on August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

Spoiler alert: Humans were virtually eliminated. The machines took over the Earth, until time travel enabled individuals to change history – and humanity was saved.

Between 1984 and 2019, six Terminator films were made. Clearly, the conflict between machines and humankind has been a topic of concern for decades, if not centuries.

Luddite Movement – Opposing Automating Labor

Consider the Luddite movement, which began in England in the early 19th century. It was a secret organization of textile workers strongly opposed to the automation of their trade. Members feared for their livelihoods, as the recently introduced weaving machines could produce textiles at a much lower cost than hand looms and the other standard labor practices of the day. The movement culminated in a region-wide rebellion from 1811 to 1816 that was suppressed at the cost of many protesters’ lives.

Over time, the term ‘Luddite’ has come to mean someone opposed to industrialization, automation, computerization, or technology in general.

Concerns about the Rapid Rise of AI and Current Actions  

Recently, Sam Altman, CEO of OpenAI – the maker of ChatGPT and GPT-4, two generative AI systems very much in the news these days – told the US Congress that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

“As this technology advances,” Altman said, “we understand that people are anxious about how it could change the way we live. We are too!” His comments reflect growing concern about the misuse of generative AI tools to mislead people, spread falsehoods, violate copyright protections, and cause job losses.

Furthermore, Altman proposed the need for an agency to impose safeguards that would block AI models capable of “self-replication and self-exfiltration into the wild.”

Blueprint for an AI Bill of Rights

Altman’s testimony before the US Congress dovetails with the Biden administration’s publication of a Blueprint for an AI Bill of Rights. The Blueprint acts as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world. Rather than focusing on specific enforcement actions, the plan calls for the government and the private sector to work together to design new rules regulating how the new technologies affect business and society in general.

The Blueprint defines vital core principles to be incorporated into AI systems:

  1. Limit the impacts of algorithmic bias;
  2. Give users control of their data; and
  3. Ensure the safe and transparent use of automated systems.

Interestingly, the White House white paper closes with the following call to action: “Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone. This important progress must not come at the price of civil rights or democratic values.”

Worldwide, authorities are racing to draw up rules for artificial intelligence. The European Parliament, for example, has been working for several years on guardrails for AI.

How Do the AI Act Rules Work? 

First proposed in 2021, the AI Act would govern any product or service that uses an artificial intelligence system. In essence, the act would classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications would face tougher requirements, including transparency, accountability, and accuracy of data.

A Balance

To be sure, the opportunity to automate mundane tasks, increase efficiency, and improve productivity is alluring. In practice, we are only beginning to scratch the surface of what may be possible.

Until now, animation has been the easiest, lowest-cost option for creating training videos. Unfortunately, while animation saves time and money, it is also less engaging than content featuring human presenters. Enter the many advantages of AI-driven content creation.

uQualio’s Point of View on AI in Learning and Development

At uQualio Video4Learning – one of the leading video training platforms – we are busy being early and thoughtful adopters of AI, carefully rolling out features to help our customers. Currently, we have not integrated ChatGPT or GPT-4 into our video eLearning platform. However, we do offer integrations with the following AI video platforms, which allow you to create your own uQualio AI videos.


We at uQualio are highly enthusiastic about the application of generative AI in learning and development. Are you excited about the possibilities of using AI in learning, too? Check out this example of using ChatGPT for eLearning scripts in video generation (sample from Elai.io).

– uQualio is an award-winning, easy-to-use, all-in-one NextGen LMS software for any type of online video training.