When AI is discussed in the media, one of the most popular topics is the potential loss of millions of jobs, as AI automates the routine tasks that make up much of many roles, leaving many employees redundant. Meanwhile, a major figure in the AI industry has declared that, with AI taking over so much work, learning to code is no longer as necessary as it once was, and that AI will soon allow anyone to be a programmer right away. These developments have major implications for the future of the labor market and of education.
Elin Hauge, a Norway-based AI and business strategist, believes that human learning is more important than ever in the age of AI. While AI will indeed cause some roles, such as data entry specialist, junior developer, and legal assistant, to shrink dramatically or disappear, Hauge says that humans will need to raise the knowledge bar in response. Otherwise, humanity risks losing control over AI, making it easier for the technology to be used for nefarious purposes.
“If we’re going to have algorithms working alongside us, we humans need to understand more about more things,” Hauge says. “We need to know more, which means that we also need to learn more throughout our entire careers, and microlearning is not the answer. Microlearning is just scratching the surface. In the future, to really be able to work creatively, people will need to have deep knowledge in more than one domain. Otherwise, the machines are probably going to be better than them at being creative in that domain. To be masters of technology, we need to know more about more things, which means that we need to change how we understand education and learning.”
According to Hauge, many lawyers writing or speaking on the legal ramifications of AI lack a deep understanding of how AI actually works, leading to an incomplete discussion of important issues. These lawyers may have a comprehensive grasp of the legal questions, but their limited knowledge of the technical side of AI constrains their ability to become effective advisors on it. Hauge therefore believes that, before someone can claim to be an expert in the legality of AI, they need at least two degrees: one in law and another providing deep knowledge of the use of data and how algorithms work.
While AI has only entered the public consciousness in the past several years, it is not a new field. Serious research into AI began in the 1950s, but for many decades it remained an academic discipline, concentrating on the theoretical rather than the practical. With advances in computing technology, however, it has become more of an engineering discipline, with tech companies taking the lead in developing products and services and scaling them.
“We also need to think of AI as a design challenge, creating solutions that work alongside humans, businesses, and societies by solving their problems,” Hauge says. “A typical mistake tech companies make is developing solutions based on their beliefs around a problem. But are those beliefs accurate? Often, if you go and ask the people who actually have the problem, you find that the solution is based on a hypothesis that doesn’t really make sense. What’s needed are solutions with enough nuance and careful design to address problems as they exist in the real world.”
With technologies such as AI now an integral part of life, it is increasingly important that people working on tech development understand the disciplines relevant to the application of the technology they’re building. For example, training for public servants should cover topics such as when to make exceptions, how algorithmic decisions are made, and the risks involved. That kind of literacy could help avoid a repeat of the Dutch childcare benefits scandal, which led to the government’s resignation in 2021. The government had deployed an algorithm to detect childcare benefits fraud, but improper design and execution caused it to penalize people for even the slightest risk factor, pushing many families further into poverty.
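The precise rules of the Dutch system have not been published, but the failure mode is easy to illustrate. The Python sketch below is hypothetical (the names, weights, and threshold are invented for illustration, not taken from the actual system): it shows how a scoring rule in which every signal only adds suspicion, combined with a hard cutoff and a single all-or-nothing penalty, lets one minor signal trigger the harshest possible response.

```python
# Hypothetical sketch of the failure mode, NOT the actual Dutch system:
# signals only ever raise the risk score, a hard threshold decides,
# and the only outcome above it is the maximum penalty.

from dataclasses import dataclass

@dataclass
class Application:
    dual_nationality: bool   # nationality was reportedly used as a risk signal
    form_errors: int         # e.g., a missing signature on one form

def risk_score(app: Application) -> float:
    """Toy scoring rule: signals can only raise the score, never lower it."""
    score = 0.0
    if app.dual_nationality:
        score += 0.4
    score += 0.3 * app.form_errors
    return score

FRAUD_THRESHOLD = 0.25  # low enough that a single minor signal crosses it

def decide(app: Application) -> str:
    # The design flaw: any score above the threshold triggers the
    # maximum penalty, with no graded response and no human review.
    if risk_score(app) > FRAUD_THRESHOLD:
        return "flagged: repay all benefits received"
    return "approved"

# A single clerical error is enough to demand full repayment.
print(decide(Application(dual_nationality=False, form_errors=1)))  # flagged
```

The problem in this sketch is not the threshold's exact value but the decision design around it: there is no graded response, no routing of borderline cases to a human, and no appeal path.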
According to Hauge, decision-makers need to understand how to analyze risk using stochastic modeling and be aware that this sort of modeling includes a probability of failure. “A decision based on stochastic models means that the output comes with a probability of being wrong. Leaders and decision-makers need to know what they are going to do when they are wrong and what that means for the implementation of the technology.”
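In code terms, a stochastic model hands the decision-maker a probability, not a verdict; the policy wrapped around the model has to say what happens in the uncertain middle band and what recourse exists when a confident call turns out wrong. A minimal sketch of that idea follows, with illustrative thresholds chosen for this example rather than drawn from any real deployment:

```python
# Minimal sketch: a decision policy wrapped around a stochastic model.
# The model outputs a probability; the policy, not the model, decides
# what to do with uncertainty. All thresholds here are illustrative.

def decide(p_fraud: float) -> str:
    """Turn a model probability into an action with explicit fallbacks."""
    if p_fraud >= 0.95:
        # Even at 95% confidence, roughly 1 in 20 flagged cases is
        # innocent, so enforcement must come with an appeal path.
        return "flag for enforcement (appeal path required)"
    if p_fraud >= 0.60:
        # The uncertain band goes to a human, not an automatic penalty.
        return "route to human review"
    return "approve"

# Planning for the error budget: flagging 1,000 cases at a 0.95 cutoff
# means expecting up to ~50 wrongful flags, not pretending there are zero.
for p in (0.30, 0.72, 0.97):
    print(p, "->", decide(p))
```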
Hauge says that, with AI permeating almost every discipline, the labor market should recognize the value of polymaths: people with expert-level knowledge across multiple fields. In the past, companies often regarded people who studied multiple fields as impatient or indecisive, as if they did not know what they wanted.
“We need to change that perception. Rather, we should applaud polymaths and appreciate their wide range of expertise,” Hauge says. “Companies should acknowledge that these people can’t do the same task over and over again for the next five years and that they need people who know more about many things. I would argue that the majority of people do not understand basic statistics, which makes it extremely difficult to explain how AI works. If a person doesn’t understand anything about statistics, how are they going to understand that AI uses stochastic models to make decisions? We need to raise the bar on education for everybody, especially in maths and statistics. Both business and political leaders need to understand, at least on a basic level, how maths applies to large amounts of data, so they can have the right discussions and make the right decisions regarding AI, which can impact the lives of billions of people.”