The game is on. Artificial intelligence (AI) is pretty much the hottest topic right now. Many would argue that we’ve reached the point of maximum hype in AI discussions.
Here we draw on key messages from our recent book Beyond Genuine Stupidity – Ensuring AI Serves Humanity to highlight five of the most critical issues and resulting choices facing governments, businesses, society, and individuals as we prepare for the full impacts of AI on the economy.
As futurists, it comes as no surprise that we are arguing for analysis, policy experimentation, and, in some cases, pre-emptive action to prepare for what could be the most disruptive changes that most people of working age will have experienced.
1. Don’t Believe What You Read - Technological Unemployment and The New Jobs Landscape
The AI technology vendors are struggling to hold a consistent line. On the one hand, they are selling the return-on-investment case for AI, predicated on headcount reductions. On the other, as this has become a contentious issue, they now argue the “augmented intelligence” angle: AI will free people from routine tasks to do more creative work and focus on problem-solving. Whilst this is attractive, in reality how many employers will follow that path? The evidence to date suggests most are going for cost-base reduction.
Some evangelists argue that AI will create a host of new jobs and that the new industries that emerge will generate new employment. Whilst both are possibilities, the majority of those jobs will require at least degree-level education. These new businesses will also be highly automated from the start, and there could be a major time lag between bank staff and truckers being made redundant and the new jobs appearing.
The challenge here for governments is to model a range of scenarios, including extreme ones. From this, they can start assessing the tax implications of different levels of unemployment, explore policy options they might pursue under different scenarios, and identify necessary immediate actions they should be taking because they are valid under all scenarios.
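The kind of scenario modelling described here can be sketched in a few lines. The salary, tax rate, and benefit figures below are purely illustrative assumptions, not numbers from the article; a real model would draw on national fiscal data:

```python
# Hypothetical sketch: first-order fiscal impact of different
# unemployment scenarios. All rates and figures are illustrative.

def fiscal_impact(unemployed: int,
                  avg_salary: float = 30_000,
                  income_tax_rate: float = 0.25,
                  annual_benefit: float = 12_000) -> float:
    """Annual cost to government: lost income tax plus benefit payments."""
    lost_tax = unemployed * avg_salary * income_tax_rate
    benefits = unemployed * annual_benefit
    return lost_tax + benefits

# Compare a moderate and an extreme scenario (1m vs 5m newly unemployed):
for n in (1_000_000, 5_000_000):
    print(f"{n:,} unemployed -> ${fiscal_impact(n) / 1e9:.1f}bn per year")
```

Even this crude sketch shows why the extreme scenarios matter: the costs scale linearly with job losses, so a fivefold rise in technological unemployment is a fivefold rise in the fiscal burden.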
2. Reskilling the Workforce and Transforming Education
For adults, in most countries, the provisions for retraining and lifelong learning are at best woeful. However, the facilities already exist in schools and colleges, and there is no shortage of people who can deliver the training. Exponential change requires an exponential increase in provision for retraining – the cost of inaction will manifest itself in higher unemployment costs, rising mental health issues, and skill shortages across the economy.
At the school level, we need to take a hard look at the assumptions that govern current curriculums. In practice, it is impossible to know what jobs a nineteen-year-old entering university today might be doing in three to four years’ time, let alone what career path an eleven-, seven-, or two-year-old might pursue. Indeed, for those aged under eleven, the bulk of the jobs they’ll do probably don’t exist yet. Hence, we need to be equipping them with the skills that will allow them to take up these new opportunities when they arise. This means a far greater emphasis on social and collaborative skills, conflict resolution, problem-solving, scenario thinking, and accelerated learning.
3. Universal / Guaranteed Basic Incomes
There will inevitably be employment casualties from the process of automation. The question also arises as to how people will be able to afford the goods and services now being produced by the machines if they no longer have jobs. Many have argued for provision of a guaranteed basic income (GBI) across society – at a rate typically higher than unemployment benefit. Countries from Canada and Finland to India and Namibia have been experimenting with different models for how this might work.
Simply exhorting people to find work won’t solve the problem or feed their dependents. This is where governments need to work together to try different experiments and see the impacts on funding costs, economic activity, the shadow economy, social wellbeing, crime, domestic violence, and mental health. There will be strongly polarised political views on such an option. However, doing the experiments is not committing to the policy, but will provide evidence on which to base policy decisions when the need for action arises.
4. New Employers’ Responsibilities - Robot Taxes, Total Employment Responsibility, and Deferred Redundancy
A lot of the potential issues around the introduction of AI and other disruptive technologies will arise from the choices made by employers. Will they retain the staff freed up by technology or release them in order to make higher profits? Whilst there is no wish to hold back the process or pace of innovation, questions are being raised about how to address the social costs. If unemployment costs rise, or GBI schemes are introduced – who will pay for them? One option is the introduction of so-called robot taxes, where firms effectively pay a higher rate of taxes on the profits they derive from increased automation. This has already met with opposition from business circles but has some support from technology pioneers in Silicon Valley.
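One way to picture how a robot tax might work is as a blended rate: profits attributed to automation are taxed more heavily than ordinary profits. The rates, the attribution method, and all figures below are hypothetical assumptions for illustration only:

```python
# Illustrative sketch of a "robot tax": the share of profit a firm
# derives from automation is taxed at a higher rate than the rest.
# Both rates and the attribution share are hypothetical.

def total_tax(profit: float,
              automation_share: float,
              base_rate: float = 0.20,
              robot_rate: float = 0.30) -> float:
    """Blend an ordinary corporate rate with a higher rate applied to
    the fraction of profit attributed to automation."""
    automated_profit = profit * automation_share
    ordinary_profit = profit - automated_profit
    return ordinary_profit * base_rate + automated_profit * robot_rate

# A firm with $10m profit, 40% of it attributed to automation:
# $6m * 20% + $4m * 30% = $1.2m + $1.2m = $2.4m
print(total_tax(10_000_000, 0.4))
```

The hard policy question such a sketch exposes is the attribution share itself: measuring what fraction of a firm's profit "comes from automation" is far harder than applying the rate once that share is agreed.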
Opponents of GBI schemes and robot taxes have yet to offer substantive alternative policy options. Two options have surfaced. The first is the notion of a total employment responsibility: based on turnover in the previous year, your firm would be responsible for a total level of employment in the economy. So, if your turnover was one-millionth of national GDP, you’d be responsible for ensuring the employment of one-millionth of the workforce – through direct employment, subcontractors, suppliers who work solely for you, or new businesses you support.
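The proportional arithmetic behind a total employment responsibility is simple to sketch. The GDP and workforce figures below are illustrative assumptions chosen to match the one-millionth example:

```python
# Hypothetical sketch of "total employment responsibility": a firm's
# employment obligation is proportional to its share of national GDP.
# All figures are illustrative, not real national statistics.

def employment_responsibility(firm_turnover: float,
                              national_gdp: float,
                              national_workforce: int) -> float:
    """Jobs a firm would be responsible for, whether through direct
    employment, subcontractors, dedicated suppliers, or new ventures."""
    gdp_share = firm_turnover / national_gdp
    return gdp_share * national_workforce

# A firm whose turnover is one-millionth of a $2tn GDP, in an
# economy with a 30-million-strong workforce:
jobs = employment_responsibility(2_000_000, 2_000_000_000_000, 30_000_000)
print(round(jobs))  # one-millionth of the workforce: 30 jobs
```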
The second, equally unpopular, option is deferred redundancy: workers stay on your payroll at full pay until they find another job. It is easy to oppose all of these ideas, but large employers and governments need to be thinking now about viable policy alternatives for a world in which we might need a smaller workforce.
5. Ethics, Governance, and Ownership of the Technology
Arguments are surfacing that AI is too important to leave its evolution to the private sector. A proliferation of voluntary ethical charters is starting to emerge to govern the development and application of AI and robotics. The challenge is that AI is recognised as a critical future technology by leading industrial nations such as China, Korea, Taiwan, and the USA. It has become an economic battleground, and ethics may not be a prime consideration in the race for AI superpower status. In response, there is a growing argument for state regulation and oversight of AI. Such a governance role would probably require a regulatory AI of its own, as, in the relatively near future, the capabilities and reasoning of most AIs are likely to outstrip humans’ ability to monitor them.
Given these challenges, there is also an argument being made for governments to nationalise the ownership of AI intellectual property and then licence it back to firms. In this way, governments could regulate the deployment more effectively, and raise revenues to cover the expected social costs. Such moves are likely to prove hugely unpopular in some quarters, while others will argue they are the inevitable consequence of technologies that could ultimately be beyond human oversight and control.
The reality is that the pace at which AI is advancing has far outstripped the ability of governments, businesses, and individuals to identify the potential impacts, assess the possible implications, and try out potential solutions. A genuinely stupid strategy here would be to cover our eyes and ears and hope the problem goes away, never arises, or simply gets resolved by omnipotent market forces. A more enlightened option is to undertake serious assessment of the most radical possible outcomes, develop policy options for the worst-case scenarios, and implement actions now which will be beneficial however the game plays out.
About the Authors
Rohit Talwar, Steve Wells, Alexandra Whittington, April Koury, and Helena Calle are futurists with Fast Future - a professional foresight firm specialising in delivering keynote speeches, executive education, research, and consulting on the emerging future and the impacts of change for global clients. Fast Future publishes books from leading future thinkers around the world, exploring how developments such as AI, robotics, exponential technologies, and disruptive thinking could impact individuals, societies, businesses, and governments and create the trillion-dollar sectors of the future. Fast Future has a particular focus on ensuring these advances are harnessed to unleash individual potential and enable a very human future. See: www.fastfuture.com
Beyond Genuine Stupidity – Ensuring AI Serves Humanity
The first book in the Fast Future series explores critical emerging issues arising from the rapid pace of development in artificial intelligence (AI). The authors argue for a forward-looking and conscious approach to the development and deployment of AI to ensure that it genuinely serves humanity's best interests. Through a series of articles, they present a compelling case to get beyond the genuine stupidity of narrow, short-term, and alarmist thinking and look at AI from a long-term, holistic perspective. The reality is that AI will impact current sectors and jobs—and hopefully enable new ones.
A smart approach requires us to think about and experiment with strategies for adopting and absorbing the impacts of AI - encompassing education systems, reskilling the workforce, unemployment and guaranteed basic incomes, robot taxes, job creation, encouraging new ventures, research and development to enable tomorrow’s industries, and dealing with the mental health impacts. The book explores the potential impacts on sectors ranging from healthcare and automotive to legal and education. The implications for business itself are also examined, from leadership and HR to sales and business ethics. See: http://fastfuturepublishing.com/main/shop/bgs/