Reshaping Future Professions

Information technology and robotic automation are destroying the labour market for routine and repetitive jobs, and this disruptive force is likely to transform our work altogether. Many of those who resist change will ultimately become permanently unemployed, while those who are bold enough to embrace or propose change will become the new elite.

This is the general tone of The Future of the Professions by Richard Susskind and Daniel Susskind. Although many have contended that professional work is complicated and non-routine, and thus not subject to the threat of being replaced by computer programs and robots, the Susskinds find that the empirical evidence suggests many of the tasks professionals perform will be automated, albeit incrementally. The Susskinds then propose a model that captures the process by which tasks executed by professionals become commoditized.

This is where my view departs from the Susskinds’ argument. The model they introduce features a streamlined, linear progression that fails to represent the outcome of externalization. Most systematized techniques have a spillover effect after externalization, enabling professionals to find new approaches to the existing problem. Professionals will be able to draw insights from the data generated by the automated tasks and make fundamental changes to their business model. We can observe that after accounting firms digitized their tax preparation work, the Big Four shifted their focus to tax planning, a previously inconceivable task. Thus, I conclude that commoditization should not be represented as a straight line; it should take the form of a circle. With this in mind, I have redrawn the model below:

[Figure: the commoditization of professional work, redrawn as a cycle rather than a straight line]

Then, it becomes clear that the digitization and automation of professional tasks do not necessarily lead to the destruction of the professions. However, if we fail to innovate and race ahead of computers, we will be forced to live in a jobless society. Assuming that there are nearly unlimited opportunities to innovate within each profession, I anticipate that the professions will transform from today’s service-based model to a research-based model.

However, this assumption will be met with concerns and objections that it is overoptimistic: we might eventually reach a bottleneck of innovation beyond which further improvement is unfeasible. I would respond that we are very far from that bottleneck. Although we currently see signs suggesting the end of Moore’s law, the microprocessors we use today are very far from computronium, the theoretically optimal configuration of matter for computation. The same observation applies to many other professions. Professionals may be wrong to suggest that their profession has reached, or will soon reach, a point of optimal efficiency where no further improvement can be achieved.

Another concern is that this model only works if the speed of innovation matches the speed of automation; if we are unable to stay ahead of the machine, our jobs will eventually be replaced. However, empirical evidence suggests this should not worry us. After years of rapid progress in artificial intelligence during the 1980s, AI researchers found that the expert systems they had developed were too expensive to maintain and lacked any real capacity for thinking, which led to decreased funding and reduced interest in the field. Instead of abandoning the field altogether, researchers shifted their focus to machine learning and, later, deep learning algorithms. This new approach, accompanied by increasingly capable processing power and readily available big data, has revolutionized AI research. Although it took decades for researchers to emerge from the AI winter, they still found myriad opportunities to apply the existing technology during that period. The same logic applies to other professions: after a task is digitized, the marginal cost of providing the service is driven toward zero, and professionals can address latent demand while they discover new ways to innovate the work.

The last concern is that we might be wrong to think machines are incapable of innovating. Machines do have some capability to find new and interesting relationships between seemingly unrelated data sets. However, such findings are rarely considered genuinely innovative, and on their own they are unlikely to transform the modern professions. Since we are building this model on currently available technologies and their reasonable future developments, we need not consider the possibility of competing with machines at innovation itself. This is the best strategy at the current stage, and it is well suited to our need to transform the professions in the decades to come.

Next, I will present some examples of how professions will transform to become research-based.

Education. We start by looking at K-12 schools. Kindergartens and elementary schools will employ new methods to inspire youngsters. The outcomes will be monitored by machines and compiled into data sets, which can be used to identify the most effective teaching methods. Teachers will increasingly rely on educational games and websites to personalize the learning experience. In high schools, the focus of teachers will shift from traditional lecturing to activity-based teaching. Since online teaching tools (such as Khan Academy and Duolingo) will be better at delivering curriculum-bound material, teachers should emphasize teaching students how to research and apply their knowledge to real-world scenarios. In universities and colleges, the utility of lecturers will diminish, since students will always be able to find better teaching materials online once massive open online courses (MOOCs) are widely available. Students will instead use university facilities to apply their knowledge and conduct experiments. Eventually, the university will once again become a place of research.

Health Care. As we continue to digitize tasks in the health care industry, we will become less reliant on doctors. Health gadgets and online communities will help us monitor our health, diagnose symptoms, and adjust to better lifestyles. This will, in turn, free up doctors’ time, allowing them to specialize and focus on finding new treatments for diseases. The systematization and externalization of surgical tasks will also enable para-professionals to conduct minor surgeries. Replacing doctors in the operating room will let them concentrate further on research and development.

There are drawbacks to the research-based model. Since the professions become research oriented, professionals will be forced to join bigger organizations and corporations in order to receive research grants and stay up to date in their own practice. Those who resist change will be demoted to the status of para-professionals and forced to compete with a greater number of rivals, including machines and lower-skilled workers. The benefit of adopting the research-based model is that it will enable more people with less formal education to obtain para-professional jobs through vocational training, which would help alleviate the threat of robotic automation. We would then see the labour market shift from industrial jobs to service-sector jobs.

The Future of the Professions provides a fascinating outlook on how increasingly capable technology will replace human professionals. This marks the beginning of a revolutionary period in which we transform the professions from service-based to research-based. The book is superb at explaining the process and the rationale behind this unconventional transformation. I would recommend it to anyone who is interested in the future of society and the future of the professions.

Preparing for Full Automation

I read Martin Ford’s Rise of the Robots because it presents the capabilities of robots and machine learning algorithms, along with the social and economic impacts of substituting those technologies for human labour. The book also introduces a policy-oriented solution to the looming threat of large-scale unemployment if such a transition is realized.

As an intermediate step towards Artificial General Intelligence (AGI), advances in robotic technology and machine learning software, which can also be categorized as “narrow AI”, pose the greater threat to our near-term economic development: robots are likely to further automate factory production and eliminate the need for routine and repetitive jobs in many industries, including electronics manufacturing and textiles. Full automation will reduce labour costs and improve efficiency in these industries.

With no need for manual labour in factories, many workers with little formal education will find themselves unable to locate a new job, since the service industry is also going to be automated soon afterwards. Gourmet robots and more intelligent self-service machines have great potential to revolutionize the fast food and retail industries. Ford claims that once the cost of purchasing such robots falls below the cost of labour, companies will be motivated to transition to robotic technology. However, the rate at which companies adopt robotic technology in the service industry is likely to be gradual: to implement such technology, companies also need to change their basic business models, undertake major renovations of brick-and-mortar stores, and reorganize kitchen space and internal logistics in fast food.

Furthermore, some white-collar jobs will be replaced by machine learning algorithms. These algorithms will thrive at routine and repetitive white-collar work and are likely to yield more value than human workers. As we have seen in many examples of machine learning software, the self-driving car is arguably safer than the human-operated car, and the Jeopardy!-winning Watson is better at organizing and retrieving information than any human contestant.

Ford concludes that this unprecedented automation movement will have a grave impact on our economy. Many people will become permanently unemployed, as they are unlikely to find alternative work even with vocational training. Those without jobs will cut their consumption, and those who still have jobs will do the same as they foresee a jobless future. Consumption will fall rapidly, and the downward spiral of falling prices will set off a deflationary cycle. Eventually, society will end up in a techno-feudal scenario in which the top 5% live prosperously while the bottom 95% have no income, no ownership of capital, and no means of sustaining their lives.

I would argue that such a scenario is unlikely to play out in the near-term future, since we can reasonably assume that narrow AI will not have a comparative advantage over humans in jobs that require creativity. As we saw in the case of AlphaGo, even after the computer simulates millions of games of self-play, it can still be surprised by an unexpected move from a human player. The same observation applies to real-life situations, where brute-force calculation of possible outcomes quickly runs into a combinatorial explosion.
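
To make “combinatorial explosion” concrete, here is a rough back-of-the-envelope sketch in Python. The branching factor of roughly 250 legal moves and a game length of roughly 150 moves are commonly cited approximations for Go, not figures taken from the book:

```python
# Back-of-the-envelope estimate of the Go game-tree size, using commonly
# cited approximations (not figures from Rise of the Robots).
import math

branching_factor = 250   # approximate number of legal moves per position in Go
game_length = 150        # approximate number of moves in a full game

# Number of distinct move sequences a brute-force search would need to examine,
# expressed as a power of ten to avoid printing a ~360-digit number.
digits = game_length * math.log10(branching_factor)
print(f"brute-force move sequences: ~10^{digits:.0f}")   # ~10^360

# For comparison, a commonly cited estimate of atoms in the observable universe.
print("atoms in the observable universe: ~10^80")
```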

Let’s consider the case of online video sharing, an entirely new industry that currently enjoys rapid growth as advertisers find that younger audiences are abandoning cable television and spending more time online. Although YouTube’s content creators earn the most in this industry, newer video-sharing platforms, including Vine and Snapchat, have managed to expand the market and create new opportunities for content creators. Additionally, the top earners on each platform rarely overlap, although many of them maintain accounts across multiple platforms.

Ford might object to this idea by suggesting that the market has a long-tail distribution, in which 80% of total income is earned by 20% of creators, and that opportunities in this market are therefore limited. I would argue that the potential of the internet is not yet fully realized in many countries, and that more categories and formats will be created as the market expands. Other online platforms are likely to follow a similar trend. Newly created successful platforms will generate sizable labour demand that acts as a buffer for the unemployed and the recently graduated.
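
To see what such an “80/20” long tail looks like in practice, here is a minimal simulation, assuming creator earnings roughly follow a Pareto distribution; the shape parameter and population size are illustrative assumptions, not data from the book or from any particular platform:

```python
# Minimal sketch of a long-tail (Pareto-like) income distribution among creators.
# Shape parameter and population size are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
alpha = 1.16                                  # shape chosen so the split is roughly 80/20
earnings = rng.pareto(alpha, 100_000) + 1.0   # simulated earnings for 100,000 creators

earnings.sort()
top_20_percent = earnings[int(0.8 * earnings.size):]
share = top_20_percent.sum() / earnings.sum()
print(f"income share of the top 20% of creators: {share:.0%}")  # typically near 80%
```

The point of contention is not the shape of this distribution but whether the overall market keeps expanding fast enough for the tail to absorb new entrants.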

There is another labour market that will see moderate growth alongside the automation process. As we customize robotic technologies for each candidate industry, the demand for software and hardware engineers will rise accordingly. These jobs are essential for building industrial solutions for companies that want to automate their production processes and digitize their workforce. Developing vocational training and K-12 computer science education is therefore critical for the economy.

Nevertheless, we also need to build a safety net for the worst-case scenario. Ford claims that the guaranteed basic income approach proposed by Friedrich Hayek is sufficient to address the polarizing problem of income inequality. Guaranteed income does have drawbacks: it may encourage free riders to cheat the system and permanently exit the job market. Furthermore, since cities have vastly different standards of living, it is very hard for a federal or state government to set a guaranteed income level that satisfies everyone’s needs. The guaranteed income approach is also likely to face strong political opposition from both parties, since the policy would fundamentally transform the role government plays in the economy.

There are some valuable takeaways from this book that inspire me to focus my research on the future of the professions and on viable economic and political strategies.

First, the job market in the near-term future is likely to change rapidly, and being excluded from the workforce will be a life-changing, traumatic experience for those who hold routine and repetitive jobs. We need to find new ways of employing those who lose their jobs but are unwilling to remain unemployed.

Second, very few economists and political scientists are engaged in the conversation about how we should react to the potential threat of spreading structural unemployment. As we fully automate industries and make human labour obsolete, we need viable economic and political strategies to make the transition to a jobless society as smooth as possible.

Rise of the Robots is an interesting read that delves deep into the world of robots and extensively explores previously unthinkable implications of robotic technologies. The book also offers a viable policy that could help address the polarized income inequality problem. It is a good read for everyone, since the problem it describes confronts the entire human race, and growing attention to it will help us solve this urgent problem.

Ford, Martin R. “Technologies and Industries of the Future.” Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books, 2015. Print.

A Risk We Cannot Ignore

Recent progress in general-purpose machine learning algorithms, from playing the ancient game of Go to powering self-driving cars, shines a light on artificial intelligence (AI) and the benefits of achieving a workerless society. With such potential, more resources will be poured into the field, hastening the development of artificial general intelligence. Although an omnipotent AI agent might still be decades away, competing countries and sponsors might rush the process in order to obtain the first-mover advantage of controlling a superintelligent AI.

Such a contested development might pose a grave danger, as Nick Bostrom points out in Superintelligence: Paths, Dangers, Strategies. A rushed solution might turn malignant after it gains a decisive strategic advantage. Furthermore, unlike the predictions of much science fiction, the greatest threat of artificial intelligence might not be that it turns out to be malicious. Instead, an agent that follows human orders but doesn’t understand human moral values may take shortcuts in ways unimaginable to us. To prevent superintelligence from causing us pain and suffering, we need to develop superintelligence that adheres to our values.

The question of how to prevent an existential crisis caused by superintelligence can thus be divided into three parts:

  1. What type of superintelligence should we focus on developing?
  2. How do we control a superintelligent agent while it is under development?
  3. How does a superintelligent agent acquire values that are aligned with our moral values?

Bostrom addresses these three questions by analyzing many candidate methods; here I will summarize possible solutions to each.

Type of superintelligence: Bostrom proposes four possible paths to superintelligence, of which the most direct is whole brain emulation (WBE). With whole brain emulation, we need not worry as much about motivation and value acquisition, since emulations will inherit our innate values and develop moral values in an understandable manner. Although some emulated minds might be malignant, we will be able to identify problems when we detect abnormalities in their thinking patterns. However, WBE is not the most efficient form of superintelligence and will thus bring about a second transition, from emulation to artificial intelligence. With this in mind, we need to focus on solving the AI control and value acquisition problems even if we concentrate our efforts primarily on developing WBE. Other paths towards superintelligence include biological enhancement and brain-computer interfaces. Biological enhancement would follow the pattern of evolution in nature, which changes only by small fractions over long stretches of time, and brain-computer interfaces are likely to be too invasive for our liking.

Control problem: without control during the developmental phase, we risk creating an AI agent that lacks the requisite moral and ethical values and thereby triggers an existential crisis for the entire human race. Although the agent might have an altruistic final goal, it might achieve that goal by taking shortcuts. The creation of computronium, in which the agent turns all resources in the galaxy into computing devices, might be inevitable if it does not grasp our true intention. The control problem can be addressed by capping the agent’s capability or by selecting its motivation. These methods can be combined to create an oracle, genie, or sovereign that is unable to gain a decisive strategic advantage over all human beings.

Value acquisition: we would not be satisfied with a merely controlled superintelligence, since it would not realize its full potential. Thus, we need to ensure that the superintelligence follows our values. We, as human beings, have a complex understanding of the world around us; our understanding of historical events, current events, and our environment shapes our moral values. Since values differ between groups of people, and within a single group across time periods, the task of translating our values into computer-understandable code may be extraordinarily hard. To address this problem, Bostrom introduces Coherent Extrapolated Volition (CEV), developed by Eliezer Yudkowsky, which I will discuss in greater detail in future articles. Coherent extrapolated volition, in brief, is a method for an agent to discover our intentions and values: the computer should do what we meant, not merely what we asked.

There are some messages in the book that are valuable for those who are eager to create superintelligent agents.

First, superintelligence and artificial intelligence do not necessarily lead to disaster if we put enough emphasis on the control and value acquisition problems. Even if a superintelligence gains a decisive strategic advantage and forms a singleton, the correct values would prevent it from causing pain and suffering to human beings.

Second, some promising methods might not yield the intended outcome. If we don’t think hard enough about all possible outcomes, the superintelligent agent will expose and exploit the loopholes in our logic.

Superintelligence: Paths, Dangers, Strategies is a groundbreaking book that explores solutions to the control and value acquisition problems in the development of superintelligence. It provides insights into many critical issues we might face in the future. The book is a recommended read for anyone who is curious about the paths we might take and for those who are interested in developing a superintelligence.