Recently, the media, social networks, and consulting reports such as those by McKinsey (McKinsey Global Institute 2017) have been using terms such as ‘big data’, ‘artificial intelligence’ (AI), and the ‘Internet of Things’ (IoT).
The importance of these emerging technologies has been widely recognised, even among those with an aversion to numbers, statistics, and data science. In a previous article, I provided an overview of the boom in big data (Konishi 2014). At the time, I strongly hoped that the emerging popularity of statistics and data science would last. As it turns out, statistics and data science found a role as the foundation of machine learning – in which computers learn from data without being explicitly programmed – the data analysis method attracting the most attention in the current AI boom. What started as vague interest is now a tool indispensable for AI and evidence-based policymaking.
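As a toy illustration of ‘learning from data without being explicitly programmed’, the sketch below recovers a simple rule from invented example data by least squares: nothing in the program encodes the rule itself, only the procedure for learning it (the numbers are purely illustrative).

```python
# Toy 'learning from data': recover the hidden rule y = 2x + 1 from
# examples by ordinary least squares, rather than coding the rule directly.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]          # generated by the hidden rule y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept follows from the means
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"learned rule: y = {a:.1f}x + {b:.1f}")  # the program 'discovers' 2 and 1
```

The point of the sketch is the division of labour: the human supplies data and a learning procedure, and the machine infers the rule.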
Meanwhile, the AI boom that started in 2012 shows no signs of fading, as AI continues to evolve and find its way into our everyday lives. Why is this AI boom – the third one – lasting so long?
The timeline in Figure 1, taken from Konishi and Motomura (2017), shows the development of AI, machine learning, big data, and relevant large-scale national projects over the years. In the first half of the second AI boom, machine learning was not linked with AI technologies. The construction of information systems therefore involved an enormous amount of human work – for example, writing programs for computers to perform specific tasks and processes. The latter half of the second AI boom overlapped with the second boom in neuroscience, a period in which numerous theoretical and applied studies on neural networks were carried out. However, as the processing capacity of computers was limited and large-volume data – ‘big data’ – were not yet available, neural networks at this stage failed to achieve a sufficiently high level of precision. This led to the neural network ‘ice age’ of the first half of the 1990s.
As shown in Figure 1, AI technologies and neural networks saw no major boom from the 1990s through the first half of the 2000s, nor did relevant large-scale national projects from the first half of the 2000s through 2015. A key turning point came in 2006, when a group of researchers introduced a new learning algorithm that allowed a neural network to have multiple hidden layers between the input and output layers. This reinvigorated research on deep learning. Then, in 2012, a machine using deep learning technology won an image recognition contest, attracting worldwide attention and setting off a deep learning boom. In parallel, Japan’s third AI boom began in 2013. These booms overlap with a big data boom that began in 2012. Furthermore, in May 2015, the Artificial Intelligence Research Center (AIRC), the largest AI research and development (R&D) hub in Japan, was established within the National Institute of Advanced Industrial Science and Technology (AIST). It is undertaking R&D on next-generation AI through March 2020, and industry-government-academia collaboration in AI research has once again been set in motion.
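The phrase ‘multiple hidden layers between the input and output layers’ can be made concrete with a small sketch: each layer transforms the previous layer’s outputs through weighted sums followed by a non-linear activation. The weights and inputs below are arbitrary illustrative numbers, not a trained model.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums plus a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A network with two hidden layers between a 2-unit input and a 1-unit output.
x  = [0.5, -0.3]                                                        # input layer
h1 = layer(x,  [[0.1, 0.4], [-0.2, 0.3], [0.5, -0.1]], [0.0, 0.1, -0.1])  # 2 -> 3
h2 = layer(h1, [[0.2, -0.3, 0.4], [0.1, 0.2, -0.5]],   [0.05, -0.05])     # 3 -> 2
y  = layer(h2, [[0.7, -0.6]],                          [0.0])             # 2 -> 1
```

The 2006 breakthrough was not this forward pass, which was long understood, but a way to train the many layers of weights effectively; once trained, deeper stacks of such layers can represent far richer patterns than a single layer can.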
Figure 1 AI booms and related developments
Source: Konishi and Motomura (2017)
Today, we have everything needed for AI development: machine-learning-based AI technologies, high-performance computing, big data, relevant large-scale projects, and industrial application needs. Together they offer an unprecedented opportunity, and explain why the current boom is so extensive and why it is having an impact on the whole of society.
AI literacy and service business operations
AI technologies introduced for use in service business operations are mostly purpose-specific, rather than multipurpose as with humanoid robots. They are designed to perform and automate specific tasks previously carried out by humans as well as, or better than, humans could do them. In a previous article, I defined AI literacy as being conscious of whether excessive labour, money, or time is spent carrying out tasks that machine learning and other AI technologies could perform – for example, categorisation, repetition, exploration, organisation, and optimisation (Konishi 2015). AI is most efficient when it is used to perform the kind of tasks in which repeating, increasing the number of combinations, or spending more time will lead to greater accuracy and value.
For instance, suppose there is a project that involves finding and approaching potential customers. When classifying existing customers for this purpose, finding distinctive features or properties of those customers would be the best that a human worker could do. AI, however, would make it possible to identify many features or properties per customer by segmenting customers based on their purchasing behaviour.
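A minimal sketch of such segmentation is a clustering loop such as k-means, which groups customers by the similarity of their purchasing-behaviour features. The customer data and the two features used here (monthly spend, visits per month) are invented for illustration; a real system would draw on many more features per customer.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical features per customer: (monthly spend, visits per month).
customers = [(120, 2), (130, 3), (15, 8), (20, 9), (18, 7), (125, 2)]
centroids, clusters = kmeans(customers, k=2)
```

Run on these invented data, the loop separates the high-spend, low-frequency customers from the low-spend, high-frequency ones – the kind of grouping a human analyst would otherwise construct by hand.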
Meanwhile, an AI system programmed to access information on economic trends, or information about rival companies, would be able to produce a strategy that reflects this information. This may sound elementary, but information must be on the network to be accessible to AI – or, conversely, everything on the network must be accessible as information. Unlike the machines introduced into manufacturing operations, the AI and information and communications technology (ICT) currently available for service operations take the form of devices. Those devices – whether wearables or personal computers – can be introduced on a piece-by-piece basis and make a difference, thanks to the accumulation of massive data as well as to the reduced size, refinement, and prevalence of information technology equipment.
Against this backdrop, an increasing number of companies – both big and small – are willing to collect big data and use AI technologies. No matter how convenient they are, however, it remains our task to determine which technologies to employ for our business processes. It is not easy to gain insights into what it takes, in which context, to make a project successful and sustainable.
What makes AI sustainably useful?
Konishi and Motomura (2017) reviewed the purposes and outcomes of 28 AI projects undertaken by AIST. Many of them were practical, intended for application in real-world service operations – for instance, to improve work processes for medical and nursing care services, make service recommendations to clients in the entertainment industry, or optimise logistics. Through this review, we found answers to a simple question: What types of AI projects develop sustainably?
- The purpose of the project and the uses of technology are clear, and benchmark targets can be translated into data.
- Workers on the frontline are highly motivated and have strong needs for AI.
- It is possible to accumulate and integrate data at a relatively low cost via sensors, networks, and the Internet.
- It is possible to collect data continuously by integrating the process into the regular flow of work.
- Knowledge obtained by analysing data or the results of calculation can be used as additional data.
We can see that data play crucial roles in AI projects. Meanwhile, researchers are developing the techniques and algorithms collectively referred to as ‘AI technologies’. When these are put into practical applications as soon as they become available, they spread quickly. With today’s programming techniques and computing environments, it is easy to improve and adapt existing systems to the needs of workers. In other words, since data are the key determinant of the performance of machine-learning-based AI, the sustainable availability of high-quality data is crucial to competitiveness. The quality of these data depends on the ability of frontline workers to collect them, whether in a retail shop or in a company office. Collecting as much information as possible, on the behaviour of as many people as possible, will enhance the precision of the analysis and the value of the data.
The initial stage of decision-making over whether or not to introduce an AI system is important. In selecting a specific AI system from the many available options, it is important to consider the outcome or value added we intend to generate, and to ask why we want machines to take the task away from human workers. Just look around your own room: you will see quite a few home appliances that were purchased to improve efficiency, or for pure satisfaction, but which you have not used for a long time. The same is true for business and AI investments. We often seek to introduce AI as if doing so were itself the goal. To avoid this pitfall, we must make sure we know the reasons and purposes for introducing AI.
Editors’ note: This column was reproduced with permission from the Research Institute of Economy, Trade and Industry (RIETI).