Self-driving everything. This year will see the rapid growth of products, services, and business processes that use the power of machine learning algorithms to automatically get better as more people use them — just as self-driving systems get better at navigating roads over time by recognizing patterns and learning from any mistakes.
This can and will impact all areas of life, such as:
- Self-optimizing cities. Imagine traffic lights that automatically adapt in real time to improve traffic flow.
- Self-driving homes, with home automation systems that adapt to your rhythms, such as automatically turning the heat on half an hour before you get home.
- “Lights-out” finance organizations with processes that learn and improve from every exception that requires human interaction, getting ever-closer to closing the books automatically.
- Marketing that proactively adapts to prospects, automatically maximizing exposure to content that interests them while minimizing anything perceived as spam.
- Human resources departments that automatically get better at shortlisting candidates based on successful hires.
- Business intelligence tools that automatically propose answers rather than always waiting for you to ask a question.
- And many, many more.
Ethics everywhere. Artificial intelligence is designed to maximize certain behaviors (“get to the destination without crashing the car”), based on the data provided (camera, lidar, traffic rules, etc.). But bad data or badly chosen KPIs can lead to unethical and biased results.
For example, if your new, automated HR processes are taught using prior hiring data that was full of human bias, the resulting algorithm will also be biased. We often observe sub-optimal behavior from human beings because of badly designed incentive plans — the principal difference with AI is that it will do the bad things much faster and more effectively!
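To make the point concrete, here is a minimal sketch with entirely hypothetical data: we “train” a screening model by memorizing group-conditional hire rates, a stand-in for any statistical learner. Because the historical decisions were biased, the model faithfully reproduces the bias.

```python
from collections import defaultdict

# Hypothetical historical hiring decisions: equally qualified candidates,
# but group "B" candidates were hired far less often due to human bias.
history = (
    [("A", "qualified", 1)] * 70 + [("A", "qualified", 0)] * 30 +
    [("B", "qualified", 1)] * 30 + [("B", "qualified", 0)] * 70
)

def train(records):
    """Score each (group, qualification) pair by its historical hire rate."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, qual, hired in records:
        totals[(group, qual)] += 1
        hires[(group, qual)] += hired
    return {k: hires[k] / totals[k] for k in totals}

model = train(history)

# The "learned" model reproduces the historical bias: identical
# qualifications, very different scores by group.
print(model[("A", "qualified")])  # 0.7
print(model[("B", "qualified")])  # 0.3
```

A real system would use a more sophisticated learner, but the mechanism is the same: a model optimized to match past decisions will inherit whatever bias those decisions contained.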
We can’t just outsource our responsibilities to machines. Someone needs to be clearly responsible for any decisions made by algorithms, with the power and resources to make changes when problems arise. To do this effectively there must be transparency, with the ability to monitor and track the resulting effects of any automated tasks.
2018 is likely to see more high-profile cases of “algorithm abuse,” leading to organizations investing in specialized roles around AI adoption.
Algorithm whisperers. Because AI is highly dependent on the data it is fed and the patterns we train it to look for, we need people who have the skills to do this right. Call these people “algorithm whisperers”: they make sure that these technologies do only what they’re supposed to do.
An algorithm whisperer’s job is to have a deep understanding of the context of algorithm use: understanding the data and the algorithms being used, and interpreting the results. At the end of the day, bad data means bad results – it’s critical to have someone with the skills to tell what data has been collected, to spot when it doesn’t make sense and why, and to understand the impact this will have on results.
Anscombe’s Quartet famously illustrates some of the problems — these data sets all have the same statistical characteristics (mean, variance, correlation, etc.) but would result from very different types of processes.
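The point is easy to verify. The sketch below uses Anscombe’s published 1973 data sets and computes their summary statistics with the Python standard library; all four come out with essentially identical means, variances, and correlations, even though plotting them reveals four completely different shapes.

```python
from statistics import mean, variance

# Anscombe's Quartet (Anscombe, 1973). Sets I-III share the same x values.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for name, (xs, ys) in quartet.items():
    # Every set: mean(x)=9.0, mean(y)~7.50, var(x)=11.0, var(y)~4.1, r~0.82
    print(name, mean(xs), round(mean(ys), 2), variance(xs),
          round(variance(ys), 2), round(pearson(xs, ys), 2))
```

Summary statistics alone can’t tell these data sets apart; only someone who actually looks at the data can.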
This kind of data analysis is what data scientists specialize in, but what really distinguishes an algorithm whisperer is creativity. For example, data scientists working on the 9/11 memorial in New York initially determined that it was impossible to achieve the level of adjacency that had been requested to memorialize all of the people impacted. Yet data artist Jer Thorp managed it by using all resources available, such as the physical characteristics of the place, the length of the names, and the choice of font.
Algorithm whisperers can also use their deep expertise to figure out what the results of predictive studies mean. For example, a subway authority wanted to use an algorithm for predictive maintenance, to figure out in advance when a machine might break down. But looking closely at the data, it turned out that every time a single machine broke down, the machine next to it would also break down within a day. It was almost as if they were “catching” the breakage from each other, like a common cold. It didn’t seem logical. Was it a data quality problem, with the same machine counted twice? Were machines installed together, and so tending to break down together after a certain amount of time?
The answer was that this was a result of repair teams with strict service level agreements: if they missed the window for fixing a machine, they were paid less. So, if there was a broken part, they would order a new one, but replace it immediately using a part from the machine next to it. They would go back the next day to fix the “new” breakage. It took an algorithm whisperer with a full understanding of the context of the data to correctly interpret what was actually going on.
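The anomaly that prompted the investigation is the kind of thing a simple sanity check surfaces. This hypothetical sketch (invented failure log, not the subway authority’s data) measures how often a failure is followed within a day by a failure of an adjacent machine:

```python
# Hypothetical failure log: (machine_id, day). Machines with consecutive
# ids are assumed to be physically adjacent.
failures = [
    (101, 3), (102, 4),    # neighbor fails the next day
    (205, 10), (206, 10),  # neighbor fails the same day
    (310, 20),             # isolated failure
]

def neighbor_followup_rate(events, window=1):
    """Fraction of failures with an adjacent-machine failure
    within `window` days (in either direction)."""
    followed = 0
    for machine, day in events:
        if any(abs(m - machine) == 1 and abs(d - day) <= window
               for m, d in events):
            followed += 1
    return followed / len(events)

print(neighbor_followup_rate(failures))  # 0.8 -> suspiciously high
```

A rate that high is a red flag that something other than random wear is going on; the statistics can flag the pattern, but only domain context (here, the repair teams’ incentives) can explain it.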
It’s unlikely that “algorithm whispering” will be a mainstream job title any time soon, but in 2018 today’s data scientists will get even better at the creative aspects of their role, while business people will adapt the way they work to the new opportunities.
Data sovereignty. AI is only as good as the data you have available. But who owns and controls data is far from a neutral question, and the issue of data sovereignty will reach new visibility in 2018, with at least four different levels of discussion:
- There are big rifts between the approaches adopted by various countries, from the buttoned-down European General Data Protection Regulation (GDPR), via the more-or-less-anything-goes approach of the US, to China’s government ownership of any and all data on its citizens. Today’s computer systems weren’t designed with national boundaries in mind, and retro-fitting them for today’s sometimes-conflicting regulatory environments is proving to be complex, confusing, and expensive.
- Data is the foundation for the business models of the future, and the biggest opportunities are often where different organizations collaborate and share information across a “digital supply chain,” competing as an ecosystem. It’s clear that this can create added value for society as a whole, but it’s less clear how to deal with corresponding issues of joint ownership of data, the possibility of intellectual property leaks, and more. We’re seeing these issues play out in the internet of things space, as various organizations experiment with new business models for gathering and sharing data across companies that traditionally compete with each other.
- Within companies, the traditional approach of a single “data warehouse” that tries to bring together all relevant business data needed for decision-making has been discredited. Instead, there’s a rise in systems that “orchestrate” data flows across different sources inside and outside the organization. This is essentially a “federation” approach to data ownership with a compromise between individual silos and the needs of the organization as a whole.
- Consumers are increasingly aware of just how much privacy they have been giving up. There may be a backlash about how “their” personal information is being collected, controlled, and monetized. There have been advances in systems that allow consumers more control over how their information is used. And various models have been proposed for “personal data sovereignty” that flip the equation so that individuals have control over their data, and can choose to provide it to vendors in return for compensation — but it is hard to see how large-scale adoption of such models would be feasible.
We’re likely to see a lot of talk this year in all of these areas — but, unfortunately, little or no resolution of the complex underlying issues.
Technology is about being human. What’s the real killer technology to get the most out of AI? It’s people!
The biggest effect of increased artificial intelligence and automation in 2018 will be the rising importance of human skills. To weather the coming storm, it’s important to know what AI is capable of, but the key isn’t to compete with it — it’s to double down on the human skills that got you where you are today.
For example, as more medical diagnoses are handled by algorithms, the doctors who stand out will be those with the best patient skills. And finance teams will reward the people who make sure the company as a whole actually uses data to further the needs of the business, rather than those who are merely good at collecting and processing it.
For decades, the power of technology has been advancing faster than our ability to adapt to its use. This is our opportunity to optimize our non-technical skills, including change management, leadership, and corporate culture.
And finally, we need human judgment more than ever. AI is a very powerful tool, but just because something is now feasible doesn’t mean it’s a good idea. We all need to support organizations such as the Partnership on AI and their mission to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.