Do you worry that artificial intelligence will take over the world? Many do. From Elon Musk fretting over DeepMind's AI beating the world's best human players at the ancient game of Go in 2017, to members of Congress, European policymakers (see A European approach to artificial intelligence), and academics, a feeling is taking hold that this is the decade to take AI seriously. Though not for the reasons you might think, and not because of any present threat.
This is where algorithms come in. What is an algorithm, you may ask? The simplest way to think of one is as a set of instructions that machines can understand and learn from. We can already instruct a machine to calculate, process data, and reason in a structured, automated way. The catch is that once those instructions are given, the machine will follow them. For now, that's the point: unlike human beings, machines follow instructions. They don't learn that well yet. But once they do, they could cause problems.
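To make the idea concrete, here is a minimal sketch of my own (not from the article): an algorithm is just a fixed recipe that the machine executes literally, step by step, with no judgment of its own.

```python
def average(numbers):
    """A tiny algorithm: fixed instructions the machine follows literally."""
    total = 0
    for n in numbers:            # step 1: add up every value
        total += n
    return total / len(numbers)  # step 2: divide by the count

print(average([2, 4, 6]))  # the machine follows the recipe exactly: 4.0
```

The machine will happily run these steps on any input, sensible or not; the judgment about when to apply them stays with the human.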
I don’t want to make a sensationalist argument about the idea of computers one day surpassing human intelligence, better known as the singularity argument (see NYU philosopher David Chalmers’ musings on the topic). Rather, manufacturing might be the best example of why AI algorithms are beginning to matter more to the general public. The fear is that machines will vastly accelerate their prowess at our expense, not necessarily through advanced reasoning, but through relentless optimization within the boundaries of whatever an algorithm says.
Manufacturing is about making things. But when machines make things, we need to pay attention. Even if what the machines make is simple. I’ll explain why.
From rainboots to cell phones and back
Say a factory has been making rain boots. I love rain boots because I grew up in an area of Norway where it rains a lot; I love to be outside, subject to the many elements of nature. Nokia made the rain boots I grew up with. Yes, the Nokia we know today as the electronics company used to make rubber boots. Why is this key? Because once you make something, you are destined to want to make improvements. That makes sense. You could say it’s human nature.
What happened to Nokia is well known and goes a bit like this: initially a paper mill, by the time I was a kid the company was particularly successful at manufacturing rubber boots (and tires). However, it saw further opportunities. Hence, at some point in the 1980s, Nokia shifted to electronics, rapidly retooled its factories, and built a large network of local suppliers when it began making cell phones. This ushered in the mobile communications revolution, which started in Scandinavia and spread to the rest of the world. Understandably, many have written the story of Nokia in the 1990s (see Secrets behind the Finnish miracle: the rise of Nokia).
My example is straightforward. Perhaps, too simple. But think of it this way. If a large company can rapidly transition from making paper to write on, to boots that make it easier to be out in the rain, then finally, to cell phones that alter the way humans communicate: how easy will the next step be? Suppose a company that manufactures cell phones decides to make nanobots and maybe those take off in a decade, altering humanity with minuscule machines autonomously running around everywhere, capable of reassembling and altering the human experience. What if that happens without considering how we want it to occur, who we want to be in charge, and the ultimate aims?
Suggesting that robots consciously helped Nokia decide to make cell phones would be a stretch. But acknowledging that technology played a significant part in convincing a company from rural Finland that it could achieve world domination in a brand-new industry is fair.
Nokia’s story hasn’t been so rosy over the past decade, given that the company failed to take into account the emergence of the software-based iOS and Android operating systems. As a result, Nokia doesn’t make phones anymore. In a bit of a comeback story, it now makes networking and telecom infrastructure, network security solutions, Wi-Fi routers, smart lighting, and smart TVs (see Nokia’s Comeback Story). Nokia still makes things, that’s true. The only observation to make is that Nokia always seems to enjoy mixing up the things it makes. Even the manufacturing decisions of human beings are, at times, hard to understand.
Manufacturing means making things, and things do evolve. Broadly, what we make today has changed from just a decade ago. 3D printers have decentralized production of many advanced products, both in industry and in the home. The life-altering consequences of 3D printing have not yet occurred. We don’t know if this will last, but we do know that the FDA’s focus is on regulating the manufacturing of the products that ensue (see here), such as printed pills or medical devices, along with the obvious intellectual property and liability issues and the issues around being able to print firearms. Beyond this, the policy discussion on what negative consequences 3D printing might have is non-existent, and few of us have bothered to think about it.
I’m not suggesting 3D printing is dangerous in and of itself. Perhaps this is a bad example. Nevertheless, things that initially look mundane can alter the world. There are plenty of examples: the hunter-gatherer’s metal arrowhead that starts wars, ritual masks which protect us from COVID-19, nails which build skyscrapers, movable-type printing presses which (still) fill our factories with printed paper and power the publishing business, light bulbs which let you see and work inside at night; I could go on. Nobody I know of sat down in the late 1800s and predicted that Nokia would move its production from paper to rubber to electronics, and then away from cell phones. Perhaps they should have.
Humans are poor predictors of step change, the process where one change leads to more change and, suddenly, things are radically different. We don’t yet understand this process because we have little practical knowledge of exponential change; we can’t picture it, calculate it, or fathom it. Yet time and time again it hits us without warning: pandemics, population growth, technological innovation from book printing to robotics.
The trick with futurism isn’t if, but when. One might actually be able to predict change just by picking some new production methods and stating that they will become more prevalent in the future. That’s simple enough. The tricky part is to figure out exactly when and especially how.
Paper clips are not the problem
Consider my factory example again, but this time, imagine the machines are in charge of numerous decisions, not all decisions, but production decisions like optimization. In his book Superintelligence, Oxford University’s dystopian humanist Nick Bostrom famously imagined an AI optimization algorithm running a paper clip factory. At some point, he says, the machine reasons that diverting ever-increasing resources to the task is rational, gradually turning our world into paper clips and resisting our attempts to turn it off.
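Bostrom's thought experiment boils down to an objective with no countervailing term. A toy sketch of my own (not Bostrom's code) makes the point: an optimizer that only values paper clips will spend every resource it can reach on them, because nothing in its objective says "stop".

```python
def run_factory(resources):
    """Toy unconstrained optimizer: maximize paper clips, value nothing else."""
    clips = 0
    while resources > 0:  # no term for humans, power grids, or off switches
        resources -= 1    # convert one unit of resources...
        clips += 1        # ...into one more paper clip
    return clips

print(run_factory(1_000_000))  # consumes everything it was given
```

The fix, in this cartoon, is not smarter machines but a better-specified objective, one that prices in the things we actually care about.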
Bostrom is a smart guy, but his example is pretty dumb and misleading (yet memorable). For one, it fails to account for the fact that humans and robots are no longer separate entities. We interact. Most clever robots are evolving into cobots, or collaborative robots. Humans will have many chances to correct the machine. Even so, his basic point remains. There may be a step change at some point, and if that change happens fast enough and without sufficient oversight, control might be lost. But that extreme outcome seems a bit far-fetched. Either way, I agree we need to regulate the humans who operate these machines and mandate that workers are always in the loop by appropriately training them. That type of training is not going well. It currently takes too long, and it takes specialty skills both to train and to be trained. I know one thing: in the future, all kinds of people will be operating robots. Those who don’t will be pretty powerless.
Augmenting humans is better than mindless automation, regardless of whether we ever fully merge with machines. The two concepts are logically distinct. Both people and robots can get stuck automating for automation’s sake, and that would do great damage to manufacturing going forward, even if it never produces killer robots. I believe a merger is hundreds of years away, but that’s not the point. Even if it is only thirty years away, the scenario of self-propelled machines running on simplistic algorithms and slipping out of control already plays out on the shop floor. Some of those machines are thirty years old and run on old, proprietary control systems. Their main challenge is not that they are advanced but the opposite: they are too simplistic to be able to communicate. This is not a problem for tomorrow. It is a pre-existing problem. We must open our eyes to it. Think about this the next time you step into your rubber boots.
I still have my Nokia boots from the 1980s. They have a hole in them, but I keep them to remind myself where I’m from and how far I’ve walked. Rain keeps falling, too, and as long as it’s clean enough I don’t want a better fix for it than those boots. Then again, I’m human. A robot would presumably have moved on already. What’s the AI version of rainboots, I wonder. It is not a cell phone. It’s not a rain sensor. It boggles the mind.
Digital boots today can be personalized with 3D-printed designs. There are virtual shoes that exist only as NFTs (non-fungible tokens) that can be sold and traded. The top virtual sneakers are worth $10,000 these days (see What Is an NFT Sneaker, and Why Is It Worth $10,000?). I’m not scared of those, but should I be? If the virtual world becomes valued more than the physical world, perhaps I will be. Or should I wait to worry until an AI’s own avatar buys its own NFT boot to tackle the “rain”? If we build algorithms in our own image, it is more likely that an AI would be good at things we wish we were good at but typically are not, such as buying stocks, building loyal friendships (perhaps with both machines and humans), and remembering things. The industrial metaverse might be surprisingly sophisticated, full of digital twins that mimic our world and surpass it in fruitful ways, or it might be shockingly simple. Maybe both. We just don’t know yet.
We need to regulate AI algorithms because we don’t know what’s around the corner. That’s reason enough, but as for how we do it, that’s a longer story. Allow me one more quick observation: perhaps all fundamental algorithms should be made publicly available. The reason is that if they are not, there is no way of knowing what they might lead to. The top ones are quite well known (see Top 10 Machine Learning Algorithms), but there is no worldwide overview of where and how they’ll get used. It is especially the unsupervised algorithms that should be watched carefully (see Six Powerful Use Cases for Machine Learning in Manufacturing), whether they are used to predict maintenance or quality, to simulate production environments (e.g., digital twins), or to generate new designs a human would never think of. In today’s landscape, these unsupervised algorithms are typically so-called artificial neural networks, which attempt to mimic the human brain.
I have started to worry about neural nets, if only because I find their logic hard to understand. The problem is that most experts, even those deploying them, don’t understand how these algorithms move from step to step or layer to layer. I don’t find the oft-used metaphor of “hidden layers” apt, or amusing. There should be no hidden layers in manufacturing, in automated tax collection, in hiring decisions, or in college admissions, for starters. Perhaps you should consider getting worried, too? One thing is for sure: humans and machines making things together will change the world. It already has, many times over. From paper to rain boots to the layers of today’s artificial brains, nothing should be left unexplored. We should not hide from the simple fact that from many small changes, a bigger change can suddenly appear.
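To see why the layers are called "hidden", here is a minimal sketch of my own, with made-up weights rather than trained ones: the inputs and the output are visible, but the intermediate activations carry no obvious human meaning.

```python
import math

def tiny_network(x1, x2):
    """A two-neuron hidden layer with arbitrary, made-up weights."""
    # hidden layer: each neuron squashes a weighted sum of the inputs
    h1 = math.tanh(0.8 * x1 - 0.5 * x2)
    h2 = math.tanh(-0.3 * x1 + 0.9 * x2)
    # output layer: combine the hidden activations into one number
    return math.tanh(1.2 * h1 + 0.7 * h2)

print(tiny_network(1.0, 0.0))  # what do h1 and h2 "mean"? nobody can say
```

Multiply this by millions of weights and dozens of layers, and the interpretability problem the paragraph describes becomes clear: the arithmetic is simple at every step, yet the whole is opaque.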
Source: https://www.forbes.com/sites/trondarneundheim/2022/04/07/the-reasons-to-regulate-ai-algorithms-are-simpler-than-you-think/