Here’s something that you probably haven’t yet been mulling over: Mortal computers.
But maybe you should be.
The heady topic came up at the recent and altogether quite prominent annual conference on AI that is especially focused on the advent of neural networks and machine learning, namely the Conference on Neural Information Processing Systems (known by insiders as NeurIPS). Invited keynote speaker and longtime AI guru Geoffrey Hinton made the intriguing and perhaps controversial contention that we should be thinking about computers in a mortal versus immortal context.
I’ll be addressing this notable assertion in two ways that at first won’t necessarily seem connected, though after a bit of added elucidation their relationship to the mortal versus immortal contention will become clear.
The two topics are:
1) Integrally binding together both hardware and software for AI mechanizations rather than having them as distinct and separate allies
2) Transferring or distilling machine learning formulations from one AI model to another, without requiring, nor necessarily desiring (nor perhaps even being feasibly able to perform), a straight-ahead full purebred copy
All of this has big-time considerations for AI and the future direction of AI development.
Furthermore, a slew of very thorny AI Ethics and AI Law concerns arises too. These types of envisioned AI technological advancements are usually bandied around on a purely technological basis long before there is a realization that they might also have noteworthy Ethical AI and AI Law repercussions. In a sense, the cat is usually already out of the bag, or the horse is out of the barn, prior to the awakening that AI Ethics and AI Law should be given due-diligence participation.
Well, let’s break that belated afterthought cycle and get in on the ground floor on this one.
For those of you interested overall in the latest insights underlying AI Ethics and AI Law, you might find informative and inspirationally engaging my ongoing and extensive coverage at the link here and the link here, just to name a few.
I am going to first herein cover the above point about the binding together of hardware and software. A discussion and analysis of the topic will occur hand-in-hand. Next, I’ll touch upon the matter of copying or, as some say, distilling the crucial elements of a machine learning AI system from one AI to a newly devised AI as a target.
Let’s get started.
Binding Together Of Hardware And Software For AI
You probably know that by and large the design of computers is such that there is the hardware side of things and separately there is the software side of things. When you buy an everyday laptop or desktop computer, it is construed as being a general-purpose computing device. There are microprocessors inside the computer that are used to then run and execute software that you might purchase or write on your own.
Without any software for your computer, it is a hunk of metal and plastic that basically won’t do you much good, other than acting as a paperweight. Some would say that software is king and rules the world. Of course, if you don’t have hardware upon which to run the software, the software isn’t going to do much good either. You can write as many lines of code as your heart desires, yet until the software runs on a computer, the formulated source code is as flimsy and flightless as a beauteous work of poetry or a thrill-a-minute detective novel.
Allow me to momentarily switch to another avenue that might appear to be far afield (it won’t be).
We often try to draw analogies between how computers work and how the human brain works. This attempt to make conceptual parallels is handy. That being said, you have to be cautious about going overboard on those analogies since the comparisons tend to break down when you get closer to the meaty details.
Anyway, for sake of discussion, here’s an analogy often used.
The brain itself is informally at times referred to as wetware. That is a catchy way to phrase things. We know that computers consist of hardware and software, so it is clever to use the “ware” part of the coinage to describe what a brain amounts to. Nestled in our noggins, the mighty and mysterious brain is found floating around, mentally calculating all of our deeds and thoughts (some good, some decidedly not filled with goodness).
At an average weight of a mere three pounds or so, the brain is a remarkable organ. Somehow, and we don’t yet know how, the brain is able to use its on-the-order-of 100 billion neurons, and perhaps anywhere from 100 to 1,000 trillion interconnections or synapses, to do all of our thinking for us. How do the biological and chemical properties of the brain give rise to intelligence? Nobody can say for sure. This is a quest of the ages.
I ask you this, is the brain ostensibly hardware-only, or is it both hardware and software combined?
Noodle on that brain teaser.
You might be tempted to claim that the brain is simply hardware (in a general sense). It is an organ of the body. Similarly, you might say that the heart is hardware, the bladder is hardware, and so on. They are all mechanizations akin to when we talk about artifacts that have a physical form and do physically related actions.
Where then is the software that runs humans?
I’d dare suggest that we all pretty much agree that the “software” of humankind somehow resides in the brain. The steps required to cook an egg or fix a flat tire are instructions that are embodied in our brains. Using that earlier noted computer analogy of hardware and software, our brain is a piece of hardware, as it were, through which we learn about the world, while the instructions of what to do are “running” and “stored” within it.
On a computer, we can readily point at the hardware and say that this is hardware. We can have a listing of source code and point to the listing as software. Nowadays, we electronically online download software and install it on our laptops and smartphones. In the days of olden times, we used floppy disks and punch cards to store our software for loading onto the hardware of the computer.
I am getting you into an important conundrum.
Once you’ve learned something and the knowledge is present in your brain, can you still distinguish between the “hardware” of your brain and the presumed “software” of your brain?
One argued position is that the knowledge in your brain is not particularly separable from the conceptions of hardware and software. The analogy to the nature of computers thusly breaks down, some would fervently contend. Knowledge in the brain is intertwined with and inseparable from the hardware of your brain. The biological and chemical properties interweave the knowledge that you mentally possess.
Stew on that for a bit of mental reflection.
If we hope to someday devise computers that are on par with human intelligence, or even exceed human intelligence, perhaps we can use the structures of the brain and its inner workings as a guide to what we need to do to attain such a lofty goal. For some in the field of AI, there is a belief that the more we know about how the brain works, the better our chances of devising true AI, sometimes referred to as Artificial General Intelligence (AGI).
Others in AI are less enamored of having to know how the brain works. They emphasize that we can proceed apace to craft AI, regardless of whether we are able to unlock the secret inner workings of the brain. Don’t let the mysteries of the brain impede our AI efforts. Sure, keep trying to decode and decipher the human brain, but we cannot sit around and wait for the brain to be reverse-engineered. If that someday is doable, wonderful news, though maybe it is an impossibility or will occur eons from now.
I’m ready to now share with you the mortal and immortal computer contention. Please make sure you are sitting down and ready for the big reveal.
A computer that has a clear-cut separation of the hardware and the software could be claimed as being “immortal” in that the hardware can persist forever (within limits, of course), while the software could be written and rewritten time and again. You can keep a conventional computer going for as long as you can make repairs to the hardware and keep the contraption able to power up. You can still make use today of the crude home computers from the 1970s that used to come in kits for assembly, despite their being nearly fifty years old (a long time in computer years).
Suppose though that we opted to make computers that had the hardware and software working inseparably (I’ll say more about this shortly). Consider this on the same basis that earlier I mentioned that the brain perhaps has an integral composition of hardware and software. If that were the case, it could be suggested that the computer of this ilk would no longer be immortal. It would be construed as being “mortal” instead.
Per the remarks made at the NeurIPS conference by invited keynote speaker and noteworthy AI guru Geoffrey Hinton, and as stated in his accompanying research paper:
- “General-purpose digital computers were designed to faithfully follow instructions because it was assumed that the only way to get a general-purpose computer to perform a specific task was to write a program that specified exactly what to do in excruciating detail. This is no longer true, but the research community has been slow to comprehend the long-term implications of deep learning for the way computers are built. More specifically the community has clung to the idea that the software should be separable from the hardware so that the same program or the same set of weights can be run on a different physical copy of the hardware. This makes the knowledge contained in the program or the weights immortal: The knowledge does not die when the hardware dies” (as contained in and cited from his research paper “The Forward-Forward Algorithm: Some Preliminary Investigations”, preprint available online).
Note that the particular kind of computing being discussed in this type of AI makes use of Artificial Neural Networks (ANNs).
Let’s straighten things out about this.
There are real-world biological neurons in our brains. You use them all the time. They are biologically and chemically interconnected into a network in your noggin. Thus, we can refer to this as a neural network.
Elsewhere, there are shall we say faked “neurons” that we computationally represent in computers for purposes of devising AI. Many people in AI also refer to those as neural networks. I believe this is somewhat confounding. You see, I prefer to refer to them as artificial neural networks. This helps to right away distinguish between a reference to in-your-head neural networks (the real thing, as it were), and computer-based ones (artificial neural networks).
Not everyone takes that stance. A lot of people in AI just assume that everyone else in AI “knows” that when referring to neural networks they almost always are talking about ANNs — unless a situation arises wherein for some reason they want to discuss real neurons and real neural networks in the brain.
I trust that you get my drift. Most of the time, AI people will say “neural networks” which is potentially ambiguous because you don’t know if they are referring to the real ones in our heads or the computational ones we program into computers. But since AI people are by and large dealing with computer-based instances, they default to assuming that you are referring to artificial neural networks. I like to add the word “artificial” to the front end of the wording to be clearer about the intentions.
Moving on, you can somewhat consider these computational artificial neurons as a mathematical or computational simulation of what we think actual biochemical physical neurons do, such as using numerical values as weighting factors that otherwise happen biochemically in the brain. Today, these simulations are not nearly as complex as real neurons are. Current ANNs are an extremely crude mathematical and computational representation.
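To make that crudeness concrete, here is a minimal sketch, assuming a simple sigmoid activation and made-up numbers, of what a single artificial “neuron” computationally boils down to (the variable names and values are purely illustrative, not drawn from any particular AI system):

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A crude artificial 'neuron': a weighted sum of inputs fed through a nonlinearity."""
    z = np.dot(weights, inputs) + bias       # numeric stand-in for biochemical signaling
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid activation squashes output to (0, 1)

# Illustrative values only:
x = np.array([0.5, 0.1, 0.9])                # incoming signals from other "neurons"
w = np.array([0.8, -0.4, 0.3])               # learned numerical weighting factors
print(artificial_neuron(x, w, bias=0.1))     # a single activation value
```

That handful of arithmetic operations is the entire “neuron,” which underscores just how spartan the simulation is compared to the dizzying biochemistry of the real thing.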
Generally, ANNs are often the core element for machine learning (ML) and deep learning (DL) — please be aware that there is a lot more detail to this, and I urge you to take a look at my extensive coverage of ML/DL at the link here and the link here, for example.
Returning to the immortal versus mortal types of computers here’s more to ruminate on per the researcher:
- “The separation of software from hardware is one of the foundations of Computer Science and it has many benefits. It makes it possible to study the properties of programs without worrying about electrical engineering. It makes it possible to write a program once and copy it to millions of computers. If, however, we are willing to abandon immortality it should be possible to achieve huge savings in the energy required to perform a computation and in the cost of fabricating the hardware that executes the computation. We can allow large and unknown variations in the connectivity and non-linearities of different instances of hardware that are intended to perform the same task and rely on a learning procedure to discover parameter values that make effective use of the unknown properties of each particular instance of the hardware. These parameter values are only useful for that specific hardware instance, so the computation they perform is mortal: it dies with the hardware” (ibid).
You’ve now been introduced to how immortal and mortal are being used in this context.
Let me elaborate.
The proposition is that a computer that is purpose-built based on ANNs could be devised such that the hardware and software are considered inseparable. Once the hardware someday no longer functions (which, of course, integrally enmeshes the software), this type of computer is seemingly no longer useful and won’t function anymore. It is said to be mortal. You might as well bury the ANN-based computer, since it won’t do you much good once the inseparable hardware and software are no longer viably working as a team.
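To give a feel for why such a machine would be “mortal,” here is a tiny illustrative sketch (my own contrivance, not from the research paper) in which each simulated “device” has unknown analog quirks that the learned parameters silently compensate for:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Pretend analog hardware: each physical device applies its own unknown,
# fixed random gains to the computation (a stand-in for manufacturing variation).
device_a_gain = rng.normal(1.0, 0.2, size=3)
device_b_gain = rng.normal(1.0, 0.2, size=3)

def run_on_device(x, weights, device_gain):
    # The effective computation depends on the device's hidden quirks.
    return np.dot(weights * device_gain, x)

# Suppose a learning procedure tuned these weights ON device A so that
# run_on_device(x, w, device_a_gain) hits the desired targets.
w = np.array([0.5, -1.2, 0.7])
x = np.array([1.0, 2.0, 3.0])

print(run_on_device(x, w, device_a_gain))  # behavior the weights were tuned for
print(run_on_device(x, w, device_b_gain))  # same weights, different hardware: different answer
```

Copying the weights to another device does not copy the behavior, because the parameters only make sense in tandem with the quirks of the specific hardware instance; in that sense, the computation dies with the hardware.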
If you wanted to try and relate this to the analogy of a human brain, you might envision the dour situation of a human brain that completely deteriorates or that is somehow irreparably harmed. We accept the notion that a person is mortal and their brain will ultimately and inevitably stop working. The knowledge they contained in their brain is no longer available. Unless they happened to try and tell others or write down what they knew, their knowledge is gone to the world at large.
You’ve undoubtedly heard or seen reports of attempts to preserve brains, like putting them into a frozen state, under the theory that maybe humans could someday be immortal or at least extend beyond their customary lifetimes. Your brain might live on, even if not in your body. Lots of sci-fi movies and stories have speculated on such ideas.
We are now ready for a detailed look-see at the mortal computer and the immortal computer as a concept and what it foretells.
Mindful Discussion And Considerate Analysis
Before diving into the guts of this analysis of the postulated approach, a few important caveats and additional points are worth mentioning.
The researcher emphasized that the coined mortal computers would not replace or shove out of existence the immortal computers, which today we refer to as conventional digital computers. There would be a coexistence of both types of computers. I say this because the reaction by some has been to assume a blanket claim that all computers of necessity are or will be heading toward the mortal type.
That wasn’t a claim being made.
During his talk, he mentioned that these specialized neuromorphic-oriented computers would carry out computational work known as mortal computations: “We’re going to do what I call mortal computation, where the knowledge that the system has learned and the hardware, are inseparable” (as quoted in a ZDNET article by Tiernan Ray on December 1, 2022).
And notably: “It will not replace digital computers” (ibid).
Also, these new types of computers are decidedly not soon going to be at your local computer store or available for purchase online right away, as stated during his presentation: “What I think is that we’re going to see a completely different type of computer, not for a few years, but there’s every reason for investigating this completely different type of computer.” The uses would differ too: “It won’t be the computer that is in charge of your bank account and knows exactly how much money you’ve got.”
An additional twist is that the mortal computers would seemingly be grown rather than being fabricated as we do today for the manufacturing of computer processors and computing chips.
During the growth process, the mortal computer would increase in capability in a style of computational maturation. Thus, a given mortal computer might start with hardly any capability and mature into what it was being aimed to become. For example, suppose we wanted to create cell phones via the use of mortal computers. You would start with a simpleton variant of a mortal computer that has been initially shaped or seeded for this purpose. It would then mature into the more advanced version that you were seeking. In short: “You’d replace that with each of those cell phones would have to start off as a baby cell phone, and it would have to learn how to be a cell phone.”
On one of his foundational slides about mortal computation, the benefits were described this way: “If we abandon immortality and accept that the knowledge is inextricable from the precise physical details of a specific piece of hardware, we get two big benefits: (1) We can use very low power analog computation, (2) We can grow hardware whose precise connectivity and analog behavior are unknown.”
Part of the same talk and also as contained in his preprint research paper is a proposed technique for how ANNs can be better devised, which he refers to as using a forward-forward networking approach. Some of you that are versed in ANNs are undoubtedly already quite aware of the use of backpropagation or back-prop. You might want to take a look at his proposed forward-forward technique. I’ll be covering that fascinating approach in a future column posting, so be on the watch for my upcoming coverage about it.
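For the especially curious, here is a bare-bones sketch of the layer-local objective at the heart of the forward-forward idea as I read the preprint, assuming the proposed sum-of-squared-activities “goodness” measure (the threshold value and the example arrays are purely illustrative):

```python
import numpy as np

def goodness(layer_activity):
    # One goodness measure proposed in the preprint: the sum of squared activities.
    return np.sum(layer_activity ** 2)

def layer_loss(pos_activity, neg_activity, theta=2.0):
    # Each layer is trained locally to push goodness above a threshold theta for
    # "positive" (real) data and below theta for "negative" (contrived) data,
    # with no backward pass of error derivatives through the whole network.
    p_pos = 1.0 / (1.0 + np.exp(-(goodness(pos_activity) - theta)))
    p_neg = 1.0 / (1.0 + np.exp(-(goodness(neg_activity) - theta)))
    return -np.log(p_pos) - np.log(1.0 - p_neg)

pos = np.array([1.5, 0.2, 1.1])   # hypothetical layer activity for real data
neg = np.array([0.3, 0.1, 0.2])   # hypothetical layer activity for contrived data
print(layer_loss(pos, neg))       # a small loss means the layer separates the two
```

Bear in mind this is the forward-forward objective in caricature; the fuller story awaits my upcoming coverage.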
Shifting gears, let’s consider what is being said in the hallways and byways of the AI community about this brash mortal computer machination.
We’ll start with what some would say is a non-starter on the topic all told.
Are you ready?
Stop calling this thing a mortal computer.
Likewise, stop proclaiming that today’s conventional computers are immortal.
Both uses are just plain wrong and abundantly misleading, skeptics exhort.
An everyday dictionary definition of that which is immortal consists of something that cannot die. It lives forever. In order to not die, you presumably have to say that the thing itself is alive. You are treading on the wrong track to assert that today’s computers are alive. No reasonable person would ascribe bona fide “living” properties to modern computers. They are machines. They are things. They are not persons, nor animals, nor otherwise in a living condition.
If you want to stretch the definition of immortal to allow that we are referring to non-living entities too, in that case, the non-living entity will seemingly have to never decay and never inevitably disintegrate into dust. Can you make such a claim about today’s computers? This seems a stretch (side note: we could of course get into a grand philosophical discussion about the nature of matter and existence, but let’s not go there in this instance).
The gist is that the use or some would say misuse of the words “mortal” and “immortal” is outlandish and uncalled for. Taking a commonly used vernacular and reusing it for other purposes is confusing and makes for murky waters. You have to be willing to apparently reconceptualize what mortal and immortal mean in this specific context. This becomes problematic.
Even more disconcerting is that these word choices tend to anthropomorphize the computer aspects.
There are already more than enough issues associated with anthropomorphizing AI; we certainly don’t need to conjure up more such possibilities. As I’ve extensively discussed in my coverage of AI Ethics and Ethical AI, there is all manner of wild ways that people ascribe sentient capacities to computers. In turn, this misleads people into falsely believing that AI-based computers can think and act as humans do. It is a slippery slope of endangerment when society becomes lulled into believing that today’s AI and computing are on par with humankind’s intellect and common sense, see for example my analysis at the link here and the link here.
Okay, we can reject or have disdain for the awkward wording choices, but does that suggest that we should toss out the baby with the bathwater (an old expression, probably nearing retirement)?
Some argue that perhaps we can find better wording for this overall approach or conception. Discard the use of “mortal” and “immortal” so that the rest of the ideas aren’t tainted by inappropriate or improper usage. Meanwhile, there are counterarguments that it is perfectly acceptable to use those word choices, either because they are befitting, or because we shouldn’t be inflexible about how we opt to reuse words. A rose is a rose by any other name, they declare.
To avoid further acrimonious debate herein, I am going to henceforth avoid using the words “mortal” and “immortal” and will merely state that we have two major types of computers being bandied around, one that is a conventional digital computer of today and the other is a proposed neuromorphic computer.
No need to drag the mortality conundrum into this, it would seem. Keep the skies clear to see what else we can make of the matter at hand.
In that case, some would argue that the proposed idea of a neuromorphic computer is nothing new.
You can trace back to the earlier days of AI, especially when ANNs were initially being explored, and see that there was talk of devising specialized computers for doing the work of artificial neural networks. All kinds of new hardware were proposed. This still occurs to this day. Of course, you could counterargue that most of today’s exploration of specialized hardware for ANNs and machine learning is still based on the conventional approach to computing. In that sense, this analog inseparability of the hardware and software does push the envelope somewhat, and the proposition of “growing” the computer does too, at least with regard to going outside of the considered mainstream.
In short, some who are fully steeped in these matters are surprised that anyone else might be surprised by the propositions being floated. To them, these notions are either the same as before or echo what is already being examined in various research labs.
Don’t get your hair in a fuss, they say.
This does take us to another facet that is bothersome for many.
In one word: Predictability.
Today’s computers are generally considered predictable. You can take a look at the hardware and the software to figure out what the computer is going to do. Likewise, you can trace what a computer has already done to ferret out why it did whatever it did. There are of course limits to doing this, thus, I don’t want to overstate the predictability, but I think you get the idea overall.
You might be aware that one of the thorny issues confronting AI today is that some AI is devised to be self-adjusting. The AI that developers put into place might change itself while it is being used. In the realm of AI Ethics, there are numerous examples of AI that were put into use that at first did not have undue biases or discriminatory tendencies, which then gradually were computationally self-mutated during the time that the AI was in production, see my detailed assessments at the link here.
The worry is that we are already entering into a setting entailing AI that is not necessarily predictable.
Suppose AI for weapons systems undergoes self-adjustments and the result is that the AI arms and launches lethal weaponry at targets and times not expected. Humans might not be in the loop to stop AI. Humans that are in the loop might not be able to respond quickly enough to overtake the AI actions. For additional chilling examples, see my analysis at the link here.
For neuromorphic computers, the concern is that we are putting unpredictability on steroids. From the get-go, the essence of a neuromorphic computer could be that it works in a fashion that defies prediction. We are flaunting unpredictability. It becomes a badge of honor.
Two camps exist.
One camp says that we can live with the unsavory unpredictability concerns, doing so by putting guardrails to keep the AI from going a bridge too far. The other camp argues that you are taking the world down a dangerous path. The day will arise that the claimed guardrails either fail, or they aren’t stringent enough, or that by accident or evil intent the guardrails are removed or fiddled with.
Should we wave away the qualms about neuromorphic computers and predictability?
Per the remarks of the researcher: “Among the people who are interested in analog computation, there are very few still who are willing to give up on immortality.” Furthermore: “If you want your analog hardware to do the same thing each time… You’ve got a real problem with all these stray electric things and stuff.”
I’ll ratchet this up.
A looming and somewhat gloomy perspective is that the so-called predictability associated with today’s digital computers is going in the direction of unpredictability anyway. As mentioned, this can particularly happen per AI that self-adjusts on conventional computer platforms. Just because the neuromorphic computers might be seemingly unpredictable is not ergo a sign that conventional digital computers are in fact predictable.
The unpredictability steamroller is coming at us, full steam, no matter which computing platform you want to choose. For my assessment of the latest efforts to try and attain AI safety in this light, see the link here.
This twist about predictability ought to get your mind noodling on something of an unearthed nature, kind of. Those of you that are involved in AI Ethics and AI Law might not have been considering the ramifications of neuromorphic computers.
You probably have been aiming at conventional digital computers that run AI. Well, guess what, you’ve got an entirely additional and emerging segment of AI computing that you can now stay up worrying about at night. Yes, neuromorphic computers. Put that on your to-do list.
Sorry, more sleepless nights for you.
Let’s briefly consider what AI Ethics and AI Law have been doing about conventional digital computing and AI.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
The part of this that you might not have previously given much thought to is how those same AI Ethics precepts and the burgeoning list of new AI Laws will apply to neuromorphic computers. To clarify, AI Ethics and AI Law indeed have to take that explicitly into account. I am pointing out that few are doing so, and be advised that there is a wallop of a chance that the advent of neuromorphic computers will throw many for a loop in terms of a new dimension for trying to rein in AI.
We need to be considering Ethical AI and AI Laws in a wide enough manner to encompass whatever AI is newly devised, including neuromorphic computers.
The seesaw alternative is a classic cat-and-mouse gambit. Here’s how that goes. New ways of crafting AI are conceived of and built. Existing AI Ethics and AI Laws are caught off-guard and do not fully encompass the latest AI shenanigans. A hurried effort is made to update the Ethical AI precepts and to amend or mint anew the AI Laws.
Lather, rinse, repeat.
It would be better for us all to stay ahead of the game, rather than get caught behind the eight ball.
Conclusion
I’ve taken you on a bit of a journey.
At the start, I proffered that there would be two major topics to be examined:
1) Integrally binding together both hardware and software for AI mechanizations rather than having them as distinct and separate allies
2) Transferring or distilling machine learning formulations from one AI model to another, without requiring, nor necessarily desiring (nor perhaps even being feasibly able to perform), a straight-ahead full purebred copy
The first topic, on the binding together of hardware and software, has been the bulk of the journey herein. It led us into the mortal versus immortal computing morass, out of which came some crucial AI Ethics and AI Law considerations that would not usually be brought up, since this type of computer-related topic is often seen as purely technological rather than entailing any societal-impact concerns.
I say it is wisest to be sooner and safer, rather than later and worse off when it comes to bringing up Ethical AI and AI Law.
The second topic, which I haven’t as yet articulated herein, relates materially to the first topic.
Here’s the deal.
Suppose we have a “mortal computer” and we want to preserve the capabilities so that we are able to have a backup or ostensibly copies of what the AI contains. We might be worried that a particular mortal computer is nearing its end. Yikes, we are dependent upon it. What are we to do? One answer is that we ought to copy the darned thing.
But, copying a neuromorphic computer of the kind being sketched will be harder than it might seem at first glance. Things can get tricky.
Perhaps we should come up with a copying ploy that will be generalizable and applicable to circumstances involving machine learning and artificial neural networks. We want this to work on large-scale and extremely large-scale instances. We would also be willing to have the copy not be an exact duplicate; instead, it might be essentially equivalent or perhaps even better devised as a result of the copying action.
A technique known as distillation has been proposed.
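While I’ll save the details for that upcoming column, here is a minimal sketch of the classic distillation formulation (per the well-known 2015 work by Hinton, Vinyals, and Dean), in which a “student” model learns to match the temperature-softened outputs of a “teacher” model; the logits and temperature values are illustrative:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()                  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical teacher outputs (logits) for a three-class task:
teacher_logits = np.array([8.0, 2.0, 0.5])
T = 4.0                                     # a higher temperature reveals more of the
soft_targets = softmax(teacher_logits, T)   # teacher's learned class similarities

def distillation_loss(student_logits, soft_targets, T=4.0):
    # Cross-entropy between the teacher's softened targets and the student's
    # softened predictions; minimizing this transfers the teacher's "dark knowledge."
    student_soft = softmax(student_logits, T)
    return -np.sum(soft_targets * np.log(student_soft + 1e-12))

student_logits = np.array([5.0, 3.0, 1.0])  # hypothetical student outputs
print(distillation_loss(student_logits, soft_targets))
```

The striking part is that an exact duplicate is never made; the student can even be a smaller or differently structured network that nonetheless absorbs the essence of what the teacher learned.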
I’ve run out of space for today’s column, so I’ll be taking up this second topic in an upcoming column. I figured you would want to know about the relationship right away between that second topic and the first topic that was extensively covered herein. Think of this as an added note serving as a teaser or trailer of what is coming up next.
Remain on the edge of your seat, since the distillation topic is a pretty good standout.
As Batman used to say, keep your bat wings crossed and be ready for the same bat-time and bat-channel of unraveling the vexing question of how to copy an ANN or machine-learning model or neuromorphic computer to another one.
A final remark for now. There’s a famous line in the movie The Dark Knight Returns in which our caped crusader says this: “The world only makes sense if you force it to.” I’ll try to hold to that ideal when I cover the second topic on AI-related distillation.
Stay tuned for Part 2 of this exciting and enthralling double-header.
Source: https://www.forbes.com/sites/lanceeliot/2022/12/07/ai-shake-up-as-prominent-ai-guru-proposes-mind-bending-mortal-computers-which-also-gets-ai-ethics-and-ai-law-dug-in/