Tesla AI Day 2022 has now come and gone, settling down into the history books for all to reexamine and analyze to their heart’s content.
You see, on Friday evening, September 30, 2022, in Silicon Valley and before a worldwide online audience, the latest installment in the annual Tesla AI Day events took place. This highly anticipated showcase is said to provide Tesla and Elon Musk an opportunity to strut their stuff and show off their latest AI advances. Musk typically emphasizes that these events are principally for recruiting purposes, hoping to whet the appetites of AI developers and engineers from around the globe who might be enticed to apply for a job at Tesla.
I’ve been inundated with requests to do a deep analysis that goes beyond the multitude of somewhat lightweight reports about this latest Tesla AI Day shindig that have already hit the Internet. I’ll bring you up to speed in a moment, especially covering some significant AI Ethics and AI Law considerations that heretofore don’t seem to have garnered notable online attention.
And, to be clear, these are points that urgently and importantly do need to be brought to the surface. For my overall ongoing and extensive discussions about AI Ethics and AI & Law, see the link here and the link here, just to name a few.
Let’s unpack what happened at this year’s Tesla AI Day.
A HEAD NOD TO THE AI DEVELOPERS AND ENGINEERS
Before I jump into the substance of the presentations, allow me to say something crucial about the AI developers and engineers who either presented or who in a herculean manner have been doing the behind-the-scenes AI work at Tesla. You have to give them credit for trying to make sense of the at-times wacky directives coming from Musk as to what they should be working on and the pace at which they should be getting their work accomplished.
I’ve mentioned Musk’s leadership style and his AI technical acumen in many of my prior postings, such as the link here and the link here. On the one hand, he is sharp enough to seemingly know generally what is going on in AI and he serves as a tremendous inspirational force for aiming high on seeking AI achievements. No doubt about it.
At the same time, he also appears at times to be bereft of practicality per se and aglow with untamed wishes and AI dreams aplenty. He seems to render deadlines out of thin air. Hunches are the norm instead of any reasoned back-of-the-envelope attempts to pencil out real-world estimates. He conjures up fanciful visions of how world-changing AI is going to be miraculously devised and spouts unattainable timelines without, seemingly, a splash of systematic and mindful thought (his multitude of predictions about the advent of AI-based fully autonomous self-driving cars have repeatedly proven to be farfetched and unsupportable).
Some insist that he is a genius and geniuses are like that. It is the nature of the beast, as it were.
Others retort that a shoot-from-the-hip, calls-the-shots leader is inevitably going to stumble and potentially do so at a heavy cost that otherwise just wasn’t necessary.
Not that it isn’t handy dandy to have a top leader who cares about being in the depths of things. It can be enormously helpful. But when the wide-eyed visionary steps a bit far out of bounds, it can be difficult or career-challenging to try and clue them in as to what is really happening. As mentioned on social media, some of the AI developers and engineers on the stage with Musk appeared to silently cringe at various of his over-the-top proclamations. Probably, their minds were furiously racing to figure out what they could do or say to try and save face, keeping this barreling train on a modicum of realistic track and not flying entirely off the rails.
Tip of the hat to those AI developers and engineers.
THREE MAIN TOPICS THIS TIME AROUND
Okay, with that keystone preface, we can cover the Tesla AI Day particulars.
There were essentially three major topics covered:
(1) Making of a walking robot that has humanoid characteristics (i.e., Bumble C, Optimus)
(2) Advances associated with Tesla’s Autopilot and so-called Full Self-Driving (FSD)
(3) Efforts associated with the Tesla specialized supercomputer named Dojo
Some quick background in case you aren’t familiar with those Tesla initiatives.
First, Elon Musk has been touting that the next big breakthrough for Tesla will entail the development and fielding of a walking robot with humanoid characteristics. At the Tesla AI Day 2021 last year, there was a rather embarrassing “demonstration” of the envisioned robot that involved a person wearing a robotic-looking costume who leaped and danced around on the stage. I say embarrassing because it was one of the most cringe-worthy moments of any AI showcase. This wasn’t any kind of mock-up or prototype. It was a human in a flimsy costume.
Imagine that those who have been working tirelessly in AI research labs and robotics throughout their lives are in a sense upstaged by a person wearing a costume and prancing around in front of worldwide beaming cameras. What made this especially galling was that much of the conventional media ate it up, hook, line, and sinker. They plastered pictures of the “robot” on their front pages and seemed to gleefully and unquestioningly relish that Musk was on the verge of producing long-sought sci-fi walking-talking robots.
Not even close.
Anyway, this year the person in the costume was apparently no longer needed (though perhaps they were waiting in the wings in case they were suddenly and urgently required to reappear). A somewhat humanoid-resembling robotic system was brought onto the stage at the opening of the session. This robot was referred to as Bumble C. After we were shown this initial version of the envisioned future robot, a second somewhat humanoid-resembling robotic system was brought onto the stage. This second version was referred to as Optimus. Bumble C was indicated as the out-the-gate first-attempt prototype and is further along in terms of existing functionality than Optimus. Optimus was indicated as the likely go-forward version of the envisioned humanoid robot and might eventually be versioned into a production model available in the marketplace.
By and large, most of the action and attention for Tesla AI Day 2022 was focused on these kind-of walking robots. Banner headlines have ensued. The advances related to Autopilot and FSD have not garnered similar attention, nor did the details about Dojo get much newsprint.
Speaking of Autopilot and FSD, we ought to make sure that some air time is given to that part of the Tesla AI Day. As faithful readers know, I’ve covered many times at extensive length Tesla’s Autopilot and so-called Full Self-Driving (FSD) capabilities.
In short, Tesla cars are today rated as a Level 2 on the autonomy scale. This means that a human licensed driver is required at all times to be at the wheel of the car and be attentive for driving purposes. The human is the driver.
I mention this significant point about the level of autonomy because many non-technical people falsely believe that today’s Teslas are at Level 4 or Level 5.
Wrong!
A Level 4 is a self-driving car that drives itself and does not need nor expect a human driver at the wheel. Level 4 is then bounded with respect to a specific targeted Operational Design Domain (ODD). For example, an ODD might be that the AI can drive the car only in a particular city such as San Francisco, and only under stipulated conditions such as sunshine, nighttime, and up to light rain (but not in snow, for example). A Level 5 is an AI-based self-driving car that can autonomously operate essentially any place and under any conditions that a human driver could manageably operate a car. For my detailed explanation of Level 4 and Level 5, see the link here.
You might be surprised to know that Teslas with Autopilot and so-called FSD are only Level 2. The naming of “Full Self-Driving” would certainly seem to imply that the cars must be at least Level 4 or possibly Level 5. Ongoing angst and outcry have been that Tesla and Musk named their AI driving system “Full Self-Driving” when it clearly is not. Lawsuits have ensued. Some countries have taken them to task for the naming.
The usual counterargument is that “Full Self-Driving” is an aspirational goal and that there is abundantly nothing wrong with naming the AI driving system for what it is intended to eventually become. The counter to that counterargument is that people buying or driving a Tesla with the FSD are lulled into (or, critics say fooled into) believing that the vehicle is indeed Level 4 or Level 5. I won’t belabor the point herein and suggest that you might take a look at this link here for further insights on such matters as Autopilot and FSD.
The third topic entailed the Tesla specialized supercomputer known as Dojo.
As helpful background, please be aware that many of today’s AI systems make use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern-matching techniques and technologies. The tech under the hood of ML/DL often makes use of Artificial Neural Networks (ANN). Think of artificial neural networks as a crude kind of simulation that tries to mimic the notion of how our brains utilize biological neurons that are interconnected with each other. Do not mistakenly believe that ANNs are the same as true neural networks (i.e., the wetware in your noggin). They are not even close.
When devising AI for self-driving, artificial neural networks are extensively relied upon. Most self-driving cars contain specialized computer processors that are adapted to handle ANNs. To program and establish the ANNs, a car maker or self-driving tech maker will usually make use of a larger computer that allows for large-scale training and testing. The devised ANNs can then be downloaded into the autonomous vehicles using over-the-air (OTA) updating capabilities.
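To make the workflow concrete, here is a minimal sketch in Python using PyTorch. The layer sizes, training data, and file name are all invented for illustration; this is a generic train-then-export pattern, not Tesla’s actual code, architecture, or OTA mechanism.

```python
import torch
import torch.nn as nn

# A tiny artificial neural network (ANN); the sizes are invented for illustration.
model = nn.Sequential(
    nn.Linear(64, 32),   # 64 hypothetical input features (e.g., sensor readings)
    nn.ReLU(),
    nn.Linear(32, 4),    # 4 hypothetical output classes (e.g., object types)
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake random data stands in for the fleet-scale data a real system would use.
inputs = torch.randn(256, 64)
labels = torch.randint(0, 4, (256,))

# "Large computer" phase: train the network (a supercomputer such as Dojo
# plays this role in Tesla's case).
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# Export the learned weights; an OTA pipeline could then push an artifact
# like this down to the specialized onboard processors in the vehicles.
torch.save(model.state_dict(), "ann_weights_for_ota.pt")
```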
In the case of Tesla, they have been devising their own supercomputer that is tailored to doing ANNs. This provides a proprietary capability that can potentially efficiently and effectively arm the AI developers with the kind of computational bandwidth they need to craft the AI that will run in their self-driving cars.
One other thing about artificial neural networks and ML/DL.
Besides using such tech for self-driving cars, the same kind of tech can be used for programming robots such as the humanoid looking systems such as Bumble C and Optimus.
In total, I trust that you can now see how the three major topics of the Tesla AI Day are related to each other.
There is a Tesla specialized supercomputer named Dojo that enables the development and testing of ML/DL/ANNs using large-scale processing capabilities. Those ML/DL/ANNs can be programmed to serve as an AI driving system and downloaded into Tesla cars accordingly. In addition, the programming for the robotic systems of Bumble C and Optimus can likewise be devised on Dojo and downloaded into the robots. Dojo can thus do double duty. AI developers assigned to Autopilot and FSD can use Dojo for their work efforts. AI developers assigned to Bumble C and Optimus can use Dojo for theirs.
As you might guess, there is a potential overlap or synergy between the ML/DL/ANN efforts of the AI developers for Autopilot/FSD and those of the Bumble C and Optimus pursuits. I’ll say more about this so remain on the edge of your seat.
You are officially now onboard with what is going on and we can dive into the Tesla AI Day 2022 details.
Congrats on making it this far.
SOME KEY TAKEAWAYS ON VULNERABILITIES AND ISSUES
There is a slew of AI-related issues and concerns that arose upon watching the Tesla AI Day 2022.
I can’t cover them all here due to space constraints, so let’s at least pick a few to dig into.
In particular, here are five overarching issues I’d like to cover:
1) AI-related laws and legal Intellectual Property (IP) rights issues
2) AI-related laws newly coming onto the books, such as the California Age-Appropriate Design Code Act
3) AI ethics and the robotics problem
4) AI-related laws for self-driving are not the same for walking robots
5) Legal exposures of dovetailing AI teams for self-driving and walking robots
I’ll cover them one at a time and then do a wrap-up.
AI LAWS AND LEGAL INTELLECTUAL PROPERTY RIGHTS
We shall start with a legal entanglement yet to arise but that could be quite notable.
First, please be aware that Bumble C and Optimus were showcased as presumably walking robotic systems that seemed to have artificial legs, feet, arms, hands, something of a main torso, and a head-like structure. Thus, they resemble the humanoid systems you’ve seen envisioned in all manner of sci-fi films.
During the presentations, it was stated that Bumble C has semi-off-the-shelf components. This makes sense in that to rapidly devise a first prototype, the quickest approach usually consists of cobbling together other already-known and already-proven elements. This gets you up and running quickly. It buys you time to devise proprietary components if that’s what you want to ultimately have.
The presentations also seemed to indicate that Optimus was composed of predominantly homegrown or proprietary components. How much of the Optimus shown actually has that preponderance was unclear. Also, whatever it did have, the implied suggestion was that the goal is to be as proprietary as possible. This can make sense in that it means you can pretty much have full control over the components and not be reliant upon a third party to provide them.
So far, so good.
A bit of a hitch though might be coming down the pike.
Allow me a moment to explain.
You might be vaguely aware that Musk has derided the use of patents, a form of Intellectual Property (IP). His recent tweets have indicated that patents are apparently for the weak, implying that IP is seemingly used mainly for trolling purposes. On top of that, the implication is that IP such as patents slows down or impedes progress in technology.
Given that philosophy emanating from the top of Tesla, we have to ask ourselves some prudent questions.
Will Tesla be seeking patents for the proprietary components of Bumble C and Optimus?
If so, then doesn’t this imply that Tesla and Musk are “weak” in the same sense that Musk has derided others that seek IP protections?
If they don’t aim to get patents for the robotic systems, one wonders how they will feel if others start devising walking robots of a similar nature and do so by mimicking or outright copying Bumble C and Optimus. Will Tesla and Musk legally go after those that do so, claiming that the components are trade secrets and of a protected nature?
Or might they patent the technology and then make the patents openly available to all comers? This was considered an important means of enabling the adoption of EVs. Does the same apply to robotic systems?
Perhaps even more alarming for Tesla and Musk will be the possibility that they are infringing on other robotic systems that do have patents and established IP.
One could reasonably guess that at the frenetic pace of the AI developers and engineers at Tesla, they are not necessarily carefully and mindfully doing patent searches to make sure their components do not infringe on existing patents. The odds are that this is probably not top of mind, or even if discussed is possibly being set aside for now. Why worry now when you can push the potential IP legal problem further down the road? If you are faced with harsh deadlines, you make do for now and assume that someone else, perhaps years from now, will pay the price for that current neglect.
Patents galore exist in the AI space. There is a byzantine array of patents for robotic hands, robotic arms, robotic legs, robotic feet, robotic torsos, robotic heads, and the like. It is a legal minefield. I’ve been an expert witness in Intellectual Property rights cases in the AI field, and the enormous glut of patents, along with their often-overlapping nature, presents a foreboding bit of territory.
For those of you holding patents on robotics limbs and other walking robotics components, proceed to dig them out. Start taking a close look at Bumble C and Optimus. Get your IP lawyers queued up. With each passing day, a goldmine is being built for you, one that if it relies on your IP will be a tidy payoff from a gigantic company with gloriously deep pockets.
You can shrug off the stinky label of being “weak” while resplendently on your way to the bank.
AI LAWS AND RELATED LEGAL COMPLICATIONS COMING INTO EXISTENCE
You might be wondering what Bumble C and Optimus are going to be used for. Since Bumble C seems to be on the outs as a quick-and-dirty prototype, let’s just focus on Optimus, which is considered the ongoing and future robot of keen interest at Tesla.
What will Optimus be used for?
Musk has suggested that with such robots we will never need to lift a hand or do any kind of chore or physical work again. In the home, the walking robot will be able to take out the trash, put your clothes into the washer, fold your clothes after taking them out of the dryer, make your dinner, and do all manner of household chores.
In the workplace, Musk has suggested that such robots can take on assembly line work. Besides working in factories or potentially harsh working conditions, these robots can work in the office too. During the presentation, a short video clip of an office environment showed the robot moving a box as though delivering it to a human working in the office. We were even teased by a short video clip of the robot watering a plant in an office setting.
I am sure each of us can easily come up with a litany of ways to use a walking robot that has a set of features akin to that of humans.
I’ve got a twist for you.
Imagine Optimus is being used in a home. The robot is performing household chores. We would naturally assume that Optimus will have some form of conversational interactivity, perhaps like an Alexa or Siri. Without some viable means of communicating with the robot, you would be hard-pressed to comfortably have it moving around in your household among you, your significant other, your children, your pets, and the like.
For those watching online, we did not seem to be privy to any demonstration whatsoever of any speaking or conversational capacity of Optimus. Nor was there any semblance of an indication of the processing capabilities.
Instead, we only saw Bumble C barely able to walk out onto the stage (wobbly, uncertain, and I’m guessing causing heart stoppage for the engineers as they prayed to the robotics gods that the darned thing would not collapse or go amok). Optimus was pushed or manhandled onto the stage. No walking took place. We were informed that supposedly Optimus is on the cusp of being able to walk.
A classic demo dodge, and unbelievably so, especially given that much of the conventional news media seems to have bought into it.
Dancing robots everywhere must have felt shame for what happened on that stage.
But I digress. Back to Optimus serving as a walking robot in an everyday household and let’s assume children are present in this homestead.
California recently enacted a new law known as the California Age-Appropriate Design Code Act (CAADCA), a state statute that is distinct from the similarly child-focused federal COPPA privacy law. I am going to be discussing this new law in my column and you can bet that other states will soon be enacting similar laws. This is a law that anyone devising AI needs to know about (well, anyone devising any kind of computing that might come into contact with children needs to be aware of it too).
The gist of the law is that any system likely to be accessed by children will need to comply with provisions ensuring the privacy of the child. Various personal information that an AI system or any computing system might collect about the child has to meet specific children’s data privacy and children’s rights requirements. Penalties and other legal repercussions for failing to abide by the law have been specified.
If Optimus is used in a household that contains or might include children, the robot could readily be collecting private information about the child. Spoken utterances might be recorded. The location of the child might be recorded. All manner of detailed information about the child could be detected by the robot.
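As a purely hypothetical sketch of what compliance-minded engineering could look like, consider a data-retention gate in a home robot’s software that defaults to discarding data about minors. Every name and field below (Observation, retain_observation, subject_is_minor) is invented for illustration; this is one plausible pattern, not an actual Tesla design nor legal advice.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One unit of data the robot's sensors have captured (hypothetical)."""
    kind: str             # e.g., "audio", "location", "video"
    subject_is_minor: bool
    payload: bytes

def retain_observation(obs: Observation) -> bool:
    """Hypothetical policy gate: decide whether an observation may be stored.

    Under child-privacy laws like the CAADCA, a cautious default is to
    discard (or heavily minimize) data that likely pertains to a child.
    """
    if obs.subject_is_minor:
        return False  # default-deny: do not store data about children
    return True

# Usage sketch: filter a batch of captured observations before storage.
captured = [
    Observation("audio", subject_is_minor=True, payload=b"..."),
    Observation("location", subject_is_minor=False, payload=b"..."),
]
stored = [o for o in captured if retain_observation(o)]
```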
Has the Optimus team been considering how to abide by this new law and the emerging plethora of new AI laws?
Again, this is probably low on the priority list. My point though is that this law and other AI-related laws are spreading like wildfire. An AI-based walking robot is going to be walking into a hornet’s nest of laws. Tesla can either get attorneys on this now and anticipate what is going to legally arise, hopefully avoiding legal quagmires and providing guidance to the AI developers and engineers, or do the usual tech-oriented thing and just wait and see what happens (typically only after getting mired in a legal morass).
Pay me now, or pay me later.
Techies often don’t contemplate the pay-me-now option and end up getting caught by surprise, paying dearly later.
AI ETHICS AND THE ROBOTICS PROBLEM
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that many nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Have Tesla and Elon Musk been giving serious and devoted attention to the AI Ethics ramifications of a walking robot?
According to what was stated at the presentations, apparently only cursory attention has so far been allocated to the matter.
Musk was asked during the Q&A whether they have been looking at the big picture aspects of what walking robots will do to society. We already all know that Musk has repeatedly stated that he views AI as an existential risk to humankind, see my coverage at the link here. One would certainly assume that if one is making robots that will walk amongst us, and that he expects perhaps millions upon millions of these robots to be sold for public and private use, it naturally raises humankind’s Ethical AI issues.
The response to the question seemed to suggest that the efforts underway are too premature to be notably exploring the AI Ethics possibilities.
That’s yet another classic and woeful techie response.
Many AI developers and engineers consider AI Ethics to be an afterthought topic. No need to confuse existing AI work efforts. Just keep pushing ahead. Someday, sure, maybe AI Ethics will rear its head, but until then it is heads-down and full speed ahead.
Unfortunately, a head-in-the-sand approach to Ethical AI is bad news for everyone. Once the AI, or in this case the robotic system, gets further down the path of development, it becomes increasingly hard and costly to embed AI Ethics precepts into the system. This is a shortsighted way of dealing with Ethical AI considerations.
Suppose they wait until the walking robot is already being placed in people’s homes. At that juncture, the chances of harm to humans have risen, and besides the potential of doing actual damage, a firm that has waited until the latter stages will find itself facing enormous lawsuits. You can bet that tough questions will be asked about why these types of Ethical AI facets were not given due consideration and why they weren’t dealt with earlier in the AI development life cycle.
The fact that Musk has repeatedly brought up AI Ethics considerations when discussing the existential risks of AI makes this seeming oversight or lack of current concern for Ethical AI in his walking robots an even more puzzling matter.
Musk’s knowingness makes this especially disconcerting.
Some top execs don’t even know that there are Ethical AI issues to be confronted — I’ve discussed ardently the importance of companies establishing AI Ethics Boards, see the link here.
AI LAWS FOR SELF-DRIVING ARE NOT THE SAME FOR WALKING ROBOTS
I mentioned earlier that existing Teslas that use Autopilot and FSD are at Level 2 of autonomy.
This is handy for Tesla and Musk because they can cling to the idea that since a Level 2 requires a human driver actively at the wheel, nearly anything the AI self-driving system does gives them an escape hatch from a responsibility perspective. Tesla and Musk can merely insist that the human driver is responsible for driving.
Note that this will not be the case for Level 4 and Level 5, whereby the automaker or fleet operator, or someone will need to step in as the responsible party for the actions of a self-driving car.
Also, note that this claim of the human driver being responsible can only be stretched so far at both Level 2 and Level 3, and we will soon be seeing legal cases as to how far that can go.
During the presentation, there were several points made that the work on AI self-driving cars can be readily ported over or be reapplied to the realm of walking robots. This is somewhat true. This is though also somewhat misleading or in some instances a dangerous portrayal.
We can start with the obvious carryover involving AI-based vision processing. Self-driving cars make use of video cameras to collect imagery and video of the surroundings around the vehicle. ML/DL/ANNs are typically used to computationally find patterns in the collected data. You would do this to identify where the roadway is, where other cars are, where buildings are, and so on.
In theory, you can reuse those same or similar ML/DL/ANNs to try and figure out what a walking robot is encountering. In a household, the robot vision system would be scanning a room. The video and imagery collected could be computationally examined to figure out where the doors are, where the windows are, where the couch is, where people are, etc.
Seems sensible.
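For the technically inclined, that kind of carryover resembles what the ML field calls transfer learning: take a vision network trained in one domain and fine-tune it for another. Here is a minimal sketch in Python using PyTorch and torchvision, with a generic ImageNet-pretrained ResNet standing in for whatever proprietary network Tesla actually uses, and five invented household categories:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a vision backbone pretrained on one domain (generic ImageNet
# weights stand in here for a network trained on driving scenes).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the already-learned visual features so only the new head trains.
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer to recognize household categories instead of
# roadway categories (the 5 classes are invented for illustration:
# door, window, couch, person, pet).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters get optimized during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)

# A single fake "household image" batch confirms the forward pass works.
images = torch.randn(8, 3, 224, 224)
logits = backbone(images)  # shape: (8, 5)
```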
But here’s the twist.
For those Level 2 self-driving cars, the driving is dependent upon a human driver. The legal responsibility for what the car does is generally on the shoulders of the human driver. No such protection is likely in the case of a walking robot.
In other words, a walking robot is in your house. Assume that you as an adult are not teleoperating the robot. The robot is freely moving around the house based on whatever AI has been set up in the walking robot.
The robot bumps into a fishbowl. The fishbowl crashes to the ground. A child nearby is sadly cut by flying glass. Luckily, the child is okay and the cuts are minor.
Who is responsible for what happened?
I dare say that we would all reasonably agree that the robot is “at fault” in that it bumped into the fishbowl (all else being equal). There is an ongoing and heated debate about whether we are going to assign legal personhood to AI and ergo potentially be able to hold AI responsible for bad acts. I’ve covered that at the link here.
In this case or scenario, I don’t want to get stuck in the question of whether this AI has legal personhood. I am going to say that it does not. We will assume that this AI has not risen to a level of autonomy that we would believe is deserving of legal personhood.
The responsible party would seem to be the maker of the walking robot.
What did they do to devise the robot to avoid bumping into things? Was it foreseeable that the robot might do this? Was there an error inside the robot that led to this action? On and on, we can legally question what took place.
Have Tesla and Musk realized that the legal wink-wink they are doing with their cars is not likely to carry over to the robots that they are seeking to make?
Walking robots are a different animal, as it were.
Once again, legal and ethical repercussions arise.
LEGAL EXPOSURES FOR DOVETAILING THE TEAMS
The presentations suggested that a lot of crossover is taking place between the AI self-driving team and the walking robotics team. Per my earlier indication, this does seem sensible. Many aspects of the hardware and the software have similarities and you might as well get double duty when you can. In addition, this can hopefully speed up the robotics side as it frantically tries to get going from a dead start and catch up with the aspirational declarations of Musk.
There is though a twist.
Seems like there is always a twist, but, then again, life seems to be that way.
Suppose the AI self-driving team is stretched thin trying to help the walking robotics team. We can certainly envision that this could readily happen. Here they are, having their hands full trying to advance Autopilot and FSD to higher and higher levels of autonomy, and meanwhile they are being yanked into the walking robotics team, which is sprinting ahead on its efforts.
To what degree is the AI self-driving team becoming distracted or overwrought by this dual attention, and will it impact the self-driving ambitions?
And, not just ambitions, but you can logically anticipate that burnout by the self-driving team could lead to bugs creeping into the self-driving system. Perhaps they didn’t do the triple checking that they usually did. Maybe they got feedback from the walking robotics team and changed the self-driving code, though this change might not have been as well-tested and well-measured as it should be.
In short, anyone seeking to sue Tesla over the self-driving would now have a ripe opportunity to contend that whatever issues might be claimed or found in Autopilot or FSD would not have been there but for the management decision to dovetail the two otherwise disparate teams into working together.
Imagine how that might look to a jury.
The self-driving team was zooming along and entirely focused on self-driving. They then got lurched over into this new walking robotics effort. The contention could be that this led to errors and omissions on the self-driving side. The company wanted to have its cake and icing too, but ended up splitting the cake, and some of the icing fell to the floor.
We don’t know that the dovetailing has created such vulnerabilities. It is simply a possibility. But for sharp lawyers looking to go after Tesla on the self-driving side, a door is being opened.
CONCLUSION
A lot of eye-rolling resulted from the Tesla AI Day 2022.
For example, Musk indicated that the walking robots will produce on the order of two times the economic output of humans. He even followed that claim by saying that the sky is the limit on productivity possibilities.
Where are the definitive figures that can transparently illuminate the two times or N-times productivity improvements?
I am not saying that the two-times or N-times figures are wrong. The issue is that such unsubstantiated claims, coming out of thin air, are pure hyperbole until some substance to support them is provided. The particularly worrisome aspect is that reporters are reporting that he made such claims, and those claims in turn are gradually going to be repeated and repeated until they become “factual” and nobody realizes that it was all concocted, perhaps off-the-cuff.
Another eye-rolling statement was that Musk said that the walking robots might cost around $20,000.
First, if that turns out to be the case, it is remarkable given the likely cost of the components and the costs associated with the development and fielding of the walking robots, plus presumably a need for a tidy profit. How did he come up with the number? Because it sounds good or because it was based on a solid analysis?
We also do not yet know, nor was there any discussion about, the maintenance associated with these walking robots. The maintenance of a car is quite different from the maintenance of a walking robot. How will the robot get to whatever maintenance location is needed, given the bulky size and weight involved? Will human maintenance workers need to come to your home to do the maintenance? How much will maintenance cost? What frequency of maintenance will be needed?
Suppose that the cost is $20,000 or akin to that figure. I’m sure that for Musk, the $20,000 seems like pocket change. How many people could afford to buy one such walking robot at the $20,000 price tag? I dare say, not many. You could try to argue that it is the cost of a car (a lower-end car). But a car has seemingly a lot more utility than a walking robot.
With a car, you can get to work and make money to pay your bills. You can use a car to go and get your groceries. A car can allow you to get to a hospital or take a trip for fun. A walking robot that waters the plants in your home or makes your bed for you does not quite seem to have the same valued utility.
To clarify, yes, there would be many people at higher income levels that could afford to have a walking robot in their home. In that sense, there definitely would be some market for walking robots. The question though is whether this will be equitable in our society. Might there be those that can afford walking robots and those that cannot?
We might also reasonably doubt that walking robots would get the same sense of societal respect and earnest support that EVs get. You can sell EVs by emphasizing that they help the environment in comparison to conventional cars. The government can also provide incentives to do so. Does any of that also apply to walking robots? Seems like a harder sell.
A few more comments and we’ll close off this discussion for now.
A notable eye-roller about the walking robots entails the unabashed anthropomorphizing of the robots.
Anthropomorphizing refers to the portrayal of AI as being on par with humans. People can be fooled into thinking that AI can do what humans can do, possibly even exceeding what humans can do. Those people so tricked are then likely to end up in a dire pickle. They will assume that the AI can perform in ways that it actually cannot.
When the Bumble C walked out onto the stage, it waved its arms. The arm waving was exactly what you would expect a human to do. Your initial gut reaction is bound to be that the walking robot was “thinking” and realized that it was walking onto a stage in front of people and cameras. The robot decided that it would be polite and sociable to wave at the assembly.
I assure you that the robot was not “thinking” in any manner of human thinking.
For all we know, there was an operator that was standing somewhere nearby or maybe working remotely that was controlling the arms of the robot. In that sense, the robot didn’t have any software that was operating the arms.
Suppose that there was software operating the arms. The software was probably extremely simplistic, such that once activated it would raise the arms, wave, and continue for some short period of time. It is highly unlikely that the software consisted of a vision processing system that was capturing video imagery of the audience and then made a computational “reasoning” to opt to wave the robot’s arms.
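To illustrate the gap, here is a hypothetical sketch (the DummyArm class, motion values, and the commented-out perception step are all invented purely for contrast) of the kind of canned wave routine likely at play, versus the perception-driven behavior that audiences tend to assume:

```python
import time

class DummyArm:
    """Stand-in for a robot arm controller (hypothetical interface)."""
    def move_to(self, shoulder_deg: float, elbow_deg: float) -> None:
        print(f"arm -> shoulder {shoulder_deg} deg, elbow {elbow_deg} deg")

def scripted_wave(arm: DummyArm, duration_s: float = 2.0) -> None:
    """The simple kind of routine likely at play on stage: a canned,
    timed motion that runs identically no matter who is watching."""
    arm.move_to(90, 45)  # raise the arm
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        arm.move_to(90, 30)   # wave out
        arm.move_to(90, 60)   # wave back
        time.sleep(0.5)

# What audiences tend to assume happened -- perceive, "decide", then act --
# would require a vision pipeline that nothing on stage demonstrated:
#   if detect_people(camera.capture()):  # hypothetical perception step
#       scripted_wave(arm)

scripted_wave(DummyArm())
```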
My point is that having the walking robot wave is a false or misleading portrayal of what the robot can actually do, and it fools people into assuming that the robot is human-like. I’ve voiced the same concerns about the dancing robots, by the way. It is cute and grabs headlines to have waving robots and dancing robots. Unfortunately, it also overstates what these robots can actually do.
Referring to the processor of the walking robots as a Bot Brain is yet another example of anthropomorphizing. Those processors are not brains in the sense of human brains. It is a misappropriation of wording.
You might be right now exclaiming that everyone or at least many in AI leverage anthropomorphizing to try and stand out and make their AI receive praise and attention. Yes, I would agree with you. Does that though make two wrongs turn into a right? I don’t think so. It is still a bad approach and we need to try and curtail or at least reduce its popularity. This admittedly is like pushing a hefty boulder up a steep and never-ending hill.
Now let’s render a final remark on this topic.
Elon Musk has previously stated this about where AI is heading: “Mark my words, AI is far more dangerous than nukes…why do we have no regulatory oversight?” He made similar statements during the Tesla AI Day.
I agree with him on having regulatory oversight, though I add a bit of clarification that it has to be the right kind of regulatory oversight. I’ve taken to task regulatory oversight about AI that is woefully missing the mark, such as explained at the link here.
One hopes that Tesla and Musk will not only support the advent of prudent and proper laws about AI, but they will also be a first-mover to showcase the importance of both soft laws such as AI Ethics and hard laws that are on the books.
As sage wisdom tells us, our words serve as a lamp to guide our feet and a light for the path ahead.
That about covers things.
Source: https://www.forbes.com/sites/lanceeliot/2022/10/02/five-key-ways-that-ai-ethics-and-ai-laws-reveal-troubling-concerns-for-teslas-ai-day-showcase-and-the-ever-expanding-ai-ambitions-of-elon-musk/