Your Golden Opportunity To Shape U.S. AI National Priorities By Answering Vital Questions About The Future Of AI And Humanity, Get To It Says AI Ethics And AI Law

Opportunity awaits you.

In today’s column, I’ll be taking a close look at how U.S. national priorities on AI are being formulated, including the fact that the White House has recently asked the public for input by seeking earnest answers to more than two dozen AI human-alignment questions. You can take part in possibly shaping the direction of our nation when it comes to the advent and the seemingly unstoppable, perhaps (some say) out-of-control progress of advanced AI.

Most people would assume that they would never get a chance to proffer their two cents about AI. Sure, you might post something on social media or splash a comment on your personal website or blog. The chances are quite high that the missive will be lost in a vast sea of wide-eyed and wild perspectives concerning AI.

Instead, in the opportunity I am about to describe, your viewpoints are supposed to go directly to those who will be advising our top leaders on matters of AI.

I suppose you could say that this is your golden opportunity to have your voice heard, assuming you have something worthwhile to say about AI. If you don’t have anything of substance to offer, then you might want to wait and later on explore a likely recap that will be produced out of this governmental initiative (I’ll be covering that recap in a future column, once the final briefing comes out). Furthermore, though you can decide what to do, it probably would be wise to avoid sending in garbled and nonsensical responses since doing so will make the task of ferreting out insightful and useful replies that much more arduous. Please don’t gum up the works, as it were.

You decide what you think is best to do.

A snarky person would undoubtedly insist that providing input to the government is an exercise in futility. They would probably contend that any such attempt to get widespread public input is itself an absurd charade. All those submitted responses will simply end up in a government-approved, overly expensive, larger-than-life-sized wastebasket. Nothing you say will ever see the light of day. Your comments will be fodder for bouts of immense laughter within the halls of governmental agencies.

A cynic would go further and proclaim that this is a boondoggle that pulls the wool over the eyes of the public by making us think that we are providing input. The reality, it seems, is that our government has already made up its mind. This ruse insidiously aims to placate people, making them complacent so that they falsely presume that the government wants to abide by public desires.

I don’t know what to tell you about all of those downbeat reactions. Could their pessimism and assumptions of tomfoolery be accurate and apt? One supposes so. On the other hand, I’d prefer to take a slightly more happy-face perspective and hope that this is relatively genuine and on the level. Anyone in the dour and sour camp is unlikely to be persuaded otherwise. Let’s try to be upbeat and proceed on the basis that this is going to be productive and useful.

Call me optimistic.

Here’s a different twist on the topic.

We might at least take a gander at the questions that are being asked.

Sometimes the questions being posed say as much as the myriad proposed answers that ultimately arise. You see, the questions tend to identify what seems to be at issue. The questions highlight what we seemingly don’t know and what we imagine to be important among the things we don’t know. Also, the answers might be extraordinarily far down the road, or possibly there aren’t any discernible answers to be had.

Contemplating the nature of the questions is something tangible and in the here and now.

I am reminded of one of my all-time favorite movies, The Russia House. This was a thinking person’s spy film and regrettably did not get as much attention as it rightfully deserved. The cast included stalwarts such as actors Sean Connery, Michelle Pfeiffer, Roy Scheider, and others. If you relish a good plot and old-time U.S. versus Soviet Union cold war intrigue, this is the movie for you (side note: this is not an action film, so don’t have in mind explosions, shoot-outs, and the like).

I bring up the enjoyable movie due to a key plot point that connects directly to my discussion herein.

This is a spoiler alert, so you might skip the rest of this paragraph if you aim to watch the movie and want to be surprised by what happens. Ready? Okay, here it comes. A crucial matter at the end of the film is that a list of questions about nuclear armaments turns out to be a kind of reveal, in that it shows the hand of the side that devised the questions. This is a shocker that perhaps those watching the movie might not have anticipated. I’ll add that some critics have argued that the ending is, shall we say, far-fetched, and that the list could have been seeded with all manner of cleverly concocted questions, seeking to fool the other side. Anyway, it is a noteworthy conception and a handy exemplar of the inherent value of questions.

Questions do matter.

Moving on, here’s what I’m going to cover in this discussion.

First, I’ll provide background about which governmental agency is asking the questions regarding U.S. national AI policies. It turns out that there is now a plethora of agencies seeking input on a wide range of AI aspects, though oftentimes on a specific subtopic such as AI used in home appraisals, AI used for hiring and firing purposes, etc. I’ve covered many of those in my prior columns, see the link here.

This particular request for input is of a broader nature encompassing overarching national priorities about AI across the board.

After explaining the context, we will jump into the questions that have been posed. Some of the questions might seem daunting. If you aren’t already up to speed on the latest in AI, perhaps some or many of the questions won’t seem to make much sense to you.

I am going to provide a brief elucidation to showcase why each question is likely being asked and what types of answers are seemingly being sought. You might liken this to taking a test for which the instructor kindly provides you beforehand with a convenient cheat sheet to give you a leg up. I will also be providing links to my column coverage on a variety of pertinent topics in case you want to learn more about the latest in AI.

One thing you must certainly know is that AI is being portrayed these days as a doom and gloom pursuit.

You would have to be living in a cave that has absolutely no Wi-Fi access to not be aware that blaring headlines are currently warning us about AI as an existential or extinction risk, see my balanced analysis about that provocative claim, at the link here. For those of you wondering why these outspoken urgings about AI are right now incessantly garnering massive headlines and spurring widespread social media chatter, the root of this recent spark can be traced to the emergence of generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, along with other generative AI apps such as GPT-4 (OpenAI), Bard (Google), Claude (Anthropic), etc.

Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialoguing and producing essays that appear to be composed by a human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on text found on the Internet. For my detailed elaboration on how this works, see the link here.
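To make that pattern-matching notion a bit more concrete, here is a deliberately toy sketch in Python that predicts the next word by counting how often word pairs appear in a snippet of training text. Real generative AI relies on gigantic neural networks rather than simple counts, and the tiny corpus and function names here are my own illustrative inventions, so treat this strictly as an analogy for learning patterns from text rather than an actual depiction of how ChatGPT works.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for Internet-scale text.
corpus = "the cat sat on the mat and the cat ran".split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently seen follower of the given word."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # prints "cat" since "cat" followed "the" twice
```

The takeaway is simply that the model has no understanding of cats or mats; it is statistically echoing patterns in the text it was trained on, which is the essence of the mimicry I mentioned.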

The usual approach to using ChatGPT or any other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and the seemingly fluent nature of those AI-fostered discussions can at times be startling. The reaction by many people is that surely this might be an indication that today’s AI is reaching a point of sentience.

To make it abundantly clear, please know that today’s generative AI is not sentient, and indeed no other type of AI is currently sentient. Whether today’s AI is an early indicator of a future sentient AI is a matter of highly controversial debate. The claimed “sparks” of sentience that some AI experts believe are showcased have little if any ironclad proof to support such claims. It is conjecture based on speculation. Skeptics contend that we are seeing what we want to see, essentially anthropomorphizing non-sentient AI and deluding ourselves into thinking that we are a skip and a hop away from sentient AI. As a bit of up-to-date nomenclature, the notion of sentient AI is also nowadays referred to as attaining Artificial General Intelligence (AGI). For my in-depth coverage of these contentious matters about sentient AI and AGI, see the link here and the link here, just to name a few.

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

With those foundational points, we are ready to jump into the matter at hand.

The White House Wants To Know About Your Views On The Future Of AI

There is a federal entity known as the Office of Science and Technology Policy (OSTP).

Have you ever heard of it?

I would think that not many are especially familiar with this entity since it doesn’t quite get as much media attention as a slew of other federal agencies (though someone steeped in science or technology might be semi-familiar with it). The OSTP was established by Congress in response to the realization that U.S. presidents over the years had sought ongoing science and technology advisement on a somewhat ad hoc basis. Congress put this entity formally into law via the National Science and Technology Policy, Organization, and Priorities Act of 1976.

What does the OSTP do, you might be wondering.

The overall mandate is to provide the sitting U.S. President with insights and advice about science and technology, ranging across both national and international considerations. The Director of the OSTP is informally known as the Science Advisor to the President. To clarify, the OSTP is part of the Executive Office of the President (EOP) and is thus construed as an element of the White House.

Here is the stated mission as currently posted on the official website of the OSTP:

  • “Providing advice to the President and the Executive Office of the President on matters related to science and technology;”
  • “Strengthening and advancing American science and technology;”
  • “Working with federal departments and agencies and with Congress to create bold visions, unified strategies, clear plans, wise policies, and effective, equitable programs for science and technology;”
  • “Engaging with external partners, including industry, academia, philanthropic organizations, and civil society; state, local, Tribal and territorial governments; and other nations; and,”
  • “Ensuring equity, inclusion, and integrity in all aspects of science and technology.”

A notable pronouncement about AI by the OSTP took place last year when they unveiled an AI Bill of Rights, which I covered extensively at the link here. One quick point that I made is that the naming of the AI Bill of Rights was somewhat confusing since the moniker perhaps suggests that this was a list of rights for AI, as though AI was gaining ground on legal personhood. The listing was in fact about human rights in an era of AI. Just wanted to make that clear. Also, if you are interested in the trials and tribulations associated with whether or not we ought to assign legal personhood to AI, take a look at my analysis at the link here.

Recently, the OSTP issued an RFI (request for information) seeking public input about the establishment of national priorities associated with AI.

Here are two excerpts from that RFI that identify the basis for the missive:

  • “AI has been part of American life for years, and it is one of the most powerful technologies of our generation. The pace of AI innovation is accelerating rapidly, which is creating new applications for AI across society. This presents extraordinary opportunities to improve the lives of the American people and solve some of the toughest global challenges. However, it also poses serious risks to democracy, the economy, national security, civil rights, and society at large. To fully harness the benefits of AI, the United States must mitigate AI’s risks.”
  • “By developing a National AI Strategy, the Federal Government will provide a whole-of-society approach to AI. The strategy will pay particular attention to recent and projected advances in AI, to make sure that the United States is responsive to the latest opportunities and challenges posed by AI, as well as the global changes that will arrive in the coming years. Through this RFI, OSTP and its National AI Initiative Office seek information about AI and associated actions related to AI that could inform the development of a National AI Strategy.”

I’ll make a quick remark and then continue with this discussion. I noted above that the OSTP serves whichever U.S. President is sitting at the time. As you know, our country seems intensely polarized these days. I point this out because some might express doubts about how the national priorities on AI will be shaped depending upon which U.S. President occupies the hallowed office at the time. If you are perchance on one side or the other of today’s polarization, you might either be heartened that the national priorities on AI are being devised at this time, or you might believe the result is going to be skewed.

Time will tell.

The RFI allows for responses by individuals and by organizations. You can expect that lots of companies, think tanks, and the like will be responding to the RFI. Individuals responding will probably tend toward those in the know about AI, such as AI researchers, AI faculty, AI scholars, AI scientists, etc. That being said, since AI is such a heated and outspoken topic nowadays, I would bet that a lot of people who are not necessarily within the AI field per se will respond too.

Yes, all of us have a gigantic stake in the AI realm.

AI is obviously already impacting our lives. This is assuredly going to become even more pronounced. In days past, matters of policy such as AI Ethics and AI Laws were often left to those who seemingly were versed in AI, such as AI technologists. I think we can all now see that a diverse and widespread set of viewpoints is needed. We are all in this village together.

There are twenty-nine questions in the RFI. You can answer just one question if there is only one that catches your eye or that you believe you can best address. You can answer a smattering of questions. You can answer all of them. Pick and choose to your heart’s content. The last question of the twenty-nine is a catchall in case you didn’t see a pressing question that you think should have been asked. You can supply such additional questions and presumably provide a semblance of answers to them.

Make sure to follow the instructions in the RFI as to how to format and provide your responses. I’ve submitted to these types of RFIs many times. If you foul up and do not follow the instructions, it becomes easy for your response to be waylaid. Though you might be tempted to carp about having to follow stated instructions, the instructions are usually of merit and make it feasible to figure out what you have to say. Imagine that they get thousands of responses that are all over the map in terms of formatting and structure. It would be a veritable nightmare to try and assemble and glean what was in the formidable morass.

This brings up a related facet. The RFI indicates that you are limited to responses that are ten pages in length with 11-point or larger fonts being used (the page limit is exclusive of a cover letter and a set of references or a bibliography for your response). Some of my AI colleagues have been irked at the ten-page content limit. They exclaim that you cannot sensibly answer roughly thirty hefty questions about AI in just ten pages, allowing only about three answers per page. Just like taking one of those essay tests in school, they insist that you would have to write in something like a 4-point font to squeeze your answers into a measly ten pages and that the true limit should be more akin to perhaps a hundred pages or more.

Well, a point noted, but no sense in tilting at windmills, one might say.

The RFI also notes that the responses might inevitably be posted online or otherwise publicly released. Keep this in mind. I say this because, as the RFI forewarns, you should not include proprietary information since it could end up widely available. Be careful too about using copyrighted materials since the copyright owner might come after you for violating their copyright, depending upon how egregious the copying is.

There is a bit more in the RFI that provides salient caveats and notifications. Make sure to read it carefully if you intend to reply to the RFI.

The twenty-nine questions are organized into a half-dozen categories. This is somewhat helpful because otherwise the list of questions might seem haphazardly scattered and endless. By loosely grouping the questions, it perhaps adds some clarity. Not everyone concurs with the categorization. Some assert that a given question belongs in a different category or should appear in more than just one category. Some insist that additional categories should have been provided and that the half-dozen categories don’t delineate the questions finely enough. On and on it goes.

Here are the stated categories:

  • Protecting rights, safety, and national security
  • Advancing equity and strengthening civil rights
  • Bolstering democracy and civic participation
  • Promoting economic growth and good jobs
  • Innovating in public services
  • Additional input

You could assert that by having six categories you are dividing up the roughly thirty questions into sets of five, on average. That seems manageable. Should there be ten categories and sets of three on average? Maybe yes, maybe no. Again, perhaps this is being somewhat nitpicky, and in any case, it is what it is.

I will next list each of the questions as shown within their respective stated category. I will show you in quotes the question posed. I then provide a brief elucidation and hopefully some useful hints of what the question is seeking to divine.

After having walked you through the questions, I’ll finish up with a few recommendations on how to mull over and compose your response. Perhaps a given question or several might get your juices flowing and you will eagerly seek to respond to the RFI. Another possibility is that you’ll find useful the set of questions and it will get your mind racing as to what is happening with AI. You might decide that there isn’t a particular need for you to respond to the RFI. All in all, you might be entirely satisfied with at least knowing what kinds of questions are being raised to identify our national priorities on AI.

This brings up yet another related sidebar.

Some within the AI arena are concerned that there are only twenty-nine questions, well, really twenty-eight that are specific since the last one is about providing your own added questions. Can twenty-eight explicit questions truly address the gamut of AI Ethics and AI Law issues that we are confronting now and that we must plan to confront?

I assure you that it would be relatively easy, possibly trivial, for anyone versed in these topics to come up with a dozen more questions. Actually, make that many dozens of questions. I suppose then that you could fault the RFI for only asking the roughly thirty that are being posed. But, of course, the RFI does provide that last question that asks for more questions, a kind of escape clause or pressure valve.

The retort to that “last question” open-ended option is that nobody knows right now what those other added questions are. In essence, everyone is right now focusing on the twenty-nine. Suppose that a response includes a crucial firebrand of a question that wasn’t in the twenty-nine. Only the OSTP will know. No one else can try to respond to it, not even knowing that the question existed.

You see the dilemma, I’m sure.

The reply to that consternation is that presumably the results will be made public. And, presumably, this will be an iterative process. If there are new questions posed that are vital, we might hope that those will be promulgated, and additional rounds of responses undertaken.

Another stark concern: suppose the answers are suppressed, or the additional questions that are submitted are suppressed. Suppose we aren’t informed about what answers were given, possibly because the entity decided the answers weren’t worthy, or maybe they are worried about new questions that seem to be forbidden. Etc.

Here’s what the RFI overall indicates:

  • “OSTP invites input from any interested stakeholders. OSTP will consider each comment, whether it contains a personal narrative, experiences with AI systems, or technical, legal, research, policy, or scientific materials, or other content that meets the instructions for submissions to this RFI.”

I believe we’ve covered enough of the underlying hullabaloo and can now get into the weeds.

You might want to sit in a comfy chair and have a glass of fine spirits at the ready.

Are you ready?

Here we go.

Category (a): Protecting Rights, Safety, And National Security

  • 1. “What specific measures – such as standards, regulations, investments, and improved trust and safety practices – are needed to ensure that AI systems are designed, developed, and deployed in a manner that protects people’s rights and safety? Which specific entities should develop and implement these measures?”

This first question is a mouthful. Turns out that all of the questions are of similar complexity. I’ll just briefly give demonstrative pointers.

The deal here is that AI is being devised, and will undoubtedly be further and more expansively devised, in ways that undercut human values, commonly known as the AI human-alignment problem or gap, see my coverage at the link here. A mounting issue is how to best ensure that AI aligns with humanity, such as by protecting our human rights. You see, AI of even a non-sentient variety can contain undue biases, perform in discriminatory ways, and so on. On top of that, there is a clear indication that AI can gravely endanger us. Consider the ongoing and expanding use of AI-based apps that control our railroads, our electrical grid, and otherwise all manner of daily systems that we vitally depend upon. AI that goes awry or maybe is cyber-hacked can spell trouble and substantively harm us.

Given those looming concerns, we ought to be setting standards that require AI makers and those that deploy AI to be mindful of what the AI they are fostering is and what it can do. Along with standards, we will need AI Law regulations that pertain to an AI era. Should we let existing standards bodies establish the standards or do we need some other means of doing so? What is the timeline for those standards? How will the standards be implemented and enforced? Do existing laws sufficiently cover us in an AI era, or do we genuinely need new laws that speak directly to AI issues?

If you have reasonable thoughts and constructive answers to these daunting matters, you might want to consider responding to the RFI. I phrase things that way because if your answer is simply a stark declaration to ban all use of AI, now and forever, that’s an imprudent answer that sorely lacks rationality and sensibility, see my discussion at the link here.

  • 2. “How can the principles and practices for identifying and mitigating risks from AI, as outlined in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework, be leveraged most effectively to tackle harms posed by the development and use of specific types of AI systems, such as large language models?”

Now that I’ve warmed you up with my elucidation for the first question above, I can go a bit faster in showcasing my explanations henceforth.

This second question pertains to the AI Bill of Rights, so you’ll want to get up-to-speed on it, see my analysis at the link here. In addition, this question refers to the AI risk management framework (RMF), which I’ve covered at the link here.

Okay, once you’ve learned about those two artifacts, the issue posed in this question is how we implement those various mechanisms. The implementation details have not been explicitly laid out as yet, and we need to figure out how best to put those precepts into practice. It is time to make the rubber meet the road.

  • 3. “Are there forms of voluntary or mandatory oversight of AI systems that would help mitigate risk? Can inspiration be drawn from analogous or instructive models of risk management in other sectors, such as laws and policies that promote oversight through registration, incentives, certification, or licensing?”

This third question slops over somewhat into the prior question about AI risks.

We are faced with a tradeoff: ask AI makers and those that deploy AI to voluntarily do the right thing by avidly seeking to reduce or mitigate AI risks, or outright force them into doing so via mandatory approaches. One viewpoint is that everything should be mandated. The downside there is that some believe this could constrain advances in AI and also skyrocket the cost to bring AI into the marketplace. There is a tension between allowing for innovation and at the same time preventing wholesale Armageddon. See my coverage at the link here.

  • 4. “What are the national security benefits associated with AI? What can be done to maximize those benefits?”

You often hear about the existential risks of AI, such as AI that enslaves us. That brings up the distinction between non-sentient AI and a postulated but completely speculative sentient AI. I am going to herein focus on non-sentient AI. That’s enough of a handful for now.

The thing is, contemporary “ordinary” AI has downsides, plus upsides that we also need to give weight to. I’ve discussed that much of today’s AI has a dual-use capacity, meaning that it can be put to good and helpful uses, while with hardly any changes it can equally be deployed for wrongdoing and evil purposes (see the link here).

We must make sure that we don’t lose sight of the positives of AI. This specific question asks for an indication of the national security benefits that AI can bring to our country. AI that is properly devised and deployed could aid in protecting us from being overpowered by other nations or might aid us in dealing with natural calamities. Besides identifying those upsides, the question also asks you to explain how those upsides can be fruitfully maximized rather than simply idly or minimally put into use.

  • 5. “How can AI, including large language models, be used to generate and maintain more secure software and hardware, including software code incorporating best practices in design, coding and post-deployment vulnerabilities?”

This fifth question refers to AI such as large language models (LLMs), which you can think of as generative AI, per my earlier description herein. You typically might use generative AI or LLMs to generate essays or engage in an interactive dialogue, perhaps going over the life of Abraham Lincoln or when seeking to come up with a new recipe for a delicious dinner.

Another use of generative AI or LLMs consists of using those AI apps to do computer programming or software development, see the link here. The belief is that software will gradually become a lot easier to devise and implement by using AI as a coding tool, either as an adjunct to a human developer or in lieu of needing a human programmer. The same applies to designing and fielding hardware. A possible gotcha is that the AI might inadvertently craft code or hardware that has bugs or holes. Those then might be exploited by cyberhackers (see the illustrative snippet below). This question wants you to speculate how we might be able to deal with or limit those potentially disastrous exposures.
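As a hypothetical illustration of the kind of hole that could quietly slip into AI-generated code, consider this short Python snippet. The vulnerable function builds an SQL query via string concatenation, a classic injection flaw, while the second function shows the safer parameterized form. To be clear, this is my own concocted example of the general risk, not code drawn from any actual generative AI output.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL,
    # so passing "x' OR '1'='1" returns every row in the table.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the database driver escapes the input.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # leaks all rows
print(find_user_safe("x' OR '1'='1"))    # returns an empty list
```

Flaws like this are easy to overlook when the code arrives ready-made and looks plausible, which is precisely why the question asks how we might systematically catch and remediate them.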

  • 6. “How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?”

This is another question that has a cybersecurity theme to it.

Our existing and evolving critical infrastructure such as our Air Traffic Control (ATC), our energy supply system, and so on, extensively makes use of computers, and we daily run the risk that those computers will be hacked into performing adverse actions. Maybe we can use AI to spot those vulnerabilities. Maybe we can use AI to address those vulnerabilities.

Well, forthrightly, we already know that AI can be used for those purposes, so the crux here is how to best devise this and do so on a timely and responsive basis.

  • 7. “What are the national security risks associated with AI? What can be done to mitigate these risks?”

You might recall that question #4 above was about the national security benefits of AI. This seventh question is the other side of the coin, namely what are the national security risks of AI? Of course, besides laying out the risks, you are to proffer means to mitigate or reduce those risks. For ideas on this, see my coverage at the link here and the link here.

  • 8. “How does AI affect the United States’ commitment to cut greenhouse gases by 50-52% by 2030, and the Administration’s objective of net-zero greenhouse gas emissions no later than 2050? How does it affect other aspects of environmental quality?”

This question is the first on this list that covers a particularly specific matter. All told, this pertains to the use of AI as a hoped-for ally in dealing with climate change, see my analysis at the link here.

Category (b): Advancing Equity And Strengthening Civil Rights

  • 9. “What are the opportunities for AI to enhance equity and how can these be fostered? For example, what are the potential benefits for AI in enabling broadened prosperity, expanding economic and educational opportunity, increasing access to services, and advancing civil rights?”

The questions earlier in this list were likely to dovetail somewhat into the matter of human rights and equity, but perhaps only peripherally so. This one is squarely aimed at AI for enhancing equity.

I realize the question is written toward emphasizing the upsides of AI enabling equity, but I would assume that in that same light, the answer you might devise would cover the use of AI that undermines or undercuts equity and how that is to be dealt with. Up to you.

  • 10. “What are the unique considerations for understanding the impacts of AI systems on underserved communities and particular groups, such as minors and people with disabilities? Are there additional considerations and safeguards that are important for preventing barriers to using these systems and protecting the rights and safety of these groups?”

This question involves ascertaining how AI can and does impact particular segments of society. The wording seems to emphasize safeguards and protections. I would guess that you might also include how AI can be supportive of and beneficial to societal segments, covering both sides of the AI coin. Seems reasonable to include.

  • 11. “How can the United States work with international partners, including low- and middle-income countries, to ensure that AI advances democratic values and to ensure that potential harms from AI do not disproportionately fall on global populations that have been historically underserved?”

I’ve previously covered in my column postings that AI is a nation-state consideration, see the link here and the link here. We are going to have nations that seek to use AI as a tool to overpower other nations, or at least attempt to subjugate other nations based on the leverage of AI. That’s worth keeping a watchful eye on.

Another viewpoint is that AI will bolster democracy and aid countries in the pursuit of freedom. Some countries might not have the resources to build or deploy AI. Should the U.S. seek to help countries that don’t have AI resources to employ AI, and if so, how should we do so? This might be especially difficult to pull off for countries that are impoverished or face other struggles.

  • 12. “What additional considerations or measures are needed to assure that AI mitigates algorithmic discrimination, advances equal opportunity, and promotes positive outcomes for all, especially when developed and used in specific domains (e.g., in health and human services, in hiring and employment practices, in transportation)?”

I mentioned earlier in this discussion that there are federal agencies in specific areas that are also exploring how to best guide AI development and implementation that falls in their particular realm or sphere. Most of the other questions so far on this list are rather generally focused and not specific to any particular sphere.

This one is a handy question for those of you who are especially versed in a specific realm. You can offer your two cents about AI in the realm that you know best. Go at it.

  • 13. “How might existing laws and policies be updated to account for inequitable impacts from AI systems? For example, how might existing laws and policies be updated to account for the use of generative AI to create and disseminate non-consensual, sexualized content?”

Generative AI is known for being able to spew hate speech and other unsavory discourse, see my coverage at the link here. Attempts to minimize or catch these untoward emitted outputs are ardently underway, such as AI advances that I describe at the link here.

This question asks about laws and policies that could be applied to this problem. You might identify existing laws and policies, plus you might note where those are weak or limited and thus propose that new laws and new policies pertaining to AI should be enacted too.

Category (c): Bolstering Democracy And Civic Participation

  • 14. “How can AI be used to strengthen civic engagement and improve interactions between people and their government?”

I dare say that most people might agree that at times there are disconnects between what the government does and what people believe the government should be doing. I suppose that isn’t an overly controversial assertion.

Anyway, we can use AI to foster a greater closeness between the government and the people. This question asks how we might do so. I would also suggest that AI can have the opposite effect, causing people to land even further away from the government or vice versa. I’d think that this duality should be mentioned in the answer, along with cogent ways to overcome that kind of undesirable outcome.

  • 15. “What are the key challenges posed to democracy by AI systems? How should the United States address the challenges that AI-generated content poses to the information ecosystem, education, electoral process, participatory policymaking, and other key aspects of democracy?”

With the next major election coming up, the news is filled with outcries that we are going to face all manner of AI-devised deepfakes, disinformation, misinformation, and so on. That is a prime means of AI serving to usurp democracy. This question asks what we should do about this. Plus, and I hope I don’t sound like a broken record, I would suggest that AI can provide a huge boost for democracy, in addition to the dour side of undercutting it.

  • 16. “What steps can the United States take to ensure that all individuals are equipped to interact with AI systems in their professional, personal, and civic lives?”

This question tends to throw some people for a bit of a loop. Here’s why. The question posits that people need to know how to contend with AI. A lot of AI insiders are already versed in AI and perhaps have found personal tricks and tips that they use when dealing with AI.

Not everyone is in that same boat. Thus, this question is somewhat surprising to some who are already familiar with AI. The point here is that the public at large is not necessarily in the know about how to cope with AI. Should we institute nationwide training or education about AI? Do we need to include AI awareness and familiarity in our existing public schools? I trust this gives you the drift of what this question is asking about.

Category (d): Promoting Economic Growth And Good Jobs

  • 17. “What will the principal benefits of AI be for the people of the United States? How can the United States best capture the benefits of AI across the economy, in domains such as education, health, and transportation? How can AI be harnessed to improve consumer access to and reduce costs associated with products and services? How can AI be used to increase competition and lower barriers to entry across the economy?”

There has been a lot of handwringing in the media about the loss of jobs due to AI adoption. Be aware that this is not a cut-and-dried topic. Some AI will indeed lead to job losses, while other AI is predicted to lead to job additions. It is a mixed bag. Jobs though are just one slice of the economic pie. There are many other ways that AI will impact our economy.

This question is pretty much upside-worded. How can we utilize AI to benefit our economy, aid consumers, and lower barriers to entry? There is decidedly much to mention on those fronts. I’m not going to indicate that you ought to also cover the downsides since you might run out of room when writing your answer, and in this particular case, maybe aim to stick with the upsides. Your choice.

  • 18. “How can the United States harness AI to improve the productivity and capabilities of American workers, while mitigating harmful impacts on workers?”

I already somewhat spilled the beans in my elucidation to the prior question that dealt with the economy. This question here covers the worker impacts specifically.

There is more than the job loss or job additions element when you look at how work will be altered due to AI. AI has the possibility of boosting worker productivity. If done astutely, this could be a boon to workers and how long we work and how we work. The nature of our livelihood and even our leisure time could change as a result of AI adoption. With this comes the possibility of adverse impacts. I’ve discussed how AI is used to oversee human workers and can be demoralizing and dehumanizing, see the link here. Aim for the benefits, and seek to contain or eradicate the downsides.

  • 19. “What specific measures – such as sector-specific policies, standards, and regulations – are needed to promote innovation, economic growth, competition, job creation, and a beneficial integration of advanced AI systems into everyday life for all Americans? Which specific entities should develop and implement these measures?”

I view this question as somewhat akin to the first question on this list. We are back to the matter of standards, regulations, and the like. The key in this question seems to be to get specific. Whereas the initial question was quite widely framed, this one seeks to be more on-target. You are to zoom into a detailed perspective.

I also liked that this question brings up the notion of AI in the everyday life of people. I’ve often found when speaking at conferences and events that the attendees want to get a sense of how AI is going to be manifested in our day-to-day lives. That is an edifying grounding that makes this seem more real and less abstract.

  • 20. “What are potential harms and tradeoffs that might come from leveraging AI across the economy? How can the United States promote quality of jobs, protect workers, and prepare for labor market disruptions that might arise from the broader deployment of AI in the economy?”

In my view, this question overlaps somewhat with questions #17 and #18. The good news is that if you are answering those questions, you probably ought to toss this one into your hopper too. More bang for your buck.

The main distinction seems to be that whereas the former questions were about beneficial uses and improvements to us, this question explicitly spurs the considerations of tradeoffs and how to counterbalance both the good and the bad.

  • 21. “What are the global labor force implications of AI across economies, and what role can the United States play in ensuring workforce stability in other nations, including low and middle-income countries?”

Many of the questions so far on this list are principally about the U.S. and not especially seeking outright considerations outside the U.S. (an exception, of course, would be question #11, for example).

We all realize that today’s economy is a global one. Countries cannot especially thrive or survive in isolation, or at least if they try to do so they are going to find things quite bumpy. The question here pertains to AI as used by the global workforce. If some country X implements AI and they become much more productive than U.S. workers, what might that foretell for the U.S. workers and our economy? Likewise, what if we use AI while other countries don’t, might this adversely impact their workers and their economy?

All of this can be a complex web of how AI can both aid a country and yet also possibly undercut a country, and this will be happening across and within all countries, at varying levels and paces.

  • 22. “What new job opportunities will AI create? What measures should be taken to strengthen the AI workforce, to ensure that Americans from all backgrounds and regions have opportunities to pursue careers in AI, and otherwise to prepare American workers for jobs augmented or affected by AI?”

This is yet another job-related question.

Here, one added angle is whether we should be encouraging people to go into AI as a field of endeavor. Right now, lots of people are scrambling to learn about AI, including how they can devise AI apps. Many predictions about job opportunities suggest that knowing how to craft AI is going to be on the upswing for some time ahead. If that’s the case, we might want to consider how to aid people in pursuing AI careers. It can be quite lucrative and rewarding overall.

I probably should stop at that point and move to the next question, but I feel compelled to say more.

Not wanting to be a party pooper, but some believe we will have AI that does coding and AI development for us (see my remarks on question #5). If that is the case, an argument is made that by the time people get trained in devising AI, it will be a primarily automated process and we won’t need all of those AI-devising human workers. They will have been sent on a fool’s errand. The counterargument is that the AI that devises AI won’t be good enough, or will take long enough to advance, such that those AI worker jobs will still be needed aplenty. An additional argument is that by knowing how AI is devised, those workers can benefit from that understanding even if they aren’t devising AI per se.

  • 23. “How can the United States ensure adequate competition in the marketplace for advanced AI systems?”

If you want to address a heated hot button of a question, this one ought to be in your sights. The situation is that some believe we do not have adequate competition amid the various AI makers today, or if we do have such competition that it might not last. At the top echelon of AI making, there is said to be a concentrated set of firms that do the bulk of the AI we see in use today.

Why is that considered bad? The concern is that if AI is going to be so pervasive in our lives, do we really want to be at the beck and call of a smattering of firms that make AI? They essentially can call the shots and we won’t be able to do much about it. The old saying, some assert, applies here: absolute power corrupts absolutely.

You would indubitably get strident pushback from today’s AI makers on this point. They would argue that there are lots and lots of firms that make AI. They keep springing up daily. There is also the contention that these AI makers are welcoming standards and regulations, though some wonder how much of that is a wink-wink pretense. Dive into this one, if you dare.

Category (e): Innovating In Public Services

  • 24. “How can the Federal Government effectively and responsibly leverage AI to improve Federal services and missions? What are the highest priority and most cost-effective ways to do so?”

I have previously covered in my column postings how the federal government is making use of AI, see the link here. Some would insist that the government is dreadfully behind the eight ball. AI needs to become part and parcel of all federal services and missions. Period, full stop.

The odds though of waving a magic wand to make that happen are exceedingly slim. It will take time. It will take money, as in taxpayer dollars and the like. Therefore, we will presumably need to prioritize. In that case, what should the priorities be? How can this be done without also wasting tons of money in the process? Etc.

An added consideration is whether we might rue the day that we decided to put AI throughout all branches and levels of our government. I assume you can easily see the potential Big Brother concerns that arise in such a scenario.

  • 25. “How can Federal agencies use shared pools of resources, expertise, and lessons learned to better leverage AI in government?”

You can probably guess that sadly we often have parts of the government reinventing the wheel when it comes to AI.

There might be a governmental realm that has already jumped ahead and devised suitable AI, but for sometimes exasperating reasons, this is not shared with other areas. It could be bureaucratic inertia, lack of awareness, lack of willingness to share, and so on (there are possible legitimate reasons too, such as perhaps security, privacy, and other claimed considerations).

This question asks you to lay out how sharing can be done in fruitful, sensible, and productive ways. Meanwhile, presumably, one should also avoid oversharing or sharing simply for the sake of sharing. You get the picture.

  • 26. “How can the Federal Government work with the private sector to ensure that procured AI systems include protections to safeguard people’s rights and safety?”

I’ve covered in my columns that new regulations and guidance are emerging about the procurement of AI by the federal government, see the link here. A difficulty for the government can be that government workers or procurement officers might not know how to scrutinize the AI that they procure and assure that the AI meets appropriate standards and protections.

This question asks how to have the government and the private sector work together to streamline the procurement process for acquiring or licensing AI from companies, prudently and stringently, doing so on a win-win basis for all parties.

  • 27. “What unique opportunities and risks would be presented by integrating recent advances in generative AI into Federal Government services and operations?”

We’ve had a few questions on this list that directly dealt with generative AI. I’m glad that not all of the questions were shaped in that direction since it would unduly overemphasize just one type of AI (there are plenty of other types of AI beyond just generative AI).

But since we all know that generative AI is spreading like wildfire, it does make sense to have a few questions that specifically delve into the generative AI topic. In this question, the issue is how to have the federal government use generative AI, doing so in an appropriate way. If the government goes hog wild and starts using generative AI for all of its services, this is bound to be a disaster. Generative AI can be replete with errors, biases, falsehoods, glitches, AI hallucinations, and other maladies.

This is your chance to tell the feds what are good and sound ways to employ generative AI, and what risks and threats must also be taken into account.

  • 28. “What can state, Tribal, local, and territorial governments do to effectively and responsibly leverage AI to improve their public services, and what can the Federal Government do to support this work?”

The list here has been aimed at the federal level of government.

You could easily take the same set of questions and consider the exact same concerns and benefits as they arise at the state, Tribal, local, and territorial levels. On top of that, you can readily say that whatever happens at the federal level is bound to impact AI use at those other levels. There is also the other view that what happens with AI at those other levels can impact what occurs at the federal level.

Similar to how I earlier pointed out that what one country does with AI can impact another country, the same interlocking and intermingling will happen across levels within a country. We would be quite remiss to focus solely on the feds and neglect or forget about what happens at the other levels. Otherwise, the AI would end up splintered, fragmented, and likely incompatible, causing excessive costs and becoming a nightmare for all of us.

Category (f): Additional Input

  • 29. “Do you have any other comments that you would like to provide to inform the National AI Strategy that are not covered by the questions above?”

This last question is the catchall that I mentioned and allows you to provide additional comments beyond the ones directed by the aforementioned listed questions.

My suggestion is that if you have comments that do not seem to fit into any of the other twenty-eight questions, the odds are that this means that you must have another question in mind that isn’t on this list. I suppose you might argue with me about that logic, but I think it is reasonably sound.

The crux is that for whatever additional comments you have, and assuming they can be rallied into one or more new questions, it might be beneficial to state those questions. This in turn will possibly make it easier for others at a later date to respond to your comments. If your comments are seemingly one-offs and do not speak to a question at hand, it will be harder for efforts to coalesce toward resolving or solving a particular question or problem.

That is merely a suggestion and you can say whatever comes to your mind, which hopefully will at least be relevant and revealing for the AI matters being explored.

Conclusion

Whew, I applaud you if you made it through all of those questions and the explanations.

Give yourself a grand score of an A+.

Your next step, if you choose to accept this heroic and valiant national mission, consists of putting together your own answers and making a document of them so that you can submit it. According to the RFI, you are to submit your comments by no later than July 7, 2023, at 5:00 p.m. ET. To provide your responses, you’ll need to go to the Federal eRulemaking portal and access docket designation OSTP-TECH-2023-0007. For your convenience, here’s a link to the RFI and also where to submit your comments in response to the RFI, at the link here.

I’ll conclude for now with some ending words.

Dr. Seuss famously said that sometimes profound questions are complicated, and the answers are simple. I would tend to lean toward the opposite in this use case, namely that the questions are relatively straightforward, and the answers are going to be hugely complicated.

We’ll see.

As a final thought, I’m sure that some of you are already logging into ChatGPT or some other generative AI and are going to ask the AI to answer these questions for you. Clever. But, sorry to say, others have already thought of that trickery. You might get good answers from generative AI. You might get utterly predictable answers. You might get answers that are AI hallucinations or otherwise off-base. Like a box of chocolates, be wary of whatever you get and make sure to double-check and triple-check it.

If you do use generative AI for that purpose, I suppose one might decry that you are neglecting the need to answer from the heart. Plus, you never know whether generative AI will provide answers that are self-serving and aimed at making AI into our overlords. Imagine that we answer tough questions about the future of AI by AI itself taking us down its own perniciously thorny primrose path.

Is that yet another AI existential risk?

Whoa, I think I’ve just come up with a handy-dandy question as an answer to the twenty-ninth question for providing an additional question that ought to be bandied around. I’ll start typing that up right away.
