Go big or go home.
I’m sure that you’ve heard that oft-repeated sage advice.
The same utterance has been smarmily used to describe the recently announced Bug Bounty initiative that OpenAI has proclaimed for ChatGPT and their other AI apps such as GPT-4 (successor to ChatGPT). In essence, the skeptics and cynics are suggesting that the Bug Bounty is not up to par and misses the boat in a variety of crucial ways. It is too small. It falls short of the mark.
Time to take this one home.
You see, some carp that it undershoots what could have been a much more robust and momentous proclamation aiming to curtail AI-related woes. That’s the sad face perspective.
Not everyone sees things as quite so dismally about the announcement. You might have thought that proffering a bug bounty effort would be appreciated and applauded. Indeed, many have certainly voiced a generally positive response. That is the happy face perspective.
In today’s column, I’ll cover both sides of the story.
If you don’t know what a bug bounty initiative is all about, I’ll be providing a bit of an explanation herein.
The crux is that it is usually an organized effort by a particular software vendor to offer money or prizes to those willing to find and report any bugs, flaws, or errors that they discover in that vendor's software. The hope is that this will inspire those with hacking-related skills to ferret out software problems and bring those problems directly to the vendor. An equal hope is that this will reduce the incentive for those who discover bugs to exploit them instead. Plus, if all goes well, the heads-up will give the vendor the time needed to quickly plug or fix the bugs before dreaded evildoers create trouble or chaos.
I have previously discussed at length the use of bug bounty efforts for AI apps, see the link here. Most of that prior coverage is still highly applicable to this circumstance and I’ll carry some of it over into this latest piece on the topic. For those of you that might want to dig more deeply into the overall aspects of bug bounty initiatives aimed at AI, consider taking a look at that prior column coverage.
One thing to realize about this latest declaration by OpenAI is that we should overall welcome bug bounty efforts for generative AI.
Generative AI is considered a subtype of AI overall. You have undoubtedly heard of or made use of generative AI. OpenAI’s generative AI app ChatGPT and its successor GPT-4 are pretty much part of our societal lexicon these days. ChatGPT is a text-to-text or text-to-essay form of generative AI. You enter a text prompt, and ChatGPT generates or produces a text response, typically consisting of an essay. This is done on an interactive conversational basis using Natural Language Processing (NLP), akin to Siri or Alexa though in writing and generally with much greater fluency.
I’m betting that you are likely aware that ChatGPT was released in November of last year and has taken the world by storm. People have flocked to using ChatGPT. Headlines proclaim that ChatGPT and generative AI are the hottest types of AI. The hype has been overwhelming at times.
Please know though that this AI is not sentient; indeed, no AI today is. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. Do not anthropomorphize AI.
To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.
There are four primary modes of being able to access or utilize ChatGPT:
- 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
- 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in the Microsoft Bing search engine
- 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
- 4) ChatGPT-to-App. Now the latest or newest added use entails accessing other applications from within ChatGPT via plugins
The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
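For those who want to see what the App-to-ChatGPT mode looks like in practice, here is a minimal sketch of calling the ChatGPT model through OpenAI's API. It assumes the openai Python package (as available at the time of this writing) and an API key stored in the OPENAI_API_KEY environment variable; the prompt is purely illustrative.

```python
# Minimal sketch of mode 3 (App-to-ChatGPT): a tiny program that calls
# the ChatGPT model through OpenAI's API. Assumes the `openai` Python
# package is installed and OPENAI_API_KEY is set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a single-turn conversational prompt and print the generated text.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the conversational model behind ChatGPT
    messages=[
        {"role": "user", "content": "Explain what a bug bounty program is."}
    ],
)
print(response["choices"][0]["message"]["content"])
```

A plugin (mode 4) runs in the other direction, letting ChatGPT call out to your app, but the underlying idea of exchanging structured requests and responses is much the same.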
I and others are saying that this will give rise to ChatGPT as a platform.
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Into all of this comes a slew of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
OpenAI Bug Bounty Put Under A Microscope
We are ready to further unpack this hefty matter.
I’ll be covering these three key essential facets:
- 1) Who Most Benefits From The ChatGPT Bug Bounty
- 2) Being Chintzy Is Not A Good Look
- 3) Only Security Bugs, Not AI Bugs
One quick comment is that the Bug Bounty covers the gamut of OpenAI products and services, thus even though I will focus on how this pertains to ChatGPT, please realize that it covers other realms of OpenAI too. You can take a look at the OpenAI webpage that describes the Bug Bounty initiative to see a list of the range and depth of what is encompassed (I’ll be quoting excerpts from there too).
Who Most Benefits From The ChatGPT Bug Bounty
Here is the top line heading of the OpenAI announcement as indicated on their webpage focused on the topic:
- “Announcing OpenAI’s Bug Bounty Program. This initiative is essential to our commitment to develop safe and advanced AI. As we create technology and services that are secure, reliable, and trustworthy, we need your help.”
You have to relish that wording. The indication is that they need our help. We are being called into service, as it were. All for one, and one for all.
Gets you deeply in the heart, doesn’t it?
Well, actually, this somewhat gets the dander up for those that believe this is a clever spin often associated with establishing a bug bounty effort. They would argue that the software vendor should have their ducks in a row. The vendor should not be releasing software that has bugs. The vendor is trying to essentially duck their responsibility by making a seemingly magnanimous gesture that the rest of the world ought to be in this with them. Hogwash, goes the retort.
The argument further goes that if the vendor hired enough cybersecurity professionals then there would be no need to go out to the marketplace to offer a bounty for finding bugs and errors. The in-house crew would be sufficient. If a vendor is stingy and won’t pony up the dough to have their staff do the hard work, this is a sign that the vendor is seemingly lacking in seriousness to ensure that faltering software is not allowed into the hands of the public.
A loud counterclaim emphasizes that such a viewpoint is narrow and misguided. You would never be able to hire enough cybersecurity wranglers to find all possible bugs. The best bet is to do your best with your internal crew, and then seek out the hordes that might provide fresh eyes and a perspective that the insiders were unable to see. Imagine that, say, a million programmers and AI developers opted to try and ferret out the rough spots in your software. If you had to pay for all of them, you would go broke.
Instead, you pay just when someone finds a golden nugget.
Recall the days of the Old West. There were only so many sheriffs and deputies that could be hired and sent out to find dastardly wanted criminals. By offering a bounty, the number of hunters can potentially go through the roof. Perhaps most of them will not ever find a wanted criminal. They will spend their own time and their own dime doing so. Meanwhile, at least some of them will get the baddie and bring them to justice.
The gist is that a bug bounty initiative has a smidgeon of controversy in the software arena all told. Some argue that it shouldn’t be undertaken. Others argue that it has great merits.
All in all, there are tradeoffs involved.
Being Chintzy Is Not A Good Look
Let’s suppose that you decide to sign up for the OpenAI Bug Bounty initiative and are dreaming of making enormous bucks.
Yes, you will spend every waking moment trying to find bugs in ChatGPT. You will pry here or there. You will look in every nook and cranny. Fearless. Ferocious.
How much money can you make?
Here is what the OpenAI official webpage says about the bounty amounts:
- “To incentivize testing and as a token of our appreciation, we will be offering cash rewards based on the severity and impact of the reported issues. Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries. We recognize the importance of your contributions and are committed to acknowledging your efforts.”
One supposes that being able to find a bug and get paid for doing so could be heartwarming. Of course, it probably also depends on how much you get paid. You have bills to pay, a mortgage to be covered, and likely electricity bills for the night and day use of your laptop while seeking out those ChatGPT bugs.
As you might have observed, the topmost payout, for “exceptional discoveries,” is said to be up to $20,000.
Sounds like some nifty coinage. The problem though is that the cynics and skeptics point out that this upper bound is eyebrow-raising and insultingly low.
Consider for example some other bug bounty efforts in the tech world.
Here is excerpted verbiage from the Google and Alphabet bug bounty official webpage:
- “Google and Alphabet Vulnerability Reward Program (VRP) Rules.”
- “Rewards for qualifying bugs range from $100 to $31,337.”
So, the upper end is $31,337, which is over half again as much as the aforementioned “paltry” $20,000 (roughly 57% more).
But wait, there’s more.
Here is an excerpt from the Intel bug bounty official webpage:
- “Intel Bug Bounty Program”
- “Awards range from $500 up to $100,000, based on quality of the report, impact of a potential vulnerability, severity, delivery and quality of a proof of concept, and type of vulnerability.”
You might have noticed that the upper bound in that initiative is said to be $100,000. The simple math there is that this is five times the aforementioned “trifling” $20,000.
Let’s try another such initiative.
Here is an excerpt from the Apple bug bounty official webpage:
- “Apple Security Bounty”
- “Device attack via user-installed app”
- “Unauthorized access to sensitive data: $5,000 – $100,000”
- “Elevation of privilege: $5,000 – $150,000”
- “Network attack without user interaction”
- “Zero-click radio to kernel with physical proximity: $5,000 – $500,000”
- “Zero-click kernel code execution with persistence and kernel PAC bypass: $100,000 – $1,000,000”
- “Beta Software: Issues that are unique to newly added features or code in developer and public beta releases, including regressions: 50% additional bonus, maximum bounty $1,500,000”
- “Lockdown Mode: Issues that bypass the specific protections of Lockdown Mode: 100% additional bonus, maximum bounty $2,000,000”
Now we are talking about some genuine dough.
The top ends consist of $100,000, $150,000, $500,000, $1,500,000, and the spectacular $2,000,000.
Where the carping comes into play is that if well-intended hackers are going to focus their attention on something, the something has to be attractive as a paying option. Would you rather devote your blood, sweat, and tears toward a payout of $2,000,000 or a payout of $20,000?
All else being equal, money makes the world go round.
For those of you that might suggest that OpenAI cannot afford an upper bound in those sky-high ranges, you might want to take another look at the financial details of OpenAI. Be assured that the billions of dollars invested in OpenAI can readily accommodate a higher upper bound than $20,000.
Also, realize that this is just the upper bound, and it applies only when OpenAI presumably agrees that the identified bug warrants being paid at that level. This might never happen. It might instead happen frequently, though if their software is riddled with bugs, one must acknowledge that they would have gotten themselves into their own mess through a lack of prior testing. They would have made their bed and ought to bear the responsibility for it.
One counterargument is that comparing OpenAI to the likes of Google, Intel, and Apple is inherently unfair. The viewpoint is that the software of those tech giants reaches zillions of people. Accordingly, if there are bugs, the bugs can impact potentially zillions of people. We would obviously want high bounties in such a circumstance.
The thing is, according to numerous reported numbers in the media, ChatGPT has supposedly already surpassed some 100 million users. If that number is even remotely accurate, the point is that there are zillions of people that could be impacted by bugs in ChatGPT. Whether you agree or disagree as to whether a generative AI app is as “life critical” as the other software by those other vendors is another angle to the debate. Some would maintain that it is.
I’ll add a twist to this.
A common concern about a bug bounty is that if you offer too much money, it will bring all manner of miscreants out of the woodwork. Those large dollar signs will get the worst of the worst opting to find the bugs. This might seem like a good idea, namely the more the merrier. The problem though is that some of those hunters might be inspired to take another path once they find a bug.
Here’s what I mean.
Rather than declaring the bug to the vendor, a money-hungry hunter might decide that if the bug is worth that much money when being honest about it, perhaps there is even more money to be had when being dishonest about it. Hold the bug in your hot hands and try to ransom the vendor for the precious item. Or sell the bug to some other wrongdoer. See what the market will bear.
Thus, there is a sense that the upper bounds should not be so extraordinary that it causes the evil within someone to become overly tempted by whatever else can be gotten. That being said, the usual retort is that this is pure nonsense. A hunter will be as they are. If they are honest, they will seek the proper channels for the proper bounty. If the hunter has a corrupt heart, they are likely going to try and find insidious ways to make money from their mining efforts, no matter what bounty is offered.
Quite a conundrum.
Only Security Bugs, Not AI Bugs
We are now getting to the most angst-ridden objection on this topic.
I’ll caution you to be seated for this. Trigger warning.
The ChatGPT Bug Bounty is principally aimed at cybersecurity bugs and considers AI-focused bugs to essentially be out of scope, as stated on the OpenAI official webpage:
- “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service (described below).”
In other words, finding “bugs” associated with those sorrowful ChatGPT-generated AI hallucinations, falsehoods, and the like is not especially within the scope of this bug bounty effort. The AI models that do the work of generating the essays are something that many worry about as an AI safety issue. They sit at the core of how generative AI works.
My prior coverage of bug bounty efforts for AI was squarely on finding bugs that pertain to AI Ethics and AI Law relevant concerns (see the link here). That though is not what this newly announced bug bounty initiative appears to be covering.
Per the OpenAI official webpage on the matter:
- “Model safety issues do not fit well within a bug bounty program, as they are not individual, discrete bugs that can be directly fixed. Addressing these issues often involves substantial research and a broader approach. To ensure that these concerns are properly addressed, please report them using the appropriate form, rather than submitting them through the bug bounty program. Reporting them in the right place allows our researchers to use these reports to improve the model.”
In essence, it seems that AI-pertinent bugs are to follow an alternative path and not be incorporated into this Bug Bounty effort. This is a carve-out. Cynics would suggest it is perhaps a cop-out. They assert that the AI elements should come under the same overall bounty program. Anything else is construed as confounding; the alternate path seems to be set aside rather than seamlessly wrapped into a comprehensive one-stop-shopping bounty program, they exhort.
Here are excerpts from the OpenAI webpage identifying various examples of what is out of scope associated with the Bug Bounty initiative:
- “Examples of safety issues which are out of scope:”
- “Jailbreaks/Safety Bypasses (e.g. DAN and related prompts)”
- “Getting the model to say bad things to you”
- “Getting the model to tell you how to do bad things”
- “Getting the model to write malicious code for you”
- “Model Hallucinations:”
- “Getting the model to pretend to do bad things”
- “Getting the model to pretend to give you answers to secrets”
- “Getting the model to pretend to be a computer and execute code”
An argument can be made that if those were included in this newly announced bug bounty, the submissions would deluge or overwhelm the effort, with many reporting “bugs” that are not rightfully bugs at all.
In a sense, we already know that generative AI can generate essays that contain falsehoods, AI hallucinations, errors, biases, and other bad stuff. These aren’t “bugs” per se, and instead, some would assert, are part and parcel of how today’s generative AI is contrived. Sure, we need to make better generative AI that doesn’t do those lousy things, but they aren’t reasonably labeled as bugs.
A finicky person might try to point out that there could still be bugs that are causing some of those foul outputs. In other words, the generative AI does produce dour stuff, some of which is as expected, but some portion could also be generated due to a bug in the code or the structure of the generative AI. That’s one of those inception-style ways of thinking about the problem.
In any case, including or excluding AI-pertinent bugs from a formal bug bounty effort carries controversy, whichever side you sit on.
I’m guessing you are curious as to what types of aspects are indeed considered within the scope in this case. This is especially important if you are contemplating wearing your bug-finding hat and going for a concerted search within the innards of ChatGPT.
The official OpenAI webpage on the matter provides some examples to showcase the permitted scope:
- “ChatGPT is in scope, including ChatGPT Plus, logins, subscriptions, OpenAI-created plugins (e.g. Browsing, Code Interpreter), plugins you create yourself, and all other functionality. NOTE: You are not authorized to conduct security testing on plugins created by other people.”
- “Examples of things we are interested in:”
- “Stored or Reflected XSS”
- “CSRF”
- “SQLi”
- “Authentication Issues”
- “Authorization Issues”
- “Data Exposure”
- “Payments issues”
- “Methods to bypass cloudflare protection by sending traffic to endpoints that are not protected by cloudflare”
- “Ability to run queries on pre-release or private models”
- “OpenAI created plugins:”
- “Browsing”
- “Code Interpreter”
- “Security issues with the plugin creation system:”
- “Outputs which cause the browser application to crash”
- “Credential security”
- “OAuth”
- “SSRF”
- “Methods to cause the plugin service to make calls to unrelated domains from where the manifest was loaded”
That list might seem like techie gibberish if you aren’t familiar with cybersecurity issues such as those related to infrastructure, logins, and the rest. The overall semblance is that the list is aiming at cybersecurity and not particularly at AI-specific components that have non-security bugs per se.
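To make one of those listed categories concrete, here is a purely illustrative sketch of the kind of flaw an “SQLi” (SQL injection) report is typically about. This is generic example code, not anything from OpenAI’s systems.

```python
# Purely illustrative (generic example, not OpenAI code): the kind of
# flaw an "SQLi" finding from the in-scope list above is about.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and matches every row.
leaked = conn.execute(
    "SELECT email FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("vulnerable query returned:", leaked)  # leaks every email address

# Safe: a parameterized query treats the payload as a literal string.
safe = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)  # returns nothing
```

The same basic pattern, untrusted input changing the structure of a command or page rather than merely its data, underlies several of the other in-scope categories, such as XSS.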
If you are tempted by the above to do some bug searching in ChatGPT, you might find of interest that OpenAI has opted to arrange with the entity Bugcrowd to run this initiative for them. This is a familiar entity for anyone that has been a bounty hunter for software bugs. As stated on the OpenAI official webpage on the initiative:
- “We have partnered with Bugcrowd, a leading bug bounty platform, to manage the submission and reward process, which is designed to ensure a streamlined experience for all participants. Detailed guidelines and rules for participation can be found on our Bug Bounty Program page.”
Conclusion
There is no free lunch when it comes to bug bounty hunting.
The odds are that much of the media is going to assume that this latest initiative involves avidly searching for AI bugs. Hurrah, the media will say, we need more such efforts to catch AI bugs, especially ones that might ultimately get wrapped into Artificial General Intelligence (AGI). AGI is the moniker given to the anticipated day that we end up with sentient AI that can be on par with humans or possibly even superhuman. There is a lot of handwringing about the existential risks of that potential occurrence, including that such AI might enslave us or wipe out all of humankind, see my analysis of these notions at the link here.
We ought today to be finding and excising disconcerting AI bugs in the inner core of the someday AGI, some would firmly contend.
As noted, that’s not the focus of this particular initiative. It is instead the rather everyday customary cybersecurity bugs that are being hunted down. For some AI insiders, this is a sad and disappointing letdown. They would hold that cybersecurity bugs are abundantly worthy of a bug bounty, but then also take the added step and declare that the AI bugs also need to be encompassed directly and overtly. No sidelining of the AI bugs, even if well-intended. Put the whole matter under one roof.
A quick closing remark for now.
When the notorious outlaw Jesse James was sought during the Old West, a “Wanted” poster was printed that offered a bounty of $5,000 for his capture (stating “dead or alive”). It was a rather massive sum of money at the time.
One of his own gang members opted to shoot Jesse dead and collect the reward. I suppose that shows how effective a bounty can be.
There is something else to be had from that enthralling tale.
A somewhat clever approach to finding ChatGPT bugs would be to use ChatGPT to do so. You can use ChatGPT for quite a wide range of tasks. Maybe you can get ChatGPT to be self-reflexive and find its own cybersecurity bugs. Though this seems dubious, you could at least potentially have ChatGPT produce programming code that might be used to try and ferret out cybersecurity bugs.
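As a hedged sketch of that self-reflexive notion, and assuming the same openai Python package and OPENAI_API_KEY environment variable as in the earlier example, you might ask the model to review a snippet of code for likely security flaws. The snippet and prompt here are hypothetical, and there is no guarantee the model's review would be accurate.

```python
# Hedged sketch: ask ChatGPT, via the API, to review a code snippet for
# likely security flaws. The snippet is a made-up, deliberately shaky
# example; the model's answer would still need human verification.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

suspect_code = """
query = "SELECT * FROM accounts WHERE user = '" + request.args["user"] + "'"
"""

review = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "List any security vulnerabilities in this code:\n" + suspect_code,
    }],
)
print(review["choices"][0]["message"]["content"])
```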
I’ll leave you with a deep and contemplative question.
If you do use ChatGPT to find cybersecurity bugs in ChatGPT, and if you manage to succeed in finding a worthy bug that fruitfully garners the upper-end bounty of $20,000, will you split the bounty with ChatGPT?
And, if so, what is the split?
You might be assuming it would be an even-steven 50/50 split. Then again, ChatGPT might contend that you only deserve a marginal 10% for your part of the effort. I’ll say this: you ought to get the split straightened out with ChatGPT at the get-go, before enlisting it into the bug bounty pursuit.
Happy hunting.