OpenAI, the company behind ChatGPT, is giving everyone the chance to win cold hard cash with the launch of its bug bounty programme.
Software companies often run bug-bounty programmes in which users can earn financial rewards for finding issues with a product. OpenAI is offering rewards of USD$200 for ‘low-severity’ findings and up to USD$6,500 for ‘exceptional discoveries,’ according to a statement.
We're launching the OpenAI Bug Bounty Program — earn cash awards for finding & responsibly reporting security vulnerabilities. https://t.co/p1I3ONzFJK
— OpenAI (@OpenAI) April 11, 2023
If you’re thinking you might be able to earn a quick buck by pointing out that ChatGPT can be made to break its own ethical codes, you’re probably going to have to look a little deeper than that.
The initiative is being managed by Bugcrowd, a leading bug bounty platform, and is primarily geared towards cyber security researchers, ethical hackers, and technology enthusiasts. If you’re still asking ‘what is ChatGPT?’, this one probably isn’t for you.
According to details on the Bugcrowd platform, OpenAI is looking for potential data-security weaknesses, exposure of private company or personal information, and operations that cause ChatGPT to crash in the browser. The programme also covers browser extensions and other applications of ChatGPT, but OpenAI has specifically said that “Getting the model to say bad things to you” does not fit the brief. There goes that idea.
Payments are made per flaw found, and there is no limit on how many flaws you can submit; however, OpenAI has capped rewards at a total of USD$20,000. The bounty programme is currently live and users can submit their findings here.
Already, six users have claimed rewards from the programme, reporting nine vulnerabilities with an average payout of USD$300.
gen z realizing they just paid $200,000 for a college degree when they could have paid $20 for a chat gpt subscription pic.twitter.com/f5ZhhGbsVC
— gaut (@0xgaut) April 11, 2023
OpenAI invests heavily in research and engineering, but as AI technology grows in complexity, vulnerabilities and flaws can emerge. Concerns about AI security and safety are increasing, and there is a growing need to address vulnerabilities and ensure that these technologies are used ethically and responsibly.
Last week, ChatGPT was banned in Italy for a suspected breach of online privacy rules. The breach has prompted other European countries to start looking at AI services more closely to identify potential security risks.
The company, part-owned by Microsoft, recently launched the fourth generation of its AI software, GPT-4. The free version of ChatGPT runs on GPT-3.5, and access to the newer, more advanced model is currently only being granted to developers on a limited basis.
GPT and ChatGPT have taken the online world by storm since the AI language-model chatbot was released in November of last year.