OpenAI introduces identity checks to access its most advanced models

With artificial intelligence playing an increasingly influential role in society, OpenAI has announced a step intended to make access to and use of its technologies more secure: the company will introduce an identity verification process for some organizations that want to use its most advanced AI models. The change, described on a new support page on the official site, represents a significant tightening of access to OpenAI’s API.

A new security barrier for OpenAI: Verified Organisation

The process, called Verified Organisation, will be a prerequisite for developers and companies that want to unlock the most advanced features available on the OpenAI platform. According to the published information, completing the verification will require a government-issued ID from a country where the OpenAI API is available.
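In practice, the gating will most likely be felt at the API level. The following is a minimal, hypothetical sketch of how a client might handle being turned away from a gated model. It assumes the official openai Python SDK (v1+), an API key in the OPENAI_API_KEY environment variable, and a placeholder model name ("gated-model") standing in for whichever models end up requiring verification; the exact error OpenAI returns to unverified organizations is also an assumption.

```python
# Hypothetical sketch: calling a verification-gated model and handling
# rejection. "gated-model" is a placeholder, not a real model name, and
# the PermissionDeniedError (HTTP 403) path is an assumption about how
# an unverified organization might be turned away.
from openai import OpenAI, PermissionDeniedError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

try:
    response = client.chat.completions.create(
        model="gated-model",  # placeholder for a gated model
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
except PermissionDeniedError:
    # One plausible failure mode: a 403 telling the caller that the
    # organization must first complete Verified Organisation.
    print("Access denied: this model may require a verified organization.")
```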

The measure is not merely administrative. Behind this choice lies the company’s clear intention to make artificial intelligence safer and more responsible. OpenAI has stated that it “takes its responsibility very seriously in ensuring that AI is accessible but also used safely.” The goal of the verification is to reduce the risks associated with misuse of the API while continuing to offer the most advanced features to the broader developer community.

One document, one organization every 90 days

An important aspect of the new process is its limit: a single ID document can verify only one organization every 90 days. Additionally, not all requests will be accepted, indicating that OpenAI intends to apply strict criteria when granting access to its most sensitive resources. This also introduces an initial form of qualitative screening among the organizations that want access to the most powerful AI models.
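To make the cadence concrete, here is a small illustrative sketch of the stated rule. This is not OpenAI code; the function and its fields are invented purely to show how the 90-day constraint plays out on a calendar.

```python
# Illustrative only: a toy model of the stated rule that one ID document
# can verify at most one organization every 90 days. Not OpenAI code;
# the names and logic here are assumptions made for clarity.
from datetime import date, timedelta

VERIFICATION_COOLDOWN = timedelta(days=90)

def can_verify(last_used: date | None, today: date) -> bool:
    """Return True if the document is eligible to verify an organization."""
    if last_used is None:
        return True  # document has never been used for verification
    return today - last_used >= VERIFICATION_COOLDOWN

# Example: a document used on Jan 1 becomes eligible again on Apr 1.
print(can_verify(date(2025, 1, 1), date(2025, 3, 1)))  # False (59 days)
print(can_verify(date(2025, 1, 1), date(2025, 4, 1)))  # True (90 days)
```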

Fighting abuse and intellectual property theft

The introduction of verification comes after OpenAI released several reports on attempts to misuse its technologies. According to those reports, groups allegedly linked to North Korea have attempted to exploit OpenAI’s models for unauthorized purposes.

But it is not just a geopolitical issue. The measures also concern the protection of intellectual property. In early 2025, as reported by Bloomberg, OpenAI opened an investigation into whether a group linked to the Chinese laboratory DeepSeek had extracted massive volumes of data through the OpenAI API, with the suspected aim of training its own models in violation of the terms of use.

In another case, which emerged in February, several China-linked accounts were banned for using ChatGPT in social media surveillance activities, as documented in a threat intelligence report. This example further underscores OpenAI’s concerns about the use of its platform for unauthorized or potentially harmful purposes.

Responsible access and advanced technologies

In light of these dynamics, the company led by Sam Altman appears to be seeking a delicate balance between the accessibility of AI technology and its responsible regulation. The message is clear: those who want access to the most capable models will first have to prove that they are trustworthy.

The initiative can also be read as a signal of the growing maturity of the AI ecosystem. As models become more complex and capable, attempts at manipulation, cloning, and illicit exploitation increase as well. In this scenario, organization verification becomes a kind of security filter, capable of limiting misuse while keeping the innovation environment open.

An impact on the developer community

Although the intentions are oriented towards protection and responsibility, the new policy could also provoke reactions in the developer community. Some may perceive the verification requirement as an obstacle to accessing the latest technology, especially developers in countries where the API is restricted or those unable to meet the requirements.

However, OpenAI has clarified that the measure targets the minority that deliberately violates its usage policies. For the majority of developers who operate within the rules, verification should not be a barrier but an additional layer of protection against unauthorized uses that could compromise the entire ecosystem.

Towards a safer artificial intelligence

With this move, OpenAI reaffirms its commitment to ethical and safe artificial intelligence, setting a new standard for the industry. The growing focus on security, intellectual property, and ethical responsibility demonstrates how AI is not just a matter of technological efficiency, but also of governance and control.

As models become increasingly capable, the introduction of verification reinforces the need for an ongoing dialogue between innovation and responsible use. And with this choice, OpenAI seems intent on leading the sector not only in technological advancement but also in responsible governance.

Source: https://en.cryptonomist.ch/2025/04/14/openai-introduces-identity-checks-to-access-its-most-advanced-models/