In a recent interview with CGTN, Xie Dong, Chief Technology Officer (CTO) of IBM Greater China Group, shared his views on the financial sustainability of ChatGPT-like large language models (LLMs). Xie emphasized that while these models have ignited intense interest across the AI community, deploying them comes at an extraordinary cost that may be beyond the reach of many enterprises.
The cost conundrum: a barrier to implementation
Xie pointed out that deploying ChatGPT and similar language models could saddle companies with exorbitant expenses. He drew attention to a report by Analytics India Magazine, which suggested that OpenAI, the developer behind ChatGPT, might face financial challenges and even bankruptcy because of operational costs of around $700,000 per day to keep the model running. The figure underscores the substantial financial burden of operating such advanced AI systems.
The driving force behind these high costs is the resource-intensive nature of training and running large language models, which demands large numbers of graphics processing units (GPUs). Xie explained that once the GPUs are spun up, money is continuously "burnt" whether or not users are engaging with the model. Broken down by the number of tokens processed, the expenditure for each use quickly exceeds what most enterprises would anticipate.
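To make the arithmetic Xie describes concrete, the relationship between an always-on GPU cluster and per-token cost can be sketched in a few lines of Python. The GPU hourly rate, cluster size, and per-GPU throughput below are illustrative assumptions, not figures from IBM, OpenAI, or the Analytics India Magazine report; they simply show how a large serving fleet produces a large fixed daily bill plus a cost attributable to every token processed.

```python
# Back-of-envelope estimate of LLM serving cost.
# All figures are illustrative assumptions, not actual vendor numbers.

GPU_HOURLY_RATE_USD = 2.50      # assumed cloud price per GPU-hour
GPUS_IN_CLUSTER = 10_000        # assumed number of GPUs kept running
TOKENS_PER_GPU_PER_SEC = 200    # assumed serving throughput per GPU

def daily_fixed_cost() -> float:
    """Cost of keeping the cluster powered on, whether or not anyone uses it."""
    return GPU_HOURLY_RATE_USD * GPUS_IN_CLUSTER * 24

def cost_per_million_tokens() -> float:
    """Cost attributable to each million tokens actually served."""
    tokens_per_gpu_hour = TOKENS_PER_GPU_PER_SEC * 3600
    return GPU_HOURLY_RATE_USD / tokens_per_gpu_hour * 1_000_000

if __name__ == "__main__":
    print(f"Fixed daily cluster cost: ${daily_fixed_cost():,.0f}")
    print(f"Cost per million tokens served: ${cost_per_million_tokens():.2f}")
```

Even with these stand-in numbers, the fixed cost alone lands in the hundreds of thousands of dollars per day regardless of traffic, which is the dynamic Xie is pointing to.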
Revolutionary potential amid financial hurdles
Despite the financial challenges, Xie acknowledged the revolutionary impact of ChatGPT and similar technologies. He positioned these models as a tipping point for artificial intelligence, signaling the arrival of generative AI, and noted that this is likely only the beginning, with generative AI poised for further growth and innovation.
IBM’s strategic direction in AI
Xie’s insights shed light on IBM’s strategic approach to AI. While IBM has a history of AI milestones, such as Deep Blue’s victory over Garry Kasparov and Watson’s triumph on Jeopardy, the company has been exploring diverse avenues for AI development. Xie highlighted that, instead of focusing solely on creating ChatGPT-like models, IBM’s researchers have been delving into foundation models. These models are trained on vast sets of unlabeled data, enabling them to cater to various tasks and domains.
Building on foundation models, IBM aims to craft specialized and sophisticated models tailored to specific enterprise use cases. Xie emphasized that this approach allows IBM to provide more value to its clients, enabling them to address domain-specific challenges effectively.
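The pattern Xie outlines, reusing a broadly pretrained backbone and specializing it on enterprise data, can be illustrated with a minimal PyTorch sketch. The backbone below is a randomly initialized stand-in and the labeled dataset is synthetic; in practice both would come from a real pretrained checkpoint and proprietary domain data, and the specialization might involve fine-tuning more than a single head.

```python
# Conceptual sketch of the "foundation model + specialization" pattern:
# a large pretrained backbone is frozen, and only a small task-specific head
# is trained on a handful of labeled domain examples.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a foundation model: in practice this would be loaded from a
# pretrained checkpoint rather than randomly initialized.
backbone = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 256))
for p in backbone.parameters():
    p.requires_grad = False  # reuse the general-purpose representations as-is

# Small task head trained for the enterprise-specific task (here, 3 classes).
head = nn.Linear(256, 3)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy "domain" dataset: 32 labeled examples stand in for proprietary data.
features = torch.randn(32, 128)
labels = torch.randint(0, 3, (32,))

for epoch in range(20):
    logits = head(backbone(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```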
Generative AI’s promising future and regulatory landscape
IBM’s new platform, powered by foundation models, also introduces generative AI capabilities. The company expects the generative AI market to grow rapidly, projecting a value of $8 billion by 2030, and Xie cited a projection of more than 85 million job vacancies in the sector.
However, concerns over AI regulation have surfaced globally. In response, Xie highlighted the importance of complying with data protection regulations and safeguarding privacy, and stressed that both the data used to train AI models and the models themselves must be reliable. His view aligns with the broader sentiment that ethical and regulatory considerations are pivotal for responsible AI development.
Xie Dong’s perspectives offer a comprehensive overview of the challenges posed by the financial demands of ChatGPT-like large language models. Despite these challenges, Xie remains optimistic about the transformative potential of generative AI and IBM’s strategic focus on building specialized models that cater to diverse enterprise needs while prioritizing regulatory compliance and privacy protection.
Source: https://www.cryptopolitan.com/ibm-cto-concerns-financial-chatgpt-like-llms/