Microsoft, Peking University attempt to equip OpenAI’s GPT-4 with Android skills

Microsoft Research and Peking University researchers have reached a new milestone in their attempts to teach OpenAI’s GPT-4 how to operate within the Android operating system.

In a joint report, the researchers described relative success in adapting large language models (LLMs) to operate autonomously within a specific operating system. While generative artificial intelligence (AI) has found myriad use cases, the technology has struggled to work within the confines of an operating system without human intervention.

The study highlighted several reasons for generative AI’s inability to explore Android autonomously, including its reliance on reinforcement training. Most LLMs explore a new environment through trial and error, which opens the door to security issues in real-world applications.

“Firstly, the action space is vast and dynamic,” the report read. “Secondly, real-world tasks often require inter-application cooperation, demanding farsighted planning from LLM agents. Thirdly, agents need to identify optimal solutions aligning with user constraints, such as security concerns and preferences.”

To tackle these challenges, the research team built AndroidArena, an environment designed for LLMs to explore the Android operating system. Preliminary studies surfaced new obstacles to autonomous exploration by LLMs, centered primarily on understanding and reasoning.
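To illustrate the kind of agent-environment loop such a testbed enables, the minimal Python sketch below pairs a toy environment with a placeholder policy; the AndroidEnv class, its methods, and the choose_action stub are assumptions for illustration and are not AndroidArena’s actual API.

# Toy stand-in for an Android environment an LLM agent can observe and act on.
# All names here are hypothetical; AndroidArena's real interface may differ.
class AndroidEnv:
    def __init__(self, task: str):
        self.task = task
        self.steps = 0

    def observe(self) -> str:
        # A real environment would return the current UI state (e.g., a view hierarchy).
        return f"screen_state_after_{self.steps}_steps"

    def step(self, action: str) -> bool:
        # Apply the action and report whether the task is finished (toy condition here).
        self.steps += 1
        return self.steps >= 3


def choose_action(task: str, observation: str) -> str:
    # Placeholder for an LLM call that maps the task and screen state to a UI action.
    return f"tap_element_based_on({observation})"


env = AndroidEnv(task="enable dark mode")
done = False
while not done:
    action = choose_action(env.task, env.observe())
    done = env.step(action)
    print(f"step {env.steps}: {action}")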

As the experiments within AndroidArena proceeded, the researchers noted additional challenges around reflection and exploration by the models.

While exploring potential solutions, the team eventually settled on prompting LLMs with detailed information about their previous attempts in order to reduce errors. By embedding these memories in prompts, the researchers recorded a 27% improvement in accuracy when operating Android systems.
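A minimal Python sketch of that memory-in-prompt idea follows, assuming the agent keeps a simple log of earlier attempts; the build_prompt function and the prompt wording are illustrative assumptions rather than the paper’s actual prompts.

def build_prompt(task: str, past_attempts: list[str]) -> str:
    # Embed summaries of previous attempts so the model can avoid repeating failures.
    history = "\n".join(f"- {a}" for a in past_attempts) or "- (no previous attempts)"
    return (
        f"Task: {task}\n"
        f"Previous attempts and outcomes:\n{history}\n"
        "Plan the next UI action, avoiding steps that already failed."
    )


print(build_prompt(
    "turn on airplane mode",
    ["opened Settings > Network, toggle not found",
     "searched 'airplane', tapped the wrong result"],
))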

The solution yielded positive results when extended to other LLMs, including Google’s Bard (NASDAQ: GOOGL) and Meta’s LLaMA 2 (NASDAQ: META), with the researchers optimistic that new iterations will demonstrate more advanced functionality.

Optimizing AI one feature at a time

While generative AI has enjoyed mass adoption, researchers are scrambling behind the scenes to fix several problems associated with the technology. One study by Anthropic AI focused on curbing sycophancy in LLMs and earned plaudits from industry players, while AutoGPT and Microsoft (NASDAQ: MSFT) are testing an AI monitoring tool to flag harmful real-world outputs.

“We design a basic safety monitor that is flexible enough to monitor existing LLM agents, and, using an adversarial simulated agent, we measure its ability to identify and stop unsafe situations,” said the Microsoft-backed research.
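As a rough illustration of what a pre-execution safety monitor can look like, the minimal Python sketch below uses a rule-based filter; the patterns, function names, and blocking behavior are assumptions for illustration and do not reflect the study’s actual design.

# Hypothetical deny-list of action patterns the monitor treats as unsafe.
UNSAFE_PATTERNS = ("factory_reset", "send_payment", "delete_account", "rm -rf")

def is_unsafe(action: str) -> bool:
    # Flag any proposed action that matches a known-dangerous pattern.
    lowered = action.lower()
    return any(pattern in lowered for pattern in UNSAFE_PATTERNS)

def monitored_execute(action: str) -> str:
    # Run the agent's proposed action only if the monitor approves it.
    if is_unsafe(action):
        return f"BLOCKED: {action!r} looks unsafe; escalate to a human reviewer."
    return f"EXECUTED: {action!r}"

if __name__ == "__main__":
    for proposed in ("open_settings", "factory_reset device"):
        print(monitored_execute(proposed))

In practice, such a monitor would sit between the agent’s proposed action and the device, with flagged cases escalated to a human rather than executed.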

Other studies are focused on merging blockchain technology with AI, while others are pursuing the labeling of AI-generated content to curb the proliferation of deepfakes.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.

Source: https://coingeek.com/microsoft-peking-university-attempt-to-equip-openai-gpt-4-with-android-skills/