We love machines. We follow our navigation systems to get where we are going, and we weigh recommendations about travel, restaurants and even lifelong partners across countless apps and websites, because we know algorithms can spot options we might like better than we ever could. But when it comes to final decisions about our health, our job or our kids, for example, would you trust AI, and entrust it, to act on your behalf? Probably not.
This is why we (FP) talked to Kavya Pearlman (KP), Founder and CEO of XRSI, the X-Reality Safety Intelligence group she founded to address and mitigate the risks in the interaction between humans and exponential technologies. She is based on the West Coast of the US, of course. This is our exchange.
FP. What’s going on with the advent of AI?
KP. For years, tech companies have normalized the idea that we must give up our most valuable asset, our data, in exchange for digital convenience. We always click “accept” without ever asking questions. Now, with the rise of wearables and AI-integrated systems, the stakes are much higher. It’s not just about browsing history or location data anymore. Companies are harvesting insights from our bodies and minds, from heart rhythms and brain activity to emotional states. And still, almost no one is asking: How do we trust these systems with our most intimate data? What power do we have if we don’t trust them? What are the indicators of trust we should demand?
This isn’t just a technical challenge. It’s a governance challenge and, at its core, a question of trust. Without transparency and accountability, AI risks amplifying hidden biases, eroding trust, and leaving people without recourse when systems get it wrong. Trust cannot exist if we don’t know what data is being collected, how it’s used, or how decisions are made.
FP. Can you really build a system that delivers that, transparency and accountability?
KP. You can, if you want to. As an example, we just launched our Responsible Data Governance (RDG) standard. It provides concrete guardrails for AI and wearable technologies, including clear policies on what data can and cannot be used, protocols for managing AI outputs and ensuring their quality, explainability logs so decisions aren’t hidden in a black box, alignment with global regulations to protect individuals across borders, and so forth.
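As an aside, here is a minimal sketch of what one of those guardrails, an explainability log entry for an AI-driven wearable, could look like in practice. The field names and structure below are purely illustrative assumptions, not part of XRSI’s RDG specification.

```python
# Hypothetical sketch of an explainability log entry for an AI-driven wearable.
# Field names and structure are illustrative assumptions, not the RDG standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class ExplainabilityLogEntry:
    model_id: str            # which model produced the output
    model_version: str       # exact version, so decisions can be traced later
    input_categories: list   # e.g. ["heart_rate_variability"]; categories, never raw data
    output_summary: str      # what the system decided or recommended
    confidence: float        # model-reported confidence, 0.0 to 1.0
    policy_basis: str        # which data-use policy permitted this processing
    user_consent_ref: str    # pointer to the consent record relied upon
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the entry so it can be appended to an audit trail."""
        return json.dumps(asdict(self), sort_keys=True)


# Example: recording why a stress alert was shown to a user.
entry = ExplainabilityLogEntry(
    model_id="stress-detector",
    model_version="2.3.1",
    input_categories=["heart_rate_variability", "skin_temperature"],
    output_summary="Elevated stress alert displayed to user",
    confidence=0.87,
    policy_basis="wellness-insights-only",
    user_consent_ref="consent/2025-08-01/abc123",
)
print(entry.to_json())
```

The point of such a record is that, when a system gets it wrong, there is something to audit: which model, which data categories, which consent and which policy were behind the decision.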
FP. Why should a company adopt these standards?
KP. They do have an incentive to, because consumers and fans out there will know who’s serious and who’s not. Organizations that meet the standard can be easily identified. AI doesn’t just need smarter models; it needs smarter governance, because trust is not automatic. It is earned, sustained, and protected through responsible data governance. The question is no longer “can AI do this?” but rather “can we trust the way it’s being done?”.
FP. Trust is not automatic, and consumers’ benefit, in line with human values, is not necessarily the objective of any given model. We need new standards, recognized across public and private enterprises, and groups like XRSI are working on them. The right time to understand, guide, label and measure is now.
By Frank Pagano
Source: https://en.cryptonomist.ch/2025/08/24/can-we-completely-trust-ai/