This post is a guest contribution by George Siosi Samuels, managing director at Faiā. See how Faiā is committed to staying at the forefront of technological advancements here.
As enterprise professionals diving into emerging technologies—from artificial intelligence (AI) to blockchain and beyond—we’re constantly wrestling with how to harness these tools to drive innovation, efficiency, and growth. However, one question looms large: How do we align AI, particularly large language models (LLMs), to not just meet but exceed human expectations in the future?
The recent discussion on X, sparked by Brian Roemmele’s provocative insights into LLMs and password generation, has ignited a firestorm of thought on this very issue—and I’m here to weigh in with my take, tailored for you, the forward-thinking business leaders shaping the tech landscape.
Roemmele’s provocative take on AI alignment
Roemmele’s observations, shared across a series of posts on March 12, 2025, hit close to home for anyone tracking AI’s evolution. He argues that LLMs—like Claude—fail at generating truly random passwords because their outputs are predictable, often featuring characters like “M,” “K,” or “N.”
Furthermore, he called current AI alignment practices “foolish” due to the deterministic nature of these models, rooted in their training data. I largely agree with Roemmele’s critique of the status quo—there’s no denying that LLMs can exhibit troubling patterns that undermine their utility, especially in security-critical applications like password generation. His warning to never use LLMs for such tasks resonates deeply, as it exposes a vulnerability that enterprises cannot ignore if we’re to trust AI with mission-critical operations.
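The underlying point is that password generation should never rely on a model's learned token distribution at all. As a minimal sketch (my own illustration, not something from Roemmele's posts), here is how a password can be drawn from a cryptographically secure random source using Python's standard secrets module instead of an LLM:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Draw each character from a CSPRNG rather than an LLM's
    next-token distribution, which can carry learned biases
    toward particular characters."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike an LLM's output, every character here is sampled uniformly from the alphabet with entropy supplied by the operating system, which is exactly the property security-critical tasks require.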
Challenging the determinism narrative
But here’s where I diverge from Roemmele: I don’t buy the idea that LLMs are inherently deterministic. In my view, these models are fundamentally probabilistic, not locked into rigid, predictable patterns. Sure, their outputs often appear deterministic because of the human-imposed guardrails—safety filters, alignment training, and ethical constraints—that confine their behavior.
But at their core, LLMs operate on probabilistic algorithms, drawing from vast datasets to generate responses that reflect statistical likelihoods, not fixed rules. It’s not that they can’t be random or unpredictable; it’s that we’ve boxed them in, intentionally or not, to fit within human-defined boundaries. As enterprise leaders, we need to rethink how we leverage this probabilistic nature—not stifle it—to unlock AI’s true potential.
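This distinction between probabilistic machinery and deterministic-looking behavior is visible in how decoders sample tokens. The following toy sketch (my own illustration of temperature-scaled softmax sampling, not any vendor's actual implementation) shows why the same model can look deterministic or random depending on how we configure it:

```python
import math
import random

def sample_token(logits: list[float], temperature: float = 1.0) -> int:
    """Sample a token index from logits via temperature-scaled softmax.
    A high temperature flattens the distribution (more unpredictable);
    a temperature near zero concentrates mass on the top logit,
    making the model behave almost deterministically."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

With a near-zero temperature this function returns the highest-scoring token essentially every time; raise the temperature and the "boxed-in" behavior disappears. The determinism Roemmele observes, in other words, is largely a configuration choice layered on top of a probabilistic core.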
The cultural challenge of AI alignment
This brings me to a more significant point, one that’s less about technical tweaks and more about a cultural shift: Aligning AI with human values isn’t primarily a technical challenge—it’s a cultural one. Humans, after all, are riddled with predictable patterns shaped by culture, beliefs, traditions, and even rituals. Look at how we’ve historically adopted new technologies—blockchain, cloud computing, and even the Internet. Each time, we’ve dragged our old habits, behaviors, and fears into the new frontier, hesitating to fully embrace the tools’ capabilities until we’ve shaken off those cultural constraints.
AI alignment is no different. We’re asking LLMs to mirror human values, but those values are messy, dynamic, and deeply rooted in societal norms that evolve slowly. Expecting AI to align with this complexity instantly is like asking a steam engine to run on quantum energy—it’s a mismatch of paradigms.
Consider Roemmele’s password example: LLMs spit out patterns because they’ve been trained on human-generated data, which itself carries cultural fingerprints—our preferences for specific letters, our biases toward familiarity, and even our obsession with security rituals. But instead of seeing this as a failure of determinism, I see it as an opportunity to harness the probabilistic power of LLMs. If we can better understand and guide their probabilistic nature—perhaps by training them on more diverse, culturally nuanced datasets or designing interfaces that allow humans to steer their outputs—we can align AI with current expectations and aspirational human values that push us forward.
A call to action for enterprise leaders
For enterprise professionals, this cultural challenge is both a hurdle and a goldmine. Your organizations are already navigating AI’s potential to transform operations, customer experiences, and innovation pipelines. But to truly exceed human expectations, you’ll need to foster a cultural shift within your teams and industries. This means investing in cross-disciplinary collaboration—bringing together technologists, ethicists, anthropologists, and business strategists—to redefine what “alignment” means.
It means experimenting with AI systems that embrace their probabilistic nature rather than forcing them into deterministic straitjackets. And it means recognizing that, just as humans take time to adapt to new tools, AI alignment will require patience, iteration, and a willingness to challenge old beliefs.
Looking ahead: A probabilistic future for AI
I’m optimistic about the future. We’ve seen this movie before—new tech, old habits, eventual breakthroughs. With LLMs and other AI systems, we’re on the cusp of a revolution, but it won’t happen overnight. By focusing on the cultural underpinnings of alignment rather than just the technical ones, enterprises can lead the charge, turning AI into a partner that meets and exceeds our wildest expectations.
Roemmele’s insights are a wake-up call, but they’re not the final word—let’s use them to spark a broader conversation about how we, as leaders in emerging tech, can shape a future where AI truly reflects the best of humanity.
Source: https://coingeek.com/unlocking-ai-alignment-a-cultural-challenge-for-innovators/