HumAIn
We need technology because it offers solutions capable of reducing suffering, mitigating intolerable risks, and improving lives.
But no technology should ever come at the price of a cognitive debt that costs us the sovereignty of thought and, with it, severs our connection to who we are.
Evaluating the artificial integrity of digital technologies, all the more so when they include AI, is a responsibility inherent to any so-called digital transformation. This evaluation should make it possible to identify functional artificial integrity gaps and to define preventive, corrective, and mitigation measures to address their impacts.
1. Functional Misappropriation: The use of a technology for purposes or in roles not intended by its designer and/or the organization deploying it. In such cases, the software’s intended logic and the internal governance mechanisms are rendered ineffective or inoperative, creating functional and relational confusion.
Example: a chatbot designed to answer questions about the company’s HR policies is used as a substitute for human hierarchy, handling conflict resolution or task assignment.
2. Functional Loophole: The absence of necessary steps or features that were never developed and are therefore missing from the system’s operational logic, creating a “functional void” (analogous to a legal loophole) with respect to the user’s intended use.
Example: a content generation technology (such as generative AI) that does not allow direct export of the content into a usable format (Word, PDF, CMS) with the expected quality, thus limiting or blocking its operational use.
3. Functional Safeguards: The absence of guardrails, human validation steps, or informational alerts during the system’s execution of an action with potentially irreversible effects that may not align with the user’s intent.
Example: a marketing technology automatically sends emails to a contact list without any mechanism to block the sending, request user confirmation, or raise an alert when a critical condition, such as validation of the correct recipient list, has not been met.
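As a rough illustration only, the sketch below (with hypothetical names, not any specific product’s API) shows what such a missing safeguard could look like once added: a guardrail that refuses an irreversible bulk send unless the recipient list has been validated and a human has explicitly confirmed the action.

```python
# Illustrative sketch (hypothetical names): a guardrail that blocks an
# irreversible bulk send unless a critical precondition has been verified
# and a human has explicitly confirmed the action.

class SafeguardError(Exception):
    """Raised when a required guardrail condition is not satisfied."""

def send_campaign(recipients, content, *, list_verified: bool, human_confirmed: bool):
    if not recipients:
        raise SafeguardError("Recipient list is empty; nothing to send.")
    if not list_verified:
        # Functional safeguard: alert and block instead of sending silently.
        raise SafeguardError("Recipient list has not been validated by a human.")
    if not human_confirmed:
        raise SafeguardError("Explicit user confirmation is required before sending.")
    for address in recipients:
        print(f"Sending to {address}")  # stand-in for the real, irreversible action

# Usage: the send is refused until both guardrail conditions are met.
try:
    send_campaign(["a@example.com"], "Hello", list_verified=False, human_confirmed=True)
except SafeguardError as err:
    print(f"Blocked: {err}")
```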
4. Functional Alienation: The creation of automatic behaviors or conditioned responses, akin to Pavlovian reflexes, that diminish or eliminate the user’s capacity for reflection and judgment, leading to a gradual erosion of their decision-making sovereignty and, consequently, their free will.
Example: the systematic acceptance of cookies, or blind validation of system alerts by cognitively fatigued users.
5. Functional Ideology: An emotional dependency on the technology that weakens or suppresses critical thinking and fosters the mental construction of an ideology, fueling narratives of relativization, rationalization, or collective denial regarding the technology’s proper functioning, or lack thereof.
Example: justifying shortcomings or errors inherent to the technology’s operations with arguments like “It’s not the tool’s fault” or “The tool can’t guess what the user forgets”.
6. Functional Cultural Coherence: A contradiction or conflict between the logical framework imposed or influenced by the technology and the behavioral values or principles promoted by the organizational culture.
Example: a digitized workflow that leads to the creation of validation and control teams overseeing the work of others, within an organization that promotes and values team empowerment.
7. Functional Transparency: The absence or inaccessibility of transparency and explainability regarding the decision-making mechanisms or algorithmic logic of a technology, particularly in cases where it may anticipate, override, or go beyond the user’s original intent.
Example: a candidate pre-selection technology that manages trade-offs and conflicts between user-defined selection criteria (e.g., experience, education, soft skills) without making the weighting or exclusion rules explicitly visible, editable, or verifiable by the user.
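To make the point of functional transparency concrete, here is a hedged sketch (hypothetical criteria and weights, not the cited tool’s actual logic) in which the weighting rules are declared explicitly, can be edited by the user, and every score is reported with its per-criterion breakdown rather than resolved invisibly inside the tool.

```python
# Illustrative sketch (hypothetical names and weights): a pre-selection score
# whose weighting rules are explicit, editable, and reported back to the user.

WEIGHTS = {"experience": 0.5, "education": 0.3, "soft_skills": 0.2}  # user-editable

def score_candidate(criteria_scores: dict, weights: dict = WEIGHTS):
    """Return the weighted score and a per-criterion breakdown."""
    breakdown = {name: weights[name] * criteria_scores.get(name, 0.0) for name in weights}
    return sum(breakdown.values()), breakdown

total, detail = score_candidate({"experience": 0.9, "education": 0.6, "soft_skills": 0.4})
print(f"total={total:.2f}")
for criterion, contribution in detail.items():
    print(f"  {criterion}: {contribution:.2f}")  # the trade-off is visible, not hidden
```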
8. Functional Addiction: The presence of features based on gamification, instant gratification, or micro-reward systems specifically designed to hack the user’s motivation circuits, activating neurological reward mechanisms (dopamine, serotonin, norepinephrine, etc.) to trigger repetitive, compulsive, and addictive behaviors. These mechanisms can lead to emotional decompensation (as a form of compensatory refuge) and self-reinforcing cycles (withdrawal-like phenomena).
Example: notifications, likes, infinite scroll algorithms, visual or sound bonuses, levels reached through point systems, badges, ranks, or scores, used to sustain and escalate user engagement over time.
9. Functional Property: The appropriation, repurposing, or processing of personal or intellectual data by a technology, regardless of its public accessibility, without the informed, explicit, and meaningful consent of its owner or creator, including but not limited to: personal data, creative works (text, images, voice, video, etc.), behavioral data (clicks, preferences, locations, etc.), knowledge artifacts (academic, journalistic, open-source content, etc.).
Example: an AI model trained on images, texts, or voices of individuals found online, thereby monetizing someone’s identity, knowledge, or creative works without prior authorization, and without any explicit opt-in mechanisms, licensing, or transparent attribution.
10. Functional Bias: The failure of a technology to detect, mitigate, or prevent biased outputs or discriminatory patterns, either in its design, training data, decision logic, or deployment context, resulting in unjust treatment, exclusion, or systemic distortion of individuals or groups.
Example: a facial recognition system that performs significantly worse on individuals with darker skin tones because of imbalanced training data, and that is deployed without functional bias safeguards or accountability protocols.
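A minimal sketch of what one such safeguard could involve, assuming hypothetical data, group labels, and an arbitrary tolerance threshold: accuracy is disaggregated by group and a large gap blocks deployment instead of being averaged away in a single aggregate number.

```python
# Illustrative sketch (hypothetical data and threshold): a minimal bias safeguard
# that disaggregates accuracy by group and flags the gap between groups.

def accuracy_by_group(records):
    """records: iterable of (group, predicted_label, true_label)."""
    hits, totals = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

results = [("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
           ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0)]

per_group = accuracy_by_group(results)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)
if gap > 0.1:  # assumed tolerance; a real protocol would define and govern this
    print(f"Accuracy gap of {gap:.0%} across groups: block deployment and review the data.")
```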
Because they form a system with us, these 10 functional artificial integrity gaps must be analyzed through a systemic approach, ranging from the nano level (biological, neurological) to the micro level (individual, behavioral), the macro level (organizational, institutional), and the meta level (cultural, ideological).
The cost of artificial integrity deficits in systems, whether or not they involve AI, directly burdens the organization’s capital: human (skills, engagement, mental health), cultural (values, internal coherence), decision-making (sovereignty, accountability), reputational (stakeholder trust), technological (actual value of technologies), and of course, financial (inefficiency costs, underperformance of investments, maintenance overruns, corrective expenditures, legal disputes, lost opportunities, and value destruction).
This cost results in sustained value destruction, driven by intolerable risks and by an uncontrolled increase in the capital invested to generate returns, which erodes return on invested capital (ROIC), turning these technological investments into a structural handicap for the company’s profitability and, consequently, for its long-term viability.
A company does not choose a responsible digital transformation for the sake of society, as if it were opposed to, or in tension with, its own objectives.
It chooses it for itself, because its long-term performance depends on it, and because it helps strengthen the living fabric of the society that sustains it and upon which it relies to grow.
That is why we cannot be satisfied with designing machines that are just artificially intelligent. We must also ensure that they exhibit artificial integrity by design.
Source: https://www.forbes.com/sites/hamiltonmann/2025/05/30/ten-artificial-integrity-gaps-to-guard-against-with-machines-intelligent-or-not/