The COVID-19 pandemic exposed deep structural inequities in how data is collected and used to inform crisis responses, particularly for persons with disabilities (PwD) in the Global South. In a commentary published in Big Data & Society, KETEMU’s co-founder, Dr Abdul Rohman, and his colleagues argue that pandemic responses in countries like Indonesia and Vietnam were marred by two forms of irrationality: situational and contextual. These irrationalities stemmed from applying data practices suited to non-crisis contexts during emergencies and from relying on narrow assumptions about disability that ignored its socio-cultural complexity.
While the article focuses on disability rights and pandemic preparedness, its insights resonate with ongoing debates about AI governance in Southeast Asia. Both domains grapple with similar challenges: data scarcity, systemic biases, and the tension between urgency and equity. As governments and organizations in the region accelerate AI adoption, these lessons are critical for designing governance frameworks that are inclusive, ethical, and resilient.
Situational Irrationality and AI Risk Adoption
Situational irrationality, as the commentary describes it, occurs when practices from one context are misapplied to another, such as relying on low-involvement demographic data during a fast-moving health crisis. In AI governance, a parallel risk emerges when global AI standards or Western-centric benchmarks are transplanted into Southeast Asian contexts without adaptation. For example, importing risk frameworks designed for high-resource environments into countries with limited digital infrastructure can lead to ineffective or even harmful outcomes.
Risk adoption in AI should therefore be context-sensitive. Southeast Asia’s socio-technical landscape is diverse: urban centers like Singapore and Jakarta coexist with rural communities where connectivity is sparse. Governance models must account for these disparities, ensuring that risk mitigation strategies such as algorithmic audits or bias detection are feasible and meaningful locally. Blindly applying “best practices” without situational awareness risks perpetuating inequities, much like pandemic responses that sidelined PwD due to inaccessible data systems.
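To make “algorithmic audit” concrete, here is a minimal sketch of one locally feasible check: measuring whether an automated eligibility model approves applicants at similar rates across groups (a demographic-parity test). The group labels, data layout, and the 0.1 flag threshold are illustrative assumptions, not a standard; the sketch deliberately uses only Python’s standard library, so it could run even in low-resource settings without specialized tooling.

```python
# Minimal bias-audit sketch: compare approval rates across groups.
# Field names, groups, and the 0.1 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group_label, approved: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("urban", True), ("urban", True), ("urban", False),
          ("rural", False), ("rural", False), ("rural", True)]
gap = parity_gap(sample)
print("approval rates:", approval_rates(sample))
if gap > 0.1:  # illustrative tolerance, set per deployment
    print(f"parity gap {gap:.2f} exceeds tolerance: flag for review")
```

The point of such a check is not the specific metric but its feasibility: a test like this needs no cloud infrastructure or proprietary tooling, which is exactly the kind of situational fit the paragraph above argues for.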
Contextual Irrationality and Human-in-the-Loop Imperatives
Contextual irrationality arises when the assumptions underlying data use fail to match the lived realities of marginalized groups. In Vietnam, disability data was often reduced to economic eligibility metrics, ignoring diversity in needs and rights. Similarly, AI systems trained on incomplete or biased datasets can reinforce stereotypes or exclude vulnerable populations. When governance frameworks prioritize efficiency over inclusivity, they replicate the same ableist logic Rohman and his colleagues critique.
This is where human-in-the-loop (HITL) principles become indispensable. HITL ensures that human judgment complements automated decision-making, particularly in high-stakes domains like healthcare, education, and social protection. For Southeast Asia, embedding HITL in AI governance is not just a technical safeguard; it is a cultural and ethical necessity. Local organizations, civil society groups, and affected communities must be actively involved in shaping AI systems, from data collection to deployment. Their participation can surface contextual nuances that algorithms alone cannot capture, mitigating the risk of “contextual irrationality” in AI-driven policies.
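As a concrete illustration of the HITL principle described above, the sketch below routes an automated decision to human review whenever it falls in a high-stakes domain or the model’s confidence is low. The Prediction structure, the domain list, and the 0.9 threshold are hypothetical assumptions for illustration, not a prescribed design.

```python
# Minimal human-in-the-loop decision gate (illustrative assumptions only).
from dataclasses import dataclass

HIGH_STAKES_DOMAINS = {"healthcare", "education", "social_protection"}
CONFIDENCE_THRESHOLD = 0.9  # illustrative; would be set per deployment

@dataclass
class Prediction:
    label: str
    confidence: float
    domain: str

def route(prediction: Prediction) -> str:
    """Return 'auto' to apply the decision or 'human_review' to escalate."""
    if prediction.domain in HIGH_STAKES_DOMAINS:
        return "human_review"   # humans always review high-stakes cases
    if prediction.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # the model is not confident enough
    return "auto"

print(route(Prediction("eligible", 0.97, "social_protection")))  # human_review
print(route(Prediction("eligible", 0.72, "logistics")))          # human_review
print(route(Prediction("eligible", 0.95, "logistics")))          # auto
```

The design choice worth noting is that high-stakes domains escalate unconditionally: no confidence score, however high, substitutes for human judgment where the paragraph above argues it is an ethical necessity.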
Building Situation- and Culture-Centric AI Governance: Three Takeaways
1. Leverage Human Infrastructure for Data Gaps
The commentary highlights how organizations of PwD in Indonesia mobilized volunteers to collect high-involvement data during the pandemic. Similarly, AI governance can harness local human networks to validate and enrich datasets, especially where digital records are incomplete. This approach aligns with HITL principles and strengthens trust in AI systems.
2. Balance Safety and Equity in Risk Frameworks
Pandemic plans often prioritized population-level safety over equity for PwD. AI governance faces a similar trade-off: optimizing for efficiency versus ensuring fairness. Southeast Asian regulators should design risk frameworks that weigh both dimensions, recognizing that equity is not an afterthought but a core principle (a minimal scoring sketch follows this list).
3. Invest in Capacity Building for Marginalized Voices
Just as the commentary calls for empowering organizations of PwD to collect and use data, AI governance must invest in digital literacy and advocacy skills among marginalized groups. Their informed participation can counterbalance technocratic decision-making and prevent algorithmic ableism.
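Referring back to the second takeaway, the sketch below shows one way a risk framework might weigh equity alongside efficiency rather than treating it as an afterthought: a combined score under which an operationally safe but exclusionary system still registers as high-risk. The weights and threshold are illustrative assumptions a regulator would need to set through consultation, not established values.

```python
# Sketch of an equity-weighted risk score; all numbers are illustrative.
def combined_risk(efficiency_risk: float, equity_risk: float,
                  equity_weight: float = 0.5) -> float:
    """Each input is a 0-1 risk score; higher means riskier.
    equity_weight=0.5 treats the two dimensions as equally important."""
    return (1 - equity_weight) * efficiency_risk + equity_weight * equity_risk

# A system that is operationally safe but excludes marginalized groups
# still scores as high-risk when equity carries real weight.
score = combined_risk(efficiency_risk=0.2, equity_risk=0.8)
print(f"combined risk: {score:.2f}")
if score > 0.4:  # illustrative approval threshold
    print("require a mitigation plan before deployment")
```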
The irrationalities in disability data use are cautionary tales for AI governance. They remind us that technology does not operate in a vacuum; it is entangled with social norms, political structures, and cultural biases. For Southeast Asia, adopting AI responsibly means embracing adaptive risk frameworks, embedding human oversight, and centering equity at every stage of governance. In doing so, the region can avoid repeating the mistakes of pandemic responses and instead chart a path toward inclusive, contextually rational AI futures.