ketemuhub.org

Bridging Discussion, Research, and Policy for Ethical Tech

Governing AI in Critical Times for Peace and Security

Artificial Intelligence (AI) is often framed as a driver of progress, but in societies marked by conflict, technology can amplify risks as easily as it offers solutions. Southeast Asia is embracing AI at a rapid pace, yet its implications for peacebuilding remain poorly understood. Indonesia’s experience provides a starting point for this conversation, and recent cases of AI misuse across the region show why governance matters.

Conflict and Digital Activism in Indonesia

Indonesia’s democratic transition has been punctuated by episodes of communal violence, such as the Ambon conflict in the early 2000s. These clashes were fueled not only by physical violence but by the spread of rumors and misinformation. In this context, digital platforms became both a threat and a lifeline.

KETEMU’s co-founder, Dr Abdul Rohman, researches how peace activists in Ambon used social media to counter false narratives and sustain interfaith dialogue. Initiatives like Filterinfo, a Facebook-based rumor verification network, helped reduce tensions by combining technology with cultural strategies and trust networks. These efforts worked because they were rooted in local realities. Technology amplified their reach, but human relationships carried the weight of reconciliation.

This lesson is critical as AI enters peacebuilding. Automated fact-checking and predictive analytics sound promising, but without cultural grounding, they risk repeating old mistakes. Peacebuilding is not just about speed or scale; it is about localized governance and context.

AI Misuse in Southeast Asia

AI is not inherently harmful, but its misuse in fragile contexts can destabilize societies. Consider these cases:

  1. Disinformation in Indonesia’s Elections
During Indonesia’s 2024 general election, AI-generated deepfakes circulated widely, including a video of the late former President Suharto appearing to endorse a political party and an audio clip of candidate Anies Baswedan seemingly being scolded by a party leader. These manipulations distracted voters from substantive issues and fueled polarization.
  2. Hate Speech Amplification in Indonesia
    AI-driven content personalization and automation have intensified hate speech online, creating echo chambers and reinforcing divisions. Studies show that generative AI tools like GPT-4 are being used to produce misleading content at scale.
  3. Deepfake Harassment in the Philippines
After Rodrigo Duterte’s arrest by the ICC, an AI-generated video falsely depicted a slain drug-war victim as alive and accusing his own sister of lying. The video went viral, triggering harassment and undermining trust in justice processes.
  4. Disaster Disinformation in Myanmar
Following the magnitude-7.7 earthquake in March 2025, AI-generated videos misrepresented the scale of destruction, complicating humanitarian relief and eroding trust in official information.

These examples show that AI misuse is not hypothetical; it is happening now, and it intersects with conflict, governance, and security.

Building AI Governance for Peace

Indonesia’s experience mirrors broader regional trends. Digital technologies expand opportunities for activism, but they also equip states and non-state actors with new tools for control and manipulation. As Rohman argues, social movements endure not because of technology alone but because of cultural continuity, memory, identity, and trust.

If peacebuilding is to guide AI governance, three priorities stand out:

  1. Contextual Design
    AI systems must reflect cultural and linguistic diversity. Tools should support, not replace, human networks that sustain trust.
  2. Inclusive Policy-Making
    Civil society actors, peacebuilders, and minority groups need a formal role in shaping AI policies to prevent reinforcing existing divisions.
  3. Regional Cooperation
    ASEAN should lead efforts to create a charter for ethical AI use in conflict-sensitive contexts, complementing cybersecurity and digital trust initiatives.

The Stakes for Southeast Asia

AI’s trajectory depends on governance choices. If adopted solely for economic gain, it risks deepening inequalities and enabling new forms of repression. But if inclusion and peacebuilding shape governance, AI could strengthen social cohesion.

Indonesia offers both caution and hope. Initiatives like Filterinfo show that technology can support peace when anchored in trust and cultural continuity. They also remind us that tools alone cannot sustain reconciliation. As Southeast Asia moves toward an AI-driven future, the question is clear: will AI serve as a bridge for dialogue or as a mechanism for division?

[Infographic: four panels showing AI governance priorities — Ethical Use (tools should support, not replace, human trust networks); Contextual Design (AI systems must reflect cultural and linguistic diversity); Inclusive Policymaking (civil society and minority groups should help shape AI policies); Regional Cooperation (ASEAN should lead ethical AI efforts in conflict-sensitive contexts).]
