Deadline: 20 February 2025
The AI Trust and Safety Re-imagination Programme invites innovators across the public and private sectors to collectively re-imagine Trust and Safety (T&S) through measures that prioritize equitable, practical approaches and foster shared public–private responsibility.
As the United Nations Development Programme (UNDP) advances its commitment to closing the global AI equity gap, a shift towards a more proactive re-envisioning of AI Trust and Safety, one that centres equitable growth and is sensitive to local contexts, is more important than ever.
The AI Trust and Safety Re-imagination Programme translates insights from UNDP’s forthcoming 2025 Human Development Report on AI into practical action. Aligned with the AI Hub for Sustainable Development’s goal of ‘growing together’ in the AI revolution, the programme is designed to strengthen local enabling environments and complement ongoing research and policy efforts to foster AI advancement.
Trust and Safety (T&S) refers to the practical methods of managing emerging AI escalations, as well as approaches to identifying, defining and mitigating risks. T&S also covers how policies and laws are translated into product designs, business operations, escalation processes and corporate communications to achieve their intended purposes.
Aims
- The programme aims to move T&S beyond reactive measures by creating practices that actively anticipate and prevent harm and foster safer development environments. These practices ought to be tailored to the sensitivities of local contexts and the needs of impacted communities.
Submission Theme
- Round 1: Systemic T&S regional challenges and the impact of AI
- Description: Region-specific challenges (and opportunities) to realizing effective T&S implementation and protections. Which existing T&S challenges could be worsened by AI, and which existing systems make it difficult to protect local communities at scale?
Why apply?
- Applicants with successful submissions will have an opportunity to:
- Present their ideas in a multi-stakeholder forum consisting of leading AI researchers, experts, and innovators who can validate and potentially support implementation of their ideas at scale.
- Engage with government stakeholders and experts to co-design and test collaborative strategies towards re-imagining AI Trust and Safety in different regional contexts.
- Join UNDP’s AI Trust and Safety community, contribute to thought leadership publications and participate in programming development initiatives through the AI Hub for Sustainable Development and other related activities.
Eligibility Criteria
- Submissions are encouraged from innovators working at the intersection of AI and T&S. They may range from prototypes and unpublished or recently published research to fully fledged products or solutions already deployed. Successful submissions must go beyond the ideation stage and demonstrate substantive outputs or findings.
- Submissions are also welcome from individuals with significant expertise from private sector companies, startups, industry leaders integrating with AI in their business sectors, relevant civil society groups, research institutions, universities and other similar organizations.
- Teams applying could comprise a university department, corporate R&D team, non-governmental organization working in the field of T&S, industry alliance, think tank, etc. Please note that no more than three people may present for a team, and at least one team member must be an English speaker.
- Applicants should currently be working on impactful research, interventions or solutions in one of the following areas:
- Scalable local insights and solutions: Approaches to T&S that are sensitive and adaptive to local contexts and industries. Examples may include a locally specific taxonomy of AI harms; red-teaming for language-specific vulnerabilities; and interventions specific to auditing for AI fraud in the finance sector.
- Local risk forecasting and prevention: Projects that forecast AI risk by leveraging local knowledge on the digital risk and vulnerabilities landscape, or projects that seek to develop locally focused, proactive strategies (e.g. technical AI auditing systems for business-to-business (B2B) financial industry scams in local languages).
- Shared responsibility: Public–private collaborative approaches to AI escalations and risk management practices that impact developing countries and distribute risk and responsibility (e.g. product-feedback mechanisms).
For more information, visit UNDP.