Participatory AI refers to an approach to artificial intelligence (AI) in which the people affected by an AI system take part in shaping how it is built and how it makes decisions. It involves humans in the design and operation of AI systems in order to increase transparency, accountability, and trust in the technology.
When to use
One way participatory AI can be implemented is through explainable AI, which is designed to give humans an understanding of how an AI system reaches its decisions. This can involve inherently interpretable models such as decision trees, or post-hoc explanation techniques applied to neural networks and other complex models, so that the reasoning behind the AI's decisions can be inspected and questioned.
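As a concrete illustration, the sketch below trains an inherently interpretable decision tree and prints its learned rules so that stakeholders can read how predictions are made. It is a minimal example that assumes scikit-learn is available; the synthetic data and the feature names are purely illustrative, not taken from any particular participatory AI project.

```python
# Minimal sketch of one explainability technique: fit an interpretable decision
# tree and render its decision rules as human-readable if/else conditions.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; in practice this would be data gathered with the community.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "tenure_years", "prior_defaults"]  # illustrative

# A shallow tree keeps the rule set short enough for non-experts to review.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# export_text prints the learned rules so stakeholders can inspect the model's logic.
print(export_text(model, feature_names=feature_names))
```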
Another approach to participatory AI is collaborative AI, in which humans work alongside AI systems to make decisions. This often takes the form of human-in-the-loop systems, where people review and verify the decisions made by the AI before they are implemented.
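A minimal human-in-the-loop sketch is shown below: model outputs whose confidence falls below a threshold are routed to a human reviewer instead of being applied automatically. The triage function, the confidence threshold, and the prediction format are assumptions made for illustration, not a standard interface.

```python
# Sketch of a human-in-the-loop pattern: low-confidence predictions are queued
# for human review rather than being acted on automatically.
from dataclasses import dataclass
from typing import List, Tuple

CONFIDENCE_THRESHOLD = 0.85  # below this, a person decides (illustrative value)

@dataclass
class Decision:
    item_id: str
    label: str
    confidence: float
    needs_human_review: bool

def triage(predictions: List[Tuple[str, str, float]]) -> Tuple[List[Decision], List[Decision]]:
    """Split model outputs into auto-applied decisions and a human review queue."""
    auto, review_queue = [], []
    for item_id, label, confidence in predictions:
        decision = Decision(item_id, label, confidence,
                            needs_human_review=confidence < CONFIDENCE_THRESHOLD)
        (review_queue if decision.needs_human_review else auto).append(decision)
    return auto, review_queue

# Example: two confident predictions are applied, one uncertain case goes to a reviewer.
auto, queue = triage([("a1", "approve", 0.97), ("a2", "deny", 0.62), ("a3", "approve", 0.91)])
print(f"auto-applied: {len(auto)}, awaiting human review: {len(queue)}")
```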
Overall, participatory AI aims to increase the accountability and transparency of AI systems, and to ensure that they are being used ethically and responsibly. It is an important consideration as AI becomes more prevalent in society and has the potential to impact many aspects of our lives.
How to use
Here are some steps to use participatory AI:
- Identify the stakeholders: Identify the people who will be affected by the AI system and involve them throughout the process. This can include end-users, community leaders, policy experts, and technical experts.
- Define the problem: Clearly define the problem you are trying to solve with AI and ensure that it is aligned with the community's needs.
- Gather data: Collect data from the community to ensure that the AI solution reflects their values, beliefs, and needs. This can be done through surveys, focus groups, or other participatory research methods.
- Co-design the solution: Involve stakeholders in the design process, including the development of algorithms, ethical considerations, and testing.
- Test and iterate: Test the AI solution with the community, gather their feedback, and iterate on the design until it meets their needs; a minimal feedback-aggregation sketch follows this list.
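The "test and iterate" step can be supported with very simple tooling. The sketch below aggregates structured feedback from community testers and flags whether another design iteration is needed. The feedback fields, the 1-5 scoring scale, and the acceptance threshold are assumptions made for the example, not part of any specific participatory AI method.

```python
# Sketch of the "test and iterate" step: summarise community feedback on a pilot
# and decide whether to run another design iteration.
from statistics import mean
from typing import Dict, List

ACCEPTANCE_THRESHOLD = 4.0  # mean usefulness score (1-5) below which we iterate again

def summarize_feedback(responses: List[Dict]) -> Dict:
    """Summarise survey responses collected from community testers."""
    scores = [r["score"] for r in responses]                      # 1-5 usefulness rating
    concerns = [r["concern"] for r in responses if r.get("concern")]
    return {
        "respondents": len(responses),
        "mean_score": round(mean(scores), 2),
        "open_concerns": concerns,
        "iterate_again": mean(scores) < ACCEPTANCE_THRESHOLD,
    }

responses = [
    {"score": 5, "concern": ""},
    {"score": 3, "concern": "Explanations use too much technical jargon"},
    {"score": 4, "concern": ""},
]
print(summarize_feedback(responses))
```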
What you get
Participatory AI has several objectives, such as:
- Addressing the "Alignment problem" of AI.
- Empowering citizens, customers, and community stakeholders.
- Supporting the large-scale collaboration needed to tackle humankind's intertwined crises. (Symmathesy)
- Providing support for the growth of collective consciousness and wisdom at a larger scale.
More info and resources
- Participatory AI for humanitarian innovation: a briefing paper (www.nesta.org.uk). This working paper outlines the current approaches to the participatory design of AI systems, and explores how these approaches may be adapted to a humanitarian setting to design new ‘Collective Crisis Intelligence’ (CCI) solutions.
- Centre for Collective Intelligence Design (www.nesta.org.uk). The Centre for Collective Intelligence Design will explore how human and machine intelligence can be combined to make the most of our collective knowledge and develop innovative and effective solutions to social challenges.
- Participatory AI futures: lessons from research in climate change, by Helena Hollis and Dr Jess Whittlestone (medium.com).
- Envisioning Communities: A Participatory Approach Towards AI for Social Good (ui.adsabs.harvard.edu). Research in artificial intelligence (AI) for social good presupposes some definition of social good, but potential definitions have been seldom suggested and never agreed upon. The normative question of what AI for social good research should be "for" is not thoughtfully elaborated, or is frequently addressed with a utilitarian outlook that prioritizes the needs of the majority over those who have been historically marginalized, brushing aside realities of injustice and inequity. We argue that AI for social good ought to be assessed by the communities that the AI system will impact, using as a guide the capabilities approach, a framework to measure the ability of different policies to improve human welfare equity. Furthermore, we lay out how AI research has the potential to catalyze social progress by expanding and equalizing capabilities. We show how the capabilities approach aligns with a participatory approach for the design and implementation of AI for social good research in a framework we introduce called PACT, in which community members affected should be brought in as partners and their input prioritized throughout the project. We conclude by providing an incomplete set of guiding questions for carrying out such participatory AI research in a way that elicits and respects a community's own definition of social good.
- Exploring the role of public participation in commercial AI labs (www.adalovelaceinstitute.org). What public participation approaches are being used by the technology sector?
- Futurepedia - The Largest AI Tools Directory (www.futurepedia.io). Futurepedia is the largest AI tools directory. Browse 700+ AI tools in 40+ categories like copywriting, image generation and video editing. Search and filter the tools by categories, pricing and features.
Experts in the community
George Pór, george@futurehow.site