Technology is Part of the Problem, but also Part of the Solution — BCB #126
Researchers are exploring how technology could improve dialogue—and democracy.
There is a growing consensus that technology is exacerbating polarization, bolstering noxious people and views, and generally making our conversations worse. But in fact, there are numerous efforts underway to leverage new technologies to improve discussion, resolve conflict, and enhance democratic processes. In this week’s issue, we’re highlighting an AI mediator that outperforms humans, small design tweaks for better online conversations, and a big new report on the many ways technology could help improve democracy.
How AI could help humans find common ground
A recent study indicates that when it comes to helping people find common ground, AI mediation may be more helpful than human mediation. The study, conducted by a team from Google DeepMind, used an AI system built on large language models, called the Habermas Machine (named after Jürgen Habermas, the philosopher who developed the concept of the “public sphere”), to create statements that people with varying opinions could all agree on. Small groups of UK residents discussed controversial questions—such as “should we lower the voting age to 16?” or “should the National Health Service be privatized?”—and then wrote individual paragraphs about their opinions. Researchers put these statements into the Habermas Machine, which generated statements designed to be acceptable to all the group members.
After participants ranked the statements and provided feedback, the Habermas Machine created revised statements. Participants then selected the one they liked best. The outcome was a statement intended to incorporate all perspectives and elicit agreement from all participants.
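To make that workflow concrete, here is a minimal sketch of an AI-mediated deliberation loop in the spirit of the procedure described above. It is not the Habermas Machine itself: the mediate function, its prompts, and the two callables (llm for the language model, collect_critiques for participant feedback) are illustrative placeholders.

```python
# A minimal, hypothetical sketch of an AI-mediated deliberation loop:
# collect opinions, draft a candidate group statement, gather participant
# critiques, and revise. Not DeepMind's implementation; `llm` and
# `collect_critiques` are placeholders the caller must supply.
from typing import Callable, List


def mediate(question: str,
            opinions: List[str],
            llm: Callable[[str], str],
            collect_critiques: Callable[[str], List[str]],
            rounds: int = 2) -> str:
    """Return a candidate consensus statement for a small group."""
    # Draft an initial statement intended to be acceptable to every participant.
    statement = llm(
        f"Question under discussion: {question}\n"
        "Individual opinions:\n"
        + "\n".join(f"- {o}" for o in opinions)
        + "\nWrite one statement that every participant could endorse."
    )

    for _ in range(rounds):
        # In the study, participants ranked candidate statements and wrote
        # critiques; here that feedback arrives through collect_critiques.
        critiques = collect_critiques(statement)
        if not critiques:
            break  # everyone is satisfied; stop revising
        # Ask the model to revise in light of the critiques.
        statement = llm(
            "Revise the statement below so it addresses the critiques while "
            "remaining acceptable to the whole group.\n"
            f"Statement: {statement}\nCritiques:\n"
            + "\n".join(f"- {c}" for c in critiques)
        )
    return statement
```

In a real deployment, collect_critiques would show the current statement to participants and return their written feedback, mirroring the ranking-and-revision step described above.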
Here’s the interesting part. The researchers compared the success of AI mediation to human mediation and found that participants preferred the Habermas Machine’s statements 56% of the time compared to 44% for the human mediators’ statements.
Group opinion statements generated by the Habermas Machine were consistently preferred by group members over those written by human mediators and received higher ratings from external judges for quality, clarity, informativeness, and perceived fairness. AI-mediated deliberation also reduced division within groups, with participants’ reported stances converging toward a common position on the issue after deliberation; this result did not occur when discussants directly exchanged views, unmediated.
The human mediators in the study were financially incentivized to create statements the group members agreed with, but they were not professional mediators. It would be interesting to compare the Habermas Machine to professional mediators to see which is more successful. However, as the researchers behind the study point out, the power of the Habermas Machine is not its “potentially ‘superhuman’ mediation but rather its ability to facilitate collective deliberation that is time-efficient, fair, and scalable.” They even designed a “virtual citizens’ assembly” to see how their AI could work in the real world, and found that participants in this setting moved towards agreeing with each other in a majority of instances.
Although finding agreement may not be the be-all and end-all for our current political and societal issues, the researchers argue that “agreement is a prerequisite for people to act collectively.”
There are considerable benefits to a technology that helps people find agreement in a time-efficient, fair, and scalable manner. Many real-world situations require groups of people to agree over the content of a written statement. These include, but are not limited to, contract agreement, conflict resolution, jury deliberation, diplomatic negotiations, constitutional conventions, artistic co-creation, and political or legislative discussions, as well as formal citizens’ assemblies. More generally, finding common ground is a precursor to collective action, in which people work together for the common good.
Design tweaks to make hard online conversations easier
Beyond finding common ground, another set of researchers recently released a study detailing the ways in which thoughtful design can facilitate and improve tough conversations online. They interviewed six therapists and 21 social media users who have had to navigate arguments in digital spaces, and came up with a handful of design recommendations that platforms can employ to turn down the temperature when exchanges get testy:
Encourage Reflection: Digital tools could prompt users to reflect on their emotional state and underlying needs before initiating difficult conversations. This reflection can help clarify intentions and foster empathy.
Facilitate Pauses: Design features could help users communicate the need for a pause effectively. For instance, an app might suggest taking a break after detecting heightened emotional language.
Promote Attunement: Communication platforms could incorporate features that indicate tone, allowing users to better understand the emotional context of messages. This might include options for expressing emotions through emoji or other visual cues.
Enable Mutual Consent: Platforms could integrate mechanisms for mutual consent regarding sensitive topics. For example, a prompt could ask both parties if they are ready to discuss a difficult issue before proceeding.
What might this look like? Apps could promote attunement by suggesting gentler rephrasings when users draft messages that seem unnecessarily harsh or testy. Or, mutual consent might involve asking both parties whether they’d like to send and/or receive read receipts at the outset of a conversation.
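As a toy illustration of the “facilitate pauses” and “encourage reflection” ideas, here is a small, hypothetical pre-send check. The keyword list and threshold are stand-ins; a real feature would rely on a trained classifier and careful UX testing rather than a word list.

```python
# A toy, hypothetical pre-send check: flag heated wording in a draft message
# and offer the sender a prompt to pause or reflect before sending.
from typing import Optional

HEATED_MARKERS = {"always", "never", "ridiculous", "stupid", "insane"}


def pre_send_check(draft: str) -> Optional[str]:
    """Return a gentle prompt if the draft looks heated, otherwise None."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    heat = len(words & HEATED_MARKERS) + draft.count("!")
    if heat >= 2:
        return ("This message reads as pretty intense. Send it now, take a "
                "short break first, or ask the other person whether they're "
                "ready for this topic?")
    return None


if __name__ == "__main__":
    print(pre_send_check("You NEVER listen and this is ridiculous!"))
```

The point is less the heuristic than the interaction pattern: the app interrupts gently, names the emotional register, and hands the choice back to the sender.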

These are small fixes, but that’s part of their appeal. One can imagine how these suggestions might be useful for anyone from a major platform to a much smaller app or startup. Small design tweaks might have a useful impact on conversations and keep everyone’s blood from boiling.
Mapping the growing world of peacebuilding technologies
These two new research initiatives aren’t one-offs. In fact, the intersection of peacebuilding efforts and algorithmic technologies is becoming a sprawling field. A new report from the Toda Peace Institute offers a useful overview of the growing world of deliberative technologies. The report comes out of an international peacebuilding workshop the Institute co-hosted last summer, which explored how technologies, AI in particular, can be designed more thoughtfully to improve democracies. (We covered a similar workshop last year.)
Among the technologies that this report discusses are computational democracy, collective intelligence, bridging systems and algorithms, and digital public squares. Each of these is a growing field unto itself, and each represents a way in which technology could make it easier, rather than harder, for people to hear each other. (A short sketch after the excerpts below shows how a bridging-style ranking might work.)
Computational democracy… seeks to enhance democratic processes by leveraging advanced computational tools such as statistics, machine learning, and algorithms to analyze and interpret the views of large groups of people… By processing and synthesizing large volumes of data, computational democracy aims to create a more inclusive and representative decision-making process. …
Collective intelligence is the ability of large groups of people to find answers and develop solutions collaboratively… Before technology, managing collective intelligence was difficult. Large numbers of people cannot all speak at the same time. Town halls could last for hours or days if they were to give time for everyone to speak. New technologies solve this problem. On digital platforms thousands or even millions of people can speak at the same time. AI can help to digest and synthesize large amounts of public input to identify themes and patterns.
Bridging Systems and Bridging Algorithms… involve developing digital tools that identify common ground between polarized groups by analyzing and mapping areas of agreement and disagreement. Bridging algorithms, in particular, aim to foster constructive dialogue by surfacing shared values and perspectives, reducing antagonism, and encouraging more nuanced understanding among users. …
Digital public squares are virtual spaces designed to facilitate open, inclusive, and democratic discussions among diverse groups of people. Deliberative technologies are designed to achieve this goal. These platforms aim to replicate the function of traditional public squares by providing a venue for citizens to engage in dialogue, share ideas, and participate in decision-making processes on various social, political, and cultural issues.
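To make the bridging idea a little more concrete, here is a minimal, hypothetical sketch of a bridging-style ranking: each candidate statement is scored by the lower of its approval rates in two opposing camps, so only statements that both sides support rise to the top. The statements, votes, and group labels are invented for illustration, and real bridging systems are considerably more sophisticated.

```python
# A minimal, hypothetical bridging-style ranking: score each statement by the
# *minimum* approval rate across two opposing groups, so statements rise only
# when both sides support them. All data here is invented for illustration.
from typing import Dict, List


def bridging_score(votes_a: List[int], votes_b: List[int]) -> float:
    """Approval rate in each group (1 = agree, 0 = disagree); keep the minimum."""
    rate_a = sum(votes_a) / len(votes_a)
    rate_b = sum(votes_b) / len(votes_b)
    return min(rate_a, rate_b)


# Votes from two (hypothetical) opposing camps on two candidate statements.
statements: Dict[str, Dict[str, List[int]]] = {
    "Pilot the policy in a few regions before deciding": {"a": [1, 1, 1, 0], "b": [1, 1, 0, 1]},
    "Adopt the policy nationwide immediately": {"a": [1, 1, 1, 1], "b": [0, 0, 1, 0]},
}

ranked = sorted(
    statements,
    key=lambda s: bridging_score(statements[s]["a"], statements[s]["b"]),
    reverse=True,
)
for s in ranked:
    score = bridging_score(statements[s]["a"], statements[s]["b"])
    print(f"{score:.2f}  {s}")
```

Here the compromise statement outranks the one that only a single camp endorses, which is the basic behavior a bridging algorithm is after.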
For those of you feeling disheartened about how hard it has become to hear each other—especially online—this work is a necessary tonic. Technology may be part of the problem of polarization, but it may well also be a key part of the solution.
Quote of the Week
A way to approach difficult disagreements:
The RISA Framework:
R = Is it Real?
I = Is it Important?
S = Is it Specific enough?
A = Are you Aligned?
If you can confirm these 4 things, you have the best chance of making progress.