1. What Are the Debates in the Deliberative Democracy Community in Relation to Technology?

Technology has recently entered the scene of deliberative democracy, both as a subject of deliberation and as a way to improve deliberative processes. At the Artificial Intelligence (AI) Action Summit held in Paris in February 2025, Missions Publiques (2025) and the Stanford Deliberative Democracy Lab launched the Global Coalition for Inclusive AI Governance. This initiative aims to bring together more than 10,000 citizens from across the world to deliberate about the future of AI governance. In Flanders (Belgium), the ‘amai!’ programme uses civic engagement tools such as crowdsourcing and citizen juries to design and fund narrowly scoped AI projects, broadening the audiences involved in the development of new technologies (Duerinckx et al. 2024). Big Tech platforms are increasingly organising their own deliberative projects (Malkin & Alnemr 2024). In the European Union (EU) context, a European Citizens’ Panel (ECP) on the topic of ‘Virtual Worlds’ was held in 2023, followed by another on AI in 2024.

In parallel, the growing momentum of deliberative minipublics on the regulation and governance of technology is matched by the increasing use of technology as a way to improve citizens’ assemblies (CAs). A growing literature by both academics and practitioners explores the possibilities of using technology to improve deliberation. The mainstream position in this debate appears to be that, while it is not a bulletproof solution, technology holds the potential to make deliberative democracy better, by integrating AI into citizens’ assemblies (McKinney 2024) or by using it to bring deliberation to the masses (Landemore 2024). What these innovations have in common is a belief in technological solutionism, a concept advanced by Evgeny Morozov (2013), which suggests that societal problems can be ‘solved’ through technology. In the specific case of deliberative democracy, it is the belief that technology is an ‘efficient’ way of solving the ‘problems’ of deliberative minipublics. Indeed, Goñi (2025) has explored how, while the deliberative democracy literature carries forward such ‘technological solutionism’, the science and technology studies (STS) literature engages in a sort of ‘participatory solutionism’ in the design of new technological artefacts. Relatedly, deliberative minipublics are seen as a ‘new paradigm of democracy’ (Landemore 2020) that can better connect citizens with political elites and consequently further democratise democracy. However, the emergence of this so-called ‘deliberative wave’ is also the result of lobbying efforts by deliberative democracy consultants.

In what follows, we first take stock of the latest developments in CAs, focusing specifically on the ECPs and Google’s Habermas Machine (HM). We argue that introducing technology as a ‘solution’ to ‘fix’ some of the ‘problems’ identified within the deliberative democracy community reinforces the depoliticisation and disintermediation of deliberative initiatives. The focus on the technological component thereby distracts from criticising the problematic nature of minipublics from a systemic understanding of deliberation.

2. The European Citizens’ Panels and the Habermas Machine

2.1. Technology in the European citizens’ panels: Object of deliberation and methodological innovations

The past five years have been marked by the emergence of democratic innovations in the EU (Oleart 2023a). Technology has been an important vector of these new processes, as both an object of deliberation and a leading methodological innovation. However, the track record in terms of democratic outreach is poor, as political conflict was heavily neutralised. Some authors argue that ECPs and other minipublics are sometimes simply a self-legitimising strategy (Kindermann 2025) of ‘participatory-washing’ by the convening institution, a form of public relations aimed at persuading the public that the work of an organisation is the result of a democratic process of citizen participation (Palomo Hernández 2024: 106).

In the specific 2023 ECP on Virtual Worlds, which we observed and assisted, multiple innovations were put forward by the organisers. During the first session (24th–26th February 2023), participants were introduced to Sara Lisa Vogl, a virtual reality (VR) artist who appeared as her VR avatar. Her contribution focused on making sense of the Metaverse as a ‘free’ space where the structural injustices and discrimination of the ‘physical’ world can be overcome. In VR, people can be truly ‘equal’. Relatedly, the second session (10th–12th March 2023) of the Virtual Worlds ECP was organised online, but the participants were given a set of goggles to be able to participate in a discussion about the regulation of virtual worlds through a virtual environment. Each participant created their own avatar within a virtual world scenario of the European Commission. This Sims-like virtual world did not add value to the deliberative process, and in fact added new obstacles, especially in relation to the digital divide and the inclusion of older or less technology-savvy participants. In the end, the virtual world remained an anecdote, as the online session was conducted almost solely through an online videoconferencing programme that did not stimulate discussion and interaction between participants. Unsurprisingly, the final recommendations on the regulation of virtual worlds were very business-friendly, with calls for the EU and its member states to cooperate closely with the private sector (on ‘citizenwashing’ EU tech policies via ECPs, see Petit & Oleart 2025). In another ECP, which one of us observed, on the topic of ‘Learning Mobility’ (March–April 2023; European Commission 2023: 21), AI was mobilised during the first session by the organisers to create a set of ‘personas’ that the participants described as being the ‘target groups’ for learning mobility. The AI programme DALL-E generated realistic profile pictures that were meant to help participants empathise with these target groups and identify the challenges they face. In practice, several of these ‘personas’ were women of colour, who paradoxically were mostly missing from the panels themselves.

The experiences of the ECPs are just one case study of an emerging trend in which technology plays a key role in the dynamics of the discussions. Simultaneous translation enabled communication between participants speaking different languages. The use of digital tools by the facilitators and their assistants was crucial for the organisation and processing of the data generated in the discussions. The facilitation team annotated, summarised and categorised the main interventions, points of conflict and agreements between the participants of each deliberative group in a large qualitative database. The organisation’s team was responsible for analysing the data, pooling the proposals of the different groups and constructing the final set of proposals to be voted on by the full group of participants. Thus, translators and facilitators play an essential role, and their biases can affect the outcome of the minipublic. The use of new technologies is sometimes framed as an opportunity to reduce the effects of these and other biases in citizens’ assemblies. However, the fixation with the elimination of bias can be counterproductive. On the one hand, AI itself reproduces biases and produces hallucinations. On the other hand, bias is inherent in politics: like humans, AI will always come up with a partial and imperfect outcome, as not every interest can be accommodated in a political decision. In any case, as Innerarity (2024: 1670) argues, democracy ‘does not owe its ultimate legitimacy to the goodness of its decisions but to the popular authorisation that underpins those decisions’.

Problems emerge, however, when automation through AI is framed as an opportunity to address some of the limitations of CAs: from tools used for translation to tools for clustering and generating policy proposals through crowdsourced input (McKinney 2024). In fact, Landemore (2024) argues that AI can help reconcile the trade-off between deliberation and mass participation. However, she admits that to do so, we must ‘let go the ideal of all minds engaged in one common deliberation and instead settle for an approximation’ (Landemore 2024: 40), therefore further disentangling deliberation from mass politics. While minipublics would potentially multiply the number of participants through technology, the internal dynamics of such processes and their relationship with the public sphere would be minimally affected.

2.2. The Habermas Machine, a ‘deliberation without citizens’: Generating ‘consensus’ contra Habermas?

Interestingly, AI is not only conceived as a tool for streamlining processes or massively amplifying participation. More radical proposals also envisage AI as a catalyst for deliberation itself. This is the case, among others, of the HM developed by Google DeepMind. The HM is a system of large language models (LLMs) designed to act as a mediator (replacing human mediation) in deliberative groups, one that promises to be able to ‘promote agreements’ or ‘find common ground’ between people with opposing political positions (Bakker et al. 2022). The HM has only been used experimentally, and its designers claim that they do not intend to deploy the machine publicly. However, the HM has a series of norms and assumptions embedded in its design that require detailed exploration (on this, see Palomo Hernández 2025).

The HM consists of two complementary LLMs: (1) a generative model that suggests statements reflecting the diversity of positions of the members of the deliberative group, and (2) a personalised reward model that scores statements based on how strongly it expects each member of the deliberative group to agree with them (Tessler et al. 2024). The statements generated by the HM are based on participants’ responses to questions generated by the machine itself. These statements are then evaluated by the participants according to the quality of the argument and the level of agreement with their own position. The process of evaluation by the participants and the generation of consensus statements by the machine is repeated several times. According to the experiments carried out by the designers, the HM generates group consensus statements that elicit a higher degree of agreement among participants than those written by human mediators. In these experiments, the topics discussed by the deliberative groups arose from ‘potentially divisive’ questions generated by an LLM from a small set of seed exemplars. Although external to the remit of the HM, agenda-setting by the AI itself carries an important bias towards the past (Innerarity 2023). On the one hand, AI generates issues based on what has historically been considered ‘potentially divisive’, and is thus unable to generate moments of political rupture or change. On the other hand, by linking what is political to what is ‘potentially divisive’ as the object of future consensus, AI is unable to identify already established consensuses as objects of possible political deconstruction and debate (Palomo Hernández 2025).
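To make the two-model loop described above concrete, the following is a purely illustrative Python sketch written under our own assumptions: the function names, the candidate-generation stub and the random scoring stub are hypothetical placeholders, not DeepMind’s actual models or API. It shows only the structure of the process reported by Tessler et al. (2024): generate candidate group statements, score them per participant, select the most broadly acceptable one, and iterate.

```python
import random

random.seed(0)  # reproducibility of this illustrative stub only

def generate_candidates(opinions, n=4):
    """Stand-in for the generative LLM: drafts n candidate group statements
    conditioned on participants' written opinions (here, trivial strings)."""
    return [f"Candidate {i} synthesising: {'; '.join(opinions)}" for i in range(n)]

def predicted_agreement(statement, opinion):
    """Stand-in for the personalised reward model: predicts how strongly the
    author of `opinion` would endorse `statement` (here, a random stub)."""
    return random.random()

def select_statement(opinions):
    """One round: generate candidates, then pick the statement whose *lowest*
    predicted per-participant agreement is highest, so the chosen text is
    broadly acceptable rather than favouring a majority."""
    candidates = generate_candidates(opinions)
    return max(candidates,
               key=lambda s: min(predicted_agreement(s, o) for o in opinions))

def deliberate(opinions, rounds=3):
    """Iterated loop: in the real system, participants rank and critique each
    selected statement and the models re-generate; here, for illustration,
    the selection step is simply repeated."""
    statement = None
    for _ in range(rounds):
        statement = select_statement(opinions)
    return statement
```

The max-of-min selection rule sketched here reflects the stated aim of finding ‘common ground’ acceptable to every participant; the actual HM relies on trained LLMs and on participants’ own rankings rather than on any scoring rule of this kind.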

Therefore, in the case of the HM, AI does not contribute to solving the problems of CAs (see Farrell & Han 2025). Rather, it brings about new ones. Beyond the generation of topics for debate, the HM is designed for the generation of political agreements. However, in order to build these agreements, the HM leaves aside human-to-human argumentative and reason-based interaction. Through the HM, participants do not contrast their opinions in a deliberative exchange. Participants only rank their degree of agreement with the consensus statements generated by the HM. Because the HM structures the process solely towards obtaining agreement as the result of deliberation, its deliberative impact at the systemic level is minimal. The HM leaves aside the deliberative skills of the participants or, at least, makes them unnecessary for its functioning. Thus, in a sort of ‘deliberation without citizens’, the HM simplifies and reduces human intervention in the deliberative process to a minimum.

The Google DeepMind designers (Tessler et al. 2024: 1) state that they have named this AI system the ‘HM’ after Jürgen Habermas, since he ‘proposed that when rational people deliberate under idealized conditions, agreement will emerge in the public sphere’. However, Habermas does not argue that deliberation should be structured towards the end or telos of consensus. Consensus is understood by Habermas (2022) not as a possible and desirable outcome of deliberation, but as a precondition for the kind of discourse that should guide deliberation. Habermas’ early conception of the public sphere is problematic given its omission of questions around slavery and the exclusion of racialised communities, elements that are constitutive of the bourgeois public sphere (Willems 2023). Nevertheless, Habermas’ public sphere is still based on the openness of participation to multiple actors and on contesting public authorities’ decisions through deliberation, thereby generating ‘communicative power’ as a way to challenge the state’s ‘administrative power’. In the HM, by contrast, there is no discursive or argumentative interaction between participants at all. The change of position of some participants is not explained by an argumentative communicative exchange between peers, but by an algorithmic calculation of what the participants will consider more or less ‘acceptable’.

3. If Technosolutionism Harms Democracy and Deliberation, What Are the Alternatives?

While in STS literature, citizen participation is often seen as the ‘solution’ to the ‘problem’ of technology, in deliberative democracy theory, technological innovations arise as the ‘solution’ to the ‘problem’ of citizen participation (Goñi 2025). Technosolutionism applied to deliberation assumes that the problem with minipublics is that their nature hinders their scalability (McKinney & Chwalisz 2025) or other ‘solvable’ technical problems, ignoring structural limitations. Although AI can multiply participants, streamline processes and reduce organisational costs, the use of AI does not solve the two main problems of minipublics: the inherent depoliticisation and individualisation of citizen participation (Oleart 2023b). The technosolutionist perspective adopted by many practitioners of deliberative minipublics and scholars naturalises the relationship between deliberative democracy and minipublics, while sidelining the role of conflict and collective actors. By conceiving of the ‘problems’ in small and technical ways (scalability, sortition, moderation, translation, inclusion, speed, etc.), it misses the wood for the trees. It moves the discussion away from the debate on the systemic desirability of minipublics, sidelines a mass politics conception of democracy, and leaves unquestioned the political economy of AI, owned overwhelmingly by a small group of Big Tech companies. Furthermore, it contributes to the disintermediation of participation, as participation becomes increasingly mediated by technology. With AI technosolutionism, it is possible to participate as individuals in ‘deliberative’ processes without actually engaging with collective actors or in collective discussions. AI may become the new mediator that channels the voice of ‘everyday people’, reinforcing the commodification of minipublics where deliberation is turned into a product to be sold. 
Indeed, Big Tech platforms appear to be interested in developing deliberative ‘products’ such as the HM as a way to ‘citizenwash’ their corporate profit-seeking nature, and deliberative consultancies are likely to integrate AI because it facilitates their work. The process of marketisation continues its cycle: consultants sell CAs to governments and institutions, and Big Tech entrenches itself in yet another area of society.

Democratising the relationship between technology and participation entails reclaiming a systemic understanding of deliberation, rather than focusing on the specific procedures or technical methods that ‘improve’ deliberation within minipublics. Therefore, the way to materialise deliberation at the systemic level is not, as Landemore (2024) proposes, scaling up minipublics to cover an important but insufficient part of the population. The focus ought instead to be on embedding deliberative initiatives that foster an agonistic public sphere (Mouffe 2013), connecting with relevant mediators such as political parties, civil society, trade unions or social movements. This also means questioning some of the premises of deliberative minipublics. Rather than aiming for the ‘descriptive representation’ of sortition, perhaps we should be thinking about the kind of institutions that can channel the structural conflicts that exist in society to reverse material inequalities. A more meaningful problematisation asks: how can we empower the social and political groups that are structurally absent from the public sphere? Relatedly, considering the growing business of deliberative consultancies and the private ownership of AI, how can we democratise the political economy upon which deliberative initiatives are built?

From this perspective, technological ‘solutions’ to ‘improve’ deliberative democracy appear primarily as yet another ‘shortcut’ (Lafont 2020) that moves attention away from articulating a real democratic infrastructure based on vibrant debates in the public sphere and strong collective actors. It is certainly tempting to imagine that democracy has technological ‘fixes’ or ‘solutions’ in the context of the void left by the hollowing out of mass membership organisations (Mair 2013), especially in the EU context, where there is a longstanding consensus-oriented understanding of deliberation and democracy (Crespy 2014; Oleart & Theuns 2023). However, such temptations are misguided. Not only does the HM perpetuate this view, but it also conceives of agreement as the sole and ultimate goal of deliberation (Palomo Hernández 2025).

In fact, the technosolutionist trend within deliberative circles, including the HM, is not coherent with the very notion of the public sphere, meant to connect the different spaces of a polity through deliberation rather than condensing them. Instead, the combination of consensus-oriented minipublics and technological ‘fixes’ reinforces a minimalistic conception of democracy and deliberation. If technology is supposed to be part of the ‘solution’, what are the ‘problems’? And ‘who’ is meant to take the lead in tackling those problems via technology? This is even more problematic in the context of public spheres that are largely shaped by the ‘algorithmic control of communication flows that is feeding the concentration of market power of the large internet corporations’ (Habermas 2022: 167). Considering that the HM has been developed by Google’s DeepMind, it is problematic to situate Big Tech corporations as both part of the problem and the solution. Quoting Black feminist Audre Lorde, ‘the master’s tools will never dismantle the master’s house’. However, some scholars and practitioners are increasingly building close ties to Big Tech as a way to close the gap between minipublics and mass publics.

For instance, the Stanford Deliberative Democracy Lab (2025) collaborated closely with Big Tech companies to convene a forum using its AI-assisted Stanford Online Deliberation Platform to gather citizens’ views on AI agents. Consistently, the Stanford Deliberative Democracy Lab director, James Fishkin (who owns the registered trademark of ‘deliberative polling’), co-wrote a paper arguing that technology, including AI-assisted moderation, contributed to scaling both the quantity and quality of deliberation (Fishkin et al. 2025). Similarly, Hélène Landemore served in 2023 as an advisor to the Democratic Inputs to AI program at OpenAI, while reproducing an overly optimistic view of the potential uses of AI for deliberation in her recent publications (Landemore 2023, 2024). Interestingly, Landemore herself has acknowledged the limits of private companies investing in democratic innovations, given that ‘the reality is that these companies are a huge part of the problem to begin with. Their incentives are completely misaligned with the common good’ (Giesen 2024). This illustrates how corporate and Global North actors are in the driving seat of technology and participatory innovations, and are likely to continue reproducing our society’s systemic political and economic hierarchies.

Our article suggests that the artificial sense of ‘equality’ present in deliberative minipublics is only worsened through experiments such as the HM, which reinforce the disentangling of democracy from mass politics and undermine the ability to exert collective power through mass membership organisations and the public sphere. Furthermore, this approach places blind faith in Big Tech to be part of the solution. The alternative to technosolutionism is to orient deliberative democracy towards fostering an agonistic public sphere guided by political contestation, rather than towards a micro ‘representative sample’ or a machine-led pseudo-deliberation without citizens. This is not to say that technology is necessarily problematic for democracy. There are interesting recent examples of technology being mobilised in a democratic direction, such as the articulation of worker-owned technologies to strengthen the political power of vulnerable communities (Grohmann 2023). This emphasises that technology can contribute to reimagining and strengthening democracy. But in order to do that, democracy needs to be oriented towards mass politics and collective actors rather than reproducing the narrow conception of deliberation upon which minipublics are built.

Competing Interests

The authors have no competing interests to declare.

References

Bakker, M., Chadwick, M., Sheahan, H., Tessler, M., Campbell-Gillingham, L., Balaguer, J., McAleese, N., Aslanides, J., Botvinick, M., & Summerfield, C. (2022). Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems, 35, 38176–38189.

Crespy, A. (2014). Deliberative democracy and the legitimacy of the European Union: A reappraisal of conflict. Political Studies, 62(1_suppl), 81–98. DOI:  http://doi.org/10.1111/1467-9248.1205

Duerinckx, A., Veeckman, C., Verstraelen, K., Singh, N., Van Laer, J., Vaes, M., & Duysburgh, P. (2024). Co-creating artificial intelligence: Designing and enhancing democratic AI solutions through citizen science. Citizen Science: Theory and Practice, 9(1), 43. DOI:  http://doi.org/10.5334/cstp.732

European Commission (2023). Final Report on European Citizens’ Panel on Learning Mobility. Luxembourg: Publications Office of the European Union. DOI:  http://doi.org/10.2775/019728

Farrell, H., & Han, H. (2025). AI and democratic publics. Retrieved August 13, 2025, from: https://knightcolumbia.org/content/ai-and-democratic-publics

Fishkin, J., Bolotnyy, V., Lerner, J., Siu, A., & Bradburn, N. (2025). Scaling dialogue for democracy: Can automated deliberation create more deliberative voters? Perspectives on Politics, 23(2), 434–451. DOI:  http://doi.org/10.1017/S1537592724001749

Giesen, L. (2024). What a Global Citizen’s Assembly Powered by AI Might Look Like: Interview with Hélène Landemore and David Mas | PART II. Retrieved October 3, 2025, from: https://democracy-technologies.org/participation/global-citizens-assembly-ai-deliberation/

Goñi, J. I. (2025). Citizen participation and technology: Lessons from the fields of deliberative democracy and science and technology studies. Humanities and Social Sciences Communications, 12(1), 287. DOI:  http://doi.org/10.1057/s41599-025-04884-y

Grohmann, R. (2023). Not just platform, nor cooperatives: Worker-owned technologies from below. Communication, Culture & Critique, 16(4), 274–282. DOI:  http://doi.org/10.1093/ccc/tcad036

Habermas, J. (2022). Reflections and hypotheses on a further structural transformation of the political public sphere. Theory, Culture & Society, 39(4), 145–171. DOI:  http://doi.org/10.1177/02632764221112341

Innerarity, D. (2023). Predicting the past: a philosophical critique of predictive analytics. IDP. Revista d’Internet, Dret i Política, 39, 1–12. DOI:  http://doi.org/10.7238/idp.v0i39.409672

Innerarity, D. (2024). The epistemic impossibility of an artificial intelligence take-over of democracy. AI & Society, 39(4), 1667–1671. DOI:  http://doi.org/10.1007/s00146-023-01632-1

Kindermann, P. A. (2025). Simulating democratic reform in the EU: self-legitimation through participatory innovation. Journal of European Public Policy, 1–28. DOI:  http://doi.org/10.1080/13501763.2025.2554911

Lafont, C. (2020). Democracy without shortcuts. Oxford: Oxford University Press. DOI:  http://doi.org/10.1093/oso/9780198848189.001.0001

Landemore, H. (2020). Open Democracy: Reinventing Popular Rule for the Twenty-First Century. Princeton: Princeton University Press.

Landemore, H. (2023). Fostering more inclusive democracy with AI. Finance & Development, 60(4), 12–14.

Landemore, H. (2024). Can artificial intelligence bring deliberation to the masses? In R. Chang & A. Srinivasan (Eds.), Conversations in philosophy, law, and politics (online ed.). Oxford: Oxford Academic. DOI:  http://doi.org/10.1093/oso/9780198864523.003.0003

Mair, P. (2013). Ruling the void: The hollowing of Western democracy. New York: Verso Books.

Malkin, C., & Alnemr, N. (2024). Big Tech-driven deliberative projects. GLOCAN Technical Paper 5/2024.

McKinney, S. (2024). Integrating artificial intelligence into citizens’ assemblies: Benefits, concerns and future pathways. Journal of Deliberative Democracy, 20(1). DOI:  http://doi.org/10.16997/jdd.1556

McKinney, S., & Chwalisz, C. (2025). Five dimensions of scaling democratic deliberation: With and beyond AI, DemocracyNext.

Missions Publiques (2025). A Coalition for Inclusive AI. Retrieved September 30, 2025, from: https://missionspubliques.org/a-coalition-for-inclusive-ai/?lang=en

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. New York: PublicAffairs.

Mouffe, C. (2013). Agonistics: Thinking the world politically. New York: Verso Books.

Oleart, A. (2023a). Democracy without politics in EU citizen participation: From European Demoi to Decolonial Multitude. London: Palgrave. DOI:  http://doi.org/10.1007/978-3-031-38583-4

Oleart, A. (2023b). The political construction of the ‘citizen turn’ in the EU: disintermediation and depoliticisation in the Conference on the Future of Europe. Journal of Contemporary European Studies, 1–15. DOI:  http://doi.org/10.1080/14782804.2023.2177837

Oleart, A., & Theuns, T. (2023). ‘Democracy without Politics’ in the European Commission’s Response to Democratic Backsliding: From Technocratic Legalism to Democratic Pluralism. Journal of Common Market Studies, 61(4), 882–899. DOI:  http://doi.org/10.1111/jcms.13411

Palomo Hernández, N. (2024). Shielding deliberation 150 citizens at a time? Competing narratives of citizens’ assemblies as drivers for a better-informed EU citizenry. Revista de las Cortes Generales, 77–113. DOI:  http://doi.org/10.33426/rcg/2024/118/1831

Palomo Hernández, N. (2025). Towards Automating Deliberation? The Idea of Deliberative Democracy Embedded in Google’s Habermas Machine. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1951–1960. DOI:  http://doi.org/10.1609/aies.v8i2.36687

Petit, P., & Oleart, A. (2025). Citizenwashing EU Tech Policy: EU deliberative mini-publics on virtual worlds and artificial intelligence. Politics and Governance, 14. DOI:  http://doi.org/10.17645/pag.10468

Stanford Deliberative Democracy Lab (2025). Industry-wide deliberative forum invites public to weigh in on the future of AI agents. Retrieved August 13, 2025, from: https://deliberation.stanford.edu/industry-wide-deliberative-forum-invites-public-weigh-future-ai-agents

Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., et al. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852. DOI:  http://doi.org/10.1126/science.adq2852

Willems, W. (2023). The reproduction of canonical silences: Re-reading Habermas in the context of slavery and the slave trade. Communication, Culture & Critique, 16(1), 17–24. DOI:  http://doi.org/10.1093/ccc/tcac047