Projects

Transformations of Epistemic Institutions

The increasing availability and competence of generative artificial intelligence in the form of large language models (LLMs) is revolutionizing knowledge production and dissemination. We investigate the impact of language generators on epistemic institutions, such as newsrooms, academia, and startups attempting to monetize AI, and their adaptation to this technological shift. We explore AI's influence on knowledge policy, perceptions of epistemic authority, and users' anthropomorphized relationships with these systems. Our approach combines ethnography with systems-oriented social epistemology, allowing us to understand the role of AI in society and its effects on institutional norms, practices, and power dynamics.

Democracy's Digital Playground

The rapid development of digital technologies has exposed the limitations and static nature of traditional political institutions, prompting the need for innovative institutional designs. We examine the potential of a digital playground for democracy: an artificial world where institutional mechanisms can be tested and can compete against one another. This digital world would allow for transparency, easy participation, and the calibration of risk, encouraging experimentation while minimizing potential negative consequences. Starting with small-scale educational applications for younger populations, the digital playground could shape the future of institutional innovation and inform our understanding of democratic systems. Learn more in Petr Špecián's recent essay.

LLMs4Democracy

Large language models (LLMs), capable of analyzing vast amounts of text and generating human-like responses to users' prompts, could significantly impact democracy. They can access more information than human experts and tailor their findings to individuals' needs. Their use could broaden access to expert knowledge and make it more exploitable by democratic assemblies. However, LLMs 'hallucinate,' producing mistaken claims with no hint of uncertainty. Their algorithms are opaque, hindering the correction of errors or undesirable behaviors. Their control by a handful of private companies is also problematic. Can LLMs still enhance democracy? We study the ways in which democracies could harness increasingly capable AI expert systems to tackle the complex challenges of the 21st century.