Christopher Starke

I'm a Post-Doc in Political Communication currently working at the Department of Social Sciences (University of Düsseldorf) and the Düsseldorf Institute for Internet and Democracy (DIID).



Monograph

Starke, C. (2021). European Solidarity under Scrutiny: Empirical Evidence for the Effects of Media Identity Framing. Cham: Palgrave Macmillan.

Peer-reviewed Publications

Starke, C. & Lünich, M. (2020). Artificial intelligence for political decision-making in the European Union: Effects on citizens’ perceptions of input, throughput, and output legitimacy. Data & Policy, 2, e16.

Wallaschek, S., Starke, C. & Brüning, C. (2020). Mapping Solidarity in the Public Sphere: A Discourse Network Analysis of German Newspapers from 2008-2017. Politics and Governance, 8(2), 257–271.

Starke, C., Marcinkowski, F. & Wintterlin, F. (2020). Social Networking Sites, Personalization and Trust in Government: Empirical Evidence for a Mediation Model. Social Media + Society.

Marcinkowski, F., Kieslich, K., Starke, C. & Lünich, M. (2020). Implications of AI (Un-)Fairness in Higher Education Admissions: The Effects of Perceived AI (Un-)Fairness on Exit, Voice and Organizational Reputation. Proceedings of the ACM Conference on Fairness, Accountability and Transparency (pp. 122–130).
Lünich, M., Starke, C., Marcinkowski, F. & Dosenovic, P. (2019). Double Crisis: Sport Mega-Events and the Future of Public Service Broadcasting. Communication & Sport.

Marcinkowski, F. & Starke, C. (2018). Trust in Government: What’s News Media Got to Do With It? Studies in Communication Sciences, 18(1), 87–102. (Best Paper Award 2018)

Starke, C. & Flemming, F. (2017). Who is Responsible for Doping in Sports? The Attribution of Responsibility in the German Print Media. Communication & Sport, 5(2), 245–262. doi: 10.1177/2167479515603712.

Marcinkowski, F., Lünich, M. & Starke, C. (2017). Spontaneous trait inferences from candidates’ faces: the impact of the face effect on election outcomes in Germany. Acta Politica, 53(2), 231–247. doi: 10.1057/s41269-017-0048-y.

Flemming, F., Lünich, M., Marcinkowski, F. & Starke, C. (2016). Coping with dilemma. How German sport media users respond to sport mega events in autocratic countries. International Review for the Sociology of Sport, 52(8), 1008–1024. doi: 10.1177/1012690216638545.

Starke, C., Naab, T.K. & Scherer, H. (2016). Free to Expose Corruption: The Impact of Media Freedom, Internet Access and Governmental Online Service Delivery on Corruption. International Journal of Communication, 10, 4702–4722, 1932–8036/20160005.

Starke, C. & Hofmann, L. (2016). Is the Eurocrisis a Catalyst for European Identity? The complex relationship between conflicts, the public sphere and collective identity. European Policy Review, 3, 15.

Book Chapters

Marcinkowski, F. & Starke, C. (2019). Wann ist Künstliche Intelligenz (un)fair? Ein sozialwissenschaftliches Konzept von KI-Fairness [When is Artificial Intelligence (un)fair? A social science concept of AI-Fairness]. In J. Hofmann, N. Kersting, C. Ritzi & W.J. Schünemann (Eds.), Politik in der digitalen Gesellschaft: Zentrale Problemfelder und Forschungsperspektiven [Politics in the digital society: Central problem areas and research perspectives]. Bielefeld: transcript.

Köbis, N., Iragorri-Carter, D. & Starke, C. (2018). A Social Psychological View on the Social Norms of Corruption. In I. Kubbe & A. Engelbert (Eds.), Corruption and Norms: Why informal rules matter (pp. 31–52). Basingstoke: Palgrave Macmillan.

Köbis, N. & Starke, C. (2017). Why the Panama Papers did (not) shake the world - The relationship between Journalism and Corruption. In A. K. Schwickerath, A. Varraich, & L-L. Smith (Eds.), How to research corruption?: Conference Proceedings: Interdisciplinary Corruption Research Forum, June 2016, Amsterdam (pp. 69-78). Interdisciplinary Corruption Research Network.  

Flemming, F., Dosenovic, P., Marcinkowski, F., Lünich, M., & Starke, C. (2018). Von Unterhaltung bis Kritik: Wie das deutsche Publikum die Olympischen Spiele 2016 sehen möchte [From Entertainment to Criticism: How the German Public Likes to See the 2016 Olympic Games]. In H. Schramm, C. Schallhorn, H. Ihle & J.-U. Nieland (Eds.), Großer Sport, große Show, große Wirkung? Empirische Analysen zu Olympischen Spielen und Fußballgroßereignissen [Big Sport, Big Show, Big Influence? Empirical Analyses of the Olympic Games and Major Football Events] (pp. 120-145). Köln: von Halem. 

Starke, C., Lünich, M., Marcinkowski, F., Dosenovic, P., & Flemming, F. (2018). Zwischen Politik und Sporterleben: Der Umgang des deutschen Fernsehens mit den Olympischen Spielen 2016 [Between Politics and Sports Experience: How German Television Deals with the 2016 Olympic Games]. In H. Schramm, C. Schallhorn, H. Ihle & J.-U. Nieland (Eds.), Großer Sport, große Show, große Wirkung? Empirische Analysen zu Olympischen Spielen und Fußballgroßereignissen [Big Sport, Big Show, Big Influence? Empirical Analyses of the Olympic Games and Major Football Events] (pp. 98-118). Köln: von Halem. 

Marcinkowski, F., Flemming, F., & Starke, C. (2014). Mediensystem und politische Kommunikation [Media System and Political Communication]. In P. Knoepfel, Y. Papadopoulos, P. Sciarini, A. Vatter & S. Häusermann (Eds.), Handbuch der Schweizer Politik [Manual of Swiss Politics] (pp. 435-462). Zürich: Verlag Neue Zürcher Zeitung.

Working Papers & Reports

Baleis, J., Keller, B., Starke, C., & Marcinkowski, F. (2019). Cognitive and Emotional Response to Fairness in AI – A Systematic Review. Working Paper Series: Fairness in Artificial Intelligence Reasoning, 3.

Keller, B., Baleis, J., Starke, C., & Marcinkowski, F. (2019). Machine Learning and Artificial Intelligence in Higher Education: A State-of-the-Art Report on the German University Landscape. Working Paper Series: Fairness in Artificial Intelligence Reasoning, 1.

Kieslich, K., Lünich, M., Marcinkowski, F. & Starke, C. (2019). Hochschule der Zukunft - Einstellungen von Studierenden gegenüber Künstlicher Intelligenz an der Hochschule [The University of the Future: Students' Attitudes Toward Artificial Intelligence at Universities]. Précis for the Düsseldorf Institute for Internet and Democracy (DIID).

Hummel, H., & Starke, C. (2017). Defining and prosecuting transborder corruption.

Other Publications

Starke, C., Köbis, N., & Brandt, C. (2016). The Role of Social Norms in Corruption Research.

Starke, C., & Lünich, M. (2016). Corruption Perception and Media Freedom from a European Perspective.

Brandt, C., Köbis, N., & Starke, C. (2016). "Das machen doch alle so." Höhere Strafen, weniger Korruption. Diese Logik trifft nicht immer zu ["That's What Everybody Does." Higher Penalties, Less Corruption. This Logic Does not Always Apply]. Katapult, 1(1), 44. 

Starke, C., & Köbis, N. (2015). European Identity: the Aftermath of Charlie Hebdo.


Starke, C. (2014). Review: Freedom of Expression Revisited. Citizenship and Journalism in the Digital Era. (U. Carlsson, ed.: Gothenburg 2013). rezensionen:kommunikation:medien (r:k:m).

Current Projects

Artificial Intelligence in Political Decision-Making (under review)

Co-Authors: Marco Lünich (University of Düsseldorf)

A lack of political legitimacy undermines the ability of the European Union (EU) to resolve major crises and threatens the stability of the system as a whole. By integrating digital data into political processes, the EU seeks to base decision-making increasingly on sound empirical evidence. In particular, artificial intelligence (AI) systems have the potential to increase political legitimacy by identifying pressing societal issues, forecasting potential policy outcomes, informing the policy process, and evaluating policy effectiveness. This paper investigates how citizens’ perceptions of EU input, throughput, and output legitimacy are influenced by three distinct decision-making arrangements: (1) independent human decision-making (HDM); (2) independent algorithmic decision-making (ADM) by AI-based systems; and (3) hybrid decision-making by EU politicians and AI-based systems together. The results of a pre-registered online experiment (n = 572) suggest that existing EU decision-making arrangements are still perceived as the most democratic (input legitimacy). However, regarding the decision-making process itself (throughput legitimacy) and its policy outcomes (output legitimacy), no difference was observed between the status quo and hybrid decision-making involving both ADM and democratically elected EU institutions. Where ADM systems are the sole decision-maker, respondents tend to perceive these as illegitimate. The paper discusses the implications of these findings for (a) EU legitimacy and (b) data-driven policy-making.

Fairness in Machine Learning (under review)

Co-Authors: Stefan Conrad, Stefan Harmeling, Michael Leuschel, Frank Marcinkowski, Ulrich Rosar (all University of Düsseldorf)

AI increasingly permeates social life, posing a double challenge to the education sector: first, universities are expected to train highly qualified AI experts; second, AI-based systems (e.g., learning analytics, drop-out detection) are driving profound changes in research and teaching. While advocates expect AI to improve the quality of education and strengthen the efficiency of universities, critics fear that such systems could reproduce or even reinforce social inequalities. When decisions on access to education or academic success are increasingly made by AI systems, central questions of fairness, responsibility, and transparency arise. This is why the interdisciplinary FAIR/HE project analyses the technological and social conditions needed to implement fair and pro-social AI systems at German universities. We distinguish "two faces" of fairness: (1) objective fairness and (2) perceived fairness. Cooperative research between computer scientists and social scientists is indispensable to adequately investigate both forms of (un)fairness and their interaction. The interdisciplinary FAIR/HE consortium contributes to preparing German universities for the challenges and opportunities of AI. The project will develop procedures and solutions for the fair handling of data, create tools to design non-discriminatory and understandable algorithms, and provide valuable insights into the cognitive and emotional reactions of those affected.