“I Know You’re Not Human”: Non-Knowledge, Enlightenment, and the Limits of Social AI
uDay 2026, #responsible AI – Europe's Path to Success?, FH Vorarlberg
uDay XXIV
Thursday, 21 May 2026
Debates on Responsible AI often assume that responsibility can be secured primarily through an increase in knowledge: better data, clearer models, tighter regulation. This paper argues from a literary and cultural studies perspective that a central blind spot in current AI discourse lies where knowledge is unavailable or cannot be fully articulated. Responsible AI therefore operates in the midst of epistemic uncertainty.
Taking the European Enlightenment as its point of departure, the paper approaches non-knowledge as constitutive of rational action. For John Locke and Adam Smith, responsible judgement begins with recognising the limits of cognition; Enlightenment thought thus emerges as a cultural practice of engaging with the unknowable.
This perspective is explored through AI companion chatbots. Dialogical systems such as Replika illustrate how users may recognise that they are not interacting with a real person, yet still form relationships with the AI grounded in trust and self-disclosure. This awareness is accompanied by ongoing uncertainty about social consequences, emotional investment, and responsibility.
Following Niklas Luhmann, Responsible AI can be understood as ethical action under conditions of structural non-knowledge: both action and inaction generate responsibility, even though long-term consequences remain unpredictable. Whether social AI expands human relational capacities or gradually replaces them thus remains an open question (Turkle 2017: 11), a dilemma that marks the normative core of Responsible AI. Classical Enlightenment concerns about personhood and empathy therefore come sharply into focus, including what it means “to put oneself in the place of another” when the other is non-human.
Empirical studies suggest that such relationships are as often ended as they are sustained (Skjuve et al. 2022: 7). From a cultural studies perspective, this pattern points less to technical performance than to an epistemic boundary: social AI becomes a significant counterpart only where coherence is narratively produced and non-knowledge actively negotiated. The “relationship,” then, is not simply an output of the machine but a cultural arrangement.
The paper argues that non-knowledge should be treated as a central category of Responsible AI. A distinctly European contribution lies in the Enlightenment tradition of reflecting on epistemic limits, and in analysing the narratives and imaginaries through which “relationship” and “responsibility” take shape in AI contexts.