Project Abstract

Flattened English: Measuring Linguistic Labour in AI-Mediated Knowledge Work

Problem Statement

As large language models (LLMs) and AI-mediated platforms become embedded in professional and academic workflows, they increasingly function as gatekeepers of linguistic legitimacy. This project argues that these platforms impose a flattened "prestige English" that operates as a default labour currency, generating asymmetric compliance costs for users of non-standard or regionally grounded varieties.

Central Research Question

"How do AI-mediated platforms impose uneven linguistic labour on users writing in non-standard English varieties in digital work contexts?"

Methodology: Measuring Algorithmic Friction

This study moves beyond identity-based analysis to examine uneven labour across global workers in an AI-saturated ecosystem. The project develops a quantitative audit across search engines, social media, and LLMs, framing linguistic labour through three measurable metrics, illustrated in the code sketch after the list below:

  • Semantic Loss: Propositional degradation during algorithmic "standardization."
  • Reformulation Distance: The mechanical work required to move from natural voice to "legitimate" output.
  • Correction Frequency: The iterative burden of adjusting syntax to satisfy algorithmic norms.
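To make the audit procedure concrete, the sketch below (Python, standard library only) shows one plausible way to operationalize the three metrics. The function names, the proxy measures (bag-of-words cosine overlap for semantic loss, character-level similarity for reformulation distance, revision counts for correction frequency), and the example draft history are illustrative assumptions, not the study's final instrumentation.

```python
# Illustrative metric sketch for the linguistic-labour audit.
# Assumes each audited task logs the writer's original draft and every
# revision up to the version accepted by the platform.

from collections import Counter
from difflib import SequenceMatcher
import math


def semantic_loss(original: str, standardized: str) -> float:
    """Proxy for propositional degradation: 1 minus the cosine similarity
    of simple bag-of-words vectors (0 = nothing lost, 1 = no overlap)."""
    a = Counter(original.lower().split())
    b = Counter(standardized.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)


def reformulation_distance(original: str, standardized: str) -> float:
    """Mechanical rewriting work: share of the text changed between the
    natural-voice draft and the 'legitimate' output (0 = identical)."""
    return 1.0 - SequenceMatcher(None, original, standardized).ratio()


def correction_frequency(draft_history: list[str]) -> int:
    """Iterative burden: number of revision rounds before acceptance."""
    return max(len(draft_history) - 1, 0)


# Hypothetical draft history for one task (illustrative only).
drafts = [
    "Dem results show say the model no sabi our way of talk.",
    "The results show that the model does not understand our way of speaking.",
]
print(f"semantic loss:          {semantic_loss(drafts[0], drafts[-1]):.2f}")
print(f"reformulation distance: {reformulation_distance(drafts[0], drafts[-1]):.2f}")
print(f"correction frequency:   {correction_frequency(drafts)}")
```

In the full audit, semantic loss would more plausibly be measured with sentence embeddings and human validation rather than lexical overlap; the proxy above simply keeps the sketch self-contained.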

Contribution to Digital Labour Theory

By linking AI language systems to broader platform infrastructures, this research extends the definition of digital labour to include linguistic standardization as invisible work. The findings contribute a technical framework for auditing cultural alignment and labour asymmetries, opening a site of resistance against the algorithmic flattening of human expression in the global knowledge economy.