Research Methodology

Mapping Friction in the Linguistic Ecosystem

The project develops a quantitative audit that combines three strands of evidence: social media data, search behavior, and LLM interaction.

Social Data

Analyzing captions and comments for dialect markers.
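
As a rough illustration of this strand, the sketch below counts dialect markers in a batch of captions or comments; the marker lexicon and the per-1,000-token rate are illustrative assumptions, not the project's actual inventory or metric.

```python
# Minimal sketch: counting dialect markers in social media captions/comments.
# The MARKERS lexicon below is a hypothetical placeholder, not the study's inventory.
import re
from collections import Counter

MARKERS = {"finna", "innit", "lah", "deadass", "hella"}  # placeholder entries

def marker_counts(posts: list[str]) -> Counter:
    """Count lexicon hits across a batch of captions/comments."""
    counts = Counter()
    for post in posts:
        tokens = re.findall(r"[a-z']+", post.lower())
        counts.update(t for t in tokens if t in MARKERS)
    return counts

def marker_rate(posts: list[str]) -> float:
    """Markers per 1,000 tokens: a rough density measure for comparing feeds."""
    tokens = [t for p in posts for t in re.findall(r"[a-z']+", p.lower())]
    hits = sum(1 for t in tokens if t in MARKERS)
    return 1000 * hits / len(tokens) if tokens else 0.0
```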

Search Behavior

Tracking semantic drift in algorithmic retrieval.
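
A minimal sketch of how drift might be quantified, assuming some external sentence-embedding model (the embed() function here is only a stand-in): the mean distance between a query and its returned snippets serves as a proxy for how far retrieval pulls the query toward a different framing.

```python
# Minimal sketch: quantifying semantic drift between a query and what retrieval returns.
# embed() is a placeholder for a real sentence encoder; model choice is an open
# methodological decision, not fixed here.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; swap in an actual sentence encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieval_drift(query: str, result_snippets: list[str]) -> float:
    """Mean semantic distance (1 - cosine) between a query and its top results;
    higher values suggest retrieval is pulling the query toward a more
    'standard' framing of the topic."""
    q = embed(query)
    distances = [1 - cosine(q, embed(s)) for s in result_snippets]
    return sum(distances) / len(distances) if distances else 0.0
```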

LLM Interaction

Measuring reformulation distance in prompts.
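
One possible operationalization of reformulation distance is sketched below, using Python's standard-library SequenceMatcher as a lightweight proxy for edit distance; an embedding-based distance could be substituted.

```python
# Minimal sketch: reformulation distance between successive prompts in a session.
# SequenceMatcher's similarity ratio is used as a cheap proxy for edit distance.
from difflib import SequenceMatcher

def reformulation_distance(prev_prompt: str, next_prompt: str) -> float:
    """0.0 = prompt reused verbatim; 1.0 = completely rewritten."""
    return 1.0 - SequenceMatcher(None, prev_prompt, next_prompt).ratio()

def session_friction(prompts: list[str]) -> float:
    """Average reformulation distance across consecutive prompt pairs in one session."""
    if len(prompts) < 2:
        return 0.0
    pairs = zip(prompts, prompts[1:])
    return sum(reformulation_distance(a, b) for a, b in pairs) / (len(prompts) - 1)
```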

Methodologically, the project frames linguistic labor as measurable friction: semantic loss, reformulation distance, and correction frequency. These metrics are analyzed across platforms to identify where and how standardization pressure is most acute. The project does not aim to authenticate speaker identity; rather, it focuses on linguistic features circulating in digital labor environments, acknowledging the already AI-saturated conditions under which contemporary language is produced.
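
As a sketch of how the three metrics could be brought together for cross-platform comparison, the structure below bundles them into a per-platform profile with an unweighted composite score; the field names, equal weighting, and placeholder values are assumptions for illustration only.

```python
# Minimal sketch: bundling the three friction metrics per platform for comparison.
# Field names, the equal-weight composite, and the example values are illustrative
# assumptions, not findings.
from dataclasses import dataclass

@dataclass
class FrictionProfile:
    platform: str
    semantic_loss: float           # e.g. mean 1 - cosine between input and system output
    reformulation_distance: float  # e.g. mean distance between successive prompts
    correction_frequency: float    # corrections per interaction (re-edits, overrides)

    def composite(self) -> float:
        """Unweighted mean as a first-pass standardization-pressure score."""
        return (self.semantic_loss
                + self.reformulation_distance
                + self.correction_frequency) / 3

# Placeholder values only, to show how profiles would be ranked.
profiles = [
    FrictionProfile("social", 0.12, 0.30, 0.45),
    FrictionProfile("search", 0.28, 0.22, 0.10),
    FrictionProfile("llm", 0.18, 0.41, 0.33),
]
most_acute = max(profiles, key=lambda p: p.composite())
```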

By linking AI language systems to search engines and social platforms, this research extends debates on digital labor beyond data labeling and content moderation to include linguistic standardization as a form of invisible work. The project contributes a technical framework for auditing cultural alignment and labor asymmetries in AI-mediated knowledge economies.