
Embodied Embeddings

Planned
Embodiment · Representation Learning · Simulation

Hypothesis

Language models trained with grounding in simulated sensorimotor experience will develop word and sentence representations that capture relational and experiential structure absent from text-only embeddings — measurable through geometric analysis of embedding spaces and performance on embodied reasoning tasks.

Overview

The embodied cognition hypothesis — the idea that minds are shaped by bodies — has deep roots in both philosophy and cognitive science. From Merleau-Ponty's phenomenology of perception to Lakoff and Johnson's work on conceptual metaphor, there's a strong tradition arguing that abstract thought is grounded in bodily experience.

This experiment takes that hypothesis into AI. Current language models learn everything from text — they never touch, see, or move through a world. If embodied cognition theorists are right, this isn't just a limitation on what these models know; it's a limitation on how they can think.

By comparing embodied and disembodied models trained on the same linguistic data, we can test whether sensorimotor grounding produces qualitatively different computational structures. If embodied models develop embedding geometries that better reflect the relational structure of human concepts, it would suggest that embodiment contributes something computationally meaningful to understanding — and potentially to consciousness.

Methodology

  1. Build a lightweight simulated environment with basic sensory modalities (vision, proprioception, touch) and motor actions.
  2. Train a multi-modal model that learns language in the context of simulated experience — associating words with sensory patterns and actions.
  3. Train an equivalent text-only model on the same linguistic data without sensory grounding.
  4. Compare the geometry of learned embedding spaces: clustering structure, dimensionality, and the relationship between semantic similarity and experiential similarity.
  5. Evaluate both models on embodied reasoning tasks (spatial reasoning, causal inference, counterfactual reasoning about physical interactions).
  6. Measure whether embodied training changes the model's capacity for analogical reasoning and transfer learning to novel domains.
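Step 4 above can be sketched concretely. One standard way to compare the geometry of two embedding spaces, even when their dimensionalities differ, is representational similarity analysis (RSA): compute each model's pairwise-distance profile over a shared vocabulary, then correlate the two profiles. The function names and the random toy data below are illustrative assumptions, not part of the protocol.

```python
# Minimal RSA sketch for comparing embodied vs. text-only embedding spaces.
# Assumes both models embed the same vocabulary; dimensions may differ.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def similarity_profile(embeddings: np.ndarray) -> np.ndarray:
    """Condensed vector of pairwise cosine distances between word embeddings."""
    return pdist(embeddings, metric="cosine")

def rsa_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Spearman correlation between two models' pairwise-distance profiles.

    Higher values mean the two embedding spaces encode more of the same
    relational structure, regardless of embedding dimension.
    """
    rho, _ = spearmanr(similarity_profile(emb_a), similarity_profile(emb_b))
    return float(rho)

# Toy usage: 50 shared vocabulary items, different embedding widths.
rng = np.random.default_rng(0)
embodied = rng.normal(size=(50, 128))   # stand-in for the grounded model
text_only = rng.normal(size=(50, 256))  # stand-in for the text-only model
print(f"RSA(embodied, text-only) = {rsa_score(embodied, text_only):.3f}")
```

Because RSA operates on distance profiles rather than raw coordinates, it sidesteps the need to align the two spaces with a rotation or linear map before comparing them.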

Status

Planned

This experiment is in the design phase. We are currently developing the detailed experimental protocol, identifying collaborators, and securing compute resources.
