A Sensor-Driven Musical Instrument

Anima Locus translates environmental presence into sound through mmWave radar, electric field sensing, and atmospheric awareness. It's an exploration of how technology can serve authentic artistic expression.

What is Anima Locus?

🎵 Musical Instrument

Not a controller, but an instrument. Anima Locus responds to presence, movement, and environmental conditions through granular synthesis, spectral processing, and multi-sampling.

🤖 Human-Computer Collaboration

Hybrid architecture: Linux handles creative computation (audio engines, ML), while the STM32 MCU provides deterministic sensor scanning and real-time control.

🔓 Open & Protected

Hardware under CERN-OHL-S v2 (strongly reciprocal), software under AGPLv3 (network copyleft). Built to be shared, not captured.

Architecture

Sensing Layer

  • mmWave Radar (60-64 GHz) - Position, velocity, presence
  • E-Field (MGC3130) - Gesture recognition, approach detection
  • ToF Depth (VL53L5CX) - Multi-zone distance mapping
  • Environmental - Temperature, humidity, CO₂, air quality
  • Microphones - Beamforming arrays for spatial audio
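As a rough illustration of what a beamforming microphone array does (not the project's actual DSP code), a minimal delay-and-sum beamformer steers a linear array toward a chosen angle by aligning per-channel delays before averaging. The array geometry and sample rate below are assumptions for the sketch:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def delay_and_sum(signals: np.ndarray, mic_x: np.ndarray,
                  angle_deg: float, fs: int) -> np.ndarray:
    """Steer a linear mic array toward angle_deg by shifting each
    channel by its arrival delay, then averaging (delay-and-sum)."""
    angle = np.radians(angle_deg)
    # Per-mic arrival delay relative to the array origin, in samples.
    delays = mic_x * np.sin(angle) / SPEED_OF_SOUND * fs
    shifts = np.round(delays - delays.min()).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([sig[s:s + n] for sig, s in zip(signals, shifts)])
    return aligned.mean(axis=0)
```

Signals arriving from the steered direction add coherently; off-axis sources are attenuated by the averaging, which is what gives the array its spatial selectivity.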

Processing Layer

  • STM32U585 MCU - Deterministic sensor scanning, tinyML inference
  • NXP i.MX 93 (Linux) - Audio engines, sensor fusion, ML models
  • Link Protocol - Binary message passing between MCU and Linux
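The actual wire format of the Link Protocol lives in the firmware and engine repositories; as a sketch of how such a binary MCU-to-Linux framing typically works, here is a hypothetical layout of sync byte, message type, payload length, payload, and a one-byte checksum (all field choices are assumptions, not the documented protocol):

```python
import struct

SYNC = 0xA5  # hypothetical start-of-frame marker

def encode_frame(msg_type: int, payload: bytes) -> bytes:
    """Pack a message as [sync][type][len][payload][checksum]."""
    header = struct.pack("<BBH", SYNC, msg_type, len(payload))
    checksum = sum(header + payload) & 0xFF
    return header + payload + bytes([checksum])

def decode_frame(frame: bytes) -> tuple[int, bytes]:
    """Validate and unpack one frame; raises ValueError on corruption."""
    sync, msg_type, length = struct.unpack_from("<BBH", frame)
    if sync != SYNC:
        raise ValueError("bad sync byte")
    payload = frame[4:4 + length]
    if (sum(frame[:4 + length]) & 0xFF) != frame[4 + length]:
        raise ValueError("checksum mismatch")
    return msg_type, payload
```

A fixed little-endian header like this keeps parsing deterministic on the MCU side, while the checksum lets either end drop corrupted frames instead of acting on them.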

Audio Layer

  • Granular Engine - Grain synthesis with sensor modulation
  • Spectral Engine - FFT freeze, bin masking, partials emphasis
  • Multi-Sampler - Polyphonic triggering, ADSR envelopes
  • Effects Pipeline - Reverb, delay, distortion, filters
  • Nutube Stage - Optional analog warmth (Korg Nutube 6P1 dual triode)
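To make "grain synthesis with sensor modulation" concrete, here is a minimal sketch (not the engine's actual code): a Hann-windowed grain cut from a source buffer, with a hypothetical mapping from a distance sensor reading to grain length. The 0–4 m range and 10–100 ms bounds are illustrative assumptions:

```python
import numpy as np

def make_grain(source: np.ndarray, start: int, size: int) -> np.ndarray:
    """Cut a grain from the source buffer and shape it with a Hann
    window so overlapped grains sum without clicks."""
    grain = source[start:start + size]
    return grain * np.hanning(grain.size)

def sensor_to_grain_size(distance_m: float, fs: int = 48000) -> int:
    """Map a distance reading (assumed 0-4 m, e.g. from ToF) to a
    grain length between 10 ms and 100 ms: closer presence, shorter
    grains. Range and bounds are illustrative, not the engine's."""
    norm = min(max(distance_m / 4.0, 0.0), 1.0)
    ms = 10.0 + 90.0 * norm
    return int(fs * ms / 1000.0)
```

Scheduling many such grains with overlapping windows, while sensors continuously modulate size, density, and playback position, is the essence of the granular approach described above.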

Repositories

Anima Locus is organized as a multi-repo project. Each component is versioned and licensed independently, so parts of the system can be adopted on their own.

hw/

CERN-OHL-S v2

Hardware designs: KiCad schematics, PCB layouts, BOM, assembly instructions. 4-layer PCB with sensor integration and optional Nutube analog stage.

mcu-stm32/

AGPLv3

STM32U585 firmware: Sensor drivers, ISR/DMA architecture, tinyML models, link protocol implementation. Real-time deterministic control.

engine-ui/

AGPLv3

Audio engines, sensor fusion, WebSocket/REST API, Conductor UI. Python-based with FastAPI, JACK/PipeWire audio backend.

sdk-py/

AGPLv3

Python SDK: Typed client library for WebSocket and REST APIs. Async-first design, CLI tools, full type hints for mypy strict mode.

sdk-ts/

AGPLv3

TypeScript SDK: Browser and Node.js client with React hooks. Strict TypeScript, WebSocket auto-reconnect, type-safe event emitters.

docs-site/

AGPLv3 / CC BY-SA 4.0

Comprehensive documentation: Getting started, hardware assembly, API reference, performance tuning. VitePress-based static site.

Philosophy: Team Hybrid

Anima Locus embodies Team Hybrid: humans and computers working together, each doing what they do best. Not AI replacing humans, but technology augmenting authentic human expression.

What Computers Do

  • Deterministic sensor scanning
  • Low-latency signal processing
  • Pattern recognition (ML)
  • Precise timing and control

What Humans Do

  • Creative intent and expression
  • Musical interpretation
  • Emotional context
  • Aesthetic judgment

This project is part of The Authentic Rebellion Framework — a movement toward technology that serves humanity, not extractive systems.

Get Involved

Anima Locus is open source and welcomes contributions. Whether you're interested in hardware design, embedded systems, audio DSP, or documentation, there's a place for you.