This page describes a project pertaining to physical learning systems, artificial intelligence, biocomputing, and intelligence.
Summary
This project proposes a thermodynamic physical learning system in which computation and adaptation emerge directly from the physics of self-organizing analog matter. Using substrates such as nanowire/nanoparticle networks, memristive/ionic media, and viscoelastic conductors, we treat the material as a high-dimensional dynamical system that learns through its own dynamics: fast energy-minimizing relaxation combined with slower reward-gated plasticity.
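To make the two-timescale idea concrete, here is a minimal toy sketch (a hypothetical numerical model, not a device simulation): a small network of coupled analog nodes relaxes toward a low-energy state on the fast timescale, while on the slow timescale random coupling perturbations are consolidated only when they improve a scalar reward.

    # Toy sketch of fast relaxation + slow reward-gated plasticity.
    # All structure here (node count, readout, noise scales) is assumed.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 16                           # number of analog nodes
    W = rng.normal(0, 0.1, (n, n))   # symmetric couplings (the adaptive DOF)
    W = (W + W.T) / 2

    def relax(W, drive, steps=200, dt=0.05):
        """Fast phase: leaky descent on E(s) = -0.5 s^T W s - drive . s."""
        s = np.zeros(n)
        for _ in range(steps):
            s = np.tanh(s + dt * (W @ s + drive - s))  # squash to analog range
        return s

    def reward(s, target):
        return -np.sum((s[:4] - target) ** 2)  # read 4 "electrode" nodes

    target = np.array([0.5, -0.5, 0.5, -0.5])
    drive = rng.normal(0, 0.5, n)
    best = reward(relax(W, drive), target)

    for step in range(300):          # slow phase: reward-gated consolidation
        dW = rng.normal(0, 0.02, (n, n))
        dW = (dW + dW.T) / 2
        r = reward(relax(W + dW, drive), target)
        if r > best:                 # keep a perturbation only if reward improves
            W, best = W + dW, r

    print(f"final reward: {best:.4f}")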
Modern AI scales with the number of adaptive degrees of freedom. The human brain sets today's benchmark with ~8×10¹⁰ neurons and ~10¹⁴ synapses, operating on millisecond timescales with synaptic plasticity spanning milliseconds to minutes. A particle-based analog substrate can, in principle, dwarf this capacity: a dense liquid material of ~100 nm nano-objects (smaller and larger objects could be used instead) easily reaches 10¹⁰ conductive particles per cm², and because each particle can form multiple junctions, different (initially random) electrical circuits form via inter-particle contacts. Such systems can scale to million- to billion-fold more adaptive micro-interactions than a neurobiological system of similar footprint.

These physical junctions respond at native material timescales: electronic relaxation in picoseconds to nanoseconds, capacitive/RC dynamics in nanoseconds to microseconds, and controlled ionic/structural plasticity in microseconds to milliseconds, enabling inference and learning bursts on microsecond or faster horizons. The combination of astronomical state dimensionality and ultrafast local dynamics offers a new scaling frontier in which compute can grow outside the semiconductor supply chain.

These substrates compute by relaxing system energy; if harnessed with hardware-in-the-loop training (black-box RL/evolution, reward-gated consolidation), they could deliver orders-of-magnitude higher model capacity per cm², millisecond-to-microsecond training episodes, and exceptional energy efficiency, opening a path to compact, self-organizing learners that outscale digital architectures in both density and latency. That is the promise; whether it holds depends on the following feasibility assessments...
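A quick back-of-envelope check of the scaling numbers above (assuming close-packed ~100 nm particles in a 1 cm × 1 cm footprint, a 1 mm layer thickness, and, hypothetically, ~6 contacts per particle):

    # Sanity-check the density claims; geometry and contact count are assumptions.
    particle_d = 100e-9              # particle diameter, m
    footprint = 1e-2 ** 2            # 1 cm^2 in m^2
    monolayer = footprint / particle_d ** 2
    print(f"particles per cm^2 (one monolayer): {monolayer:.1e}")  # ~1e10

    layers = 1e-3 / particle_d       # a 1 mm thick layer stacks ~1e4 monolayers
    junctions = monolayer * layers * 6   # ~6 contacts per particle (assumed)
    print(f"junctions in a 1 mm layer: {junctions:.1e}")           # ~6e14

Even under these rough assumptions, the junction count of a 1 mm layer is comparable to the brain's ~10¹⁴ synapses in a footprint of a single square centimeter.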
Contract project: Analog electronics and physics
1) Seeking a hands-on physicist/EE/materials scientist to assess the feasibility of different thermodynamic physical learning system concepts. You'll evaluate device physics and instrumentation: junction/nonlinear conduction mechanisms and their limits, bandwidth, noise/SNR (including thermal noise), stability/drift, thermal budget and heating/cooling, and device lifetime under various stimulation or energy assumptions. Deliverables include: general feasibility analysis, brainstorming, hypothesizing, basic research/literature review, and high-level sketches of simulations if feasible. Ideal backgrounds: physics, analog electronics, electrochemistry, materials science, emulsions or colloids, high-frequency signal generation/processing/ADC, high-voltage electronics. This is a short-term contract or project that could potentially turn into an experimental phase. We are separately pursuing a short-term contract for the ML side of this project (see below).
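As a flavor of the first-pass estimates this assessment involves, here is a hedged example: the thermal (Johnson-Nyquist) noise floor of a resistive junction versus readout bandwidth. All component values are illustrative assumptions, not measured device parameters.

    # Johnson-Nyquist noise floor vs. readout bandwidth (illustrative values).
    import numpy as np

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # temperature, K
    R = 1e6              # junction resistance, ohm (assumed)
    bw = 1e6             # readout bandwidth, Hz (assumed, ~microsecond reads)

    v_noise = np.sqrt(4 * k_B * T * R * bw)   # RMS thermal noise voltage
    v_signal = 10e-3                          # 10 mV readout swing (assumed)
    snr_db = 20 * np.log10(v_signal / v_noise)
    print(f"noise floor: {v_noise * 1e6:.1f} uV rms, SNR: {snr_db:.1f} dB")

With these numbers the noise floor is ~130 µV rms against a 10 mV signal (~38 dB SNR); pushing the bandwidth or resistance higher erodes that margin, which is exactly the kind of trade-off the feasibility analysis should map out.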
Contract project: Machine learning expertise
2) Looking for an ML researcher/engineer to assess the feasibility of, and directions for, training a black-box physical learning system consisting of a high-dimensional analog substrate driven and read only at its electrodes. Deliverables include: general feasibility analysis, brainstorming, hypothesizing, basic research/literature review, and high-level sketches of GPU-based simulations if feasible. Ideal backgrounds: machine learning, reinforcement learning, evolutionary optimization, physical reservoir computing, energy-based/predictive-coding methods, hybrid analog-digital pipelines, and practical experience optimizing noisy non-differentiable systems.
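One candidate direction, sketched minimally below under strong assumptions: treat the substrate as a noisy black-box function of its electrode drive parameters (a stand-in function plays that role here) and optimize it with antithetic evolution strategies, which need no gradients from the physical system.

    # Black-box training loop sketch; substrate_score is a hypothetical
    # stand-in for a hardware-in-the-loop measurement (noisy, non-differentiable).
    import numpy as np

    rng = np.random.default_rng(1)
    dim = 64                             # electrode drive parameters (assumed)

    def substrate_score(p):
        """Stand-in for driving the substrate and reading a scalar score."""
        return -np.sum((p - 0.3) ** 2) + rng.normal(0, 0.1)

    theta = np.zeros(dim)
    sigma, lr, pop = 0.1, 0.05, 32       # ES hyperparameters (assumed)

    for gen in range(200):
        eps = rng.normal(size=(pop, dim))
        # Antithetic pairs: score differences cancel much of the noise.
        diffs = np.array([substrate_score(theta + sigma * e) -
                          substrate_score(theta - sigma * e) for e in eps])
        grad = (eps * diffs[:, None]).mean(0) / (2 * sigma)  # ES gradient est.
        theta += lr * grad

    print(f"final score: {substrate_score(theta):.3f}")

Antithetic sampling is a common starting point for hardware-in-the-loop optimization precisely because it tolerates measurement noise and treats the device as a pure black box; part of the deliverable would be weighing such estimators against RL and surrogate-model approaches.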