# Reservoir computing

<https://en.wikipedia.org/wiki/Reservoir_computing>

"Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed.] The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost."


* All-optical machine learning using diffractive deep neural networks (2018)
* Biological neurons act as generalization filters in reservoir computing (2022)
* Brain-inspired computing with self-assembled networks of nano-objects (2024)
* Deep physical neural networks trained with backpropagation (2022)
* Electroactive polymer gels as probabilistic reservoir automata for computation (2022)
* Electro-active polymer hydrogels exhibit emergent memory when embodied in a simulated game environment (2024)
* Evolution of electronic circuits using carbon nanotube composites (2016)
* Experimental demonstration of reservoir computing with self-assembled percolating networks of nanoparticles (2024)
* Fabrication and programming of large physically evolving networks - Hubler (2011)
* In-materio reservoir computing in a sulfonated polyaniline network (2021)
* Learning without neurons in physical systems (2023)
* Memory elements in potato tubers (2015)
* Pattern recognition in a bucket of water (2003)
* Recent advances in physical reservoir computing: A review (2019)
* Tomography of memory engrams in self-organizing nanowire connectomes (2023)



It could be helpful to use the concepts of physical [[reservoir computing]] when thinking about the human brain. In RC, computation is not necessarily attributed to synaptic weights alone but to many kinds of plausible phenomena, including atom-atom interactions, protein-protein interactions, chemical gradients, neurohormone signaling, mRNA expression, extracellular matrix (ECM) structural topology, ECM holes, ECM density, electrical activity, etc. Michael Levin recently made a related point, discussed in <https://gnusha.org/logs/2025-06-18.log>:

```
10:44 < kanzure> from levin's "agential memory" article: https://www.mdpi.com/1099-4300/26/6/481
10:44 < kanzure> ".. if engrams are not in the synaptic structures [217], where are they? My current hypothesis, driven by the above multi-scale perspective [218], is that there is no single substrate for memory[49]. Every component of the system, including but not limited to those that bubble up to conscious recollections, could be using everything in its environment as an interpretable scratchpad[50]. The deep levels of biological structure and dynamics offer an incredibly high-dimensional physical reservoir [223,224,225,226] (referred to as the senome in [19,227]) that can be exploited for memory remapping[51]. In this view, neuronal networks are not so much used for holding memory as they are for learning to interpret the engrams embodied by subcellular components [34,35]."
10:45 < kanzure> not sure why he felt the need to invoke subcellular memory engram embodiment (isn't it sufficient to say high-dimensional physical system if he believes in a "senome"?)
10:52 < kanzure> well maybe it's because he has some evidence of at least one physical memory engram substrate (and it seems to be a likely molecular object, like RNA) from his references 111 through 122 showing memory transfer to different somatic selflets (other animals) (mostly from the 1960s) by e.g. brain tissue extracts, RNA, or other objects.
10:54 < kanzure> still, biological neuronal networks can have other roles like learning to interpret or utilize other high-dimensional physical reservoirs besides just subcellular molecular memory objects and i'm not sure if it's possible to rule that out as a modality.
10:58 < kanzure> for the purposes of memory preservation, computational read/write of memory, uploading, or similar purposes, it would be very convenient if there is at least one physical substrate of memory that is accessible to our technologies such as DNA sequencing, RNA sequencing, electron beam scanning of the connectome, DNA memory tape readout, or another practicable technique.
10:59 < kanzure> if there really are multiple competing substrate options for biological cognitive memory storage, then we ought to try artificial selection to nudge towards a single modality; i have previously commented on the idea of artificial selection to nudge towards physical memory consolidation in a specific location in the brain instead of whole-brain memory storage or sparse storage, a related concept.
11:00 < kanzure> nudging towards single memory object type (of our choice) and a single centralized location is useful for goals like storage, preservation, memory read/write, and amenable to lesioning proofs.
11:02 < kanzure> i should add physical extraction to that list. physical extraction is useful because it lets us poke at things in a way that "here is a big brain have fun not being able to orchestrate it to run your studies" does not.
...
23:09 < fenn> DNA ticker tape readout could also be a record of activations, is that a memory?
23:09 < fenn> redundant distributed systems are very fault tolerant
23:10 < fenn> i'd rather have a fault tolerant brain than an easily readable one
23:10 < fenn> also i don't think it will be necessary in practice
23:12 < fenn> we should be able to distill memories by recording the process of recall, and training another network to predict that activation pattern
23:12 < fenn> then the other network can be designed to be easy to read, or it could be the final product
23:14 < fenn> is a distilled network 'the same' as the original? no, but it can be a damned good copy
```
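
fenn's distillation idea lends itself to a small sketch: record a "teacher" network's activations during recall, then fit a simpler "student" to predict the same activation pattern from the same cues. The random two-layer teacher, the linear student, and all sizes below are assumptions made purely for illustration; this shows the shape of the procedure, not a claim about brains.

```
# Distillation sketch: fit an easy-to-read student to reproduce a
# teacher's recall activations. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_cue, n_hidden, n_trials = 32, 256, 5000

# Opaque teacher: stands in for the hard-to-read original system.
W1 = rng.standard_normal((n_cue, n_hidden)) / np.sqrt(n_cue)
W2 = rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)

def teacher_recall(cues):
    return np.tanh(np.tanh(cues @ W1) @ W2)

# Record recall episodes as (cue, activation pattern) pairs.
cues = rng.standard_normal((n_trials, n_cue))
acts = teacher_recall(cues)

# Student: a single linear map, trivially easy to read out.
S = np.linalg.lstsq(cues, acts, rcond=None)[0]
print("student reconstruction MSE:", np.mean((cues @ S - acts) ** 2))
```

The nonzero reconstruction error matches fenn's caveat: the distilled network is not 'the same' as the original, only a copy of its input-output behavior.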


From the IRC logs: "Natural systems possess rich nonlinear dynamics that can be harnessed for unconventional computing. Here, we report the discovery that a common potato (Solanum tuberosum) can serve as an effective physical reservoir computer, leveraging its electrochemical properties for complex data processing tasks. By introducing time-varying electrical stimuli via electrodes, we exploit the potato's internal ionic interactions and heterogeneous tissue structure to perform computational tasks, including spoken digit classification and gesture recognition. Our experiments demonstrate that this biological substrate exhibits dynamic responses comparable to traditional reservoir computing systems, achieving high accuracy in these tasks. This bio-based computing approach expands the range of material substrates suitable for physical computing, leveraging the intrinsic properties of biological materials for advanced information processing."
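
The abstract describes the standard physical RC recipe: stimulate, measure, then train only a linear readout on the measured responses. Below is a hedged sketch of that loop; `measure_responses` and the toy two-class dataset are hypothetical stand-ins, not anything from the quoted work.

```
# Readout-only training for a physical reservoir. `measure_responses` is a
# hypothetical placeholder for the DAQ code that would apply each
# time-varying stimulus to the input electrodes and digitize the
# substrate's responses; here it fakes a fixed nonlinear mapping so the
# sketch runs end to end.
import numpy as np

def measure_responses(stimuli, n_features=800, seed=0):
    rng = np.random.default_rng(seed)
    mix = rng.standard_normal((stimuli.shape[1], n_features))
    return np.tanh(stimuli @ mix)  # one feature vector per stimulus

# Fake labeled dataset: two classes of input waveform (e.g. two gestures).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)
class_a = np.sin(2 * np.pi * 3 * t) + 0.2 * rng.standard_normal((200, 100))
class_b = np.sign(np.sin(2 * np.pi * 3 * t)) + 0.2 * rng.standard_normal((200, 100))
X_raw = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

# The substrate does the nonlinear expansion; only a linear readout is fit.
X = np.c_[measure_responses(X_raw), np.ones(len(X_raw))]  # add bias column
W = np.linalg.lstsq(X, np.eye(2)[y], rcond=None)[0]       # one-hot targets
pred = (X @ W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```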


See also [[organoids]]