as ECM, i.e. electrochemical metallization). When they are formed through oxygen vacancies, the device is referred to as metal oxide resistive memory (OxRAM; also referred to as VCM, i.e. valence change mechanism). Many OxRAM devices are also non-filamentary: the oxygen vacancies disperse throughout the insulating material instead of forming filaments, which in many cases results in an SCLC mechanism of transport. When V_reset is applied, filaments rupture and metal atoms
and vacancies return to the initial state (HRS) due to electrostatic repulsion and Joule heating.
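Setting the physics aside for a moment, a memristor used purely as a memory cell behaves like a voltage-controlled two-state switch. Here is a minimal toy sketch in Python (my own illustration; the threshold voltages and resistance values are made up, not from any particular device) of that behavior:

```python
# Toy model of a filamentary bipolar memristor as a memory cell (illustrative only):
# the device sits in either a high-resistance state (HRS, logic 0) or a
# low-resistance state (LRS, logic 1), and crossing V_set or V_reset flips it.
class ToyMemristor:
    def __init__(self, r_hrs=1e6, r_lrs=1e3, v_set=1.0, v_reset=-1.0):
        self.r_hrs, self.r_lrs = r_hrs, r_lrs      # resistances in ohms (assumed values)
        self.v_set, self.v_reset = v_set, v_reset  # switching thresholds in volts (assumed)
        self.state = "HRS"                         # start in the high-resistance state

    def apply_voltage(self, v):
        # A sufficiently positive bias forms the filament (SET -> LRS);
        # a sufficiently negative bias ruptures it (RESET -> HRS).
        if v >= self.v_set:
            self.state = "LRS"
        elif v <= self.v_reset:
            self.state = "HRS"

    def read(self, v_read=0.1):
        # A small read voltage senses the resistance without disturbing the state.
        r = self.r_lrs if self.state == "LRS" else self.r_hrs
        return v_read / r  # read current: large means 1, small means 0

cell = ToyMemristor()
cell.apply_voltage(1.2)   # SET pulse -> LRS (stores a 1)
print(cell.state, cell.read())
cell.apply_voltage(-1.2)  # RESET pulse -> HRS (stores a 0)
print(cell.state, cell.read())
```

Real devices are analog, of course, and can be programmed to intermediate conductances rather than just two states, which is exactly what the in-memory computing discussion further down relies on.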
The other notable mechanism is space charge limited current/conduction (SCLC), though honestly, a lot of papers that claim SCLC for their device absolutely suck at explaining how it works and simply write down Child's law with basically no context (the standard form is written out right after this paragraph). Personally, I find this mechanism less straightforward to understand than filamentation, since it's hard to visualize, but if you look up "SCLC Yifan Yuan" and click the PDF link from the University of Nebraska-Lincoln, you can find a decent explanation. Thank you for your service, Yifan. But unfortunately, things are still complicated: some claim their device conducts through an SCLC mechanism in either the HRS or LRS and through a different transport mechanism (sometimes Schottky emission) in the other resistance state. This has always confused me, because the way people explain electron transport in that other resistance state, in a device that otherwise displays SCLC, feels very hand-wavy. Or maybe I'm just too stupid. Which mechanisms show up, and whether a device can switch resistance at all, depend on material choices and properties, including the composition of the RS layer and the work function difference between the electrodes. For the sake of space, I won't delve deeper, since doing so requires fairly rigorous semiconductor physics, thermodynamics, and materials science, none of which is strictly necessary for understanding how memristors operate as memory devices.
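Since I just complained about papers quoting Child's law with no context, here is the expression usually meant for trap-free SCLC in a solid (often called the Mott-Gurney law), where J is the current density, ε the permittivity of the RS layer, μ the carrier mobility, V the applied voltage, and d the film thickness:

```latex
J = \frac{9}{8}\,\varepsilon\,\mu\,\frac{V^{2}}{d^{3}}
```

The signature people generally look for is the quadratic dependence of current on voltage, i.e. a slope of roughly 2 on a log-log I-V plot, as opposed to the slope of 1 in the ohmic regime.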
One reason why memristor research gained traction is the prospect of being able to
perform "neuromorphic" or "in-memory" computing, which involves storing and processing data in the same place rather than shuttling it back and forth. The current von Neumann architecture keeps storage and processing in separate parts of a computer, so data has to be transferred between storage and memory (V-NAND SSDs, DRAM, VRAM, SRAM) and processing units (CPU, GPU, TPU); that transfer costs significant energy and time, i.e. latency, and is known as the von Neumann bottleneck. In-memory computing is inspired (key word: inspired!) by the way the human brain processes information, in the hope of emulating the incredibly low power consumption the brain achieves. For context, data centers globally consumed around 400-500 TWh of electricity in 2022 but are projected to go past 1,000 TWh within about five years. The total electricity produced by all nuclear power plants combined that same year was around 2,500 TWh, so as AI-driven demand increases we may run into some problems (and I'm not sure these figures account for cooling system maintenance). Granted, data centers are still, I think, something like 5% of the power grid, but being able to redirect even half of that 5% to something else could still have significant societal impacts.
With memristor crossbar arrays, the data input takes the form of a vector of applied voltages, and the array performs a matrix multiplication to produce a vector of output currents that depend on the conductance of each memristor in the array (Ohm's law for each device, with Kirchhoff's current law summing the currents along each column). Each individual memristor can be set to a particular conductance by applying appropriate voltages across its TE and BE. The crossbar array acts as an adjustable weighting matrix, just like the weight matrices in other neural networks, but with the advantage that each individual memristor can correspond to a single weight (conductance = weight value) under the right form of inputs. With CMOS technology, one weight needs many transistors if we want high-precision values (a precision that is actually being sacrificed in Nvidia's recent Blackwell chips).
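To make the crossbar picture concrete, here is a minimal sketch (my own toy illustration with made-up conductance and voltage values, not code for any particular chip or paper) of the vector-matrix multiplication an ideal crossbar performs: the row voltages are the input vector, the memristor conductances are the weight matrix, and the column currents are the output vector.

```python
import numpy as np

# Hypothetical 3x4 crossbar: G[i, j] is the conductance (in siemens) of the
# memristor at the crossing of row (word line) i and column (bit line) j.
# Each conductance plays the role of one synaptic weight.
G = np.array([
    [1e-6, 5e-6, 2e-6, 8e-6],
    [4e-6, 1e-6, 7e-6, 3e-6],
    [6e-6, 9e-6, 1e-6, 2e-6],
])

# Input vector encoded as voltages applied to the rows (top electrodes),
# with all columns held at 0 V (virtual ground).
V = np.array([0.2, 0.5, 0.1])

# Ohm's law gives the current through each device (I = G * V), and Kirchhoff's
# current law sums those currents along each column, so the column currents are
# exactly the vector-matrix product V @ G, computed in one parallel analog step.
I = V @ G

print(I)  # output currents in amperes, one per column
```

The appeal is that the multiply-accumulate happens in the analog domain, in one step, right where the weights are stored; a real array additionally has to contend with wire resistance, sneak-path currents, device variability, and limited conductance precision.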
For those who may not have a theoretical understanding of or practical experience with neural networks, I recommend checking out the YouTube channel 3blue1brown, which gives a nice, easily accessible, surface-level overview of what neural networks are and how they are trained, covering topics