Biological Inspiration vs. Artificial Neurons
Understand how biological neurons inspired artificial neural networks (ANNs) and compare their structures and functions.
A neuron consists of:
- **Dendrites**: receive signals from other neurons.
- **Cell Body (Soma)**: processes incoming signals.
- **Axon**: transmits signals to other neurons.
- **Synapses**: connections between neurons where chemical signals (neurotransmitters) are exchanged.
Signal flow through a neuron:
- **Input (Dendrites)**: receives electrical impulses.
- **Processing (Soma)**: sums the inputs; if the combined signal exceeds a threshold, the neuron "fires."
- **Output (Axon)**: sends signals on to connected neurons.
Two features of this process shaped ANN design:
- **Non-linear activation**: neurons don't fire in proportion to their input; they fire only past a threshold.
- **Plasticity**: synapses strengthen or weaken based on activity, which is the biological basis of learning (see the sketch after this list).
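Plasticity is often formalized as a Hebbian-style rule ("neurons that fire together wire together"). Below is a minimal sketch of such an update, assuming a hypothetical learning rate `eta`; it illustrates the idea, not the actual biochemistry of synapses.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebbian-style plasticity (simplified): weights grow in
    proportion to correlated pre- and post-synaptic activity.

    w   : current weight vector (synaptic strengths)
    x   : pre-synaptic activity (inputs)
    y   : post-synaptic activity (output)
    eta : hypothetical learning rate
    """
    return w + eta * y * x

w = np.array([0.5, -0.2])
x = np.array([1.0, 0.0])   # only the first input is active
y = 1.0                    # the neuron fired
print(hebbian_update(w, x, y))  # [ 0.6 -0.2]: only the active synapse strengthens
```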
The McCulloch-Pitts neuron (1943) was the first computational model of a neuron. Its output is binary: it fires (1) if the weighted input reaches the threshold, and stays silent (0) otherwise.
Mathematical Model:

$$
a = \begin{cases} 1 & \text{if } \sum_i w_i p_i + b \ge 0 \\ 0 & \text{otherwise} \end{cases}
$$

where:
- $p_i$ = input signals
- $w_i$ = weights (synaptic strength)
- $b$ = bias (threshold adjustment)
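As a quick check of the formula, take the hypothetical values $p = (1, 0)$, $w = (2, -1)$, and $b = -1$:

$$
\sum_i w_i p_i + b = (2)(1) + (-1)(0) + (-1) = 1 \ge 0 \quad\Rightarrow\quad a = 1,
$$

so the neuron fires.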
Modern artificial neurons refine this model in two ways:
- **Activation functions**: replace the hard step with smooth alternatives such as sigmoid or ReLU (see the sketch after this list).
- **Learning**: adjust weights via backpropagation, inspired by synaptic plasticity.
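For concreteness, here is a minimal sketch of the hard step and its two common smooth replacements; the function names are our own, not from any particular library.

```python
import numpy as np

def step(z):
    """Hard threshold used by the McCulloch-Pitts neuron."""
    return np.where(z >= 0, 1.0, 0.0)

def sigmoid(z):
    """Smooth, S-shaped curve; squashes any input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Rectified linear unit: zero for negative inputs, identity otherwise."""
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
print(step(z))     # [0. 1. 1.]
print(sigmoid(z))  # [0.1192... 0.5 0.8807...]
print(relu(z))     # [0. 0. 2.]
```

Unlike the step function, sigmoid and ReLU are differentiable (almost everywhere), which is what makes gradient-based learning such as backpropagation possible.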
| Biological Neuron | Artificial Neuron |
| --- | --- |
| Dendrites receive signals | Input layer ($p_i$) |
| Synaptic strength varies | Weights ($w_i$) |
| Soma sums inputs | Weighted sum ($\sum_i w_i p_i + b$) |
| Fires if threshold met | Activation function (e.g., ReLU) |
Let’s implement a McCulloch-Pitts neuron in Python.
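Below is a minimal sketch, using the symbols from the formula above; the AND-gate weights and bias are illustrative choices, not the only ones that work.

```python
import numpy as np

def mcculloch_pitts(p, w, b):
    """Binary McCulloch-Pitts neuron: fire (1) if the weighted sum
    of inputs plus bias reaches zero, otherwise stay silent (0)."""
    z = np.dot(w, p) + b  # weighted sum: sum_i w_i * p_i + b
    return 1 if z >= 0 else 0

# Example: wiring the neuron as an AND gate
w = np.array([1.0, 1.0])
b = -1.5
for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(p, "->", mcculloch_pitts(np.array(p), w, b))
# (0, 0) -> 0, (0, 1) -> 0, (1, 0) -> 0, (1, 1) -> 1
```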
| Aspect | Biological Neuron | Artificial Neuron |
| --- | --- | --- |
| Processing | Parallel, energy-efficient | Sequential, computationally heavy |
| Learning | Dynamic, self-organizing | Requires explicit training (e.g., backpropagation) |
| Robustness | Fault-tolerant (damaged neurons adapt) | Sensitive to architecture/initialization |
Keywords: neuron, activation function, weights, bias, perceptron, backpropagation, gradient descent, loss function, hidden layers, ReLU, sigmoid, feedforward, optimization, training set, validation, overfitting, regularization, dropout, CNN, RNN, LSTM, nerd cafe