Biological Inspiration vs. Artificial Neurons
Nerd Cafe
Objective:
Understand how biological neurons inspired artificial neural networks (ANNs) and compare their structures and functions.
1. Biological Neurons: The Brain’s Building Blocks
Structure of a Biological Neuron

A neuron consists of:
Dendrites: Receive signals from other neurons.
Cell Body (Soma): Processes incoming signals.
Axon: Transmits signals to other neurons.
Synapses: Connections between neurons where chemical signals (neurotransmitters) are exchanged.
How Biological Neurons Work
Input (Dendrites): Receives electrical impulses.
Processing (Soma): Sums inputs; if the signal exceeds a threshold, the neuron "fires."
Output (Axon): Sends signals to connected neurons.
Key Properties:
Non-linear activation: Neurons do not fire in proportion to their input; they fire only once a threshold is crossed.
Plasticity: Synapses strengthen or weaken based on activity, which is the basis of learning (see the sketch below).
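As a loose illustration of plasticity, here is a minimal sketch using the classic Hebbian update rule ($\Delta w = \eta \cdot \text{pre} \cdot \text{post}$); the rule, learning rate, and starting weight are textbook simplifications, not a biological model:

```python
# Minimal Hebbian plasticity sketch: a synapse strengthens when
# presynaptic input and postsynaptic output are active together.
learning_rate = 0.1   # eta (assumed value)
weight = 0.5          # initial synaptic strength (assumed value)

for step in range(5):
    pre = 1.0                              # presynaptic activity
    post = pre * weight                    # crude stand-in for the neuron's response
    weight += learning_rate * pre * post   # Hebbian update: dw = eta * pre * post
    print(f"step {step}: weight = {weight:.3f}")
```

Each pairing of active input and output nudges the weight upward, mirroring how repeated co-activation strengthens a synapse.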
2. Artificial Neurons: Mathematical Models
McCulloch-Pitts Neuron (1943)
The first computational model of a neuron.
Binary output: Fires (1) if the weighted input reaches the threshold, otherwise stays silent (0).
Mathematical Model:

$$
y =
\begin{cases}
1 & \text{if } \sum_{i} w_i p_i + b \ge 0 \\
0 & \text{otherwise}
\end{cases}
$$

where:

$p_i$ = input signals
$w_i$ = weights (synaptic strength)
$b$ = bias (threshold adjustment)

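As a quick worked instance (the values are hypothetical, chosen only for illustration): with $p = (1, 1)$, $w = (1, 1)$, and $b = -1.5$,

$$
\sum_i w_i p_i + b = 1 + 1 - 1.5 = 0.5 \ge 0,
$$

so the neuron fires ($y = 1$). The same weights and bias reappear in the Python simulation of Section 3.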
Modern Artificial Neurons
Activation Functions: Replace the hard step function with smooth alternatives such as sigmoid and ReLU (sketched below).
Learning: Adjust weights via backpropagation (inspired by synaptic plasticity).
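Both functions take only a few lines of Python; the sketch below (helper names are our own) shows how each transforms a raw weighted sum:

```python
import math

def sigmoid(x):
    """Smooth, S-shaped activation that maps any input into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def relu(x):
    """Rectified Linear Unit: passes positive inputs, zeroes out negatives."""
    return max(0.0, x)

for x in [-2.0, 0.0, 2.0]:
    print(f"x = {x:+.1f}  sigmoid = {sigmoid(x):.3f}  relu = {relu(x):.1f}")
```

Unlike the hard step of the McCulloch-Pitts neuron, these functions are differentiable (or piecewise so), which is what makes gradient-based training possible.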
The mapping between the two is summarized below:

| Biological Neuron | Artificial Neuron |
| --- | --- |
| Dendrites receive signals | Input layer ($p_i$) |
| Synaptic strength varies | Weights ($w_i$) |
| Soma sums inputs | Weighted sum ($\sum_i w_i p_i + b$) |
| Fires if threshold met | Activation function (e.g., ReLU) |
3. Python Simulation: A Simple Artificial Neuron
Let’s implement a McCulloch-Pitts neuron in Python.
Code Example
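A minimal sketch (the AND-gate weights and bias are hand-picked for illustration, not learned):

```python
import numpy as np

def mcculloch_pitts_neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs plus bias reaches 0."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1 if weighted_sum >= 0 else 0

# Model a logical AND gate: the neuron fires only when both inputs are 1.
weights = np.array([1.0, 1.0])
bias = -1.5

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    output = mcculloch_pitts_neuron(np.array(p), weights, bias)
    print(f"inputs = {p} -> output = {output}")
```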
Output
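Running the sketch above prints:

```
inputs = (0, 0) -> output = 0
inputs = (0, 1) -> output = 0
inputs = (1, 0) -> output = 0
inputs = (1, 1) -> output = 1
```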
4. Key Differences & Limitations
| Aspect | Biological Neurons | Artificial Neurons |
| --- | --- | --- |
| Processing | Parallel, energy-efficient | Sequential, computationally heavy |
| Learning | Dynamic, self-organizing | Requires explicit training (e.g., backpropagation) |
| Robustness | Fault-tolerant (damaged neurons adapt) | Sensitive to architecture/initialization |
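To make the "explicit training" row concrete, here is a minimal sketch of gradient-descent weight updates for a single sigmoid neuron learning the AND gate; the learning rate, epoch count, and random seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the AND gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)   # random initial weights
b = 0.0
lr = 0.5                 # learning rate (arbitrary choice)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(5000):
    y = sigmoid(X @ w + b)          # forward pass
    grad_z = (y - t) * y * (1 - y)  # squared-error gradient through the sigmoid
    w -= lr * X.T @ grad_z          # gradient-descent weight update
    b -= lr * grad_z.sum()          # gradient-descent bias update

print(np.round(sigmoid(X @ w + b), 2))  # outputs move toward [0, 0, 0, 1]
```

In a multi-layer network, the same chain-rule computation used for `grad_z` here is what backpropagation automates layer by layer.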
Keywords
neuron, activation function, weights, bias, perceptron, backpropagation, gradient descent, loss function, hidden layers, ReLU, sigmoid, feedforward, optimization, training set, validation, overfitting, regularization, dropout, CNN, RNN, LSTM, nerd cafe