Biological Inspiration vs. Artificial Neurons

Nerd Cafe

Objective:

Understand how biological neurons inspired artificial neural networks (ANNs) and compare their structures and functions.

1. Biological Neurons: The Brain’s Building Blocks

Structure of a Biological Neuron

A neuron consists of:

  1. Dendrites: Receive signals from other neurons.

  2. Cell Body (Soma): Processes incoming signals.

  3. Axon: Transmits signals to other neurons.

  4. Synapses: Connections between neurons where chemical signals (neurotransmitters) are exchanged.

How Biological Neurons Work

  • Input (Dendrites): Receives electrical impulses.

  • Processing (Soma): Sums inputs; if the signal exceeds a threshold, the neuron "fires."

  • Output (Axon): Sends signals to connected neurons.

Key Properties:

  • Non-linear activation: Neurons don't respond in proportion to their input; they fire only once a threshold is crossed.

  • Plasticity: Synapses strengthen/weaken based on activity (learning).

2. Artificial Neurons: Mathematical Models

McCulloch-Pitts Neuron (1943)

  • The first computational model of a neuron.

  • Binary output: Fires (1) if the weighted input meets the threshold, otherwise stays off (0).

Mathematical Model:

$$
\text{Output} = \begin{cases} 1 & \text{if } \sum_i w_i p_i + b \geq 0 \\ 0 & \text{otherwise} \end{cases}
$$

  • $p_i$ = Input signals

  • $w_i$ = Weights (synaptic strength)

  • $b$ = Bias (threshold adjustment)
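
As a quick worked instance (values hand-picked for illustration; the same neuron is simulated in section 3 below): with inputs $p = (1, 1)$, weights $w = (1, 1)$, and bias $b = -1.5$,

$$
\sum_i w_i p_i + b = 1 \cdot 1 + 1 \cdot 1 - 1.5 = 0.5 \geq 0 \quad\Rightarrow\quad \text{Output} = 1
$$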

Modern Artificial Neurons

  • Activation Functions: Replace the step function with smooth alternatives such as sigmoid or ReLU (sketched below).

  • Learning: Adjust weights via backpropagation (inspired by synaptic plasticity).
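
A minimal sketch of the two activation functions named above, assuming NumPy is available:

```python
import numpy as np

def sigmoid(x):
    """Smooth, S-shaped function that squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """Rectified Linear Unit: passes positive values through, clamps negatives to 0."""
    return np.maximum(0.0, x)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # ~[0.119 0.5   0.881]
print(relu(z))     # [0. 0. 2.]
```

Unlike the McCulloch-Pitts step function, both are differentiable (almost) everywhere, which is what makes gradient-based learning such as backpropagation possible.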

| Biological Neuron | Artificial Neuron |
| --- | --- |
| Dendrites receive signals | Input layer ($p_i$) |
| Synaptic strength varies | Weights ($w_i$) |
| Soma sums inputs | Weighted sum ($\sum_i w_i p_i + b$) |
| Fires if threshold met | Activation function (e.g., ReLU) |

3. Python Simulation: A Simple Artificial Neuron

Let’s implement a McCulloch-Pitts neuron in Python.

Code Example
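
A minimal sketch of a McCulloch-Pitts neuron, here hand-wired as a two-input AND gate (the weights and bias are chosen for illustration, not learned):

```python
import numpy as np

def mcculloch_pitts_neuron(inputs, weights, bias):
    """Fire (1) if the weighted sum plus bias is non-negative, else output 0."""
    weighted_sum = np.dot(inputs, weights) + bias
    return 1 if weighted_sum >= 0 else 0

# Hand-picked parameters that realize a 2-input AND gate:
# the neuron fires only when both inputs are 1.
weights = np.array([1.0, 1.0])
bias = -1.5  # a threshold of 1.5, expressed as a negative bias

for p in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    output = mcculloch_pitts_neuron(np.array(p), weights, bias)
    print(f"Input {p} -> Output {output}")
```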

Output
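
Running the sketch above prints:

```
Input (0, 0) -> Output 0
Input (0, 1) -> Output 0
Input (1, 0) -> Output 0
Input (1, 1) -> Output 1
```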

4. Key Differences & Limitations

| Aspect | Biological Neuron | Artificial Neuron |
| --- | --- | --- |
| Processing | Parallel, energy-efficient | Sequential, computationally heavy |
| Learning | Dynamic, self-organizing | Requires explicit training (e.g., backpropagation) |
| Robustness | Fault-tolerant (damaged neurons adapt) | Sensitive to architecture/initialization |

Keywords

neuron, activation function, weights, bias, perceptron, backpropagation, gradient descent, loss function, hidden layers, ReLU, sigmoid, feedforward, optimization, training set, validation, overfitting, regularization, dropout, CNN, RNN, LSTM, nerd cafe
