WK2: Artificial Neural Networks I: Fundamentals

Welcome to Week 2
Artificial Neural Networks I: Fundamentals
Module Lecturer: Dr Raghav Kovvuri
Email: raghav.kovvuri@ieg.ac.uk

Slide 1 - Text slide

Introduction to ANN
Definition: Computational models inspired by biological neural networks
Key characteristics:
  • Parallel processing
  • Adaptive learning
  • Distributed representation
Historical context: From the perceptron (1958) to deep learning (present)

Slide 2 - Text slide

Introduction: The City of Neurotopia
  • Welcome to Neurotopia, a unique city that represents our Artificial Neural Network (ANN). As we explore this city, we'll uncover the fundamental concepts of ANNs.
  • Analogy: Neurotopia is a living, learning city that processes information collectively to make decisions.

Slide 3 - Text slide

Historical Context
  • 1943: McCulloch-Pitts neuron
  • 1958: Rosenblatt's Perceptron
  • 1969: Minsky and Papert's limitations of single-layer networks
  • 1986: Rumelhart, Hinton, and Williams - Backpropagation
  • 2012 onwards: Deep Learning revolution
  • Analogy: Neurotopia wasn't built in a day. Let's explore its evolution from a simple village to a complex metropolis.

Slide 4 - Text slide

Biological Inspiration
Structure of biological neurons:
  • Dendrites (receive signals)
  • Cell body (processes signals)
  • Axon (transmits signals)
  • Synapse (junction between two neurons that allows a signal to pass between them)
  • Synaptic transmission
  • Analogy: Citizens of Neurotopia (neurons) communicate through an elaborate postal system (synapses).

Slide 5 - Text slide

Which part of a biological neuron is most similar to the output of an artificial neuron?
A
Dendrites
B
Cell body
C
Axon
D
Synapses

Slide 6 - Quiz question

The Artificial Neuron
Components of an artificial neuron (a Neurotopia citizen):
  1. Inputs (x₁, x₂, ..., xₙ) - Information received
  2. Weights (w₁, w₂, ..., wₙ) - Importance of each input
  3. Bias (b) - Personal opinion
  4. Summation function (Σ) - Combining all inputs
  5. Activation function (f) - Decision to pass on information
Mathematical representation:
y = f(Σᵢ wᵢxᵢ + b)
Artificial Neuron vs Perceptron
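The weighted-sum-plus-activation formula above can be sketched in a few lines of Python; the input values, weights, and bias below are illustrative, not taken from the slides:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias, activation=sigmoid):
    # Weighted sum of inputs plus bias: z = sum_i(w_i * x_i) + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The activation function decides what the neuron passes on
    return activation(z)

# Example: two inputs with hand-picked weights and bias
y = neuron(inputs=[1.0, 0.5], weights=[0.4, -0.2], bias=0.1)
print(round(y, 4))
```

Swapping in a step function for `activation` turns this neuron into a classic perceptron, which is the distinction the next heading refers to.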

Slide 7 - Text slide

Activation Functions
Definition: Activation functions are mathematical operations applied to the weighted sum of inputs in a neuron, introducing non-linearity into the network's output.
Purpose: Introduce non-linearity, allowing networks to learn complex patterns.
Types of Activation Functions:
  1. Step Function: Binary decision (Yes/No)
  2. Sigmoid Function: Gradual transition (0 to 1); f(x) = 1 / (1 + e⁻ˣ)
  3. Hyperbolic Tangent (tanh): Gradual transition (-1 to 1); f(x) = (eˣ - e⁻ˣ) / (eˣ + e⁻ˣ)
  4. ReLU (Rectified Linear Unit): f(x) = max(0, x)
  • Analogy: Activation functions are like the citizens' "mood" influencing their decision to share information.
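As a quick sketch, the four activation functions listed above can be implemented and compared in plain Python; the sample inputs are arbitrary:

```python
import math

def step(x):
    return 1.0 if x >= 0 else 0.0  # binary decision (Yes/No)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))  # gradual transition, range (0, 1)

def tanh(x):
    return math.tanh(x)            # gradual transition, range (-1, 1)

def relu(x):
    return max(0.0, x)             # passes positives, zeroes out negatives

# Compare how each function responds to a negative, zero, and positive input
for f in (step, sigmoid, tanh, relu):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```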

Slide 8 - Text slide

Extended Analogy: The Neuron City Council
Imagine a city council (neural network) making decisions. Each council member (neuron) receives various pieces of information (inputs) from citizens. The activation function represents how each council member processes this information before voting.
  1. Step Function Councilor: Makes binary decisions. "If the majority favors it, I vote yes. Otherwise, it's a no."
  2. Sigmoid Function Councilor: Considers all perspectives gradually. "I'll weigh all the information and give a nuanced opinion between 0 and 1."
  3. Tanh Function Councilor: Similar to Sigmoid, but more decisive. "I'll consider both sides equally and can strongly agree (+1) or disagree (-1)."
  4. ReLU Function Councilor: Focuses only on positive aspects. "I'll support good ideas with full enthusiasm, but I won't consider negative aspects at all."

Slide 9 - Text slide

Which activation function would be most suitable for a neural network tasked with sentiment analysis of movie reviews (classifying as positive or negative)?
A
Step Function
B
Sigmoid Function
C
ReLU (Rectified Linear Unit)
D
Hyperbolic Tangent (tanh)

Slide 10 - Quiz question

Network Architecture
  • Input Layer (Information Gathering District): receives raw data to be processed.
  • Hidden Layer(s) (Information Processing Neighborhoods): perform intermediate computations by extracting patterns and features from the data.
  • Output Layer (Decision-Making Center): produces the final result or prediction based on the learned patterns.
Types of NN
  • Feedforward NN: Information flows from input to output
  • Recurrent NN: Some information loops back (like city planning meetings)
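A minimal sketch of one feedforward pass through the three layer types described above, assuming sigmoid activations; the layer sizes, weights, and biases are made up for illustration:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    # One layer: each row of `weights` holds a single neuron's input weights
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Input layer: 2 raw features (the "Information Gathering District")
x = [0.5, -1.0]
# Hidden layer: 2 neurons extracting intermediate features
hidden = layer(x, weights=[[0.1, 0.8], [0.4, -0.6]], biases=[0.0, 0.1])
# Output layer: 1 neuron producing the final prediction
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.2])
print(output)
```

Because information only moves forward through `layer` calls, this is a feedforward network; a recurrent network would feed some outputs back in as inputs on the next step.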

Slide 11 - Text slide

Activity (1)
  • Objective: Research and understand the differences between supervised, unsupervised, and reinforcement learning.
  • Search for definitions, key differences, and real-world examples of each type of learning.
  • Compare their learning processes, the types of algorithms used, and their applications.
  • Post your findings in the Discussion Section of Canvas for this activity.
Research Task - Supervised Learning vs Unsupervised Learning vs Reinforcement Learning (30 min)

Slide 12 - Text slide

Activity (2)
Research and Programming Task - Classification vs Clustering (20 min)
Download Classification1.py and Clustering.py from Canvas

Slide 13 - Text slide

Activity (3)
Research and Programming Task - Classification vs Regression (20 min)
Download Classification2.py and Regression.py from Canvas

Slide 14 - Text slide

Slide 15 - Text slide