Particle Swarm Optimization (PSO)

This repository contains research about Particle Swarm Optimization (PSO) and its implementation to optimize an Artificial Neural Network (ANN).

Including

Why Train a Neural Network with Particle Swarm Optimization instead of Gradient Descent

  • Motivation

    • Gradient Descent requires a differentiable activation function to compute derivatives, making the backward pass slower than the forward (feedforward) pass
    • Speeding up backpropagation requires a lot of memory to store the intermediate activations
    • Backpropagation is strongly dependent on the initialization of the weights and biases; a bad choice can lead to stagnation in a local minimum, so only a suboptimal solution is found
    • Backpropagation cannot make full use of parallelization due to its sequential nature
  • Advantages of PSO

    • PSO does not require the loss or activation function to be differentiable or even continuous
    • PSO is less likely to get stuck in a local minimum and often converges to a better solution
    • Particle evaluations are independent of one another, so PSO parallelizes well, e.g. on a GPU (see the sketch after this list)
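
To make the contrast concrete, below is a minimal, self-contained sketch of training a tiny one-hidden-layer network with PSO. It is an illustration only, not this repository's implementation: the network shape (2-8-1), the swarm size, the inertia/acceleration coefficients, and the toy dataset are all assumed values.

# Minimal PSO-trains-ANN sketch (assumed names and hyperparameters, not this
# repository's code): each particle is one flattened weight vector, and the
# fitness is simply the forward-pass loss -- no derivatives are ever computed.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: 200 points, label = whether the two features agree in sign
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

N_IN, N_HID, N_OUT = 2, 8, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # all weights and biases

def unpack(theta):
    # Slice a flat particle position into the network's weights and biases
    i = N_IN * N_HID
    W1 = theta[:i].reshape(N_IN, N_HID)
    b1 = theta[i:i + N_HID]
    W2 = theta[i + N_HID:i + N_HID + N_HID * N_OUT].reshape(N_HID, N_OUT)
    b2 = theta[-N_OUT:]
    return W1, b1, W2, b2

def fitness(theta):
    # Forward pass only: mean squared error on the toy data
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return np.mean((out.ravel() - y) ** 2)

# Standard PSO update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
N_PARTICLES, ITERS = 30, 200
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5               # assumed coefficient values

pos = rng.normal(scale=0.5, size=(N_PARTICLES, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(ITERS):
    r1 = rng.random((N_PARTICLES, 1))
    r2 = rng.random((N_PARTICLES, 1))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best training MSE found by the swarm:", pbest_val.min())

Because each particle's fitness is evaluated independently, the two list comprehensions above are the natural place to batch or parallelize the work, which is what makes the GPU argument in the list above hold.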

Environment

  • Windows 10
  • AMD Radeon 530 GPU
  • Python 3.9
  • matplotlib 3.3
  • numpy 1.19.5
  • scikit-learn 0.24
  • scipy 1.6.0

Run example with MNIST

$ git clone https://github.com/aboelkassem/PSO.git
$ cd PSO
$ python example_mnist.py
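
If the dependencies from the environment list above are not installed yet, they can be installed with pip first (package names taken from that list; the pinned versions are the ones that were tested):

$ pip install numpy matplotlib scikit-learn scipy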

Demo and Results

The following diagrams and reports show the model's performance on the test split of the dataset, which contains 10 classes (the digits 0-9).
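
The figures themselves are not reproduced here, but a per-class report like the ones shown can be regenerated from the trained network's test predictions. Below is a hedged sketch using scikit-learn (already in the environment list); y_test and y_pred are assumed names standing in for the real MNIST test labels and the PSO-trained model's predictions, with random placeholders only so the snippet runs on its own:

# y_test / y_pred are placeholders for the real test labels and the
# PSO-trained network's predictions; random values keep the snippet runnable.
import numpy as np
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
y_test = rng.integers(0, 10, size=1000)            # placeholder true digit labels
y_pred = np.where(rng.random(1000) < 0.9,          # placeholder predictions with
                  y_test,                          # ~90% agreement, for demo only
                  rng.integers(0, 10, size=1000))

print(classification_report(y_test, y_pred, digits=3))  # per-class precision/recall/F1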
