Neural Network

Working Demo: Neural Network

A simple neural network written in Next.js / React

How To Use

  • Right-click a Green node to set its input value (0.0 – 1.0)
  • Right-click the Blue node to set the desired output value (0.0 – 1.0)
  • Click the Train button to train the network on those values
  • Click the Test button to run the input values through the network; the resulting output value is displayed in the Blue output node

What It Does

This application simulates how a network of nodes (simulated neurons) with adjustable weights and biases can "learn" to produce a particular output value for a given set of input values.

Each node belongs to a layer and is connected to every node in the next layer. There are three kinds of layers: an input layer, one or more "hidden" layers, and an output layer. Each node has a "bias", and each of its forward connections carries a "weight" representing the strength of that connection.

The network itself is a feed-forward neural network: the values from one layer of nodes are fed to the next layer, undergoing a transformation at each step, until the output layer is reached (forward propagation).

var node4_raw = (node1_value * node1_weight) + (node2_value * node2_weight) + node4_bias;
...
node4_value = sigmoid(node4_raw);

...
// Activation Function
const sigmoid = (x) => {
  return 1 / (1 + Math.exp(-x));
};

fig1. How the values are propagated forward
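The per-node snippet above generalizes naturally to a whole layer. A minimal sketch, with illustrative names that are not the demo's actual code:

```javascript
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// weights[j][i] is the weight of the edge from input node i to output node j;
// biases[j] is the bias of output node j.
const forwardLayer = (inputs, weights, biases) =>
  weights.map((row, j) =>
    sigmoid(row.reduce((sum, w, i) => sum + w * inputs[i], biases[j]))
  );

// Example: two input values feeding two hidden nodes.
const hidden = forwardLayer([1.0, 0.0], [[0.5, -0.5], [0.3, 0.8]], [0.1, -0.2]);
```

Chaining calls to forwardLayer, one per layer, propagates the input values all the way to the output node.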

The output value is then compared with the desired output. An error figure is calculated, and through some fancy calculus (i.e. the chain rule and the derivative of the sigmoid), the edge weights and biases are adjusted from back to front (backpropagation).
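For a single output node that adjustment can be sketched concretely. Assuming a squared-error loss and the sigmoid activation above (variable names are illustrative, not the app's actual code):

```javascript
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// The sigmoid's derivative, expressed via its own output s = sigmoid(x).
const sigmoidDeriv = (s) => s * (1 - s);

// One gradient-descent step on the weights and bias of a single output node.
function updateOutputNode(inputs, weights, bias, target, lr) {
  const raw = weights.reduce((sum, w, i) => sum + w * inputs[i], bias);
  const out = sigmoid(raw);

  // Chain rule: dE/dw_i = -(target - out) * out * (1 - out) * input_i,
  // so we nudge each weight in the opposite (descending) direction.
  const delta = (target - out) * sigmoidDeriv(out);
  const newWeights = weights.map((w, i) => w + lr * delta * inputs[i]);
  const newBias = bias + lr * delta;
  return { newWeights, newBias };
}
```

The same delta, multiplied back through each edge weight, gives the error signal for the hidden layer, which is why the pass runs back to front.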

With enough passes and training intervals, the network's weights and biases are adjusted until it produces a close approximation of the desired output values for the given input values.

Use Case

The backbone of modern AI (including LLMs) is built on a variation of this premise. In this specific case there are 2 inputs and 1 output, which is enough to approximate simple logic gates: we can train the network to behave as an AND, OR, or XOR gate simply by setting the input values and training on the desired outputs, without writing any gate-specific code. The power of a neural network is that it can act as a generalized function approximator simply by adjusting its weights and biases.
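The XOR case can be sketched end to end in a few lines. Everything here (layer sizes, learning rate, initial weights) is an illustrative assumption, not the demo's actual implementation:

```javascript
// Train a 2-4-1 feed-forward network to behave as an XOR gate.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Fixed, asymmetric initial weights so the run is reproducible.
let w1 = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.2], [0.7, -0.3]]; // input -> hidden
let b1 = [0.1, -0.2, 0.3, -0.1];
let w2 = [0.4, -0.7, 0.6, 0.2];                               // hidden -> output
let b2 = 0.05;
const lr = 0.5;

// The XOR truth table as training data.
const data = [
  { input: [0, 0], target: 0 },
  { input: [0, 1], target: 1 },
  { input: [1, 0], target: 1 },
  { input: [1, 1], target: 0 },
];

// Forward propagation through both layers.
function predict(input) {
  const hidden = w1.map((row, j) =>
    sigmoid(row[0] * input[0] + row[1] * input[1] + b1[j]));
  const out = sigmoid(hidden.reduce((sum, h, j) => sum + h * w2[j], b2));
  return { hidden, out };
}

// One backpropagation step for a single training example.
function trainStep({ input, target }) {
  const { hidden, out } = predict(input);
  const deltaOut = (target - out) * out * (1 - out);
  hidden.forEach((h, j) => {
    const deltaHidden = deltaOut * w2[j] * h * (1 - h); // uses the old w2[j]
    w2[j] += lr * deltaOut * h;
    w1[j][0] += lr * deltaHidden * input[0];
    w1[j][1] += lr * deltaHidden * input[1];
    b1[j] += lr * deltaHidden;
  });
  b2 += lr * deltaOut;
}

// Mean squared error over the whole truth table.
const mse = () =>
  data.reduce((s, d) => s + (d.target - predict(d.input).out) ** 2, 0) / data.length;

const errBefore = mse();
for (let epoch = 0; epoch < 20000; epoch++) data.forEach(trainStep);
const errAfter = mse();
```

After training, predict([0, 1]).out should land near 1 and predict([1, 1]).out near 0; retraining the same code on a different truth table approximates AND or OR with no gate-specific logic.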

Reference:

https://en.wikipedia.org/wiki/Neural_network_(machine_learning)