NEURAL NETWORK

See:
Courses about neural networks
Courses about connectionism
demo1_network.func
network_audio.func
Network command


GENERAL
ADAPTIVE NETWORK
DYNAMIC NETWORK
KOHONEN NETWORK
MULTILAYER PERCEPTRON
TRAINING
EXAMPLES

GENERAL

Definition
fac
matrix
Transfer function
generation
generate mass
Validate

Definition of an object type network

network(id)T
         Builds an empty network id of type T.
T is:
       fac: multilayer perceptron (default).
       texture: Kohonen network.
       near: completely connected network.
       interaction: adaptive network.

type network(id)
         Returns the type of network id as a string.
type network(id)=t
       Changes this type.
After changing the type it is necessary to:
         generate network(id)
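For example, a minimal sketch assuming the syntax above (the network number and the quoting of the type name are illustrative assumptions):
network(1)fac; /* build an empty multilayer perceptron */
$type network(1);NL; /* prints the current type */
type network(1)="texture"; /* change it into a Kohonen network (quoting assumed) */
generate network(1); /* regenerate after changing the type */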

validate network

validate network(id) Returns (cpt,err,stat) with:
       cpt: number of passes made.
       err: maximum error committed.
       stat=1: when the network is adapted.
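For example, a minimal sketch (assuming network 1 has already been built and trained as in the examples below):
$"cpt err stat = ",validate network(1);NL;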

TRAINING

General
Supervised training
Kohonen competitive training
Adaptive training
Learning constants
Use

General

Given a “representative set of patterns”, adjust the synaptic weights on a training set according to a learning rule.
Once the weights have been modified by training, evaluate them by scoring success rates on a test set different from the training set.
Rosenblatt (1958) provided a learning scheme with the property that if the patterns of the training set (i.e., a set of feature vectors, each one classified with a 0 or 1) can be separated by some choice of weights and threshold, then the scheme will eventually yield a satisfactory setting of the weights.

Supervised learning

Multilayer perceptron
       A supervised learning (on a network of type fac, i.e. a multilayer perceptron) is defined by pairs (Mi,Li):
         Mi = input motifs.
       Li = output laws.
         The adaptation of the network in nb passes is done by: validate(nb)network(id).
This type of multilayer feedforward network is trained by the error backpropagation algorithm.
Definition of learning pairs
motif(0)network(id)=M
         Creates an input motif.
law(0)network(id)=L
         Creates an output law.
All inputs must have the same dimension.
All outputs must have the same dimension.
There must be as many outputs as inputs.
Defining a training
validate(nb) network(1);
         Adjusts the weights of the network (by the error backpropagation algorithm) in nb passes and returns the number of passes actually made (when the network has adapted); if this number equals nb, learning must be continued.
validate(nb) coe(c1,c2) network(1)
         Varies the coefficient eta between c1 (at pass 1) and c2 (at pass nb); defaults are 1.0 and 0.01
(adjustment of the learning constant).
validate(nb) error(eps) network(1);
         Sets the acceptable error; default eps=0.1.
interaction validate(nb,1) network(1);
         Allows adaptation with parallel processing.
Use
S=validate motif(M) network(id);
         Returns the output S of network id corresponding to input M.
Example
net(n)
{
ini ini network;no edit;
if(n==NIL)n=1;
/* Building the network */
/* -------------------- */
network(1)fac;
         fac(0)network(1)=1,2,3; /* Input layer */
         fac(0)network(1)=4,5; /* Hidden layer */
         fac(0)network(1)=6,7,8; /* Output layer */
/* Definition of the weights */
/* ------------------------- */
mass(1,4)network(1)=.5;
mass(1,5)network(1)=.3;
mass(2,4)network(1)=.3;
mass(2,5)network(1)=.2;
mass(3,4)network(1)=.1;
mass(3,5)network(1)=.1;
mass(4,6)network(1)=.1;
mass(4,7)network(1)=.3;
mass(4,8)network(1)=.5;
mass(5,6)network(1)=.2;
mass(5,7)network(1)=.4;
mass(5,8)network(1)=.6;
/* Training pairs */
/* -------------- */
motif(0)network(1)=1,2,3;law(0)network(1)=.1,.3,.7;
motif(0)network(1)=3,2,1;law(0)network(1)=.2,.3,.4;
/* Training */
/* -------- */
$"NB = ",validate(n)network(1)error(.01);NL;
/* Using the network (verification) */
/* -------------------------------- */
$validate motif(1,2,3)network(1);NL;
$validate motif(3,2,1)network(1);NL;
}
net(n); adjusts the network in n passes.
For example net(300); produces:
NB = 246.000000
0.102514 0.293342 0.690866
0.199652 0.308365 0.409970
The network is adjusted in 246 passes.
For an example showing the resolution of XOR, see the network demo.

Kohonen competitive training

Competitive learning for a Kohonen network of type texture.

Definition of a network of type texture

         network(id)texture;
         fac(0)network(id)=[1,np];
         fac(0)network(id)=[np+1,np+n*n];
Defines a network with 2 layers:
         Input layer with np neurons.
         Output layer with n*n neurons arranged in a square grid of side n (constituting the Kohonen map).

Defining the properties

As above, to create motifs:
motif(0)network(1)=m_1,m_2,...,m_np;
Change the weights, the transfer functions, etc.

Training

Defining the input motifs
motif(0) network(1)=list of values;
         Adds an input.
Note:
         The motifs must have the same dimension as the input layer.

Course of learning

validate(n)network(1);
         Launches a competitive learning (in n passes) on network 1 for the motifs that have been entered.
validate(n)network(1)coe(c1,c2)
         To vary the learning constant.
validate(n)network(1)error(eps)
         To specify the tolerance.
validate(n)network(1)debug
         Debugging mode.
interaction validate(nb,1) network(1);
         Allows adaptation with parallel processing.
Neurons of the Kohonen map (output layer) specialize in recognizing patterns presented at the input. The generalization property is reflected by the fact that the network is able to recognize examples it has not learned.

Use

validate motif(m)network(1);
         Returns the Kohonen map, i.e. the list of activations of the neurons in the output layer.
validate motif(m)network(1)neuronne;
         Returns the number of the winning neuron and its activation.
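For example, a minimal sketch assuming the syntax above (2 inputs, a 7*7 map, and arbitrary motif values chosen for illustration):
network(1)texture;
fac(0)network(1)=[1,2]; /* Input layer: 2 neurons */
fac(0)network(1)=[3,51]; /* Output layer: 7*7=49 neurons (Kohonen map) */
motif(0)network(1)=.1,.9;
motif(0)network(1)=.8,.2;
motif(0)network(1)=.5,.5;
validate(100)network(1)coe(1,.01)error(.01); /* competitive training in 100 passes */
$validate motif(.7,.3)network(1)neuronne;NL; /* winning neuron for a non-learned motif */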

Adaptive training

The coherent flows algorithm, which assumes that the flow of actions (outputs) is coherent with the flow of perceptions (inputs), is a way to train a neural network of type near with data for the inputs only (the outputs being computed automatically).

Definition of a network of type near

network(id)near;
The set of input-output pairs {Mi,Li}, 0<=i<n, defines a dual flow: the inputs {Mi} and the outputs {Li}. Only the first is given (e.g. from a capture) and serves as a working memory for the coherent flows process, which automatically generates the second (e.g. stimuli sent to the muscular system of an artificial actor).

Setting Properties

To create an input layer of ne neurons:
       fac(0) network(id)=[1,ne];
To create a hidden layer of nc neurons:
       fac(0) network(id)=[ne+1,ne+nc];
To create an output layer of ns neurons:
       fac(0) network(id)=[ne+nc+1,ne+nc+ns];
To generate input patterns:
       motif(0)network(id)=m;
All motifs must have the same dimension.
To generate the properties (neurons and matrix) of the network:
       generate network(id);
Note that all neurons are connected and that the whole matrix is modified during learning (except the diagonal, which remains zero). The facets do not describe the connections (as they do in networks of type fac).

Use

Prior learning
validate(nb,cpt)error(err)coe(c1,c2)network(num);
         nb=number of cycles, cpt for parallel processing.
         err=maximum error.
         c1,c2=bounds of the learning constant.
         num=number of the network.
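For example (values chosen arbitrarily for illustration, assuming a network of type near numbered 1 has been created as in the Use section below):
validate(1000,1)error(.01)coe(1,.01)network(1);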

Dynamic training


validate(nb,cpt)motif(m)error(err)coe(c1,c2)network(num)roll;
Motif m is added at the end of the circular list of motifs, and the network is trained over nb passes.

Use:


1) Creation of a network containing n motifs (of dimension ne); the outputs are of dimension ns.
network(num)near;
fac(0)network(num)=[1,ne]; /* Input layer */
fac(0)network(num)=[ne+1,ne+ns]; /* Output layer */
for(i=1,n)motif(0)network(num)=[1,ne];
generate network(num);

Dynamic training:
w=validate(nb)motif(m)error(err)coe(c1,c2)network(num)roll;
Motif m is stacked in the circular list of motifs, and the network is trained over nb passes. w is the output corresponding to input m. The outputs are computed so as to minimize the difference between the changes in the inputs and the changes in the outputs (the method known as coherent flows).
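The following is a minimal sketch assuming the syntax above (the network number, the dimensions ne=2 and ns=2, and the motif values are arbitrary illustrative choices):
/* Building the network */
network(1)near;
fac(0)network(1)=[1,2]; /* Input layer (ne=2) */
fac(0)network(1)=[3,4]; /* Output layer (ns=2) */
motif(0)network(1)=0,1;
motif(0)network(1)=1,0;
motif(0)network(1)=.5,.5;
generate network(1);
/* Dynamic training: a new motif is stacked and the adapted output is returned */
w=validate(50)motif(.2,.8)error(.01)coe(1,.01)network(1)roll;
$w;NL;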

Learning constants


The learning constants can be initialized by:
meta coe network(num)=0,max_c1,c1,c1, 0,max_c2,c2,c2 with:
       max_c1: maximum of c1.
       c1: current value of c1.
       max_c2: maximum of c2.
       c2: current value of c2.
All fields are required.
These parameters can be displayed with the command: displ network(id).
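For example (the numeric values below are arbitrary, chosen only to illustrate the field layout given above):
meta coe network(1)=0,2,1,1, 0,.1,.01,.01;
displ network(1);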

Use

Once learning is complete:
S = validate motif(M) network(id) returns the output S corresponding to input M.

EXAMPLES

Perceptron

The demo1_network.func function gives an example of programming a layered network that learns to recognize alphabets by supervised learning. The menu provides access to:

DEM

A network with two inputs, modifiable by clicking (the inputs being the x, y coordinates of the designated point). The matrix of synaptic weights can be changed randomly by clicking on NOISE, and the number of hidden layers can be selected on the hidden scale.
Outputs are connected to the joints of a skeleton that comes to life when the inputs are changed.
The network pre-wired by the function NOISE learns nothing.

RES

Provides access to the RES submenu; choose one of the boxes:
LIR
Click on an alphabet file name *.alp.
BOO
Choose the number of hidden layers cach, the sizes nx and ny of the characters (of boolean type), and their number nb.
FLO
Choose the number of hidden layers cach, the sizes nx and ny of the characters (of float type), and their number nb.
Click on a pattern (red); it is presented at the input of the network, which produces an output different from the input (because the matrix of synaptic weights was initialized randomly). To train the network, click on:
APP
Learning: the error curve appears at the bottom right of the screen, and the learning constants appear at the bottom (coe1 and coe2 vary automatically). Normally the output (green) converges to the selected input. If there is no convergence, the constants can be modified with the mouse; the matrix can also be reset by clicking on the NOISE scale, or the number of hidden layers changed (on the hidden scale).
ALEA
Random change: the input patterns are changed randomly; the network recognizes them anyway.

VOL

Gives an example of an intelligent volume able to adapt to a changing environment. Clicking on the AUT box generates the command:
network axis rota vol(Num_VOL)=Num_RES_VOL;
expressing that the network Num_RES_VOL is attached to the volume Num_VOL.
r1=rand2f(-1,1),rand2f(-1,1),rand2f(-1,1);r2=rand2f(-1,1),rand2f(-1,1),rand2f(-1,1);
EXEC=compile message("traj(100)axis(0,0,1, r1, r2)vol(Num_Vol_VOL)period(-1)");

These commands endow the axes of the volume with random trajectories. The role of the adaptive network is then to try to stabilize the volume.

perceptron.func gives a more general example.

Competitive network

The kohonen.func function gives an example of programming a Kohonen network detecting patterns in a set of points.
kohonen_2.js launches kohonen(2): 2D set.
kohonen_3.js launches kohonen(3): 3D set.
Note: kohonen(n > 3) launches the function for recognition of an audio signal sampled on 2^n values.
The input layer is composed of n neurons (the coordinates of a point in the set).
The output layer is a grid of 7x7 neurons.

Initialisation

Initializes a space of patterns arranged randomly.

Action

Generates another random space of patterns, each of which gives a winning output neuron characterized by its color:
We note that points of the same color are grouped in the same region, indicating that the network has classified them.