A perceptron takes several binary inputs, x1, x2, ..., and produces a single binary output:
weights: real numbers expressing the importance of the respective inputs to the output.
The neuron's output, 0 or 1, is determined by whether the weighted sum is less than or greater than some threshold value. Just like the weights, the threshold is a real number which is a parameter of the neuron.
algebraic form:
output = 0 if Σ_j w_j x_j ≤ threshold, and 1 if Σ_j w_j x_j > threshold
rewritten (moving the threshold to the other side as a bias, b = −threshold):
output = 0 if w·x + b ≤ 0, and 1 if w·x + b > 0
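The decision rule can be sketched in a few lines of Python (a minimal sketch; the function name and argument order are my own, not from the notes):

```python
# Minimal perceptron: fires (outputs 1) only when the weighted
# sum of the inputs plus the bias is positive.
def perceptron(x, w, b):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```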
a NAND gate
example (with weights −2, −2 and bias 3; "positive"/"negative" is the sign of w·x + b):
0,0 --> positive (output 1)
0,1 --> positive (output 1)
1,0 --> positive (output 1)
1,1 --> negative (output 0)
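The truth table above can be checked directly (a sketch; weights −2, −2 and bias 3 are the NAND values used in Nielsen's book):

```python
def perceptron(x, w, b):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# NAND: output is 1 for every input pair except (1, 1).
for x1 in (0, 1):
    for x2 in (0, 1):
        print((x1, x2), "->", perceptron((x1, x2), (-2, -2), 3))
```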
we can use perceptrons to compute simple logical functions.
In fact, we can use networks of perceptrons to compute any logical function at all.
HOW: small changes in any weight (or bias) cause only a small corresponding change in the output.
Changing the weights and biases over and over to produce better and better output: the network would be learning.
sigmoid function:
The output of a sigmoid neuron with inputs x1, x2, ..., weights w1, w2, ..., and bias b is:
z = w·x + b, output = σ(z) = 1 / (1 + e^(−z))
when z is large and positive, the output ≈ 1
when z is large and negative, the output ≈ 0
In fact, the exact form of σ isn't so important - what really matters is the shape of the function when plotted.
This shape is a smoothed-out version of a step function; away from z = 0, the step function is well approximated by σ.
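A sketch of σ and its step-like behaviour for large |z| (nothing here beyond the Python standard library):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Smooth near z = 0, but step-like far from it:
print(sigmoid(0))    # 0.5, the midpoint of the smooth transition
print(sigmoid(10))   # close to the step function's 1
print(sigmoid(-10))  # close to the step function's 0
```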
input layer, output layer, hidden layer (means nothing more than "not an input or an output")
example:
feedforward neural networks:
There are no loops in the network - information is always fed forward, never fed back.
recurrent neural networks:
The idea in these models is to have neurons which fire for some limited duration of time, before becoming quiescent. That firing can stimulate other neurons, which may fire a little while later, also for a limited duration. That causes still more neurons to fire, and so over time we get a cascade of neurons firing. Loops don't cause problems in such a model, since a neuron's output only affects its input at some later time, not instantaneously.
Two sub-problems:
(1) break an image containing many digits into a sequence of separate images, each containing a single digit.
for example, break the image into six separate images,
(2) classify each individual digit.
recognize that the digit is a 5.
A three-layer neural network:
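A forward pass through such a network can be sketched as follows (the 3-4-1 layer sizes and the fixed weights are an arbitrary illustration, not from the notes):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(a, weights, biases):
    # One sigmoid neuron per (weight row, bias) pair.
    return [sigmoid(sum(w * x for w, x in zip(ws, a)) + b)
            for ws, b in zip(weights, biases)]

def feedforward(x, params):
    # params: one (weights, biases) pair per layer after the input layer.
    a = x
    for weights, biases in params:
        a = layer(a, weights, biases)
    return a

# Toy 3-4-1 network with fixed weights, purely for illustration.
hidden = ([[0.1, -0.2, 0.3]] * 4, [0.0] * 4)
output = ([[0.5, 0.5, 0.5, 0.5]], [-1.0])
print(feedforward([1.0, 0.0, 1.0], [hidden, output]))
```

In a real network the weights and biases would be learned rather than fixed, but the forward computation is exactly this layer-by-layer application of σ(w·x + b).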
Neural Networks and Deep Learning (1.1)
Original article: http://www.cnblogs.com/zhoulixue/p/6489724.html