Python Learning (10): Logistic Regression

1. Introduction
This example builds a logistic regression model: the network's weights and bias are trained until they reach a good state, and the trained model is then used for prediction. The parameters are updated with the gradient descent (GD) algorithm, the loss function is the cross entropy, and training runs for 10,000 iterations.
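Concretely, writing $p_1$ for the predicted probability that the target is 1, the loss and update rule used by the code in section 2 are as follows (the learning rate 0.1 and the L2 weight 0.01 are taken directly from that listing):

\[ p_1 = \sigma(x \cdot w + b) = \frac{1}{1 + e^{-(x \cdot w + b)}} \]
\[ \mathrm{xent} = -y \log p_1 - (1 - y)\log(1 - p_1) \]
\[ \mathrm{cost} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{xent}_i + 0.01\,\lVert w \rVert^2 \]
\[ w \leftarrow w - 0.1\,\nabla_w\,\mathrm{cost}, \qquad b \leftarrow b - 0.1\,\nabla_b\,\mathrm{cost} \]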
2. Python code
#!/usr/bin/python
import numpy
import theano
import theano.tensor as T

rng = numpy.random

N = 400        # number of samples
feats = 784    # number of input features

# D[0]: N x feats input matrix drawn from a standard normal distribution
# D[1]: N binary targets, each 0 or 1
D = (rng.randn(N, feats), rng.randint(size=N, low=0, high=2))
training_steps = 10000

# declare symbolic variables
x = T.matrix('x')
y = T.vector('y')
w = theano.shared(rng.randn(feats), name='w')  # w is shared across all inputs
b = theano.shared(0., name='b')                # b is shared too

print('Initial model:')
print(w.get_value())
print(b.get_value())

# construct symbolic Theano expressions
p_1 = 1 / (1 + T.exp(-T.dot(x, w) - b))            # sigmoid: probability of target being 1
prediction = p_1 > 0.5                             # threshold at 0.5
xent = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)  # cross entropy
cost = xent.mean() + 0.01 * (w ** 2).sum()         # mean loss + L2 regularization on w
gw, gb = T.grad(cost, [w, b])                      # gradients of the cost w.r.t. w and b

# compile
train = theano.function(inputs=[x, y],
                        outputs=[prediction, xent],
                        updates=((w, w - 0.1 * gw), (b, b - 0.1 * gb)))
predict = theano.function(inputs=[x], outputs=prediction)

# train
for i in range(training_steps):
    pred, err = train(D[0], D[1])

print('Final model:')
print(w.get_value())
print(b.get_value())
print('target values for D:')
print(D[1])
print('prediction on D:')
print(predict(D[0]))

print('newly generated data for test:')
test_input = rng.randn(30, feats)
print('result:')
print(predict(test_input))
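As a quick sanity check, the rule compiled into predict can be reproduced in plain NumPy. The sketch below is ours, not part of the original post: numpy_predict is a hypothetical helper, and w_val/b_val are assumed to hold the trained values returned by w.get_value() and b.get_value().

import numpy

def numpy_predict(X, w_val, b_val):
    # same sigmoid as the Theano graph: p_1 = 1 / (1 + exp(-X.w - b))
    p_1 = 1.0 / (1.0 + numpy.exp(-X.dot(w_val) - b_val))
    return p_1 > 0.5  # threshold at 0.5, matching `prediction = p_1 > 0.5`

# after training, this should agree with predict(D[0]):
# numpy_predict(D[0], w.get_value(), b.get_value())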
As shown in the listing above, we first import the required libraries; Theano is a library for scientific computing. We then randomly generate a 400×784 input matrix and a random output vector of length 400 whose entries are binary. Because the targets are binary, this is a logistic regression problem.
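A minimal check of the synthetic data (run right after D is created in the listing; the expected shapes follow from N=400 and feats=784):

print(D[0].shape)  # (400, 784): 400 samples, 784 features each
print(D[1].shape)  # (400,): one binary target per sample
print(D[1][:10])   # first ten targets, each 0 or 1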
Source: http://my.oschina.net/zzw922cn/blog/515693