
2015-3-31


Long time no blog.

I worked on an Interspeech 2015 submission, but it did not work out: the classification accuracy was not as good as expected. I will replace the lower BLSTM layer with a CNN and run another test, roughly as sketched below.
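A minimal sketch of what that change might look like, written in PyTorch purely for illustration; all layer names and sizes here are placeholder assumptions, not the actual model:

```python
import torch
import torch.nn as nn

class CNNBLSTMClassifier(nn.Module):
    """Illustrative stack: a 1-D CNN front end in place of the lower BLSTM,
    followed by an upper BLSTM layer and a per-utterance classifier.
    All sizes are placeholder assumptions."""

    def __init__(self, n_features=40, n_classes=10,
                 conv_channels=64, lstm_hidden=128):
        super().__init__()
        # CNN front end over the time axis (input: batch, features, time)
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),  # downsample time by 2
        )
        # Upper BLSTM over the CNN feature sequence
        self.blstm = nn.LSTM(conv_channels, lstm_hidden,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * lstm_hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, n_features)
        h = self.conv(x.transpose(1, 2))      # (batch, channels, time/2)
        h, _ = self.blstm(h.transpose(1, 2))  # (batch, time/2, 2*hidden)
        return self.out(h.mean(dim=1))        # average over time, then classify
```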

I read the DRAW paper yesterday and the RAM (Recurrent Attention Model) paper today. I do not yet have any real insight into these works; I still need to pick up a lot of background knowledge.

RAM frames attention as a special case of a POMDP and leans toward reinforcement learning, while DRAW is more about variational inference in neural networks. But both learn some form of attention mechanism.
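To make the contrast concrete for myself, here is a toy sketch (my own illustration, not code from either paper) of the two flavours: a hard glimpse location sampled and trained with REINFORCE, versus a generic differentiable soft read trainable by backpropagation. Note DRAW's actual read is a Gaussian filterbank, not the softmax read shown here.

```python
import torch
import torch.distributions as D

torch.manual_seed(0)
seq = torch.randn(20, 8)     # a toy "memory": 20 items, 8-dim each
query = torch.randn(8)

# --- Hard attention (RAM flavour): sample a location, train with REINFORCE ---
loc_logits = torch.zeros(20, requires_grad=True)   # policy over locations
dist = D.Categorical(logits=loc_logits)
idx = dist.sample()                                # non-differentiable choice
reward = (seq[idx] @ query).detach()               # stand-in task reward
policy_loss = -dist.log_prob(idx) * reward         # REINFORCE estimator
policy_loss.backward()                             # gradient w.r.t. loc_logits

# --- Soft attention (differentiable; DRAW-style only in spirit) ---
scores = seq @ query                               # similarity scores
weights = torch.softmax(scores, dim=0)             # differentiable weights
read = weights @ seq                               # weighted read vector
```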

All these models are important. Memory Networks need supervision labeling which sentence is useful; perhaps there is a reinforcement learning way to remove that supervision.

Memory, attention, and reinforcement learning all come together here.

Fighting.

2015-3-31

Original post: http://www.cnblogs.com/peng-ge/p/4382434.html
