
2015-2-19 log


I checked the combine2vec code and found a bug: in negative-sampling mode, I did not update the input word vector when the sample was positive. I fixed the bug, but the two objective functions still did not converge together.
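For my own reference, a minimal sketch (plain numpy, not the actual combine2vec/word2vec C code; all names here are illustrative) of a skip-gram negative-sampling update after the fix: the positive sample's gradient is accumulated into the input word vector together with the negative samples' gradients.

import numpy as np

def sgns_update(w_in, w_out, center, context, neg_samples, lr=0.025):
    # w_in, w_out: (vocab, dim) input and output embedding matrices.
    # The positive pair gets label 1, negative samples get label 0.
    grad_in = np.zeros_like(w_in[center])
    for target, label in [(context, 1.0)] + [(n, 0.0) for n in neg_samples]:
        score = 1.0 / (1.0 + np.exp(-np.dot(w_in[center], w_out[target])))
        g = lr * (label - score)
        grad_in += g * w_out[target]        # input-vector gradient, including the positive pair
        w_out[target] += g * w_in[center]   # update the output (context) vector
    w_in[center] += grad_in                 # this update was missing for the positive sample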

I adapted the word2vec training strategy in the sentence branch. There may be bugs, because too many biword count values are zero. I need to check the co-occurrence file, the bigram code, and the sentence-branch combine2vec code.
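A small helper I may use for the co-occurrence check. The file format is an assumption (one "word1 word2 count" triple per line) and the file name is hypothetical; adjust if the real file differs.

def count_zero_bigrams(path="cooccurrence.txt"):   # hypothetical file name
    total = zero = 0
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue                # skip malformed lines
            total += 1
            if float(parts[2]) == 0.0:
                zero += 1
    print("zero biword counts: %d / %d" % (zero, total))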

I collected gradient statistics yesterday. Interesting phenomena:

1. If I exchange "word" and "last_word" in the skip-gram model in the word2vec code, the training loss becomes much smaller and training is almost twice as fast. This is weird. (See the sketch after this list.)

2. The gradient variation trend differs between the above two approaches.

3. Polysemous words may not converge well, so their gradients are large. In practice, rarely occurring words also produce large gradients.
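To make point 1 concrete, here is a toy Python sketch (not the word2vec C code) of the two pairings. If I recall the word2vec.c skip-gram loop correctly, it uses "last_word" as the input vector and "word" as the prediction target; the swapped variant reverses the roles.

def skipgram_pairs(sentence, window, swapped=False):
    # Generate (input, output) training pairs for one sentence.
    pairs = []
    for pos, word in enumerate(sentence):
        lo, hi = max(0, pos - window), min(len(sentence), pos + window + 1)
        for c in range(lo, hi):
            if c == pos:
                continue
            last_word = sentence[c]
            # default: predict "word" from "last_word"; swapped: the other way round
            pairs.append((word, last_word) if swapped else (last_word, word))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"], window=1))
print(skipgram_pairs(["the", "cat", "sat"], window=1, swapped=True))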

Read the Sequence to Sequence Learning paper.

Configured PyCharm.

Checked the GroundHog code. It contains encoder-decoder machine translation code, but it is more than 3000 lines, so I will implement the sequence-to-sequence learning code first.
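Before touching GroundHog, a toy forward-pass sketch of the encoder-decoder idea, just to pin down the data flow. Everything here is illustrative (toy sizes, a plain tanh RNN instead of an LSTM, greedy decoding, no training) and is not taken from GroundHog.

import numpy as np

rng = np.random.RandomState(0)
V, H = 10, 8                        # toy vocabulary and hidden sizes

E = rng.randn(V, H) * 0.1           # token embeddings (shared here for brevity)
W_enc = rng.randn(H, H) * 0.1       # encoder recurrence
W_dec = rng.randn(H, H) * 0.1       # decoder recurrence
W_out = rng.randn(V, H) * 0.1       # decoder output projection

def encode(src):
    h = np.zeros(H)
    for tok in src:                             # read the source left to right
        h = np.tanh(E[tok] + W_enc.dot(h))
    return h                                    # fixed-size summary of the source

def decode(h, max_len=5, bos=0):
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(E[tok] + W_dec.dot(h))
        tok = int(np.argmax(W_out.dot(h)))      # greedy decoding, just for the sketch
        out.append(tok)
    return out

print(decode(encode([1, 2, 3])))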

 

Today I still wasted a lot of time.

5:00-6:00 Browsed websites.

7:30-8:30 Went out and ate a little pizza. Honestly, I did not need the supper.

8:30-10:00 Wasted a lot of time wandering. Checked the GroundHog code and configured PyCharm, but never faced the real problem. Just implement the sequence-to-sequence learning first.


Original post: http://www.cnblogs.com/peng-ge/p/4282675.html
