Theories of Deep Learning
Posted: 2018-09-16 19:13:11
https://stats385.github.io/readings
Lecture 1 – Deep Learning Challenge. Is There Theory?
Readings
Deep Deep Trouble
Why 2016 is The Global Tipping Point...
Are AI and ML Killing Analyticals...
The Dark Secret at The Heart of AI
AI Robots Learning Racism...
FaceApp Forced to Pull ‘Racist’ Filters...
Losing a Whole Generation of Young Men to Video Games
Lecture 2 – Overview of Deep Learning From a Practical Point of View
Readings
Emergence of simple-cell receptive field properties by learning a sparse code for natural images
ImageNet Classification with Deep Convolutional Neural Networks (Alexnet)
Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)
Going Deeper with Convolutions (GoogLeNet)
Deep Residual Learning for Image Recognition (ResNet)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Visualizing and Understanding Convolutional Networks
Blogs
An Intuitive Guide to Deep Network Architectures
Neural Network Architectures
Videos
Deep Visualization Toolbox
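As a concrete reference point for the architecture papers in this lecture, here is a minimal sketch of a residual block with batch normalization, in the spirit of the ResNet and BatchNorm readings above. PyTorch is assumed; the channel count and kernel sizes are illustrative choices, not the papers' exact configurations.

```python
# Minimal residual block sketch (PyTorch assumed; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions, each followed by batch normalization,
        # as in the basic block of "Deep Residual Learning for Image Recognition".
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # The skip connection: the block learns a residual F(x) and outputs F(x) + x.
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)

# Usage: a batch of 8 feature maps, 64 channels, spatial size 32x32.
x = torch.randn(8, 64, 32, 32)
print(ResidualBlock(64)(x).shape)  # torch.Size([8, 64, 32, 32])
```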
Lecture 3
Readings
A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
Energy Propagation in Deep Convolutional Neural Networks
Discrete Deep Feature Extraction: A Theory and New Architectures
Topology Reduction in Deep Convolutional Feature Extraction Networks
Lecture 4
Readings
A Probabilistic Framework for Deep Learning
Semi-Supervised Learning with the Deep Rendering Mixture Model
A Probabilistic Theory of Deep Learning
Lecture 5
Readings
Why and When Can Deep-but Not Shallow-networks Avoid the Curse of Dimensionality: A Review
Learning Functions: When is Deep Better Than Shallow
Lecture 6
Readings
Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach
Convolutional Kernel Networks
Kernel Descriptors for Visual Recognition
End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
Learning with Kernels
Kernel Based Methods for Hypothesis Testing
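For readers new to kernel methods, here is a minimal NumPy sketch of kernel ridge regression with an RBF kernel, the basic construction the kernel readings above build on. The data, bandwidth, and regularization strength are illustrative assumptions.

```python
# Kernel ridge regression with an RBF kernel (toy 1-D example).
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))                # training inputs
y = np.sin(X).ravel() + 0.1 * rng.normal(size=50)   # noisy targets

lam = 1e-2                                          # ridge regularization
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

X_test = np.linspace(-3, 3, 5)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha              # kernel expansion at test points
print(y_pred)
```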
Lecture 7
Readings
Geometry of Neural Network Loss Surfaces via Random Matrix Theory
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
Nonlinear random matrix theory for deep learning
Lecture 8
Readings
Deep Learning without Poor Local Minima
Topology and Geometry of Half-Rectified Network Optimization
Convexified Convolutional Neural Networks
Implicit Regularization in Matrix Factorization
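The implicit-regularization reading above has a simple empirical face: gradient descent on an unconstrained factorization U V^T, fitted from small initialization to a subset of entries of a low-rank matrix, tends toward a low-rank solution even though U and V are full-dimensional. A toy NumPy experiment in that spirit follows; the sizes, rank, observation fraction, and step size are illustrative assumptions, not the paper's setup.

```python
# Toy demonstration of implicit low-rank bias in matrix factorization.
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # ground-truth rank-2 matrix
mask = rng.random((n, n)) < 0.5                         # observe about half the entries

U = 0.01 * rng.normal(size=(n, n))  # full-rank parameterization, small initialization
V = 0.01 * rng.normal(size=(n, n))

lr = 0.02
for _ in range(20000):
    R = mask * (U @ V.T - M)        # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)

s = np.linalg.svd(U @ V.T, compute_uv=False)
print(np.round(s[:5], 2))  # spectrum typically dominated by ~2 singular values
```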
Lecture 9
Readings
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
Perception as an inference problem
A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information
Lecture 10
Readings
Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding
Convolutional Neural Networks Analyzed via Convolutional Sparse Coding
Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning
Convolutional Dictionary Learning via Local Processing
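The pursuit algorithms analyzed in these sparse-coding papers are variants of iterative soft thresholding. Here is a minimal NumPy sketch of ISTA for min_x 0.5*||D x - y||^2 + lam*||x||_1; for illustration D is a plain random dictionary, whereas in the convolutional papers it is a convolutional (possibly multi-layer) dictionary. The problem sizes and lam are illustrative assumptions.

```python
# ISTA: iterative soft-thresholding for L1-regularized least squares.
import numpy as np

def ista(D, y, lam, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(30, 100))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]     # 3-sparse ground truth
y = D @ x_true

x_hat = ista(D, y, lam=0.05)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])  # typically recovers the support {5, 40, 77}
```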
To be discussed and extra
Emergence of simple-cell receptive field properties by learning a sparse code for natural images, by Olshausen and Field
Auto-Encoding Variational Bayes, by Kingma and Welling
Generative Adversarial Networks, by Goodfellow et al.
Understanding Deep Learning Requires Rethinking Generalization, by Zhang et al.
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?, by Giryes et al.
Robust Large Margin Deep Neural Networks, by Sokolić et al.
Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems, by Giryes et al.
Understanding Trainable Sparse Coding via Matrix Factorization, by Moreau and Bruna
Why are Deep Nets Reversible: A Simple Theory, With Implications for Training, by Arora et al.
Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network, by Malgouyres and Landsberg
Optimal Approximation with Sparse Deep Neural Networks, by Bölcskei et al.
Convolutional Rectifier Networks as Generalized Tensor Decompositions, by Cohen and Shashua
Emergence of Invariance and Disentanglement in Deep Representations, by Achille and Soatto
Deep Learning and the Information Bottleneck Principle, by Tishby and Zaslavsky
Original: https://www.cnblogs.com/WCFGROUP/p/9656890.html