
2016-07

2016-06

  • Towards an integration of deep learning and neuroscience [arXiv]
  • On Multiplicative Integration with Recurrent Neural Networks [arXiv]
  • Wide & Deep Learning for Recommender Systems [arXiv]
  • Online and Offline Handwritten Chinese Character Recognition [arXiv]
  • Tutorial on Variational Autoencoders [arXiv]
  • Concrete Problems in AI Safety [arXiv]
  • Deep Reinforcement Learning Discovers Internal Models [arXiv]
  • SQuAD: 100,000+ Questions for Machine Comprehension of Text [arXiv]
  • Conditional Image Generation with PixelCNN Decoders [arXiv]
  • Model-Free Episodic Control [arXiv]
  • Progressive Neural Networks [arXiv]
  • Improved Techniques for Training GANs [arXiv]
  • Memory-Efficient Backpropagation Through Time [arXiv]
  • InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [arXiv]
  • Zero-Resource Translation with Multi-Lingual Neural Machine Translation [arXiv]
  • Key-Value Memory Networks for Directly Reading Documents [arXiv]
  • Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation [arXiv]
  • Learning to learn by gradient descent by gradient descent [arXiv]
  • Learning Language Games through Interaction [arXiv]
  • Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations [arXiv]
  • Smart Reply: Automated Response Suggestion for Email [arXiv]
  • Virtual Adversarial Training for Semi-Supervised Text Classification [arXiv]
  • Deep Reinforcement Learning for Dialogue Generation [arXiv]
  • Very Deep Convolutional Networks for Natural Language Processing [arXiv]
  • Neural Net Models for Open-Domain Discourse Coherence [arXiv]
  • Neural Architectures for Fine-grained Entity Type Classification [arXiv]
  • Gated-Attention Readers for Text Comprehension [arXiv]
  • End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning [arXiv]
  • Iterative Alternating Neural Attention for Machine Reading [arXiv]
  • Memory-enhanced Decoder for Neural Machine Translation [arXiv]
  • Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation [arXiv]
  • Conversational Contextual Cues: The Case of Personalization and History for Response Ranking [arXiv]
  • Adversarially Learned Inference [arXiv]
  • Neural Network Translation Models for Grammatical Error Correction [arXiv]

2016-05

  • Hierarchical Memory Networks [arXiv]
  • Deep API Learning [arXiv]
  • Wide Residual Networks [arXiv]
  • TensorFlow: A system for large-scale machine learning [arXiv]
  • Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention [arXiv]
  • Aspect Level Sentiment Classification with Deep Memory Network [arXiv]
  • FractalNet: Ultra-Deep Neural Networks without Residuals [arXiv]
  • Learning End-to-End Goal-Oriented Dialog [arXiv]
  • One-shot Learning with Memory-Augmented Neural Networks [arXiv]
  • Deep Learning without Poor Local Minima [arXiv]
  • AVEC 2016 - Depression, Mood, and Emotion Recognition Workshop and Challenge [arXiv]
  • Data Programming: Creating Large Training Sets, Quickly [arXiv]
  • Deeply-Fused Nets [arXiv]
  • Deep Portfolio Theory [arXiv]
  • Unsupervised Learning for Physical Interaction through Video Prediction [arXiv]
  • Movie Description [arXiv]

2016-04

2016-03

  • A Fast Unified Model for Parsing and Sentence Understanding [arXiv]
  • Latent Predictor Networks for Code Generation [arXiv]
  • Attend, Infer, Repeat: Fast Scene Understanding with Generative Models [arXiv]
  • Recurrent Batch Normalization [arXiv]
  • Neural Language Correction with Character-Based Attention [arXiv]
  • Incorporating Copying Mechanism in Sequence-to-Sequence Learning [arXiv]
  • How NOT To Evaluate Your Dialogue System [arXiv]
  • Adaptive Computation Time for Recurrent Neural Networks [arXiv]
  • A guide to convolution arithmetic for deep learning [arXiv]
  • Colorful Image Colorization [arXiv]
  • Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles [arXiv]
  • Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus [arXiv]
  • A Persona-Based Neural Conversation Model [arXiv]
  • A Character-level Decoder without Explicit Segmentation for Neural Machine Translation [arXiv]
  • Multi-Task Cross-Lingual Sequence Tagging from Scratch [arXiv]
  • Neural Variational Inference for Text Processing [arXiv]
  • Recurrent Dropout without Memory Loss [arXiv]
  • One-Shot Generalization in Deep Generative Models [arXiv]
  • Recursive Recurrent Nets with Attention Modeling for OCR in the Wild [arXiv]
  • A New Method to Visualize Deep Neural Networks [arXiv]
  • Neural Architectures for Named Entity Recognition [arXiv]
  • End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF [arXiv]
  • Character-based Neural Machine Translation [arXiv]
  • Learning Word Segmentation Representations to Improve Named Entity Recognition for Chinese Social Media [arXiv]

2016-02

2016-01

2015-12

NLP

Vision

2015-11

NLP

Programs

  • Neural Random-Access Machines [arXiv]
  • Neural Programmer: Inducing Latent Programs with Gradient Descent [arXiv]
  • Neural Programmer-Interpreters [arXiv]
  • Learning Simple Algorithms from Examples [arXiv]
  • Neural GPUs Learn Algorithms [arXiv]
  • On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models [arXiv]

Vision

  • ReSeg: A Recurrent Neural Network for Object Segmentation [arXiv]
  • Deconstructing the Ladder Network Architecture [arXiv]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [arXiv]

General

  • Towards Principled Unsupervised Learning [arXiv]
  • Dynamic Capacity Networks [arXiv]
  • Generating Sentences from a Continuous Space [arXiv]
  • Net2Net: Accelerating Learning via Knowledge Transfer [arXiv]
  • A Roadmap towards Machine Intelligence [arXiv]
  • Session-based Recommendations with Recurrent Neural Networks [arXiv]
  • Regularizing RNNs by Stabilizing Activations [arXiv]

2015-10

2015-09

2015-08

2015-07

2015-06

2015-05

2015-04

  • Correlational Neural Networks [arXiv]

2015-03

2015-02

2015-01

2014-12

2014-11

2014-10

2014-09

2014-08

  • Convolutional Neural Networks for Sentence Classification [arXiv]

2014-07

2014-06

2014-05

2014-04

  • A Convolutional Neural Network for Modelling Sentences [arXiv]

2014-03

2014-02

2014-01

2013

  • Visualizing and Understanding Convolutional Networks [arXiv]
  • DeViSE: A Deep Visual-Semantic Embedding Model [pub]
  • Maxout Networks [arXiv]
  • Exploiting Similarities among Languages for Machine Translation [arXiv]
  • Efficient Estimation of Word Representations in Vector Space [arXiv]

2011

  • Natural Language Processing (almost) from Scratch [arXiv]