Synopsis
Find me on GitHub/Twitter/Kaggle @SamDeepLearning. Find me on LinkedIn @SamPutnam. This podcast is supported by Enterprise Deep Learning | Cambridge/Boston | New York City | Hanover, NH | http://www.EnterpriseDeepLearning.com. Contact: Sam@EDeepLearning.com, 802-299-1240, P.O. Box 863, Hanover, NH, USA, 03755. We move deep learning to production. I teach the worldwide Deploying Deep Learning Masterclass at http://www.DeepLearningConf.com in NYC regularly and am a deep learning consultant serving Boston and New York City. If you like Talking Machines, harvardnlp, CS231n, the Media Lab, CSAIL, karpathy.github.io, FAIR, Google Brain, Deeplearning4j, TensorFlow, Amazon Web Services, or Google Cloud Platform, you will like this podcast. Try it. Tweet at @SamDeepLearning with questions and corrections!
Episodes
-
Art Generation - Facebook AI Research, Google DeepDream, and Ruder's Style Transfer for Video - Deep Learning: Zero to One
18/04/2017 Duration: 07min
Justin Johnson, now at Facebook, wrote the original Torch implementation of the Gatys 2015 paper, which combines the content of one image with the style of another using convolutional neural networks. Manuel Ruder's newer 2016 paper transfers the style of one image to a whole video sequence, using a computer vision technique called optical flow to generate consistent, stable stylized video. I used Ruder's implementation to generate a stylized butterfly video, posted at https://medium.com/@SamPutnam/deep-learning-zero-to-one-art-generation-b532dd0aa390
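For listeners who want a feel for the math, here is a minimal Python (PyTorch) sketch of the two loss terms behind this episode: the Gatys content/style losses built from Gram matrices, plus the flow-based temporal consistency term Ruder adds for video. This is an illustrative sketch, not Johnson's or Ruder's actual code; the layer name "conv4", the weights, and the warped_prev/mask inputs are assumptions.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats):
    # feats: (batch, channels, height, width) activations from a conv layer.
    b, c, h, w = feats.shape
    flat = feats.view(b, c, h * w)
    # Correlations between feature maps capture "style", normalized by size.
    return flat @ flat.transpose(1, 2) / (c * h * w)

def gatys_loss(gen, content, style, content_weight=1.0, style_weight=1e3):
    # gen/content/style: dicts mapping layer names to activations (assumed).
    # Content loss matches raw activations at one layer ("conv4" is assumed).
    content_loss = F.mse_loss(gen["conv4"], content["conv4"])
    # Style loss matches Gram matrices across the style layers.
    style_loss = sum(F.mse_loss(gram_matrix(gen[l]), gram_matrix(style[l]))
                     for l in style)
    return content_weight * content_loss + style_weight * style_loss

def temporal_loss(curr, warped_prev, mask):
    # Ruder's video idea: penalize the current stylized frame for deviating
    # from the previous stylized frame warped forward by optical flow,
    # with a mask that zeroes out occluded pixels.
    return F.mse_loss(curr * mask, warped_prev * mask)
```

Optimizing the Gatys loss alone stylizes each frame independently and flickers; adding the temporal term is what makes the video stable.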
-
Music Generation - Google Magenta Best Demo NIPS 2016 LSTM RNN - Deep Learning: Zero to One
13/03/2017 Duration: 01h09min
I talk through generating ten melodies, two of which I play at the conclusion, using a model trained on thousands of MIDI examples contained in a .mag Magenta bundle file. I used the Biaxial RNN (https://github.com/hexahedria/biaxial-rnn-music-composition) by a student named Daniel Johnson and the Basic RNN (https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#basic) by Google's Magenta group within TensorFlow. I learned that priming a melody with a single note can set the key of each generated melody, and that a single Anaconda 'source activate' line replaces virtualenv and installs all of the necessary dependencies, making the environment easily reproducible. Two or three more details are posted at: https://medium.com/@SamPutnam/deep-learning-zero-to-one-music-generation-46c9a7d82c02
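As a rough illustration of how a single primer note can anchor the key, here is a toy Python sketch of sampling from a trained melody RNN. The step_fn argument is a hypothetical stand-in for one step of a trained model; this is not Magenta's actual API (the episode itself uses Magenta's melody_rnn_generate tool, whose README documents a --primer_melody flag).

```python
import numpy as np

def sample_melody(step_fn, primer_note=60, length=64, vocab=128,
                  temperature=1.0, seed=0):
    # step_fn(note, state) -> (logits, new_state): one step of a trained
    # melody RNN (hypothetical stand-in for the real model).
    rng = np.random.default_rng(seed)
    state = None
    note = primer_note  # e.g. MIDI 60 (middle C) anchors the key
    melody = [note]
    for _ in range(length - 1):
        logits, state = step_fn(note, state)
        # Temperature-scaled softmax over the next-note distribution.
        probs = np.exp((logits - logits.max()) / temperature)
        probs /= probs.sum()
        note = int(rng.choice(vocab, p=probs))
        melody.append(note)
    return melody
```

Because every later note is sampled conditioned on what came before, the primer's pitch propagates through the whole melody, which is the behavior described above.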
-
Image Generation - Google DeepMind paper with TensorFlow - Deep Learning: Zero to One
04/03/2017 Duration: 05min
I talk through generating an image of IRS tax return characters using a model trained on the MNIST handwritten-digit dataset. The authors trained for 70 hours on 32 GPUs; I used unconditional image generation to create an image in 6 hours on my MacBook Pro CPU. I used the TensorFlow implementation of Conditional Image Generation with PixelCNN Decoders (https://arxiv.org/abs/1606.05328) by a student named Anant Gupta and learned that reasonable-looking digits can be generated with significantly fewer training steps, as soon as the training loss approaches the value reached by the DeepMind authors. Each step is detailed at https://medium.com/@SamPutnam/this-is-the-1st-deep-learning-zero-to-one-newsletter-this-one-is-called-image-generation-935bcaf0f37c
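To make the generation process concrete, here is a minimal Python sketch of how an unconditional PixelCNN samples an image pixel by pixel, each pixel conditioned on the pixels above and to its left. The model function is a hypothetical stand-in for a trained PixelCNN that returns per-pixel logits; this is not Gupta's actual code.

```python
import numpy as np

def sample_image(model, height=28, width=28, levels=256, seed=0):
    # model(img) -> logits of shape (height, width, levels): a trained
    # PixelCNN (hypothetical stand-in). Its masked convolutions ensure
    # pixel (i, j) depends only on pixels above and to its left.
    rng = np.random.default_rng(seed)
    img = np.zeros((height, width), dtype=np.int64)
    for i in range(height):
        for j in range(width):
            logits = model(img)[i, j]
            # Softmax over the 256 intensity levels, shifted for stability.
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            img[i, j] = rng.choice(levels, p=probs)  # sample pixel (i, j)
    return img
```

The raster-scan loop requires one full forward pass per pixel, which is why autoregressive sampling from PixelCNN-style models is slow compared to a single feed-forward pass.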