In this video, we work through a full machine learning project: we train a large PyTorch model on cloud GPUs, deploy it as a serverless endpoint using Docker, and finally build a Flask application to interact with it.
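
For reference, a RunPod serverless handler generally follows the pattern sketched below. This is only a minimal sketch: the model file name, input field, and preprocessing are placeholder assumptions, not necessarily what is used in the video.

import runpod
import torch

# Load the trained model once when the worker starts ("model.pt" is a placeholder path)
model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

def handler(job):
    # RunPod delivers the request payload under job["input"]
    values = job["input"]["values"]  # hypothetical input field
    tensor = torch.tensor(values, dtype=torch.float32)
    with torch.no_grad():
        prediction = model(tensor).argmax(dim=-1).tolist()
    return {"prediction": prediction}

# Register the handler with the RunPod serverless runtime
runpod.serverless.start({"handler": handler})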
RunPod: https://rebrand.ly/NeuralNineRunpod
◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾◾
📚 Programming Books & Merch 📚
🐍 The Python Bible Book: https://www.neuralnine.com/books/
💻 The Algorithm Bible Book: https://www.neuralnine.com/books/
👕 Programming Merch: https://www.neuralnine.com/shop
💼 Services 💼
💻 Freelancing & Tutoring: https://www.neuralnine.com/services
🌐 Social Media & Contact 🌐
📱 Website: https://www.neuralnine.com/
📷 Instagram: https://www.instagram.com/neuralnine
🐦 Twitter: https://twitter.com/neuralnine
🤵 LinkedIn: https://www.linkedin.com/company/neuralnine/
📁 GitHub: https://github.com/NeuralNine
🎙 Discord: https://discord.gg/JU4xr8U3dm
Timestamps:
(0:00) Intro
(0:31) Overview
(4:01) CNN Training Logic
(25:38) Training Model in Cloud
(35:51) Serverless Handler
(51:36) Dockerization
(54:32) Serverless Deployment
(55:44) Flask Application
(1:10:10) Testing Application
(1:11:35) Outro