Welcome to the Practitioner Bundle of Deep Learning for Computer Vision with Python! This volume is meant to be the next logical step in your deep learning for computer vision education after completing the Starter Bundle.
At this point, you should have a strong understanding of the fundamentals of parameterized learning, neural networks, and Convolutional Neural Networks (CNNs). You should also feel relatively comfortable using the Keras library and the Python programming language to train your own custom deep learning networks.
The purpose of the Practitioner Bundle is to build on your knowledge gained from the Starter Bundle and introduce more advanced algorithms, concepts, and tricks of the trade—these techniques will be covered in three distinct parts of the book.
The first part will focus on methods that are used to boost your classification accuracy in one way or another. One way to increase your classification accuracy is to apply transfer learning methods such as fine-tuning or treating your network as a feature extractor.
We’ll also explore ensemble methods (i.e., training multiple networks and combining the results) and see how they can give you a nice classification boost with little extra effort. Regularization methods such as data augmentation are used to generate additional training data; in nearly all situations, data augmentation improves your model’s ability to generalize. More advanced optimization algorithms, such as Adam and RMSprop, can also help you obtain lower loss on some datasets. After reviewing these techniques, we’ll look at the optimal pathway for applying them, ensuring you obtain the maximum benefit with the least amount of effort.
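To make the ensemble idea concrete, here is a minimal NumPy sketch of the simplest combining strategy: averaging the softmax probabilities produced by several independently trained networks. The arrays below are hypothetical predictions used purely for illustration, not outputs of any model from the book:

```python
import numpy as np

def average_ensemble(prob_sets):
    """Average per-model class probabilities into a single prediction.

    prob_sets: list of (num_samples, num_classes) arrays, one per model.
    Returns the averaged probabilities and the final class labels.
    """
    avg = np.mean(np.stack(prob_sets, axis=0), axis=0)
    return avg, np.argmax(avg, axis=1)

# Hypothetical softmax outputs from three separately trained networks,
# for two samples over three classes.
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
m2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
m3 = np.array([[0.5, 0.3, 0.2], [0.3, 0.4, 0.3]])

avg, labels = average_ensemble([m1, m2, m3])
print(labels)  # [0 1] -- class with highest averaged probability per sample
```

Averaging tends to cancel out the uncorrelated mistakes of the individual networks, which is where the "free" accuracy boost comes from.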
We then move on to the second part of the Practitioner Bundle, which focuses on larger datasets and more exotic network architectures. Thus far we have only worked with datasets that fit into the main memory of our system, but what do we do when our dataset is too large to fit into RAM? We’ll address this question in Chapter 9 when we work with the HDF5 file format.
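The core trick HDF5 gives us is the ability to treat an on-disk array almost like an in-memory NumPy array. The sketch below uses the h5py library with hypothetical file and dataset names; it writes a feature matrix to disk in batches (as you would when processing a dataset too large for RAM), then reads back only a small slice:

```python
import h5py
import numpy as np

# Hypothetical data standing in for features extracted from a large dataset.
features = np.random.rand(1000, 512).astype("float32")

# Write the matrix to an HDF5 file in batches rather than all at once.
with h5py.File("features.hdf5", "w") as db:
    dset = db.create_dataset("features", shape=(1000, 512), dtype="float32")
    for i in range(0, 1000, 250):
        dset[i:i + 250] = features[i:i + 250]

# Later, read back an arbitrary slice -- only these rows are loaded from
# disk, so the full array never needs to fit in RAM.
with h5py.File("features.hdf5", "r") as db:
    batch = db["features"][100:132]

print(batch.shape)  # (32, 512)
```

The same pattern (create a dataset of known shape, fill it batch-by-batch, slice it lazily at training time) is what makes working with datasets larger than main memory practical.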
Given that we’ll be working with larger datasets, we’ll also be able to discuss more advanced network architectures, including AlexNet, GoogLeNet, ResNet, and deeper variants of VGGNet. These architectures will be applied to more challenging datasets and competitions, including the Kaggle Dogs vs. Cats recognition challenge as well as the cs231n Tiny ImageNet challenge, the exact same task Stanford CNN students compete in. As we’ll find out, we’ll be able to obtain a top-25 position on the Kaggle Dogs vs. Cats leaderboard and top the cs231n challenge for our method type.
The final part of this book covers applications of deep learning for computer vision beyond image classification, including basic object detection, deep dreaming and neural style transfer, Generative Adversarial Networks (GANs), and image super resolution. Again, the techniques covered in this volume are meant to be far more advanced than those in the Starter Bundle; this is where you’ll start to separate yourself from a deep learning novice and transform into a true deep learning practitioner. To start your transformation into a deep learning expert, just flip the page.