Deep Learning Architectures for Defense and Security
Oct 05, 2021 (3 days), 08:30 AM EDT - 04:30 PM EDT
- $2,090.00 excl.
This 3-day course provides a broad introduction to classical neural networks (NN) and their evolution into current deep learning (DL) technology. It introduces the well-known deep learning architectures and their applications in defense and security for object detection, identification, verification, action recognition, scene understanding and biometrics using single-modality or multimodality sensor information. The course describes the history of neural networks and their progress toward current deep learning technology. It covers several DL architectures, such as classical multi-layer feed-forward neural networks, convolutional neural networks (CNN), generative adversarial networks (GAN), restricted Boltzmann machines (RBM), auto-encoders and recurrent neural networks such as long short-term memory (LSTM).
Use of deep learning architectures for feature extraction and classification will be described and demonstrated. Examples of popular CNN-based architectures such as AlexNet, VGGNet, GoogLeNet (inception modules), ResNet, DeepFace, Highway Networks and FractalNet, and their applications to defense and security, will be discussed. Advanced architectures such as Siamese deep networks, coupled neural networks, conditional generative adversarial networks, and fusion of multiple CNNs, along with their applications to object verification and classification, will also be covered. The course is for scientists, engineers, technicians, or managers who wish to learn more about deep learning architectures and their applications in defense and security.
What You Will Learn:
- Fundamental concepts of neural networks and deep learning.
- Differences between classical neural networks and current deep learning architectures.
- Stochastic gradient descent algorithm to train deep learning networks
- The popular CNN-based architectures (i.e., LeNet, AlexNet, VGGNet, GoogLeNet, ResNet).
- Relative merits of various deep learning architectures, MLP, CNN, GAN, RBM and LSTM.
- Auto-encoders for feature extraction. Generative adversarial networks for object synthesis.
- Deep learning frameworks for object and pedestrian detection and for face, iris and fingerprint identification.
- Siamese and coupled deep learning architectures for cross-modal object verification & identification.
- Deep learning architectures for multi-view face identification and multimodal biometrics applications.
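To make the objectives above concrete, here is a minimal, illustrative sketch (not course material) of the stochastic gradient descent algorithm mentioned above, training a single sigmoid neuron on a toy OR-gate dataset. The dataset, learning rate and step count are hypothetical choices made for this example only.

```python
import math
import random

def sigmoid(z):
    """Sigmoid activation: squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: (input pair, binary target) for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights
b = 0.0                                            # bias
lr = 0.5                                           # learning rate (illustrative)

for step in range(10000):
    x, t = random.choice(data)  # "stochastic": one random sample per update
    y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    # Gradient of the cross-entropy loss through the sigmoid is simply (y - t).
    delta = y - t
    w[0] -= lr * delta * x[0]
    w[1] -= lr * delta * x[1]
    b    -= lr * delta

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(predictions)
```

In a deep network the same update rule is applied to every layer's weights, with the per-layer gradients supplied by backpropagation.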
- History of Neural Networks. Origin of artificial neural networks (ANN) and their relationship with artificial intelligence and expert systems. Artificial neuron models vs. biological neurons. Characteristics of receptive fields of neurons in the visual cortex. Binary and continuous perceptrons.
- Multi-layer Perceptrons. Concept of layering, basics of gradient descent and backpropagation learning algorithm for network training.
- Activation functions. Non-linearity in neural networks: hard-limiting, sigmoid, tanh and ReLU functions.
- Overfitting and Generalization. The concepts of overfitting and generalization in deep learning, sparsity-based regularization, L1 sparsity, L2 sparsity and group sparsity, and the concept of dropout as regularization.
- Auto-encoders. Denoising autoencoders, hetero-associative auto-encoders, sparse autoencoders, convolutional autoencoders, learning manifold and dimensionality reduction.
- Restricted Boltzmann Machines. Idea behind the classical RBM and deep belief nets.
- Convolutional Neural Network architectures. Concept of convolutional neural network architectures and the functions of their layers. Use of different kernel sizes, average pooling, max pooling, and overlapping vs. non-overlapping strides.
- Modern Convolutional Neural Network architectures. LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, DeepFace, Highway Networks and FractalNet.
- Generative Adversarial Networks (GAN). Concept of GAN and conditional GAN for cross-modality synthesis, image restoration and distortion removal.
- Coupled & Siamese Deep Neural Networks. Cross-modal face and object classification, image search and retrieval, Cross-modal deep hashing, Siamese networks for distance metric learning.
- Multisensor Fusion architectures. Deep fusion architectures; deep learning architectures for multimodal and multi-view data.
- Applications of Deep Neural Networks. CNN-based object recognition and detection, deep automatic target recognition, deep biometrics (face, iris, fingerprint, voice), cross-spectral classification, scene-to-text generation, sketch-to-photo synthesis, and object and pedestrian detection from surveillance cameras or moving platforms.
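The convolution and pooling operations listed in the outline above can be sketched in a few lines. The following is a hedged, self-contained illustration (not course material): a valid 2-D convolution with stride 1 followed by 2x2 max pooling with a non-overlapping stride of 2. The input image and kernel values are arbitrary toy numbers.

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation (what DL frameworks call 'convolution'),
    stride 1: slide the kernel over the image and sum element-wise products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2, stride=2):
    """Max pooling; stride == size gives non-overlapping windows."""
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, stride)]
            for i in range(0, len(fmap) - size + 1, stride)]

# Toy 5x5 single-channel "image" and a toy 2x2 kernel.
image = [[1, 0, 2, 1, 0],
         [0, 1, 3, 0, 1],
         [2, 1, 0, 2, 2],
         [1, 0, 1, 3, 0],
         [0, 2, 1, 0, 1]]
kernel = [[1, 0],
          [0, -1]]

fmap = conv2d(image, kernel)  # 4x4 feature map
pooled = max_pool(fmap)       # 2x2 after non-overlapping 2x2 max pooling
print(pooled)
```

A real CNN layer stacks many such kernels (learned, not hand-set), adds a bias and a non-linearity such as ReLU, and repeats the convolve-pool pattern to build progressively more abstract features.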
Dr. Nasser M. Nasrabadi is a professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. He was a senior research scientist (ST) at the U.S. Army Research Laboratory (ARL). He is actively engaged in research in deep learning, image processing, automatic target recognition and hyperspectral imaging for defense and security. He has published over 300 papers in journals and conference proceedings. He has been an associate editor for the IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology and IEEE Transactions on Neural Networks. He is a Fellow of IEEE and SPIE.
REGISTRATION: No obligation or payment is required to register for an actively scheduled course. We understand that you may need approvals, but please register as early as possible, or contact us so we know of your interest in this course offering.
SCHEDULING: If this course is not on the current schedule of open enrollment courses and you are interested in attending this or another course as an open enrollment, please contact us at (410)956-8805 or email@example.com. Please indicate the course name, the number of students who wish to participate, and a preferred time frame. ATI typically schedules open enrollment courses with a 3-5 month lead time. To express your interest in an open enrollment course not on our current schedule, please email us at firstname.lastname@example.org.