Deep Learning Architectures for Defense and Security


Broaden Your Knowledge & Increase Productivity

Training Rocket Scientists Since 1984

(410) 956-8805
(888) 501-2100


3-Day Course

$1990 per person


This 3-day course provides a broad introduction to classical neural networks (NN) and their evolution into current deep learning (DL) technology. It introduces the well-known deep learning architectures and their applications in defense and security for object detection, identification, verification, action recognition, scene understanding, and biometrics using single-modality or multimodality sensor information. The course traces the history of neural networks and their progress to current deep learning technology, and covers several DL architectures such as the classical multi-layer feed-forward neural networks, convolutional neural networks (CNN), generative adversarial networks (GAN), restricted Boltzmann machines (RBM), auto-encoders, and recurrent neural networks such as long short-term memory (LSTM) networks.

The use of deep learning architectures for feature extraction and classification will be described and demonstrated. Examples of popular CNN-based architectures such as AlexNet, VGGNet, GoogLeNet (inception modules), ResNet, DeepFace, Highway Networks, and FractalNet, and their applications to defense and security, will be discussed. Advanced architectures such as Siamese deep networks, coupled neural networks, conditional generative adversarial networks, and fusion of multiple CNNs, together with their applications to object verification and classification, will also be covered. The course is for scientists, engineers, technicians, or managers who wish to learn more about deep learning architectures and their applications in defense and security.

  • Fundamental concepts of neural networks and deep learning.
  • Differences between neural network and current deep learning architectures.
  • Stochastic gradient descent algorithm to train deep learning networks.
  • The popular CNN-based architectures (i.e., LeNet, AlexNet, VGGNet, GoogLeNet, ResNet).
  • Relative merits of various deep learning architectures: MLP, CNN, GAN, RBM, and LSTM.
  • Auto-encoders for feature extraction. Generative adversarial networks for object synthesis.
  • Deep learning framework for object, pedestrian detection, face, iris, fingerprint identification.
  • Siamese and coupled deep learning architectures for cross-modal object verification & identification.
  • Deep learning architectures for multi-view face identification and multimodal biometrics applications.
  1. History of Neural Networks. Origin of artificial neural networks (ANN) and their relationship to artificial intelligence and expert systems. Artificial neuron models vs. biological neurons. Characteristics of receptive fields of neurons in the visual cortex. Binary and continuous perceptrons.

  2. Multi-layer Perceptrons. Concept of layering, basics of gradient descent, and the backpropagation learning algorithm for network training.
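As a rough illustration of the gradient descent and backpropagation ideas in this module (not course material, and assuming only numpy), the sketch below trains a tiny one-hidden-layer perceptron by computing layer-by-layer gradients with the chain rule; the data, layer sizes, and learning rate are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x1 + x2 (illustrative only).
X = rng.normal(size=(64, 2))
y = X.sum(axis=1, keepdims=True)

# One hidden layer with tanh activation, linear output.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

losses = []
for _ in range(200):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    losses.append(mse(pred, y))
    # Backward pass: chain rule, layer by layer.
    d_pred = 2.0 * (pred - y) / len(X)
    dW2 = h.T @ d_pred;  db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_h;     db1 = d_h.sum(axis=0)
    # Gradient descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0], losses[-1])  # training loss should drop
```

Full-batch gradient descent is used here for brevity; the stochastic variant covered in the course samples a mini-batch per update instead of the whole dataset.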

  3. Activation Functions. Nonlinear functions in neural networks: hard-limiting, sigmoid, tanh, and ReLU.
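For reference, the four activation functions named in this module can be sketched in a few lines of numpy (an illustrative sketch, not course material):

```python
import numpy as np

def hard_limit(x):
    """Binary step: 1 if x >= 0, else 0."""
    return (x >= 0).astype(float)

def sigmoid(x):
    """Squashes inputs to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Squashes inputs to (-1, 1)."""
    return np.tanh(x)

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))  # [0. 0. 2.]
```

ReLU's flat negative region and identity positive region make its gradient cheap and non-vanishing for positive inputs, which is one reason it dominates modern deep networks over sigmoid and tanh.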

  4. Overfitting and Generalization. The concepts of overfitting and generalization in deep learning, sparsity-based regularization, L1 sparsity, L2 sparsity and group sparsity, and the concept of dropout as regularization.
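Dropout as regularization, mentioned above, can be sketched in a few lines (an illustrative numpy sketch, not course material; the "inverted" rescaling convention is one common choice):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(a, p, training=True, rng=rng):
    """Inverted dropout: zero each unit with probability p and rescale
    the survivors by 1/(1-p), so the expected activation is unchanged
    and no rescaling is needed at test time."""
    if not training or p == 0.0:
        return a
    mask = (rng.random(a.shape) >= p) / (1.0 - p)
    return a * mask

a = np.ones((4, 5))
print(dropout(a, p=0.5))  # roughly half the entries zeroed, survivors scaled to 2.0
```

At inference time (`training=False`) the activations pass through untouched, which is why the rescaling is folded into training.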

  5. Auto-encoders. Denoising autoencoders, hetero-associative auto-encoders, sparse autoencoders, convolutional autoencoders, learning manifold and dimensionality reduction.

  6. Restricted Boltzmann Machines. Idea behind the classical RBM and deep belief nets.

  7. Convolutional Neural Network architectures. Concept of convolutional neural network architectures and the functions of their layers. Use of different kernel sizes, average pooling, max pooling, and the concept of overlapping and non-overlapping strides.
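The convolution and pooling operations this module covers can be illustrated with a naive numpy sketch (illustrative only, not course material; `conv2d` and `max_pool` are hypothetical helper names, and `conv2d` computes cross-correlation, as most DL frameworks do):

```python
import numpy as np

def conv2d(img, kernel, stride=1):
    """Valid cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(img, size=2, stride=2):
    """Max pooling; non-overlapping windows when stride == size."""
    oh = (img.shape[0] - size) // stride + 1
    ow = (img.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = img[i*stride:i*stride+size, j*stride:j*stride+size].max()
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # horizontal-difference kernel
print(conv2d(img, edge).shape)   # (4, 3)
print(max_pool(img))             # [[ 5.  7.] [13. 15.]]
```

Setting `stride < size` in `max_pool` gives the overlapping pooling variant; average pooling simply replaces `.max()` with `.mean()`.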

  8. Modern Convolutional Neural Network architectures. LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, DeepFace, Highway Networks, and FractalNet.

  9. Generative Adversarial Networks (GAN). Concept of GAN and conditional GAN for cross-modality synthesis, image restoration and distortion removal.

  10. Coupled & Siamese Deep Neural Networks. Cross-modal face and object classification, image search and retrieval, cross-modal deep hashing, Siamese networks for distance metric learning.
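The distance metric learning idea behind Siamese networks can be sketched as follows (an illustrative numpy sketch, not course material; the toy linear `embed` stands in for a shared-weight network tower, and the loss shown is one common contrastive formulation):

```python
import numpy as np

def embed(x, W):
    """Toy linear 'tower'; both branches of a Siamese net share W."""
    return x @ W

def contrastive_loss(z1, z2, same, margin=1.0):
    """Contrastive loss: pull matched pairs together, push
    mismatched pairs apart until they clear the margin."""
    d = np.linalg.norm(z1 - z2)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

W = np.eye(3)
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
print(contrastive_loss(embed(a, W), embed(a, W), same=True))   # 0.0 (identical pair)
print(contrastive_loss(embed(a, W), embed(b, W), same=False))  # 0.0 (d ~ 1.41 > margin)
```

Because both branches share `W`, gradients from matched and mismatched pairs shape a single embedding space, which is what makes the learned distance usable for verification and retrieval.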

  11. Multisensory Fusion architectures. Deep fusion architectures; deep learning architectures for multimodal and multi-view data.

  12. Applications of Deep Neural Networks. CNN-based object recognition and detection, deep automatic target recognition, deep biometrics (face, iris, fingerprint, voice), cross-spectral classification, scene-to-text generation, sketch-to-photo synthesis, and object and pedestrian detection from surveillance cameras or moving platforms.

If this course is not on the current schedule of open-enrollment courses and you are interested in attending this or another course as an open enrollment, please contact us at (410) 956-8805. Please indicate the course name, the number of students who wish to participate, and a preferred time frame. ATI typically schedules open-enrollment courses with a 3-5 month lead time. For on-site pricing, you can use the request an on-site quote form, call us at (410) 956-8805, or email us.


Dr. Nasser M. Nasrabadi is a professor in the Lane Department of Computer Science and Electrical Engineering at West Virginia University. He was a senior research scientist (ST) at the US Army Research Laboratory (ARL). He is actively engaged in research in deep learning, image processing, automatic target recognition, and hyperspectral imaging for defense and security. He has published over 300 papers in journals and conference proceedings. He has served as an associate editor for the IEEE Transactions on Image Processing, the IEEE Transactions on Circuits and Systems for Video Technology, and the IEEE Transactions on Neural Networks. He is a Fellow of IEEE and SPIE.

Contact this instructor (please mention course name in the subject line)