The Deep Learning components provide common deep learning layers typically found in the hidden layers of deep neural networks.
Many of the Deep Learning components have the same hyperparameters. Below are the definitions for common hyperparameters found in multiple components:
- Activation function: specifies the type of activation function to use for the layer. See tf.keras.activations for definitions of specific activation functions.
- Batch normalization: specifies if batch normalization should be used. During training, the distribution of each layer's inputs can change as updates are made to the values in previous layers. This phenomenon, known as internal covariate shift, can slow training and makes it harder to train models with saturating nonlinearities. Batch normalization addresses this by normalizing layer inputs over each mini-batch, potentially allowing for higher learning rates while also acting as a form of regularization.
- Dropout: specifies if dropout should be used. This can help regularize deep neural networks and avoid overfitting by randomly dropping out nodes during training.
- Include top: Specifies whether to include the fully-connected layer at the top of the network.
- Pooling: Optional pooling mode for feature extraction when Include top is false. Can be set to:
- None: the output of the model will be the 4D tensor output of the last convolutional block.
- avg: global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.
- max: global max pooling will be applied.
- Trainable: Specifies whether the model should update its weights during training. For classification it's common to set this to false, while for object detection it's common to set it to true.
- Weights: Specifies whether ImageNet weights should be used or no weights. When no weights are used, the weights must be provided in the component's code using PerceptiLabs' Code Editor.
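The common hyperparameters above map directly onto standard tf.keras layers. Below is a minimal sketch showing an activation function, batch normalization, and dropout in a small model (the input and layer sizes are arbitrary placeholders):

```python
import tensorflow as tf

# Minimal sketch of the common hyperparameters as tf.keras layers.
# The input and layer sizes here are arbitrary placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),  # Activation function
    tf.keras.layers.BatchNormalization(),           # Batch normalization
    tf.keras.layers.Dropout(0.5),                   # Dropout: drop 50% of nodes during training
    tf.keras.layers.Dense(10, activation="softmax"),
])
```

Note that Dropout and BatchNormalization only take effect during training; at inference time dropout is disabled and batch normalization uses its learned running statistics.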
Adds a dense layer (also known as a fully-connected layer), in which all outputs from one layer are connected to all inputs of the next layer. A dense layer's input socket is often connected to a Convolution component, and its output socket to a Training component (e.g., the Classification component).
- Neurons: specifies how many neurons the layer comprises. Each neuron will be connected to each neuron of both the input and output components.
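As a quick illustration of that all-to-all connectivity (the sizes below are arbitrary), a dense layer with n_in inputs and n_out neurons holds one weight per input/neuron pair plus one bias per neuron:

```python
import numpy as np

# Sketch of a dense (fully-connected) layer's forward pass.
rng = np.random.default_rng(0)

n_in, n_out = 4, 3
W = rng.standard_normal((n_in, n_out))  # one weight per input/neuron pair
b = np.zeros(n_out)                     # one bias per neuron

x = rng.standard_normal((2, n_in))      # mini-batch of 2 samples
y = x @ W + b                           # every input feeds every neuron

n_params = W.size + b.size              # 4*3 + 3 = 15 trainable parameters
```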
Adds a convolution layer, which is the foundation for building a Convolutional Neural Network (CNN). This layer performs a convolution operation on image data (ranging from one to three dimensions) whereby a kernel filter patch, which is smaller than the image, is passed over the image at some stride (i.e., x number of pixels at a time) to build a feature map. CNNs are commonly used in computer vision applications to detect features within images. See the Convolution Tutorial for an example.
In PerceptiLabs, the Convolution component's input socket is often connected to a Reshape component, which provides the input image in the necessary dimensions, and its output socket is often connected to a Dense component (and sometimes to another Convolution component), which ensures that every neuron of the CNN is connected to every neuron expected by the Training component (e.g., the Classification component).
- Convolution type: specifies the type of convolution to use:
- Conv: 2D convolution layer (e.g. spatial convolution over images). See this TensorFlow topic for more information.
- Separable: performs depthwise separable 2D convolution. See this TensorFlow topic for more information.
- Depthwise: performs the first step of Depthwise separable 2D convolution. See this TensorFlow topic for more information.
- Dimension: specifies the dimension of the input image.
- Patch size: sets the size, in pixels, of the filter patch.
- Stride: sets the number of pixels to move the filter patch over the image.
- Feature maps: sets the number of feature maps to generate.
- Zero-padding: specifies how zero-padding should be used. Padding extends the area of the image to provide more area for the filter to cover the image, potentially leading to more accurate image analysis. Zero-padding can be set to:
- SAME: results in padding evenly to the left/right or up/down of the input such that output has the same height/width dimension as the input.
- VALID: No padding is to be used.
- Pooling: specifies if a pooling layer should be included. Pooling downsamples a feature map so that small changes to features (e.g., small movements of a feature within the image) don't result in the creation of new feature maps, while still retaining the features.
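To make the interaction of patch size, stride, and zero-padding concrete, here is a sketch of the per-dimension output-size formulas TensorFlow uses for SAME and VALID padding (the 28-pixel input below is an arbitrary example):

```python
import math

def conv_output_size(input_size, patch_size, stride, padding):
    """Output size along one dimension of a convolution."""
    if padding == "SAME":
        # pad so that, at stride 1, the output matches the input size
        return math.ceil(input_size / stride)
    if padding == "VALID":
        # no padding: the patch must fit entirely inside the image
        return math.ceil((input_size - patch_size + 1) / stride)
    raise ValueError(padding)

conv_output_size(28, 3, 1, "SAME")   # 28: same size as the input
conv_output_size(28, 3, 1, "VALID")  # 26: the patch can't hang over the edge
conv_output_size(28, 3, 2, "SAME")   # 14: stride 2 halves the output
```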
Adds a deconvolution (transposed convolution) layer.
- Dimension: specifies the dimension of the deconvolution operation.
- Patch size: sets the patch size.
- Stride: sets the stride.
- Feature maps: sets the number of feature maps.
- Zero-padding: specifies if zero-padding should be used.
- Activation function: specifies the activation function to use.
- Dropout: specifies if dropout should be used.
- Batch normalization: specifies if batch normalization should be used.
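A deconvolution reverses the size arithmetic of a convolution, so it is typically used to upsample feature maps. The sketch below shows the per-dimension output-size formulas used by transposed convolutions such as tf.keras's Conv2DTranspose (the sizes are arbitrary examples):

```python
def deconv_output_size(input_size, patch_size, stride, padding):
    """Output size along one dimension of a transposed convolution."""
    if padding == "SAME":
        return input_size * stride            # upsamples by the stride
    if padding == "VALID":
        return (input_size - 1) * stride + patch_size
    raise ValueError(padding)

deconv_output_size(14, 3, 2, "SAME")   # 28: doubles a 14-pixel input
deconv_output_size(14, 3, 2, "VALID")  # 29
```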
Adds a recurrent layer that includes a "looping" capability such that its input consists of both the data to analyze and the output from a previous calculation performed by that layer. Recurrent layers form the basis of Recurrent Neural Networks (RNNs), effectively providing them with memory (i.e., the ability to maintain state across iterations), while their recursive nature makes RNNs useful for cases involving sequential data like natural language and time series. They're also useful for mapping inputs to outputs of different types and dimensions.
Recurrent is a base component of many models and will often take the place of either Dense or Convolution components in those cases.
- Neurons: specifies how many neurons the layer consists of.
- Recurrent alternative: specifies which recurrent alternative to use. For additional information, see Recurrent layers.
- Return Sequence: toggles whether all states should be returned. If set to No, only the last state is returned.
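The "looping" behavior and the Return Sequence setting can be sketched with a minimal recurrent cell in NumPy (the tanh cell and the shapes below are illustrative, not PerceptiLabs' exact implementation):

```python
import numpy as np

# Minimal sketch of the recurrent "loop": at each time step the layer
# sees the current input plus its own previous output (its state).
rng = np.random.default_rng(0)

n_in, n_units, n_steps = 2, 3, 5
Wx = rng.standard_normal((n_in, n_units))     # input -> hidden weights
Wh = rng.standard_normal((n_units, n_units))  # hidden -> hidden (the loop)

x = rng.standard_normal((n_steps, n_in))      # one sequence of 5 time steps
h = np.zeros(n_units)                         # initial state

states = []
for t in range(n_steps):
    h = np.tanh(x[t] @ Wx + h @ Wh)           # new state depends on the old state
    states.append(h)

all_states = np.stack(states)  # Return Sequence = Yes: every state
last_state = states[-1]        # Return Sequence = No: only the final state
```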
Creates a Keras Applications VGG16 model that can be used for transfer learning when dealing with large-scale images.
Creates a Keras Applications ResNet50 model that can be used for transfer learning. This can be a quicker method than creating a ResNet from scratch. A ResNet50 model is a deep convolutional neural network (CNN) with 50 layers and is commonly used for image classification problems.
Creates a Keras Applications InceptionV3 model that can be used for transfer learning. InceptionV3 is a convolutional neural network (CNN) commonly used for image analysis and object detection.
Creates a Keras Applications MobileNetV2 model that can be used for transfer learning. MobileNetV2 is useful for visual recognition including classification, object detection and semantic segmentation.
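The pretrained-model settings described earlier (Include top, Pooling, Trainable, Weights) can be combined in tf.keras as sketched below, using MobileNetV2 as the example; the same keywords apply to VGG16, ResNet50, and InceptionV3. weights=None is used here so the sketch runs offline; pass weights="imagenet" to load the pretrained ImageNet weights instead.

```python
import tensorflow as tf

# Hedged transfer-learning sketch; the input shape and head size are arbitrary.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3),
    include_top=False,   # Include top: drop the fully-connected classification head
    weights=None,        # Weights: use "imagenet" for pretrained weights
    pooling="avg",       # Pooling: global average pooling -> 2D output tensor
)
base.trainable = False   # Trainable: freeze the base, as is common for classification

# Stack a new classification head on top of the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),
])
```

With pooling="avg" the base model outputs a 2D tensor (batch size by feature channels), which is what allows a Dense head to be attached directly.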