Layers¶
elementwise_add_layer¶
element-wise addition of N vectors: y_i = x0_i + x1_i + ... + xN_i
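Independent of the library API, the operation can be sketched in plain C++ (the function name and use of double are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Element-wise sum of N equally sized vectors: y_i = x0_i + x1_i + ... + xN_i
std::vector<double> elementwise_add(const std::vector<std::vector<double>>& xs) {
    std::vector<double> y(xs.at(0).size(), 0.0);
    for (const auto& x : xs)
        for (std::size_t i = 0; i < y.size(); ++i)
            y[i] += x[i];
    return y;
}
```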
average_pooling_layer¶
average pooling with trainable weights
Constructors¶
average_pooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pool_size)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pool_size factor by which to downscale
average_pooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pool_size,
size_t stride)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pool_size factor by which to downscale
- stride interval at which to apply the filters to the input
average_pooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pool_size_x,
size_t pool_size_y,
size_t stride_x,
size_t stride_y,
padding pad_type = padding::valid)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pool_size_x horizontal factor by which to downscale
- pool_size_y vertical factor by which to downscale
- stride_x horizontal interval at which to apply the filters to the input
- stride_y vertical interval at which to apply the filters to the input
- pad_type padding mode (same/valid)
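Independent of the library, non-overlapping average pooling (stride equal to the pool size) can be sketched in plain C++ for a single channel; the function name is illustrative, and W and H are assumed divisible by the pool size:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Non-overlapping average pooling of a single-channel W x H image
// (stride == pool_size; W and H assumed divisible by pool_size).
std::vector<double> average_pool(const std::vector<double>& in,
                                 std::size_t w, std::size_t h,
                                 std::size_t pool) {
    std::size_t ow = w / pool, oh = h / pool;
    std::vector<double> out(ow * oh, 0.0);
    for (std::size_t oy = 0; oy < oh; ++oy)
        for (std::size_t ox = 0; ox < ow; ++ox) {
            double sum = 0.0;
            for (std::size_t ky = 0; ky < pool; ++ky)
                for (std::size_t kx = 0; kx < pool; ++kx)
                    sum += in[(oy * pool + ky) * w + (ox * pool + kx)];
            out[oy * ow + ox] = sum / (pool * pool);
        }
    return out;
}
```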
average_unpooling_layer¶
average unpooling with trainable weights
Constructors¶
average_unpooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pooling_size)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pooling_size factor by which to upscale
average_unpooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pooling_size,
size_t stride)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pooling_size factor by which to upscale
- stride interval at which to apply the filters to the input
batch_normalization_layer¶
Batch Normalization
Normalize the activations of the previous layer at each batch
Constructors¶
batch_normalization_layer(const layer& prev_layer,
float_t epsilon = 1e-5,
float_t momentum = 0.999,
net_phase phase = net_phase::train)
- prev_layer previous layer to be connected with this layer
- epsilon small positive value to avoid zero-division
- momentum momentum in the computation of the exponential average of the mean/stddev of the data
- phase specify the current context (train/test)
batch_normalization_layer(size_t in_spatial_size,
size_t in_channels,
float_t epsilon = 1e-5,
float_t momentum = 0.999,
net_phase phase = net_phase::train)
- in_spatial_size spatial size (WxH) of the input data
- in_channels channels of the input data
- epsilon small positive value to avoid zero-division
- momentum momentum in the computation of the exponential average of the mean/stddev of the data
- phase specify the current context (train/test)
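The per-channel normalization step can be sketched in plain C++ (function name illustrative; the trainable scale/shift and the momentum-based running statistics used at test time are omitted for brevity):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Normalizes a batch of activations for one channel:
// y_i = (x_i - mean) / sqrt(variance + epsilon)
std::vector<double> batch_normalize(const std::vector<double>& x,
                                    double epsilon = 1e-5) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();
    double var = 0.0;
    for (double v : x) var += (v - mean) * (v - mean);
    var /= x.size();
    std::vector<double> y;
    for (double v : x) y.push_back((v - mean) / std::sqrt(var + epsilon));
    return y;
}
```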
concat_layer¶
concat N layers along depth
// in: [3,1,1],[3,1,1] out: [3,1,2] (in W,H,K order)
concat_layer l1(2,3);
// in: [3,2,2],[3,2,5] out: [3,2,7] (in W,H,K order)
concat_layer l2({shape3d(3,2,2),shape3d(3,2,5)});
convolutional_layer¶
2D convolution layer
takes a two-dimensional image as input and applies a filtering operation.
Constructors¶
convolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_size + 1) * (in-height - window_size + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
convolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_width + 1) * (in-height - window_height + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
convolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_size + 1) * (in-height - window_size + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
convolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_width + 1) * (in-height - window_height + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
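The valid/same output-size arithmetic above can be checked with a small plain-C++ helper (function name illustrative; unit strides, as in the formulas):

```cpp
#include <cassert>
#include <cstddef>

// Output spatial length of a unit-stride convolution along one axis:
// "valid": out = in - window + 1; "same": out = in.
std::size_t conv_out_length(std::size_t in, std::size_t window,
                            bool same_padding) {
    return same_padding ? in : in - window + 1;
}
```

For example, a 32x32 grayscale input with a 5x5 window and 6 output channels gives a 28x28x6 output under valid padding and 32x32x6 under same padding.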
deconvolutional_layer¶
2D deconvolution layer
takes a two-dimensional image as input and applies a filtering operation.
Constructors¶
deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_size - 1) * (in-height + window_size - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_width - 1) * (in-height + window_height - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_size - 1) * (in-height + window_size - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::default_engine())
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_width - 1) * (in-height + window_height - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
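Assuming unit strides, the deconvolution (transposed convolution) size arithmetic can be sketched the same way (function name illustrative); note that deconvolution grows the spatial size in valid mode:

```cpp
#include <cassert>
#include <cstddef>

// Output spatial length of a unit-stride deconvolution along one axis:
// "valid": out = in + window - 1; "same": out = in.
std::size_t deconv_out_length(std::size_t in, std::size_t window,
                              bool same_padding) {
    return same_padding ? in : in + window - 1;
}
```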
dropout_layer¶
applies dropout to the input
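One common formulation (inverted dropout, shown here as an illustrative plain-C++ sketch, not necessarily the library's exact scaling convention): during training each unit is zeroed with probability `rate` and survivors are scaled by 1/(1-rate) so the expected activation is unchanged; at test time the layer is the identity.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Inverted dropout: zero each element with probability `rate`,
// scale survivors by 1/(1-rate). Deterministic for a fixed seed.
std::vector<double> dropout(const std::vector<double>& x, double rate,
                            unsigned seed = 0) {
    std::mt19937 gen(seed);
    std::bernoulli_distribution drop(rate);
    std::vector<double> y;
    for (double v : x) y.push_back(drop(gen) ? 0.0 : v / (1.0 - rate));
    return y;
}
```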
feedforward_layer¶
single-input, single-output network with activation function
fully_connected_layer¶
computes a fully-connected (matmul) operation
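The matmul-plus-bias operation can be sketched in plain C++ (function name illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fully-connected layer: y[j] = sum_i W[j][i] * x[i] + b[j]
std::vector<double> fully_connected(const std::vector<std::vector<double>>& W,
                                    const std::vector<double>& b,
                                    const std::vector<double>& x) {
    std::vector<double> y(b);
    for (std::size_t j = 0; j < W.size(); ++j)
        for (std::size_t i = 0; i < x.size(); ++i)
            y[j] += W[j][i] * x[i];
    return y;
}
```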
linear_layer¶
element-wise operation: f(x) = h(scale*x+bias)
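The affine part scale*x+bias can be sketched in plain C++ (function name illustrative; the activation h is applied afterwards by the layer's activation function):

```cpp
#include <cassert>
#include <vector>

// Element-wise affine part of the linear layer: scale * x_i + bias
std::vector<double> linear_op(const std::vector<double>& x,
                              double scale, double bias) {
    std::vector<double> y;
    for (double v : x) y.push_back(scale * v + bias);
    return y;
}
```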
lrn_layer¶
local response normalization
Constructors¶
lrn_layer(layer* prev,
size_t local_size,
float_t alpha = 1.0,
float_t beta = 5.0,
norm_region region = norm_region::across_channels)
- prev the previous layer connected to this one
- local_size the number of channels (depths) to sum over
- alpha the scaling parameter (same as Caffe's LRN)
- beta the scaling parameter (same as Caffe's LRN)
- region normalization region (across_channels/within_channels)
lrn_layer(size_t in_width,
size_t in_height,
size_t local_size,
size_t in_channels,
float_t alpha = 1.0,
float_t beta = 5.0,
norm_region region = norm_region::across_channels)
- in_width the width of input data
- in_height the height of input data
- local_size the number of channels (depths) to sum over
- in_channels the number of channels of input data
- alpha the scaling parameter (same as Caffe's LRN)
- beta the scaling parameter (same as Caffe's LRN)
- region normalization region (across_channels/within_channels)
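Across-channel LRN at one spatial position can be sketched in plain C++, using the Caffe-style formula with a bias term of 1 (an assumption; function name illustrative): y_c = x_c / (1 + (alpha/n) * sum x^2)^beta, where the sum runs over the n = local_size channels centered on c.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Caffe-style across-channel local response normalization at one position.
std::vector<double> lrn_across_channels(const std::vector<double>& x,
                                        std::size_t local_size,
                                        double alpha, double beta) {
    std::vector<double> y(x.size());
    long half = static_cast<long>(local_size / 2);
    for (long c = 0; c < static_cast<long>(x.size()); ++c) {
        double sum = 0.0;
        // Sum of squares over the local_size channels centered on c,
        // clipped at the channel boundaries.
        for (long k = c - half; k <= c + half; ++k)
            if (k >= 0 && k < static_cast<long>(x.size())) sum += x[k] * x[k];
        y[c] = x[c] / std::pow(1.0 + alpha / local_size * sum, beta);
    }
    return y;
}
```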
max_pooling_layer¶
Constructors¶
max_pooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pooling_size,
backend_t backend_type = core::default_engine())
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pooling_size factor by which to downscale
- backend_type specify the backend engine to use
max_pooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t pooling_size_x,
size_t pooling_size_y,
size_t stride_x,
size_t stride_y,
padding pad_type = padding::valid,
backend_t backend_type = core::default_engine())
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- pooling_size_x horizontal factor by which to downscale
- pooling_size_y vertical factor by which to downscale
- stride_x horizontal interval at which to apply the filters to the input
- stride_y vertical interval at which to apply the filters to the input
- pad_type padding mode (same/valid)
- backend_type specify the backend engine to use
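As with average pooling above, the non-overlapping case can be sketched in plain C++ for a single channel (function name illustrative; W and H assumed divisible by the pooling size):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Non-overlapping max pooling of a single-channel W x H image
// (stride == pooling_size; W and H assumed divisible by pooling_size).
std::vector<double> max_pool(const std::vector<double>& in,
                             std::size_t w, std::size_t h, std::size_t pool) {
    std::size_t ow = w / pool, oh = h / pool;
    std::vector<double> out(ow * oh);
    for (std::size_t oy = 0; oy < oh; ++oy)
        for (std::size_t ox = 0; ox < ow; ++ox) {
            double m = in[oy * pool * w + ox * pool];
            for (std::size_t ky = 0; ky < pool; ++ky)
                for (std::size_t kx = 0; kx < pool; ++kx)
                    m = std::max(m, in[(oy * pool + ky) * w + (ox * pool + kx)]);
            out[oy * ow + ox] = m;
        }
    return out;
}
```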
max_unpooling_layer¶
Constructors¶
max_unpooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t unpooling_size)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- unpooling_size factor by which to upscale
max_unpooling_layer(size_t in_width,
size_t in_height,
size_t in_channels,
size_t unpooling_size,
size_t stride)
- in_width width of input image
- in_height height of input image
- in_channels the number of input image channels (depth)
- unpooling_size factor by which to upscale
- stride interval at which to apply the filters to the input
power_layer¶
element-wise pow: y = scale*x^factor
Constructors¶
power_layer(const shape3d& in_shape, float_t factor, float_t scale=1.0f)
- in_shape shape of input tensor
- factor floating-point number that specifies a power
- scale scale factor for additional multiply
power_layer(const layer& prev_layer, float_t factor, float_t scale=1.0f)
- prev_layer previous layer to be connected
- factor floating-point number that specifies a power
- scale scale factor for additional multiply
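The element-wise operation is a direct translation of the formula above (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// power_layer's element-wise operation: y = scale * x^factor
double power_op(double x, double factor, double scale) {
    return scale * std::pow(x, factor);
}
```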
quantized_convolutional_layer¶
2D convolution layer
takes a two-dimensional image as input and applies a filtering operation.
Constructors¶
quantized_convolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_size + 1) * (in-height - window_size + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_convolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_width + 1) * (in-height - window_height + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_convolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_size + 1) * (in-height - window_size + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_convolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width - window_width + 1) * (in-height - window_height + 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_deconvolutional_layer¶
2D deconvolution layer
takes a two-dimensional image as input and applies a filtering operation.
Constructors¶
quantized_deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_size - 1) * (in-height + window_size - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_width - 1) * (in-height + window_height - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_size,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_size window (kernel) size of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_size - 1) * (in-height + window_size - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_deconvolutional_layer(size_t in_width,
size_t in_height,
size_t window_width,
size_t window_height,
size_t in_channels,
size_t out_channels,
const connection_table& connection_table,
padding pad_type = padding::valid,
bool has_bias = true,
size_t w_stride = 1,
size_t h_stride = 1,
backend_t backend_type = core::backend_t::internal)
- in_width input image width
- in_height input image height
- window_width window width (kernel) of convolution
- window_height window height (kernel) of convolution
- in_channels input image channels (grayscale=1, rgb=3)
- out_channels output image channels
- connection_table definition of connections between in-channels and out-channels
- pad_type rounding strategy
  - valid: use valid pixels of input only.
    output-size = (in-width + window_width - 1) * (in-height + window_height - 1) * out_channels
  - same: add zero-padding to keep same width/height.
    output-size = in-width * in-height * out_channels
- has_bias whether to add a bias vector to the filter outputs
- w_stride specify the horizontal interval at which to apply the filters to the input
- h_stride specify the vertical interval at which to apply the filters to the input
- backend_type specify the backend engine to use
quantized_fully_connected_layer¶
computes a fully-connected (matmul) operation
slice_layer¶
slices the input data into multiple outputs along a given dimension.
Constructors¶
slice_layer(const shape3d& in_shape, slice_type slice_type, size_t num_outputs)
- slice_type target axis of slicing
- num_outputs number of output layers

example 1: input: NxKxWxH = 4x3x2x2 (N:batch-size, K:channels, W:width, H:height), slice_type: slice_samples, num_outputs: 3
output[0]: 1x3x2x2, output[1]: 1x3x2x2, output[2]: 2x3x2x2 (the remainder is assigned to the last output)

example 2: input: NxKxWxH = 4x6x2x2, slice_type: slice_channels, num_outputs: 3
output[0]: 4x2x2x2, output[1]: 4x2x2x2, output[2]: 4x2x2x2
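The split arithmetic in the examples above (remainder assigned to the last output) can be sketched in plain C++ (function name illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Splits `total` slices (samples or channels) into `num_outputs` groups;
// any remainder goes to the last output.
std::vector<std::size_t> slice_sizes(std::size_t total,
                                     std::size_t num_outputs) {
    std::vector<std::size_t> sizes(num_outputs, total / num_outputs);
    sizes.back() += total % num_outputs;
    return sizes;
}
```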