CS 179: LECTURE 17
CONVOLUTIONAL NETS IN CUDNN

LAST TIME
Motivation for convolutional neural nets
Forward and backward propagation algorithms for convolutional neural nets (at a high level)
Down-sampling data using pooling operations
Foreshadowing of how we will use cuDNN to do all of this

TODAY
Understanding cuDNN's internal representations of convolution and pooling objects
Implementing convolutional nets using cuDNN

REPRESENTING CONVOLUTIONS
In addition to tensors and their descriptors, we now also have cudnnFilterDescriptor_t (to describe a convolution kernel/filter) and cudnnConvolutionDescriptor_t (to describe an actual convolution)
We also have cudnnPoolingDescriptor_t to represent a pooling operation (max pool, mean pool, etc.)
Each of these types has its own constructor, accessors, mutators, and destructor

CONVOLUTIONAL FILTERS: cudnnFilterDescriptor_t
Allocate by calling cudnnCreateFilterDescriptor(cudnnFilterDescriptor_t *filterDesc)

Free by calling cudnnDestroyFilterDescriptor(cudnnFilterDescriptor_t filterDesc)
The filter itself is just an array of numbers on the device
We will be using 4D arrays to store filters
However, a filter is still stored as one long linear array, like everything else
Set by calling cudnnSetFilter4dDescriptor(cudnnFilterDescriptor_t filterDesc, cudnnDataType_t datatype, cudnnTensorFormat_t format, int k, int c, int h, int w)
Use CUDNN_TENSOR_NCHW for the format parameter
k = # of output channels, c = # of input channels, h and w = the filter height and width
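As an illustration, here is a minimal sketch of describing a bank of 32 filters of size 5x5 over 3 input channels (the channel and size counts are made-up example values, and error checking of the returned cudnnStatus_t is omitted):

    cudnnFilterDescriptor_t filterDesc;
    cudnnCreateFilterDescriptor(&filterDesc);      // allocate the descriptor
    cudnnSetFilter4dDescriptor(filterDesc,
                               CUDNN_DATA_FLOAT,   // datatype for this set
                               CUDNN_TENSOR_NCHW,  // format
                               32,                 // k: # of output channels
                               3,                  // c: # of input channels
                               5, 5);              // h, w: filter height and width
    // ... use the descriptor ...
    cudnnDestroyFilterDescriptor(filterDesc);      // free when done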

Get contents by calling cudnnGetFilter4dDescriptor(cudnnFilterDescriptor_t filterDesc, cudnnDataType_t *datatype, cudnnTensorFormat_t *format, int *k, int *c, int *h, int *w)
As usual, this function returns its results by writing through the output pointers

DESCRIBING CONVOLUTIONS: cudnnConvolutionDescriptor_t
Allocate with cudnnCreateConvolutionDescriptor(cudnnConvolutionDescriptor_t *convDesc)

Free with cudnnDestroyConvolutionDescriptor(cudnnConvolutionDescriptor_t convDesc)
We will be considering 2D convolutions only
Set with cudnnSetConvolution2dDescriptor(cudnnConvolutionDescriptor_t convDesc, int pad_h, int pad_w, int u, int v, int dilation_h, int dilation_w, cudnnConvolutionMode_t mode, cudnnDataType_t computeType)
pad_h and pad_w are respectively the number of rows and columns of zeros to pad the input with; use 0 for both
u and v are respectively the vertical and horizontal stride of the convolution (a way to downsample without pooling); use 1 for both
dilation_h and dilation_w are roughly a stretch factor for filters, beyond the scope of this class; use 1 for both
cudnnConvolutionMode_t is an enum saying whether to do a convolution or a cross-correlation; for this set, use CUDNN_CONVOLUTION for the mode argument
cudnnDataType_t is an enum indicating the kind of data being used (float, double, int, long int, etc.); for this set, use CUDNN_DATA_FLOAT for the computeType argument
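Putting the last few slides together, a minimal sketch of building a convolution descriptor with the values recommended for this set:

    cudnnConvolutionDescriptor_t convDesc;
    cudnnCreateConvolutionDescriptor(&convDesc);
    cudnnSetConvolution2dDescriptor(convDesc,
                                    0, 0,               // pad_h, pad_w: no zero padding
                                    1, 1,               // u, v: stride 1 in each direction
                                    1, 1,               // dilation_h, dilation_w: no dilation
                                    CUDNN_CONVOLUTION,  // true convolution, not cross-correlation
                                    CUDNN_DATA_FLOAT);  // do the math in single precision
    // ... use the descriptor ...
    cudnnDestroyConvolutionDescriptor(convDesc);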

Get with cudnnGetConvolution2dDescriptor(cudnnConvolutionDescriptor_t convDesc, int *pad_h, int *pad_w, int *u, int *v, int *dilation_h, int *dilation_w, cudnnConvolutionMode_t *mode, cudnnDataType_t *computeType)

Given descriptors for an input and the filter we want to convolve it with, we can get the shape of the output via cudnnGetConvolution2dForwardOutputDim(cudnnConvolutionDescriptor_t convDesc, cudnnTensorDescriptor_t inputTensorDesc, cudnnFilterDescriptor_t filterDesc, int *n, int *c, int *h, int *w)
As usual, n, c, h, and w are set by reference as outputs
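For example, a sketch of sizing and allocating the output tensor before running the convolution (inputDesc is an assumed, already-configured input tensor descriptor; filterDesc and convDesc are from the sketches above):

    int n, c, h, w;
    cudnnGetConvolution2dForwardOutputDim(convDesc, inputDesc, filterDesc,
                                          &n, &c, &h, &w);  // output shape, set by reference
    cudnnTensorDescriptor_t outDesc;
    cudnnCreateTensorDescriptor(&outDesc);
    cudnnSetTensor4dDescriptor(outDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               n, c, h, w);                 // describe the output tensor
    float *out;
    cudaMalloc((void **) &out, n * c * h * w * sizeof(float));  // allocate the output buffer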

USING THESE IN A CONV NET
All of cuDNN's functions for forward and backward passes in conv nets make extensive use of these descriptor types
This is why we are establishing them now, rather than later
One more aside before discussing the actual functions for doing the forward and backward passes

CONVOLUTION ALGORITHMS
There are many ways to perform convolutions!

Do it explicitly
Turn it into a matrix multiplication
Use an FFT to transform into the frequency domain, multiply pointwise, and inverse FFT back
cuDNN lets you choose the algorithm you want to use for all operations in the forward and backward passes
Different algorithms are better suited to different situations!
The most important factor is the amount of global memory available for intermediate computations (the workspace)
There is a tradeoff between time and space complexity: faster algorithms tend to need more space for intermediate computations
cuDNN lets you specify preferences, and it gives you the algorithm that best matches them

The choice of algorithm is represented via the enums cudnnConvolution*Preference_t, cudnnConvolution*Algo_t, and cudnnConvolution*AlgoPerf_t, where * is one of Fwd, BwdFilter, and BwdData
Feel free to look at the NVIDIA docs for these types and related functions, but we will be handling them for you in HW6; a sketch of what that handling looks like follows
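For the curious, a rough sketch of that handling for the Fwd case, using the cuDNN 7-era query functions (handle and the descriptor names are assumptions carried over from the earlier sketches; HW6 does the equivalent for you):

    cudnnConvolutionFwdAlgo_t algo;
    cudnnGetConvolutionForwardAlgorithm(handle, inputDesc, filterDesc, convDesc, outDesc,
                                        CUDNN_CONVOLUTION_FWD_PREFER_FASTEST,
                                        0,       // memory limit (unused by this preference)
                                        &algo);  // cuDNN picks the algorithm matching the preference
    size_t workSpaceBytes;
    cudnnGetConvolutionForwardWorkspaceSize(handle, inputDesc, filterDesc, convDesc, outDesc,
                                            algo, &workSpaceBytes);
    void *workSpace;
    cudaMalloc(&workSpace, workSpaceBytes);      // scratch space for the chosen algorithm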

FORWARD PASS: CONVOLUTION
The forward pass for a conv layer with input x, filter k, and bias b is z = conv(k, x) + b
In HW6, we will give you code that deals with the bias term
Your job will be to perform the convolution using cudnnConvolutionForward(); see the next slide for a description of how to call this function

cudnnConvolutionForward(
    cudnnHandle_t handle,
    void *alpha,
    cudnnTensorDescriptor_t xDesc, void *x,
    cudnnFilterDescriptor_t kDesc, void *k,
    cudnnConvolutionDescriptor_t convDesc,
    cudnnConvolutionFwdAlgo_t algo,
    void *workSpace, size_t workSpaceBytes,
    void *beta,
    cudnnTensorDescriptor_t zDesc, void *z)

This function sets the contents of the output tensor z to alpha[0] * conv(k, x) + beta[0] * z
The convolution algorithm, workspace, and workspace size will be supplied to you in HW6 (an unnecessary complication for you to consider on this set)
With alpha[0] = 1 and beta[0] = 0, this is exactly what you need to call!
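A minimal sketch of the call with those mixing values (the descriptor and buffer names carry over from the earlier sketches; algo, workSpace, and workSpaceBytes are the values HW6 supplies):

    float alpha = 1.0f, beta = 0.0f;
    cudnnConvolutionForward(handle,
                            &alpha,              // scale the convolution result by 1
                            inputDesc, input,    // x: the input tensor
                            filterDesc, filter,  // k: the filter
                            convDesc, algo,
                            workSpace, workSpaceBytes,
                            &beta,               // discard the old contents of z
                            outDesc, out);       // z: the output tensor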

BACKWARD PASS: CONVOLUTION
With the neural net architecture given, we will have:
  The output of the convolution
  The gradient with respect to the output of the convolution (propagated backwards from the next layer)
We want to find the gradients with respect to:
  The filter and the bias, to do gradient descent
  The input data, to propagate backwards

Key to argument names:
  x is the input data
  k is the filter
  dz is the gradient with respect to the output
  dx is the gradient with respect to the input data
  dk is the gradient with respect to the filter
  db is the gradient with respect to the bias
As always, the alpha and beta arguments are pointers to mixing parameters
If we are using a buffer out to accumulate the results of performing an operation op on an input buffer in, we have out = alpha[0] * op(in) + beta[0] * out
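For example, with alpha[0] = 1 and beta[0] = 0, out is simply overwritten with op(in), while with alpha[0] = 1 and beta[0] = 1 the new results are added to whatever out already holds.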

GRADIENT WRT BIAS
cudnnConvolutionBackwardBias(
    cudnnHandle_t handle,
    void *alpha,
    cudnnTensorDescriptor_t dzDesc, void *dz,
    void *beta,
    cudnnTensorDescriptor_t dbDesc, void *db)
We will handle this for you in HW6

GRADIENT WRT FILTER
cudnnConvolutionBackwardFilter(
    cudnnHandle_t handle,
    void *alpha,
    cudnnTensorDescriptor_t xDesc, void *x,
    cudnnTensorDescriptor_t dzDesc, void *dz,
    cudnnConvolutionDescriptor_t convDesc,
    cudnnConvolutionBwdFilterAlgo_t algo,
    void *workSpace, size_t workSpaceBytes,
    void *beta,
    cudnnFilterDescriptor_t dkDesc, void *dk)

GRADIENT WRT INPUT DATA
cudnnConvolutionBackwardData(
    cudnnHandle_t handle,
    void *alpha,
    cudnnFilterDescriptor_t kDesc, void *k,
    cudnnTensorDescriptor_t dzDesc, void *dz,
    cudnnConvolutionDescriptor_t convDesc,
    cudnnConvolutionBwdDataAlgo_t algo,
    void *workSpace, size_t workSpaceBytes,
    void *beta,
    cudnnTensorDescriptor_t dxDesc, void *dx)
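For instance, a minimal sketch of computing the filter gradient dk (the buffer and descriptor names are assumptions consistent with the earlier sketches; the Bwd algorithm and workspace are again supplied in HW6):

    float alpha = 1.0f, beta = 0.0f;
    cudnnConvolutionBackwardFilter(handle,
                                   &alpha,
                                   inputDesc, input,      // x: the input data
                                   outDesc, dOut,         // dz: gradient wrt the output
                                   convDesc, bwdFilterAlgo,
                                   workSpace, workSpaceBytes,
                                   &beta,
                                   filterDesc, dFilter);  // dk: gradient wrt the filter

cudnnConvolutionBackwardData() follows the same pattern, taking the filter (kDesc, k) in place of the input data and writing the input gradient (dxDesc, dx) as its last two arguments.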

POOLING OPERATIONS
Reminder: pooling lets us down-sample images to make them more manageable (reduce dimensionality)
[Figure: pooling illustration, from http://ieeexplore.ieee.org/document/7590035/all-figures]

POOLING OPERATIONS: cudnnPoolingDescriptor_t
Allocate with cudnnCreatePoolingDescriptor(cudnnPoolingDescriptor_t *poolingDesc)
Free with cudnnDestroyPoolingDescriptor(cudnnPoolingDescriptor_t poolingDesc)
We will only be using 2D pooling operations in HW6

Set with cudnnSetPooling2dDescriptor(
    cudnnPoolingDescriptor_t poolingDesc,
    cudnnPoolingMode_t poolingMode,
    cudnnNanPropagation_t nanProp,
    int windowHeight, int windowWidth,
    int verticalPad, int horizontalPad,
    int verticalStride, int horizontalStride)
cudnnPoolingMode_t is an enum specifying the kind of pooling to do, i.e. max (CUDNN_POOLING_MAX) or average (CUDNN_POOLING_AVERAGE_COUNT_INCLUDE_PADDING or CUDNN_POOLING_AVERAGE_COUNT_EXCLUDE_PADDING)
For nanProp, use CUDNN_PROPAGATE_NAN
Use 0 for the horizontal and vertical padding
Make the strides equal to the window dimensions
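For example, a minimal sketch of a 2x2 max pool configured per the advice above (the window size is an example choice):

    cudnnPoolingDescriptor_t poolDesc;
    cudnnCreatePoolingDescriptor(&poolDesc);
    cudnnSetPooling2dDescriptor(poolDesc,
                                CUDNN_POOLING_MAX,    // kind of pooling to do
                                CUDNN_PROPAGATE_NAN,  // nanProp, as recommended above
                                2, 2,                 // windowHeight, windowWidth
                                0, 0,                 // verticalPad, horizontalPad: 0
                                2, 2);                // strides equal to the window dimensions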

Get with cudnnGetPooling2dDescriptor(
    cudnnPoolingDescriptor_t poolingDesc,
    cudnnPoolingMode_t *poolingMode,
    cudnnNanPropagation_t *nanProp,
    int *windowHeight, int *windowWidth,
    int *verticalPad, int *horizontalPad,
    int *verticalStride, int *horizontalStride)
We can get the output shape of a pooling operation on some input using the function cudnnGetPooling2dForwardOutputDim(cudnnPoolingDescriptor_t poolingDesc, cudnnTensorDescriptor_t inputDesc, int *n, int *c, int *h, int *w)
n, c, h, and w are output parameters to be set by reference

To perform a pooling operation in the forward direction, use
cudnnPoolingForward(
    cudnnHandle_t handle,
    cudnnPoolingDescriptor_t poolingDesc,
    void *alpha,
    cudnnTensorDescriptor_t xDesc, void *x,
    void *beta,
    cudnnTensorDescriptor_t zDesc, void *z)
To differentiate with respect to a pooling operation, use
cudnnPoolingBackward(
    cudnnHandle_t handle,
    cudnnPoolingDescriptor_t poolingDesc,
    void *alpha,
    cudnnTensorDescriptor_t zDesc, void *z,
    cudnnTensorDescriptor_t dzDesc, void *dz,
    cudnnTensorDescriptor_t xDesc, void *x,
    void *beta,
    cudnnTensorDescriptor_t dxDesc, void *dx)
Here, x is the input to the pooling operation, dx is its gradient, z is the output of the pooling operation, and dz is its gradient
alpha and beta are pointers to mixing parameters, as usual

In all cases, the last buffer given as an argument is the output array
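A minimal sketch of both calls, with assumed names consistent with the earlier sketches (poolDesc is the descriptor configured above):

    float alpha = 1.0f, beta = 0.0f;
    // Forward: z = pool(x)
    cudnnPoolingForward(handle, poolDesc,
                        &alpha,
                        inDesc, in,      // x: input to the pooling operation
                        &beta,
                        outDesc, out);   // z: pooled output (last buffer is the output)
    // Backward: compute dx from z, dz, and x
    cudnnPoolingBackward(handle, poolDesc,
                         &alpha,
                         outDesc, out,   // z: output of the pooling operation
                         outDesc, dOut,  // dz: gradient wrt the output (same shape as z)
                         inDesc, in,     // x: the original input
                         &beta,
                         inDesc, dIn);   // dx: gradient wrt the input (the output here)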

SUMMARY
Today, we discussed how to use cuDNN to:
  Perform convolutions
  Backpropagate gradients with respect to convolutions
  Perform pooling operations and backpropagate their gradients
For HW6, these slides should be a good alternative reference to the NVIDIA docs.
