{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Basic example of using the CUTLASS Python interface for Conv2d\n", "\n", "This notebook walks through a basic example of using the CUTLASS Python interface to declare, compile, and run Conv2d. \n", "\n", "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NVIDIA/cutlass/blob/main/examples/python/03_basic_conv2d.ipynb)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Prerequisites for running on Colab\n", "This notebook requires an NVIDIA GPU. If `nvidia-smi` fails, go to Runtime -> Change runtime type -> Hardware accelerator and confirm a GPU is selected." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!#nvidia-smi" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If running on Colab, you will need to install the CUTLASS Python interface. To do so, uncomment the following line and run the cell:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!#pip install nvidia-cutlass" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## General setup\n", "We first import various packages needed for the example and construct the input and output tensors that will be used in our example." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "import random\n", "\n", "import cutlass\n", "\n", "# This controls whether the C++ GEMM declaration will be printed at each step. \n", "# Set to `false` to omit this information.\n", "print_module = True\n", "\n", "# Input tensor: [N, H, W, C] under the channel-last layout\n", "N, H, W, C = [32, 28, 28, 64]\n", "\n", "# Weight tensor: [K, R, S, C] under the channel-last layout\n", "K, R, S = [128, 3, 3]\n", "\n", "# Stride, and padding\n", "stride = (2, 2)\n", "padding = (1, 1)\n", "dilation = (1, 1)\n", "\n", "# Compute the output size [N, P, Q, K]\n", "N, P, Q, K = cutlass.Conv2d.output_size((N, H, W, C), (K, R, S, C), padding, stride, dilation)\n", "\n", "dtype = torch.float16\n", "type_A = torch.float16\n", "type_B = torch.float16\n", "type_C = torch.float16\n", "type_D = torch.float16\n", "\n", "torch.manual_seed(1234)\n", "\n", "input = torch.ceil(\n", " torch.empty(size=(N, C, H, W), dtype=type_A, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)\n", "weight = torch.ceil(\n", " torch.empty(size=(K, C, R, S), dtype=type_B, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)\n", "tensor_C = torch.ceil(\n", " torch.empty(size=(N, K, P, Q), dtype=type_B, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)\n", "output = torch.zeros_like(tensor_C)\n", "\n", "alpha = 1.0\n", "beta = 0.0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Declaring and running a Conv2d Fprop\n", "\n", "We first show you how to run a Conv2d in the forward propagation. To get started, one only needs to provide the tensors declared above to the `cutlass.op.Conv2dFprop` call. This sets up a default Conv2d fprop operation for the given device on which you are running. \n", "\n", "Assuming that we are runing on SM80, the default is a Conv2d that leverages FP16 Tensor Core operations.\n", "\n", "Calling `plan.run()` will generate the CUTLASS C++ kernel in question, compile it, and run it on the tensors we previously passed in. By setting `print_module` to `true`, the C++ code that is emitted is printed." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Specifying `element_accumulator` is not required if it is the same as `element`\n", "plan = cutlass.Conv2dFprop(element=dtype, element_accumulator=torch.float32)\n", "plan.run(input, weight, tensor_C, output, stride, padding, dilation, alpha, beta, print_module=print_module)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are many other ways to construct a plan from `cutlass.op.Conv2dFprop` (e.g., by specifying the types of each operand, by providing representative tensors as input). For more details on these, see the documentation in the `cutlass.op.Conv2dFprop` constructor.\n", "\n", "We then compare the output to running the Conv2d using PyTorch. PyTorch use NCHW layout by default, so permutations are required." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "output_torch = alpha * torch.ops.aten.conv2d(\n", " input, weight, stride=stride, padding=padding, dilation=dilation\n", ") + beta * tensor_C\n", "\n", "assert torch.equal(output_torch, output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that one could use the same kernel just declared for tensors provided by other frameworks beyond PyTorch, such as NumPy." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Declaring and running Conv2d Dgrad and Wgrad\n", "\n", "The Python interface also supports declaring and running backward kernels of Conv2d. To begin with, we construct the tensors for the gradient of input, output, and weight." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "grad_output = torch.ceil(\n", " torch.empty(size=(N, K, P, Q), dtype=type_A, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)\n", "grad_input = torch.zeros_like(input)\n", "grad_weight = torch.zeros_like(weight)\n", "\n", "tensor_C_dgrad = torch.ceil(\n", " torch.empty(size=(N, C, H, W), dtype=type_A, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)\n", "tensor_C_wgrad = torch.ceil(\n", " torch.empty(size=(K, C, R, S), dtype=type_B, device=\"cuda\").uniform_(-4.5, 3.5)\n", ").to(memory_format=torch.channels_last)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The script below gives a simple example of computing a data gradient via the CUTLASS Python interface and via PyTorch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan_dgrad = cutlass.Conv2dDgrad(element=dtype, element_accumulator=torch.float32)\n", "plan_dgrad.run(grad_output, weight, tensor_C_dgrad, grad_input, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "\n", "grad_input_torch = alpha * torch.nn.grad.conv2d_input(\n", " (N, C, H, W),\n", " weight, grad_output,\n", " stride=stride, padding=padding\n", ") + beta * tensor_C_dgrad\n", "\n", "assert torch.equal(grad_input_torch, grad_input)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The script below gives a simple example of computing a weight gradient via the CUTLASS Python interface and via PyTorch." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan_wgrad = cutlass.Conv2dWgrad(element=dtype, element_accumulator=torch.float32)\n", "plan_wgrad.run(grad_output, input, tensor_C_wgrad, grad_weight, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "\n", "grad_weight_torch = alpha * torch.nn.grad.conv2d_weight(\n", " input, (K, C, R, S), grad_output,\n", " stride=stride, padding=padding\n", ") + beta * tensor_C_wgrad\n", "\n", "assert torch.equal(grad_weight_torch, grad_weight)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Running non-default Conv2ds\n", "\n", "The previous examples showed how it is simple to get starting running a default Conv2d kernel in CUTLASS. But, what do you do if you want a bit more control over the parameters to the Conv2d? CUTLASS Python interface exposes mutable parameters that can be set after the `plan` initialization. We summarize these in the table below.\n", "\n", "|Parameter|Description|\n", "| -- | -- |\n", "|`tile_description`|The threadblock tile size, warp count, software pipeline stages, and instruction shape|\n", "|`iterator_algorithm`|The iterator algorithm used to access the source operands|\n", "|`swizzling_stride`|The stride of the threadblock swizzling functor|\n", "|`split-K`|Partitions the reduction dimension to different threadblocks|" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Tile Description\n", "\n", "The `tile_description` defines the tiling size of each threadblock, the warp count along each dimension of the tile, the software pipeline stages, and the instruction size. Under the hood, CUTLASS enumerates the different Conv2d configuration parameters for this kernel from the CUTLASS profiler. The code below shows how one can access the tile descriptions for the kernel (e.g., threadblock and warp shape)." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan.opclass = \"tensor_op\"\n", "tiles = plan.tile_descriptions()\n", "print(f'{len(tiles)} tile descriptions returned')\n", "num_print = 10\n", "print(f'First {num_print} tile descriptions are:')\n", "for td in tiles[:num_print]:\n", " print(td)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we'll pick one of these configurations at random and compile and run it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "random.seed(42)\n", "idx = random.randint(0, len(tiles)-1)\n", "td = tiles[idx]\n", "print(f'Tile description {idx} is: {td}')\n", "plan.tile_description = td\n", "plan.run(input, weight, tensor_C, output, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "assert torch.equal(output_torch, output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Besides tile descriptions enumerated by CUTLASS, the users can also explicitly set the `threadblockshape`, `warp_shape`, `stages`, `instruction_shape`, and `cluster_shape`. If the configuration is invalid, an exception will be raised at `plan.run()` and the detailed compilation error will be stored in `./cutlass_python_compilation_error.txt` for debugging." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "if plan.cc == 70:\n", " plan.tile_description = {\n", " \"threadblock_shape\": [64, 256, 32],\n", " \"warp_count\": [1, 4, 1],\n", " \"stages\": 2,\n", " \"instruction_shape\": [8, 8, 4], # optional,\n", " \"cluster_shape\": [1, 1, 1] # optional, only [1, 1, 1] is supported currently\n", " }\n", "elif plan.cc == 75:\n", " plan.tile_description = {\n", " \"threadblock_shape\": [128, 64, 32],\n", " \"warp_count\": [2, 1, 1],\n", " \"stages\": 2,\n", " \"instruction_shape\": [16, 8, 8], # optional,\n", " \"cluster_shape\": [1, 1, 1] # optional, only [1, 1, 1] is supported currently\n", " }\n", "elif plan.cc == 80:\n", " plan.tile_description = {\n", " \"threadblock_shape\": [128, 128, 64],\n", " \"warp_count\": [2, 2, 1],\n", " \"stages\": 4,\n", " \"instruction_shape\": [16, 8, 16], # optional,\n", " \"cluster_shape\": [1, 1, 1] # optional, only [1, 1, 1] is supported currently\n", " }\n", "elif plan.cc == 86:\n", " plan.tile_description = {\n", " \"threadblock_shape\": [128, 64, 64],\n", " \"warp_count\": [2, 2, 1],\n", " \"stages\": 3,\n", " \"instruction_shape\": [16, 8, 16],\n", " \"cluster_shape\": [1, 1, 1]\n", " }\n", "\n", "plan.run(input, weight, tensor_C, output, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "assert torch.equal(output_torch, output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Iterator Algorithm\n", "\n", "The iterator algorithm describes how sources are loaded from memory. There are some iterator algorithms optimized for specific alignments and input/output channels that have better performance. The table below illustrates the available iterator algorithms.\n", "\n", "|Conv Kind | Iterator Algorithm | Description |\n", "| -- | -- | -- |\n", "|Fprop | \"analytic\" | Functionally correct in all cases but lower performance |\n", "| | \"optimized\" | Optimized for and requires `R <= 32`, `S<= 32`, and `C % alignment_input == 0`|\n", "| | \"few_channels\" | optimized for small `C` and requires `C % alignment_input == 0`|\n", "| | \"fixed_channels\" | optimized for small `C` and requires `C == alignment_input` |\n", "|Dgrad | \"analytic\" | Functionally correct in all cases but lower performance |\n", "| | \"optimized\" | Optimzed for and require `R <= 32`, `S<= 32`, `K % alignment_grad_output == 0`, and `C % alignment_weight == 0`|\n", "|Wgrad | \"analytic\" | Functionally correct in all cases but lower performance |\n", "| | \"optimized\" | Optimized for and require `K % alignment_grad_output == 0`, and `C % alignment_input == 0`|\n", "\n", "By default, the Python interface will automatically propose a suitable iterator algorithm based on the input tensors in `plan.run()`. However, the user can also specify the desired iterator algorithm as follows" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan.iterator_algorithm = \"analytic\"\n", "plan.run(input, weight, tensor_C, output, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "assert torch.equal(output_torch, output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If the iterator algorithm is invalid for the problem size in `plan.run()`, an exception will be raised." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Swizzling Stride\n", "The swizzling changes how the tile are mapped to threadblocks to improve the L2 Locality. Given a swizzling stride `N`, the threadblock `(tb_x, tb_y)` computes tile `(tb_x / N, tb_y * N + (tb_x % N))`. Currently, stride values of `1`, `2`, `4`, and `8` are supported for `fprop`, `wgrad`, and `1`, and `4` for `dgrad`. The swizzling stride can be set with:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plan.swizzling_stride = 4\n", "plan.run(input, weight, tensor_C, output, stride, padding, dilation, alpha, beta, print_module=print_module)\n", "assert torch.equal(output_torch, output)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Split-K\n", "Split-K is usually applied when the Conv2d has small spatial dimensions and large reduction dimension to ensure good utilization. It further partitions the reduction dimension to different threadblocks. The CUTLASS Python interface supports two types of split-K strategies: `Parallel`, and `Serial`. \n", "* `Parallel`: the partial results from different threadblocks are stored in a temporary buffer in the global memory. When the Conv2d is done, a separate reduction kernel is created and launched to reduce the partial results.\n", "* `Serial`: A semaphore is used to coordinate the order of different threadblocks adding their partial results to a given output tile. A separate kernel does not need to be launched for prforming the reduction.\n", "\n", "While all `fprop`, `dgrad`, and `wgrad` support split-K, here we use `wgrad` as an example. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Parallel Split-K with 5 slices\n", "grad_weight_parallel = torch.zeros_like(grad_weight)\n", "plan_wgrad.run(\n", " grad_output, input, tensor_C_wgrad, grad_weight_parallel, \n", " stride, padding, dilation, alpha, beta, print_module=print_module, split_k=(\"parallel\", 5))\n", "assert torch.equal(grad_weight_torch, grad_weight_parallel)\n", "\n", "# Serial Split-K with 3 slices\n", "grad_weight_serial = torch.zeros_like(grad_weight)\n", "plan_wgrad.run(\n", " grad_output, input, tensor_C_wgrad, grad_weight_serial, \n", " stride, padding, dilation, alpha, beta, print_module=print_module, split_k=(\"serial\", 3))\n", "assert torch.equal(grad_weight_torch, grad_weight_serial)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.9" } }, "nbformat": 4, "nbformat_minor": 4 }