diff --git a/content/2.01_DeviceQuery.rst b/content/2.01_DeviceQuery.rst
index 6f85647..56c26a4 100644
--- a/content/2.01_DeviceQuery.rst
+++ b/content/2.01_DeviceQuery.rst
@@ -16,7 +16,7 @@ First, we want to ask the API how many CUDA-capable devices are available, which is
 
 .. code-block:: CUDA
 
-    __host__ ​__device__​ cudaError_t cudaGetDeviceCount(int* numDevices)
+    __host__ __device__ cudaError_t cudaGetDeviceCount(int* numDevices)
 
 The function calls the API and returns the number of available devices in the address provided as the first argument.
 There are a couple of things to notice here.
@@ -38,7 +38,7 @@ To populate the |cudaDeviceProp| structure, CUDA has the |cudaGetDeviceProperties| f
 
 .. code-block:: c++
 
-    __host__​ cudaError_t cudaGetDeviceProperties(cudaDeviceProp* prop, int deviceId)
+    __host__ cudaError_t cudaGetDeviceProperties(cudaDeviceProp* prop, int deviceId)
 
 The function has a |__host__| specifier, which means that one cannot call it from device code.
 It also returns a |cudaError_t| structure, which can be |cudaErrorInvalidDevice| in case we are trying to get properties of a non-existing device (e.g. when ``deviceId`` is larger than or equal to ``numDevices`` above).
diff --git a/content/2.02_HelloGPU.rst b/content/2.02_HelloGPU.rst
index 8b80132..762f8b5 100644
--- a/content/2.02_HelloGPU.rst
+++ b/content/2.02_HelloGPU.rst
@@ -103,7 +103,7 @@ This can be done with the following function from the CUDA API:
 
 .. code-block:: CUDA
 
-    __host__ ​__device__​ cudaError_t cudaDeviceSynchronize()
+    __host__ __device__ cudaError_t cudaDeviceSynchronize()
 
 We are already familiar with the |__host__| and |__device__| specifiers: this function can be used in both host and device code.
 As usual, the return type is |cudaError_t|, which may indicate that there was an error in execution; the function does not take any arguments.
diff --git a/content/2.03_VectorAdd.rst b/content/2.03_VectorAdd.rst
index 02c1ba5..4087e41 100644
--- a/content/2.03_VectorAdd.rst
+++ b/content/2.03_VectorAdd.rst
@@ -48,7 +48,7 @@ To allocate a buffer in GPU memory, one has to call the CUDA API function |cudaMal
 
 .. code-block:: cuda
 
-    __host__ ​__device__ ​cudaError_t cudaMalloc(void** devPtr, size_t size)
+    __host__ __device__ cudaError_t cudaMalloc(void** devPtr, size_t size)
 
 We are now getting used to these functions having execution-space specifiers and returning |cudaError_t|.
 As its first argument, the function takes the address of the pointer to the buffer in device memory.
@@ -62,7 +62,7 @@ To release the memory, the |cudaFree| function should be used:
 
 .. code-block:: cuda
 
-    __host__ ​__device__​ cudaError_t cudaFree(void* devPtr)
+    __host__ __device__ cudaError_t cudaFree(void* devPtr)
 
 Here, the pointer itself is not updated,
 
@@ -76,7 +76,7 @@ This is done using the |cudaMemcpy| function, which has the following signature:
 
 .. code-block:: cuda
 
-    __host__ ​cudaError_t cudaMemcpy(void* dst, const void* src, size_t count, cudaMemcpyKind kind)
+    __host__ cudaError_t cudaMemcpy(void* dst, const void* src, size_t count, cudaMemcpyKind kind)
 
 Both copies, to and from the device buffer, are done using the same function, and the direction of the copy is specified by the last argument, which is a |cudaMemcpyKind| enumeration.
 The enumeration can take the values |cudaMemcpyHostToHost|, |cudaMemcpyHostToDevice|, |cudaMemcpyDeviceToHost|, |cudaMemcpyDeviceToDevice| or |cudaMemcpyDefault|.
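
The signatures touched above are easiest to sanity-check in one small, self-contained program. The following is a minimal sketch, not part of the patched files (the array size and variable names are illustrative), that queries the available devices with ``cudaGetDeviceCount`` and ``cudaGetDeviceProperties`` and then does a host-device round trip with ``cudaMalloc``, ``cudaMemcpy`` and ``cudaFree``:

.. code-block:: CUDA

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        // Ask the API how many CUDA-capable devices are available.
        int numDevices = 0;
        if (cudaGetDeviceCount(&numDevices) != cudaSuccess || numDevices == 0)
        {
            printf("No CUDA-capable devices found\n");
            return 1;
        }

        // Valid device ids are 0 .. numDevices - 1.
        for (int deviceId = 0; deviceId < numDevices; deviceId++)
        {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, deviceId);
            printf("Device %d: %s\n", deviceId, prop.name);
        }

        // Copy a small array to the device and back.
        const int numElements = 16;  // illustrative size
        const size_t sizeInBytes = numElements * sizeof(float);
        float h_data[numElements];
        for (int i = 0; i < numElements; i++)
        {
            h_data[i] = float(i);
        }

        float* d_data = nullptr;
        cudaMalloc((void**)&d_data, sizeInBytes);                        // allocate the device buffer
        cudaMemcpy(d_data, h_data, sizeInBytes, cudaMemcpyHostToDevice); // host to device
        cudaMemcpy(h_data, d_data, sizeInBytes, cudaMemcpyDeviceToHost); // device to host
        cudaFree(d_data);                                                // note: d_data itself is not reset

        printf("Round trip done, h_data[%d] = %.1f\n", numElements - 1, h_data[numElements - 1]);
        return 0;
    }

Compiled with ``nvcc``, this prints every device name and the last element of the array after the round trip.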
diff --git a/content/2.04_HeatEquation.rst b/content/2.04_HeatEquation.rst
index e9218b1..c54c819 100644
--- a/content/2.04_HeatEquation.rst
+++ b/content/2.04_HeatEquation.rst
@@ -98,7 +98,7 @@ In order not to overload the code with extra checks after every API call, we are going to
 
 .. code-block:: cuda
 
-    __host__​ __device__​ cudaError_t cudaGetLastError(void)
+    __host__ __device__ cudaError_t cudaGetLastError(void)
 
 This function will check if there were any CUDA API errors in the previous calls and should return |cudaSuccess| if there were none.
 We will check this and print an error message if this was not the case.
@@ -108,7 +108,7 @@ In order to render a human-friendly string that describes an error, the |cudaGet
 
 .. code-block:: cuda
 
-    __host__​ __device__ ​const char* cudaGetErrorString(cudaError_t error)
+    __host__ __device__ const char* cudaGetErrorString(cudaError_t error)
 
 This will return a string that we are going to print in case there were errors.
 
diff --git a/content/3.01_ParallelReduction.rst b/content/3.01_ParallelReduction.rst
index d33ab0f..3c7e844 100644
--- a/content/3.01_ParallelReduction.rst
+++ b/content/3.01_ParallelReduction.rst
@@ -87,7 +87,7 @@ So either an extra condition should be added or the array should be extended by zer
 
 .. code-block:: CUDA
 
-    __host__​ cudaError_t cudaMemset(void* devPtr, int value, size_t count)
+    __host__ cudaError_t cudaMemset(void* devPtr, int value, size_t count)
 
 4. Create the CUDA kernel that will use ``atomicAdd(..)`` to accumulate the data.
 
diff --git a/content/3.02_TaskParallelism.rst b/content/3.02_TaskParallelism.rst
index 1a85abb..2fd2613 100644
--- a/content/3.02_TaskParallelism.rst
+++ b/content/3.02_TaskParallelism.rst
@@ -133,7 +133,7 @@ Creating a stream is done by calling the following function:
 
 .. code-block:: CUDA
 
-    __host__​ cudaError_t cudaStreamCreate(cudaStream_t* stream)
+    __host__ cudaError_t cudaStreamCreate(cudaStream_t* stream)
 
 This function can only be called from the host code and will return a |cudaError_t| object if something went wrong.
 It takes a pointer to a |cudaStream_t| object, which will be initialized by the call.
@@ -161,7 +161,7 @@ This is also called pinning, and should be done by using the CUDA API while allocati
 
 .. code-block:: CUDA
 
-    __host__ ​cudaError_t cudaMallocHost(void** ptr, size_t size)
+    __host__ cudaError_t cudaMallocHost(void** ptr, size_t size)
 
 The function works the same way as |cudaMalloc|, which we are already familiar with.
 It takes a pointer to the address in memory where the allocation should happen and the size of the allocation in bytes.
@@ -172,7 +172,7 @@ To release the pinned memory, one should use the following CUDA API function:
 
 .. code-block:: CUDA
 
-    __host__ ​cudaError_t cudaFreeHost(void* ptr)
+    __host__ cudaError_t cudaFreeHost(void* ptr)
 
 Now that the host arrays are pinned, we can do the host-to-device and device-to-host copies asynchronously.
 
@@ -180,7 +180,7 @@
 
 .. code-block:: CUDA
 
-    __host__ ​__device__​ cudaError_t cudaMemcpyAsync(void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t stream = 0)
+    __host__ __device__ cudaError_t cudaMemcpyAsync(void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t stream = 0)
 
 The signature of this function is very similar to the synchronous variant we used before.
 The only difference is that it now takes one extra argument --- the stream in which the copy should be executed.
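
Since the hunks above touch error checking, ``cudaMemset``, stream creation, pinned host memory and asynchronous copies, a combined sketch may help to see how they fit together. Everything below is illustrative and not part of the patched files: ``checkCudaError`` is a hypothetical helper, and ``cudaStreamSynchronize`` and ``cudaStreamDestroy`` are standard runtime calls that the text has not introduced:

.. code-block:: CUDA

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Hypothetical helper: fetch the last error and print a human-friendly string.
    static void checkCudaError(const char* where)
    {
        cudaError_t error = cudaGetLastError();
        if (error != cudaSuccess)
        {
            printf("CUDA error in %s: %s\n", where, cudaGetErrorString(error));
            exit(1);
        }
    }

    int main()
    {
        const int numElements = 1024;  // illustrative size
        const size_t sizeInBytes = numElements * sizeof(float);

        // Allocate pinned (page-locked) host memory so the copy can be asynchronous.
        float* h_data = nullptr;
        cudaMallocHost((void**)&h_data, sizeInBytes);
        checkCudaError("cudaMallocHost");

        float* d_data = nullptr;
        cudaMalloc((void**)&d_data, sizeInBytes);
        cudaMemset(d_data, 0, sizeInBytes);  // zero the device buffer
        checkCudaError("cudaMalloc/cudaMemset");

        // Create a stream and issue the copy asynchronously into it.
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        cudaMemcpyAsync(h_data, d_data, sizeInBytes, cudaMemcpyDeviceToHost, stream);

        // The host is free to do other work here; wait before touching h_data.
        cudaStreamSynchronize(stream);
        checkCudaError("cudaMemcpyAsync");
        printf("h_data[0] = %.1f\n", h_data[0]);

        cudaStreamDestroy(stream);
        cudaFree(d_data);
        cudaFreeHost(h_data);  // release the pinned allocation
        return 0;
    }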
@@ -194,7 +194,7 @@ This can be done with the following function from the CUDA API:
 
 .. code-block:: CUDA
 
-    __host__ ​__device__​ cudaError_t cudaDeviceSynchronize()
+    __host__ __device__ cudaError_t cudaDeviceSynchronize()
 
 Now we have all the means to execute the data transfers and kernel calls asynchronously.
 
@@ -329,7 +329,7 @@ First, one needs to create a |cudaEvent_t| object, which is done by the |cudaEv
 
 .. code-block:: CUDA
 
-    __host__ ​cudaError_t cudaEventCreate(cudaEvent_t* event)
+    __host__ cudaError_t cudaEventCreate(cudaEvent_t* event)
 
 This function will initialize its only argument.
 Events can only be created on the host, since one does not need one for each GPU thread.
@@ -341,7 +341,7 @@ With the event created, we need to be able to record and wait for it, which is done
 
 .. code-block:: CUDA
 
-    __host__​ __device__​ cudaError_t cudaEventRecord(cudaEvent_t event, cudaStream_t stream = 0)
+    __host__ __device__ cudaError_t cudaEventRecord(cudaEvent_t event, cudaStream_t stream = 0)
 
 This will record an event in the provided stream.
 Recording the event allows the host to see how far the execution in the stream has progressed.
@@ -355,7 +355,7 @@ In order for another stream to wait until the event is recorded, one can use the
 
 .. code-block:: CUDA
 
-    __host__​ __device__ ​cudaError_t cudaStreamWaitEvent(cudaStream_t stream, cudaEvent_t event, unsigned int flags = 0)
+    __host__ __device__ cudaError_t cudaStreamWaitEvent(cudaStream_t stream, cudaEvent_t event, unsigned int flags = 0)
 
 This takes a stream and an event as arguments.
 Calling this function will hold further execution in the provided stream until the event is recorded.
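
As a closing illustration of the event functions above, here is a minimal sketch, not from the patched files (``dummyKernel`` and the stream names are made up, and ``cudaEventDestroy`` and ``cudaStreamDestroy`` are standard runtime cleanup calls not discussed in the text), in which work in one stream waits for an event recorded in another:

.. code-block:: CUDA

    #include <cstdio>
    #include <cuda_runtime.h>

    // Illustrative empty kernel standing in for real work.
    __global__ void dummyKernel() { }

    int main()
    {
        cudaStream_t producer, consumer;
        cudaStreamCreate(&producer);
        cudaStreamCreate(&consumer);

        cudaEvent_t event;
        cudaEventCreate(&event);  // initialize the event object

        // Submit work to the producer stream, then record the event after it.
        dummyKernel<<<1, 1, 0, producer>>>();
        cudaEventRecord(event, producer);

        // Work submitted to the consumer stream after this call will not start
        // until the event recorded in the producer stream is reached.
        cudaStreamWaitEvent(consumer, event, 0);
        dummyKernel<<<1, 1, 0, consumer>>>();

        // Block the host until everything submitted to the device has finished.
        cudaDeviceSynchronize();
        printf("Both streams completed: %s\n", cudaGetErrorString(cudaGetLastError()));

        cudaEventDestroy(event);
        cudaStreamDestroy(producer);
        cudaStreamDestroy(consumer);
        return 0;
    }

Note that ``cudaStreamWaitEvent`` does not block the host: only the work submitted to ``consumer`` after the call waits for the event, which is what makes it suitable for ordering streams without a full device synchronization.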