4 changes: 2 additions & 2 deletions content/2.01_DeviceQuery.rst
@@ -16,7 +16,7 @@ First, we want to ask the API how many CUDA-capable devices are available, which is

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaGetDeviceCount(int* numDevices)
+  __host__ __device__ cudaError_t cudaGetDeviceCount(int* numDevices)

The function calls the API and returns the number of available devices in the address provided as its argument.
There are a couple of things to notice here.
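
For illustration, a minimal host-side sketch of this call (our own, with error handling kept short) could look like this:

.. code-block:: CUDA

   #include <cstdio>

   int main()
   {
       int numDevices = 0;
       // Ask the runtime how many CUDA-capable devices are visible.
       if (cudaGetDeviceCount(&numDevices) == cudaSuccess)
       {
           printf("Found %d CUDA device(s)\n", numDevices);
       }
       return 0;
   }
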
@@ -38,7 +38,7 @@ To populate the |cudaDeviceProp| structure, CUDA has |cudaGetDeviceProperties| f

.. code-block:: c++

-  __host__ cudaError_t cudaGetDeviceProperties(cudaDeviceProp* prop, int deviceId)
+  __host__ cudaError_t cudaGetDeviceProperties(cudaDeviceProp* prop, int deviceId)

The function has a |__host__| specifier, which means that one cannot call it from the device code.
It also returns a |cudaError_t| structure, which can be |cudaErrorInvalidDevice| in case we are trying to get the properties of a non-existing device (e.g. when ``deviceId`` is equal to or larger than ``numDevices`` above).
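
Combining the two calls, one can loop over all valid device IDs; a short sketch of our own (printing just a few of the many |cudaDeviceProp| fields) might look like this:

.. code-block:: CUDA

   #include <cstdio>

   int main()
   {
       int numDevices = 0;
       cudaGetDeviceCount(&numDevices);
       // Valid device IDs run from 0 to numDevices - 1.
       for (int deviceId = 0; deviceId < numDevices; deviceId++)
       {
           cudaDeviceProp prop;
           if (cudaGetDeviceProperties(&prop, deviceId) == cudaSuccess)
           {
               printf("Device %d: %s (compute capability %d.%d)\n",
                      deviceId, prop.name, prop.major, prop.minor);
           }
       }
       return 0;
   }
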
2 changes: 1 addition & 1 deletion content/2.02_HelloGPU.rst
@@ -103,7 +103,7 @@ This can be done with the following function from the CUDA API:

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaDeviceSynchronize()
+  __host__ __device__ cudaError_t cudaDeviceSynchronize()

We are already familiar with |__host__| and |__device__| specifiers: this function can be used in both host and device code.
As usual, the return type is |cudaError_t|, which may indicate that there was an error in execution; the function does not take any arguments.
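
As a sketch, assuming a trivial kernel of our own (called ``hello`` here), the synchronization point would sit right after the launch:

.. code-block:: CUDA

   #include <cstdio>

   __global__ void hello()
   {
       printf("Hello from thread %d\n", threadIdx.x);
   }

   int main()
   {
       hello<<<1, 4>>>();  // asynchronous: the host continues immediately
       // Block the host until all preceding GPU work has finished, so the
       // device-side printf output is flushed before the program exits.
       cudaDeviceSynchronize();
       return 0;
   }
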
6 changes: 3 additions & 3 deletions content/2.03_VectorAdd.rst
@@ -48,7 +48,7 @@ To allocate a buffer in GPU memory, one has to call the CUDA API function |cudaMal

.. code-block:: cuda

-  __host__ __device__ cudaError_t cudaMalloc(void** devPtr, size_t size)
+  __host__ __device__ cudaError_t cudaMalloc(void** devPtr, size_t size)

We are now getting used to these functions having access specifiers and returning |cudaError_t|.
As the first argument, the function takes a pointer to the buffer pointer: the call writes the address of the allocated device memory into it.
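
A minimal allocation sketch (the name ``d_x`` and the size are our own) could look like this:

.. code-block:: cuda

   int main()
   {
       float* d_x = nullptr;
       const int numElements = 1024;
       // Note the &d_x: the call writes the device address into our pointer.
       cudaError_t err = cudaMalloc((void**)&d_x, numElements * sizeof(float));
       if (err != cudaSuccess)
       {
           return 1;  // the allocation failed, e.g. not enough device memory
       }
       // ... use the buffer, then release it with cudaFree (introduced next) ...
       return 0;
   }
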
@@ -62,7 +62,7 @@ To release the memory, the |cudaFree| function should be used:

.. code-block:: cuda

-  __host__ __device__ cudaError_t cudaFree(void* devPtr)
+  __host__ __device__ cudaError_t cudaFree(void* devPtr)

Here, the pointer itself is not updated,

@@ -76,7 +76,7 @@ This is done using the |cudaMemcpy| function, which has the following signature:

.. code-block:: cuda

-  __host__ cudaError_t cudaMemcpy(void* dst, const void* src, size_t count, cudaMemcpyKind kind)
+  __host__ cudaError_t cudaMemcpy(void* dst, const void* src, size_t count, cudaMemcpyKind kind)

Both copies to and from the device buffer are done using the same function, and the direction of the copy is specified by the last argument, which is a |cudaMemcpyKind| enumeration.
The enumeration can take values |cudaMemcpyHostToHost|, |cudaMemcpyHostToDevice|, |cudaMemcpyDeviceToHost|, |cudaMemcpyDeviceToDevice| or |cudaMemcpyDefault|.
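
A sketch of a typical round trip (the names ``h_x`` and ``d_x`` are our own) could look like this:

.. code-block:: cuda

   #include <vector>

   int main()
   {
       const int numElements = 1024;
       const size_t bufferSize = numElements * sizeof(float);
       std::vector<float> h_x(numElements, 1.0f);
       float* d_x = nullptr;
       cudaMalloc((void**)&d_x, bufferSize);
       // Host to device: the destination comes first, the source second.
       cudaMemcpy(d_x, h_x.data(), bufferSize, cudaMemcpyHostToDevice);
       // ... launch kernels that work on d_x ...
       // Device to host: the same function, opposite direction flag.
       cudaMemcpy(h_x.data(), d_x, bufferSize, cudaMemcpyDeviceToHost);
       cudaFree(d_x);
       return 0;
   }
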
4 changes: 2 additions & 2 deletions content/2.04_HeatEquation.rst
@@ -98,7 +98,7 @@ In order not to overload the code with extra checks after every API call, we are going to

.. code-block:: cuda

-  __host__ __device__ cudaError_t cudaGetLastError(void)
+  __host__ __device__ cudaError_t cudaGetLastError(void)

This function checks whether any of the previous CUDA API calls resulted in an error and returns |cudaSuccess| if there were none.
We will check this and print an error message if it was not the case.
@@ -108,7 +108,7 @@ In order to render a human-friendly string that describes an error, the |cudaGet

.. code-block:: cuda

-  __host__ __device__ const char* cudaGetErrorString(cudaError_t error)
+  __host__ __device__ const char* cudaGetErrorString(cudaError_t error)

This will return a string that we are going to print in case there were errors.
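
For illustration, a small helper of our own (here called ``checkLastCudaError``) can combine the two calls:

.. code-block:: cuda

   #include <cstdio>
   #include <cstdlib>

   // Report and abort if any of the preceding CUDA calls failed.
   static void checkLastCudaError(const char* where)
   {
       cudaError_t err = cudaGetLastError();
       if (err != cudaSuccess)
       {
           fprintf(stderr, "CUDA error at %s: %s\n", where, cudaGetErrorString(err));
           exit(EXIT_FAILURE);
       }
   }

It can then be called after each kernel launch or group of API calls, e.g. ``checkLastCudaError("after kernel launch")``.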

2 changes: 1 addition & 1 deletion content/3.01_ParallelReduction.rst
@@ -87,7 +87,7 @@ So either an extra condition should be added or the array should be extended by zer

.. code-block:: CUDA

-  __host__ cudaError_t cudaMemset(void* devPtr, int value, size_t count)
+  __host__ cudaError_t cudaMemset(void* devPtr, int value, size_t count)


4. Create the CUDA kernel that will use ``atomicAdd(..)`` to accumulate the data.
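
A possible sketch of this step (the kernel and buffer names are our own) is shown below; note that ``cudaMemset`` sets bytes, which works here because a floating-point zero is all-zero bytes:

.. code-block:: CUDA

   __global__ void reduceAtomic(const float* data, float* sum, int numElements)
   {
       int i = blockIdx.x * blockDim.x + threadIdx.x;
       if (i < numElements)  // guard threads that fall outside the array
       {
           atomicAdd(sum, data[i]);
       }
   }

   // Host side, assuming d_data and d_sum were allocated with cudaMalloc.
   void runReduction(const float* d_data, float* d_sum, int numElements)
   {
       const int blockSize = 256;
       const int numBlocks = (numElements + blockSize - 1) / blockSize;
       cudaMemset(d_sum, 0, sizeof(float));  // zero the accumulator first
       reduceAtomic<<<numBlocks, blockSize>>>(d_data, d_sum, numElements);
   }
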
16 changes: 8 additions & 8 deletions content/3.02_TaskParallelism.rst
@@ -133,7 +133,7 @@ Creating a stream is done by calling the following function:

.. code-block:: CUDA

-  __host__ cudaError_t cudaStreamCreate(cudaStream_t* stream)
+  __host__ cudaError_t cudaStreamCreate(cudaStream_t* stream)

This function can only be called from the host code and returns a |cudaError_t| object that indicates if something went wrong.
It takes a pointer to a |cudaStream_t| object, which the call initializes.
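
A minimal sketch of creating and eventually destroying a stream:

.. code-block:: CUDA

   int main()
   {
       cudaStream_t stream;
       // The call initializes the handle we pass in by address.
       cudaError_t err = cudaStreamCreate(&stream);
       if (err == cudaSuccess)
       {
           // ... submit copies and kernel launches into the stream ...
           cudaStreamDestroy(stream);  // release the stream when done with it
       }
       return 0;
   }
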
@@ -161,7 +161,7 @@ This is also called pinning, and should be done by using the CUDA API while allocati

.. code-block:: CUDA

-  __host__ cudaError_t cudaMallocHost(void** ptr, size_t size)
+  __host__ cudaError_t cudaMallocHost(void** ptr, size_t size)

The function works the same way as |cudaMalloc|, which we are already familiar with.
It takes a pointer to the address in memory where the allocation should happen and the size of the allocation in bytes.
@@ -172,15 +172,15 @@ To release the pinned memory, one should use the |cudaFreeHost| function.

.. code-block:: CUDA

-  __host__ cudaError_t cudaFreeHost(void* ptr)
+  __host__ cudaError_t cudaFreeHost(void* ptr)
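
A short sketch of allocating and releasing a pinned host buffer (the name ``h_x`` is our own):

.. code-block:: CUDA

   int main()
   {
       float* h_x = nullptr;
       const size_t bufferSize = 1024 * sizeof(float);
       // Allocate page-locked (pinned) host memory instead of using malloc/new.
       cudaMallocHost((void**)&h_x, bufferSize);
       // ... fill h_x and use it in asynchronous copies ...
       cudaFreeHost(h_x);  // pinned memory must be released with cudaFreeHost
       return 0;
   }
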

Now that the host arrays are pinned, we can do the host-to-device and device-to-host copies asynchronously.

.. signature:: |cudaMemcpyAsync|

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaMemcpyAsync(void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t stream = 0)
+  __host__ __device__ cudaError_t cudaMemcpyAsync(void* dst, const void* src, size_t count, cudaMemcpyKind kind, cudaStream_t stream = 0)

The signature of this function is very similar to the synchronous variant we used before.
The only difference is that it now takes one extra argument --- the stream in which the copy should be executed.
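
For illustration, a sketch of our own that issues both copies and a kernel launch into the same stream (``scaleKernel`` is a stand-in; ``h_x`` is assumed to be pinned):

.. code-block:: CUDA

   __global__ void scaleKernel(float* data)  // a stand-in kernel for the sketch
   {
       data[threadIdx.x] *= 2.0f;
   }

   void processAsync(float* h_x, float* d_x, size_t bufferSize, cudaStream_t stream)
   {
       const int numElements = (int)(bufferSize / sizeof(float));
       cudaMemcpyAsync(d_x, h_x, bufferSize, cudaMemcpyHostToDevice, stream);
       scaleKernel<<<1, numElements, 0, stream>>>(d_x);  // runs after the copy
       cudaMemcpyAsync(h_x, d_x, bufferSize, cudaMemcpyDeviceToHost, stream);
       // The three calls are ordered within the stream, but are asynchronous
       // with respect to the host, which must synchronize before reading h_x.
   }
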
@@ -194,7 +194,7 @@ This can be done with the following function from the CUDA API:

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaDeviceSynchronize()
+  __host__ __device__ cudaError_t cudaDeviceSynchronize()

Now we have all the means to execute the data transfers and kernel calls asynchronously.

@@ -329,7 +329,7 @@ First, one needs to create a |cudaEvent_t| object, which is done by the |cudaEv

.. code-block:: CUDA

-  __host__ cudaError_t cudaEventCreate(cudaEvent_t* event)
+  __host__ cudaError_t cudaEventCreate(cudaEvent_t* event)

This function will initialize its only argument.
The events can only be created on the host, since one does not need a separate event for each GPU thread.
@@ -341,7 +341,7 @@ With the event created, we need to be able to record and wait for it, which is done

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaEventRecord(cudaEvent_t event, cudaStream_t stream = 0)
+  __host__ __device__ cudaError_t cudaEventRecord(cudaEvent_t event, cudaStream_t stream = 0)

This will record an event in the provided stream.
Having the event recorded allows the host to see how far the execution in the stream has progressed.
@@ -355,7 +355,7 @@ In order for another stream to wait until the event is recorded, one can use the

.. code-block:: CUDA

-  __host__ __device__ cudaError_t cudaStreamWaitEvent(cudaStream_t stream, cudaEvent_t event, unsigned int flags = 0)
+  __host__ __device__ cudaError_t cudaStreamWaitEvent(cudaStream_t stream, cudaEvent_t event, unsigned int flags = 0)

This takes a stream and an event as arguments.
Calling this function will stop the execution in the provided stream until the event is recorded.
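
Putting the three calls together, a sketch of our own with two hypothetical streams (buffer and kernel names are placeholders) could look like this:

.. code-block:: CUDA

   __global__ void consumeKernel(float* data)  // stand-in consumer kernel
   {
       data[threadIdx.x] += 1.0f;
   }

   void producerConsumer(float* h_x, float* d_x, size_t bufferSize)
   {
       cudaStream_t stream1, stream2;
       cudaStreamCreate(&stream1);
       cudaStreamCreate(&stream2);
       cudaEvent_t event;
       cudaEventCreate(&event);

       // Stream 1 copies the input in and records the event after the copy.
       cudaMemcpyAsync(d_x, h_x, bufferSize, cudaMemcpyHostToDevice, stream1);
       cudaEventRecord(event, stream1);

       // Stream 2 waits for the event, so the kernel only starts
       // once the copy in stream 1 has completed.
       cudaStreamWaitEvent(stream2, event, 0);
       consumeKernel<<<1, (int)(bufferSize / sizeof(float)), 0, stream2>>>(d_x);
   }
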