Classes

class   cv::cuda::BufferPool
        BufferPool for use with CUDA streams.
class   cv::cuda::Event
struct  cv::cuda::EventAccessor
        Class that enables getting cudaEvent_t from cuda::Event.
struct  cv::cuda::GpuData
class   cv::cuda::GpuMat
        Base storage class for GPU memory with reference counting.
class   cv::cuda::GpuMatND
class   cv::cuda::HostMem
        Class with reference counting wrapping special memory type allocation functions from CUDA.
class   cv::cuda::Stream
        This class encapsulates a queue of asynchronous calls.
struct  cv::cuda::StreamAccessor
        Class that enables getting cudaStream_t from cuda::Stream.
Functions

void    cv::cuda::createContinuous (int rows, int cols, int type, OutputArray arr)
        Creates a continuous matrix.
GpuMat  cv::cuda::createGpuMatFromCudaMemory (int rows, int cols, int type, size_t cudaMemoryAddress, size_t step=Mat::AUTO_STEP)
        Bindings overload to create a GpuMat from existing GPU memory.
GpuMat  cv::cuda::createGpuMatFromCudaMemory (Size size, int type, size_t cudaMemoryAddress, size_t step=Mat::AUTO_STEP)
void    cv::cuda::ensureSizeIsEnough (int rows, int cols, int type, OutputArray arr)
        Ensures that the size of a matrix is big enough and the matrix has a proper type.
void    cv::cuda::registerPageLocked (Mat &m)
        Page-locks the memory of a matrix and maps it for the device(s).
void    cv::cuda::setBufferPoolConfig (int deviceId, size_t stackSize, int stackCount)
void    cv::cuda::setBufferPoolUsage (bool on)
        BufferPool management (must be called before Stream creation).
void    cv::cuda::unregisterPageLocked (Mat &m)
        Unmaps the memory of a matrix and makes it pageable again.
Stream  cv::cuda::wrapStream (size_t cudaStreamMemoryAddress)
        Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).
Detailed Description
Function Documentation
◆ createContinuous()
void cv::cuda::createContinuous (int rows, int cols, int type, OutputArray arr)
#include <opencv2/core/cuda.hpp>
Creates a continuous matrix.
Parameters
    rows  Row count.
    cols  Column count.
    type  Type of the matrix.
    arr   Destination matrix. This parameter changes only if it has a proper type and area ( \(\texttt{rows} \times \texttt{cols}\) ).
A matrix is called continuous if its elements are stored continuously, that is, without gaps at the end of each row.
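For illustration, the following minimal sketch requests a gap-free buffer and checks the result with GpuMat::isContinuous(); the size and type are arbitrary placeholders:

    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>

    int main()
    {
        // Allocate a 480x640 single-channel float matrix whose rows are
        // stored back to back (no padding at the end of each row).
        cv::cuda::GpuMat buf;
        cv::cuda::createContinuous(480, 640, CV_32FC1, buf);

        // isContinuous() reports whether the allocation is really gap-free;
        // for matrices produced by createContinuous it returns true.
        CV_Assert(buf.isContinuous());
        return 0;
    }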
◆ createGpuMatFromCudaMemory() [1/2]
GpuMat cv::cuda::createGpuMatFromCudaMemory (int rows, int cols, int type, size_t cudaMemoryAddress, size_t step = Mat::AUTO_STEP)    [inline]
#include <opencv2/core/cuda.hpp>
Bindings overload to create a GpuMat from existing GPU memory.
Parameters
    rows               Row count.
    cols               Column count.
    type               Type of the matrix.
    cudaMemoryAddress  Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.
    step               Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.
Note
    Overload for generation of bindings only, not exported or intended for use internally from C++.
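Because this overload exists only for the generated bindings, C++ code would typically wrap existing device memory through the GpuMat constructor that takes a raw data pointer instead. A minimal sketch under that assumption, with the memory allocated here via cudaMalloc and the size and type chosen as placeholders:

    #include <opencv2/core/cuda.hpp>
    #include <cuda_runtime.h>

    int main()
    {
        const int rows = 480, cols = 640;

        // Externally owned device allocation (e.g. produced by another library).
        void* devPtr = nullptr;
        cudaMalloc(&devPtr, rows * cols * sizeof(unsigned char));

        {
            // Wrap the raw pointer in a GpuMat header; no data is copied and
            // the GpuMat does not take ownership of the allocation.
            cv::cuda::GpuMat wrapped(rows, cols, CV_8UC1, devPtr);
            wrapped.setTo(cv::Scalar(0)); // ordinary CUDA-aware OpenCV calls now work on it
        }

        // The caller remains responsible for freeing the memory.
        cudaFree(devPtr);
        return 0;
    }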
◆ createGpuMatFromCudaMemory() [2/2]
GpuMat cv::cuda::createGpuMatFromCudaMemory (Size size, int type, size_t cudaMemoryAddress, size_t step = Mat::AUTO_STEP)    [inline]
#include <opencv2/core/cuda.hpp>
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
Parameters
    size               2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order.
    type               Type of the matrix.
    cudaMemoryAddress  Address of the allocated GPU memory on the device. This does not allocate matrix data. Instead, it just initializes the matrix header that points to the specified cudaMemoryAddress, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.
    step               Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.
Note
    Overload for generation of bindings only, not exported or intended for use internally from C++.
◆ ensureSizeIsEnough()
void cv::cuda::ensureSizeIsEnough (int rows, int cols, int type, OutputArray arr)
#include <opencv2/core/cuda.hpp>
Ensures that the size of a matrix is big enough and the matrix has a proper type.
Parameters
    rows  Minimum desired number of rows.
    cols  Minimum desired number of columns.
    type  Desired matrix type.
    arr   Destination matrix.
The function does not reallocate memory if the matrix has proper attributes already.
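A common use is reusing one destination buffer across iterations so the allocation happens only once. A minimal sketch; the helper name, frame size, and type are hypothetical:

    #include <opencv2/core/cuda.hpp>

    // Hypothetical helper: processes frameCount frames while reusing a single
    // GPU allocation whenever the requested size already fits.
    void processFrames(int frameCount)
    {
        cv::cuda::GpuMat dst;
        for (int i = 0; i < frameCount; ++i)
        {
            // Reallocates only if dst is too small or has a different type.
            cv::cuda::ensureSizeIsEnough(720, 1280, CV_8UC3, dst);
            // ... launch per-frame CUDA work that writes into dst ...
        }
    }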
◆ registerPageLocked()
void cv::cuda::registerPageLocked (Mat &m)
#include <opencv2/core/cuda.hpp>
Page-locks the memory of a matrix and maps it for the device(s).
Parameters
    m  Input matrix.
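A typical pattern is to page-lock a host matrix before asynchronous uploads and to unregister it once the transfers are finished; a minimal sketch with placeholder sizes:

    #include <opencv2/core.hpp>
    #include <opencv2/core/cuda.hpp>

    int main()
    {
        // Host matrix holding data that will be uploaded to the device.
        cv::Mat host(480, 640, CV_8UC1, cv::Scalar(0));

        // Page-lock the existing allocation so the transfer below can run
        // asynchronously with respect to the host.
        cv::cuda::registerPageLocked(host);

        cv::cuda::Stream stream;
        cv::cuda::GpuMat device;
        device.upload(host, stream);   // asynchronous copy enqueued on the stream
        stream.waitForCompletion();

        // Return the memory to its normal pageable state before it is freed.
        cv::cuda::unregisterPageLocked(host);
        return 0;
    }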
◆ setBufferPoolConfig()
void cv::cuda::setBufferPoolConfig (int deviceId, size_t stackSize, int stackCount)
#include <opencv2/core/cuda.hpp>
◆ setBufferPoolUsage()
void cv::cuda::setBufferPoolUsage (bool on)
#include <opencv2/core/cuda.hpp>
BufferPool management (must be called before Stream creation)
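A sketch of the call order implied above: enable the pool and (optionally) configure the per-device stacks before the first Stream is created, then draw buffers from a BufferPool bound to that stream. The stack size and count below are illustrative values:

    #include <opencv2/core/cuda.hpp>

    int main()
    {
        // Must happen before any cv::cuda::Stream object is created.
        cv::cuda::setBufferPoolUsage(true);
        cv::cuda::setBufferPoolConfig(cv::cuda::getDevice(), 64 * 1024 * 1024, 2);

        cv::cuda::Stream stream;              // streams created from now on use the pool
        cv::cuda::BufferPool pool(stream);

        // Allocation comes from the stream's preallocated stack rather than cudaMalloc.
        cv::cuda::GpuMat buf = pool.getBuffer(1024, 1024, CV_8UC1);
        return 0;
    }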
◆ unregisterPageLocked()
void cv::cuda::unregisterPageLocked (Mat &m)
#include <opencv2/core/cuda.hpp>
Unmaps the memory of a matrix and makes it pageable again.
Parameters
    m  Input matrix.
◆ wrapStream()
Stream cv::cuda::wrapStream (size_t cudaStreamMemoryAddress)
#include <opencv2/core/cuda.hpp>
Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).
Parameters
    cudaStreamMemoryAddress  Memory address stored in a CUDA Runtime API stream pointer (cudaStream_t). The created Stream object does not perform any allocation or deallocation and simply wraps an existing raw CUDA Runtime API stream pointer.
Note
    Overload for generation of bindings only, not exported or intended for use internally from C++.
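Since this overload is meant for the bindings, C++ code is assumed here to go through cv::cuda::StreamAccessor (listed among the classes above) to wrap a raw cudaStream_t; a sketch under that assumption:

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/core/cuda_stream_accessor.hpp>
    #include <cuda_runtime.h>

    int main()
    {
        // An existing raw CUDA stream, e.g. created by another library.
        cudaStream_t raw = nullptr;
        cudaStreamCreate(&raw);

        {
            // Wrap it without taking ownership; OpenCV functions accepting a
            // cv::cuda::Stream can now be enqueued on the external stream.
            cv::cuda::Stream stream = cv::cuda::StreamAccessor::wrapStream(raw);
            stream.waitForCompletion();
        }

        // The wrapper does not destroy the stream; the caller still owns it.
        cudaStreamDestroy(raw);
        return 0;
    }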