Class CvInvoke

Namespace
Emgu.CV
Assembly
Emgu.CV.dll

Class that provides access to native OpenCV functions

public static class CvInvoke
Inheritance
CvInvoke
Inherited Members

Fields

BoolMarshalType

Represents a bool value in C++

public const UnmanagedType BoolMarshalType = U1

Field Value

UnmanagedType

BoolToIntMarshalType

Represents an int value in C++

public const UnmanagedType BoolToIntMarshalType = Bool

Field Value

UnmanagedType

CvCallingConvention

OpenCV's calling convention

public const CallingConvention CvCallingConvention = Cdecl

Field Value

CallingConvention

CvErrorHandlerIgnoreError

An error handler that ignores any error and continues

public static readonly CvInvoke.CvErrorCallback CvErrorHandlerIgnoreError

Field Value

CvInvoke.CvErrorCallback

CvErrorHandlerThrowException

The default exception callback for handling errors thrown by OpenCV

public static readonly CvInvoke.CvErrorCallback CvErrorHandlerThrowException

Field Value

CvInvoke.CvErrorCallback

ExternCudaLibrary

The file name of the cvextern library

public const string ExternCudaLibrary = "cvextern"

Field Value

string

ExternLibrary

The file name of the cvextern library

public const string ExternLibrary = "cvextern"

Field Value

string

MorphologyDefaultBorderValue

The default morphology border value.

public static MCvScalar MorphologyDefaultBorderValue

Field Value

MCvScalar

OpenCVModuleList

The list of the OpenCV modules

public static List<string> OpenCVModuleList

Field Value

List<string>

OpencvFFMpegLibrary

The file name of the opencv_ffmpeg library

public const string OpencvFFMpegLibrary = ""

Field Value

string

StringMarshalType

string marshaling type

public const UnmanagedType StringMarshalType = LPStr

Field Value

UnmanagedType

Properties

AvailableParallelBackends

Get a list of the available parallel backends.

public static string[] AvailableParallelBackends { get; }

Property Value

string[]

Backends

Returns a list of all built-in backends

public static Backend[] Backends { get; }

Property Value

Backend[]

BuildInformation

Returns the full configuration-time CMake output. The returned value is raw CMake output including version control system revision, compiler version, compiler flags, enabled modules, third party libraries, etc. The output format depends on the target architecture.

public static string BuildInformation { get; }

Property Value

string

CameraBackends

Returns a list of available backends that work via cv::VideoCapture(int index)

public static Backend[] CameraBackends { get; }

Property Value

Backend[]

ConfigDict

Get the dictionary that holds the OpenCV build flags. The key is a string and the value is of type double. If it is a flag, 0 means false and 1 means true.

public static Dictionary<string, double> ConfigDict { get; }

Property Value

Dictionary<string, double>

HaveOpenCL

Check if we have OpenCL

public static bool HaveOpenCL { get; }

Property Value

bool

HaveOpenCLCompatibleGpuDevice

Gets a value indicating whether this machine has an OpenCL-compatible GPU device.

public static bool HaveOpenCLCompatibleGpuDevice { get; }

Property Value

bool

true if an OpenCL-compatible GPU device is available; otherwise, false.

HaveOpenVX

Check if use of OpenVX is possible.

public static bool HaveOpenVX { get; }

Property Value

bool

True if use of OpenVX is possible.

LogLevel

Get or Set the log level.

public static LogLevel LogLevel { get; set; }

Property Value

LogLevel

NumThreads

Get or set the number of threads that are used by parallelized OpenCV functions

public static int NumThreads { get; set; }

Property Value

int

Remarks

When the argument is zero or negative, and at the beginning of the program, the number of threads is set to the number of processors in the system, as returned by the function omp_get_num_procs() from OpenMP runtime.
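
As a minimal sketch, the thread pool can be sized relative to the reported CPU count (halving it here is an arbitrary choice for illustration):

```csharp
using Emgu.CV;

// Cap OpenCV's parallel regions at half the logical CPUs (arbitrary choice).
CvInvoke.NumThreads = System.Math.Max(1, CvInvoke.NumberOfCPUs / 2);
```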

NumberOfCPUs

Returns the number of logical CPUs available for the process.

public static int NumberOfCPUs { get; }

Property Value

int

StreamBackends

Returns a list of available backends that work via cv::VideoCapture(filename)

public static Backend[] StreamBackends { get; }

Property Value

Backend[]

ThreadNum

Returns the index, from 0 to cvGetNumThreads()-1, of the thread that called the function. It is a wrapper for the function omp_get_thread_num() from OpenMP runtime. The retrieved index may be used to access local-thread data inside the parallelized code fragments.

public static int ThreadNum { get; }

Property Value

int

UseOpenCL

Get or set if OpenCL should be used

public static bool UseOpenCL { get; set; }

Property Value

bool

UseOpenVX

Enable/disable use of OpenVX

public static bool UseOpenVX { get; set; }

Property Value

bool

UseOptimized

Enables or disables the optimized code.

public static bool UseOptimized { get; set; }

Property Value

bool

true if optimized code is enabled; otherwise, false.

Remarks

The function can be used to dynamically turn on and off optimized code (code that uses SSE2, AVX, and other instructions on the platforms that support it). It sets a global flag that is further checked by OpenCV functions. Since the flag is not checked in the inner OpenCV loops, it is only safe to call the function on the very top level in your application where you can be sure that no other OpenCV function is currently executed.

WriterBackends

Returns a list of available backends that work via cv::VideoWriter()

public static Backend[] WriterBackends { get; }

Property Value

Backend[]

Methods

AbsDiff(IInputArray, IInputArray, IOutputArray)

Calculates absolute difference between two arrays. dst(I)c = abs(src1(I)c - src2(I)c). All the arrays must have the same data type and the same size (or ROI size)

public static void AbsDiff(IInputArray src1, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array
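
A common use of AbsDiff is frame differencing for simple motion detection. A sketch, assuming two same-sized 8-bit grayscale frames ("frame1.png" and "frame2.png" are placeholder file names):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat prev = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale))
using (Mat curr = CvInvoke.Imread("frame2.png", ImreadModes.Grayscale))
using (Mat diff = new Mat())
{
    // diff(I) = |curr(I) - prev(I)|
    CvInvoke.AbsDiff(curr, prev, diff);
    // Keep only significant per-pixel changes.
    CvInvoke.Threshold(diff, diff, 25, 255, ThresholdType.Binary);
}
```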

Accumulate(IInputArray, IInputOutputArray, IInputArray)

Adds the whole image or its selected region to accumulator sum

public static void Accumulate(IInputArray src, IInputOutputArray dst, IInputArray mask = null)

Parameters

src IInputArray

Input image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently).

dst IInputOutputArray

Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point.

mask IInputArray

Optional operation mask

AccumulateProduct(IInputArray, IInputArray, IInputOutputArray, IInputArray)

Adds the product of two images or their selected regions to the accumulator acc

public static void AccumulateProduct(IInputArray src1, IInputArray src2, IInputOutputArray dst, IInputArray mask = null)

Parameters

src1 IInputArray

First input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)

src2 IInputArray

Second input image, the same format as the first one

dst IInputOutputArray

Accumulator of the same number of channels as input images, 32-bit or 64-bit floating-point

mask IInputArray

Optional operation mask

AccumulateSquare(IInputArray, IInputOutputArray, IInputArray)

Adds the input src or its selected region, raised to power 2, to the accumulator sqsum

public static void AccumulateSquare(IInputArray src, IInputOutputArray dst, IInputArray mask = null)

Parameters

src IInputArray

Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)

dst IInputOutputArray

Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point

mask IInputArray

Optional operation mask

AccumulateWeighted(IInputArray, IInputOutputArray, double, IInputArray)

Calculates weighted sum of input src and the accumulator acc so that acc becomes a running average of frame sequence: acc(x,y)=(1-alpha) * acc(x,y) + alpha * image(x,y) if mask(x,y)!=0 where alpha regulates update speed (how fast accumulator forgets about previous frames).

public static void AccumulateWeighted(IInputArray src, IInputOutputArray dst, double alpha, IInputArray mask = null)

Parameters

src IInputArray

Input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently).

dst IInputOutputArray

Accumulator of the same number of channels as input image, 32-bit or 64-bit floating-point.

alpha double

Weight of input image

mask IInputArray

Optional operation mask
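
AccumulateWeighted is commonly used to maintain a running background model. A sketch, assuming 8-bit single-channel frames of a fixed 640x480 size:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// The accumulator must be 32-bit (or 64-bit) floating point.
Mat acc = new Mat(480, 640, DepthType.Cv32F, 1);
acc.SetTo(new MCvScalar(0));

void UpdateBackground(Mat frame)  // frame: 8-bit, single channel
{
    // acc = 0.95 * acc + 0.05 * frame -- slowly forgets old frames.
    CvInvoke.AccumulateWeighted(frame, acc, 0.05);
}
```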

AdaptiveThreshold(IInputArray, IOutputArray, double, AdaptiveThresholdType, ThresholdType, int, double)

Transforms a grayscale image to a binary image. The threshold is calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is a mean of the blockSize x blockSize pixel neighborhood, minus param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is a weighted (Gaussian) sum of the blockSize x blockSize pixel neighborhood, minus param1.

public static void AdaptiveThreshold(IInputArray src, IOutputArray dst, double maxValue, AdaptiveThresholdType adaptiveType, ThresholdType thresholdType, int blockSize, double param1)

Parameters

src IInputArray

Source array (single-channel, 8-bit or 32-bit floating point).

dst IOutputArray

Destination array; must be either the same type as src or 8-bit.

maxValue double

Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types

adaptiveType AdaptiveThresholdType

The adaptive thresholding algorithm to use

thresholdType ThresholdType

Thresholding type; must be either CV_THRESH_BINARY or CV_THRESH_BINARY_INV

blockSize int

The size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ...

param1 double

Constant subtracted from mean or weighted mean. It may be negative.
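
A typical application is binarizing a document scan under uneven lighting. A sketch ("document.png" is a placeholder file name; the block size and constant are common starting values, not prescribed ones):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat gray = CvInvoke.Imread("document.png", ImreadModes.Grayscale))
using (Mat binary = new Mat())
{
    // Each pixel is compared against the Gaussian-weighted mean of its
    // 11x11 neighborhood, minus the constant 5.
    CvInvoke.AdaptiveThreshold(gray, binary, 255,
        AdaptiveThresholdType.GaussianC, ThresholdType.Binary, 11, 5);
}
```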

Add(IInputArray, IInputArray, IOutputArray, IInputArray, DepthType)

Adds one array to another one: dst(I)=src1(I)+src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size)

public static void Add(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null, DepthType dtype = DepthType.Default)

Parameters

src1 IInputArray

The first source array.

src2 IInputArray

The second source array.

dst IOutputArray

The destination array.

mask IInputArray

Operation mask, 8-bit single channel array; specifies elements of destination array to be changed.

dtype DepthType

Optional depth type of the output array

AddWeighted(IInputArray, double, IInputArray, double, double, IOutputArray, DepthType)

Calculates the weighted sum of two arrays as follows: dst(I)=src1(I)*alpha+src2(I)*beta+gamma. All the arrays must have the same type and the same size (or ROI size)

public static void AddWeighted(IInputArray src1, double alpha, IInputArray src2, double beta, double gamma, IOutputArray dst, DepthType dtype = DepthType.Default)

Parameters

src1 IInputArray

The first source array.

alpha double

Weight of the first array elements.

src2 IInputArray

The second source array.

beta double

Weight of the second array elements.

gamma double

Scalar, added to each sum.

dst IOutputArray

The destination array.

dtype DepthType

Optional depth of the output array; when both input arrays have the same depth, it can be left as Default.
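
AddWeighted can be used for a simple cross-fade between two images. A sketch, assuming "a.png" and "b.png" (placeholder names) are the same size and type:

```csharp
using Emgu.CV;

using (Mat a = CvInvoke.Imread("a.png"))
using (Mat b = CvInvoke.Imread("b.png"))
using (Mat dst = new Mat())
{
    // dst(I) = 0.7*a(I) + 0.3*b(I) + 0 -- a 70/30 blend.
    CvInvoke.AddWeighted(a, 0.7, b, 0.3, 0, dst);
}
```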

ApplyColorMap(IInputArray, IOutputArray, ColorMapType)

Applies a GNU Octave/MATLAB equivalent colormap on a given image.

public static void ApplyColorMap(IInputArray src, IOutputArray dst, ColorMapType colorMapType)

Parameters

src IInputArray

The source image, grayscale or colored of type CV_8UC1 or CV_8UC3

dst IOutputArray

The result is the colormapped source image

colorMapType ColorMapType

The type of color map

ApplyColorMap(IInputArray, IOutputArray, IInputArray)

Applies a user colormap on a given image.

public static void ApplyColorMap(IInputArray src, IOutputArray dst, IInputArray userColorMap)

Parameters

src IInputArray

The source image, grayscale or colored of type CV_8UC1 or CV_8UC3.

dst IOutputArray

The result is the colormapped source image.

userColorMap IInputArray

The colormap to apply of type CV_8UC1 or CV_8UC3 and size 256

ApproxPolyDP(IInputArray, IOutputArray, double, bool)

Approximates a polygonal curve(s) with the specified precision.

public static void ApproxPolyDP(IInputArray curve, IOutputArray approxCurve, double epsilon, bool closed)

Parameters

curve IInputArray

Input vector of a 2D point

approxCurve IOutputArray

Result of the approximation. The type should match the type of the input curve.

epsilon double

Parameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation.

closed bool

If true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed.
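
A common pattern is simplifying a contour with an epsilon proportional to its perimeter. A sketch using a small synthetic polygon (in practice the input would usually come from CvInvoke.FindContours; the 2% tolerance is just a common starting point):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Util;

Point[] noisy =
{
    new Point(0, 0), new Point(50, 2), new Point(100, 0),
    new Point(100, 100), new Point(0, 100)
};
using (VectorOfPoint contour = new VectorOfPoint(noisy))
using (VectorOfPoint approx = new VectorOfPoint())
{
    // Tolerance: 2% of the closed curve's perimeter.
    double epsilon = 0.02 * CvInvoke.ArcLength(contour, true);
    CvInvoke.ApproxPolyDP(contour, approx, epsilon, true);
    // approx now holds the simplified polygon; a result of 4 vertices
    // would suggest a quadrilateral.
}
```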

ArcLength(IInputArray, bool)

Calculates a contour perimeter or a curve length

public static double ArcLength(IInputArray curve, bool isClosed)

Parameters

curve IInputArray

Sequence or array of the curve points

isClosed bool

Indicates whether the curve is closed or not.

Returns

double

Contour perimeter or a curve length

ArrowedLine(IInputOutputArray, Point, Point, MCvScalar, int, LineType, int, double)

Draws an arrow segment pointing from the first point to the second one.

public static void ArrowedLine(IInputOutputArray img, Point pt1, Point pt2, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0, double tipLength = 0.1)

Parameters

img IInputOutputArray

Image

pt1 Point

The point the arrow starts from.

pt2 Point

The point the arrow points to.

color MCvScalar

Line color.

thickness int

Line thickness.

lineType LineType

Type of the line.

shift int

Number of fractional bits in the point coordinates.

tipLength double

The length of the arrow tip in relation to the arrow length

BilateralFilter(IInputArray, IOutputArray, int, double, double, BorderType)

Applies the bilateral filter to an image.

public static void BilateralFilter(IInputArray src, IOutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Source 8-bit or floating-point, 1-channel or 3-channel image.

dst IOutputArray

Destination image of the same size and type as src .

d int

Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .

sigmaColor double

Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.

sigmaSpace double

Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.

borderType BorderType

Border mode used to extrapolate pixels outside of the image.
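
The bilateral filter is often used for edge-preserving smoothing, e.g. softening skin texture while keeping contours sharp. A sketch ("portrait.png" is a placeholder; d=9 with both sigmas at 75 is a commonly used starting point):

```csharp
using Emgu.CV;

using (Mat src = CvInvoke.Imread("portrait.png"))
using (Mat dst = new Mat())
{
    // Smooths fine texture while strong edges remain intact.
    CvInvoke.BilateralFilter(src, dst, 9, 75, 75);
}
```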

BitwiseAnd(IInputArray, IInputArray, IOutputArray, IInputArray)

Calculates per-element bit-wise logical conjunction of two arrays: dst(I)=src1(I) & src2(I) if mask(I)!=0 In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size

public static void BitwiseAnd(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array

mask IInputArray

Operation mask, 8-bit single channel array; specifies elements of destination array to be changed
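
A frequent use of BitwiseAnd is extracting a region of interest with a mask: and-ing an image with itself under a mask keeps the masked pixels and zeroes the rest. A sketch ("scene.png" and the circle location are placeholders):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat img = CvInvoke.Imread("scene.png"))
using (Mat mask = new Mat(img.Size, DepthType.Cv8U, 1))
using (Mat dst = new Mat())
{
    mask.SetTo(new MCvScalar(0));
    // A filled white circle in the mask selects the region of interest.
    CvInvoke.Circle(mask, new Point(100, 100), 50, new MCvScalar(255), -1);
    // Keep only the masked pixels; everything else stays zero.
    CvInvoke.BitwiseAnd(img, img, dst, mask);
}
```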

BitwiseNot(IInputArray, IOutputArray, IInputArray)

Inverts every bit of every array element: dst(I) = ~src(I)

public static void BitwiseNot(IInputArray src, IOutputArray dst, IInputArray mask = null)

Parameters

src IInputArray

The source array

dst IOutputArray

The destination array

mask IInputArray

The optional mask for the operation, use null to ignore

BitwiseOr(IInputArray, IInputArray, IOutputArray, IInputArray)

Calculates per-element bit-wise disjunction of two arrays: dst(I)=src1(I)|src2(I) In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size

public static void BitwiseOr(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array

mask IInputArray

Operation mask, 8-bit single channel array; specifies elements of destination array to be changed

BitwiseXor(IInputArray, IInputArray, IOutputArray, IInputArray)

Calculates per-element bit-wise "exclusive or" of two arrays: dst(I)=src1(I)^src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size

public static void BitwiseXor(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array

mask IInputArray

Mask, 8-bit single channel array; specifies elements of destination array to be changed.

BlendLinear(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)

Performs linear blending of two images: dst(i, j)=weights1(i, j) x src1(i, j) + weights2(i, j) x src2(i, j)

public static void BlendLinear(IInputArray src1, IInputArray src2, IInputArray weights1, IInputArray weights2, IOutputArray dst)

Parameters

src1 IInputArray

It has a type of CV_8UC(n) or CV_32FC(n), where n is a positive integer.

src2 IInputArray

It has the same type and size as src1.

weights1 IInputArray

It has a type of CV_32FC1 and the same size as src1.

weights2 IInputArray

It has a type of CV_32FC1 and the same size as src1.

dst IOutputArray

It is created if it does not have the same size and type as src1.

Blur(IInputArray, IOutputArray, Size, Point, BorderType)

Blurs an image using the normalized box filter.

public static void Blur(IInputArray src, IOutputArray dst, Size ksize, Point anchor, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

dst IOutputArray

Output image of the same size and type as src.

ksize Size

Blurring kernel size.

anchor Point

Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

borderType BorderType

Border mode used to extrapolate pixels outside of the image.

BoundingRectangle(IInputArray)

Returns the up-right bounding rectangle for 2d point set

public static Rectangle BoundingRectangle(IInputArray points)

Parameters

points IInputArray

Input 2D point set, stored in std::vector or Mat.

Returns

Rectangle

The up-right bounding rectangle for 2d point set

BoundingRectangle(Point[])

Returns the up-right bounding rectangle for 2d point set

public static Rectangle BoundingRectangle(Point[] points)

Parameters

points Point[]

Input 2D point set

Returns

Rectangle

The up-right bounding rectangle for 2d point set

BoxFilter(IInputArray, IOutputArray, DepthType, Size, Point, bool, BorderType)

Blurs an image using the box filter.

public static void BoxFilter(IInputArray src, IOutputArray dst, DepthType ddepth, Size ksize, Point anchor, bool normalize = true, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Input image.

dst IOutputArray

Output image of the same size and type as src.

ddepth DepthType

The output image depth (-1 to use src.depth()).

ksize Size

Blurring kernel size.

anchor Point

Anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.

normalize bool

Specifying whether the kernel is normalized by its area or not.

borderType BorderType

Border mode used to extrapolate pixels outside of the image.

BoxPoints(RotatedRect)

Calculates vertices of the input 2d box.

public static PointF[] BoxPoints(RotatedRect box)

Parameters

box RotatedRect

The box

Returns

PointF[]

The four vertices of rectangles.

BoxPoints(RotatedRect, IOutputArray)

Calculates vertices of the input 2d box.

public static void BoxPoints(RotatedRect box, IOutputArray points)

Parameters

box RotatedRect

The box

points IOutputArray

The output array of four vertices of rectangles.

Broadcast(IInputArray, IInputArray, IOutputArray)

Broadcast the given Mat to the given shape.

public static void Broadcast(IInputArray src, IInputArray shape, IOutputArray dst)

Parameters

src IInputArray

Input array

shape IInputArray

Target shape. Should be a list of CV_32S numbers. Note that negative values are not supported.

dst IOutputArray

Output array that has the given shape

BuildOpticalFlowPyramid(IInputArray, IOutputArrayOfArrays, Size, int, bool, BorderType, BorderType, bool)

Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.

public static int BuildOpticalFlowPyramid(IInputArray img, IOutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderType pyrBorder = BorderType.Default, BorderType derivBorder = BorderType.Constant, bool tryReuseInputImage = true)

Parameters

img IInputArray

8-bit input image.

pyramid IOutputArrayOfArrays

Output pyramid.

winSize Size

Window size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.

maxLevel int

0-based maximal pyramid level number.

withDerivatives bool

Set to precompute gradients for every pyramid level. If the pyramid is constructed without the gradients, then calcOpticalFlowPyrLK will calculate them internally.

pyrBorder BorderType

The border mode for pyramid layers.

derivBorder BorderType

The border mode for gradients.

tryReuseInputImage bool

Put the ROI of the input image into the pyramid if possible. You can pass false to force data copying.

Returns

int

Number of levels in constructed pyramid. Can be less than maxLevel.

BuildPyramid(IInputArray, IOutputArrayOfArrays, int, BorderType)

The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown to the previously built pyramid layers, starting from dst[0]==src.

public static void BuildPyramid(IInputArray src, IOutputArrayOfArrays dst, int maxlevel, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Source image. Check pyrDown for the list of supported types.

dst IOutputArrayOfArrays

Destination vector of maxlevel+1 images of the same type as src. dst[0] will be the same as src. dst[1] is the next pyramid layer, a smoothed and down-sized src, and so on.

maxlevel int

0-based index of the last (the smallest) pyramid layer. It must be non-negative.

borderType BorderType

Pixel extrapolation method

CLAHE(IInputArray, double, Size, IOutputArray)

Contrast Limited Adaptive Histogram Equalization (CLAHE)

public static void CLAHE(IInputArray src, double clipLimit, Size tileGridSize, IOutputArray dst)

Parameters

src IInputArray

The source image

clipLimit double

Clip Limit, use 40 for default

tileGridSize Size

Tile grid size, use (8, 8) for default

dst IOutputArray

The destination image
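
A sketch of CLAHE for local contrast enhancement, using the default-like values suggested above ("xray.png" is a placeholder file name):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat gray = CvInvoke.Imread("xray.png", ImreadModes.Grayscale))
using (Mat enhanced = new Mat())
{
    // clipLimit 40 and an 8x8 tile grid are the suggested defaults.
    CvInvoke.CLAHE(gray, 40, new Size(8, 8), enhanced);
}
```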

CalcBackProject(IInputArrayOfArrays, int[], IInputArray, IOutputArray, float[], double)

Calculates the back projection of a histogram.

public static void CalcBackProject(IInputArrayOfArrays images, int[] channels, IInputArray hist, IOutputArray backProject, float[] ranges, double scale = 1)

Parameters

images IInputArrayOfArrays

Source arrays. They all should have the same depth, CV_8U or CV_32F , and the same size. Each of them can have an arbitrary number of channels.

channels int[]

List of the channels used to compute the back projection. The number of channels must match the histogram dimensionality.

hist IInputArray

Input histogram that can be dense or sparse.

backProject IOutputArray

Destination back projection array that is a single-channel array of the same size and depth as images[0] .

ranges float[]

Array of arrays of the histogram bin boundaries in each dimension.

scale double

Optional scale factor for the output back projection.

CalcCovarMatrix(IInputArray, IOutputArray, IInputOutputArray, CovarMethod, DepthType)

Calculates the covariance matrix of a set of vectors.

public static void CalcCovarMatrix(IInputArray samples, IOutputArray covar, IInputOutputArray mean, CovarMethod flags, DepthType ctype = DepthType.Cv64F)

Parameters

samples IInputArray

Samples stored either as separate matrices or as rows/columns of a single matrix.

covar IOutputArray

Output covariance matrix of the type ctype and square size.

mean IInputOutputArray

Input or output (depending on the flags) array as the average value of the input vectors.

flags CovarMethod

Operation flags

ctype DepthType

Type of the matrix

CalcGlobalOrientation(IInputArray, IInputArray, IInputArray, double, double)

Calculates the general motion direction in the selected region and returns the angle between 0 and 360. At first the function builds the orientation histogram and finds the basic orientation as a coordinate of the histogram maximum. After that the function calculates the shift relative to the basic orientation as a weighted sum of all orientation vectors: the more recent is the motion, the greater is the weight. The resultant angle is a circular sum of the basic orientation and the shift.

public static double CalcGlobalOrientation(IInputArray orientation, IInputArray mask, IInputArray mhi, double timestamp, double duration)

Parameters

orientation IInputArray

Motion gradient orientation image; calculated by the function cvCalcMotionGradient.

mask IInputArray

Mask image. It may be a conjunction of valid gradient mask, obtained with cvCalcMotionGradient and mask of the region, whose direction needs to be calculated.

mhi IInputArray

Motion history image.

timestamp double

Current time in milliseconds or other units, it is better to store time passed to cvUpdateMotionHistory before and reuse it here, because running cvUpdateMotionHistory and cvCalcMotionGradient on large images may take some time.

duration double

Maximal duration of motion track in milliseconds, the same as in cvUpdateMotionHistory

Returns

double

The angle

CalcHist(IInputArrayOfArrays, int[], IInputArray, IOutputArray, int[], float[], bool)

Calculates a histogram of a set of arrays.

public static void CalcHist(IInputArrayOfArrays images, int[] channels, IInputArray mask, IOutputArray hist, int[] histSize, float[] ranges, bool accumulate)

Parameters

images IInputArrayOfArrays

Source arrays. They all should have the same depth, CV_8U, CV_16U or CV_32F , and the same size. Each of them can have an arbitrary number of channels.

channels int[]

List of the channels used to compute the histogram.

mask IInputArray

Optional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i] . The non-zero mask elements mark the array elements counted in the histogram.

hist IOutputArray

Output histogram

histSize int[]

Array of histogram sizes in each dimension.

ranges float[]

Array of the dims arrays of the histogram bin boundaries in each dimension.

accumulate bool

Accumulation flag. If it is set, the histogram is not cleared in the beginning when it is allocated. This feature enables you to compute a single histogram from several sets of arrays, or to update the histogram in time.
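
A minimal sketch computing a 256-bin histogram of one grayscale image ("photo.png" is a placeholder file name):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

using (Mat gray = CvInvoke.Imread("photo.png", ImreadModes.Grayscale))
using (VectorOfMat images = new VectorOfMat(gray))
using (Mat hist = new Mat())
{
    // Channel 0, no mask, 256 bins over the full 8-bit range [0, 256).
    CvInvoke.CalcHist(images, new[] { 0 }, null, hist,
                      new[] { 256 }, new float[] { 0, 256 }, false);
}
```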

CalcMotionGradient(IInputArray, IOutputArray, IOutputArray, double, double, int)

Calculates the derivatives Dx and Dy of mhi and then calculates gradient orientation as: orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) where both Dx(x,y) and Dy(x,y) signs are taken into account (as in the cvCartToPolar function). After that mask is filled to indicate where the orientation is valid (see delta1 and delta2 description).

public static void CalcMotionGradient(IInputArray mhi, IOutputArray mask, IOutputArray orientation, double delta1, double delta2, int apertureSize = 3)

Parameters

mhi IInputArray

Motion history image

mask IOutputArray

Mask image; marks pixels where motion gradient data is correct. Output parameter.

orientation IOutputArray

Motion gradient orientation image; contains angles from 0 to ~360.

delta1 double

The function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel's (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2).

delta2 double

The function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel's (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2).

apertureSize int

Aperture size of derivative operators used by the function: CV_SCHARR, 1, 3, 5 or 7 (see cvSobel).

CalcOpticalFlowFarneback(IInputArray, IInputArray, IInputOutputArray, double, int, int, int, int, double, OpticalflowFarnebackFlag)

Computes dense optical flow using Gunnar Farneback's algorithm

public static void CalcOpticalFlowFarneback(IInputArray prev0, IInputArray next0, IInputOutputArray flow, double pyrScale, int levels, int winSize, int iterations, int polyN, double polySigma, OpticalflowFarnebackFlag flags)

Parameters

prev0 IInputArray

The first 8-bit single-channel input image

next0 IInputArray

The second input image of the same size and the same type as prevImg

flow IInputOutputArray

The computed flow image; will have the same size as prevImg and type CV 32FC2

pyrScale double

Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is twice smaller than the previous one

levels int

The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used

winSize int

The averaging window size; The larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field

iterations int

The number of iterations the algorithm does at each pyramid level

polyN int

Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7

polySigma double

Standard deviation of the Gaussian that is used to smooth the derivatives used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1; for polyN=7 a good value would be polySigma=1.5

flags OpticalflowFarnebackFlag

The operation flags
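
A sketch of a typical Farneback call on two consecutive grayscale frames ("f0.png"/"f1.png" are placeholder names; the parameter values are common starting points taken from the descriptions above):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat prev = CvInvoke.Imread("f0.png", ImreadModes.Grayscale))
using (Mat next = CvInvoke.Imread("f1.png", ImreadModes.Grayscale))
using (Mat flow = new Mat())
{
    // pyrScale=0.5, 3 levels, 15x15 window, 3 iterations, polyN=5,
    // polySigma=1.1, no special flags.
    CvInvoke.CalcOpticalFlowFarneback(prev, next, flow,
        0.5, 3, 15, 3, 5, 1.1, OpticalflowFarnebackFlag.Default);
    // flow is CV_32FC2: a per-pixel (dx, dy) displacement field.
}
```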

CalcOpticalFlowFarneback(Image<Gray, byte>, Image<Gray, byte>, Image<Gray, float>, Image<Gray, float>, double, int, int, int, int, double, OpticalflowFarnebackFlag)

Computes dense optical flow using Gunnar Farneback's algorithm

public static void CalcOpticalFlowFarneback(Image<Gray, byte> prev0, Image<Gray, byte> next0, Image<Gray, float> flowX, Image<Gray, float> flowY, double pyrScale, int levels, int winSize, int iterations, int polyN, double polySigma, OpticalflowFarnebackFlag flags)

Parameters

prev0 Image<Gray, byte>

The first 8-bit single-channel input image

next0 Image<Gray, byte>

The second input image of the same size and the same type as prevImg

flowX Image<Gray, float>

The computed flow image for x-velocity; will have the same size as prevImg

flowY Image<Gray, float>

The computed flow image for y-velocity; will have the same size as prevImg

pyrScale double

Specifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is twice smaller than the previous one

levels int

The number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used

winSize int

The averaging window size. Larger values increase the algorithm's robustness to image noise and give a better chance of detecting fast motion, but yield a more blurred motion field.

iterations int

The number of iterations the algorithm does at each pyramid level

polyN int

Size of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7.

polySigma double

Standard deviation of the Gaussian that is used to smooth the derivatives used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1; for polyN=7 a good value would be polySigma=1.5.

flags OpticalflowFarnebackFlag

The operation flags
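
As a rough usage sketch of the overload above (the file names, window sizes, and other parameter values here are illustrative only, not part of the API):

```csharp
// Minimal sketch: dense Farneback optical flow between two consecutive frames.
// "frame0.png" and "frame1.png" are placeholder file names.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Image<Gray, byte> prev = new Image<Gray, byte>("frame0.png"))
using (Image<Gray, byte> next = new Image<Gray, byte>("frame1.png"))
using (Image<Gray, float> flowX = new Image<Gray, float>(prev.Size))
using (Image<Gray, float> flowY = new Image<Gray, float>(prev.Size))
{
    CvInvoke.CalcOpticalFlowFarneback(
        prev, next, flowX, flowY,
        pyrScale: 0.5,    // classical pyramid: each layer half the size
        levels: 3,
        winSize: 15,
        iterations: 3,
        polyN: 5,
        polySigma: 1.1,   // recommended pairing for polyN = 5
        flags: OpticalflowFarnebackFlag.Default);
    // flowX/flowY now hold per-pixel x/y displacements in pixels.
}
```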

CalcOpticalFlowPyrLK(IInputArray, IInputArray, IInputArray, IInputOutputArray, IOutputArray, IOutputArray, Size, int, MCvTermCriteria, LKFlowFlag, double)

Implements the sparse iterative version of the Lucas-Kanade optical flow in pyramids ([Bouguet00]). It calculates the coordinates of the feature points on the current video frame given their coordinates on the previous frame. The function finds the coordinates with sub-pixel accuracy.

public static void CalcOpticalFlowPyrLK(IInputArray prevImg, IInputArray nextImg, IInputArray prevPts, IInputOutputArray nextPts, IOutputArray status, IOutputArray err, Size winSize, int maxLevel, MCvTermCriteria criteria, LKFlowFlag flags = LKFlowFlag.Default, double minEigThreshold = 0.0001)

Parameters

prevImg IInputArray

First frame, at time t.

nextImg IInputArray

Second frame, at time t + dt .

prevPts IInputArray

Array of points for which the flow needs to be found.

nextPts IInputOutputArray

Array of 2D points containing calculated new positions of input

status IOutputArray

Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise.

err IOutputArray

Array of error values containing the difference between patches around the original and moved points. Optional parameter; can be null.

winSize Size

Size of the search window of each pyramid level.

maxLevel int

Maximal pyramid level number. If 0 , pyramids are not used (single level), if 1 , two levels are used, etc.

criteria MCvTermCriteria

Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped.

flags LKFlowFlag

Miscellaneous flags

minEigThreshold double

The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window. If this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed; this allows removing bad points and getting a performance boost.

Remarks

Both parameters prev_pyr and curr_pyr comply with the following rules: if the image pointer is 0, the function allocates the buffer internally, calculates the pyramid, and releases the buffer after processing. Otherwise, the function calculates the pyramid and stores it in the buffer unless the flag CV_LKFLOW_PYR_A[B]_READY is set. The image should be large enough to fit the Gaussian pyramid data. After the function call both pyramids are calculated and the readiness flag for the corresponding image can be set in the next call (i.e., typically, for all the image pairs except the very first one CV_LKFLOW_PYR_A_READY is set).
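
A minimal tracking sketch for the overload above. The file names are placeholders, and the hand-written corner array stands in for points detected earlier (e.g. with a corner detector):

```csharp
// Minimal sketch: track a few points from one frame to the next with pyramidal LK.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;
using System.Drawing;

// Stand-ins for corners detected in the previous frame.
PointF[] prevCorners = { new PointF(10, 10), new PointF(50, 80) };

using (Mat prev = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale))
using (Mat next = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale))
using (VectorOfPointF prevPts = new VectorOfPointF(prevCorners))
using (VectorOfPointF nextPts = new VectorOfPointF())
using (VectorOfByte status = new VectorOfByte())
using (VectorOfFloat err = new VectorOfFloat())
{
    CvInvoke.CalcOpticalFlowPyrLK(
        prev, next, prevPts, nextPts, status, err,
        new Size(21, 21),               // search window per pyramid level
        3,                              // maxLevel: four pyramid levels in total
        new MCvTermCriteria(30, 0.01)); // stop after 30 iterations or eps = 0.01
    // status[i] == 1 where the flow for point i was found; nextPts holds
    // the corresponding sub-pixel positions in the second frame.
}
```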

CalcOpticalFlowPyrLK(IInputArray, IInputArray, PointF[], Size, int, MCvTermCriteria, out PointF[], out byte[], out float[], LKFlowFlag, double)

Calculates optical flow for a sparse feature set using iterative Lucas-Kanade method in pyramids

public static void CalcOpticalFlowPyrLK(IInputArray prev, IInputArray curr, PointF[] prevFeatures, Size winSize, int level, MCvTermCriteria criteria, out PointF[] currFeatures, out byte[] status, out float[] trackError, LKFlowFlag flags = LKFlowFlag.Default, double minEigThreshold = 0.0001)

Parameters

prev IInputArray

First frame, at time t

curr IInputArray

Second frame, at time t + dt

prevFeatures PointF[]

Array of points for which the flow needs to be found

winSize Size

Size of the search window of each pyramid level

level int

Maximal pyramid level number. If 0 , pyramids are not used (single level), if 1 , two levels are used, etc

criteria MCvTermCriteria

Specifies when the iteration process of finding the flow for each point on each pyramid level should be stopped

currFeatures PointF[]

Array of 2D points containing calculated new positions of input features in the second image

status byte[]

Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise

trackError float[]

Array of error values containing the difference between patches around the original and moved points

flags LKFlowFlag

Flags

minEigThreshold double

The algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (this matrix is called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window. If this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed; this allows removing bad points and getting a performance boost.

CalibrateCamera(IInputArray, IInputArray, Size, IInputOutputArray, IInputOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)

Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

public static double CalibrateCamera(IInputArray objectPoints, IInputArray imagePoints, Size imageSize, IInputOutputArray cameraMatrix, IInputOutputArray distortionCoeffs, IOutputArray rotationVectors, IOutputArray translationVectors, CalibType flags, MCvTermCriteria termCriteria)

Parameters

objectPoints IInputArray

In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then the vectors will differ. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.

imagePoints IInputArray

In the new interface it is a vector of vectors of the projections of calibration pattern points. In the old interface all the vectors of object points from different views are concatenated together.

imageSize Size

Size of the image used only to initialize the camera intrinsic matrix.

cameraMatrix IInputOutputArray

Input/output 3x3 floating-point camera intrinsic matrix A [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized

distortionCoeffs IInputOutputArray

Input/output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements.

rotationVectors IOutputArray

Output vector of rotation vectors (Rodrigues) estimated for each pattern view. That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.

translationVectors IOutputArray

Output vector of translation vectors estimated for each pattern view; see the parameter description above.

flags CalibType

Different flags

termCriteria MCvTermCriteria

The termination criteria

Returns

double

The final reprojection error

CalibrateCamera(MCvPoint3D32f[][], PointF[][], Size, IInputOutputArray, IInputOutputArray, CalibType, MCvTermCriteria, out Mat[], out Mat[])

Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.

public static double CalibrateCamera(MCvPoint3D32f[][] objectPoints, PointF[][] imagePoints, Size imageSize, IInputOutputArray cameraMatrix, IInputOutputArray distortionCoeffs, CalibType calibrationType, MCvTermCriteria termCriteria, out Mat[] rotationVectors, out Mat[] translationVectors)

Parameters

objectPoints MCvPoint3D32f[][]

In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then the vectors will differ. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.

imagePoints PointF[][]

In the new interface it is a vector of vectors of the projections of calibration pattern points. In the old interface all the vectors of object points from different views are concatenated together.

imageSize Size

Size of the image used only to initialize the camera intrinsic matrix.

cameraMatrix IInputOutputArray

Input/output 3x3 floating-point camera intrinsic matrix A [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized

distortionCoeffs IInputOutputArray

Input/output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements.

calibrationType CalibType

The camera calibration flags.

termCriteria MCvTermCriteria

The termination criteria

rotationVectors Mat[]

Output vector of rotation vectors (Rodrigues) estimated for each pattern view. That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.

translationVectors Mat[]

Output vector of translation vectors estimated for each pattern view; see the parameter description above.

Returns

double

The final reprojection error
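
A minimal calibration sketch for the array overload above. `CollectPatternPoints` and `CollectCornerPoints` are hypothetical helpers standing in for your own corner-detection loop (e.g. around FindChessboardCorners); the image size and criteria are illustrative:

```csharp
// Minimal sketch: calibrate a camera from points gathered over several views.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System.Drawing;

MCvPoint3D32f[][] objectPoints = CollectPatternPoints(); // hypothetical helper: pattern points per view
PointF[][] imagePoints = CollectCornerPoints();          // hypothetical helper: detected corners per view

using (Mat cameraMatrix = new Mat(3, 3, DepthType.Cv64F, 1))
using (Mat distCoeffs = new Mat(1, 5, DepthType.Cv64F, 1))
{
    double rms = CvInvoke.CalibrateCamera(
        objectPoints, imagePoints,
        new Size(640, 480),             // resolution of the calibration images
        cameraMatrix, distCoeffs,
        CalibType.Default,
        new MCvTermCriteria(30, 1e-6),
        out Mat[] rvecs, out Mat[] tvecs);
    // rms is the final reprojection error; rvecs/tvecs hold one pose per view.
}
```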

CalibrateHandEye(IInputArrayOfArrays, IInputArrayOfArrays, IInputArrayOfArrays, IInputArrayOfArrays, IOutputArray, IOutputArray, HandEyeCalibrationMethod)

Computes Hand-Eye calibration

public static void CalibrateHandEye(IInputArrayOfArrays rGripper2base, IInputArrayOfArrays tGripper2base, IInputArrayOfArrays rTarget2cam, IInputArrayOfArrays tTarget2cam, IOutputArray rCam2gripper, IOutputArray tCam2gripper, HandEyeCalibrationMethod method)

Parameters

rGripper2base IInputArrayOfArrays

Rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from gripper frame to robot base frame.

tGripper2base IInputArrayOfArrays

Translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from gripper frame to robot base frame.

rTarget2cam IInputArrayOfArrays

Rotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from calibration target frame to camera frame.

tTarget2cam IInputArrayOfArrays

Translation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from calibration target frame to camera frame.

rCam2gripper IOutputArray

Estimated rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.

tCam2gripper IOutputArray

Estimated translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.

method HandEyeCalibrationMethod

One of the implemented Hand-Eye calibration methods

CalibrationMatrixValues(IInputArray, Size, double, double, ref double, ref double, ref double, ref MCvPoint2D64f, ref double)

Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, image frame resolution in pixels and the physical aperture size

public static void CalibrationMatrixValues(IInputArray cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, ref double fovx, ref double fovy, ref double focalLength, ref MCvPoint2D64f principalPoint, ref double aspectRatio)

Parameters

cameraMatrix IInputArray

The matrix of intrinsic parameters

imageSize Size

Image size in pixels

apertureWidth double

Aperture width in real-world units (optional input parameter). Set it to 0 if not used

apertureHeight double

Aperture height in real-world units (optional input parameter). Set it to 0 if not used

fovx double

Field of view angle in x direction in degrees

fovy double

Field of view angle in y direction in degrees

focalLength double

Focal length in real-world units

principalPoint MCvPoint2D64f

The principal point in real-world units

aspectRatio double

The pixel aspect ratio ~ fy/fx

CamShift(IInputArray, ref Rectangle, MCvTermCriteria)

Implements CAMSHIFT object tracking algorithm ([Bradski98]). First, it finds an object center using cvMeanShift and, after that, calculates the object size and orientation.

public static RotatedRect CamShift(IInputArray probImage, ref Rectangle window, MCvTermCriteria criteria)

Parameters

probImage IInputArray

Back projection of object histogram

window Rectangle

Initial search window

criteria MCvTermCriteria

Criteria applied to determine when the window search should be finished

Returns

RotatedRect

Circumscribed box for the object, contains object size and orientation
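
A minimal tracking sketch for the method above. `ComputeBackProjection` is a hypothetical helper standing in for CalcBackProject on the object's histogram; the initial window is illustrative:

```csharp
// Minimal sketch: one CamShift iteration over a histogram back projection.
using Emgu.CV;
using Emgu.CV.Structure;
using System.Drawing;

Mat backProj = ComputeBackProjection();              // hypothetical helper
Rectangle window = new Rectangle(100, 100, 50, 50);  // initial search window

RotatedRect box = CvInvoke.CamShift(
    backProj, ref window, new MCvTermCriteria(10, 1.0));
// box carries the tracked object's center, size and angle; window is updated
// in place and can seed the search on the next frame.
```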

Canny(IInputArray, IInputArray, IOutputArray, double, double, bool)

Finds the edges on the input dx, dy and marks them in the output image edges using the Canny algorithm. The smaller of threshold1 and threshold2 is used for edge linking; the larger is used to find initial segments of strong edges.

public static void Canny(IInputArray dx, IInputArray dy, IOutputArray edges, double threshold1, double threshold2, bool l2Gradient = false)

Parameters

dx IInputArray

16-bit x derivative of input image

dy IInputArray

16-bit y derivative of input image

edges IOutputArray

Image to store the edges found by the function

threshold1 double

The first threshold

threshold2 double

The second threshold.

l2Gradient bool

A flag indicating whether a more accurate L2 norm should be used to calculate the image gradient magnitude (l2Gradient=true), or whether the default L1 norm is enough (l2Gradient=false).

Canny(IInputArray, IOutputArray, double, double, int, bool)

Finds the edges on the input image and marks them in the output image edges using the Canny algorithm. The smaller of threshold1 and threshold2 is used for edge linking; the larger is used to find initial segments of strong edges.

public static void Canny(IInputArray image, IOutputArray edges, double threshold1, double threshold2, int apertureSize = 3, bool l2Gradient = false)

Parameters

image IInputArray

Input image

edges IOutputArray

Image to store the edges found by the function

threshold1 double

The first threshold

threshold2 double

The second threshold.

apertureSize int

Aperture parameter for Sobel operator

l2Gradient bool

A flag indicating whether a more accurate L2 norm should be used to calculate the image gradient magnitude (l2Gradient=true), or whether the default L1 norm is enough (l2Gradient=false).
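
A minimal usage sketch of the overload above ("input.png" and the thresholds are placeholders; a low/high threshold ratio of about 1:2 to 1:3 is a common rule of thumb):

```csharp
// Minimal sketch: Canny edge detection on a grayscale image.
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat image = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat edges = new Mat())
{
    CvInvoke.Canny(image, edges, 50, 150, apertureSize: 3, l2Gradient: true);
    // edges is an 8-bit image: 255 on edge pixels, 0 elsewhere.
}
```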

CartToPolar(IInputArray, IInputArray, IOutputArray, IOutputArray, bool)

Calculates the magnitude and angle of 2D vectors: magnitude(I)=sqrt( x(I)^2+y(I)^2 ), angle(I)=atan2( y(I), x(I) ). The angles are calculated with an accuracy of about 0.3 degrees. For the point (0,0), the angle is set to 0.

public static void CartToPolar(IInputArray x, IInputArray y, IOutputArray magnitude, IOutputArray angle, bool angleInDegrees = false)

Parameters

x IInputArray

Array of x-coordinates; this must be a single-precision or double-precision floating-point array.

y IInputArray

Array of y-coordinates, that must have the same size and same type as x.

magnitude IOutputArray

Output array of magnitudes of the same size and type as x.

angle IOutputArray

Output array of angles that has the same size and type as x; the angles are measured in radians (from 0 to 2*Pi) or in degrees (0 to 360 degrees).

angleInDegrees bool

A flag indicating whether the angles are measured in radians (the default) or in degrees.
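
A common use of the function above is converting Sobel derivatives into gradient magnitude and orientation; a minimal sketch ("input.png" is a placeholder):

```csharp
// Minimal sketch: gradient magnitude/orientation from Sobel derivatives.
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat image = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat gx = new Mat())
using (Mat gy = new Mat())
using (Mat mag = new Mat())
using (Mat ang = new Mat())
{
    CvInvoke.Sobel(image, gx, DepthType.Cv32F, 1, 0);   // d/dx
    CvInvoke.Sobel(image, gy, DepthType.Cv32F, 0, 1);   // d/dy
    CvInvoke.CartToPolar(gx, gy, mag, ang, angleInDegrees: true);
    // mag: gradient magnitude; ang: orientation in [0, 360) degrees.
}
```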

CheckRange(IInputArray, bool, ref Point, double, double)

Check that every array element is neither NaN nor ±infinity. The function also checks that each value is between minVal and maxVal. In the case of multi-channel arrays, each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and the function either returns false (when quiet=true) or throws an exception.

public static bool CheckRange(IInputArray arr, bool quiet, ref Point pos, double minVal, double maxVal)

Parameters

arr IInputArray

The array to check

quiet bool

The flag indicating whether the functions quietly return false when the array elements are out of range, or they throw an exception

pos Point

This will be filled with the position of the first outlier

minVal double

The inclusive lower boundary of valid values range

maxVal double

The exclusive upper boundary of valid values range

Returns

bool

Returns true if all values are in range; when quiet is true, returns false (instead of throwing an exception) if any value is out of range

Circle(IInputOutputArray, Point, int, MCvScalar, int, LineType, int)

Draws a simple or filled circle with given center and radius. The circle is clipped by ROI rectangle.

public static void Circle(IInputOutputArray img, Point center, int radius, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image where the circle is drawn

center Point

Center of the circle

radius int

Radius of the circle.

color MCvScalar

Color of the circle

thickness int

Thickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn

lineType LineType

Line type

shift int

Number of fractional bits in the center coordinates and radius value
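
A minimal drawing sketch for the method above (canvas size, positions, and colors are illustrative):

```csharp
// Minimal sketch: draw an outlined and a filled circle on a blank BGR canvas.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using System.Drawing;

using (Mat canvas = new Mat(240, 320, DepthType.Cv8U, 3))
{
    canvas.SetTo(new MCvScalar(0, 0, 0));                 // black background
    CvInvoke.Circle(canvas, new Point(160, 120), 40,
        new MCvScalar(0, 255, 0), thickness: 2);          // green outline
    CvInvoke.Circle(canvas, new Point(60, 60), 20,
        new MCvScalar(0, 0, 255), thickness: -1);         // negative thickness = filled
}
```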

ClipLine(Rectangle, ref Point, ref Point)

Calculates a part of the line segment which is entirely in the rectangle.

public static bool ClipLine(Rectangle rectangle, ref Point pt1, ref Point pt2)

Parameters

rectangle Rectangle

The rectangle

pt1 Point

First ending point of the line segment. It is modified by the function

pt2 Point

Second ending point of the line segment. It is modified by the function.

Returns

bool

It returns false if the line segment is completely outside the rectangle and true otherwise.

ColorChange(IInputArray, IInputArray, IOutputArray, float, float, float)

Given an original color image, two differently colored versions of this image can be mixed seamlessly.

public static void ColorChange(IInputArray src, IInputArray mask, IOutputArray dst, float redMul = 1, float greenMul = 1, float blueMul = 1)

Parameters

src IInputArray

Input 8-bit 3-channel image.

mask IInputArray

Input 8-bit 1 or 3-channel image.

dst IOutputArray

Output image with the same size and type as src .

redMul float

R-channel multiplication factor, between 0.5 and 2.5.

greenMul float

G-channel multiplication factor, between 0.5 and 2.5.

blueMul float

B-channel multiplication factor, between 0.5 and 2.5.

Compare(IInputArray, IInputArray, IOutputArray, CmpType)

Compares the corresponding elements of two arrays and fills the destination mask array: dst(I)=src1(I) op src2(I). dst(I) is set to 0xff (all '1' bits) if the particular relation between the elements is true and 0 otherwise. All the arrays must have the same size (or ROI size) and, except for the destination, the same type.

public static void Compare(IInputArray src1, IInputArray src2, IOutputArray dst, CmpType cmpOp)

Parameters

src1 IInputArray

The first image to compare with

src2 IInputArray

The second image to compare with

dst IOutputArray

dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise.

cmpOp CmpType

The comparison operator type

CompareHist(IInputArray, IInputArray, HistogramCompMethod)

Compares two histograms.

public static double CompareHist(IInputArray h1, IInputArray h2, HistogramCompMethod method)

Parameters

h1 IInputArray

First compared histogram.

h2 IInputArray

Second compared histogram of the same size as h1.

method HistogramCompMethod

Comparison method

Returns

double

The distance between the histograms
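
A minimal comparison sketch for the method above. `ComputeGrayHistogram` is a hypothetical helper wrapping CalcHist, and the file names are placeholders:

```csharp
// Minimal sketch: compare two histograms with two different metrics.
using Emgu.CV;
using Emgu.CV.CvEnum;

Mat hist1 = ComputeGrayHistogram("a.png");   // hypothetical helper
Mat hist2 = ComputeGrayHistogram("b.png");   // hypothetical helper

double correl = CvInvoke.CompareHist(hist1, hist2, HistogramCompMethod.Correl);
double bhatta = CvInvoke.CompareHist(hist1, hist2, HistogramCompMethod.Bhattacharyya);
// Correl: 1.0 means identical histograms; Bhattacharyya: 0.0 means identical.
```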

Compute(IStereoMatcher, IInputArray, IInputArray, IOutputArray)

Computes disparity map for the specified stereo pair

public static void Compute(this IStereoMatcher matcher, IInputArray left, IInputArray right, IOutputArray disparity)

Parameters

matcher IStereoMatcher

The stereo matcher

left IInputArray

Left 8-bit single-channel image.

right IInputArray

Right image of the same size and the same type as the left one.

disparity IOutputArray

Output disparity map. It has the same size as the input images. Some algorithms, such as StereoBM or StereoSGBM, compute a 16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output a 32-bit floating-point disparity map

ComputeCorrespondEpilines(IInputArray, int, IInputArray, IOutputArray)

For every point in one of the two images of a stereo pair, the function finds the equation of a line that contains the corresponding point (i.e. the projection of the same 3D point) in the other image. Each line is encoded by a vector of 3 elements l=[a,b,c]^T, so that l^T*[x, y, 1]^T=0, or ax + by + c = 0. From the fundamental matrix definition (see the FindFundamentalMat discussion), the line l2 for a point p1 in the first image (whichImage=1) can be computed as l2=F*p1, and the line l1 for a point p2 in the second image (whichImage=2) can be computed as l1=F^T*p2. Line coefficients are defined up to a scale; they are normalized (a^2+b^2=1) and stored into correspondentLines

public static void ComputeCorrespondEpilines(IInputArray points, int whichImage, IInputArray fundamentalMatrix, IOutputArray correspondentLines)

Parameters

points IInputArray

The input points. 2xN, Nx2, 3xN or Nx3 array (where N number of points). Multi-channel 1xN or Nx1 array is also acceptable.

whichImage int

Index of the image (1 or 2) that contains the points

fundamentalMatrix IInputArray

Fundamental matrix

correspondentLines IOutputArray

Computed epilines, 3xN or Nx3 array

ConnectedComponents(IInputArray, IOutputArray, LineType, DepthType, ConnectedComponentsAlgorithmsTypes)

Computes the connected components labeled image of a boolean image

public static int ConnectedComponents(IInputArray image, IOutputArray labels, LineType connectivity = LineType.EightConnected, DepthType labelType = DepthType.Cv32S, ConnectedComponentsAlgorithmsTypes cclType = ConnectedComponentsAlgorithmsTypes.Default)

Parameters

image IInputArray

The boolean image

labels IOutputArray

The connected components labeled image of boolean image

connectivity LineType

4 or 8 way connectivity

labelType DepthType

Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image

cclType ConnectedComponentsAlgorithmsTypes

connected components algorithm type

Returns

int

N, the total number of labels [0, N-1] where 0 represents the background label.

ConnectedComponentsWithStats(IInputArray, IOutputArray, IOutputArray, IOutputArray, LineType, DepthType, ConnectedComponentsAlgorithmsTypes)

Computes the connected components labeled image of a boolean image

public static int ConnectedComponentsWithStats(IInputArray image, IOutputArray labels, IOutputArray stats, IOutputArray centroids, LineType connectivity = LineType.EightConnected, DepthType labelType = DepthType.Cv32S, ConnectedComponentsAlgorithmsTypes cclType = ConnectedComponentsAlgorithmsTypes.Default)

Parameters

image IInputArray

The boolean image

labels IOutputArray

The connected components labeled image of boolean image

stats IOutputArray

Statistics output for each label, including the background label. Statistics are accessed via stats(label, COLUMN), where COLUMN is one of cv::ConnectedComponentsTypes. The data type is CV_32S

centroids IOutputArray

Centroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type is CV_64F.

connectivity LineType

4 or 8 way connectivity

labelType DepthType

Specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image

cclType ConnectedComponentsAlgorithmsTypes

connected components algorithm type

Returns

int

N, the total number of labels [0, N-1] where 0 represents the background label.
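
A minimal usage sketch for the method above ("blobs.png" is a placeholder for a thresholded binary image):

```csharp
// Minimal sketch: label blobs in a binary image and read their statistics.
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat binary = CvInvoke.Imread("blobs.png", ImreadModes.Grayscale))
using (Mat labels = new Mat())
using (Mat stats = new Mat())
using (Mat centroids = new Mat())
{
    int n = CvInvoke.ConnectedComponentsWithStats(binary, labels, stats, centroids);
    // Label 0 is the background; labels 1..n-1 are the foreground components.
    // Each row of stats holds [left, top, width, height, area] as CV_32S values;
    // each row of centroids holds the component's (x, y) centroid as CV_64F values.
}
```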

ContourArea(IInputArray, bool)

Calculates area of the whole contour or contour section.

public static double ContourArea(IInputArray contour, bool oriented = false)

Parameters

contour IInputArray

Input vector of 2D points (contour vertices), stored in std::vector or Mat.

oriented bool

Oriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.

Returns

double

The area of the whole contour or contour section

ConvertMaps(IInputArray, IInputArray, IOutputArray, IOutputArray, DepthType, int, bool)

Converts image transformation maps from one representation to another.

public static void ConvertMaps(IInputArray map1, IInputArray map2, IOutputArray dstmap1, IOutputArray dstmap2, DepthType dstmap1Depth, int dstmap1Channels, bool nninterpolation = false)

Parameters

map1 IInputArray

The first input map of type CV_16SC2 , CV_32FC1 , or CV_32FC2 .

map2 IInputArray

The second input map of type CV_16UC1 , CV_32FC1 , or none (empty matrix), respectively.

dstmap1 IOutputArray

The first output map, with depth dstmap1Depth and the same size as map1.

dstmap2 IOutputArray

The second output map.

dstmap1Depth DepthType

Depth type of the first output map that should be CV_16SC2 , CV_32FC1 , or CV_32FC2.

dstmap1Channels int

The number of channels in the dst map.

nninterpolation bool

Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.

ConvertPointsFromHomogeneous(IInputArray, IOutputArray)

Converts points from homogeneous to Euclidean space.

public static void ConvertPointsFromHomogeneous(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

Input vector of N-dimensional points.

dst IOutputArray

Output vector of N-1-dimensional points.

ConvertPointsToHomogeneous(IInputArray, IOutputArray)

Converts points from Euclidean to homogeneous space.

public static void ConvertPointsToHomogeneous(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

Input vector of N-dimensional points.

dst IOutputArray

Output vector of N+1-dimensional points.

ConvertScaleAbs(IInputArray, IOutputArray, double, double)

Similar to cvCvtScale but stores the absolute values of the conversion results: dst(I)=abs(src(I)*scale + (shift,shift,...)). The function supports only destination arrays of 8u (8-bit unsigned integer) type; for other types the function can be emulated by a combination of cvConvertScale and cvAbs.

public static void ConvertScaleAbs(IInputArray src, IOutputArray dst, double scale, double shift)

Parameters

src IInputArray

Source array

dst IOutputArray

Destination array (should have 8u depth).

scale double

ScaleAbs factor

shift double

Value added to the scaled source array elements

ConvexHull(IInputArray, IOutputArray, bool, bool)

The function cvConvexHull2 finds convex hull of 2D point set using Sklansky's algorithm.

public static void ConvexHull(IInputArray points, IOutputArray hull, bool clockwise = false, bool returnPoints = true)

Parameters

points IInputArray

Input 2D point set

hull IOutputArray

Output convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.

clockwise bool

Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.

returnPoints bool

Operation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector

ConvexHull(PointF[], bool)

Finds convex hull of 2D point set using Sklansky's algorithm

public static PointF[] ConvexHull(PointF[] points, bool clockwise = false)

Parameters

points PointF[]

The points to find convex hull from

clockwise bool

Orientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.

Returns

PointF[]

The convex hull of the points
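
A minimal sketch of the PointF[] overload above on a small hand-written point set:

```csharp
// Minimal sketch: convex hull of five points, one of which is interior.
using Emgu.CV;
using System.Drawing;

PointF[] points =
{
    new PointF(0, 0), new PointF(10, 0), new PointF(10, 10),
    new PointF(0, 10), new PointF(5, 5)   // interior point
};
PointF[] hull = CvInvoke.ConvexHull(points);
// hull contains only the four corners; (5, 5) lies inside and is dropped.
```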

ConvexityDefects(IInputArray, IInputArray, IOutputArray)

Finds the convexity defects of a contour.

public static void ConvexityDefects(IInputArray contour, IInputArray convexhull, IOutputArray convexityDefects)

Parameters

contour IInputArray

Input contour

convexhull IInputArray

Convex hull obtained using ConvexHull. It should contain indices of the contour points, not the hull points themselves (i.e. the returnPoints parameter of ConvexHull should be false)

convexityDefects IOutputArray

The output vector of convexity defects. Each convexity defect is represented as 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, to get the floating-point value of the depth will be fixpt_depth/256.0.
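The fixed-point depth encoding described above is easy to get wrong; here is a minimal sketch of decoding one defect record, with made-up Vec4i values purely for illustration.

```python
# One convexity defect record: (start_index, end_index,
# farthest_pt_index, fixpt_depth). The numbers are made up.
start_index, end_index, farthest_pt_index, fixpt_depth = 17, 42, 30, 12345

# fixpt_depth has 8 fractional bits, so the floating-point distance
# between the farthest contour point and the hull is fixpt_depth / 256.0.
depth = fixpt_depth / 256.0
print(depth)
```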

CopyMakeBorder(IInputArray, IOutputArray, int, int, int, int, BorderType, MCvScalar)

Copies the source 2D array into the interior of the destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate a border type that is different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most other filtering functions in OpenCV, internally use the replication border type, while the user may need a zero border or a border filled with 1's or 255's

public static void CopyMakeBorder(IInputArray src, IOutputArray dst, int top, int bottom, int left, int right, BorderType bordertype, MCvScalar value = default)

Parameters

src IInputArray

The source image

dst IOutputArray

The destination image

top int

Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate.

bottom int

Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate.

left int

Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate.

right int

Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate.

bordertype BorderType

Type of the border to create around the copied source image rectangle

value MCvScalar

Value of the border pixels if bordertype=CONSTANT
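The border semantics can be sketched in pure Python. The helper below is hypothetical (it is not the Emgu.CV call) and covers only the constant and replicate border types described above.

```python
def make_border(src, top, bottom, left, right, border_type, value=0):
    """Pad a 2D grid the way CopyMakeBorder does for 'constant'
    and 'replicate' border types (illustrative sketch only)."""
    rows, cols = len(src), len(src[0])

    def sample(r, c):
        if 0 <= r < rows and 0 <= c < cols:
            return src[r][c]
        if border_type == "constant":
            return value
        # "replicate": clamp to the nearest edge pixel
        return src[min(max(r, 0), rows - 1)][min(max(c, 0), cols - 1)]

    return [[sample(r - top, c - left)
             for c in range(cols + left + right)]
            for r in range(rows + top + bottom)]

src = [[1, 2], [3, 4]]
print(make_border(src, 1, 1, 1, 1, "constant", 9))
print(make_border(src, 1, 1, 1, 1, "replicate"))
```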

CornerHarris(IInputArray, IOutputArray, int, int, double, BorderType)

Runs the Harris edge detector on the image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates a 2x2 gradient covariance matrix M over a blockSize x blockSize neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image.

public static void CornerHarris(IInputArray image, IOutputArray harrisResponse, int blockSize, int apertureSize = 3, double k = 0.04, BorderType borderType = BorderType.Default)

Parameters

image IInputArray

Input image

harrisResponse IOutputArray

Image to store the Harris detector responses. Should have the same size as image

blockSize int

Neighborhood size

apertureSize int

Aperture parameter for the Sobel operator (see cvSobel). In the case of a floating-point input format, this parameter is the number of the fixed float filter used for differencing.

k double

Harris detector free parameter.

borderType BorderType

Pixel extrapolation method.
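The scoring formula above, det(M) - k*trace(M)^2, can be evaluated directly for a single 2x2 covariance matrix. The matrix values below are made up for illustration; this is a sketch of the score, not of the full detector.

```python
def harris_response(m, k=0.04):
    """Harris corner score for a 2x2 gradient covariance matrix M."""
    (a, b), (c, d) = m
    det = a * d - b * c
    trace = a + d
    return det - k * trace * trace

corner_like = [[10.0, 0.0], [0.0, 10.0]]  # two strong gradient directions
edge_like = [[10.0, 0.0], [0.0, 0.1]]     # one dominant direction
print(harris_response(corner_like))  # large positive -> corner
print(harris_response(edge_like))    # negative -> edge
```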

CornerSubPix(IInputArray, IInputOutputArray, Size, Size, MCvTermCriteria)

Iterates to find the sub-pixel accurate location of corners, or radial saddle points

public static void CornerSubPix(IInputArray image, IInputOutputArray corners, Size win, Size zeroZone, MCvTermCriteria criteria)

Parameters

image IInputArray

Input image

corners IInputOutputArray

Initial coordinates of the input corners and refined coordinates on output

win Size

Half sizes of the search window. For example, if win=(5,5) then a (5*2+1) x (5*2+1) = 11 x 11 search window is used

zeroZone Size

Half size of the dead region in the middle of the search zone over which the summation in formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size

criteria MCvTermCriteria

Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after a certain number of iterations or when the required accuracy is achieved. The criteria may specify either or both of the maximum number of iterations and the required accuracy

CorrectMatches(IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray)

Refines coordinates of corresponding points.

public static void CorrectMatches(IInputArray f, IInputArray points1, IInputArray points2, IOutputArray newPoints1, IOutputArray newPoints2)

Parameters

f IInputArray

3x3 fundamental matrix.

points1 IInputArray

1xN array containing the first set of points.

points2 IInputArray

1xN array containing the second set of points.

newPoints1 IOutputArray

The optimized points1.

newPoints2 IOutputArray

The optimized points2.

CountNonZero(IInputArray)

Returns the number of non-zero elements in arr: result = sum over I of (arr(I) != 0). In case of IplImage both ROI and COI are supported.

public static int CountNonZero(IInputArray arr)

Parameters

arr IInputArray

The image

Returns

int

the number of non-zero elements in image

CreateHanningWindow(IOutputArray, Size, DepthType)

This function computes Hanning window coefficients in two dimensions.

public static void CreateHanningWindow(IOutputArray dst, Size winSize, DepthType type)

Parameters

dst IOutputArray

Destination array to place Hann coefficients in

winSize Size

The window size specifications

type DepthType

Created array type
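A 2D Hann window is the outer product of two 1D Hann windows, w[i] = 0.5*(1 - cos(2*pi*i/(N-1))). The sketch below illustrates the coefficients this function is expected to produce; it is a pure-Python illustration, not the Emgu.CV call.

```python
import math

def hann_1d(n):
    """1D Hann window: zero at the ends, one at the center."""
    return [0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n - 1)))
            for i in range(n)]

def hann_2d(width, height):
    """2D Hann window as the outer product of two 1D windows."""
    wx, wy = hann_1d(width), hann_1d(height)
    return [[wy[r] * wx[c] for c in range(width)] for r in range(height)]

w = hann_2d(5, 5)
# Peaks at the center and falls to zero at the borders.
print(w[2][2], w[0][0])
```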

CvArrToMat(nint, bool, bool, int)

Converts CvMat, IplImage, or CvMatND to Mat.

public static Mat CvArrToMat(nint arr, bool copyData = false, bool allowND = true, int coiMode = 0)

Parameters

arr nint

Input CvMat, IplImage, or CvMatND.

copyData bool

When false (default value), no data is copied and only the new header is created, in this case, the original array should not be deallocated while the new matrix header is used; if the parameter is true, all the data is copied and you may deallocate the original array right after the conversion.

allowND bool

When true (default value), CvMatND is converted to a 2-dimensional Mat, if possible; if it is not possible, or when the parameter is false, the function will report an error

coiMode int

Parameter specifying how the IplImage COI (when set) is handled. If coiMode=0 and COI is set, the function reports an error. If coiMode=1 , the function never reports an error. Instead, it returns the header to the whole original image and you will have to check and process COI manually.

Returns

Mat

The Mat header

CvtColor(IInputArray, IOutputArray, ColorConversion, int)

Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout).

public static void CvtColor(IInputArray src, IOutputArray dst, ColorConversion code, int dstCn = 0)

Parameters

src IInputArray

The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image

dst IOutputArray

The destination image of the same data type as the source one. The number of channels may be different

code ColorConversion

Color conversion operation that can be specified using CV_src_color_space2dst_color_space constants

dstCn int

number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code .

CvtColor(IInputArray, IOutputArray, Type, Type)

Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout).

public static void CvtColor(IInputArray src, IOutputArray dest, Type srcColor, Type destColor)

Parameters

src IInputArray

The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image

dest IOutputArray

The destination image of the same data type as the source one. The number of channels may be different

srcColor Type

Source color type.

destColor Type

Destination color type

CvtColorTwoPlane(IInputArray, IInputArray, IOutputArray, ColorConversion)

Converts an image from one color space to another where the source image is stored in two planes.

public static void CvtColorTwoPlane(IInputArray src1, IInputArray src2, IOutputArray dst, ColorConversion code)

Parameters

src1 IInputArray

8-bit image (CV_8U) of the Y plane.

src2 IInputArray

Image containing interleaved U/V plane.

dst IOutputArray

Output image.

code ColorConversion

Specifies the type of conversion. It can take any of the following values: COLOR_YUV2BGR_NV12 COLOR_YUV2RGB_NV12 COLOR_YUV2BGRA_NV12 COLOR_YUV2RGBA_NV12 COLOR_YUV2BGR_NV21 COLOR_YUV2RGB_NV21 COLOR_YUV2BGRA_NV21 COLOR_YUV2RGBA_NV21

Dct(IInputArray, IOutputArray, DctType)

Performs forward or inverse transform of 1D or 2D floating-point array

public static void Dct(IInputArray src, IOutputArray dst, DctType flags)

Parameters

src IInputArray

Source array, real 1D or 2D array

dst IOutputArray

Destination array of the same size and same type as the source

flags DctType

Transformation flags

Decolor(IInputArray, IOutputArray, IOutputArray)

Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications

public static void Decolor(IInputArray src, IOutputArray grayscale, IOutputArray colorBoost)

Parameters

src IInputArray

Input 8-bit 3-channel image.

grayscale IOutputArray

Output 8-bit 1-channel image.

colorBoost IOutputArray

Output 8-bit 3-channel image.

DecomposeEssentialMat(IInputArray, IOutputArray, IOutputArray, IOutputArray)

Decompose an essential matrix to possible rotations and translation.

public static void DecomposeEssentialMat(IInputArray e, IOutputArray r1, IOutputArray r2, IOutputArray t)

Parameters

e IInputArray

The input essential matrix.

r1 IOutputArray

One possible rotation matrix.

r2 IOutputArray

Another possible rotation matrix.

t IOutputArray

One possible translation.

Remarks

This function decomposes the essential matrix E using SVD decomposition. In general, four possible poses exist for the decomposition of E. They are [R1,t], [R1,−t], [R2,t], [R2,−t]

DecomposeHomographyMat(IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, IOutputArrayOfArrays)

Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).

public static int DecomposeHomographyMat(IInputArray h, IInputArray k, IOutputArrayOfArrays rotations, IOutputArrayOfArrays translations, IOutputArrayOfArrays normals)

Parameters

h IInputArray

The input homography matrix between two images.

k IInputArray

The input camera intrinsic matrix.

rotations IOutputArrayOfArrays

Array of rotation matrices.

translations IOutputArrayOfArrays

Array of translation matrices.

normals IOutputArrayOfArrays

Array of plane normal matrices.

Returns

int

Number of solutions

DecomposeProjectionMatrix(IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray)

Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.

public static void DecomposeProjectionMatrix(IInputArray projMatrix, IOutputArray cameraMatrix, IOutputArray rotMatrix, IOutputArray transVect, IOutputArray rotMatrixX = null, IOutputArray rotMatrixY = null, IOutputArray rotMatrixZ = null, IOutputArray eulerAngles = null)

Parameters

projMatrix IInputArray

3x4 input projection matrix P.

cameraMatrix IOutputArray

Output 3x3 camera intrinsic matrix A

rotMatrix IOutputArray

Output 3x3 external rotation matrix R.

transVect IOutputArray

Output 4x1 translation vector T.

rotMatrixX IOutputArray

Optional 3x3 rotation matrix around x-axis.

rotMatrixY IOutputArray

Optional 3x3 rotation matrix around y-axis.

rotMatrixZ IOutputArray

Optional 3x3 rotation matrix around z-axis.

eulerAngles IOutputArray

Optional three-element vector containing three Euler angles of rotation in degrees.

DefaultLoadUnmanagedModules(string[], string)

Attempts to load opencv modules from the specified location

public static bool DefaultLoadUnmanagedModules(string[] modules, string loadDirectory = null)

Parameters

modules string[]

The names of opencv modules. e.g. "opencv_core.dll" on windows.

loadDirectory string

The path to load the opencv modules. If null, will use the default path.

Returns

bool

True if all the modules have been loaded successfully

Demosaicing(IInputArray, IOutputArray, ColorConversion, int)

Main function for all demosaicing processes

public static void Demosaicing(IInputArray src, IOutputArray dst, ColorConversion code, int dstCn = 0)

Parameters

src IInputArray

Input image: 8-bit unsigned or 16-bit unsigned

dst IOutputArray

Output image of the same size and depth as src

code ColorConversion

Color space conversion code

dstCn int

Number of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.

DenoiseTVL1(Mat[], Mat, double, int)

The primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function to minimize some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented.

public static void DenoiseTVL1(Mat[] observations, Mat result, double lambda, int niters)

Parameters

observations Mat[]

This array should contain one or more noised versions of the image that is to be restored.

result Mat

Here the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary.

lambda double

Corresponds to λ in the primal-dual formulation. As it is enlarged, smooth (blurred) images are treated more favorably than detailed (but possibly noisier) ones. Roughly speaking, as it becomes smaller, the result will be blurrier, but more severe outliers will be removed.

niters int

Number of iterations that the algorithm will run. More iterations generally give better results, but it is hard to quantify this precisely, so just use the default and increase it if the results are poor.

DestroyAllWindows()

Destroys all of the HighGUI windows.

public static void DestroyAllWindows()

DestroyWindow(string)

Destroys the window with a given name

public static void DestroyWindow(string name)

Parameters

name string

Name of the window to be destroyed

DetailEnhance(IInputArray, IOutputArray, float, float)

This filter enhances the details of a particular image.

public static void DetailEnhance(IInputArray src, IOutputArray dst, float sigmaS = 10, float sigmaR = 0.15)

Parameters

src IInputArray

Input 8-bit 3-channel image

dst IOutputArray

Output image with the same size and type as src

sigmaS float

Range between 0 and 200

sigmaR float

Range between 0 and 1

Determinant(IInputArray)

Returns the determinant of the square matrix mat. The direct method is used for small matrices and Gaussian elimination is used for larger matrices. For symmetric positive-definite matrices it is also possible to run SVD with U=V=NULL and then calculate the determinant as a product of the diagonal elements of W

public static double Determinant(IInputArray mat)

Parameters

mat IInputArray

The pointer to the matrix

Returns

double

determinant of the square matrix mat
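The Gaussian-elimination strategy mentioned above can be sketched in a few lines: reduce the matrix to upper-triangular form with partial pivoting, then take the product of the diagonal, flipping the sign for each row swap. This is an illustrative pure-Python sketch, not the native implementation.

```python
def determinant(mat):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]  # work on a copy
    n = len(a)
    det = 1.0
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[pivot][col] == 0.0:
            return 0.0           # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det           # each row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

print(determinant([[2.0, 0.0], [0.0, 3.0]]))
print(determinant([[1.0, 2.0], [2.0, 4.0]]))  # singular
```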

Dft(IInputArray, IOutputArray, DxtType, int)

Performs forward or inverse transform of 1D or 2D floating-point array. In case of real (single-channel) data, the packed format, borrowed from IPL, is used to represent the result of a forward Fourier transform or the input for an inverse Fourier transform

public static void Dft(IInputArray src, IOutputArray dst, DxtType flags = DxtType.Forward, int nonzeroRows = 0)

Parameters

src IInputArray

Source array, real or complex

dst IOutputArray

Destination array of the same size and same type as the source

flags DxtType

Transformation flags

nonzeroRows int

Number of nonzero rows in the source array (in case of a forward 2D transform), or a number of rows of interest in the destination array (in case of an inverse 2D transform). If the value is negative, zero, or greater than the total number of rows, it is ignored. The parameter can be used to speed up 2D convolution/correlation when computing them via DFT.
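The transform itself is the standard discrete Fourier transform. A naive O(n^2) sketch on a 1D complex signal illustrates the math; cv::dft uses fast algorithms and, for real input, the packed output format described above.

```python
import cmath

def dft(signal):
    """Naive forward DFT of a 1D sequence (illustration only)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

spectrum = dft([1.0, 0.0, 0.0, 0.0])  # an impulse has a flat spectrum
print([abs(v) for v in spectrum])
```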

Dilate(IInputArray, IOutputArray, IInputArray, Point, int, BorderType, MCvScalar)

Dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken. The function supports the in-place mode. Dilation can be applied several (iterations) times. In case of a color image each channel is processed independently

public static void Dilate(IInputArray src, IOutputArray dst, IInputArray element, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

element IInputArray

Structuring element used for dilation. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used

anchor Point

Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

iterations int

Number of times dilation is applied

borderType BorderType

Pixel extrapolation method

borderValue MCvScalar

Border value in case of a constant border

DistanceTransform(IInputArray, IOutputArray, IOutputArray, DistType, int, DistLabelType)

Calculates the distance to the closest zero pixel for all non-zero pixels of the source image

public static void DistanceTransform(IInputArray src, IOutputArray dst, IOutputArray labels, DistType distanceType, int maskSize, DistLabelType labelType = DistLabelType.CComp)

Parameters

src IInputArray

Source 8-bit single-channel (binary) image.

dst IOutputArray

Output image with calculated distances (32-bit floating-point, single-channel).

labels IOutputArray

The optional output 2d array of labels of integer type and the same size as src and dst. Can be null if not needed

distanceType DistType

Type of distance

maskSize int

Size of distance transform mask; can be 3 or 5. In case of CV_DIST_L1 or CV_DIST_C the parameter is forced to 3, because 3x3 mask gives the same result as 5x5 yet it is faster.

labelType DistLabelType

Type of the label array to build. If labelType==CCOMP then each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label. If labelType==PIXEL then each zero pixel (and all the non-zero pixels closest to it) gets its own label.
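The L1 (city-block) case with a 3x3 mask can be sketched with a classic two-pass chamfer sweep: a forward pass propagating distances from the top-left neighbors, then a backward pass from the bottom-right neighbors. This is a pure-Python illustration of DistType.L1, not the Emgu.CV call, and it omits the label output.

```python
def distance_transform_l1(img):
    """Two-pass L1 distance to the nearest zero pixel (sketch)."""
    rows, cols = len(img), len(img[0])
    inf = rows + cols
    d = [[0 if img[r][c] == 0 else inf for c in range(cols)]
         for r in range(rows)]
    for r in range(rows):              # forward pass: top-left neighbors
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(rows - 1, -1, -1):  # backward pass: bottom-right neighbors
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

img = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
print(distance_transform_l1(img))
```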

Divide(IInputArray, IInputArray, IOutputArray, double, DepthType)

Divides one array by another: dst(I)=scale * src1(I)/src2(I), if src1!=IntPtr.Zero; dst(I)=scale/src2(I), if src1==IntPtr.Zero; All the arrays must have the same type, and the same size (or ROI size)

public static void Divide(IInputArray src1, IInputArray src2, IOutputArray dst, double scale = 1, DepthType dtype = DepthType.Default)

Parameters

src1 IInputArray

The first source array. If the pointer is IntPtr.Zero, the array is assumed to be all 1s.

src2 IInputArray

The second source array

dst IOutputArray

The destination array

scale double

Optional scale factor

dtype DepthType

Optional depth of the output array

DrawChessboardCorners(IInputOutputArray, Size, IInputArray, bool)

Draws the individual chessboard corners detected (as red circles) if the board was not found (patternWasFound = false), or the colored corners connected with lines when the board was found (patternWasFound = true).

public static void DrawChessboardCorners(IInputOutputArray image, Size patternSize, IInputArray corners, bool patternWasFound)

Parameters

image IInputOutputArray

The destination image; it must be 8-bit color image

patternSize Size

The number of inner corners per chessboard row and column

corners IInputArray

The array of corners detected

patternWasFound bool

Indicates whether the complete board was found or not. One may simply pass the return value of FindChessboardCorners here.

DrawContours(IInputOutputArray, IInputArrayOfArrays, int, MCvScalar, int, LineType, IInputArray, int, Point)

Draws contours outlines or filled contours.

public static void DrawContours(IInputOutputArray image, IInputArrayOfArrays contours, int contourIdx, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, IInputArray hierarchy = null, int maxLevel = 2147483647, Point offset = default)

Parameters

image IInputOutputArray

Image where the contours are to be drawn. Like in any other drawing function, the contours are clipped with the ROI

contours IInputArrayOfArrays

All the input contours. Each contour is stored as a point vector.

contourIdx int

Parameter indicating a contour to draw. If it is negative, all the contours are drawn.

color MCvScalar

Color of the contours

thickness int

Thickness of lines the contours are drawn with. If it is negative the contour interiors are drawn

lineType LineType

Type of the contour segments

hierarchy IInputArray

Optional information about hierarchy. It is only needed if you want to draw only some of the contours

maxLevel int

Maximal level for drawn contours. If 0, only contour is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level.

offset Point

Shift all the point coordinates by the specified value. Useful when the contours were retrieved from an image ROI and the ROI offset needs to be taken into account during rendering.

DrawMarker(IInputOutputArray, Point, MCvScalar, MarkerTypes, int, int, LineType)

Draws a marker on a predefined position in an image.

public static void DrawMarker(IInputOutputArray img, Point position, MCvScalar color, MarkerTypes markerType, int markerSize = 20, int thickness = 1, LineType lineType = LineType.EightConnected)

Parameters

img IInputOutputArray

Image.

position Point

The point where the crosshair is positioned.

color MCvScalar

Line color.

markerType MarkerTypes

The specific type of marker you want to use

markerSize int

The length of the marker axis [default = 20 pixels]

thickness int

Line thickness.

lineType LineType

Type of the line

EMD(IInputArray, IInputArray, DistType, IInputArray, float[], IOutputArray)

Computes the 'minimal work' distance between two weighted point configurations.

public static float EMD(IInputArray signature1, IInputArray signature2, DistType distType, IInputArray cost = null, float[] lowerBound = null, IOutputArray flow = null)

Parameters

signature1 IInputArray

First signature, a size1 x dims + 1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used.

signature2 IInputArray

Second signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra 'dummy' point is added to either signature1 or signature2

distType DistType

Used metric. CV_DIST_L1, CV_DIST_L2 , and CV_DIST_C stand for one of the standard metrics. CV_DIST_USER means that a pre-calculated cost matrix cost is used.

cost IInputArray

User-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.

lowerBound float[]

Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column).

flow IOutputArray

Resultant size1 x size2 flow matrix

Returns

float

The 'minimal work' distance between two weighted point configurations.

EdgePreservingFilter(IInputArray, IOutputArray, EdgePreservingFilterFlag, float, float)

Filtering is a fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications.

public static void EdgePreservingFilter(IInputArray src, IOutputArray dst, EdgePreservingFilterFlag flags = EdgePreservingFilterFlag.RecursFilter, float sigmaS = 60, float sigmaR = 0.4)

Parameters

src IInputArray

Input 8-bit 3-channel image

dst IOutputArray

Output 8-bit 3-channel image

flags EdgePreservingFilterFlag

Edge preserving filters

sigmaS float

Range between 0 and 200

sigmaR float

Range between 0 and 1

Eigen(IInputArray, IOutputArray, IOutputArray)

Computes eigenvalues and eigenvectors of a symmetric matrix

public static void Eigen(IInputArray src, IOutputArray eigenValues, IOutputArray eigenVectors = null)

Parameters

src IInputArray

The input symmetric square matrix, modified during the processing

eigenValues IOutputArray

The output vector of eigenvalues, stored in descending order (the order of eigenvalues and eigenvectors is synchronized, of course)

eigenVectors IOutputArray

The output matrix of eigenvectors, stored as subsequent rows

Examples

To calculate the largest eigenvector/-value set lowindex = highindex = 1. For legacy reasons this function always returns a square matrix the same size as the source matrix with eigenvectors and a vector the length of the source matrix with eigenvalues. The selected eigenvectors/-values are always in the first highindex - lowindex + 1 rows.

Remarks

Currently the function is slower than cvSVD yet less accurate, so if A is known to be positive-definite (for example, it is a covariance matrix) it is recommended to use cvSVD to find the eigenvalues and eigenvectors of A, especially if eigenvectors are not required.

Ellipse(IInputOutputArray, RotatedRect, MCvScalar, int, LineType)

Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees.

public static void Ellipse(IInputOutputArray img, RotatedRect box, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected)

Parameters

img IInputOutputArray

Image

box RotatedRect

Alternative ellipse representation via RotatedRect. This means that the function draws an ellipse inscribed in the rotated rectangle.

color MCvScalar

Ellipse color

thickness int

Thickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.

lineType LineType

Type of the ellipse boundary

Ellipse(IInputOutputArray, Point, Size, double, double, double, MCvScalar, int, LineType, int)

Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees.

public static void Ellipse(IInputOutputArray img, Point center, Size axes, double angle, double startAngle, double endAngle, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image

center Point

Center of the ellipse

axes Size

Length of the ellipse axes

angle double

Rotation angle

startAngle double

Starting angle of the elliptic arc

endAngle double

Ending angle of the elliptic arc

color MCvScalar

Ellipse color

thickness int

Thickness of the ellipse arc

lineType LineType

Type of the ellipse boundary

shift int

Number of fractional bits in the center coordinates and axes' values
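The shift parameter is a fixed-point encoding: with shift fractional bits, an integer coordinate v denotes the sub-pixel position v / 2^shift, which is how the drawing functions achieve sub-pixel accuracy. A minimal sketch with hypothetical helper names:

```python
def to_fixed(value, shift):
    """Encode a sub-pixel coordinate as an integer with `shift` fractional bits."""
    return round(value * (1 << shift))

def from_fixed(fixed, shift):
    """Decode a fixed-point coordinate back to its sub-pixel value."""
    return fixed / (1 << shift)

shift = 4                     # 4 fractional bits -> 1/16-pixel precision
fx = to_fixed(10.25, shift)
print(fx, from_fixed(fx, shift))
```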

EqualizeHist(IInputArray, IOutputArray)

The algorithm normalizes brightness and increases contrast of the image

public static void EqualizeHist(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

The input 8-bit single-channel image

dst IOutputArray

The output image of the same size and the same data type as src
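The normalization works by mapping each gray level through the normalized cumulative histogram, which stretches the occupied intensity range toward the full 0..255 scale. The pure-Python sketch below illustrates the idea on a flat pixel list; it is not the Emgu.CV call and may differ from OpenCV's exact rounding.

```python
def equalize_hist(pixels):
    """Histogram equalization for 8-bit gray values (sketch)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    cdf, running = [0] * 256, 0
    for i in range(256):          # cumulative histogram
        running += hist[i]
        cdf[i] = running
    cdf_min = next(v for v in cdf if v > 0)
    if total == cdf_min:          # flat image: nothing to stretch
        return pixels[:]
    scale = 255.0 / (total - cdf_min)
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# A low-contrast ramp gets stretched across the full 0..255 range.
print(equalize_hist([100, 101, 102, 103]))
```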

Erode(IInputArray, IOutputArray, IInputArray, Point, int, BorderType, MCvScalar)

Erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken: dst = erode(src, element): dst(x,y) = min over (x',y') in element of src(x+x', y+y'). The function supports the in-place mode. Erosion can be applied several (iterations) times. In case of a color image each channel is processed independently.

public static void Erode(IInputArray src, IOutputArray dst, IInputArray element, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image

element IInputArray

Structuring element used for erosion. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used.

anchor Point

Position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.

iterations int

Number of times erosion is applied.

borderType BorderType

Pixel extrapolation method

borderValue MCvScalar

Border value in case of a constant border, use Constant for default

ErrorStr(int)

Returns the textual description for the specified error status code. In case of an unknown status the function returns a null pointer.

public static string ErrorStr(int status)

Parameters

status int

The error status

Returns

string

the textual description for the specified error status code.

EstimateAffine2D(IInputArray, IInputArray, IOutputArray, RobustEstimationAlgorithm, double, int, double, int)

Computes an optimal affine transformation between two 2D point sets.

public static Mat EstimateAffine2D(IInputArray from, IInputArray to, IOutputArray inliners = null, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.Ransac, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10)

Parameters

from IInputArray

First input 2D point set containing (X,Y).

to IInputArray

Second input 2D point set containing (x,y).

inliners IOutputArray

Output vector indicating which points are inliers (1-inlier, 0-outlier).

method RobustEstimationAlgorithm

Robust method used to compute transformation.

ransacReprojThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.

maxIters int

The maximum number of robust method iterations.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

refineIters int

Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.

Returns

Mat

Output 2D affine transformation matrix 2×3 or empty matrix if transformation could not be estimated.

EstimateAffine2D(PointF[], PointF[], IOutputArray, RobustEstimationAlgorithm, double, int, double, int)

Computes an optimal affine transformation between two 2D point sets.

public static Mat EstimateAffine2D(PointF[] from, PointF[] to, IOutputArray inliners = null, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.Ransac, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10)

Parameters

from PointF[]

First input 2D point set containing (X,Y).

to PointF[]

Second input 2D point set containing (x,y).

inliners IOutputArray

Output vector indicating which points are inliers (1-inlier, 0-outlier).

method RobustEstimationAlgorithm

Robust method used to compute transformation.

ransacReprojThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.

maxIters int

The maximum number of robust method iterations.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

refineIters int

Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.

Returns

Mat

Output 2D affine transformation matrix 2×3 or empty matrix if transformation could not be estimated.

EstimateAffine3D(IInputArray, IInputArray, IOutputArray, IOutputArray, double, double)

Computes an optimal affine transformation between two 3D point sets.

public static int EstimateAffine3D(IInputArray src, IInputArray dst, IOutputArray affineEstimate, IOutputArray inliers, double ransacThreshold = 3, double confidence = 0.99)

Parameters

src IInputArray

First input 3D point set.

dst IInputArray

Second input 3D point set.

affineEstimate IOutputArray

Output 3D affine transformation matrix 3 x 4

inliers IOutputArray

Output vector indicating which points are inliers.

ransacThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

Returns

int

1 if the transformation was successfully estimated, 0 otherwise

EstimateAffine3D(MCvPoint3D32f[], MCvPoint3D32f[], out Matrix<double>, out byte[], double, double)

Computes an optimal affine transformation between two 3D point sets.

public static int EstimateAffine3D(MCvPoint3D32f[] src, MCvPoint3D32f[] dst, out Matrix<double> estimate, out byte[] inliers, double ransacThreshold, double confidence)

Parameters

src MCvPoint3D32f[]

First input 3D point set.

dst MCvPoint3D32f[]

Second input 3D point set.

estimate Matrix<double>

Output 3D affine transformation matrix.

inliers byte[]

Output vector indicating which points are inliers.

ransacThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

Returns

int

1 if the transformation was successfully estimated, 0 otherwise

EstimateAffinePartial2D(IInputArray, IInputArray, IOutputArray, RobustEstimationAlgorithm, double, int, double, int)

Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.

public static Mat EstimateAffinePartial2D(IInputArray from, IInputArray to, IOutputArray inliners, RobustEstimationAlgorithm method, double ransacReprojThreshold, int maxIters, double confidence, int refineIters)

Parameters

from IInputArray

First input 2D point set.

to IInputArray

Second input 2D point set.

inliners IOutputArray

Output vector indicating which points are inliers.

method RobustEstimationAlgorithm

Robust method used to compute transformation.

ransacReprojThreshold double

Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.

maxIters int

The maximum number of robust method iterations.

confidence double

Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.

refineIters int

Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method.

Returns

Mat

Output 2D affine transformation (4 degrees of freedom) matrix 2×3 or empty matrix if transformation could not be estimated.
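
The four degrees of freedom are uniform scale, rotation, and 2D translation, so the resulting 2×3 matrix has a constrained form. A small illustrative sketch (plain Python, not the Emgu.CV API) that builds and applies such a matrix:

```python
import math

# A 4-DOF ("partial") affine is a similarity transform: uniform scale s,
# rotation theta, translation (tx, ty). Its 2x3 matrix is
# [[s*cos, -s*sin, tx], [s*sin, s*cos, ty]].

def partial_affine(s, theta, tx, ty):
    c, si = s * math.cos(theta), s * math.sin(theta)
    return [[c, -si, tx], [si, c, ty]]

def apply_affine(A, p):
    x, y = p
    return (A[0][0] * x + A[0][1] * y + A[0][2],
            A[1][0] * x + A[1][1] * y + A[1][2])

# Scale by 2, rotate 90 degrees, translate by (1, 0): (1, 0) -> (1, 2).
A = partial_affine(2.0, math.pi / 2, 1.0, 0.0)
mapped = apply_affine(A, (1.0, 0.0))
```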

EstimateChessboardSharpness(IInputArray, Size, IInputArray, float, bool, IOutputArray)

Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, is a critical parameter for accurate camera calibration. To access these parameters for filtering out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. From these profiles, the number of pixels required to transit from black to white is calculated. The width of this transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels.

public static MCvScalar EstimateChessboardSharpness(IInputArray image, Size patternSize, IInputArray corners, float riseDistance = 0.8, bool vertical = false, IOutputArray sharpness = null)

Parameters

image IInputArray

Gray image used to find chessboard corners

patternSize Size

Size of a found chessboard pattern

corners IInputArray

Corners found by findChessboardCorners(SB)

riseDistance float

A rise distance of 0.8 means the transition width is measured between 10% and 90% of the final signal strength

vertical bool

By default edge responses for horizontal lines are calculated

sharpness IOutputArray

Optional output array with a sharpness value for calculated edge responses

Returns

MCvScalar

Scalar(average sharpness, average min brightness, average max brightness,0)
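
The rise-distance measurement can be illustrated on a single edge profile. This is a plain-Python sketch of the idea (the helper name and sample profile are made up, not part of the Emgu.CV API):

```python
# Width, in pixels, that an edge profile needs to rise from 10% to 90%
# of its final strength (the thresholds implied by riseDistance = 0.8).

def rise_width(profile, rise_distance=0.8):
    lo_v, hi_v = min(profile), max(profile)
    margin = (1.0 - rise_distance) / 2.0          # 0.1 for riseDistance 0.8
    lo_t = lo_v + margin * (hi_v - lo_v)          # 10% level
    hi_t = lo_v + (1.0 - margin) * (hi_v - lo_v)  # 90% level
    first = next(i for i, v in enumerate(profile) if v > lo_t)
    last = next(i for i, v in enumerate(profile) if v >= hi_t)
    return last - first

# A black-to-white edge that transitions over about 3 pixels.
profile = [0, 0, 40, 128, 215, 255, 255]
width = rise_width(profile)
```

A sharply imaged board gives small widths; defocused or motion-blurred boards give wider transitions.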

Exp(IInputArray, IOutputArray)

Calculates the exponent of every element of the input array: dst(I) = exp(src(I)). The maximum relative error is 7e-6. Currently, the function converts denormalized values to zeros on output.

public static void Exp(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

The source array

dst IOutputArray

The destination array, it should have double type or the same type as the source

ExtractChannel(IInputArray, IOutputArray, int)

Extract the specific channel from the image

public static void ExtractChannel(IInputArray src, IOutputArray dst, int coi)

Parameters

src IInputArray

The source image

dst IOutputArray

The destination image that receives the extracted channel

coi int

0 based index of the channel to be extracted

FastNlMeansDenoising(IInputArray, IOutputArray, float, int, int)

Performs image denoising using the Non-local Means Denoising algorithm (http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/) with several computational optimizations. The noise is expected to be Gaussian white noise.

public static void FastNlMeansDenoising(IInputArray src, IOutputArray dst, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

src IInputArray

Input 8-bit 1-channel, 2-channel or 3-channel image.

dst IOutputArray

Output image with the same size and type as src.

h float

Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd.

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater searchWindowSize means greater denoising time.
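
The core idea can be shown on a 1D signal: each sample becomes a weighted average of all samples, with weights decaying in the squared distance between small patches around them. This is a simplified illustrative sketch, not the optimized OpenCV implementation:

```python
import math

# 1D Non-local Means sketch: h controls the filter strength via the
# weight falloff exp(-patch_distance^2 / h^2).

def nl_means_1d(signal, h=3.0, patch=1):
    n = len(signal)
    out = []
    for i in range(n):
        wsum, acc = 0.0, 0.0
        for j in range(n):
            d2 = 0.0
            for k in range(-patch, patch + 1):   # compare small patches
                a = signal[min(max(i + k, 0), n - 1)]
                b = signal[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out

smoothed = nl_means_1d([10, 12, 11, 50, 10, 11, 12], h=5.0)
```

Because weights come from patch similarity rather than spatial distance alone, similar structures anywhere in the signal reinforce each other while outliers contribute little.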

FastNlMeansDenoisingColored(IInputArray, IOutputArray, float, float, int, int)

Performs image denoising using the Non-local Means Denoising algorithm (modified for color images): http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. The noise is expected to be Gaussian white noise. The function converts the image to the CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the fastNlMeansDenoising function.

public static void FastNlMeansDenoisingColored(IInputArray src, IOutputArray dst, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)

Parameters

src IInputArray

Input 8-bit 1-channel, 2-channel or 3-channel image.

dst IOutputArray

Output image with the same size and type as src.

h float

Parameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.

hColor float

The same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors.

templateWindowSize int

Size in pixels of the template patch that is used to compute weights. Should be odd.

searchWindowSize int

Size in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: a greater searchWindowSize means greater denoising time.

FillConvexPoly(IInputOutputArray, IInputArray, MCvScalar, LineType, int)

Fills a convex polygon's interior. This function is much faster than FillPoly and can fill not only convex polygons but any monotonic polygon, i.e. a polygon whose contour intersects every horizontal line (scan line) at most twice.

public static void FillConvexPoly(IInputOutputArray img, IInputArray points, MCvScalar color, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image

points IInputArray

Array of pointers to a single polygon

color MCvScalar

Polygon color

lineType LineType

Type of the polygon boundaries

shift int

Number of fractional bits in the vertex coordinates

FillPoly(IInputOutputArray, IInputArray, MCvScalar, LineType, int, Point)

Fills the area bounded by one or more polygons.

public static void FillPoly(IInputOutputArray img, IInputArray points, MCvScalar color, LineType lineType = LineType.EightConnected, int shift = 0, Point offset = default)

Parameters

img IInputOutputArray

Image.

points IInputArray

Array of polygons where each polygon is represented as an array of points.

color MCvScalar

Polygon color

lineType LineType

Type of the polygon boundaries.

shift int

Number of fractional bits in the vertex coordinates.

offset Point

Optional offset of all points of the contours.

Filter2D(IInputArray, IOutputArray, IInputArray, Point, double, BorderType)

Applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image.

public static void Filter2D(IInputArray src, IOutputArray dst, IInputArray kernel, Point anchor, double delta = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

The source image

dst IOutputArray

The destination image

kernel IInputArray

Convolution kernel, single-channel floating point matrix. If you want to apply different kernels to different channels, split the image using cvSplit into separate color planes and process them individually

anchor Point

The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center

delta double

The optional value added to the filtered pixels before storing them in dst

borderType BorderType

The pixel extrapolation method.
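
What Filter2D computes can be sketched in a few lines: a correlation of the image with the kernel, here using replicated edge pixels for the border (one of the extrapolation modes selectable via borderType). An illustrative plain-Python version, not the Emgu.CV call:

```python
# Naive 2D correlation with a replicate border, anchor at kernel center.

def filter2d(img, kernel):
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    ay, ax = kh // 2, kw // 2            # anchor at the kernel center
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    sy = min(max(y + ky - ay, 0), h - 1)  # replicate border
                    sx = min(max(x + kx - ax, 0), w - 1)
                    s += img[sy][sx] * kernel[ky][kx]
            out[y][x] = s
    return out

# A 3x3 box filter on a constant image leaves it unchanged.
box = [[1 / 9.0] * 3 for _ in range(3)]
result = filter2d([[9.0] * 4 for _ in range(4)], box)
```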

FilterSpeckles(IInputOutputArray, double, int, double, IInputOutputArray)

Filters off small noise blobs (speckles) in the disparity map.

public static void FilterSpeckles(IInputOutputArray img, double newVal, int maxSpeckleSize, double maxDiff, IInputOutputArray buf = null)

Parameters

img IInputOutputArray

The input 16-bit signed disparity image

newVal double

The disparity value used to paint-off the speckles

maxSpeckleSize int

The maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm

maxDiff double

Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.

buf IInputOutputArray

The optional temporary buffer to avoid memory allocation within the function.

Find4QuadCornerSubpix(IInputArray, IInputOutputArray, Size)

Finds subpixel-accurate positions of the chessboard corners

public static bool Find4QuadCornerSubpix(IInputArray image, IInputOutputArray corners, Size regionSize)

Parameters

image IInputArray

Source chessboard view; it must be 8-bit grayscale or color image

corners IInputOutputArray

Pointer to the output array of corners(PointF) detected

regionSize Size

region size

Returns

bool

True if successful

FindChessboardCorners(IInputArray, Size, IOutputArray, CalibCbType)

Attempts to determine whether the input image is a view of the chessboard pattern and locate internal chessboard corners

public static bool FindChessboardCorners(IInputArray image, Size patternSize, IOutputArray corners, CalibCbType flags = CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage)

Parameters

image IInputArray

Source chessboard view; it must be 8-bit grayscale or color image

patternSize Size

The number of inner corners per chessboard row and column

corners IOutputArray

Pointer to the output array of corners(PointF) detected

flags CalibCbType

Various operation flags

Returns

bool

True if all the corners have been found and they have been placed in a certain order (row by row, left to right in every row); otherwise, if the function fails to find all the corners or to reorder them, it returns false

Remarks

The coordinates detected are approximate, and to determine their position more accurately, the user may use the function cvFindCornerSubPix

FindChessboardCornersSB(IInputArray, Size, IOutputArray, CalibCbType)

Finds the positions of internal corners of the chessboard using a sector based approach.

public static bool FindChessboardCornersSB(IInputArray image, Size patternSize, IOutputArray corners, CalibCbType flags = CalibCbType.Default)

Parameters

image IInputArray

Source chessboard view. It must be an 8-bit grayscale or color image.

patternSize Size

Number of inner corners per chessboard row and column ( patternSize = cv::Size(points_per_row, points_per_column) = cv::Size(columns, rows) ).

corners IOutputArray

Output array of detected corners.

flags CalibCbType

Various operation flags

Returns

bool

True if chessboard corners found

FindCirclesGrid(IInputArray, Size, IOutputArray, CalibCgType, Feature2D)

Finds centers in the grid of circles

public static bool FindCirclesGrid(IInputArray image, Size patternSize, IOutputArray centers, CalibCgType flags, Feature2D featureDetector)

Parameters

image IInputArray

Source chessboard view

patternSize Size

The number of inner circles per grid row and column

centers IOutputArray

output array of detected centers.

flags CalibCgType

Various operation flags

featureDetector Feature2D

The feature detector. Use a SimpleBlobDetector for default

Returns

bool

True if grid found.

FindCirclesGrid(Image<Gray, byte>, Size, CalibCgType, Feature2D)

Finds centers in the grid of circles

public static PointF[] FindCirclesGrid(Image<Gray, byte> image, Size patternSize, CalibCgType flags, Feature2D featureDetector)

Parameters

image Image<Gray, byte>

Source chessboard view

patternSize Size

The number of inner circles per grid row and column

flags CalibCgType

Various operation flags

featureDetector Feature2D

The feature detector. Use a SimpleBlobDetector for default

Returns

PointF[]

The center of circles detected if the chess board pattern is found, otherwise null is returned

FindContourTree(IInputOutputArray, IOutputArray, ChainApproxMethod, Point)

Retrieves contours from the binary image as a contour tree. It is provided as a convenient way to obtain the hierarchy value as int[,]. The function modifies the source image content

public static int[,] FindContourTree(IInputOutputArray image, IOutputArray contours, ChainApproxMethod method, Point offset = default)

Parameters

image IInputOutputArray

The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content

contours IOutputArray

Detected contours. Each contour is stored as a vector of points.

method ChainApproxMethod

Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation).

offset Point

Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context

Returns

int[,]

The contour hierarchy
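
The hierarchy layout follows OpenCV's convention: each row of the int[,] holds four indices for one contour — [next sibling, previous sibling, first child, parent] — with -1 meaning "none". A plain-Python sketch of walking the top level of such a tree (the helper name and sample hierarchy are illustrative):

```python
# List the outermost contours by following "next sibling" links from the
# first contour whose parent index is -1.

def top_level_contours(hierarchy):
    roots = [i for i, h in enumerate(hierarchy) if h[3] == -1]
    if not roots:
        return []
    out, i = [], roots[0]
    while i != -1:
        out.append(i)
        i = hierarchy[i][0]   # index of the next sibling, or -1
    return out

# Contour 0 contains contour 1; contour 2 is contour 0's next sibling.
hierarchy = [
    [2, -1, 1, -1],   # contour 0: next=2, first child=1, no parent
    [-1, -1, -1, 0],  # contour 1: nested inside contour 0
    [-1, 0, -1, -1],  # contour 2: previous=0, no children
]
top = top_level_contours(hierarchy)
```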

FindContours(IInputOutputArray, IOutputArray, IOutputArray, RetrType, ChainApproxMethod, Point)

Retrieves contours from the binary image. If no contours are detected (for example, the image is completely black), the contours output is left empty. The sample in the DrawContours discussion shows how to use contours for connected component detection. Contours can also be used for shape analysis and object recognition - see squares.c in the OpenCV sample directory. The function modifies the source image content

public static void FindContours(IInputOutputArray image, IOutputArray contours, IOutputArray hierarchy, RetrType mode, ChainApproxMethod method, Point offset = default)

Parameters

image IInputOutputArray

The source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is, the image is treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content

contours IOutputArray

Detected contours. Each contour is stored as a vector of points.

hierarchy IOutputArray

Optional output vector, containing information about the image topology.

mode RetrType

Retrieval mode

method ChainApproxMethod

Approximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation).

offset Point

Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context

FindEssentialMat(IInputArray, IInputArray, IInputArray, FmType, double, double, IOutputArray)

Calculates an essential matrix from the corresponding points in two images.

public static Mat FindEssentialMat(IInputArray points1, IInputArray points2, IInputArray cameraMatrix, FmType method = FmType.Ransac, double prob = 0.999, double threshold = 1, IOutputArray mask = null)

Parameters

points1 IInputArray

Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).

points2 IInputArray

Array of the second image points of the same size and format as points1

cameraMatrix IInputArray

Camera matrix K=[[fx 0 cx][0 fy cy][0 0 1]]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.

method FmType

Method for computing a fundamental matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm

prob double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

threshold double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

mask IOutputArray

Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.

Returns

Mat

The essential mat
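
The essential matrix E relates normalized points in the two views through the epipolar constraint x2^T E x1 = 0; for a pure translation t with no rotation, E is the skew-symmetric cross-product matrix [t]_x. A small illustrative check in plain Python (not the Emgu.CV API):

```python
# Evaluate the epipolar residual x2^T * E * x1 for homogeneous points.

def epipolar_residual(E, x1, x2):
    Ex1 = [sum(E[r][c] * x1[c] for c in range(3)) for r in range(3)]
    return sum(x2[r] * Ex1[r] for r in range(3))

# Camera translated along +X: E = [t]_x with t = (1, 0, 0).
E = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]

# A consistent correspondence along the baseline gives a zero residual.
r = epipolar_residual(E, (0.0, 0.0, 1.0), (1.0, 0.0, 1.0))
```

In the RANSAC loop, correspondences whose residual-derived distance to the epipolar line exceeds the threshold are marked 0 in the output mask.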

FindFundamentalMat(IInputArray, IInputArray, FmType, double, double, IOutputArray)

Calculates the fundamental matrix using one of four methods; the underlying computation finds 1 or 3 fundamental matrices, or none if no matrix can be found.

public static Mat FindFundamentalMat(IInputArray points1, IInputArray points2, FmType method = FmType.Ransac, double param1 = 3, double param2 = 0.99, IOutputArray mask = null)

Parameters

points1 IInputArray

Array of N points from the first image. The point coordinates should be floating-point (single or double precision).

points2 IInputArray

Array of the second image points of the same size and format as points1

method FmType

Method for computing the fundamental matrix

param1 double

Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.

param2 double

Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.

mask IOutputArray

The optional pointer to output array of N elements, every element of which is set to 0 for outliers and to 1 for the "inliers", i.e. points that comply well with the estimated epipolar geometry. The array is computed only in RANSAC and LMedS methods. For other methods it is set to all 1.

Returns

Mat

The calculated fundamental matrix

FindHomography(IInputArray, IInputArray, RobustEstimationAlgorithm, double, IOutputArray)

Finds perspective transformation H=||hij|| between the source and the destination planes

public static Mat FindHomography(IInputArray srcPoints, IInputArray dstPoints, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.AllPoints, double ransacReprojThreshold = 3, IOutputArray mask = null)

Parameters

srcPoints IInputArray

Point coordinates in the original plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates), where N is the number of points.

dstPoints IInputArray

Point coordinates in the destination plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates)

method RobustEstimationAlgorithm

The type of the method

ransacReprojThreshold double

The maximum allowed re-projection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3

mask IOutputArray

The optional output mask set by a robust method (RANSAC or LMEDS).

Returns

Mat

Output 3x3 homography matrix. Homography matrix is determined up to a scale, thus it is normalized to make h33=1
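
Applying the resulting matrix is a projective mapping: multiply (x, y, 1) by H and divide by the third coordinate, which is why H is only determined up to scale and can be normalized to h33 = 1. An illustrative plain-Python sketch (not the Emgu.CV API):

```python
# Map a 2D point through a 3x3 homography with perspective division.

def apply_homography(H, p):
    x, y = p
    v = [H[r][0] * x + H[r][1] * y + H[r][2] for r in range(3)]
    return (v[0] / v[2], v[1] / v[2])

# A pure projective tilt: points further along x are pulled inward.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.5, 0.0, 1.0]]
mapped = apply_homography(H, (2.0, 2.0))
```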

FindHomography(PointF[], PointF[], RobustEstimationAlgorithm, double, IOutputArray)

Finds perspective transformation H=||h_ij|| between the source and the destination planes

public static Mat FindHomography(PointF[] srcPoints, PointF[] dstPoints, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.AllPoints, double ransacReprojThreshold = 3, IOutputArray mask = null)

Parameters

srcPoints PointF[]

Point coordinates in the original plane

dstPoints PointF[]

Point coordinates in the destination plane

method RobustEstimationAlgorithm

FindHomography method

ransacReprojThreshold double

The maximum allowed reprojection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3

mask IOutputArray

Optional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.

Returns

Mat

The 3x3 homography matrix if found. Null if not found.

FindNonZero(IInputArray, IOutputArray)

Find the location of the non-zero pixel

public static void FindNonZero(IInputArray src, IOutputArray idx)

Parameters

src IInputArray

The source array

idx IOutputArray

The output array where the locations of the non-zero pixels are stored

FindTransformECC(IInputArray, IInputArray, IInputOutputArray, MotionType, MCvTermCriteria, IInputArray)

Finds the geometric transform (warp) between two images in terms of the ECC criterion

public static double FindTransformECC(IInputArray templateImage, IInputArray inputImage, IInputOutputArray warpMatrix, MotionType motionType, MCvTermCriteria criteria, IInputArray inputMask = null)

Parameters

templateImage IInputArray

single-channel template image; CV_8U or CV_32F array.

inputImage IInputArray

single-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.

warpMatrix IInputOutputArray

floating-point 2×3 or 3×3 mapping matrix (warp).

motionType MotionType

Specifies the type of motion. Use Affine for the default

criteria MCvTermCriteria

Specifies the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Default values of 50 iterations and an epsilon of 0.001 can be used.

inputMask IInputArray

An optional mask to indicate valid values of inputImage.

Returns

double

The final enhanced correlation coefficient, that is the correlation coefficient between the template image and the final warped input image.
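
The returned value is a correlation coefficient between the zero-mean, normalized template and the warped input, so 1.0 means a perfect match and the measure is invariant to gain and offset changes. A plain-Python illustration of that coefficient on 1D data (not the ECC algorithm itself):

```python
import math

# Pearson-style correlation coefficient of two equal-length sequences.

def correlation_coefficient(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

# A pure gain change (x10) leaves the coefficient at 1.0.
cc = correlation_coefficient([1, 2, 3, 4], [10, 20, 30, 40])
```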

FitEllipse(IInputArray)

Fits an ellipse around a set of 2D points.

public static RotatedRect FitEllipse(IInputArray points)

Parameters

points IInputArray

Input 2D point set

Returns

RotatedRect

The ellipse that fits best (in least-squares sense) to a set of 2D points

FitEllipseAMS(IInputArray)

The function calculates the ellipse that fits a set of 2D points. The Approximate Mean Square (AMS) is used.

public static RotatedRect FitEllipseAMS(IInputArray points)

Parameters

points IInputArray

Input 2D point set

Returns

RotatedRect

The rotated rectangle in which the ellipse is inscribed

FitEllipseDirect(IInputArray)

The function calculates the ellipse that fits a set of 2D points. The Direct least square (Direct) method by [58] is used.

public static RotatedRect FitEllipseDirect(IInputArray points)

Parameters

points IInputArray

Input 2D point set

Returns

RotatedRect

The rotated rectangle in which the ellipse is inscribed

FitLine(IInputArray, IOutputArray, DistType, double, double, double)

Fits line to 2D or 3D point set

public static void FitLine(IInputArray points, IOutputArray line, DistType distType, double param, double reps, double aeps)

Parameters

points IInputArray

Input vector of 2D or 3D points, stored in std::vector or Mat.

line IOutputArray

Output line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line.

distType DistType

The distance used for fitting

param double

Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen

reps double

Sufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default

aeps double

Sufficient accuracy for angle, 0.01 would be a good default
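
For the L2 distance type, 2D line fitting reduces to total least squares: the direction is the principal axis of the centered points, which in 2D has the closed form theta = 0.5 * atan2(2*Sxy, Sxx - Syy), and (x0, y0) is the centroid — matching the (vx, vy, x0, y0) output layout. An illustrative plain-Python sketch (not the Emgu.CV call, which also supports robust distance types):

```python
import math

# Total-least-squares 2D line fit: returns (vx, vy, x0, y0).

def fit_line_l2(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points)
    syy = sum((p[1] - my) ** 2 for p in points)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal axis angle
    return (math.cos(theta), math.sin(theta), mx, my)

# Collinear points on y = 2x: direction proportional to (1, 2).
vx, vy, x0, y0 = fit_line_l2([(0, 0), (1, 2), (2, 4), (3, 6)])
```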

FitLine(PointF[], out PointF, out PointF, DistType, double, double, double)

Fits line to 2D or 3D point set

public static void FitLine(PointF[] points, out PointF direction, out PointF pointOnLine, DistType distType, double param, double reps, double aeps)

Parameters

points PointF[]

Input vector of 2D points.

direction PointF

A normalized vector collinear to the line

pointOnLine PointF

A point on the line.

distType DistType

The distance used for fitting

param double

Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen

reps double

Sufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default

aeps double

Sufficient accuracy for angle, 0.01 would be a good default

Flip(IInputArray, IOutputArray, FlipType)

Flips the array in one of 3 different ways (row and column indices are 0-based)

public static void Flip(IInputArray src, IOutputArray dst, FlipType flipType)

Parameters

src IInputArray

Source array.

dst IOutputArray

Destination array.

flipType FlipType

Specifies how to flip the array.

FlipND(IInputArray, IOutputArray, int)

Flips an n-dimensional array around the given axis

public static void FlipND(IInputArray src, IOutputArray dst, int axis)

Parameters

src IInputArray

Input array

dst IOutputArray

Output array that has the same shape as src

axis int

Axis that performs a flip on. 0 <= axis < src.dims.

FloodFill(IInputOutputArray, IInputOutputArray, Point, MCvScalar, out Rectangle, MCvScalar, MCvScalar, Connectivity, FloodFillType)

Fills a connected component with given color.

public static int FloodFill(IInputOutputArray src, IInputOutputArray mask, Point seedPoint, MCvScalar newVal, out Rectangle rect, MCvScalar loDiff, MCvScalar upDiff, Connectivity connectivity = Connectivity.FourConnected, FloodFillType flags = FloodFillType.Default)

Parameters

src IInputOutputArray

Input 1- or 3-channel, 8-bit or floating-point image. It is modified by the function unless CV_FLOODFILL_MASK_ONLY flag is set.

mask IInputOutputArray

Operation mask; should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. If not IntPtr.Zero, the function uses and updates the mask, so the user takes responsibility for initializing the mask content. Floodfilling can't go across non-zero pixels in the mask; for example, an edge detector output can be used as a mask to stop filling at edges. It is also possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: because the mask is larger than the filled image, a pixel in the mask that corresponds to the (x,y) pixel in the image will have coordinates (x+1,y+1).

seedPoint Point

The starting point.

newVal MCvScalar

New value of repainted domain pixels.

rect Rectangle

Output parameter set by the function to the minimum bounding rectangle of the repainted domain.

loDiff MCvScalar

Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value.

upDiff MCvScalar

Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value.

connectivity Connectivity

Flood fill connectivity

flags FloodFillType

The operation flags.

Returns

int

The area of the connected component
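A minimal usage sketch of a mask-based flood fill (the image contents, seed point, and tolerance values here are illustrative, and the Emgu.CV package with its native binaries is assumed):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// A black 8-bit image; the fill spreads from the seed across pixels
// whose values stay within loDiff/upDiff of their neighbors.
using (Mat img = new Mat(100, 100, DepthType.Cv8U, 1))
using (Mat mask = new Mat(102, 102, DepthType.Cv8U, 1)) // 2 px wider and taller than img
{
    img.SetTo(new MCvScalar(0));
    mask.SetTo(new MCvScalar(0)); // the caller must initialize the mask
    Rectangle boundingRect;
    int area = CvInvoke.FloodFill(
        img, mask, new Point(10, 10),
        new MCvScalar(255),   // newVal: value for the repainted region
        out boundingRect,
        new MCvScalar(5),     // loDiff
        new MCvScalar(5));    // upDiff
    // area is the number of repainted pixels; boundingRect bounds them.
}
```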

GaussianBlur(IInputArray, IOutputArray, Size, double, double, BorderType)

Blurs an image using a Gaussian filter.

public static void GaussianBlur(IInputArray src, IOutputArray dst, Size ksize, double sigmaX, double sigmaY = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

input image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

dst IOutputArray

output image of the same size and type as src.

ksize Size

Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros, in which case they are computed from sigma* .

sigmaX double

Gaussian kernel standard deviation in X direction.

sigmaY double

Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height , respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.

borderType BorderType

Pixel extrapolation method
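A minimal sketch (the file name and kernel size are illustrative; sigmaX = 0 lets OpenCV derive both sigmas from the kernel size):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
using (Mat dst = new Mat())
{
    // 5x5 Gaussian kernel; with sigmaX = 0, sigma is computed from ksize.
    CvInvoke.GaussianBlur(src, dst, new Size(5, 5), 0);
}
```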

Gemm(IInputArray, IInputArray, double, IInputArray, double, IOutputArray, GemmType)

Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T

public static void Gemm(IInputArray src1, IInputArray src2, double alpha, IInputArray src3, double beta, IOutputArray dst, GemmType tAbc = GemmType.Default)

Parameters

src1 IInputArray

The first source array.

src2 IInputArray

The second source array.

alpha double

The scalar

src3 IInputArray

The third source array (shift). Can be null, if there is no shift.

beta double

The scalar

dst IOutputArray

The destination array.

tAbc GemmType

The Gemm operation type
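A minimal sketch computing a plain 2x2 matrix product via Gemm (the matrix values are illustrative; passing null for src3 with beta = 0 means no shift term):

```csharp
using Emgu.CV;

// dst = 1.0 * a * b + 0.0 * src3 (src3 omitted), i.e. a plain matrix product.
using (Matrix<double> a = new Matrix<double>(new double[,] { { 1, 2 }, { 3, 4 } }))
using (Matrix<double> b = new Matrix<double>(new double[,] { { 5, 6 }, { 7, 8 } }))
using (Matrix<double> dst = new Matrix<double>(2, 2))
{
    CvInvoke.Gemm(a, b, 1.0, null, 0.0, dst);
}
```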

GetAffineTransform(IInputArray, IOutputArray)

Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2.

public static Mat GetAffineTransform(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

Pointer to an array of PointF, Coordinates of 3 triangle vertices in the source image.

dst IOutputArray

Pointer to an array of PointF, Coordinates of the 3 corresponding triangle vertices in the destination image

Returns

Mat

The destination 2x3 matrix

GetAffineTransform(PointF[], PointF[])

Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2.

public static Mat GetAffineTransform(PointF[] src, PointF[] dest)

Parameters

src PointF[]

Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used

dest PointF[]

Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used

Returns

Mat

The 2x3 rotation matrix that defines the Affine transform
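Three point correspondences fully determine a 2D affine transform. A minimal sketch (the point values are illustrative):

```csharp
using System.Drawing;
using Emgu.CV;

// Map the unit triangle onto a triangle scaled by 2.
PointF[] srcTri = { new PointF(0, 0), new PointF(1, 0), new PointF(0, 1) };
PointF[] dstTri = { new PointF(0, 0), new PointF(2, 0), new PointF(0, 2) };
using (Mat m = CvInvoke.GetAffineTransform(srcTri, dstTri))
{
    // m is the 2x3 affine matrix, usable with CvInvoke.WarpAffine.
}
```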

GetCvStructSizes()

This function retrieves the OpenCV structure sizes in unmanaged code

public static CvStructSizes GetCvStructSizes()

Returns

CvStructSizes

The structure that holds the OpenCV structure sizes

GetDefaultNewCameraMatrix(IInputArray, Size, bool)

Returns the default new camera matrix.

public static Mat GetDefaultNewCameraMatrix(IInputArray cameraMatrix, Size imgsize = default, bool centerPrincipalPoint = false)

Parameters

cameraMatrix IInputArray

Input camera matrix.

imgsize Size

Camera view image size in pixels.

centerPrincipalPoint bool

Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not.

Returns

Mat

The default new camera matrix.

GetDepthType(DepthType)

Get the corresponding depth type

public static Type GetDepthType(DepthType t)

Parameters

t DepthType

The opencv depth type

Returns

Type

The equivalent depth type

GetDepthType(Type)

Get the corresponding opencv depth type

public static DepthType GetDepthType(Type t)

Parameters

t Type

The element type

Returns

DepthType

The equivalent opencv depth type

GetDerivKernels(IOutputArray, IOutputArray, int, int, int, bool, DepthType)

Returns filter coefficients for computing spatial image derivatives.

public static void GetDerivKernels(IOutputArray kx, IOutputArray ky, int dx, int dy, int ksize, bool normalize = false, DepthType ktype = DepthType.Cv32F)

Parameters

kx IOutputArray

Output matrix of row filter coefficients.

ky IOutputArray

Output matrix of column filter coefficients.

dx int

Derivative order in respect of x.

dy int

Derivative order in respect of y.

ksize int

Aperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7.

normalize bool

Flag indicating whether to normalize (scale down) the filter coefficients or not.

ktype DepthType

Type of filter coefficients. It can be CV_32F or CV_64F .

GetErrMode()

Returns the current error mode

public static extern int GetErrMode()

Returns

int

The error mode

GetErrStatus()

Returns the current error status - the value set with the last cvSetErrStatus call. Note that in Leaf mode the program terminates immediately after an error occurs, so to always get control after the function call, one should call cvSetErrMode and set the Parent or Silent error mode.

public static extern int GetErrStatus()

Returns

int

The current error status

GetGaborKernel(Size, double, double, double, double, double, DepthType)

Returns Gabor filter coefficients.

public static Mat GetGaborKernel(Size ksize, double sigma, double theta, double lambd, double gamma, double psi = 1.5707963267948966, DepthType ktype = DepthType.Cv64F)

Parameters

ksize Size

Size of the filter returned.

sigma double

Standard deviation of the gaussian envelope.

theta double

Orientation of the normal to the parallel stripes of a Gabor function.

lambd double

Wavelength of the sinusoidal factor.

gamma double

Spatial aspect ratio.

psi double

Phase offset.

ktype DepthType

Type of filter coefficients. It can be CV_32F or CV_64F .

Returns

Mat

Gabor filter coefficients.

GetGaussianKernel(int, double, DepthType)

Returns Gaussian filter coefficients.

public static Mat GetGaussianKernel(int ksize, double sigma, DepthType ktype = DepthType.Cv64F)

Parameters

ksize int

Aperture size. It should be odd and positive.

sigma double

Gaussian standard deviation. If it is non-positive, it is computed from ksize.

ktype DepthType

Type of filter coefficients. It can be CV_32F or CV_64F

Returns

Mat

Gaussian filter coefficients.
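A minimal sketch (the kernel size is illustrative; a non-positive sigma asks OpenCV to compute it from ksize):

```csharp
using Emgu.CV;

// sigma <= 0: computed from the kernel size.
using (Mat kernel = CvInvoke.GetGaussianKernel(5, -1))
{
    // kernel is a 5x1 CV_64F column vector of coefficients that sum to 1.
}
```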

GetModuleFormatString()

Get the module format string.

public static string GetModuleFormatString()

Returns

string

On Windows, "{0}.dll" will be returned; on Linux, "lib{0}.so" will be returned; otherwise "{0}" is returned.

GetOptimalDFTSize(int)

Returns the minimum number N that is greater than or equal to vecsize, such that the DFT of a vector of size N can be computed fast. In the current implementation N=2^p x 3^q x 5^r for some p, q, r.

public static int GetOptimalDFTSize(int vecsize)

Parameters

vecsize int

Vector size

Returns

int

The minimum number N that is greater than or equal to vecsize, such that the DFT of a vector of size N can be computed fast. In the current implementation N=2^p x 3^q x 5^r for some p, q, r.
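A minimal sketch of the typical use: padding data to an efficient length before a DFT (the input size is illustrative):

```csharp
using Emgu.CV;

// Ask for a fast length >= 1003, then pad rows/cols to it before CvInvoke.Dft.
int padded = CvInvoke.GetOptimalDFTSize(1003);
// padded is of the form 2^p * 3^q * 5^r and is at least 1003.
```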

GetOptimalNewCameraMatrix(IInputArray, IInputArray, Size, double, Size, ref Rectangle, bool)

Returns the new camera matrix based on the free scaling parameter.

public static Mat GetOptimalNewCameraMatrix(IInputArray cameraMatrix, IInputArray distCoeffs, Size imageSize, double alpha, Size newImgSize, ref Rectangle validPixROI, bool centerPrincipalPoint = false)

Parameters

cameraMatrix IInputArray

Input camera matrix.

distCoeffs IInputArray

Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.

imageSize Size

Original image size.

alpha double

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).

newImgSize Size

Image size after rectification. By default, it is set to imageSize.

validPixROI Rectangle

output rectangle that outlines all-good-pixels region in the undistorted image.

centerPrincipalPoint bool

indicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.

Returns

Mat

The new camera matrix based on the free scaling parameter.

GetPerspectiveTransform(IInputArray, IInputArray)

Calculates the matrix of a perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3.

public static Mat GetPerspectiveTransform(IInputArray src, IInputArray dst)

Parameters

src IInputArray

Coordinates of 4 quadrangle vertices in the source image

dst IInputArray

Coordinates of the 4 corresponding quadrangle vertices in the destination image

Returns

Mat

The perspective transform matrix

GetPerspectiveTransform(PointF[], PointF[])

Calculates the matrix of a perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3.

public static Mat GetPerspectiveTransform(PointF[] src, PointF[] dest)

Parameters

src PointF[]

Coordinates of 4 quadrangle vertices in the source image

dest PointF[]

Coordinates of the 4 corresponding quadrangle vertices in the destination image

Returns

Mat

The 3x3 Homography matrix
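Four point correspondences determine a perspective (homography) transform. A minimal sketch (the quadrangle coordinates are illustrative):

```csharp
using System.Drawing;
using Emgu.CV;

PointF[] srcQuad = { new PointF(0, 0), new PointF(100, 0), new PointF(100, 100), new PointF(0, 100) };
PointF[] dstQuad = { new PointF(10, 5), new PointF(90, 10), new PointF(95, 95), new PointF(5, 90) };
using (Mat h = CvInvoke.GetPerspectiveTransform(srcQuad, dstQuad))
{
    // h is the 3x3 homography, usable with CvInvoke.WarpPerspective.
}
```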

GetRectSubPix(IInputArray, Size, PointF, IOutputArray, DepthType)

Extracts pixels from src: dst(x, y) = src(x + center.x - (width(dst)-1)*0.5, y + center.y - (height(dst)-1)*0.5) where the values of pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multiple-channel images is processed independently. Whereas the rectangle center must be inside the image, the whole rectangle may be partially occluded. In this case, the replication border mode is used to get pixel values beyond the image boundaries.

public static void GetRectSubPix(IInputArray image, Size patchSize, PointF center, IOutputArray patch, DepthType patchType = DepthType.Default)

Parameters

image IInputArray

Source image

patchSize Size

Size of the extracted patch.

center PointF

Floating point coordinates of the extracted rectangle center within the source image. The center must be inside the image.

patch IOutputArray

Extracted rectangle

patchType DepthType

Depth of the extracted pixels. By default, they have the same depth as image.

GetRotationMatrix2D(PointF, double, double, IOutputArray)

Calculates rotation matrix

public static void GetRotationMatrix2D(PointF center, double angle, double scale, IOutputArray mapMatrix)

Parameters

center PointF

Center of the rotation in the source image.

angle double

The rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be at the top-left corner).

scale double

Isotropic scale factor

mapMatrix IOutputArray

Pointer to the destination 2x3 matrix
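A minimal sketch rotating an image about its center (the file name and angle are illustrative):

```csharp
using System.Drawing;
using Emgu.CV;

using (Mat src = CvInvoke.Imread("input.png"))
using (Mat rotation = new Mat())
using (Mat dst = new Mat())
{
    PointF center = new PointF(src.Width / 2f, src.Height / 2f);
    // 30 degrees counter-clockwise, no scaling.
    CvInvoke.GetRotationMatrix2D(center, 30, 1.0, rotation);
    CvInvoke.WarpAffine(src, dst, rotation, src.Size);
}
```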

GetStructuringElement(ElementShape, Size, Point)

Returns a structuring element of the specified size and shape for morphological operations.

public static Mat GetStructuringElement(ElementShape shape, Size ksize, Point anchor)

Parameters

shape ElementShape

Element shape

ksize Size

Size of the structuring element.

anchor Point

Anchor position within the element. The value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.

Returns

Mat

The structuring element
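A minimal sketch building an elliptical element and using it for a dilation (the file name, element size, and iteration count are illustrative):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// (-1, -1) places the anchor at the element's center.
using (Mat element = CvInvoke.GetStructuringElement(
    ElementShape.Ellipse, new Size(5, 5), new Point(-1, -1)))
using (Mat src = CvInvoke.Imread("binary.png", ImreadModes.Grayscale))
using (Mat dst = new Mat())
{
    CvInvoke.Dilate(src, dst, element, new Point(-1, -1), 1,
        BorderType.Constant, CvInvoke.MorphologyDefaultBorderValue);
}
```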

GetTextSize(string, FontFace, double, int, ref int)

Calculates the width and height of a text string.

public static Size GetTextSize(string text, FontFace fontFace, double fontScale, int thickness, ref int baseLine)

Parameters

text string

Input text string.

fontFace FontFace

Font to use

fontScale double

Font scale factor that is multiplied by the font-specific base size.

thickness int

Thickness of lines used to render the text.

baseLine int

Y-coordinate of the baseline relative to the bottom-most text point.

Returns

Size

The size of a box that contains the specified text.

GetWindowProperty(string, WindowPropertyFlags)

Provides parameters of a window.

public static double GetWindowProperty(string name, WindowPropertyFlags propId)

Parameters

name string

Name of the window.

propId WindowPropertyFlags

Window property to retrieve.

Returns

double

Value of the window property

GrabCut(IInputArray, IInputOutputArray, Rectangle, IInputOutputArray, IInputOutputArray, int, GrabcutInitType)

The grab cut algorithm for segmentation

public static void GrabCut(IInputArray img, IInputOutputArray mask, Rectangle rect, IInputOutputArray bgdModel, IInputOutputArray fgdModel, int iterCount, GrabcutInitType type)

Parameters

img IInputArray

The 8-bit 3-channel image to be segmented

mask IInputOutputArray

Input/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values: 0 (GC_BGD) defines an obvious background pixel. 1 (GC_FGD) defines an obvious foreground (object) pixel. 2 (GC_PR_BGD) defines a possible background pixel. 3 (GC_PR_FGD) defines a possible foreground pixel.

rect Rectangle

The rectangle to initialize the segmentation

bgdModel IInputOutputArray

Temporary array for the background model. Do not modify it while you are processing the same image.

fgdModel IInputOutputArray

Temporary array for the foreground model. Do not modify it while you are processing the same image.

iterCount int

The number of iterations

type GrabcutInitType

The initialization type
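A minimal sketch of rectangle-initialized segmentation (the file name, rectangle, and iteration count are illustrative):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat img = CvInvoke.Imread("photo.jpg", ImreadModes.Color))
using (Mat mask = new Mat(img.Rows, img.Cols, DepthType.Cv8U, 1))
using (Mat bgdModel = new Mat())
using (Mat fgdModel = new Mat())
{
    Rectangle roi = new Rectangle(50, 50, 200, 200); // rough bounds of the object
    CvInvoke.GrabCut(img, mask, roi, bgdModel, fgdModel, 5,
        GrabcutInitType.InitWithRect);
    // mask now labels each pixel 0..3 (GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD).
}
```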

GroupRectangles(VectorOfRect, VectorOfInt, VectorOfDouble, int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(VectorOfRect rectList, VectorOfInt rejectLevels, VectorOfDouble levelWeights, int groupThreshold, double eps = 0.2)

Parameters

rectList VectorOfRect

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

rejectLevels VectorOfInt

reject levels

levelWeights VectorOfDouble

level weights

groupThreshold int

Minimum possible number of rectangles minus 1. A group of rectangles must contain more than this number of rectangles to be retained.

eps double

Relative difference between sides of the rectangles to merge them into a group.

GroupRectangles(VectorOfRect, VectorOfInt, int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(VectorOfRect rectList, VectorOfInt weights, int groupThreshold, double eps = 0.2)

Parameters

rectList VectorOfRect

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

weights VectorOfInt

Weights

groupThreshold int

Minimum possible number of rectangles minus 1. A group of rectangles must contain more than this number of rectangles to be retained.

eps double

Relative difference between sides of the rectangles to merge them into a group.

GroupRectangles(VectorOfRect, int, double)

Groups the object candidate rectangles.

public static void GroupRectangles(VectorOfRect rectList, int groupThreshold, double eps = 0.2)

Parameters

rectList VectorOfRect

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

groupThreshold int

Minimum possible number of rectangles minus 1. A group of rectangles must contain more than this number of rectangles to be retained.

eps double

Relative difference between sides of the rectangles to merge them into a group.

GroupRectangles(VectorOfRect, int, double, VectorOfInt, VectorOfDouble)

Groups the object candidate rectangles.

public static void GroupRectangles(VectorOfRect rectList, int groupThreshold, double eps, VectorOfInt weights, VectorOfDouble levelWeights)

Parameters

rectList VectorOfRect

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

groupThreshold int

Minimum possible number of rectangles minus 1. A group of rectangles must contain more than this number of rectangles to be retained.

eps double

Relative difference between sides of the rectangles to merge them into a group.

weights VectorOfInt

weights

levelWeights VectorOfDouble

level weights

GroupRectanglesMeanshift(VectorOfRect, VectorOfDouble, VectorOfDouble, double, Size)

Groups the object candidate rectangles.

public static void GroupRectanglesMeanshift(VectorOfRect rectList, VectorOfDouble foundWeights, VectorOfDouble foundScales, double detectThreshold, Size winDetSize)

Parameters

rectList VectorOfRect

Input/output vector of rectangles. Output vector includes retained and grouped rectangles.

foundWeights VectorOfDouble

found weights

foundScales VectorOfDouble

found scales

detectThreshold double

Detection threshold; use 0 for the default.

winDetSize Size

Window detection size; use (64, 128) for the default.

HConcat(IInputArray, IInputArray, IOutputArray)

Horizontally concatenate two images

public static void HConcat(IInputArray src1, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

The first image

src2 IInputArray

The second image

dst IOutputArray

The result image

HConcat(IInputArrayOfArrays, IOutputArray)

Horizontally concatenate multiple images

public static void HConcat(IInputArrayOfArrays srcs, IOutputArray dst)

Parameters

srcs IInputArrayOfArrays

Input array or vector of matrices. All of the matrices must have the same number of rows and the same depth.

dst IOutputArray

Output array. It has the same number of rows and depth as the src, and the sum of the cols of the src.

HConcat(Mat[], IOutputArray)

Horizontally concatenate multiple images

public static void HConcat(Mat[] srcs, IOutputArray dst)

Parameters

srcs Mat[]

Input array or vector of matrices. All of the matrices must have the same number of rows and the same depth.

dst IOutputArray

Output array. It has the same number of rows and depth as the src, and the sum of the cols of the src.

HasNonZero(IInputArray)

Checks for the presence of at least one non-zero array element.

public static bool HasNonZero(IInputArray arr)

Parameters

arr IInputArray

Single-channel array.

Returns

bool

Whether there are non-zero elements in arr

HaveImageReader(string)

Returns true if the specified image can be decoded by OpenCV.

public static bool HaveImageReader(string fileName)

Parameters

fileName string

File name of the image

Returns

bool

True if the specified image can be decoded by OpenCV.

HaveImageWriter(string)

Returns true if an image with the specified filename can be encoded by OpenCV.

public static bool HaveImageWriter(string fileName)

Parameters

fileName string

File name of the image

Returns

bool

True if an image with the specified filename can be encoded by OpenCV.

HoughCircles(IInputArray, HoughModes, double, double, double, double, int, int)

Finds circles in a grayscale image using the Hough transform

public static CircleF[] HoughCircles(IInputArray image, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)

Parameters

image IInputArray

8-bit, single-channel, grayscale input image.

method HoughModes

Detection method to use. Currently, the only implemented method is CV_HOUGH_GRADIENT , which is basically 21HT

dp double

Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half the width and height of the input image.

minDist double

Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.

param1 double

First method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the higher threshold of the two passed to the Canny() edge detector (the lower one is half of it).

param2 double

Second method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.

minRadius int

Minimum circle radius.

maxRadius int

Maximum circle radius.

Returns

CircleF[]

The circles detected
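A minimal sketch (the file name, accumulator resolution, and thresholds are illustrative and typically need tuning per image):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat gray = CvInvoke.Imread("coins.png", ImreadModes.Grayscale))
{
    CircleF[] circles = CvInvoke.HoughCircles(
        gray, HoughModes.Gradient,
        dp: 2, minDist: 20,
        param1: 180, param2: 120,   // Canny high threshold, accumulator threshold
        minRadius: 5, maxRadius: 60);
    // Each CircleF carries a center and a radius.
}
```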

HoughCircles(IInputArray, IOutputArray, HoughModes, double, double, double, double, int, int)

Finds circles in grayscale image using some modification of Hough transform

public static void HoughCircles(IInputArray image, IOutputArray circles, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)

Parameters

image IInputArray

The input 8-bit single-channel grayscale image

circles IOutputArray

The storage for the detected circles. It can be a memory storage (in this case a sequence of circles is created in the storage and returned by the function) or a single-row/single-column matrix (CvMat*) of type CV_32FC3, to which the circles' parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of circles detected. If circle_storage is a matrix and the actual number of circles exceeds the matrix size, the maximum possible number of circles is returned. Every circle is encoded as 3 floating-point numbers: center coordinates (x,y) and the radius

method HoughModes

Currently, the only implemented method is CV_HOUGH_GRADIENT

dp double

Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2 - accumulator will have twice smaller width and height, etc

minDist double

Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed

param1 double

The first method-specific parameter. In case of CV_HOUGH_GRADIENT it is the higher threshold of the two passed to the Canny edge detector (the lower one will be half of it).

param2 double

The second method-specific parameter. In case of CV_HOUGH_GRADIENT it is accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first

minRadius int

Minimal radius of the circles to search for

maxRadius int

Maximal radius of the circles to search for. By default the maximal radius is set to max(image_width, image_height).

HoughLines(IInputArray, IOutputArray, double, double, int, double, double)

Finds lines in a binary image using the standard Hough transform.

public static void HoughLines(IInputArray image, IOutputArray lines, double rho, double theta, int threshold, double srn = 0, double stn = 0)

Parameters

image IInputArray

8-bit, single-channel binary source image. The image may be modified by the function.

lines IOutputArray

Output vector of lines. Each line is represented by a two-element vector

rho double

Distance resolution of the accumulator in pixels.

theta double

Angle resolution of the accumulator in radians.

threshold int

Accumulator threshold parameter. Only those lines are returned that get enough votes (> threshold)

srn double

For the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.

stn double

For the multi-scale Hough transform, it is a divisor for the distance resolution theta

HoughLinesP(IInputArray, IOutputArray, double, double, int, double, double)

Finds line segments in a binary image using the probabilistic Hough transform.

public static void HoughLinesP(IInputArray image, IOutputArray lines, double rho, double theta, int threshold, double minLineLength = 0, double maxGap = 0)

Parameters

image IInputArray

8-bit, single-channel binary source image. The image may be modified by the function.

lines IOutputArray

Output vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2)

rho double

Distance resolution of the accumulator in pixels

theta double

Angle resolution of the accumulator in radians

threshold int

Accumulator threshold parameter. Only those lines are returned that get enough votes

minLineLength double

Minimum line length. Line segments shorter than that are rejected.

maxGap double

Maximum allowed gap between points on the same line to link them.

HoughLinesP(IInputArray, double, double, int, double, double)

Finds line segments in a binary image using the probabilistic Hough transform.

public static LineSegment2D[] HoughLinesP(IInputArray image, double rho, double theta, int threshold, double minLineLength = 0, double maxGap = 0)

Parameters

image IInputArray

8-bit, single-channel binary source image. The image may be modified by the function.

rho double

Distance resolution of the accumulator in pixels

theta double

Angle resolution of the accumulator in radians

threshold int

Accumulator threshold parameter. Only those lines are returned that get enough votes

minLineLength double

Minimum line length. Line segments shorter than that are rejected.

maxGap double

Maximum allowed gap between points on the same line to link them.

Returns

LineSegment2D[]

The found line segments
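A minimal sketch that feeds a Canny edge map into the probabilistic transform (the file name and all thresholds are illustrative):

```csharp
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat gray = CvInvoke.Imread("building.png", ImreadModes.Grayscale))
using (Mat edges = new Mat())
{
    CvInvoke.Canny(gray, edges, 80, 160); // the transform expects a binary edge image
    LineSegment2D[] lines = CvInvoke.HoughLinesP(
        edges, rho: 1, theta: Math.PI / 180, threshold: 50,
        minLineLength: 30, maxGap: 10);
}
```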

HuMoments(Moments)

Calculates seven Hu invariants

public static double[] HuMoments(Moments m)

Parameters

m Moments

The image moment

Returns

double[]

The output Hu moments.

HuMoments(Moments, IOutputArray)

Calculates seven Hu invariants

public static void HuMoments(Moments m, IOutputArray hu)

Parameters

m Moments

The image moment

hu IOutputArray

The output Hu moments. e.g. a Mat can be passed here.

IlluminationChange(IInputArray, IInputArray, IOutputArray, float, float)

Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver locally modifies the apparent illumination of an image.

public static void IlluminationChange(IInputArray src, IInputArray mask, IOutputArray dst, float alpha = 0.2, float beta = 0.4)

Parameters

src IInputArray

Input 8-bit 3-channel image.

mask IInputArray

Input 8-bit 1 or 3-channel image.

dst IOutputArray

Output image with the same size and type as src.

alpha float

Value ranges between 0-2.

beta float

Value ranges between 0-2.

Imdecode(IInputArray, ImreadModes, Mat)

Decode image stored in the buffer

public static void Imdecode(IInputArray buf, ImreadModes loadType, Mat dst)

Parameters

buf IInputArray

The buffer

loadType ImreadModes

The image loading type

dst Mat

The output placeholder for the decoded matrix.

Imdecode(byte[], ImreadModes, Mat)

Decode image stored in the buffer

public static void Imdecode(byte[] buf, ImreadModes loadType, Mat dst)

Parameters

buf byte[]

The buffer

loadType ImreadModes

The image loading type

dst Mat

The output placeholder for the decoded matrix.

Imencode(string, IInputArray, VectorOfByte, params KeyValuePair<ImwriteFlags, int>[])

Encode image and store the result as a byte vector.

public static bool Imencode(string ext, IInputArray image, VectorOfByte buf, params KeyValuePair<ImwriteFlags, int>[] parameters)

Parameters

ext string

The image format

image IInputArray

The image

buf VectorOfByte

Output buffer resized to fit the compressed image.

parameters KeyValuePair<ImwriteFlags, int>[]

The parameters for encoding, given as key/value pairs; omit for the default settings

Returns

bool

True if successfully encoded the image into the buffer.

Imencode(string, IInputArray, params KeyValuePair<ImwriteFlags, int>[])

Encode image and return the result as a byte vector.

public static byte[] Imencode(string ext, IInputArray image, params KeyValuePair<ImwriteFlags, int>[] parameters)

Parameters

ext string

The image format

image IInputArray

The image

parameters KeyValuePair<ImwriteFlags, int>[]

The parameters for encoding, given as key/value pairs; omit for the default settings

Returns

byte[]

Byte array that contains the image in the specific image format. If failed to encode, return null

Imread(string, ImreadModes)

Loads an image from the specified file and returns it as a Mat. Currently the following file formats are supported: Windows bitmaps - BMP, DIB; JPEG files - JPEG, JPG, JPE; Portable Network Graphics - PNG; Portable image format - PBM, PGM, PPM; Sun rasters - SR, RAS; TIFF files - TIFF, TIF; OpenEXR HDR images - EXR; JPEG 2000 images - JP2.

public static Mat Imread(string filename, ImreadModes loadType = ImreadModes.Color)

Parameters

filename string

The name of the file to be loaded

loadType ImreadModes

The image loading type

Returns

Mat

The loaded image

Imreadmulti(string, ImreadModes)

The function imreadmulti loads a multi-page image from the specified file into a vector of Mat objects.

public static Mat[] Imreadmulti(string filename, ImreadModes flags = ImreadModes.AnyColor)

Parameters

filename string

Name of file to be loaded.

flags ImreadModes

Read flags

Returns

Mat[]

Null if the reading fails, otherwise, an array of Mat from the file

Imshow(string, IInputArray)

Shows the image in the specified window

public static void Imshow(string name, IInputArray image)

Parameters

name string

Name of the window

image IInputArray

Image to be shown

Imwrite(string, IInputArray, params KeyValuePair<ImwriteFlags, int>[])

Saves the image to the specified file. The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see cv::imread for the list of extensions).

public static bool Imwrite(string filename, IInputArray image, params KeyValuePair<ImwriteFlags, int>[] parameters)

Parameters

filename string

The name of the file to be saved to

image IInputArray

The image to be saved

parameters KeyValuePair<ImwriteFlags, int>[]

The parameters

Returns

bool

true if success

Remarks

In general, only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function, with these exceptions: 16-bit unsigned (CV_16U) images can be saved in the case of PNG, JPEG 2000, and TIFF formats; 32-bit float (CV_32F) images can be saved in PFM, TIFF, OpenEXR, and Radiance HDR formats; 3-channel (CV_32FC3) TIFF images will be saved using the LogLuv high dynamic range encoding (4 bytes per pixel); PNG images with an alpha channel can be saved using this function. To do this, create an 8-bit (or 16-bit) 4-channel BGRA image, where the alpha channel goes last. Fully transparent pixels should have alpha set to 0, fully opaque pixels should have alpha set to 255/65535. Multiple images (vector of Mat) can be saved in TIFF format. If the image format is not supported, the image will be converted to 8-bit unsigned (CV_8U) and saved that way. If the format, depth or channel order is different, use Mat::convertTo and cv::cvtColor to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
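A minimal save-and-reload sketch (the file name and pixel values are illustrative; the output format is chosen from the extension):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat img = new Mat(64, 64, DepthType.Cv8U, 3))
{
    img.SetTo(new MCvScalar(255, 0, 0)); // solid blue (BGR channel order)
    bool ok = CvInvoke.Imwrite("blue.png", img);
    using (Mat reloaded = CvInvoke.Imread("blue.png", ImreadModes.Color))
    {
        // PNG is lossless, so reloaded should match img pixel-for-pixel.
    }
}
```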

Imwritemulti(string, IInputArrayOfArrays, params KeyValuePair<ImwriteFlags, int>[])

Save multiple images to a specified file (e.g. ".tiff" that support multiple images).

public static bool Imwritemulti(string filename, IInputArrayOfArrays images, params KeyValuePair<ImwriteFlags, int>[] parameters)

Parameters

filename string

Name of the file.

images IInputArrayOfArrays

Images to be saved.

parameters KeyValuePair<ImwriteFlags, int>[]

The parameters

Returns

bool

true if success

InRange(IInputArray, IInputArray, IInputArray, IOutputArray)

Performs a range check for every element of the input array. For single-channel arrays: dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0. For two-channel arrays: dst(I) = lower(I)_0 <= src(I)_0 <= upper(I)_0 && lower(I)_1 <= src(I)_1 <= upper(I)_1, and so on. dst(I) is set to 0xff (all '1'-bits) if src(I) is within the range and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size)

public static void InRange(IInputArray src, IInputArray lower, IInputArray upper, IOutputArray dst)

Parameters

src IInputArray

The source image

lower IInputArray

The lower values stored in an image of same type & size as src

upper IInputArray

The upper values stored in an image of same type & size as src

dst IOutputArray

The resulting mask
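The per-element semantics above can be sketched in pure Python (an illustration of the range-check rule, not the Emgu.CV call; arrays are flattened to lists):

```python
# dst(I) = 255 (all '1'-bits for 8-bit) if lower(I) <= src(I) <= upper(I), else 0.
def in_range(src, lower, upper):
    """Per-element range check; all inputs are equal-length lists."""
    return [255 if lo <= s <= up else 0
            for s, lo, up in zip(src, lower, upper)]

# pixel 10 lies inside [0, 100] and 200 inside [100, 255]; 50 is below 60
mask = in_range([10, 50, 200], [0, 60, 100], [100, 255, 255])
```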

Init()

Check to make sure all the unmanaged libraries are loaded

public static bool Init()

Returns

bool

True if library loaded

InitCameraMatrix2D(IInputArrayOfArrays, IInputArrayOfArrays, Size, double)

Finds an initial camera matrix from 3D-2D point correspondences.

public static Mat InitCameraMatrix2D(IInputArrayOfArrays objectPoints, IInputArrayOfArrays imagePoints, Size imageSize, double aspectRatio = 1)

Parameters

objectPoints IInputArrayOfArrays

Vector of vectors of the calibration pattern points in the calibration pattern coordinate space.

imagePoints IInputArrayOfArrays

Vector of vectors of the projections of the calibration pattern points.

imageSize Size

Image size in pixels used to initialize the principal point.

aspectRatio double

If it is zero or negative, both fx and fy are estimated independently. Otherwise, fx=fy*aspectRatio.

Returns

Mat

An initial camera matrix for the camera calibration process.

Remarks

Currently, the function only supports planar calibration patterns, which are patterns where each object point has z-coordinate =0.

InitUndistortRectifyMap(IInputArray, IInputArray, IInputArray, IInputArray, Size, DepthType, int, IOutputArray, IOutputArray)

This function is an extended version of cvInitUndistortMap. That is, in addition to the correction of lens distortion, the function can also apply arbitrary perspective transformation R and finally it can scale and shift the image according to the new camera matrix

public static void InitUndistortRectifyMap(IInputArray cameraMatrix, IInputArray distCoeffs, IInputArray R, IInputArray newCameraMatrix, Size size, DepthType depthType, int channels, IOutputArray map1, IOutputArray map2)

Parameters

cameraMatrix IInputArray

The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1]

distCoeffs IInputArray

The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5

R IInputArray

The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used

newCameraMatrix IInputArray

The new camera matrix A'=[fx' 0 cx'; 0 fy' cy'; 0 0 1]

size Size

Undistorted image size.

depthType DepthType

Depth type of the first output map. (The combination with channels can be one of CV_32FC1, CV_32FC2 or CV_16SC2)

channels int

Number of channels of the first output map. (The combination with depthType can be one of CV_32FC1, CV_32FC2 or CV_16SC2)

map1 IOutputArray

The first output map.

map2 IOutputArray

The second output map.

Inpaint(IInputArray, IInputArray, IOutputArray, double, InpaintType)

Reconstructs the selected image area from the pixel near the area boundary. The function may be used to remove dust and scratches from a scanned photo, or to remove undesirable objects from still images or video.

public static void Inpaint(IInputArray src, IInputArray mask, IOutputArray dst, double inpaintRadius, InpaintType flags)

Parameters

src IInputArray

The input 8-bit 1-channel or 3-channel image

mask IInputArray

The inpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted

dst IOutputArray

The output image of the same format and the same size as input

inpaintRadius double

The radius of circular neighborhood of each point inpainted that is considered by the algorithm

flags InpaintType

The inpainting method

InsertChannel(IInputArray, IInputOutputArray, int)

Inserts the specified channel into the image

public static void InsertChannel(IInputArray src, IInputOutputArray dst, int coi)

Parameters

src IInputArray

The source channel

dst IInputOutputArray

The destination image where the channel will be inserted into

coi int

0-based index of the channel to be inserted

Integral(IInputArray, IOutputArray, IOutputArray, IOutputArray, DepthType, DepthType)

Calculates one or more integral images for the source image. Using these integral images, one may calculate the sum, mean, and standard deviation over an arbitrary up-right or rotated rectangular region of the image in constant time. This makes it possible to do fast blurring or fast block correlation with a variable window size, etc. In the case of multi-channel images, sums for each channel are accumulated independently.

public static void Integral(IInputArray image, IOutputArray sum, IOutputArray sqsum = null, IOutputArray tiltedSum = null, DepthType sdepth = DepthType.Default, DepthType sqdepth = DepthType.Default)

Parameters

image IInputArray

The source image, W x H, 8-bit or floating-point (32f or 64f) image.

sum IOutputArray

The integral image, (W+1) x (H+1), 32-bit integer or double precision floating-point (64f).

sqsum IOutputArray

The integral image for squared pixel values, (W+1) x (H+1), double precision floating-point (64f).

tiltedSum IOutputArray

The integral for the image rotated by 45 degrees, (W+1) x (H+1), the same data type as sum.

sdepth DepthType

Desired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.

sqdepth DepthType

Desired depth of the integral image of squared pixel values, CV_32F or CV_64F.
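The integral-image recurrence can be sketched in pure Python (an illustration of the (W+1) x (H+1) layout, not the Emgu.CV call):

```python
# sum[y][x] holds the sum of all pixels strictly above and to the left of (x, y),
# so any rectangle sum becomes s[y2][x2] - s[y1][x2] - s[y2][x1] + s[y1][x1].
def integral(img):
    h, w = len(img), len(img[0])
    s = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            s[y + 1][x + 1] = (img[y][x] + s[y][x + 1]
                               + s[y + 1][x] - s[y][x])
    return s

s = integral([[1, 2], [3, 4]])
# s[2][2] is the total sum 1 + 2 + 3 + 4
```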

IntersectConvexConvex(IInputArray, IInputArray, IOutputArray, bool)

Finds the intersection of two convex polygons.

public static float IntersectConvexConvex(IInputArray p1, IInputArray p2, IOutputArray p12, bool handleNested = true)

Parameters

p1 IInputArray

The first convex polygon

p2 IInputArray

The second convex polygon

p12 IOutputArray

The intersection of the convex polygons

handleNested bool

If true, the function handles the case where one of the polygons is fully nested inside the other

Returns

float

Absolute value of area of intersecting polygon.

Invert(IInputArray, IOutputArray, DecompMethod)

Finds the inverse or pseudo-inverse of a matrix. This function inverts the matrix src and stores the result in dst . When the matrix src is singular or non-square, the function calculates the pseudo-inverse matrix (the dst matrix) so that norm(src*dst - I) is minimal, where I is an identity matrix.

public static double Invert(IInputArray src, IOutputArray dst, DecompMethod method)

Parameters

src IInputArray

The input floating-point M x N matrix.

dst IOutputArray

The output matrix of N x M size and the same type as src.

method DecompMethod

Inversion method

Returns

double

In case of the DECOMP_LU method, the function returns a non-zero value if the inverse has been successfully calculated and 0 if src is singular. In case of the DECOMP_SVD method, the function returns the inverse condition number of src (the ratio of the smallest singular value to the largest singular value) and 0 if src is singular. The SVD method calculates a pseudo-inverse matrix if src is singular. Similarly to DECOMP_LU, the method DECOMP_CHOLESKY works only with non-singular square matrices that should also be symmetrical and positive definite. In this case, the function stores the inverted matrix in dst and returns non-zero. Otherwise, it returns 0.

InvertAffineTransform(IInputArray, IOutputArray)

Inverts an affine transformation

public static void InvertAffineTransform(IInputArray m, IOutputArray im)

Parameters

m IInputArray

Original affine transformation

im IOutputArray

Output reverse affine transformation.

IsContourConvex(IInputArray)

The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined.

public static bool IsContourConvex(IInputArray contour)

Parameters

contour IInputArray

Input vector of 2D points

Returns

bool

true if input is convex

Kmeans(IInputArray, int, IOutputArray, MCvTermCriteria, int, KMeansInitType, IOutputArray)

Implements the k-means algorithm that finds the centers of k clusters and groups the input samples around the clusters. On output, labels(i) contains the cluster index for the sample stored in the i-th row of the samples matrix.

public static double Kmeans(IInputArray data, int k, IOutputArray bestLabels, MCvTermCriteria termcrit, int attempts, KMeansInitType flags, IOutputArray centers = null)

Parameters

data IInputArray

Floating-point matrix of input samples, one row per sample

k int

Number of clusters to split the set by.

bestLabels IOutputArray

Output integer vector storing cluster indices for every sample

termcrit MCvTermCriteria

Specifies maximum number of iterations and/or accuracy (distance the centers move by between the subsequent iterations)

attempts int

The number of attempts. Use 2 if not sure

flags KMeansInitType

Flags, use 0 if not sure

centers IOutputArray

The output matrix of cluster centers, one row per cluster; use null if not needed

Returns

double

The function returns the compactness measure. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function.

LUT(IInputArray, IInputArray, IOutputArray)

Fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of src as follows: dst(I) = lut[src(I) + DELTA] where DELTA = 0 if src has depth CV_8U, and DELTA = 128 if src has depth CV_8S

public static void LUT(IInputArray src, IInputArray lut, IOutputArray dst)

Parameters

src IInputArray

Source array of 8-bit elements

lut IInputArray

Look-up table of 256 elements; should have the same depth as the destination array. In case of multi-channel source and destination arrays, the table should either have a single-channel (in this case the same table is used for all channels), or the same number of channels as the source/destination array

dst IOutputArray

Destination array of arbitrary depth and of the same number of channels as the source array
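The look-up rule dst(I) = lut[src(I)] (DELTA = 0 for 8-bit unsigned sources) can be sketched in pure Python; this is an illustration of the semantics, not the Emgu.CV call:

```python
def lut_apply(src, lut):
    # dst(I) = lut[src(I)] for 8-bit unsigned sources (DELTA = 0)
    return [lut[v] for v in src]

# a 256-entry inversion table turns v into 255 - v
inverted = lut_apply([0, 128, 255], [255 - i for i in range(256)])
```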

Laplacian(IInputArray, IOutputArray, DepthType, int, double, double, BorderType)

Calculates the Laplacian of the source image by summing the second x- and y-derivatives calculated using the Sobel operator: dst(x,y) = d2src/dx2 + d2src/dy2. Specifying ksize=1 gives the fastest variant, which is equal to convolving the image with the following kernel: |0 1 0| |1 -4 1| |0 1 0|. As with the cvSobel function, no scaling is done and the same combinations of input and output formats are supported.

public static void Laplacian(IInputArray src, IOutputArray dst, DepthType ddepth, int ksize = 1, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image. Should be of floating-point type

ddepth DepthType

Desired depth of the destination image.

ksize int

Aperture size used to compute the second-derivative filters.

scale double

Optional scale factor for the computed Laplacian values. By default, no scaling is applied.

delta double

Optional delta value that is added to the results prior to storing them in dst.

borderType BorderType

Pixel extrapolation method.

Line(IInputOutputArray, Point, Point, MCvScalar, int, LineType, int)

Draws the line segment between pt1 and pt2 points in the image. The line is clipped by the image or ROI rectangle. For non-antialiased lines with integer coordinates the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounding endings. Antialiased lines are drawn using Gaussian filtering.

public static void Line(IInputOutputArray img, Point pt1, Point pt2, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

The image

pt1 Point

First point of the line segment

pt2 Point

Second point of the line segment

color MCvScalar

Line color

thickness int

Line thickness.

lineType LineType

Type of the line: 8 (or 0) - 8-connected line. 4 - 4-connected line. CV_AA - antialiased line.

shift int

Number of fractional bits in the point coordinates

LinearPolar(IInputArray, IOutputArray, PointF, double, Inter, Warp)

Remaps an image to polar coordinate space. The transform can be used for fast scale- and rotation-invariant template matching, for object tracking, etc.

public static void LinearPolar(IInputArray src, IOutputArray dst, PointF center, double maxRadius, Inter interpolationType = Inter.Linear, Warp warpType = Warp.FillOutliers)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

center PointF

The transformation center, where the output precision is maximal

maxRadius double

Maximum radius

interpolationType Inter

Interpolation method

warpType Warp

Warp method

LoadUnmanagedModules(string, params string[])

Attempts to load opencv modules from the specific location

public static bool LoadUnmanagedModules(string loadDirectory, params string[] unmanagedModules)

Parameters

loadDirectory string

The directory where the unmanaged modules will be loaded. If it is null, the default location will be used.

unmanagedModules string[]

The names of opencv modules. e.g. "opencv_core.dll" on windows.

Returns

bool

True if all the modules have been loaded successfully

Remarks

If loadDirectory is null, the default location on Windows is the dll's path appended by either "x64" or "x86", depending on whether the application is running in 64-bit or 32-bit mode.

Log(IInputArray, IOutputArray)

Calculates the natural logarithm of the absolute value of every element of the input array: dst(I) = log(abs(src(I))) if src(I) != 0; dst(I) = C if src(I) = 0, where C is a large negative number (-700 in the current implementation)

public static void Log(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

The source array

dst IOutputArray

The destination array, it should have double type or the same type as the source

LogPolar(IInputArray, IOutputArray, PointF, double, Inter, Warp)

The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc.

public static void LogPolar(IInputArray src, IOutputArray dst, PointF center, double M, Inter interpolationType = Inter.Linear, Warp warpType = Warp.FillOutliers)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

center PointF

The transformation center, where the output precision is maximal

M double

Magnitude scale parameter

interpolationType Inter

Interpolation method

warpType Warp

Warp method

Mahalanobis(IInputArray, IInputArray, IInputArray)

Calculates the weighted distance between two vectors and returns it

public static double Mahalanobis(IInputArray v1, IInputArray v2, IInputArray iconvar)

Parameters

v1 IInputArray

The first 1D source vector

v2 IInputArray

The second 1D source vector

iconvar IInputArray

The inverse covariance matrix

Returns

double

the Mahalanobis distance
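The weighted distance is sqrt((v1-v2)^T * icovar * (v1-v2)); a minimal pure-Python sketch (not the Emgu.CV call), with icovar given as a nested list:

```python
import math

def mahalanobis(v1, v2, icovar):
    d = [a - b for a, b in zip(v1, v2)]
    # quadratic form d^T * icovar * d
    q = sum(d[i] * icovar[i][j] * d[j]
            for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(q)

# with the identity as inverse covariance this reduces to Euclidean distance
dist = mahalanobis([3.0, 4.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```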

MakeType(DepthType, int)

This function performs the same operation as the CV_MAKETYPE macro

public static int MakeType(DepthType depth, int channels)

Parameters

depth DepthType

The type of depth

channels int

The number of channels

Returns

int

An integer that represents a Mat type
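The underlying CV_MAKETYPE packing can be sketched in Python (depth codes 0-6 as in OpenCV; a minimal illustration, not the Emgu.CV API):

```python
# OpenCV packs depth and channel count into one integer:
# type = depth + ((channels - 1) << 3)
def make_type(depth, channels):
    return depth + ((channels - 1) << 3)

CV_8U, CV_32F = 0, 5  # OpenCV depth codes
cv8uc3 = make_type(CV_8U, 3)    # CV_8UC3
cv32fc1 = make_type(CV_32F, 1)  # CV_32FC1
```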

MatchShapes(IInputArray, IInputArray, ContoursMatchType, double)

Compares two shapes. The 3 implemented methods all use Hu moments

public static double MatchShapes(IInputArray contour1, IInputArray contour2, ContoursMatchType method, double parameter = 0)

Parameters

contour1 IInputArray

First contour or grayscale image

contour2 IInputArray

Second contour or grayscale image

method ContoursMatchType

Comparison method

parameter double

Method-specific parameter (is not used now)

Returns

double

The result of the comparison

MatchTemplate(IInputArray, IInputArray, IOutputArray, TemplateMatchingType, IInputArray)

This function is similar to cvCalcBackProjectPatch. It slides through image, compares overlapped patches of size w x h against templ using the specified method and stores the comparison results to result

public static void MatchTemplate(IInputArray image, IInputArray templ, IOutputArray result, TemplateMatchingType method, IInputArray mask = null)

Parameters

image IInputArray

Image where the search is running. It should be 8-bit or 32-bit floating-point

templ IInputArray

Searched template; must be not greater than the source image and the same data type as the image

result IOutputArray

A map of comparison results; single-channel 32-bit floating-point. If image is W x H and templ is w x h, then result must be (W-w+1) x (H-h+1).

method TemplateMatchingType

Specifies the way the template must be compared with image regions

mask IInputArray

Mask of searched template. It must have the same datatype and size with templ. It is not set by default.
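The sliding-window comparison and the (W-w+1) x (H-h+1) result size can be sketched in pure Python using squared difference (the TM_SQDIFF-style method; an illustration, not the Emgu.CV call):

```python
# For each placement (x, y) of the template, accumulate the sum of squared
# differences between the overlapped image patch and the template.
def match_template_sqdiff(image, templ):
    H, W = len(image), len(image[0])
    h, w = len(templ), len(templ[0])
    return [[sum((image[y + j][x + i] - templ[j][i]) ** 2
                 for j in range(h) for i in range(w))
             for x in range(W - w + 1)]
            for y in range(H - h + 1)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
res = match_template_sqdiff(img, [[5, 6], [8, 9]])
# res is 2 x 2; a perfect match gives 0 at the template's location
```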

Max(IInputArray, IInputArray, IOutputArray)

Calculates per-element maximum of two arrays: dst(I)=max(src1(I), src2(I)) All the arrays must have a single channel, the same data type and the same size (or ROI size).

public static void Max(IInputArray src1, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array.

dst IOutputArray

The destination array

Mean(IInputArray, IInputArray)

Calculates the average value M of array elements, independently for each channel: N = sum_I [mask(I) != 0]; M_c = (1/N) * sum_{I: mask(I) != 0} arr(I)_c. If the array is IplImage and COI is set, the function processes the selected channel only and stores the average to the first scalar component (S0).

public static MCvScalar Mean(IInputArray arr, IInputArray mask = null)

Parameters

arr IInputArray

The array

mask IInputArray

The optional operation mask

Returns

MCvScalar

average (mean) of array elements

MeanShift(IInputArray, ref Rectangle, MCvTermCriteria)

Iterates to find the object center given its back projection and initial position of search window. The iterations are made until the search window center moves by less than the given value and/or until the function has done the maximum number of iterations.

public static int MeanShift(IInputArray probImage, ref Rectangle window, MCvTermCriteria criteria)

Parameters

probImage IInputArray

Back projection of object histogram

window Rectangle

Initial search window

criteria MCvTermCriteria

Criteria applied to determine when the window search should be finished.

Returns

int

The number of iterations made

MeanStdDev(IInputArray, IOutputArray, IOutputArray, IInputArray)

Calculates a mean and standard deviation of array elements.

public static void MeanStdDev(IInputArray arr, IOutputArray mean, IOutputArray stdDev, IInputArray mask = null)

Parameters

arr IInputArray

Input array that should have from 1 to 4 channels so that the results can be stored in MCvScalar

mean IOutputArray

Calculated mean value

stdDev IOutputArray

Calculated standard deviation

mask IInputArray

Optional operation mask

MeanStdDev(IInputArray, ref MCvScalar, ref MCvScalar, IInputArray)

The function cvAvgSdv calculates the average value and standard deviation of array elements, independently for each channel

public static void MeanStdDev(IInputArray arr, ref MCvScalar mean, ref MCvScalar stdDev, IInputArray mask = null)

Parameters

arr IInputArray

The array

mean MCvScalar

Pointer to the mean value

stdDev MCvScalar

Pointer to the standard deviation

mask IInputArray

The optional operation mask

Remarks

If the array is IplImage and COI is set, the function processes the selected channel only and stores the average and standard deviation to the first components of the output scalars (M0 and S0).

MedianBlur(IInputArray, IOutputArray, int)

Blurs an image using the median filter.

public static void MedianBlur(IInputArray src, IOutputArray dst, int ksize)

Parameters

src IInputArray

Input 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F, for larger aperture sizes, it can only be CV_8U.

dst IOutputArray

Destination array of the same size and type as src.

ksize int

Aperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ...

Merge(IInputArrayOfArrays, IOutputArray)

This function is the opposite to cvSplit. If the destination array has N channels then if the first N input channels are not IntPtr.Zero, all they are copied to the destination array, otherwise if only a single source channel of the first N is not IntPtr.Zero, this particular channel is copied into the destination array, otherwise an error is raised. Rest of source channels (beyond the first N) must always be IntPtr.Zero. For IplImage cvCopy with COI set can be also used to insert a single channel into the image.

public static void Merge(IInputArrayOfArrays mv, IOutputArray dst)

Parameters

mv IInputArrayOfArrays

Input vector of matrices to be merged; all the matrices in mv must have the same size and the same depth.

dst IOutputArray

output array of the same size and the same depth as mv[0]; The number of channels will be the total number of channels in the matrix array.

Min(IInputArray, IInputArray, IOutputArray)

Calculates per-element minimum of two arrays: dst(I)=min(src1(I),src2(I)) All the arrays must have a single channel, the same data type and the same size (or ROI size).

public static void Min(IInputArray src1, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array

MinAreaRect(IInputArray)

Finds a rotated rectangle of the minimum area enclosing the input 2D point set.

public static RotatedRect MinAreaRect(IInputArray points)

Parameters

points IInputArray

Input vector of 2D points

Returns

RotatedRect

a circumscribed rectangle of the minimal area for 2D point set

MinAreaRect(PointF[])

Find the bounding rectangle for the specific array of points

public static RotatedRect MinAreaRect(PointF[] points)

Parameters

points PointF[]

The collection of points

Returns

RotatedRect

The bounding rectangle for the array of points

MinEnclosingCircle(IInputArray)

Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm. The native function returns nonzero if the resulting circle contains all the input points and zero otherwise (i.e. the algorithm failed)

public static CircleF MinEnclosingCircle(IInputArray points)

Parameters

points IInputArray

Sequence or array of 2D points

Returns

CircleF

The minimal circumscribed circle for 2D point set

MinEnclosingCircle(PointF[])

Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm. The native function returns nonzero if the resulting circle contains all the input points and zero otherwise (i.e. the algorithm failed)

public static CircleF MinEnclosingCircle(PointF[] points)

Parameters

points PointF[]

Sequence or array of 2D points

Returns

CircleF

The minimal circumscribed circle for 2D point set

MinEnclosingTriangle(IInputArray, IOutputArray)

Finds a triangle of minimum area enclosing a 2D point set and returns its area.

public static double MinEnclosingTriangle(IInputArray points, IOutputArray triangles)

Parameters

points IInputArray

Input vector of 2D points with depth CV_32S or CV_32F

triangles IOutputArray

Output vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F.

Returns

double

The triangle's area

MinMaxIdx(IInputArray, out double, out double, int[], int[], IInputArray)

Finds the global minimum and maximum in an array

public static void MinMaxIdx(IInputArray src, out double minVal, out double maxVal, int[] minIdx, int[] maxIdx, IInputArray mask = null)

Parameters

src IInputArray

Input single-channel array.

minVal double

The returned minimum value

maxVal double

The returned maximum value

minIdx int[]

The returned minimum location

maxIdx int[]

The returned maximum location

mask IInputArray

The extremums are searched across the whole array if mask is null. Otherwise, the search is performed in the specified array region.

MinMaxLoc(IInputArray, ref double, ref double, ref Point, ref Point, IInputArray)

Finds minimum and maximum element values and their positions. The extremums are searched over the whole array, the selected ROI (in the case of IplImage) or, if mask is not null, in the specified array region. If the array has more than one channel, it must be IplImage with COI set. In the case of multi-dimensional arrays, minLoc.X and maxLoc.X will contain raw (linear) positions of the extremums

public static void MinMaxLoc(IInputArray arr, ref double minVal, ref double maxVal, ref Point minLoc, ref Point maxLoc, IInputArray mask = null)

Parameters

arr IInputArray

The source array, single-channel or multi-channel with COI set

minVal double

Pointer to returned minimum value

maxVal double

Pointer to returned maximum value

minLoc Point

Pointer to returned minimum location

maxLoc Point

Pointer to returned maximum location

mask IInputArray

The optional mask that is used to select a subarray. Use IntPtr.Zero if not needed
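A minimal pure-Python sketch of the min/max search over a 2-D array, returning (x, y) locations as described above (an illustration, not the Emgu.CV call):

```python
def min_max_loc(arr):
    # returns (minVal, maxVal, minLoc, maxLoc) with (x, y) locations
    min_val = max_val = arr[0][0]
    min_loc = max_loc = (0, 0)
    for y, row in enumerate(arr):
        for x, v in enumerate(row):
            if v < min_val:
                min_val, min_loc = v, (x, y)
            if v > max_val:
                max_val, max_loc = v, (x, y)
    return min_val, max_val, min_loc, max_loc

result = min_max_loc([[3, 7], [1, 9]])
```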

MixChannels(IInputArrayOfArrays, IInputOutputArray, int[])

The function cvMixChannels is a generalized form of cvSplit and cvMerge and some forms of cvCvtColor. It can be used to change the order of the planes, add/remove alpha channel, extract or insert a single plane or multiple planes etc.

public static void MixChannels(IInputArrayOfArrays src, IInputOutputArray dst, int[] fromTo)

Parameters

src IInputArrayOfArrays

The array of input arrays.

dst IInputOutputArray

The array of output arrays

fromTo int[]

The array of pairs of indices of the planes copied. fromTo[k*2] is the 0-based index of the input plane, and fromTo[k*2+1] is the index of the output plane, where continuous numbering of the planes over all the input and over all the output arrays is used. When fromTo[k*2] is negative, the corresponding output plane is filled with 0's.

Remarks

Unlike many other new-style C++ functions in OpenCV, mixChannels requires the output arrays to be pre-allocated before calling the function.
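The fromTo pairing can be sketched in pure Python with planes represented as flat lists (an illustration of the index convention, not the Emgu.CV call):

```python
# fromTo[k*2] is the source plane index, fromTo[k*2 + 1] the destination index;
# a negative source index fills the destination plane with zeros.
def mix_channels(src_planes, dst_planes, from_to):
    for k in range(0, len(from_to), 2):
        i, j = from_to[k], from_to[k + 1]
        if i < 0:
            dst_planes[j] = [0] * len(src_planes[0])
        else:
            dst_planes[j] = list(src_planes[i])

# swap the first and last planes of a 3-plane "image" (BGR -> RGB)
src = [[1, 1], [2, 2], [3, 3]]
dst = [None, None, None]
mix_channels(src, dst, [0, 2, 1, 1, 2, 0])
```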

Moments(IInputArray, bool)

Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics including the 7 Hu invariants.

public static Moments Moments(IInputArray arr, bool binaryImage = false)

Parameters

arr IInputArray

Image (1-channel or 3-channel with COI set) or polygon (CvSeq of points or a vector of points)

binaryImage bool

(For images only) If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1s

Returns

Moments

The moments

MorphologyEx(IInputArray, IOutputArray, MorphOp, IInputArray, Point, int, BorderType, MCvScalar)

Performs advanced morphological transformations.

public static void MorphologyEx(IInputArray src, IOutputArray dst, MorphOp operation, IInputArray kernel, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image.

operation MorphOp

Type of morphological operation.

kernel IInputArray

Structuring element.

anchor Point

Anchor position within the kernel. Negative values mean that the anchor is at the kernel center.

iterations int

Number of times erosion and dilation are applied.

borderType BorderType

Pixel extrapolation method.

borderValue MCvScalar

Border value in case of a constant border.

MulSpectrums(IInputArray, IInputArray, IOutputArray, MulSpectrumsType, bool)

Performs per-element multiplication of the two CCS-packed or complex matrices that are results of real or complex Fourier transform.

public static void MulSpectrums(IInputArray src1, IInputArray src2, IOutputArray dst, MulSpectrumsType flags, bool conjB = false)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array of the same type and the same size of the sources

flags MulSpectrumsType

Operation flags; currently, the only supported flag is DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum.

conjB bool

Optional flag that conjugates the second input array before the multiplication (true) or not (false).

MulTransposed(IInputArray, IOutputArray, bool, IInputArray, double, DepthType)

Calculates the product of src and its transposition. The function evaluates dst = scale*(src-delta)*(src-delta)^T if aTa is false, and dst = scale*(src-delta)^T*(src-delta) otherwise.

public static void MulTransposed(IInputArray src, IOutputArray dst, bool aTa, IInputArray delta = null, double scale = 1, DepthType dtype = DepthType.Default)

Parameters

src IInputArray

The source matrix

dst IOutputArray

The destination matrix

aTa bool

Multiplication order: if true, (src-delta)^T*(src-delta) is computed; otherwise, (src-delta)*(src-delta)^T

delta IInputArray

An optional array, subtracted from src before multiplication

scale double

An optional scaling

dtype DepthType

Optional depth type of the output array

Multiply(IInputArray, IInputArray, IOutputArray, double, DepthType)

Calculates per-element product of two arrays: dst(I)=scale*src1(I)*src2(I) All the arrays must have the same type, and the same size (or ROI size)

public static void Multiply(IInputArray src1, IInputArray src2, IOutputArray dst, double scale = 1, DepthType dtype = DepthType.Default)

Parameters

src1 IInputArray

The first source array.

src2 IInputArray

The second source array

dst IOutputArray

The destination array

scale double

Optional scale factor

dtype DepthType

Optional depth of the output array

NamedWindow(string, WindowFlags)

Creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names. If a window with the same name already exists, the function does nothing.

public static void NamedWindow(string name, WindowFlags flags = WindowFlags.AutoSize)

Parameters

name string

Name of the window which is used as window identifier and appears in the window caption

flags WindowFlags

Flags of the window.

Norm(IInputArray, NormType, IInputArray)

Returns the calculated norm. Multiple-channel arrays are treated as single-channel arrays, that is, the results for all channels are combined.

public static double Norm(IInputArray arr1, NormType normType = NormType.L2, IInputArray mask = null)

Parameters

arr1 IInputArray

The first source image

normType NormType

Type of norm

mask IInputArray

The optional operation mask

Returns

double

The calculated norm

Norm(IInputArray, IInputOutputArray, NormType, IInputArray)

Returns the calculated norm. Multiple-channel arrays are treated as single-channel arrays, that is, the results for all channels are combined.

public static double Norm(IInputArray arr1, IInputOutputArray arr2, NormType normType = NormType.L2, IInputArray mask = null)

Parameters

arr1 IInputArray

The first source image

arr2 IInputOutputArray

The second source image. If it is null, the absolute norm of arr1 is calculated, otherwise absolute or relative norm of arr1-arr2 is calculated

normType NormType

Type of norm

mask IInputArray

The optional operation mask

Returns

double

The calculated norm

Normalize(IInputArray, IOutputArray, double, double, NormType, DepthType, IInputArray)

Normalizes the input array so that its norm or value range takes the specified value(s).

public static void Normalize(IInputArray src, IOutputArray dst, double alpha = 1, double beta = 0, NormType normType = NormType.L2, DepthType dType = DepthType.Default, IInputArray mask = null)

Parameters

src IInputArray

The input array

dst IOutputArray

The output array; in-place operation is supported

alpha double

The minimum/maximum value of the output array or the norm of output array

beta double

The maximum/minimum value of the output array

normType NormType

The normalization type

dType DepthType

Optional depth type for the dst array

mask IInputArray

The operation mask. Makes the function consider and normalize only certain array elements
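
How alpha and beta interact depends on the norm type; the two common cases can be sketched in numpy (an illustration of the semantics, not the Emgu call):

```python
import numpy as np

src = np.array([1.0, 2.0, 3.0, 4.0])

# NormType.MinMax: stretch values so min -> beta, max -> alpha
alpha, beta = 255.0, 0.0
lo, hi = src.min(), src.max()
minmax = (src - lo) / (hi - lo) * (alpha - beta) + beta

# NormType.L2: scale so the L2 norm of the result equals alpha
l2 = src / np.linalg.norm(src) * 1.0
```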

OclFinish()

Finishes OpenCL queue.

public static void OclFinish()

OclGetPlatformsSummary()

Get the OpenCL platform summary as a string

public static string OclGetPlatformsSummary()

Returns

string

An OpenCL platform summary

OclSetDefaultDevice(string)

Set the default opencl device

public static void OclSetDefaultDevice(string deviceName)

Parameters

deviceName string

The name of the opencl device

PCABackProject(IInputArray, IInputArray, IInputArray, IOutputArray)

Reconstructs vectors from their PC projections.

public static void PCABackProject(IInputArray data, IInputArray mean, IInputArray eigenvectors, IOutputArray result)

Parameters

data IInputArray

Coordinates of the vectors in the principal component subspace

mean IInputArray

The mean.

eigenvectors IInputArray

The eigenvectors.

result IOutputArray

The result.

PCACompute(IInputArray, IInputOutputArray, IOutputArray, double)

Performs Principal Component Analysis of the supplied dataset.

public static void PCACompute(IInputArray data, IInputOutputArray mean, IOutputArray eigenvectors, double retainedVariance)

Parameters

data IInputArray

Input samples stored as the matrix rows or as the matrix columns.

mean IInputOutputArray

Optional mean value; if the matrix is empty, the mean is computed from the data.

eigenvectors IOutputArray

The eigenvectors.

retainedVariance double

Percentage of variance that PCA should retain. Using this parameter lets PCA decide how many components to retain, but it will always keep at least 2.

PCACompute(IInputArray, IInputOutputArray, IOutputArray, int)

Performs Principal Component Analysis of the supplied dataset.

public static void PCACompute(IInputArray data, IInputOutputArray mean, IOutputArray eigenvectors, int maxComponents = 0)

Parameters

data IInputArray

Input samples stored as the matrix rows or as the matrix columns.

mean IInputOutputArray

Optional mean value; if the matrix is empty, the mean is computed from the data.

eigenvectors IOutputArray

The eigenvectors.

maxComponents int

Maximum number of components that PCA should retain; by default, all the components are retained.

PCAProject(IInputArray, IInputArray, IInputArray, IOutputArray)

Projects vector(s) to the principal component subspace.

public static void PCAProject(IInputArray data, IInputArray mean, IInputArray eigenvectors, IOutputArray result)

Parameters

data IInputArray

Input vector(s); must have the same dimensionality and the same layout as the input data used at PCA phase

mean IInputArray

The mean.

eigenvectors IInputArray

The eigenvectors.

result IOutputArray

The result.

PSNR(IInputArray, IInputArray)

Computes PSNR image/video quality metric

public static double PSNR(IInputArray src1, IInputArray src2)

Parameters

src1 IInputArray

The first source image

src2 IInputArray

The second source image

Returns

double

the quality metric
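
PSNR for 8-bit images is derived from the mean squared error; the formula can be sketched in numpy (an illustration of the metric, not the Emgu call):

```python
import numpy as np

src1 = np.full((4, 4), 100, dtype=np.uint8)
src2 = np.full((4, 4), 110, dtype=np.uint8)

# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit images
mse = np.mean((src1.astype(np.float64) - src2.astype(np.float64)) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)
```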

PatchNaNs(IInputOutputArray, double)

Converts NaNs to the given number

public static void PatchNaNs(IInputOutputArray a, double val = 0)

Parameters

a IInputOutputArray

The array where NaN needs to be converted

val double

The value to convert to
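
The operation amounts to an in-place masked assignment, sketched here in numpy (an illustration, not the Emgu call):

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
val = 0.0

# replace every NaN element with val, in place
a[np.isnan(a)] = val
```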

PencilSketch(IInputArray, IOutputArray, IOutputArray, float, float, float)

Pencil-like non-photorealistic line drawing

public static void PencilSketch(IInputArray src, IOutputArray dst1, IOutputArray dst2, float sigmaS = 60, float sigmaR = 0.07, float shadeFactor = 0.02)

Parameters

src IInputArray

Input 8-bit 3-channel image

dst1 IOutputArray

Output 8-bit 1-channel image

dst2 IOutputArray

Output image with the same size and type as src

sigmaS float

Range between 0 to 200

sigmaR float

Range between 0 to 1

shadeFactor float

Range between 0 to 0.1

PerspectiveTransform(IInputArray, IOutputArray, IInputArray)

Transforms every element of src (by treating it as 2D or 3D vector) in the following way: (x, y, z) -> (x'/w, y'/w, z'/w) or (x, y) -> (x'/w, y'/w), where (x', y', z', w') = mat4x4 * (x, y, z, 1) or (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise

public static void PerspectiveTransform(IInputArray src, IOutputArray dst, IInputArray mat)

Parameters

src IInputArray

The source three-channel floating-point array

dst IOutputArray

The destination three-channel floating-point array

mat IInputArray

3x3 or 4x4 floating-point transformation matrix.

PerspectiveTransform(PointF[], IInputArray)

Transforms every element of src in the following way: (x, y) -> (x'/w, y'/w), where (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise

public static PointF[] PerspectiveTransform(PointF[] src, IInputArray mat)

Parameters

src PointF[]

The source points

mat IInputArray

3x3 floating-point transformation matrix.

Returns

PointF[]

The destination points
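
The point-array overload applies the homography and divides by w'; the math can be sketched in numpy (the 3x3 matrix below is a hypothetical pure translation, chosen for illustration):

```python
import numpy as np

# hypothetical homography: translate by (5, 7)
mat = np.array([[1.0, 0.0, 5.0],
                [0.0, 1.0, 7.0],
                [0.0, 0.0, 1.0]])
src = np.array([[1.0, 2.0], [3.0, 4.0]])  # the PointF[] input

ones = np.ones((src.shape[0], 1))
h = np.hstack([src, ones]) @ mat.T   # (x', y', w') for each point
dst = h[:, :2] / h[:, 2:3]           # (x'/w', y'/w')
```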

PhaseCorrelate(IInputArray, IInputArray, IInputArray, out double)

The function is used to detect translational shifts that occur between two images. The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation.

public static MCvPoint2D64f PhaseCorrelate(IInputArray src1, IInputArray src2, IInputArray window, out double response)

Parameters

src1 IInputArray

Source floating point array (CV_32FC1 or CV_64FC1)

src2 IInputArray

Source floating point array (CV_32FC1 or CV_64FC1)

window IInputArray

Floating point array with windowing coefficients to reduce edge effects (optional).

response double

Signal power within the 5x5 centroid around the peak, between 0 and 1

Returns

MCvPoint2D64f

The translational shifts that occur between two images
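
The Fourier shift theorem the function exploits can be sketched in 1D with numpy (the real function works in 2D with optional windowing and sub-pixel peak refinement; this is only an illustration of the core idea):

```python
import numpy as np

rng = np.random.default_rng(0)
src1 = rng.random(64)
shift = 5
src2 = np.roll(src1, shift)      # src2(x) = src1(x - shift)

f1, f2 = np.fft.fft(src1), np.fft.fft(src2)
cross = np.conj(f1) * f2
cross /= np.abs(cross)           # normalized cross-power spectrum
corr = np.fft.ifft(cross).real
detected = int(np.argmax(corr))  # peak index equals the shift
```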

PointPolygonTest(IInputArray, PointF, bool)

Determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive, negative or zero value, correspondingly.

public static double PointPolygonTest(IInputArray contour, PointF pt, bool measureDist)

Parameters

contour IInputArray

Input contour

pt PointF

The point tested against the contour

measureDist bool

If true, the function estimates the distance from the point to the nearest contour edge

Returns

double

When measureDist is false, the return value is >0 (inside), <0 (outside) or =0 (on an edge). When measureDist is true, it is the signed distance between the point and the nearest contour edge

PolarToCart(IInputArray, IInputArray, IOutputArray, IOutputArray, bool)

Calculates either x-coordinate, y-coordinate or both of every vector magnitude(I)* exp(angle(I)*j), j=sqrt(-1): x(I)=magnitude(I)*cos(angle(I)), y(I)=magnitude(I)*sin(angle(I))

public static void PolarToCart(IInputArray magnitude, IInputArray angle, IOutputArray x, IOutputArray y, bool angleInDegrees = false)

Parameters

magnitude IInputArray

Input floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in this case, the function assumes that all the magnitudes are =1; if it is not empty, it must have the same size and type as angle

angle IInputArray

input floating-point array of angles of 2D vectors.

x IOutputArray

Output array of x-coordinates of 2D vectors; it has the same size and type as angle.

y IOutputArray

Output array of y-coordinates of 2D vectors; it has the same size and type as angle.

angleInDegrees bool

The flag indicating whether the angles are measured in radians or in degrees
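
The conversion is the standard polar-to-Cartesian formula, sketched in numpy (an illustration of the math, not the Emgu call):

```python
import numpy as np

magnitude = np.array([1.0, 2.0])
angle_deg = np.array([0.0, 90.0])

# angleInDegrees = true: convert to radians before cos/sin
angle = np.deg2rad(angle_deg)
x = magnitude * np.cos(angle)  # x(I) = magnitude(I) * cos(angle(I))
y = magnitude * np.sin(angle)  # y(I) = magnitude(I) * sin(angle(I))
```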

PollKey()

Polls for a key event without waiting.

public static int PollKey()

Returns

int

The code of the pressed key or -1 if no key was pressed since the last invocation.

Polylines(IInputOutputArray, IInputArray, bool, MCvScalar, int, LineType, int)

Draws a single or multiple polygonal curves

public static void Polylines(IInputOutputArray img, IInputArray pts, bool isClosed, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image

pts IInputArray

Array of pointers to polylines

isClosed bool

Indicates whether the polylines must be drawn closed. If true, the function draws the line from the last vertex of every contour to the first vertex.

color MCvScalar

Polyline color

thickness int

Thickness of the polyline edges

lineType LineType

Type of the line segments, see cvLine description

shift int

Number of fractional bits in the vertex coordinates

Polylines(IInputOutputArray, Point[], bool, MCvScalar, int, LineType, int)

Draws a single or multiple polygonal curves

public static void Polylines(IInputOutputArray img, Point[] pts, bool isClosed, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image

pts Point[]

Array points

isClosed bool

Indicates whether the polylines must be drawn closed. If true, the function draws the line from the last vertex of every contour to the first vertex.

color MCvScalar

Polyline color

thickness int

Thickness of the polyline edges

lineType LineType

Type of the line segments, see cvLine description

shift int

Number of fractional bits in the vertex coordinates

Pow(IInputArray, double, IOutputArray)

Raises every element of the input array to power p: dst(I)=src(I)^p if p is an integer, dst(I)=abs(src(I))^p otherwise. That is, for a non-integer power exponent the absolute values of the input array elements are used. However, it is possible to get true values for negative inputs using some extra operations, as the following sample, computing the cube root of array elements, shows: CvSize size = cvGetSize(src); CvMat* mask = cvCreateMat( size.height, size.width, CV_8UC1 ); cvCmpS( src, 0, mask, CV_CMP_LT ); /* find negative elements */ cvPow( src, dst, 1./3 ); cvSubRS( dst, cvScalarAll(0), dst, mask ); /* negate the results of negative inputs */ cvReleaseMat( &mask ); For some values of power, such as integer values, 0.5 and -0.5, specialized faster algorithms are used.

public static void Pow(IInputArray src, double power, IOutputArray dst)

Parameters

src IInputArray

The source array

power double

The exponent of power

dst IOutputArray

The destination array, should be the same type as the source
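
The integer/non-integer distinction, and the masking trick for signed roots of negative inputs, can be sketched in numpy (an illustration of the semantics, not the Emgu call):

```python
import numpy as np

src = np.array([-8.0, 27.0])

# integer power: sign is preserved
cube = src ** 3

# non-integer power: Pow uses abs(src)**p, so negative inputs lose sign
p = 1.0 / 3
unsigned = np.abs(src) ** p

# true signed cube root, mirroring the masking trick from the docs
true_root = np.sign(src) * np.abs(src) ** p
```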

ProjectPoints(IInputArray, IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, double)

Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points).

public static void ProjectPoints(IInputArray objectPoints, IInputArray rvec, IInputArray tvec, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray imagePoints, IOutputArray jacobian = null, double aspectRatio = 0)

Parameters

objectPoints IInputArray

The array of object points, 3xN or Nx3, where N is the number of points in the view

rvec IInputArray

The rotation vector, 1x3 or 3x1

tvec IInputArray

The translation vector, 1x3 or 3x1

cameraMatrix IInputArray

The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].

distCoeffs IInputArray

The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's

imagePoints IOutputArray

The output array of image points, 2xN or Nx2, where N is the total number of points in the view

jacobian IOutputArray

Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.

aspectRatio double

Aspect ratio

ProjectPoints(MCvPoint3D32f[], IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, double)

Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters.

public static PointF[] ProjectPoints(MCvPoint3D32f[] objectPoints, IInputArray rvec, IInputArray tvec, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray jacobian = null, double aspectRatio = 0)

Parameters

objectPoints MCvPoint3D32f[]

The array of object points.

rvec IInputArray

The rotation vector, 1x3 or 3x1

tvec IInputArray

The translation vector, 1x3 or 3x1

cameraMatrix IInputArray

The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].

distCoeffs IInputArray

The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's

jacobian IOutputArray

Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.

aspectRatio double

Aspect ratio

Returns

PointF[]

The output array of image points, 2xN or Nx2, where N is the total number of points in the view

Remarks

Note, that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just extrinsic transformation or just intrinsic transformation (i.e. distortion of a sparse set of points)

PutText(IInputOutputArray, string, Point, FontFace, double, MCvScalar, int, LineType, bool)

Renders the text in the image with the specified font and color. The printed text is clipped by ROI rectangle. Symbols that do not belong to the specified font are replaced with the rectangle symbol.

public static void PutText(IInputOutputArray img, string text, Point org, FontFace fontFace, double fontScale, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, bool bottomLeftOrigin = false)

Parameters

img IInputOutputArray

Input image

text string

String to print

org Point

Coordinates of the bottom-left corner of the first letter

fontFace FontFace

Font type.

fontScale double

Font scale factor that is multiplied by the font-specific base size.

color MCvScalar

Text color

thickness int

Thickness of the lines used to draw a text.

lineType LineType

Line type

bottomLeftOrigin bool

When true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.

PyrDown(IInputArray, IOutputArray, BorderType)

Performs downsampling step of Gaussian pyramid decomposition. First it convolves source image with the specified filter and then downsamples the image by rejecting even rows and columns.

public static void PyrDown(IInputArray src, IOutputArray dst, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

The source image.

dst IOutputArray

The destination image, should have 2x smaller width and height than the source.

borderType BorderType

Border type

PyrMeanShiftFiltering(IInputArray, IOutputArray, double, double, int, MCvTermCriteria)

Filters image using meanshift algorithm

public static void PyrMeanShiftFiltering(IInputArray src, IOutputArray dst, double sp, double sr, int maxLevel, MCvTermCriteria termcrit)

Parameters

src IInputArray

Source image

dst IOutputArray

Result image

sp double

The spatial window radius.

sr double

The color window radius.

maxLevel int

Maximum level of the pyramid for the segmentation. Use 1 as default value

termcrit MCvTermCriteria

Termination criteria: when to stop meanshift iterations. Use new MCvTermCriteria(5, 1) as default value

PyrUp(IInputArray, IOutputArray, BorderType)

Performs up-sampling step of Gaussian pyramid decomposition. First it upsamples the source image by injecting even zero rows and columns and then convolves result with the specified filter multiplied by 4 for interpolation. So the destination image is four times larger than the source image.

public static void PyrUp(IInputArray src, IOutputArray dst, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

The source image.

dst IOutputArray

The destination image, should have 2x larger width and height than the source.

borderType BorderType

Border type

RQDecomp3x3(IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray)

Computes an RQ decomposition of 3x3 matrices.

public static MCvPoint3D64f RQDecomp3x3(IInputArray src, IOutputArray mtxR, IOutputArray mtxQ, IOutputArray Qx = null, IOutputArray Qy = null, IOutputArray Qz = null)

Parameters

src IInputArray

3x3 input matrix.

mtxR IOutputArray

Output 3x3 upper-triangular matrix.

mtxQ IOutputArray

Output 3x3 orthogonal matrix.

Qx IOutputArray

Optional output 3x3 rotation matrix around x-axis.

Qy IOutputArray

Optional output 3x3 rotation matrix around y-axis.

Qz IOutputArray

Optional output 3x3 rotation matrix around z-axis.

Returns

MCvPoint3D64f

The Euler angles

RandShuffle(IInputOutputArray, double, ulong)

Shuffles the matrix by swapping randomly chosen pairs of the matrix elements on each iteration (where each element may contain several components in case of multi-channel arrays)

public static void RandShuffle(IInputOutputArray mat, double iterFactor, ulong rng)

Parameters

mat IInputOutputArray

The input/output matrix. It is shuffled in-place.

iterFactor double

The relative parameter that characterizes intensity of the shuffling performed. The number of iterations (i.e. pairs swapped) is round(iter_factor*rows(mat)*cols(mat)), so iter_factor=0 means that no shuffling is done, iter_factor=1 means that the function swaps rows(mat)*cols(mat) random pairs etc

rng ulong

Pointer to MCvRNG random number generator. Use 0 if not sure

Randn(IInputOutputArray, IInputArray, IInputArray)

Fills the array with normally distributed random numbers.

public static void Randn(IInputOutputArray dst, IInputArray mean, IInputArray stddev)

Parameters

dst IInputOutputArray

Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels.

mean IInputArray

Mean value (expectation) of the generated random numbers.

stddev IInputArray

Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix.

Randn(IInputOutputArray, MCvScalar, MCvScalar)

Fills the array with normally distributed random numbers.

public static void Randn(IInputOutputArray dst, MCvScalar mean, MCvScalar stddev)

Parameters

dst IInputOutputArray

Output array of random numbers; the array must be pre-allocated and have 1 to 4 channels.

mean MCvScalar

Mean value (expectation) of the generated random numbers.

stddev MCvScalar

Standard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix.

Randu(IInputOutputArray, IInputArray, IInputArray)

Generates a single uniformly-distributed random number or an array of random numbers.

public static void Randu(IInputOutputArray dst, IInputArray low, IInputArray high)

Parameters

dst IInputOutputArray

Output array of random numbers; the array must be pre-allocated.

low IInputArray

Inclusive lower boundary of the generated random numbers.

high IInputArray

Exclusive upper boundary of the generated random numbers.

Randu(IInputOutputArray, MCvScalar, MCvScalar)

Generates a single uniformly-distributed random number or an array of random numbers.

public static void Randu(IInputOutputArray dst, MCvScalar low, MCvScalar high)

Parameters

dst IInputOutputArray

Output array of random numbers; the array must be pre-allocated.

low MCvScalar

Inclusive lower boundary of the generated random numbers.

high MCvScalar

Exclusive upper boundary of the generated random numbers.

ReadCloud(string, IOutputArray, IOutputArray)

Read point cloud from file

public static Mat ReadCloud(string file, IOutputArray colors = null, IOutputArray normals = null)

Parameters

file string

The point cloud file

colors IOutputArray

The color of the points

normals IOutputArray

The normal of the points

Returns

Mat

The points

Rectangle(IInputOutputArray, Rectangle, MCvScalar, int, LineType, int)

Draws a rectangle specified by a CvRect structure

public static void Rectangle(IInputOutputArray img, Rectangle rect, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)

Parameters

img IInputOutputArray

Image

rect Rectangle

The rectangle to be drawn

color MCvScalar

Line color

thickness int

Thickness of the lines that make up the rectangle. Negative values cause the function to draw a filled rectangle.

lineType LineType

Type of the line

shift int

Number of fractional bits in the point coordinates

RedirectError(CvErrorCallback, nint, nint)

Sets a new error handler that can be one of the standard handlers or a custom handler with a specific interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision.

public static extern nint RedirectError(CvInvoke.CvErrorCallback errorHandler, nint userdata, nint prevUserdata)

Parameters

errorHandler CvInvoke.CvErrorCallback

The new error handler

userdata nint

Arbitrary pointer that is transparently passed to the error handler.

prevUserdata nint

Pointer to the previously assigned user data pointer.

Returns

nint

Pointer to the old error handler

RedirectError(nint, nint, nint)

Sets a new error handler that can be one of the standard handlers or a custom handler with a specific interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision.

public static extern nint RedirectError(nint errorHandler, nint userdata, nint prevUserdata)

Parameters

errorHandler nint

Pointer to the new error handler

userdata nint

Arbitrary pointer that is transparently passed to the error handler.

prevUserdata nint

Pointer to the previously assigned user data pointer.

Returns

nint

Pointer to the old error handler

Reduce(IInputArray, IOutputArray, ReduceDimension, ReduceType, DepthType)

Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained.

public static void Reduce(IInputArray src, IOutputArray dst, ReduceDimension dim = ReduceDimension.Auto, ReduceType type = ReduceType.ReduceSum, DepthType dtype = DepthType.Default)

Parameters

src IInputArray

The input matrix

dst IOutputArray

The output single-row/single-column vector that accumulates all of the matrix rows/columns

dim ReduceDimension

The dimension index along which the matrix is reduced.

type ReduceType

The reduction operation type

dtype DepthType

Optional depth type of the output array

Remarks

The function can be used to compute horizontal and vertical projections of a raster image. In case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes.
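
The reduction corresponds to collapsing one axis of the matrix; a numpy sketch (an illustration of the semantics, not the Emgu call):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]], dtype=np.float64)

col_sum = m.sum(axis=0)   # reduce to a single row (ReduceSum)
row_avg = m.mean(axis=1)  # reduce to a single column (ReduceAvg)
```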

Remap(IInputArray, IOutputArray, IInputArray, IInputArray, Inter, BorderType, MCvScalar)

Applies a generic geometrical transformation to an image.

public static void Remap(IInputArray src, IOutputArray dst, IInputArray map1, IInputArray map2, Inter interpolation, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

map1 IInputArray

The first map of either (x,y) points or just x values having the type CV_16SC2 , CV_32FC1 , or CV_32FC2 . See convertMaps() for details on converting a floating point representation to fixed-point for speed.

map2 IInputArray

The second map of y values having the type CV_16UC1 , CV_32FC1 , or none (empty map if map1 is (x,y) points), respectively.

interpolation Inter

Interpolation method (see resize() ). The method 'Area' is not supported by this function.

borderMode BorderType

Pixel extrapolation method

borderValue MCvScalar

A value used to fill outliers

Repeat(IInputArray, int, int, IOutputArray)

Fills the destination array with the source array tiled: dst(i,j)=src(i mod rows(src), j mod cols(src)). So the destination array may be larger as well as smaller than the source array.

public static void Repeat(IInputArray src, int ny, int nx, IOutputArray dst)

Parameters

src IInputArray

Source array, image or matrix

ny int

The number of times the src is repeated along the vertical axis.

nx int

The number of times the src is repeated along the horizontal axis.

dst IOutputArray

Destination array, image or matrix
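
The tiling behaviour matches numpy's tile, with ny repeating rows and nx repeating columns (an illustration, not the Emgu call):

```python
import numpy as np

src = np.array([[1, 2],
                [3, 4]])
ny, nx = 2, 3  # repeat twice vertically, three times horizontally
dst = np.tile(src, (ny, nx))
```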

ReprojectImageTo3D(IInputArray, IOutputArray, IInputArray, bool, DepthType)

Transforms 1-channel disparity map to 3-channel image, a 3D surface.

public static void ReprojectImageTo3D(IInputArray disparity, IOutputArray image3D, IInputArray q, bool handleMissingValues = false, DepthType ddepth = DepthType.Default)

Parameters

disparity IInputArray

Disparity map

image3D IOutputArray

3-channel, 16-bit integer or 32-bit floating-point image - the output map of 3D points

q IInputArray

The reprojection 4x4 matrix, can be arbitrary, e.g. the one, computed by cvStereoRectify

handleMissingValues bool

Indicates, whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed to 3D points with a very large Z value (currently set to 10000).

ddepth DepthType

The optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.

Resize(IInputArray, IOutputArray, Size, double, double, Inter)

Resizes the image src down to or up to the specified size

public static void Resize(IInputArray src, IOutputArray dst, Size dsize, double fx = 0, double fy = 0, Inter interpolation = Inter.Linear)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image

dsize Size

Output image size; if it equals zero, it is computed as: dsize=Size(round(fx*src.cols), round(fy * src.rows)). Either dsize or both fx and fy must be non-zero.

fx double

Scale factor along the horizontal axis

fy double

Scale factor along the vertical axis;

interpolation Inter

Interpolation method

ResizeForFrame(IInputArray, IOutputArray, Size, Inter, bool)

Resize an image such that it fits in a given frame, keeping the aspect ratio.

public static void ResizeForFrame(IInputArray src, IOutputArray dst, Size frameSize, Inter interpolationMethod = Inter.Linear, bool scaleDownOnly = true)

Parameters

src IInputArray

The source image

dst IOutputArray

The result image

frameSize Size

The size of the frame

interpolationMethod Inter

The interpolation method

scaleDownOnly bool

If true, it will not try to scale up the image to fit the frame

Rodrigues(IInputArray, IOutputArray, IOutputArray)

Converts a rotation vector to rotation matrix or vice versa. Rotation vector is a compact representation of rotation matrix. Direction of the rotation vector is the rotation axis and the length of the vector is the rotation angle around the axis.

public static void Rodrigues(IInputArray src, IOutputArray dst, IOutputArray jacobian = null)

Parameters

src IInputArray

The input rotation vector (3x1 or 1x3) or rotation matrix (3x3).

dst IOutputArray

The output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively

jacobian IOutputArray

Optional output Jacobian matrix, 3x9 or 9x3 - partial derivatives of the output array components w.r.t the input array components
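
The vector-to-matrix direction follows the classic Rodrigues formula R = I + sin(theta) K + (1 - cos(theta)) K^2, sketched here in numpy (an illustration of the math, not the Emgu call):

```python
import numpy as np

# rotation vector = axis * angle; here 90 degrees about the z-axis
rvec = np.array([0.0, 0.0, np.pi / 2])

theta = np.linalg.norm(rvec)   # rotation angle
k = rvec / theta               # unit rotation axis
K = np.array([[0, -k[2], k[1]],
              [k[2], 0, -k[0]],
              [-k[1], k[0], 0]])  # cross-product (skew-symmetric) matrix
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

p = R @ np.array([1.0, 0.0, 0.0])  # the x-axis maps to the y-axis
```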

Rotate(IInputArray, IOutputArray, RotateFlags)

Rotates a 2D array in multiples of 90 degrees.

public static void Rotate(IInputArray src, IOutputArray dst, RotateFlags rotateCode)

Parameters

src IInputArray

Input array.

dst IOutputArray

Output array of the same type as src. The size is the same for ROTATE_180; the rows and cols are switched for ROTATE_90 and ROTATE_270.

rotateCode RotateFlags

A flag to specify how to rotate the array

RotatedRectangleIntersection(RotatedRect, RotatedRect, IOutputArray)

Finds out if there is any intersection between two rotated rectangles.

public static RectIntersectType RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, IOutputArray intersectingRegion)

Parameters

rect1 RotatedRect

First rectangle

rect2 RotatedRect

Second rectangle

intersectingRegion IOutputArray

The output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as VectorOfPointF or a Mat of size Mx1 of type CV_32FC2.

Returns

RectIntersectType

The intersect type

SVBackSubst(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)

Performs a singular value back substitution.

public static void SVBackSubst(IInputArray w, IInputArray u, IInputArray vt, IInputArray rhs, IOutputArray dst)

Parameters

w IInputArray

Singular values

u IInputArray

Left singular vectors

vt IInputArray

Transposed matrix of right singular vectors.

rhs IInputArray

Right-hand side of a linear system

dst IOutputArray

Found solution of the system.

SVDecomp(IInputArray, IOutputArray, IOutputArray, IOutputArray, SvdFlag)

Decomposes matrix A into a product of a diagonal matrix and two orthogonal matrices: A = U W V^T, where W is a diagonal matrix of singular values that can be coded as a 1D vector of singular values, and U and V are orthogonal matrices. All the singular values are non-negative and sorted (together with the corresponding U and V columns) in descending order.

public static void SVDecomp(IInputArray src, IOutputArray w, IOutputArray u, IOutputArray v, SvdFlag flags)

Parameters

src IInputArray

Source MxN matrix

w IOutputArray

Resulting singular value matrix (MxN or NxN) or vector (Nx1).

u IOutputArray

Optional left orthogonal matrix (MxM or MxN). If CV_SVD_U_T is specified, the numbers of rows and columns stated above should be swapped.

v IOutputArray

Optional right orthogonal matrix (NxN)

flags SvdFlag

Operation flags

Remarks

SVD algorithm is numerically robust and its typical applications include:

  1. accurate eigenvalue problem solution when matrix A is square, symmetric and positive definite, for example, when it is a covariance matrix. W in this case will be a vector of eigenvalues, and U = V is the matrix of eigenvectors (thus, only one of U or V needs to be calculated if the eigenvectors are required)
  2. accurate solution of poorly-conditioned linear systems
  3. least-squares solution of overdetermined linear systems. This and the previous task are performed by the cvSolve function with the CV_SVD method
  4. accurate calculation of various matrix characteristics such as the rank (number of non-zero singular values), condition number (ratio of the largest singular value to the smallest one), and determinant (the absolute value of the determinant is equal to the product of the singular values). None of these require calculation of the U and V matrices.
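
As a sketch of use case 4 above, the singular values alone are enough to estimate rank and condition number (matrix values chosen for illustration):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

var a = new Matrix<double>(new double[,] { { 4, 0 }, { 3, -5 } });
using (Mat w = new Mat())
using (Mat u = new Mat())
using (Mat v = new Mat())
{
    CvInvoke.SVDecomp(a.Mat, w, u, v, SvdFlag.Default);
    // w now holds the singular values, sorted in descending order;
    // the count of non-zero entries is the rank, and the ratio of the
    // largest to the smallest is the condition number.
}
```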

SanityCheck()

Check if the size of the C structures match those of C#

public static bool SanityCheck()

Returns

bool

True if the size matches

ScaleAdd(IInputArray, double, IInputArray, IOutputArray)

Calculates the sum of a scaled array and another array.

public static void ScaleAdd(IInputArray src1, double alpha, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

First input array

alpha double

Scale factor for the first array

src2 IInputArray

Second input array of the same size and type as src1

dst IOutputArray

Output array of the same size and type as src1
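
A small sketch: with alpha = 2, each output element is 2*src1 + src2.

```csharp
using Emgu.CV;

var src1 = new Matrix<double>(new double[,] { { 1, 2, 3 } });
var src2 = new Matrix<double>(new double[,] { { 10, 10, 10 } });
var dst = new Matrix<double>(1, 3);
CvInvoke.ScaleAdd(src1, 2.0, src2, dst);
// dst is now [12, 14, 16]
```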

Scharr(IInputArray, IOutputArray, DepthType, int, int, double, double, BorderType)

Calculates the first x- or y- image derivative using Scharr operator.

public static void Scharr(IInputArray src, IOutputArray dst, DepthType ddepth, int dx, int dy, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

input image.

dst IOutputArray

output image of the same size and the same number of channels as src.

ddepth DepthType

output image depth

dx int

order of the derivative x.

dy int

order of the derivative y.

scale double

optional scale factor for the computed derivative values; by default, no scaling is applied

delta double

optional delta value that is added to the results prior to storing them in dst.

borderType BorderType

pixel extrapolation method

SeamlessClone(IInputArray, IInputArray, IInputArray, Point, IOutputArray, CloningMethod)

Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, ones that are restricted to a manually selected region (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by novel content.

public static void SeamlessClone(IInputArray src, IInputArray dst, IInputArray mask, Point p, IOutputArray blend, CloningMethod flags)

Parameters

src IInputArray

Input 8-bit 3-channel image.

dst IInputArray

Input 8-bit 3-channel image.

mask IInputArray

Input 8-bit 1 or 3-channel image.

p Point

Point in dst image where object is placed.

blend IOutputArray

Output image with the same size and type as dst.

flags CloningMethod

Cloning method

SegmentMotion(IInputArray, IOutputArray, VectorOfRect, double, double)

Finds all the motion segments and marks them in segMask with individual values (1, 2, ...). It also returns a sequence of CvConnectedComp structures, one per motion component. After that, the motion direction for every component can be calculated with cvCalcGlobalOrientation using the extracted mask of the particular component (using cvCmp).

public static void SegmentMotion(IInputArray mhi, IOutputArray segMask, VectorOfRect boundingRects, double timestamp, double segThresh)

Parameters

mhi IInputArray

Motion history image

segMask IOutputArray

Image where the mask found should be stored, single-channel, 32-bit floating-point

boundingRects VectorOfRect

Vector containing ROIs of motion connected components.

timestamp double

Current time in milliseconds or other units

segThresh double

Segmentation threshold; recommended to be equal to the interval between motion history "steps" or greater

SelectROI(string, IInputArray, bool, bool)

Selects an ROI on the given image. The function creates a window and allows the user to select an ROI using the mouse. Controls: use space or enter to finish the selection, use key c to cancel the selection (the function will return the zero cv::Rect).

public static Rectangle SelectROI(string windowName, IInputArray img, bool showCrosshair = true, bool fromCenter = false)

Parameters

windowName string

Name of the window where selection process will be shown.

img IInputArray

Image to select a ROI.

showCrosshair bool

If true crosshair of selection rectangle will be shown.

fromCenter bool

If true, the center of the selection will match the initial mouse position. Otherwise, a corner of the selection rectangle will correspond to the initial mouse position.

Returns

Rectangle

Selected ROI or empty rect if selection canceled.
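
A typical pattern (window names hypothetical, `img` assumed to be an already loaded Mat) is to crop the image to the selected region:

```csharp
using System.Drawing;
using Emgu.CV;

Rectangle roi = CvInvoke.SelectROI("Select a region", img);
if (roi.Width > 0 && roi.Height > 0)
{
    // Mat(Mat, Rectangle) creates a view on the selected sub-region.
    using (Mat cropped = new Mat(img, roi))
    {
        CvInvoke.Imshow("Cropped", cropped);
        CvInvoke.WaitKey();
    }
}
```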

SelectROIs(string, IInputArray, bool, bool)

Selects ROIs on the given image. The function creates a window and allows the user to select ROIs using the mouse. Controls: use space or enter to finish the current selection and start a new one, use esc to terminate the multiple ROI selection process.

public static Rectangle[] SelectROIs(string windowName, IInputArray img, bool showCrosshair = true, bool fromCenter = false)

Parameters

windowName string

Name of the window where selection process will be shown.

img IInputArray

Image to select a ROI.

showCrosshair bool

If true crosshair of selection rectangle will be shown.

fromCenter bool

If true, the center of the selection will match the initial mouse position. Otherwise, a corner of the selection rectangle will correspond to the initial mouse position.

Returns

Rectangle[]

Selected ROIs.

SepFilter2D(IInputArray, IOutputArray, DepthType, IInputArray, IInputArray, Point, double, BorderType)

The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .

public static void SepFilter2D(IInputArray src, IOutputArray dst, DepthType ddepth, IInputArray kernelX, IInputArray kernelY, Point anchor, double delta = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image of the same size and the same number of channels as src.

ddepth DepthType

Destination image depth

kernelX IInputArray

Coefficients for filtering each row.

kernelY IInputArray

Coefficients for filtering each column.

anchor Point

Anchor position within the kernel. The value (-1,-1) means that the anchor is at the kernel center.

delta double

Value added to the filtered results before storing them.

borderType BorderType

Pixel extrapolation method

SetBreakOnError(bool)

When the break-on-error mode is set, the default error handler issues a hardware exception, which can make debugging more convenient.

public static extern bool SetBreakOnError(bool flag)

Parameters

flag bool

The flag

Returns

bool

The previous state

SetErrMode(int)

Sets the specified error mode.

public static extern int SetErrMode(int errorMode)

Parameters

errorMode int

The error mode

Returns

int

The previous error mode

SetErrStatus(ErrorCodes)

Sets the error status to the specified value. Mostly, the function is used to reset the error status (set it to CV_StsOk) to recover after an error. In other cases it is more natural to call cvError or CV_ERROR.

public static extern void SetErrStatus(ErrorCodes code)

Parameters

code ErrorCodes

The error status.

SetIdentity(IInputOutputArray, MCvScalar)

Initializes scaled identity matrix: arr(i,j)=value if i=j, 0 otherwise

public static void SetIdentity(IInputOutputArray mat, MCvScalar value)

Parameters

mat IInputOutputArray

The matrix to initialize (not necessarily square).

value MCvScalar

The value to assign to the diagonal elements.
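
A short sketch: the matrix need not be square, and any scalar can be placed on the diagonal.

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat m = new Mat(2, 3, DepthType.Cv64F, 1))
{
    CvInvoke.SetIdentity(m, new MCvScalar(5.0));
    // m is now:
    // | 5 0 0 |
    // | 0 5 0 |
}
```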

SetParallelForBackend(string, bool)

Replace OpenCV parallel_for backend.

public static bool SetParallelForBackend(string backendName, bool propagateNumThreads = true)

Parameters

backendName string

The name of the backend.

propagateNumThreads bool

If true, the number of threads of the current environment will be passed to the new backend.

Returns

bool

True if backend is set

Remarks

This call is not thread-safe. Consider calling this function from the main() before any other OpenCV processing functions (and without any other created threads).

SetWindowProperty(string, WindowPropertyFlags, double)

Changes parameters of a window dynamically.

public static void SetWindowProperty(string name, WindowPropertyFlags propId, double propValue)

Parameters

name string

Name of the window.

propId WindowPropertyFlags

Window property to edit.

propValue double

New value of the window property.

SetWindowTitle(string, string)

Updates window title

public static void SetWindowTitle(string winname, string title)

Parameters

winname string

Name of the window.

title string

New title.

Sobel(IInputArray, IOutputArray, DepthType, int, int, int, double, double, BorderType)

The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to the noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate first x- or y- image derivative. The first case corresponds to

 
 |-1  0  1|
 |-2  0  2|
 |-1  0  1|

kernel and the second one corresponds to

 |-1 -2 -1|
 | 0  0  0|
 | 1  2  1|

or

 | 1  2  1|
 | 0  0  0|
 |-1 -2 -1|

kernel, depending on the image origin (origin field of the IplImage structure). No scaling is done, so the destination image usually contains numbers with larger absolute values than the source image. To avoid overflow, the function requires a 16-bit destination image if the source image is 8-bit. The result can be converted back to 8-bit using the cvConvertScale or cvConvertScaleAbs functions. Besides 8-bit images the function can process 32-bit floating-point images. Both source and destination must be single-channel images of equal size or ROI size.

public static void Sobel(IInputArray src, IOutputArray dst, DepthType ddepth, int xorder, int yorder, int kSize = 3, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

Source image.

dst IOutputArray

Destination image

ddepth DepthType

output image depth; the following combinations of src.depth() and ddepth are supported:

src.depth() = CV_8U, ddepth = -1/CV_16S/CV_32F/CV_64F

src.depth() = CV_16U/CV_16S, ddepth = -1/CV_32F/CV_64F

src.depth() = CV_32F, ddepth = -1/CV_32F/CV_64F

src.depth() = CV_64F, ddepth = -1/CV_64F

when ddepth=-1, the destination image will have the same depth as the source; in the case of 8-bit input images it will result in truncated derivatives.

xorder int

Order of the derivative x

yorder int

Order of the derivative y

kSize int

Size of the extended Sobel kernel, must be 1, 3, 5 or 7.

scale double

Optional scale factor for the computed derivative values

delta double

Optional delta value that is added to the results prior to storing them in dst

borderType BorderType

Pixel extrapolation method
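
A common gradient-computation sketch (file name hypothetical): take the x-derivative into a 16-bit image to avoid overflow, then convert back to 8-bit for display.

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat gradX = new Mat())
using (Mat absGradX = new Mat())
{
    // xorder = 1, yorder = 0, 3x3 kernel: first derivative along x.
    CvInvoke.Sobel(gray, gradX, DepthType.Cv16S, 1, 0, 3);
    // Scale back to 8-bit absolute values for visualization.
    CvInvoke.ConvertScaleAbs(gradX, absGradX, 1.0, 0.0);
}
```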

Solve(IInputArray, IInputArray, IOutputArray, DecompMethod)

Solves linear system (src1)*(dst) = (src2)

public static bool Solve(IInputArray src1, IInputArray src2, IOutputArray dst, DecompMethod method)

Parameters

src1 IInputArray

The source matrix in the LHS

src2 IInputArray

The source matrix in the RHS

dst IOutputArray

The result

method DecompMethod

The method for solving the equation

Returns

bool

False if src1 is singular and the LU method is used; true otherwise.
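
A small sketch solving the 2x2 system 2x + y = 3, x + 3y = 4:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

var a = new Matrix<double>(new double[,] { { 2, 1 }, { 1, 3 } });
var b = new Matrix<double>(new double[,] { { 3 }, { 4 } });
var x = new Matrix<double>(2, 1);
bool ok = CvInvoke.Solve(a, b, x, DecompMethod.LU);
// ok is true and x is approximately [1, 1]
```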

SolveCubic(IInputArray, IOutputArray)

finds real roots of a cubic equation: coeffs[0]*x^3 + coeffs[1]*x^2 + coeffs[2]*x + coeffs[3] = 0 (if coeffs is 4-element vector) or x^3 + coeffs[0]*x^2 + coeffs[1]*x + coeffs[2] = 0 (if coeffs is 3-element vector)

public static int SolveCubic(IInputArray coeffs, IOutputArray roots)

Parameters

coeffs IInputArray

The equation coefficients, array of 3 or 4 elements

roots IOutputArray

The output array of real roots. Should have 3 elements. Padded with zeros if there is only one root

Returns

int

the number of real roots found
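
For instance, x^3 - 6x^2 + 11x - 6 = 0 factors as (x-1)(x-2)(x-3), so all three real roots should be reported:

```csharp
using Emgu.CV;

// 4-element coefficient vector: leading coefficient first.
var coeffs = new Matrix<double>(new double[] { 1, -6, 11, -6 });
var roots = new Matrix<double>(3, 1);
int n = CvInvoke.SolveCubic(coeffs, roots);
// n is 3; roots contains 1, 2 and 3 (order not guaranteed)
```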

SolveLP(Mat, Mat, Mat)

Solve a given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). What we mean here by "linear programming problem" (or LP problem, for short) can be formulated as: Maximize c·x subject to: A·x <= b and x >= 0

public static SolveLPResult SolveLP(Mat functionMatrix, Mat constraintMatrix, Mat zMatrix)

Parameters

functionMatrix Mat

This row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, column-vector may be also submitted, in the latter case it is understood to correspond to c^T.

constraintMatrix Mat

m-by-n+1 matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers.

zMatrix Mat

The solution will be returned here as a column-vector: it corresponds to x in the formulation above. It will contain 64-bit floating point numbers.

Returns

SolveLPResult

The return codes
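
A tiny LP sketch: maximize 3x + y subject to x + y <= 4 and x <= 2 (with x, y >= 0); the optimum is x = 2, y = 2. The numbers are illustrative only.

```csharp
using Emgu.CV;

var c = new Matrix<double>(new double[,] { { 3, 1 } });  // objective row-vector
var constraints = new Matrix<double>(new double[,]
{
    { 1, 1, 4 },   // x + y <= 4  (rightmost column is b)
    { 1, 0, 2 }    // x     <= 2
});
using (Mat z = new Mat())
{
    SolveLPResult result = CvInvoke.SolveLP(c.Mat, constraints.Mat, z);
    // on success, z contains the solution column-vector [2, 2]
}
```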

SolveP3P(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, SolvePnpMethod)

Finds an object pose from 3 3D-2D point correspondences.

public static int SolveP3P(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, SolvePnpMethod flags)

Parameters

objectPoints IInputArray

Array of object points in the object coordinate space, 3x3 1-channel or 1x3/3x1 3-channel. VectorOfPoint3f can be also passed here.

imagePoints IInputArray

Array of corresponding image points, 3x2 1-channel or 1x3/3x1 2-channel. VectorOfPoint2f can be also passed here.

cameraMatrix IInputArray

Input camera matrix A=[[fx 0 0] [0 fy 0] [cx cy 1]] .

distCoeffs IInputArray

Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

rvecs IOutputArrayOfArrays

Output rotation vectors (see Rodrigues ) that, together with tvecs , brings points from the model coordinate system to the camera coordinate system. A P3P problem has up to 4 solutions.

tvecs IOutputArrayOfArrays

Output translation vectors.

flags SolvePnpMethod

Method for solving a P3P problem: either P3P or AP3P

Returns

int

Number of solutions

SolvePnP(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, bool, SolvePnpMethod)

Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error.

public static bool SolvePnP(IInputArray objectPoints, IInputArray imagePoints, IInputArray intrinsicMatrix, IInputArray distortionCoeffs, IOutputArray rotationVector, IOutputArray translationVector, bool useExtrinsicGuess = false, SolvePnpMethod flags = SolvePnpMethod.Iterative)

Parameters

objectPoints IInputArray

The array of object points, 3xN or Nx3, where N is the number of points in the view

imagePoints IInputArray

The array of corresponding image points, 2xN or Nx2, where N is the number of points in the view

intrinsicMatrix IInputArray

The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].

distortionCoeffs IInputArray

The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's.

rotationVector IOutputArray

The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2).

translationVector IOutputArray

The output 3x1 or 1x3 translation vector

useExtrinsicGuess bool

Use the input rotation and translation parameters as a guess

flags SolvePnpMethod

Method for solving a PnP problem

Returns

bool

True if successful

SolvePnP(MCvPoint3D32f[], PointF[], IInputArray, IInputArray, IOutputArray, IOutputArray, bool, SolvePnpMethod)

Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of the 3D object points and their corresponding 2D projections must be specified. This function also minimizes back-projection error.

public static bool SolvePnP(MCvPoint3D32f[] objectPoints, PointF[] imagePoints, IInputArray intrinsicMatrix, IInputArray distortionCoeffs, IOutputArray rotationVector, IOutputArray translationVector, bool useExtrinsicGuess = false, SolvePnpMethod method = SolvePnpMethod.Iterative)

Parameters

objectPoints MCvPoint3D32f[]

The array of object points

imagePoints PointF[]

The array of corresponding image points

intrinsicMatrix IInputArray

The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].

distortionCoeffs IInputArray

The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's.

rotationVector IOutputArray

The output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2).

translationVector IOutputArray

The output 3x1 or 1x3 translation vector

useExtrinsicGuess bool

Use the input rotation and translation parameters as a guess

method SolvePnpMethod

Method for solving a PnP problem

Returns

bool

True if successful
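
A sketch using the array overload. All numeric values here (camera matrix, model points, pixel locations) are hypothetical placeholders; passing null for the distortion coefficients assumes no lens distortion.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Four coplanar model points of a unit square (hypothetical).
MCvPoint3D32f[] objectPoints =
{
    new MCvPoint3D32f(0, 0, 0), new MCvPoint3D32f(1, 0, 0),
    new MCvPoint3D32f(1, 1, 0), new MCvPoint3D32f(0, 1, 0)
};
// Their observed projections in the image (hypothetical).
PointF[] imagePoints =
{
    new PointF(320, 240), new PointF(400, 240),
    new PointF(400, 320), new PointF(320, 320)
};
var cameraMatrix = new Matrix<double>(new double[,]
{
    { 800, 0, 320 },
    { 0, 800, 240 },
    { 0,   0,   1 }
});
using (Mat rvec = new Mat())
using (Mat tvec = new Mat())
{
    bool ok = CvInvoke.SolvePnP(objectPoints, imagePoints, cameraMatrix, null, rvec, tvec);
    // rvec/tvec now hold the estimated pose if ok is true.
}
```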

SolvePnPGeneric(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, bool, SolvePnpMethod, IInputArray, IInputArray, IOutputArray)

Finds an object pose from 3D-2D point correspondences.

public static int SolvePnPGeneric(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, bool useExtrinsicGuess = false, SolvePnpMethod flags = SolvePnpMethod.Iterative, IInputArray rvec = null, IInputArray tvec = null, IOutputArray reprojectionError = null)

Parameters

objectPoints IInputArray

Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.

imagePoints IInputArray

Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.

cameraMatrix IInputArray

Input camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].

distCoeffs IInputArray

Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

rvecs IOutputArrayOfArrays

Vector of output rotation vectors (see Rodrigues ) that, together with tvecs, brings points from the model coordinate system to the camera coordinate system.

tvecs IOutputArrayOfArrays

Vector of output translation vectors.

useExtrinsicGuess bool

Parameter used for SolvePnpMethod.Iterative. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

flags SolvePnpMethod

Method for solving a PnP problem

rvec IInputArray

Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.

tvec IInputArray

Translation vector used to initialize an iterative PnP refinement algorithm, when flag is SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.

reprojectionError IOutputArray

Optional vector of reprojection error, that is the RMS error between the input image points and the 3D object points projected with the estimated pose.

Returns

int

The number of solutions

SolvePnPRansac(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, bool, int, float, double, IOutputArray, SolvePnpMethod)

Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.

public static bool SolvePnPRansac(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray rvec, IOutputArray tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confident = 0.99, IOutputArray inliers = null, SolvePnpMethod flags = SolvePnpMethod.Iterative)

Parameters

objectPoints IInputArray

Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3D32f can be also passed here.

imagePoints IInputArray

Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPointF can be also passed here.

cameraMatrix IInputArray

Input camera matrix

distCoeffs IInputArray

Input vector of distortion coefficients of 4, 5, 8 or 12 elements. If the vector is null/empty, the zero distortion coefficients are assumed.

rvec IOutputArray

Output rotation vector

tvec IOutputArray

Output translation vector.

useExtrinsicGuess bool

If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.

iterationsCount int

Number of iterations.

reprojectionError float

Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.

confident double

The probability that the algorithm produces a useful result.

inliers IOutputArray

Output vector that contains indices of inliers in objectPoints and imagePoints .

flags SolvePnpMethod

Method for solving a PnP problem

Returns

bool

True if successful

SolvePnPRefineLM(IInputArray, IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, MCvTermCriteria)

Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from a 3D-2D point correspondences and starting from an initial solution

public static void SolvePnPRefineLM(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IInputOutputArray rvec, IInputOutputArray tvec, MCvTermCriteria criteria)

Parameters

objectPoints IInputArray

Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.

imagePoints IInputArray

Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.

cameraMatrix IInputArray

Input camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].

distCoeffs IInputArray

Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

rvec IInputOutputArray

Input/Output rotation vector (see Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.

tvec IInputOutputArray

Input/Output translation vector. Input values are used as an initial solution.

criteria MCvTermCriteria

Criteria when to stop the Levenberg-Marquardt iterative algorithm.

SolvePnPRefineVVS(IInputArray, IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, MCvTermCriteria, double)

Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from a 3D-2D point correspondences and starting from an initial solution.

public static void SolvePnPRefineVVS(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IInputOutputArray rvec, IInputOutputArray tvec, MCvTermCriteria criteria, double VVSlambda)

Parameters

objectPoints IInputArray

Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.

imagePoints IInputArray

Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.

cameraMatrix IInputArray

Input camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].

distCoeffs IInputArray

Input vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

rvec IInputOutputArray

Input/Output rotation vector (see Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.

tvec IInputOutputArray

Input/Output translation vector. Input values are used as an initial solution.

criteria MCvTermCriteria

Criteria when to stop the Levenberg-Marquardt iterative algorithm.

VVSlambda double

Gain for the virtual visual servoing control law, equivalent to the α gain in the Damped Gauss-Newton formulation.

SolvePoly(IInputArray, IOutputArray, int)

Finds all real and complex roots of any degree polynomial with real coefficients

public static double SolvePoly(IInputArray coeffs, IOutputArray roots, int maxiter = 300)

Parameters

coeffs IInputArray

The (degree + 1)-length array of equation coefficients (CV_32FC1 or CV_64FC1)

roots IOutputArray

The degree-length output array of real or complex roots (CV_32FC2 or CV_64FC2)

maxiter int

The maximum number of iterations

Returns

double

The max difference.

Sort(IInputArray, IOutputArray, SortFlags)

Sorts each matrix row or each matrix column in ascending or descending order. Pass the appropriate combination of operation flags to get the desired behaviour.

public static void Sort(IInputArray src, IOutputArray dst, SortFlags flags)

Parameters

src IInputArray

input single-channel array.

dst IOutputArray

output array of the same size and type as src.

flags SortFlags

operation flags

SortIdx(IInputArray, IOutputArray, SortFlags)

Sorts each matrix row or each matrix column in ascending or descending order. Pass the appropriate combination of operation flags to get the desired behaviour. Instead of reordering the elements themselves, it stores the indices of the sorted elements in the output array.

public static void SortIdx(IInputArray src, IOutputArray dst, SortFlags flags)

Parameters

src IInputArray

input single-channel array.

dst IOutputArray

output integer array of the same size as src.

flags SortFlags

operation flags
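
The flag combination picks both the axis and the direction; SortIdx returns the permutation instead of the sorted values. A sketch, assuming the SortFlags members follow the OpenCV names:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

var src = new Matrix<float>(new float[,] { { 3, 1, 2 } });
var dst = new Matrix<float>(1, 3);
var idx = new Matrix<int>(1, 3);
CvInvoke.Sort(src, dst, SortFlags.SortEveryRow | SortFlags.SortAscending);
CvInvoke.SortIdx(src, idx, SortFlags.SortEveryRow | SortFlags.SortAscending);
// dst is [1, 2, 3]; idx is [1, 2, 0] (the positions of the sorted values in src)
```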

SpatialGradient(IInputArray, IOutputArray, IOutputArray, int, BorderType)

Calculates the first order image derivative in both x and y using a Sobel operator. Equivalent to calling: Sobel(src, dx, CV_16SC1, 1, 0, 3 ); Sobel(src, dy, CV_16SC1, 0, 1, 3 );

public static void SpatialGradient(IInputArray src, IOutputArray dx, IOutputArray dy, int ksize = 3, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

input image.

dx IOutputArray

output image with first-order derivative in x.

dy IOutputArray

output image with first-order derivative in y.

ksize int

size of Sobel kernel. It must be 3.

borderType BorderType

pixel extrapolation method

Split(IInputArray, IOutputArray)

Divides a multi-channel array into separate single-channel arrays. Two modes are available for the operation. If the source array has N channels, then if the first N destination channels are not IntPtr.Zero, they are all extracted from the source array; otherwise, if only a single destination channel of the first N is not IntPtr.Zero, that particular channel is extracted; otherwise an error is raised. The rest of the destination channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to extract a single channel from the image.

public static void Split(IInputArray src, IOutputArray mv)

Parameters

src IInputArray

Input multi-channel array

mv IOutputArray

Output array or vector of arrays
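
A typical sketch (file name hypothetical): splitting a BGR image into its three planes with a VectorOfMat as the output.

```csharp
using Emgu.CV;
using Emgu.CV.Util;

using (Mat bgr = CvInvoke.Imread("input.png"))
using (VectorOfMat channels = new VectorOfMat())
{
    CvInvoke.Split(bgr, channels);
    // channels[0], channels[1], channels[2] are the B, G and R planes
}
```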

SqrBoxFilter(IInputArray, IOutputArray, DepthType, Size, Point, bool, BorderType)

Calculates the normalized sum of squares of the pixel values overlapping the filter. For every pixel (x, y) in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel (x, y). The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.

public static void SqrBoxFilter(IInputArray src, IOutputArray dst, DepthType ddepth, Size ksize, Point anchor, bool normalize = true, BorderType borderType = BorderType.Default)

Parameters

src IInputArray

input image

dst IOutputArray

output image of the same size and type as src

ddepth DepthType

the output image depth (-1 to use src.depth())

ksize Size

kernel size

anchor Point

kernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center

normalize bool

Flag specifying whether the kernel is to be normalized by its area or not.

borderType BorderType

border mode used to extrapolate pixels outside of the image

Sqrt(IInputArray, IOutputArray)

Calculates the square root of each source array element. In the case of multi-channel arrays, each channel is processed independently. The function accuracy is approximately the same as of the built-in std::sqrt.

public static void Sqrt(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

The source floating-point array

dst IOutputArray

The destination array; will have the same size and the same type as src

StackBlur(IInputArray, IOutputArray, Size)

The function applies stackBlur to an image. stackBlur can generate results similar to Gaussian blur, and its time consumption does not increase with the kernel size. It creates a kind of moving stack of colors whilst scanning through the image. Thereby it just has to add one new block of color to the right side of the stack and remove the leftmost color. The remaining colors on the topmost layer of the stack are either added on or reduced by one, depending on whether they are on the right or on the left side of the stack. The only supported borderType is BORDER_REPLICATE. The original algorithm was proposed by Mario Klingemann and can be found at http://underdestruction.com/2004/02/25/stackblur-2004.

public static void StackBlur(IInputArray src, IOutputArray dst, Size ksize)

Parameters

src IInputArray

Input image. The number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S or CV_32F.

dst IOutputArray

Output image of the same size and type as src.

ksize Size

Stack-blurring kernel size. The ksize.width and ksize.height can differ but they both must be positive and odd.

StereoCalibrate(IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IInputOutputArray, IInputOutputArray, IOutputArray, IOutputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, IOutputArray, CalibType, MCvTermCriteria)

Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (that can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other; i.e. given (R1, T1) it should be possible to compute (R2, T2): we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2 = R*R1, T2 = R*T1 + T

public static double StereoCalibrate(IInputArray objectPoints, IInputArray imagePoints1, IInputArray imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IInputOutputArray r, IInputOutputArray t, IOutputArray e, IOutputArray f, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, IOutputArray perViewErrors, CalibType flags, MCvTermCriteria termCrit)

Parameters

objectPoints IInputArray

The 3D location of the object points. The first index is the index of image, second index is the index of the point

imagePoints1 IInputArray

The 2D image location of the points for camera 1. The first index is the index of the image, second index is the index of the point

imagePoints2 IInputArray

The 2D image location of the points for camera 2. The first index is the index of the image, second index is the index of the point

cameraMatrix1 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs1 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

cameraMatrix2 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs2 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

imageSize Size

Size of the image, used only to initialize intrinsic camera matrix

r IInputOutputArray

Output rotation matrix. Together with the translation vector T, this matrix brings points given in the first camera's coordinate system to points in the second camera's coordinate system. In more technical terms, the tuple of R and T performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Due to its duality, this tuple is equivalent to the position of the first camera with respect to the second camera coordinate system.

t IInputOutputArray

Output translation vector, see description for "r".

e IOutputArray

The optional output essential matrix

f IOutputArray

The optional output fundamental matrix

rvecs IOutputArrayOfArrays

Output vector of rotation vectors ( Rodrigues ) estimated for each pattern view in the coordinate system of the first camera of the stereo pair (e.g. std::vector < cv::Mat >). More in detail, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space of the first camera of the stereo pair.

tvecs IOutputArrayOfArrays

Output vector of translation vectors estimated for each pattern view, see parameter description of previous output parameter ( rvecs ).

perViewErrors IOutputArray

Output vector of the RMS re-projection error estimated for each pattern view.

flags CalibType

The calibration flags

termCrit MCvTermCriteria

Termination criteria for the iterative optimization algorithm

Returns

double

The final value of the re-projection error.
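The pose relation above (R2 = R*R1, T2 = R*T1 + T) can be verified numerically. Below is a minimal NumPy sketch, not the Emgu.CV API: it fabricates a ground-truth (R, T), derives the two object poses, and recovers the stereo extrinsics the way StereoCalibrate effectively does over many views.

```python
import numpy as np

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Ground-truth stereo extrinsics: camera 2 relative to camera 1.
R = rot_z(0.1)
T = np.array([[-0.12], [0.0], [0.0]])  # e.g. a 12 cm baseline

# Pose of some object in camera 1's frame.
R1 = rot_z(0.5)
T1 = np.array([[0.3], [0.1], [2.0]])

# The same object's pose in camera 2's frame follows the relation.
R2 = R @ R1
T2 = R @ T1 + T

# Recover (R, T) from the two poses: R = R2 * R1^T, T = T2 - R * T1.
R_rec = R2 @ R1.T
T_rec = T2 - R_rec @ T1
assert np.allclose(R_rec, R) and np.allclose(T_rec, T)
```

In practice the function averages this relation over all pattern views while jointly refining the intrinsics.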

StereoCalibrate(IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IOutputArray, IOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)

Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (that can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other; i.e. given (R1, T1) it should be possible to compute (R2, T2): we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2 = R*R1, T2 = R*T1 + T

public static double StereoCalibrate(IInputArray objectPoints, IInputArray imagePoints1, IInputArray imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IOutputArray r, IOutputArray t, IOutputArray e, IOutputArray f, CalibType flags, MCvTermCriteria termCrit)

Parameters

objectPoints IInputArray

The joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views

imagePoints1 IInputArray

The joint matrix of corresponding image points in the views from the 1st camera, 2xN or Nx2, where N is the total number of points in all views

imagePoints2 IInputArray

The joint matrix of corresponding image points in the views from the 2nd camera, 2xN or Nx2, where N is the total number of points in all views

cameraMatrix1 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs1 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

cameraMatrix2 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs2 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

imageSize Size

Size of the image, used only to initialize intrinsic camera matrix

r IOutputArray

The rotation matrix between the 1st and the 2nd cameras' coordinate systems

t IOutputArray

The translation vector between the cameras' coordinate systems

e IOutputArray

The optional output essential matrix

f IOutputArray

The optional output fundamental matrix

flags CalibType

The calibration flags

termCrit MCvTermCriteria

Termination criteria for the iterative optimization algorithm

Returns

double

The final value of the re-projection error.

StereoCalibrate(MCvPoint3D32f[][], PointF[][], PointF[][], IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IOutputArray, IOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)

Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (that can be done with cvFindExtrinsicCameraParams2), then obviously those poses relate to each other; i.e. given (R1, T1) it should be possible to compute (R2, T2): we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2 = R*R1, T2 = R*T1 + T

public static double StereoCalibrate(MCvPoint3D32f[][] objectPoints, PointF[][] imagePoints1, PointF[][] imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IOutputArray r, IOutputArray t, IOutputArray e, IOutputArray f, CalibType flags, MCvTermCriteria termCrit)

Parameters

objectPoints MCvPoint3D32f[][]

The 3D location of the object points. The first index is the index of image, second index is the index of the point

imagePoints1 PointF[][]

The 2D image location of the points for camera 1. The first index is the index of the image, second index is the index of the point

imagePoints2 PointF[][]

The 2D image location of the points for camera 2. The first index is the index of the image, second index is the index of the point

cameraMatrix1 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs1 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

cameraMatrix2 IInputOutputArray

The input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized

distCoeffs2 IInputOutputArray

The input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5

imageSize Size

Size of the image, used only to initialize intrinsic camera matrix

r IOutputArray

The rotation matrix between the 1st and the 2nd cameras' coordinate systems

t IOutputArray

The translation vector between the cameras' coordinate systems

e IOutputArray

The optional output essential matrix

f IOutputArray

The optional output fundamental matrix

flags CalibType

The calibration flags

termCrit MCvTermCriteria

Termination criteria for the iterative optimization algorithm

Returns

double

The final value of the re-projection error.

StereoRectify(IInputArray, IInputArray, IInputArray, IInputArray, Size, IInputArray, IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, StereoRectifyType, double, Size, ref Rectangle, ref Rectangle)

Computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, that makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. On input the function takes the matrices computed by cvStereoCalibrate, and on output it gives 2 rotation matrices and also 2 projection matrices in the new coordinates. The function is normally called after cvStereoCalibrate, which computes both camera matrices, the distortion coefficients, R and T

public static void StereoRectify(IInputArray cameraMatrix1, IInputArray distCoeffs1, IInputArray cameraMatrix2, IInputArray distCoeffs2, Size imageSize, IInputArray r, IInputArray t, IOutputArray r1, IOutputArray r2, IOutputArray p1, IOutputArray p2, IOutputArray q, StereoRectifyType flags, double alpha, Size newImageSize, ref Rectangle validPixRoi1, ref Rectangle validPixRoi2)

Parameters

cameraMatrix1 IInputArray

The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]

distCoeffs1 IInputArray

The vectors of distortion coefficients for first camera, 4x1, 1x4, 5x1 or 1x5

cameraMatrix2 IInputArray

The camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]

distCoeffs2 IInputArray

The vectors of distortion coefficients for second camera, 4x1, 1x4, 5x1 or 1x5

imageSize Size

Size of the image used for stereo calibration

r IInputArray

The rotation matrix between the 1st and the 2nd cameras' coordinate systems

t IInputArray

The translation vector between the cameras' coordinate systems

r1 IOutputArray

3x3 Rectification transforms (rotation matrices) for the first camera

r2 IOutputArray

3x3 Rectification transforms (rotation matrices) for the second camera

p1 IOutputArray

3x4 Projection matrices in the new (rectified) coordinate systems

p2 IOutputArray

3x4 Projection matrices in the new (rectified) coordinate systems

q IOutputArray

The optional output disparity-to-depth mapping matrix, 4x4, see cvReprojectImageTo3D.

flags StereoRectifyType

The operation flags, use ZeroDisparity for default

alpha double

Use -1 for default

newImageSize Size

Use Size.Empty for default

validPixRoi1 Rectangle

The valid pixel ROI for image1

validPixRoi2 Rectangle

The valid pixel ROI for image2

StereoRectifyUncalibrated(IInputArray, IInputArray, IInputArray, Size, IOutputArray, IOutputArray, double)

Computes the rectification transformations without knowing intrinsic parameters of the cameras and their relative position in space, hence the suffix "Uncalibrated". Another related difference from cvStereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations, encoded by the homography matrices H1 and H2. The function implements the following algorithm [Hartley99].

public static bool StereoRectifyUncalibrated(IInputArray points1, IInputArray points2, IInputArray f, Size imgSize, IOutputArray h1, IOutputArray h2, double threshold = 5)

Parameters

points1 IInputArray

The array of 2D points

points2 IInputArray

The array of 2D points

f IInputArray

Fundamental matrix. It can be computed using the same set of point pairs points1 and points2 using cvFindFundamentalMat

imgSize Size

Size of the image

h1 IOutputArray

The rectification homography matrices for the first images

h2 IOutputArray

The rectification homography matrices for the second images

threshold double

If the parameter is greater than zero, all point pairs that do not comply with the epipolar geometry well enough (that is, points for which fabs(points2[i]^T * F * points1[i]) > threshold) are rejected prior to computing the homographies

Returns

bool

True if successful

Remarks

Note that while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it is better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately by using cvCalibrateCamera2, and then the images can be corrected using cvUndistort2
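The rejection rule fabs(points2[i]^T * F * points1[i]) > threshold can be illustrated with a toy fundamental matrix. The NumPy sketch below (illustrative, not the Emgu.CV API) uses F = [t]_x for a pure horizontal translation between identical cameras, for which the epipolar constraint reduces to "corresponding points lie on the same image row":

```python
import numpy as np

# Fundamental matrix for a pure horizontal translation t = (1, 0, 0)
# between identical pinhole cameras with identity intrinsics: F = [t]_x.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_residual(p1, p2):
    """abs(x2^T F x1) for homogeneous points x = (u, v, 1)."""
    x1 = np.array([p1[0], p1[1], 1.0])
    x2 = np.array([p2[0], p2[1], 1.0])
    return abs(x2 @ F @ x1)

threshold = 5.0
good = epipolar_residual((10.0, 42.0), (7.0, 42.5))  # nearly the same row
bad = epipolar_residual((10.0, 42.0), (7.0, 80.0))   # far off the epipolar line
assert good <= threshold and bad > threshold
```

Pairs like `bad` would be discarded before the homographies H1 and H2 are estimated.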

Stylization(IInputArray, IOutputArray, float, float)

Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features.

public static void Stylization(IInputArray src, IOutputArray dst, float sigmaS = 60, float sigmaR = 0.45)

Parameters

src IInputArray

Input 8-bit 3-channel image.

dst IOutputArray

Output image with the same size and type as src.

sigmaS float

Range from 0 to 200.

sigmaR float

Range from 0 to 1.

Subtract(IInputArray, IInputArray, IOutputArray, IInputArray, DepthType)

Subtracts one array from another one: dst(I)=src1(I)-src2(I) if mask(I)!=0 All the arrays must have the same type, except the mask, and the same size (or ROI size)

public static void Subtract(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null, DepthType dtype = DepthType.Default)

Parameters

src1 IInputArray

The first source array

src2 IInputArray

The second source array

dst IOutputArray

The destination array

mask IInputArray

Operation mask, 8-bit single channel array; specifies elements of destination array to be changed

dtype DepthType

Optional depth of the output array
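The masked, saturating subtraction dst(I) = src1(I) - src2(I) if mask(I) != 0 can be sketched in NumPy (illustrative of the semantics only; in C# you would call CvInvoke.Subtract):

```python
import numpy as np

src1 = np.array([[10, 20], [30, 40]], dtype=np.uint8)
src2 = np.array([[1, 2], [3, 50]], dtype=np.uint8)
mask = np.array([[1, 0], [1, 1]], dtype=np.uint8)

# dst(I) = src1(I) - src2(I) where mask(I) != 0; other elements are left
# untouched (here dst starts as zeros). OpenCV saturates the result, so
# 40 - 50 clamps to 0 for 8-bit data instead of wrapping around.
dst = np.zeros_like(src1)
diff = np.clip(src1.astype(np.int16) - src2.astype(np.int16), 0, 255)
dst[mask != 0] = diff[mask != 0].astype(np.uint8)
assert dst.tolist() == [[9, 0], [27, 0]]
```

The saturation step is the important difference from plain NumPy subtraction, which would wrap 40 - 50 to 246 for uint8.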

Sum(IInputArray)

Calculates the sum S of array elements, independently for each channel: S_c = sum_I arr(I)_c. If the array is an IplImage and COI is set, the function processes the selected channel only and stores the sum to the first scalar component (S0).

public static MCvScalar Sum(IInputArray src)

Parameters

src IInputArray

The array

Returns

MCvScalar

The sum of array elements

Swap(Mat, Mat)

Swaps two matrices

public static void Swap(Mat m1, Mat m2)

Parameters

m1 Mat

The Mat to be swapped

m2 Mat

The Mat to be swapped

Swap(UMat, UMat)

Swaps two matrices

public static void Swap(UMat m1, UMat m2)

Parameters

m1 UMat

The UMat to be swapped

m2 UMat

The UMat to be swapped

TempFile(string)

Get a temporary file name

public static string TempFile(string suffix)

Parameters

suffix string

The suffix of the temporary file name

Returns

string

A temporary file name

TextureFlattening(IInputArray, IInputArray, IOutputArray, float, float, int)

By retaining only the gradients at edge locations before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here the Canny edge detector is used.

public static void TextureFlattening(IInputArray src, IInputArray mask, IOutputArray dst, float lowThreshold = 30, float highThreshold = 45, int kernelSize = 3)

Parameters

src IInputArray

Input 8-bit 3-channel image.

mask IInputArray

Input 8-bit 1 or 3-channel image.

dst IOutputArray

Output image with the same size and type as src.

lowThreshold float

Range from 0 to 100.

highThreshold float

Value > 100

kernelSize int

The size of the Sobel kernel to be used.

Threshold(IInputArray, IOutputArray, double, double, ThresholdType)

Applies a fixed-level threshold to each array element. The function applies fixed-level thresholding to a multiple-channel array. It is typically used to get a bi-level (binary) image out of a grayscale image (compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function; they are determined by the type parameter.

public static double Threshold(IInputArray src, IOutputArray dst, double threshold, double maxValue, ThresholdType thresholdType)

Parameters

src IInputArray

Input array (multiple-channel, 8-bit or 32-bit floating point).

dst IOutputArray

Output array of the same size and type and the same number of channels as src.

threshold double

Threshold value

maxValue double

Maximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types

thresholdType ThresholdType

Thresholding type

Returns

double

The computed threshold value if Otsu's or the Triangle method is used.
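Two of the thresholding rules selected by the type parameter can be sketched directly in NumPy (illustrative only; in C# you would call CvInvoke.Threshold with the corresponding ThresholdType value):

```python
import numpy as np

src = np.array([10, 100, 128, 200], dtype=np.uint8)
threshold, max_value = 127, 255

# THRESH_BINARY: dst = max_value where src > threshold, else 0.
binary = np.where(src > threshold, max_value, 0).astype(np.uint8)
assert binary.tolist() == [0, 0, 255, 255]

# THRESH_TRUNC: dst = min(src, threshold); max_value is unused here.
trunc = np.minimum(src, threshold).astype(np.uint8)
assert trunc.tolist() == [10, 100, 127, 127]
```

With Otsu's or the Triangle method the threshold argument is ignored and the function computes and returns an optimal value from the image histogram instead.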

Trace(IInputArray)

Returns sum of diagonal elements of the matrix mat.

public static MCvScalar Trace(IInputArray mat)

Parameters

mat IInputArray

the matrix

Returns

MCvScalar

The sum of the diagonal elements of the matrix mat

Transform(IInputArray, IOutputArray, IInputArray)

Performs a matrix transformation of every element of array src and stores the results in dst. Both source and destination arrays should have the same depth and the same size or selected ROI size. The transformation matrix m should be a real floating-point matrix.

public static void Transform(IInputArray src, IOutputArray dst, IInputArray m)

Parameters

src IInputArray

The first source array

dst IOutputArray

The destination array

m IInputArray

transformation 2x2 or 2x3 floating-point matrix.
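For a 2x3 matrix m, each 2-channel element is mapped as dst(I) = A*src(I) + b, where A is the left 2x2 block and b the last column. A NumPy sketch of this per-element semantics (illustrative, not the Emgu.CV API):

```python
import numpy as np

# 2x3 transformation: 90-degree rotation (left 2x2 block) plus a shift
# (last column), applied to every 2-channel element independently.
m = np.array([[0.0, -1.0, 1.0],
              [1.0,  0.0, 2.0]])
A, b = m[:, :2], m[:, 2]

src = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])  # N elements, 2 channels
dst = src @ A.T + b
assert dst.tolist() == [[1.0, 3.0], [0.0, 2.0], [-2.0, 4.0]]
```

With a 2x2 matrix the shift column is absent and the mapping is purely linear.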

Transpose(IInputArray, IOutputArray)

Transposes matrix src: dst(i,j)=src(j,i). Note that no complex conjugation is done in the case of a complex matrix; conjugation should be done separately (see the sample code in cvXorS for an example)

public static void Transpose(IInputArray src, IOutputArray dst)

Parameters

src IInputArray

The source matrix

dst IOutputArray

The destination matrix

TriangulatePoints(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)

Reconstructs points by triangulation.

public static void TriangulatePoints(IInputArray projMat1, IInputArray projMat2, IInputArray projPoints1, IInputArray projPoints2, IOutputArray points4D)

Parameters

projMat1 IInputArray

3x4 projection matrix of the first camera.

projMat2 IInputArray

3x4 projection matrix of the second camera.

projPoints1 IInputArray

2xN array of feature points in the first image. It can be also a vector of feature points or two-channel matrix of size 1xN or Nx1

projPoints2 IInputArray

2xN array of corresponding points in the second image. It can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.

points4D IOutputArray

4xN array of reconstructed points in homogeneous coordinates.
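Triangulation of a single point can be sketched with the standard linear (DLT) construction: each observation (u, v) under projection matrix P contributes the rows u*P[2]-P[0] and v*P[2]-P[1], and the homogeneous 3D point is the null vector of the stacked system. This is the textbook method; OpenCV's implementation may differ in details. Illustrative NumPy only, not the Emgu.CV API:

```python
import numpy as np

# Two 3x4 projection matrices: camera 1 at the origin, camera 2 shifted.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X = np.array([0.5, -0.2, 4.0, 1.0])  # ground-truth homogeneous 3D point

# Project into both images (perspective division by the third coordinate).
u1, v1, w1 = P1 @ X
u2, v2, w2 = P2 @ X
x1, x2 = (u1 / w1, v1 / w1), (u2 / w2, v2 / w2)

# Stack the four DLT rows and take the SVD null vector.
A = np.vstack([x1[0] * P1[2] - P1[0],
               x1[1] * P1[2] - P1[1],
               x2[0] * P2[2] - P2[0],
               x2[1] * P2[2] - P2[1]])
_, _, Vt = np.linalg.svd(A)
X_h = Vt[-1]
X_rec = X_h[:3] / X_h[3]
assert np.allclose(X_rec, X[:3])
```

With noisy observations the SVD gives the least-squares solution of the overdetermined system instead of an exact null vector.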

Undistort(IInputArray, IOutputArray, IInputArray, IInputArray, IInputArray)

Transforms the image to compensate radial and tangential lens distortion.

public static void Undistort(IInputArray src, IOutputArray dst, IInputArray cameraMatrix, IInputArray distortionCoeffs, IInputArray newCameraMatrix = null)

Parameters

src IInputArray

The input (distorted) image

dst IOutputArray

The output (corrected) image

cameraMatrix IInputArray

The camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].

distortionCoeffs IInputArray

The vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2].

newCameraMatrix IInputArray

Camera matrix of the distorted image. By default it is the same as cameraMatrix, but you may additionally scale and shift the result by using some different matrix
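The distortion that Undistort compensates for follows the Brown-Conrady model on normalized coordinates, with radial coefficients k1, k2 and tangential coefficients p1, p2. A NumPy sketch of the forward (distorting) direction, illustrative only:

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Forward Brown-Conrady model on normalized image coordinates;
    Undistort inverts this mapping (numerically) for every pixel."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

# With all coefficients zero, the model is the identity.
assert distort(0.3, -0.2, 0.0, 0.0, 0.0, 0.0) == (0.3, -0.2)
# Barrel distortion (k1 < 0) pulls points toward the image center.
xd, yd = distort(0.3, 0.0, -0.2, 0.0, 0.0, 0.0)
assert abs(xd) < 0.3
```

The inverse has no closed form, which is why undistortion is implemented via iterative solving or precomputed remap tables.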

UndistortPoints(IInputArray, IOutputArray, IInputArray, IInputArray, IInputArray, IInputArray)

Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation.

public static void UndistortPoints(IInputArray src, IOutputArray dst, IInputArray cameraMatrix, IInputArray distCoeffs, IInputArray R = null, IInputArray P = null)

Parameters

src IInputArray

The observed point coordinates

dst IOutputArray

The ideal point coordinates, after undistortion and reverse perspective transformation.

cameraMatrix IInputArray

The camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1]

distCoeffs IInputArray

The vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5.

R IInputArray

The rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used.

P IInputArray

The new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used.

UpdateMotionHistory(IInputArray, IInputOutputArray, double, double)

Updates the motion history image as follows:

mhi(x,y) = timestamp, if silhouette(x,y) != 0;
mhi(x,y) = 0, if silhouette(x,y) = 0 and mhi(x,y) < timestamp - duration;
mhi(x,y) unchanged otherwise.

That is, MHI pixels where motion occurs are set to the current timestamp, while pixels where motion happened long ago are cleared.

public static void UpdateMotionHistory(IInputArray silhouette, IInputOutputArray mhi, double timestamp, double duration)

Parameters

silhouette IInputArray

Silhouette mask that has non-zero pixels where the motion occurs.

mhi IInputOutputArray

Motion history image, that is updated by the function (single-channel, 32-bit floating-point)

timestamp double

Current time in milliseconds or other units.

duration double

Maximal duration of motion track in the same units as timestamp.
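The piecewise update rule above translates directly into array operations. A NumPy sketch (illustrative of the semantics only; in C# you would call CvInvoke.UpdateMotionHistory):

```python
import numpy as np

def update_motion_history(silhouette, mhi, timestamp, duration):
    """mhi = timestamp where silhouette != 0; 0 where the stored time is
    older than timestamp - duration; unchanged everywhere else."""
    mhi = mhi.copy()
    mhi[silhouette != 0] = timestamp
    mhi[(silhouette == 0) & (mhi < timestamp - duration)] = 0.0
    return mhi

mhi = np.array([[5.0, 1.0], [9.0, 0.0]], dtype=np.float32)
sil = np.array([[0, 0], [0, 1]], dtype=np.uint8)
out = update_motion_history(sil, mhi, timestamp=10.0, duration=3.0)
# 5.0 and 1.0 are older than 10 - 3 = 7, so they are cleared;
# 9.0 is recent enough to be kept; the moving pixel becomes 10.
assert out.tolist() == [[0.0, 0.0], [9.0, 10.0]]
```

The real function updates mhi in place rather than returning a copy.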

VConcat(IInputArray, IInputArray, IOutputArray)

Vertically concatenate two images

public static void VConcat(IInputArray src1, IInputArray src2, IOutputArray dst)

Parameters

src1 IInputArray

The first image

src2 IInputArray

The second image

dst IOutputArray

The result image

VConcat(IInputArrayOfArrays, IOutputArray)

The function vertically concatenates two or more matrices

public static void VConcat(IInputArrayOfArrays srcs, IOutputArray dst)

Parameters

srcs IInputArrayOfArrays

Input array or vector of matrices. All of the matrices must have the same number of columns and the same depth

dst IOutputArray

Output array. It has the same number of columns and the same depth as the inputs, and its number of rows is the sum of the inputs' rows.

VConcat(Mat[], IOutputArray)

The function vertically concatenates two or more matrices

public static void VConcat(Mat[] srcs, IOutputArray dst)

Parameters

srcs Mat[]

Input array or vector of matrices. All of the matrices must have the same number of columns and the same depth

dst IOutputArray

Output array. It has the same number of columns and the same depth as the inputs, and its number of rows is the sum of the inputs' rows.
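For dense arrays, vertical concatenation behaves like NumPy's vstack: column counts and element types must match, and rows add up. A minimal sketch (illustrative, not the Emgu.CV API):

```python
import numpy as np

a = np.zeros((2, 4), dtype=np.uint8)
b = np.ones((3, 4), dtype=np.uint8)

# Same column count and dtype required; the result has 2 + 3 = 5 rows.
dst = np.vstack([a, b])
assert dst.shape == (5, 4) and dst.dtype == np.uint8
```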

WaitKey(int)

Waits for a key event infinitely (delay <= 0) or for "delay" milliseconds.

public static int WaitKey(int delay = 0)

Parameters

delay int

Delay in milliseconds.

Returns

int

The code of the pressed key, or -1 if no key was pressed before the specified timeout elapsed

WarpAffine(IInputArray, IOutputArray, IInputArray, Size, Inter, Warp, BorderType, MCvScalar)

Applies an affine transformation to an image.

public static void WarpAffine(IInputArray src, IOutputArray dst, IInputArray mapMatrix, Size dsize, Inter interMethod = Inter.Linear, Warp warpMethod = Warp.Default, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

mapMatrix IInputArray

2x3 transformation matrix

dsize Size

Size of the output image.

interMethod Inter

Interpolation method

warpMethod Warp

Warp method

borderMode BorderType

Pixel extrapolation method

borderValue MCvScalar

A value used to fill outliers
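The 2x3 mapMatrix maps source coordinates through an affine transform; applying it to a few points in homogeneous form shows where the image content lands. A NumPy sketch of the forward map (illustrative only; the function itself resamples pixels, typically via the inverse map):

```python
import numpy as np

# 2x3 affine matrix: here a pure translation by (5, -3).
M = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0]])

# Points in homogeneous form (x, y, 1); the map is p' = M @ (x, y, 1)^T.
corners = np.array([[0.0, 0.0, 1.0],
                    [10.0, 7.0, 1.0]])
mapped = corners @ M.T
assert mapped.tolist() == [[5.0, -3.0], [15.0, 4.0]]
```

Pixels whose inverse-mapped source position falls outside the image are filled according to borderMode and borderValue.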

WarpPerspective(IInputArray, IOutputArray, IInputArray, Size, Inter, Warp, BorderType, MCvScalar)

Applies a perspective transformation to an image

public static void WarpPerspective(IInputArray src, IOutputArray dst, IInputArray mapMatrix, Size dsize, Inter interpolationType = Inter.Linear, Warp warpType = Warp.Default, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)

Parameters

src IInputArray

Source image

dst IOutputArray

Destination image

mapMatrix IInputArray

3x3 transformation matrix

dsize Size

Size of the output image

interpolationType Inter

Interpolation method

warpType Warp

Warp method

borderMode BorderType

Pixel extrapolation method

borderValue MCvScalar

value used in case of a constant border

Watershed(IInputArray, IInputOutputArray)

Implements one of the variants of the watershed, non-parametric marker-based segmentation algorithm described in [Meyer92]. Before passing the image to the function, the user has to roughly outline the desired regions in the image markers with positive (>0) indices, i.e. every region is represented as one or more connected components with pixel values 1, 2, 3, etc. Those components will be the "seeds" of the future image regions. All other pixels in markers, whose relation to the outlined regions is unknown and should be determined by the algorithm, should be set to 0. On output, each pixel in markers is set to one of the values of the "seed" components, or to -1 at boundaries between regions.

public static void Watershed(IInputArray image, IInputOutputArray markers)

Parameters

image IInputArray

The input 8-bit 3-channel image

markers IInputOutputArray

The input/output Int32 depth single-channel image (map) of markers.

Remarks

Note that it is not necessary for every two neighboring connected components to be separated by a watershed boundary (-1 pixels), for example when such tangent components exist in the initial marker image.

WriteCloud(string, IInputArray, IInputArray, IInputArray)

Write point cloud to file

public static void WriteCloud(string file, IInputArray cloud, IInputArray colors = null, IInputArray normals = null)

Parameters

file string

The point cloud file name

cloud IInputArray

The point cloud

colors IInputArray

The color

normals IInputArray

The normals

cvCheckArr(nint, CheckType, double, double)

Checks that every array element is neither NaN nor Infinity. If CV_CHECK_RANGE is set, it also checks that every element is greater than or equal to minVal and less than maxVal.

public static extern int cvCheckArr(nint arr, CheckType flags, double minVal, double maxVal)

Parameters

arr nint

The array to check.

flags CheckType

The operation flags, CHECK_NAN_INFINITY or a combination with CHECK_RANGE - if set, the function checks that every value of the array is within the [minVal,maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity. CHECK_QUIET - if set, the function does not raise an error if an element is invalid or out of range

minVal double

The inclusive lower boundary of valid values range. It is used only if CHECK_RANGE is set.

maxVal double

The exclusive upper boundary of valid values range. It is used only if CHECK_RANGE is set.

Returns

int

Returns nonzero if the check succeeded, i.e. all elements are valid and within the range, and zero otherwise. In the latter case, if the CV_CHECK_QUIET flag is not set, the function raises a runtime error.

cvClearND(nint, int[])

Clears (sets to zero) the particular element of a dense array or deletes the element of a sparse array. If the element does not exist, the function does nothing

public static extern void cvClearND(nint arr, int[] idx)

Parameters

arr nint

Input array

idx int[]

Array of the element indices

cvConvertScale(nint, nint, double, double)

This function has several different purposes and thus has several synonyms. It copies one array to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I)=src(I)*scale + (shift,shift,...) All the channels of multi-channel arrays are processed independently. The type conversion is done with rounding and saturation, that is, if a result of scaling + conversion cannot be represented exactly by a value of the destination array element type, it is set to the nearest representable value on the real axis. In the case of scale=1, shift=0 no prescaling is done. This is a specially optimized case and it has the appropriate cvConvert synonym. If the source and destination arrays have the same type, this is also a special case that can be used to scale and shift a matrix or an image and that fits the cvScale synonym.

public static extern void cvConvertScale(nint src, nint dst, double scale, double shift)

Parameters

src nint

Source array

dst nint

Destination array

scale double

Scale factor

shift double

Value added to the scaled source array elements
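The scale-then-round-then-saturate pipeline dst(I) = src(I)*scale + shift can be sketched in NumPy for an 8-bit destination (illustrative of the semantics only, not the native cvConvertScale call):

```python
import numpy as np

src = np.array([10.0, 100.0, 200.0])
scale, shift = 2.0, -50.0

# Scale and shift first, then round, then saturate to the destination
# element type (here 8-bit unsigned, so values clamp to [0, 255]).
scaled = src * scale + shift
dst = np.clip(np.rint(scaled), 0, 255).astype(np.uint8)
assert dst.tolist() == [0, 150, 255]
```

Note how -30 saturates to 0 and 350 saturates to 255 rather than wrapping, which is the behavior the summary describes.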

cvCopy(nint, nint, nint)

Copies selected elements from input array to output array: dst(I)=src(I) if mask(I)!=0. If any of the passed arrays is of IplImage type, then its ROI and COI fields are used. Both arrays must have the same type, the same number of dimensions and the same size. The function can also copy sparse arrays (mask is not supported in this case).

public static extern void cvCopy(nint src, nint des, nint mask)

Parameters

src nint

The source array

des nint

The destination array

mask nint

Operation mask, 8-bit single channel array; specifies elements of destination array to be changed

cvCreateImage(Size, IplDepth, int)

Creates the header and allocates data.

public static nint cvCreateImage(Size size, IplDepth depth, int channels)

Parameters

size Size

Image width and height.

depth IplDepth

Bit depth of image elements

channels int

Number of channels per element(pixel). Can be 1, 2, 3 or 4. The channels are interleaved, for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ...

Returns

nint

A pointer to IplImage

cvCreateImageHeader(Size, IplDepth, int)

Allocates, initializes, and returns the structure IplImage.

public static nint cvCreateImageHeader(Size size, IplDepth depth, int channels)

Parameters

size Size

Image width and height.

depth IplDepth

Bit depth of image elements

channels int

Number of channels per element (pixel). Can be 1, 2, 3 or 4. The channels are interleaved; for example, the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ...

Returns

nint

The structure IplImage

cvCreateMat(int, int, DepthType)

Allocates header for the new matrix and underlying data, and returns a pointer to the created matrix. Matrices are stored row by row. All the rows are aligned by 4 bytes.

public static extern nint cvCreateMat(int rows, int cols, DepthType type)

Parameters

rows int

Number of rows in the matrix.

cols int

Number of columns in the matrix.

type DepthType

Type of the matrix elements.

Returns

nint

A pointer to the created matrix
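A minimal sketch pairing cvCreateMat with cvReleaseMat and the scalar element accessors documented in this class (assuming the Emgu.CV and Emgu.CV.CvEnum namespaces and deployed native binaries):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

// Allocate a 3x3 matrix of 64-bit floating-point elements.
nint mat = CvInvoke.cvCreateMat(3, 3, DepthType.Cv64F);

// Write and read back a single element via the scalar accessors.
CvInvoke.cvSetReal2D(mat, 1, 2, 42.0);
double v = CvInvoke.cvGetReal2D(mat, 1, 2); // expected 42.0

// Decrement the reference counter and release the header.
CvInvoke.cvReleaseMat(ref mat);
```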

cvCreateSparseMat(int, nint, DepthType)

The function allocates a multi-dimensional sparse array. Initially the array contains no elements, so Get or GetReal returns zero for every index.

public static extern nint cvCreateSparseMat(int dims, nint sizes, DepthType type)

Parameters

dims int

Number of array dimensions

sizes nint

Array of dimension sizes

type DepthType

Type of array elements

Returns

nint

Pointer to the array header

cvGet1D(nint, int)

Return the particular array element

public static MCvScalar cvGet1D(nint arr, int idx0)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

Returns

MCvScalar

the particular array element

cvGet2D(nint, int, int)

Return the particular array element

public static MCvScalar cvGet2D(nint arr, int idx0, int idx1)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

Returns

MCvScalar

the particular array element
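As a hedged sketch of the element accessors (assuming the Emgu.CV, Emgu.CV.CvEnum, Emgu.CV.Structure and System.Drawing namespaces and deployed native binaries; cvSet2D is documented elsewhere in this class). Note that idx0 is the row index and idx1 the column index, and that for multi-channel arrays the MCvScalar carries one value per channel:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Allocate a single-channel 8-bit image.
nint img = CvInvoke.cvCreateImage(new Size(100, 100), IplDepth.IplDepth_8U, 1);

// Write the value 200 at row 10, column 20.
CvInvoke.cvSet2D(img, 10, 20, new MCvScalar(200));

// Read it back as an MCvScalar; the value is in V0.
MCvScalar px = CvInvoke.cvGet2D(img, 10, 20);

CvInvoke.cvReleaseImage(ref img);
```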

cvGet3D(nint, int, int, int)

Return the particular array element

public static MCvScalar cvGet3D(nint arr, int idx0, int idx1, int idx2)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

idx2 int

The third zero-based component of the element index

Returns

MCvScalar

the particular array element

cvGetCol(nint, nint, int)

Return the header corresponding to a specified column of the input array

public static nint cvGetCol(nint arr, nint submat, int col)

Parameters

arr nint

Input array

submat nint

Pointer to the preallocated memory for the resulting sub-array header

col int

Zero-based index of the selected column

Returns

nint

The header corresponding to the specified column of the input array

cvGetCols(nint, nint, int, int)

Return the header corresponding to a specified column span of the input array

public static extern nint cvGetCols(nint arr, nint submat, int startCol, int endCol)

Parameters

arr nint

Input array

submat nint

Pointer to the preallocated memory for the resulting sub-array header

startCol int

Zero-based index of the starting column (inclusive) of the span

endCol int

Zero-based index of the ending column (exclusive) of the span

Returns

nint

The header corresponding to the specified column span of the input array

cvGetDiag(nint, nint, int)

Returns the header corresponding to a specified diagonal of the input array

public static extern nint cvGetDiag(nint arr, nint submat, int diag)

Parameters

arr nint

Input array

submat nint

Pointer to the resulting sub-array header

diag int

Array diagonal. Zero corresponds to the main diagonal, -1 corresponds to the diagonal above the main one, 1 corresponds to the diagonal below the main one, and so forth

Returns

nint

Pointer to the resulting sub-array header

cvGetImage(nint, nint)

Returns the image header for the input array, which can be a matrix (CvMat*) or an image (IplImage*).

public static extern nint cvGetImage(nint arr, nint imageHeader)

Parameters

arr nint

Input array.

imageHeader nint

Pointer to IplImage structure used as a temporary buffer.

Returns

nint

Returns image header for the input array

cvGetImageCOI(nint)

Returns channel of interest of the image (it returns 0 if all the channels are selected).

public static extern int cvGetImageCOI(nint image)

Parameters

image nint

Image header.

Returns

int

channel of interest of the image (it returns 0 if all the channels are selected)

cvGetImageROI(nint)

Returns the image ROI. If there is no ROI set, the whole image rectangle is returned.

public static Rectangle cvGetImageROI(nint image)

Parameters

image nint

Image header.

Returns

Rectangle

The image ROI rectangle. If there is no ROI set, the whole image rectangle is returned.

cvGetMat(nint, nint, out int, int)

Returns the matrix header for the input array, which can be a matrix (CvMat*), an image (IplImage*) or a multi-dimensional dense array (CvMatND*; the latter is allowed only if allowNd != 0). In the case of a matrix the function simply returns the input pointer. In the case of IplImage* or CvMatND* it initializes the header structure with the parameters of the current image ROI and returns a pointer to this temporary structure. Because COI is not supported by CvMat, it is returned separately.

public static extern nint cvGetMat(nint arr, nint header, out int coi, int allowNd)

Parameters

arr nint

Input array

header nint

Pointer to CvMat structure used as a temporary buffer

coi int

Optional output parameter for storing COI

allowNd int

If non-zero, the function accepts multi-dimensional dense arrays (CvMatND*) and returns 2D (if CvMatND has two dimensions) or 1D matrix (when CvMatND has 1 dimension or more than 2 dimensions). The array must be continuous

Returns

nint

Returns matrix header for the input array

cvGetRawData(nint, out nint, out int, out Size)

Fills output variables with low-level information about the array data. All output parameters are optional, so some of the pointers may be set to NULL. If the array is IplImage with ROI set, parameters of ROI are returned.

public static extern void cvGetRawData(nint arr, out nint data, out int step, out Size roiSize)

Parameters

arr nint

Array header

data nint

Output pointer to the whole image origin or ROI origin if ROI is set

step int

Output full row length in bytes

roiSize Size

Output ROI size

cvGetReal1D(nint, int)

Return the particular element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multi-channel arrays, though they are a bit slower.

public static extern double cvGetReal1D(nint arr, int idx0)

Parameters

arr nint

Input array. Must have a single channel

idx0 int

The first zero-based component of the element index

Returns

double

the particular element of single-channel array

cvGetReal2D(nint, int, int)

Return the particular element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multi-channel arrays, though they are a bit slower.

public static extern double cvGetReal2D(nint arr, int idx0, int idx1)

Parameters

arr nint

Input array. Must have a single channel

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

Returns

double

the particular element of single-channel array

cvGetReal3D(nint, int, int, int)

Return the particular element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multi-channel arrays, though they are a bit slower.

public static extern double cvGetReal3D(nint arr, int idx0, int idx1, int idx2)

Parameters

arr nint

Input array. Must have a single channel

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

idx2 int

The third zero-based component of the element index

Returns

double

the particular element of single-channel array

cvGetRow(nint, nint, int)

Return the header corresponding to a specified row of the input array

public static nint cvGetRow(nint arr, nint submat, int row)

Parameters

arr nint

Input array

submat nint

Pointer to the preallocated memory for the resulting sub-array header

row int

Zero-based index of the selected row

Returns

nint

The header corresponding to the specified row of the input array

cvGetRows(nint, nint, int, int, int)

Return the header corresponding to a specified row span of the input array

public static extern nint cvGetRows(nint arr, nint submat, int startRow, int endRow, int deltaRow)

Parameters

arr nint

Input array

submat nint

Pointer to the preallocated memory for the resulting sub-array header

startRow int

Zero-based index of the starting row (inclusive) of the span

endRow int

Zero-based index of the ending row (exclusive) of the span

deltaRow int

Index step in the row span. That is, the function extracts every delta_row-th row from start_row and up to (but not including) end_row

Returns

nint

The header corresponding to the specified row span of the input array

cvGetSize(nint)

Returns number of rows (CvSize::height) and number of columns (CvSize::width) of the input matrix or image. In case of image the size of ROI is returned.

public static Size cvGetSize(nint arr)

Parameters

arr nint

array header

Returns

Size

number of rows (CvSize::height) and number of columns (CvSize::width) of the input matrix or image. In case of image the size of ROI is returned.

cvGetSubRect(nint, nint, Rectangle)

Returns the header corresponding to a specified rectangle of the input array. In other words, it allows the user to treat a rectangular part of the input array as a stand-alone array. The ROI is taken into account by the function, so the sub-array of the ROI is actually extracted.

public static nint cvGetSubRect(nint arr, nint submat, Rectangle rect)

Parameters

arr nint

Input array

submat nint

Pointer to the resultant sub-array header.

rect Rectangle

Zero-based coordinates of the rectangle of interest.

Returns

nint

the resultant sub-array header

cvInitImageHeader(nint, Size, IplDepth, int, int, int)

Initializes the image header structure, pointer to which is passed by the user, and returns the pointer.

public static nint cvInitImageHeader(nint image, Size size, IplDepth depth, int channels, int origin, int align)

Parameters

image nint

Image header to initialize.

size Size

Image width and height.

depth IplDepth

Image depth

channels int

Number of channels

origin int

IPL_ORIGIN_TL or IPL_ORIGIN_BL.

align int

Alignment for image rows, typically 4 or 8 bytes.

Returns

nint

Pointer to the image header

cvInitMatHeader(nint, int, int, int, nint, int)

Initializes already allocated CvMat structure. It can be used to process raw data with OpenCV matrix functions.

public static extern nint cvInitMatHeader(nint mat, int rows, int cols, int type, nint data, int step)

Parameters

mat nint

Pointer to the matrix header to be initialized.

rows int

Number of rows in the matrix.

cols int

Number of columns in the matrix.

type int

Type of the matrix elements.

data nint

Optional data pointer assigned to the matrix header

step int

Full row width in bytes of the data assigned. By default, the minimal possible step is used, i.e., no gaps are assumed between subsequent rows of the matrix.

Returns

nint

Pointer to the CvMat

cvInitMatNDHeader(nint, int, int[], DepthType, nint)

Initializes CvMatND structure allocated by the user

public static extern nint cvInitMatNDHeader(nint mat, int dims, int[] sizes, DepthType type, nint data)

Parameters

mat nint

Pointer to the array header to be initialized

dims int

Number of array dimensions

sizes int[]

Array of dimension sizes

type DepthType

Type of array elements

data nint

Optional data pointer assigned to the matrix header

Returns

nint

Pointer to the array header

cvMaxRect(Rectangle, Rectangle)

Finds the minimum-area rectangle that contains both input rectangles

public static Rectangle cvMaxRect(Rectangle rect1, Rectangle rect2)

Parameters

rect1 Rectangle

First rectangle

rect2 Rectangle

Second rectangle

Returns

Rectangle

The minimum-area rectangle that contains both input rectangles
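A minimal sketch (assuming the Emgu.CV and System.Drawing namespaces and deployed native binaries); the two inputs span (0,0)-(10,10) and (5,5)-(15,15), so their bounding rectangle covers (0,0)-(15,15):

```csharp
using System.Drawing;
using Emgu.CV;

Rectangle a = new Rectangle(0, 0, 10, 10);
Rectangle b = new Rectangle(5, 5, 10, 10);

// The smallest rectangle covering both inputs: origin (0, 0), size 15x15.
Rectangle bound = CvInvoke.cvMaxRect(a, b);
```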

cvRange(nint, double, double)

Initializes the matrix as follows: arr(i,j) = (end - start) * (i*cols(arr) + j) / (cols(arr)*rows(arr))

public static extern void cvRange(nint mat, double start, double end)

Parameters

mat nint

The matrix to initialize. It should be single-channel, 32-bit integer or floating-point

start double

The lower inclusive boundary of the range

end double

The upper exclusive boundary of the range
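A hedged sketch (assuming the Emgu.CV and Emgu.CV.CvEnum namespaces and deployed native binaries). For a 1x5 matrix with start=0 and end=5, the formula above reduces to arr(0,j) = j:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

// Allocate a 1x5 single-channel 32-bit floating-point matrix.
nint mat = CvInvoke.cvCreateMat(1, 5, DepthType.Cv32F);

// Fill it with values evenly spaced over [0, 5): 0, 1, 2, 3, 4.
CvInvoke.cvRange(mat, 0.0, 5.0);

// Read one element back; cvGetReal2D works on single-channel arrays.
double v = CvInvoke.cvGetReal2D(mat, 0, 3); // expected 3.0

CvInvoke.cvReleaseMat(ref mat);
```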

cvReleaseImage(ref nint)

Releases the header and the image data.

public static extern void cvReleaseImage(ref nint image)

Parameters

image nint

Double pointer to the header of the deallocated image

cvReleaseImageHeader(ref nint)

Releases the header.

public static extern void cvReleaseImageHeader(ref nint image)

Parameters

image nint

Pointer to the deallocated header.

cvReleaseMat(ref nint)

Decrements the matrix data reference counter and releases matrix header

public static extern void cvReleaseMat(ref nint mat)

Parameters

mat nint

Double pointer to the matrix.

cvReleaseSparseMat(ref nint)

The function releases the sparse array and clears the array pointer upon exit.

public static extern void cvReleaseSparseMat(ref nint mat)

Parameters

mat nint

Reference of the pointer to the array

cvResetImageROI(nint)

Releases image ROI. After that the whole image is considered selected.

public static extern void cvResetImageROI(nint image)

Parameters

image nint

Image header

cvReshape(nint, nint, int, int)

Initializes the CvMat header so that it points to the same data as the original array but has a different shape: a different number of channels, a different number of rows, or both

public static extern nint cvReshape(nint arr, nint header, int newCn, int newRows)

Parameters

arr nint

Input array

header nint

Output header to be filled

newCn int

New number of channels. new_cn = 0 means that number of channels remains unchanged

newRows int

New number of rows. new_rows = 0 means that the number of rows remains unchanged unless it needs to be changed according to the new_cn value.

Returns

nint

The CvMat header

cvSet2D(nint, int, int, MCvScalar)

Assign the new value to the particular element of array

public static void cvSet2D(nint arr, int idx0, int idx1, MCvScalar value)

Parameters

arr nint

Input array.

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

value MCvScalar

The assigned value

cvSetData(nint, nint, int)

Assigns user data to the array header.

public static extern void cvSetData(nint arr, nint data, int step)

Parameters

arr nint

Array header.

data nint

User data.

step int

Full row length in bytes.

cvSetImageCOI(nint, int)

Sets the channel of interest to a given value. Value 0 means that all channels are selected, 1 means that the first channel is selected etc. If ROI is NULL and coi != 0, ROI is allocated.

public static extern void cvSetImageCOI(nint image, int coi)

Parameters

image nint

Image header

coi int

Channel of interest starting from 1. If 0, the COI is unset.

cvSetImageROI(nint, Rectangle)

Sets the image ROI to a given rectangle. If ROI is NULL and the value of the parameter rect is not equal to the whole image, ROI is allocated.

public static void cvSetImageROI(nint image, Rectangle rect)

Parameters

image nint

Image header.

rect Rectangle

ROI rectangle.
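A minimal sketch of the ROI workflow combining cvSetImageROI, cvGetSize and cvResetImageROI (assuming the Emgu.CV, Emgu.CV.CvEnum and System.Drawing namespaces and deployed native binaries):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

nint img = CvInvoke.cvCreateImage(new Size(640, 480), IplDepth.IplDepth_8U, 1);

// Restrict subsequent operations to a 100x50 region at (10, 20).
CvInvoke.cvSetImageROI(img, new Rectangle(10, 20, 100, 50));

// With a ROI set, cvGetSize reports the ROI size, not the full image size.
Size s = CvInvoke.cvGetSize(img); // expected 100x50

// Remove the ROI; the whole image is selected again.
CvInvoke.cvResetImageROI(img);

CvInvoke.cvReleaseImage(ref img);
```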

cvSetReal1D(nint, int, double)

Assign the new value to the particular element of single-channel array

public static extern void cvSetReal1D(nint arr, int idx0, double value)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

value double

The assigned value

cvSetReal2D(nint, int, int, double)

Assign the new value to the particular element of single-channel array

public static extern void cvSetReal2D(nint arr, int idx0, int idx1, double value)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

value double

The assigned value

cvSetReal3D(nint, int, int, int, double)

Assign the new value to the particular element of single-channel array

public static extern void cvSetReal3D(nint arr, int idx0, int idx1, int idx2, double value)

Parameters

arr nint

Input array

idx0 int

The first zero-based component of the element index

idx1 int

The second zero-based component of the element index

idx2 int

The third zero-based component of the element index

value double

The assigned value

cvSetRealND(nint, int[], double)

Assign the new value to the particular element of single-channel array

public static extern void cvSetRealND(nint arr, int[] idx, double value)

Parameters

arr nint

Input array

idx int[]

Array of the element indices

value double

The assigned value