Class CvInvoke
Class that provides access to native OpenCV functions
public static class CvInvoke
- Inheritance: object → CvInvoke
Fields
BoolMarshalType
Represents a bool value in C++
public const UnmanagedType BoolMarshalType = U1
Field Value
BoolToIntMarshalType
Represents an int value in C++
public const UnmanagedType BoolToIntMarshalType = Bool
Field Value
CvCallingConvention
OpenCV's calling convention
public const CallingConvention CvCallingConvention = Cdecl
Field Value
CvErrorHandlerIgnoreError
An error handler which will ignore any error and continue
public static readonly CvInvoke.CvErrorCallback CvErrorHandlerIgnoreError
Field Value
CvErrorHandlerThrowException
The default Exception callback to handle Error thrown by OpenCV
public static readonly CvInvoke.CvErrorCallback CvErrorHandlerThrowException
Field Value
ExternCudaLibrary
The file name of the cvextern library
public const string ExternCudaLibrary = "cvextern"
Field Value
ExternLibrary
The file name of the cvextern library
public const string ExternLibrary = "cvextern"
Field Value
MorphologyDefaultBorderValue
The default morphology border value.
public static MCvScalar MorphologyDefaultBorderValue
Field Value
OpenCVModuleList
The list of OpenCV modules
public static List<string> OpenCVModuleList
Field Value
OpencvFFMpegLibrary
The file name of the opencv_ffmpeg library
public const string OpencvFFMpegLibrary = ""
Field Value
StringMarshalType
String marshaling type
public const UnmanagedType StringMarshalType = LPStr
Field Value
Properties
AvailableParallelBackends
Get a list of the available parallel backends.
public static string[] AvailableParallelBackends { get; }
Property Value
- string[]
Backends
Returns a list of all built-in backends
public static Backend[] Backends { get; }
Property Value
- Backend[]
BuildInformation
Returns full configuration-time cmake output. The returned value is raw cmake output including version control system revision, compiler version, compiler flags, enabled modules, third party libraries, etc. Output format depends on the target architecture.
public static string BuildInformation { get; }
Property Value
CameraBackends
Returns a list of available backends that work via cv::VideoCapture(int index)
public static Backend[] CameraBackends { get; }
Property Value
- Backend[]
ConfigDict
Get the dictionary that holds the OpenCV build flags. The key is a string and the value is of type double. If the entry is a flag, 0 means false and 1 means true
public static Dictionary<string, double> ConfigDict { get; }
Property Value
HaveOpenCL
Check if we have OpenCL
public static bool HaveOpenCL { get; }
Property Value
HaveOpenCLCompatibleGpuDevice
Gets a value indicating whether this machine has an OpenCL-compatible GPU device.
public static bool HaveOpenCLCompatibleGpuDevice { get; }
Property Value
- bool
true if an OpenCL-compatible GPU device is available; otherwise, false.
HaveOpenVX
Check if use of OpenVX is possible.
public static bool HaveOpenVX { get; }
Property Value
- bool
True if use of OpenVX is possible.
LogLevel
Get or Set the log level.
public static LogLevel LogLevel { get; set; }
Property Value
NumThreads
Get or set the number of threads that are used by parallelized OpenCV functions
public static int NumThreads { get; set; }
Property Value
Remarks
When the argument is zero or negative, and at the beginning of the program, the number of threads is set to the number of processors in the system, as returned by the function omp_get_num_procs() from OpenMP runtime.
NumberOfCPUs
Returns the number of logical CPUs available for the process.
public static int NumberOfCPUs { get; }
Property Value
StreamBackends
Returns a list of available backends that work via cv::VideoCapture(filename)
public static Backend[] StreamBackends { get; }
Property Value
- Backend[]
ThreadNum
Returns the index, from 0 to cvGetNumThreads()-1, of the thread that called the function. It is a wrapper for the function omp_get_thread_num() from OpenMP runtime. The retrieved index may be used to access local-thread data inside the parallelized code fragments.
public static int ThreadNum { get; }
Property Value
UseOpenCL
Get or set if OpenCL should be used
public static bool UseOpenCL { get; set; }
Property Value
UseOpenVX
Enable/disable use of OpenVX
public static bool UseOpenVX { get; set; }
Property Value
UseOptimized
Enables or disables the optimized code.
public static bool UseOptimized { get; set; }
Property Value
- bool
true if optimized code is enabled; otherwise, false.
Remarks
The function can be used to dynamically turn on and off optimized code (code that uses SSE2, AVX, and other instructions on the platforms that support it). It sets a global flag that is further checked by OpenCV functions. Since the flag is not checked in the inner OpenCV loops, it is only safe to call the function on the very top level in your application where you can be sure that no other OpenCV function is currently executed.
WriterBackends
Returns a list of available backends that work via cv::VideoWriter()
public static Backend[] WriterBackends { get; }
Property Value
- Backend[]
Methods
AbsDiff(IInputArray, IInputArray, IOutputArray)
Calculates absolute difference between two arrays. dst(I)c = abs(src1(I)c - src2(I)c). All the arrays must have the same data type and the same size (or ROI size)
public static void AbsDiff(IInputArray src1, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
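The element-wise rule above can be sketched in plain Python (an illustrative sketch only, not Emgu CV code), with nested lists standing in for single-channel images of equal size:

```python
# Illustrative sketch of AbsDiff: dst(I) = |src1(I) - src2(I)|.
# Nested lists stand in for single-channel images of the same size.
def abs_diff(src1, src2):
    return [[abs(a - b) for a, b in zip(row1, row2)]
            for row1, row2 in zip(src1, src2)]

print(abs_diff([[10, 200], [30, 40]],
               [[12, 190], [70, 40]]))  # [[2, 10], [40, 0]]
```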
Accumulate(IInputArray, IInputOutputArray, IInputArray)
Adds the whole image or its selected region to accumulator sum
public static void Accumulate(IInputArray src, IInputOutputArray dst, IInputArray mask = null)
Parameters
src
IInputArrayInput image, 1- or 3-channel, 8-bit or 32-bit floating point. (each channel of multi-channel image is processed independently).
dst
IInputOutputArrayAccumulator of the same number of channels as input image, 32-bit or 64-bit floating-point.
mask
IInputArrayOptional operation mask
AccumulateProduct(IInputArray, IInputArray, IInputOutputArray, IInputArray)
Adds the product of 2 images or their selected regions to the accumulator acc
public static void AccumulateProduct(IInputArray src1, IInputArray src2, IInputOutputArray dst, IInputArray mask = null)
Parameters
src1
IInputArrayFirst input image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)
src2
IInputArraySecond input image, the same format as the first one
dst
IInputOutputArrayAccumulator of the same number of channels as input images, 32-bit or 64-bit floating-point
mask
IInputArrayOptional operation mask
AccumulateSquare(IInputArray, IInputOutputArray, IInputArray)
Adds the input src, or its selected region, raised to the power of 2, to the accumulator sqsum
public static void AccumulateSquare(IInputArray src, IInputOutputArray dst, IInputArray mask = null)
Parameters
src
IInputArrayInput image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently)
dst
IInputOutputArrayAccumulator of the same number of channels as input image, 32-bit or 64-bit floating-point
mask
IInputArrayOptional operation mask
AccumulateWeighted(IInputArray, IInputOutputArray, double, IInputArray)
Calculates the weighted sum of the input src and the accumulator acc so that acc becomes a running average of the frame sequence: acc(x,y) = (1 - alpha) * acc(x,y) + alpha * image(x,y) if mask(x,y) != 0, where alpha regulates the update speed (how fast the accumulator forgets about previous frames).
public static void AccumulateWeighted(IInputArray src, IInputOutputArray dst, double alpha, IInputArray mask = null)
Parameters
src
IInputArrayInput image, 1- or 3-channel, 8-bit or 32-bit floating point (each channel of multi-channel image is processed independently).
dst
IInputOutputArrayAccumulator of the same number of channels as input image, 32-bit or 64-bit floating-point.
alpha
doubleWeight of input image
mask
IInputArrayOptional operation mask
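The running-average update can be sketched in plain Python (an illustrative sketch, not Emgu CV code), applying acc = (1 - alpha) * acc + alpha * image wherever the mask is non-zero:

```python
# Illustrative sketch of AccumulateWeighted:
# acc(x,y) = (1 - alpha) * acc(x,y) + alpha * image(x,y) if mask(x,y) != 0.
def accumulate_weighted(image, acc, alpha, mask=None):
    for y in range(len(acc)):
        for x in range(len(acc[0])):
            if mask is None or mask[y][x] != 0:
                acc[y][x] = (1 - alpha) * acc[y][x] + alpha * image[y][x]
    return acc

acc = [[0.0, 0.0]]
accumulate_weighted([[100.0, 50.0]], acc, 0.5)
print(acc)  # [[50.0, 25.0]]
```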
AdaptiveThreshold(IInputArray, IOutputArray, double, AdaptiveThresholdType, ThresholdType, int, double)
Transforms a grayscale image to a binary image. The threshold is calculated individually for each pixel. For the method CV_ADAPTIVE_THRESH_MEAN_C it is the mean of a blockSize x blockSize pixel neighborhood, minus param1. For the method CV_ADAPTIVE_THRESH_GAUSSIAN_C it is a weighted (Gaussian) sum of a blockSize x blockSize pixel neighborhood, minus param1.
public static void AdaptiveThreshold(IInputArray src, IOutputArray dst, double maxValue, AdaptiveThresholdType adaptiveType, ThresholdType thresholdType, int blockSize, double param1)
Parameters
src
IInputArraySource array (single-channel, 8-bit or 32-bit floating point).
dst
IOutputArrayDestination array; must be either the same type as src or 8-bit.
maxValue
doubleMaximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types
adaptiveType
AdaptiveThresholdTypeAdaptive_method
thresholdType
ThresholdTypeThresholding type; must be either CV_THRESH_BINARY or CV_THRESH_BINARY_INV
blockSize
intThe size of a pixel neighborhood that is used to calculate a threshold value for the pixel: 3, 5, 7, ...
param1
doubleConstant subtracted from mean or weighted mean. It may be negative.
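The CV_ADAPTIVE_THRESH_MEAN_C rule with CV_THRESH_BINARY can be sketched in plain Python (an illustrative sketch, not Emgu CV code; edge pixels here replicate the border, one of the extrapolation modes OpenCV supports):

```python
# Illustrative sketch of adaptive mean thresholding with CV_THRESH_BINARY:
# T(x,y) = mean of the blockSize x blockSize neighborhood minus param1;
# dst(x,y) = maxValue if src(x,y) > T(x,y), else 0.
def adaptive_threshold_mean(src, max_value, block_size, param1):
    h, w = len(src), len(src[0])
    r = block_size // 2
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighborhood, replicating pixels at the border.
            vals = [src[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            t = sum(vals) / len(vals) - param1
            dst[y][x] = max_value if src[y][x] > t else 0
    return dst

src = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
print(adaptive_threshold_mean(src, 255, 3, 0))
# [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
```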
Add(IInputArray, IInputArray, IOutputArray, IInputArray, DepthType)
Adds one array to another: dst(I)=src1(I)+src2(I) if mask(I)!=0. All the arrays must have the same type, except the mask, and the same size (or ROI size)
public static void Add(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null, DepthType dtype = DepthType.Default)
Parameters
src1
IInputArrayThe first source array.
src2
IInputArrayThe second source array.
dst
IOutputArrayThe destination array.
mask
IInputArrayOperation mask, 8-bit single channel array; specifies elements of destination array to be changed.
dtype
DepthTypeOptional depth type of the output array
AddWeighted(IInputArray, double, IInputArray, double, double, IOutputArray, DepthType)
Calculates the weighted sum of two arrays as follows: dst(I)=src1(I)*alpha+src2(I)*beta+gamma. All the arrays must have the same type and the same size (or ROI size)
public static void AddWeighted(IInputArray src1, double alpha, IInputArray src2, double beta, double gamma, IOutputArray dst, DepthType dtype = DepthType.Default)
Parameters
src1
IInputArrayThe first source array.
alpha
doubleWeight of the first array elements.
src2
IInputArrayThe second source array.
beta
doubleWeight of the second array elements.
gamma
doubleScalar, added to each sum.
dst
IOutputArrayThe destination array.
dtype
DepthTypeOptional depth of the output array; can be left as Default when both input arrays have the same depth
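The per-element formula can be sketched in plain Python (an illustrative sketch, not Emgu CV code):

```python
# Illustrative sketch of AddWeighted:
# dst(I) = src1(I) * alpha + src2(I) * beta + gamma.
def add_weighted(src1, alpha, src2, beta, gamma):
    return [[a * alpha + b * beta + gamma for a, b in zip(row1, row2)]
            for row1, row2 in zip(src1, src2)]

print(add_weighted([[10.0, 20.0]], 0.5,
                   [[30.0, 40.0]], 0.5, 1.0))  # [[21.0, 31.0]]
```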
ApplyColorMap(IInputArray, IOutputArray, ColorMapType)
Applies a GNU Octave/MATLAB equivalent colormap on a given image.
public static void ApplyColorMap(IInputArray src, IOutputArray dst, ColorMapType colorMapType)
Parameters
src
IInputArrayThe source image, grayscale or colored of type CV_8UC1 or CV_8UC3
dst
IOutputArrayThe result is the colormapped source image
colorMapType
ColorMapTypeThe type of color map
ApplyColorMap(IInputArray, IOutputArray, IInputArray)
Applies a user colormap on a given image.
public static void ApplyColorMap(IInputArray src, IOutputArray dst, IInputArray userColorMap)
Parameters
src
IInputArrayThe source image, grayscale or colored of type CV_8UC1 or CV_8UC3.
dst
IOutputArrayThe result is the colormapped source image.
userColorMap
IInputArrayThe colormap to apply of type CV_8UC1 or CV_8UC3 and size 256
ApproxPolyDP(IInputArray, IOutputArray, double, bool)
Approximates a polygonal curve(s) with the specified precision.
public static void ApproxPolyDP(IInputArray curve, IOutputArray approxCurve, double epsilon, bool closed)
Parameters
curve
IInputArrayInput vector of a 2D point
approxCurve
IOutputArrayResult of the approximation. The type should match the type of the input curve.
epsilon
doubleParameter specifying the approximation accuracy. This is the maximum distance between the original curve and its approximation.
closed
boolIf true, the approximated curve is closed (its first and last vertices are connected). Otherwise, it is not closed.
ArcLength(IInputArray, bool)
Calculates a contour perimeter or a curve length
public static double ArcLength(IInputArray curve, bool isClosed)
Parameters
curve
IInputArraySequence or array of the curve points
isClosed
boolIndicates whether the curve is closed or not.
Returns
- double
Contour perimeter or a curve length
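The computation can be sketched in plain Python (an illustrative sketch, not Emgu CV code): sum the Euclidean distances between consecutive points, and add the closing segment for a closed curve.

```python
# Illustrative sketch of ArcLength: the sum of distances between
# consecutive points; a closed curve also adds the last-to-first segment.
import math

def arc_length(points, is_closed):
    total = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1))
    if is_closed and len(points) > 1:
        total += math.dist(points[-1], points[0])
    return total

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(arc_length(unit_square, True))  # 4.0
```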
ArrowedLine(IInputOutputArray, Point, Point, MCvScalar, int, LineType, int, double)
Draws an arrow segment pointing from the first point to the second one.
public static void ArrowedLine(IInputOutputArray img, Point pt1, Point pt2, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0, double tipLength = 0.1)
Parameters
img
IInputOutputArrayImage
pt1
PointThe point the arrow starts from.
pt2
PointThe point the arrow points to.
color
MCvScalarLine color.
thickness
intLine thickness.
lineType
LineTypeType of the line.
shift
intNumber of fractional bits in the point coordinates.
tipLength
doubleThe length of the arrow tip in relation to the arrow length
BilateralFilter(IInputArray, IOutputArray, int, double, double, BorderType)
Applies the bilateral filter to an image.
public static void BilateralFilter(IInputArray src, IOutputArray dst, int d, double sigmaColor, double sigmaSpace, BorderType borderType = BorderType.Default)
Parameters
src
IInputArraySource 8-bit or floating-point, 1-channel or 3-channel image.
dst
IOutputArrayDestination image of the same size and type as src .
d
intDiameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
sigmaColor
doubleFilter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
sigmaSpace
doubleFilter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace. Otherwise, d is proportional to sigmaSpace.
borderType
BorderTypeBorder mode used to extrapolate pixels outside of the image.
BitwiseAnd(IInputArray, IInputArray, IOutputArray, IInputArray)
Calculates per-element bit-wise logical conjunction of two arrays: dst(I)=src1(I) & src2(I) if mask(I)!=0 In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size
public static void BitwiseAnd(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
mask
IInputArrayOperation mask, 8-bit single channel array; specifies elements of destination array to be changed
BitwiseNot(IInputArray, IOutputArray, IInputArray)
Inverts every bit of every array element: dst(I) = ~src(I)
public static void BitwiseNot(IInputArray src, IOutputArray dst, IInputArray mask = null)
Parameters
src
IInputArrayThe source array
dst
IOutputArrayThe destination array
mask
IInputArrayThe optional mask for the operation, use null to ignore
BitwiseOr(IInputArray, IInputArray, IOutputArray, IInputArray)
Calculates per-element bit-wise disjunction of two arrays: dst(I)=src1(I)|src2(I) In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size
public static void BitwiseOr(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
mask
IInputArrayOperation mask, 8-bit single channel array; specifies elements of destination array to be changed
BitwiseXor(IInputArray, IInputArray, IOutputArray, IInputArray)
Calculates per-element bit-wise exclusive or of two arrays: dst(I)=src1(I)^src2(I) if mask(I)!=0. In the case of floating-point arrays their bit representations are used for the operation. All the arrays must have the same type, except the mask, and the same size
public static void BitwiseXor(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
mask
IInputArrayMask, 8-bit single channel array; specifies elements of destination array to be changed.
BlendLinear(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)
Performs linear blending of two images: dst(i, j)=weights1(i, j) x src1(i, j) + weights2(i, j) x src2(i, j)
public static void BlendLinear(IInputArray src1, IInputArray src2, IInputArray weights1, IInputArray weights2, IOutputArray dst)
Parameters
src1
IInputArrayIt has a type of CV_8UC(n) or CV_32FC(n), where n is a positive integer.
src2
IInputArrayIt has the same type and size as src1.
weights1
IInputArrayIt has a type of CV_32FC1 and the same size with src1.
weights2
IInputArrayIt has a type of CV_32FC1 and the same size with src1.
dst
IOutputArrayIt is created if it does not have the same size and type with src1.
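The stated per-pixel formula can be sketched in plain Python (an illustrative sketch, not Emgu CV code; it applies the formula as written, assuming the two weight maps sum to 1 at each pixel):

```python
# Illustrative sketch of BlendLinear:
# dst(i,j) = weights1(i,j) * src1(i,j) + weights2(i,j) * src2(i,j).
def blend_linear(src1, src2, weights1, weights2):
    return [[a * wa + b * wb
             for a, b, wa, wb in zip(r1, r2, rw1, rw2)]
            for r1, r2, rw1, rw2 in zip(src1, src2, weights1, weights2)]

print(blend_linear([[100.0]], [[200.0]],
                   [[0.25]], [[0.75]]))  # [[175.0]]
```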
Blur(IInputArray, IOutputArray, Size, Point, BorderType)
Blurs an image using the normalized box filter.
public static void Blur(IInputArray src, IOutputArray dst, Size ksize, Point anchor, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayinput image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
IOutputArrayOutput image of the same size and type as src.
ksize
SizeBlurring kernel size.
anchor
PointAnchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
borderType
BorderTypeBorder mode used to extrapolate pixels outside of the image.
BoundingRectangle(IInputArray)
Returns the up-right bounding rectangle for 2d point set
public static Rectangle BoundingRectangle(IInputArray points)
Parameters
points
IInputArrayInput 2D point set, stored in std::vector or Mat.
Returns
- Rectangle
The up-right bounding rectangle for 2d point set
BoundingRectangle(Point[])
Returns the up-right bounding rectangle for 2d point set
public static Rectangle BoundingRectangle(Point[] points)
Parameters
points
Point[]Input 2D point set
Returns
- Rectangle
The up-right bounding rectangle for 2d point set
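The idea can be sketched in plain Python (an illustrative sketch, not Emgu CV code; it follows the convention that, for integer points, width and height include both endpoints, as OpenCV's boundingRect does):

```python
# Illustrative sketch of the up-right bounding rectangle: the smallest
# axis-aligned (x, y, width, height) containing every input point.
def bounding_rectangle(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys),
            max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

print(bounding_rectangle([(2, 3), (5, 7), (4, 1)]))  # (2, 1, 4, 7)
```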
BoxFilter(IInputArray, IOutputArray, DepthType, Size, Point, bool, BorderType)
Blurs an image using the box filter.
public static void BoxFilter(IInputArray src, IOutputArray dst, DepthType ddepth, Size ksize, Point anchor, bool normalize = true, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayInput image.
dst
IOutputArrayOutput image of the same size and type as src.
ddepth
DepthTypeThe output image depth (-1 to use src.depth()).
ksize
SizeBlurring kernel size.
anchor
PointAnchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
normalize
boolSpecifying whether the kernel is normalized by its area or not.
borderType
BorderTypeBorder mode used to extrapolate pixels outside of the image.
BoxPoints(RotatedRect)
Calculates vertices of the input 2d box.
public static PointF[] BoxPoints(RotatedRect box)
Parameters
box
RotatedRectThe box
Returns
- PointF[]
The four vertices of rectangles.
BoxPoints(RotatedRect, IOutputArray)
Calculates vertices of the input 2d box.
public static void BoxPoints(RotatedRect box, IOutputArray points)
Parameters
box
RotatedRectThe box
points
IOutputArrayThe output array of four vertices of rectangles.
Broadcast(IInputArray, IInputArray, IOutputArray)
Broadcast the given Mat to the given shape.
public static void Broadcast(IInputArray src, IInputArray shape, IOutputArray dst)
Parameters
src
IInputArrayInput array
shape
IInputArrayTarget shape. Should be a list of CV_32S numbers. Note that negative values are not supported.
dst
IOutputArrayOutput array that has the given shape
BuildOpticalFlowPyramid(IInputArray, IOutputArrayOfArrays, Size, int, bool, BorderType, BorderType, bool)
Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.
public static int BuildOpticalFlowPyramid(IInputArray img, IOutputArrayOfArrays pyramid, Size winSize, int maxLevel, bool withDerivatives = true, BorderType pyrBorder = BorderType.Default, BorderType derivBorder = BorderType.Constant, bool tryReuseInputImage = true)
Parameters
img
IInputArray8-bit input image.
pyramid
IOutputArrayOfArraysOutput pyramid.
winSize
SizeWindow size of optical flow algorithm. Must be not less than winSize argument of calcOpticalFlowPyrLK. It is needed to calculate required padding for pyramid levels.
maxLevel
int0-based maximal pyramid level number.
withDerivatives
boolSet to precompute gradients for the every pyramid level. If pyramid is constructed without the gradients then calcOpticalFlowPyrLK will calculate them internally.
pyrBorder
BorderTypeThe border mode for pyramid layers.
derivBorder
BorderTypeThe border mode for gradients.
tryReuseInputImage
boolPut the ROI of the input image into the pyramid if possible. You can pass false to force data copying.
Returns
- int
Number of levels in constructed pyramid. Can be less than maxLevel.
BuildPyramid(IInputArray, IOutputArrayOfArrays, int, BorderType)
The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown to the previously built pyramid layers, starting from dst[0]==src.
public static void BuildPyramid(IInputArray src, IOutputArrayOfArrays dst, int maxlevel, BorderType borderType = BorderType.Default)
Parameters
src
IInputArraySource image. Check pyrDown for the list of supported types.
dst
IOutputArrayOfArraysDestination vector of maxlevel+1 images of the same type as src. dst[0] will be the same as src. dst[1] is the next pyramid layer, a smoothed and down-sized src, and so on.
maxlevel
int0-based index of the last (the smallest) pyramid layer. It must be non-negative.
borderType
BorderTypePixel extrapolation method
CLAHE(IInputArray, double, Size, IOutputArray)
Contrast Limited Adaptive Histogram Equalization (CLAHE)
public static void CLAHE(IInputArray src, double clipLimit, Size tileGridSize, IOutputArray dst)
Parameters
src
IInputArrayThe source image
clipLimit
doubleClip Limit, use 40 for default
tileGridSize
SizeTile grid size, use (8, 8) for default
dst
IOutputArrayThe destination image
CalcBackProject(IInputArrayOfArrays, int[], IInputArray, IOutputArray, float[], double)
Calculates the back projection of a histogram.
public static void CalcBackProject(IInputArrayOfArrays images, int[] channels, IInputArray hist, IOutputArray backProject, float[] ranges, double scale = 1)
Parameters
images
IInputArrayOfArraysSource arrays. They all should have the same depth, CV_8U or CV_32F , and the same size. Each of them can have an arbitrary number of channels.
channels
int[]List of the channels used to compute the back projection.
hist
IInputArrayInput histogram that can be dense or sparse.
backProject
IOutputArrayDestination back projection array that is a single-channel array of the same size and depth as images[0] .
ranges
float[]Array of arrays of the histogram bin boundaries in each dimension.
scale
doubleOptional scale factor for the output back projection.
CalcCovarMatrix(IInputArray, IOutputArray, IInputOutputArray, CovarMethod, DepthType)
Calculates the covariance matrix of a set of vectors.
public static void CalcCovarMatrix(IInputArray samples, IOutputArray covar, IInputOutputArray mean, CovarMethod flags, DepthType ctype = DepthType.Cv64F)
Parameters
samples
IInputArraySamples stored either as separate matrices or as rows/columns of a single matrix.
covar
IOutputArrayOutput covariance matrix of the type ctype and square size.
mean
IInputOutputArrayInput or output (depending on the flags) array as the average value of the input vectors.
flags
CovarMethodOperation flags
ctype
DepthTypeType of the matrix
CalcGlobalOrientation(IInputArray, IInputArray, IInputArray, double, double)
Calculates the general motion direction in the selected region and returns the angle between 0 and 360. At first the function builds the orientation histogram and finds the basic orientation as a coordinate of the histogram maximum. After that the function calculates the shift relative to the basic orientation as a weighted sum of all orientation vectors: the more recent is the motion, the greater is the weight. The resultant angle is a circular sum of the basic orientation and the shift.
public static double CalcGlobalOrientation(IInputArray orientation, IInputArray mask, IInputArray mhi, double timestamp, double duration)
Parameters
orientation
IInputArrayMotion gradient orientation image; calculated by the function cvCalcMotionGradient.
mask
IInputArrayMask image. It may be a conjunction of valid gradient mask, obtained with cvCalcMotionGradient and mask of the region, whose direction needs to be calculated.
mhi
IInputArrayMotion history image.
timestamp
doubleCurrent time in milliseconds or other units, it is better to store time passed to cvUpdateMotionHistory before and reuse it here, because running cvUpdateMotionHistory and cvCalcMotionGradient on large images may take some time.
duration
doubleMaximal duration of motion track in milliseconds, the same as in cvUpdateMotionHistory
Returns
- double
The angle
CalcHist(IInputArrayOfArrays, int[], IInputArray, IOutputArray, int[], float[], bool)
Calculates a histogram of a set of arrays.
public static void CalcHist(IInputArrayOfArrays images, int[] channels, IInputArray mask, IOutputArray hist, int[] histSize, float[] ranges, bool accumulate)
Parameters
images
IInputArrayOfArraysSource arrays. They all should have the same depth, CV_8U, CV_16U or CV_32F , and the same size. Each of them can have an arbitrary number of channels.
channels
int[]List of the channels used to compute the histogram.
mask
IInputArrayOptional mask. If the matrix is not empty, it must be an 8-bit array of the same size as images[i] . The non-zero mask elements mark the array elements counted in the histogram.
hist
IOutputArrayOutput histogram
histSize
int[]Array of histogram sizes in each dimension.
ranges
float[]Array of the dims arrays of the histogram bin boundaries in each dimension.
accumulate
boolAccumulation flag. If it is set, the histogram is not cleared in the beginning when it is allocated. This feature enables you to compute a single histogram from several sets of arrays, or to update the histogram in time.
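The uniform binning behind a one-dimensional histogram can be sketched in plain Python (an illustrative sketch of the binning rule only, not Emgu CV code): a value v in [lo, hi) falls into bin floor((v - lo) * histSize / (hi - lo)).

```python
# Illustrative sketch of 1-D uniform histogram binning, as in CalcHist
# over a single channel with one range pair (lo, hi).
def calc_hist_1d(values, hist_size, lo, hi):
    hist = [0] * hist_size
    for v in values:
        if lo <= v < hi:  # values outside the range are not counted
            hist[int((v - lo) * hist_size / (hi - lo))] += 1
    return hist

print(calc_hist_1d([0, 10, 200, 255], 4, 0, 256))  # [2, 0, 0, 2]
```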
CalcMotionGradient(IInputArray, IOutputArray, IOutputArray, double, double, int)
Calculates the derivatives Dx and Dy of mhi and then calculates gradient orientation as: orientation(x,y) = arctan(Dy(x,y)/Dx(x,y)), where the signs of both Dx(x,y) and Dy(x,y) are taken into account (as in the cvCartToPolar function). After that, mask is filled to indicate where the orientation is valid (see the delta1 and delta2 description).
public static void CalcMotionGradient(IInputArray mhi, IOutputArray mask, IOutputArray orientation, double delta1, double delta2, int apertureSize = 3)
Parameters
mhi
IInputArrayMotion history image
mask
IOutputArrayMask image; marks pixels where motion gradient data is correct. Output parameter.
orientation
IOutputArrayMotion gradient orientation image; contains angles from 0 to ~360.
delta1
doubleThe function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2).
delta2
doubleThe function finds the minimum (m(x,y)) and maximum (M(x,y)) mhi values over each pixel (x,y) neighborhood and assumes the gradient is valid only if min(delta1,delta2) <= M(x,y)-m(x,y) <= max(delta1,delta2).
apertureSize
intAperture size of derivative operators used by the function: CV_SCHARR, 1, 3, 5 or 7 (see cvSobel).
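The sign-aware orientation formula can be sketched in plain Python (an illustrative sketch, not Emgu CV code), using atan2 to get the full 0 to 360 degree range as cvCartToPolar does:

```python
# Illustrative sketch of the orientation formula in CalcMotionGradient:
# orientation = arctan(Dy/Dx), with both signs taken into account so the
# result covers the full circle, then mapped into [0, 360) degrees.
import math

def gradient_orientation_degrees(dx, dy):
    return math.degrees(math.atan2(dy, dx)) % 360.0

print(gradient_orientation_degrees(1.0, 1.0))   # angle in the first quadrant
print(gradient_orientation_degrees(-1.0, 0.0))  # angle pointing left
```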
CalcOpticalFlowFarneback(IInputArray, IInputArray, IInputOutputArray, double, int, int, int, int, double, OpticalflowFarnebackFlag)
Computes dense optical flow using Gunnar Farneback's algorithm
public static void CalcOpticalFlowFarneback(IInputArray prev0, IInputArray next0, IInputOutputArray flow, double pyrScale, int levels, int winSize, int iterations, int polyN, double polySigma, OpticalflowFarnebackFlag flags)
Parameters
prev0
IInputArrayThe first 8-bit single-channel input image
next0
IInputArrayThe second input image of the same size and the same type as prevImg
flow
IInputOutputArrayThe computed flow image; will have the same size as prevImg and type CV 32FC2
pyrScale
doubleSpecifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is half the size of the previous one
levels
intThe number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used
winSize
intThe averaging window size; The larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field
iterations
intThe number of iterations the algorithm does at each pyramid level
polyN
intSize of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7
polySigma
doubleStandard deviation of the Gaussian that is used to smooth the derivatives used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1; for polyN=7 a good value would be polySigma=1.5
flags
OpticalflowFarnebackFlagThe operation flags
CalcOpticalFlowFarneback(Image<Gray, byte>, Image<Gray, byte>, Image<Gray, float>, Image<Gray, float>, double, int, int, int, int, double, OpticalflowFarnebackFlag)
Computes dense optical flow using Gunnar Farneback's algorithm
public static void CalcOpticalFlowFarneback(Image<Gray, byte> prev0, Image<Gray, byte> next0, Image<Gray, float> flowX, Image<Gray, float> flowY, double pyrScale, int levels, int winSize, int iterations, int polyN, double polySigma, OpticalflowFarnebackFlag flags)
Parameters
prev0
Image<Gray, byte>The first 8-bit single-channel input image
next0
Image<Gray, byte>The second input image of the same size and the same type as prevImg
flowX
Image<Gray, float>The computed flow image for x-velocity; will have the same size as prevImg
flowY
Image<Gray, float>The computed flow image for y-velocity; will have the same size as prevImg
pyrScale
doubleSpecifies the image scale (<1) to build the pyramids for each image. pyrScale=0.5 means the classical pyramid, where each next layer is half the size of the previous one
levels
intThe number of pyramid layers, including the initial image. levels=1 means that no extra layers are created and only the original images are used
winSize
intThe averaging window size; The larger values increase the algorithm robustness to image noise and give more chances for fast motion detection, but yield more blurred motion field
iterations
intThe number of iterations the algorithm does at each pyramid level
polyN
intSize of the pixel neighborhood used to find polynomial expansion in each pixel. Larger values mean that the image will be approximated with smoother surfaces, yielding a more robust algorithm and a more blurred motion field. Typically, polyN=5 or 7
polySigma
doubleStandard deviation of the Gaussian that is used to smooth the derivatives used as a basis for the polynomial expansion. For polyN=5 you can set polySigma=1.1; for polyN=7 a good value would be polySigma=1.5
flags
OpticalflowFarnebackFlagThe operation flags
CalcOpticalFlowPyrLK(IInputArray, IInputArray, IInputArray, IInputOutputArray, IOutputArray, IOutputArray, Size, int, MCvTermCriteria, LKFlowFlag, double)
Implements sparse iterative version of Lucas-Kanade optical flow in pyramids ([Bouguet00]). It calculates coordinates of the feature points on the current video frame given their coordinates on the previous frame. The function finds the coordinates with sub-pixel accuracy.
public static void CalcOpticalFlowPyrLK(IInputArray prevImg, IInputArray nextImg, IInputArray prevPts, IInputOutputArray nextPts, IOutputArray status, IOutputArray err, Size winSize, int maxLevel, MCvTermCriteria criteria, LKFlowFlag flags = LKFlowFlag.Default, double minEigThreshold = 0.0001)
Parameters
prevImg
IInputArrayFirst frame, at time t.
nextImg
IInputArraySecond frame, at time t + dt .
prevPts
IInputArrayArray of points for which the flow needs to be found.
nextPts
IInputOutputArrayArray of 2D points containing the calculated new positions of the input features in the second image
status
IOutputArrayArray. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise.
err
IOutputArrayArray of numbers containing the difference between patches around the original and moved points. Optional parameter; can be null
winSize
SizeSize of the search window of each pyramid level.
maxLevel
intMaximal pyramid level number. If 0, pyramids are not used (single level); if 1, two levels are used, etc.
criteria
MCvTermCriteriaSpecifies when the iteration process of finding the flow for each point on each pyramid level should be stopped.
flags
LKFlowFlagMiscellaneous flags
minEigThreshold
doubleThe algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed, allowing bad points to be removed and giving a performance boost.
Remarks
Both parameters prev_pyr and curr_pyr comply with the following rules: if the image pointer is 0, the function allocates the buffer internally, calculates the pyramid, and releases the buffer after processing. Otherwise, the function calculates the pyramid and stores it in the buffer unless the flag CV_LKFLOW_PYR_A[B]_READY is set. The image should be large enough to fit the Gaussian pyramid data. After the function call both pyramids are calculated and the readiness flag for the corresponding image can be set in the next call (i.e., typically, for all the image pairs except the very first one CV_LKFLOW_PYR_A_READY is set).
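A sketch of tracking detected corners between two frames with this overload. The file names and detection parameters are illustrative; `GoodFeaturesToTrack` is used here only as one common way to obtain points worth tracking:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

using (Mat prev = CvInvoke.Imread("frame0.png", ImreadModes.Grayscale))
using (Mat next = CvInvoke.Imread("frame1.png", ImreadModes.Grayscale))
using (VectorOfPointF prevPts = new VectorOfPointF())
using (VectorOfPointF nextPts = new VectorOfPointF())
using (VectorOfByte status = new VectorOfByte())
using (VectorOfFloat err = new VectorOfFloat())
{
    // Pick up to 100 strong corners in the first frame.
    CvInvoke.GoodFeaturesToTrack(prev, prevPts, 100, 0.01, 10);

    CvInvoke.CalcOpticalFlowPyrLK(
        prev, next, prevPts, nextPts, status, err,
        new Size(21, 21), 3, new MCvTermCriteria(30, 0.01));

    // status[i] == 1 marks points whose flow was found;
    // nextPts[i] is the refined sub-pixel position in the second frame.
}
```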
CalcOpticalFlowPyrLK(IInputArray, IInputArray, PointF[], Size, int, MCvTermCriteria, out PointF[], out byte[], out float[], LKFlowFlag, double)
Calculates optical flow for a sparse feature set using iterative Lucas-Kanade method in pyramids
public static void CalcOpticalFlowPyrLK(IInputArray prev, IInputArray curr, PointF[] prevFeatures, Size winSize, int level, MCvTermCriteria criteria, out PointF[] currFeatures, out byte[] status, out float[] trackError, LKFlowFlag flags = LKFlowFlag.Default, double minEigThreshold = 0.0001)
Parameters
prev
IInputArrayFirst frame, at time t
curr
IInputArraySecond frame, at time t + dt
prevFeatures
PointF[]Array of points for which the flow needs to be found
winSize
SizeSize of the search window of each pyramid level
level
intMaximal pyramid level number. If 0, pyramids are not used (single level); if 1, two levels are used, etc.
criteria
MCvTermCriteriaSpecifies when the iteration process of finding the flow for each point on each pyramid level should be stopped
currFeatures
PointF[]Array of 2D points containing calculated new positions of input features in the second image
status
byte[]Array. Every element of the array is set to 1 if the flow for the corresponding feature has been found, 0 otherwise
trackError
float[]Array of numbers containing the difference between patches around the original and moved points
flags
LKFlowFlagFlags
minEigThreshold
doubleThe algorithm calculates the minimum eigenvalue of a 2x2 normal matrix of optical flow equations (called a spatial gradient matrix in [Bouguet00]), divided by the number of pixels in a window; if this value is less than minEigThreshold, the corresponding feature is filtered out and its flow is not processed, allowing bad points to be removed and giving a performance boost.
CalibrateCamera(IInputArray, IInputArray, Size, IInputOutputArray, IInputOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
public static double CalibrateCamera(IInputArray objectPoints, IInputArray imagePoints, Size imageSize, IInputOutputArray cameraMatrix, IInputOutputArray distortionCoeffs, IOutputArray rotationVectors, IOutputArray translationVectors, CalibType flags, MCvTermCriteria termCriteria)
Parameters
objectPoints
IInputArrayIn the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.
imagePoints
IInputArrayIn the new interface it is a vector of vectors of the projections of calibration pattern points. In the old interface all the vectors of object points from different views are concatenated together.
imageSize
SizeSize of the image used only to initialize the camera intrinsic matrix.
cameraMatrix
IInputOutputArrayInput/output 3x3 floating-point camera intrinsic matrix A = [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized
distortionCoeffs
IInputOutputArrayInput/output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements.
rotationVectors
IOutputArrayOutput vector of rotation vectors (Rodrigues) estimated for each pattern view. That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.
translationVectors
IOutputArrayOutput vector of translation vectors estimated for each pattern view, see the parameter description above.
flags
CalibTypeDifferent flags
termCriteria
MCvTermCriteriaThe termination criteria
Returns
- double
The final reprojection error
CalibrateCamera(MCvPoint3D32f[][], PointF[][], Size, IInputOutputArray, IInputOutputArray, CalibType, MCvTermCriteria, out Mat[], out Mat[])
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
public static double CalibrateCamera(MCvPoint3D32f[][] objectPoints, PointF[][] imagePoints, Size imageSize, IInputOutputArray cameraMatrix, IInputOutputArray distortionCoeffs, CalibType calibrationType, MCvTermCriteria termCriteria, out Mat[] rotationVectors, out Mat[] translationVectors)
Parameters
objectPoints
MCvPoint3D32f[][]In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space. The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.
imagePoints
PointF[][]In the new interface it is a vector of vectors of the projections of calibration pattern points. In the old interface all the vectors of object points from different views are concatenated together.
imageSize
SizeSize of the image used only to initialize the camera intrinsic matrix.
cameraMatrix
IInputOutputArrayInput/output 3x3 floating-point camera intrinsic matrix A = [fx 0 cx; 0 fy cy; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS and/or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized
distortionCoeffs
IInputOutputArrayInput/output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements.
calibrationType
CalibTypeThe camera calibration flags.
termCriteria
MCvTermCriteriaThe termination criteria
rotationVectors
Mat[]Output vector of rotation vectors (Rodrigues) estimated for each pattern view. That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.
translationVectors
Mat[]Output vector of translation vectors estimated for each pattern view, see the parameter description above.
Returns
- double
The final reprojection error
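A sketch of calibrating from a planar chessboard with this overload. The board geometry (9x6 inner corners, 25 mm squares) and image size are illustrative, and `LoadDetectedCorners` is a hypothetical helper standing in for per-view `FindChessboardCorners` results:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

const int Cols = 9, Rows = 6;   // inner corners -- illustrative values
const float Square = 25f;       // square size in mm -- illustrative

// One set of 3D board points (Z = 0) reused for every view.
MCvPoint3D32f[] boardPoints = new MCvPoint3D32f[Cols * Rows];
for (int r = 0; r < Rows; r++)
    for (int c = 0; c < Cols; c++)
        boardPoints[r * Cols + c] = new MCvPoint3D32f(c * Square, r * Square, 0);

// Hypothetical helper: one PointF[] of detected corners per view.
PointF[][] imageCorners = LoadDetectedCorners();
MCvPoint3D32f[][] objectPoints = new MCvPoint3D32f[imageCorners.Length][];
for (int i = 0; i < objectPoints.Length; i++)
    objectPoints[i] = boardPoints;

Mat cameraMatrix = new Mat(3, 3, DepthType.Cv64F, 1);
Mat distCoeffs = new Mat(8, 1, DepthType.Cv64F, 1);
double error = CvInvoke.CalibrateCamera(
    objectPoints, imageCorners, new Size(1280, 720),
    cameraMatrix, distCoeffs, CalibType.Default,
    new MCvTermCriteria(30, 1e-6),
    out Mat[] rvecs, out Mat[] tvecs);
// error is the final RMS reprojection error in pixels.
```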
CalibrateHandEye(IInputArrayOfArrays, IInputArrayOfArrays, IInputArrayOfArrays, IInputArrayOfArrays, IOutputArray, IOutputArray, HandEyeCalibrationMethod)
Computes Hand-Eye calibration
public static void CalibrateHandEye(IInputArrayOfArrays rGripper2base, IInputArrayOfArrays tGripper2base, IInputArrayOfArrays rTarget2cam, IInputArrayOfArrays tTarget2cam, IOutputArray rCam2gripper, IOutputArray tCam2gripper, HandEyeCalibrationMethod method)
Parameters
rGripper2base
IInputArrayOfArraysRotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from gripper frame to robot base frame.
tGripper2base
IInputArrayOfArraysTranslation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from gripper frame to robot base frame.
rTarget2cam
IInputArrayOfArraysRotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the rotation matrices for all the transformations from calibration target frame to camera frame.
tTarget2cam
IInputArrayOfArraysTranslation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame. This is a vector (vector<Mat>) that contains the translation vectors for all the transformations from calibration target frame to camera frame.
rCam2gripper
IOutputArrayEstimated rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.
tCam2gripper
IOutputArrayEstimated translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame.
method
HandEyeCalibrationMethodOne of the implemented Hand-Eye calibration method
CalibrationMatrixValues(IInputArray, Size, double, double, ref double, ref double, ref double, ref MCvPoint2D64f, ref double)
Computes various useful camera (sensor/lens) characteristics using the computed camera calibration matrix, image frame resolution in pixels and the physical aperture size
public static void CalibrationMatrixValues(IInputArray cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, ref double fovx, ref double fovy, ref double focalLength, ref MCvPoint2D64f principalPoint, ref double aspectRatio)
Parameters
cameraMatrix
IInputArrayThe matrix of intrinsic parameters
imageSize
SizeImage size in pixels
apertureWidth
doubleAperture width in real-world units (optional input parameter). Set it to 0 if not used
apertureHeight
doubleAperture height in real-world units (optional input parameter). Set it to 0 if not used
fovx
doubleField of view angle in x direction in degrees
fovy
doubleField of view angle in y direction in degrees
focalLength
doubleFocal length in real-world units
principalPoint
MCvPoint2D64fThe principal point in real-world units
aspectRatio
doubleThe pixel aspect ratio ~ fy/fx
CamShift(IInputArray, ref Rectangle, MCvTermCriteria)
Implements CAMSHIFT object tracking algorithm ([Bradski98]). First, it finds an object center using cvMeanShift and, after that, calculates the object size and orientation.
public static RotatedRect CamShift(IInputArray probImage, ref Rectangle window, MCvTermCriteria criteria)
Parameters
probImage
IInputArrayBack projection of object histogram
window
RectangleInitial search window
criteria
MCvTermCriteriaCriteria applied to determine when the window search should be finished
Returns
- RotatedRect
Circumscribed box for the object, contains object size and orientation
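A sketch of one CamShift iteration. `GetBackProjection` is a hypothetical helper standing in for a `CalcBackProject` call on the tracked object's hue histogram, and the initial window is illustrative:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Hypothetical helper: back projection of the object's histogram.
Mat probImage = GetBackProjection();

Rectangle window = new Rectangle(100, 100, 60, 60);  // previous object location
RotatedRect box = CvInvoke.CamShift(probImage, ref window,
    new MCvTermCriteria(10, 1));
// box.Center / box.Size / box.Angle describe the tracked object;
// window was updated in place and can seed the search in the next frame.
```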
Canny(IInputArray, IInputArray, IOutputArray, double, double, bool)
Finds edges in the input derivative images dx and dy and marks them in the output image edges using the Canny algorithm. The smaller of threshold1 and threshold2 is used for edge linking; the larger is used to find initial segments of strong edges.
public static void Canny(IInputArray dx, IInputArray dy, IOutputArray edges, double threshold1, double threshold2, bool l2Gradient = false)
Parameters
dx
IInputArray16-bit x derivative of input image
dy
IInputArray16-bit y derivative of input image
edges
IOutputArrayImage to store the edges found by the function
threshold1
doubleThe first threshold
threshold2
doubleThe second threshold.
l2Gradient
boolA flag indicating whether a more accurate L2 norm should be used to calculate the image gradient magnitude (l2Gradient=true), or whether the default L1 norm is enough (l2Gradient=false).
Canny(IInputArray, IOutputArray, double, double, int, bool)
Finds edges in the input image and marks them in the output image edges using the Canny algorithm. The smaller of threshold1 and threshold2 is used for edge linking; the larger is used to find initial segments of strong edges.
public static void Canny(IInputArray image, IOutputArray edges, double threshold1, double threshold2, int apertureSize = 3, bool l2Gradient = false)
Parameters
image
IInputArrayInput image
edges
IOutputArrayImage to store the edges found by the function
threshold1
doubleThe first threshold
threshold2
doubleThe second threshold.
apertureSize
intAperture parameter for Sobel operator
l2Gradient
boolA flag indicating whether a more accurate L2 norm should be used to calculate the image gradient magnitude (l2Gradient=true), or whether the default L1 norm is enough (l2Gradient=false).
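A minimal edge-detection sketch using this overload; the file names are hypothetical, and the 1:2 low/high threshold ratio is a common recommendation, not a requirement:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat edges = new Mat())
{
    // threshold1 = 100 for edge linking, threshold2 = 200 for strong edges,
    // default 3x3 Sobel aperture.
    CvInvoke.Canny(gray, edges, 100, 200);
    CvInvoke.Imwrite("edges.png", edges);
}
```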
CartToPolar(IInputArray, IInputArray, IOutputArray, IOutputArray, bool)
Calculates the magnitude and angle of 2D vectors: magnitude(I)=sqrt( x(I)^2+y(I)^2 ), angle(I)=atan2( y(I), x(I) ). The angles are calculated with an accuracy of about 0.3 degrees. For the point (0,0), the angle is set to 0.
public static void CartToPolar(IInputArray x, IInputArray y, IOutputArray magnitude, IOutputArray angle, bool angleInDegrees = false)
Parameters
x
IInputArrayArray of x-coordinates; this must be a single-precision or double-precision floating-point array.
y
IInputArrayArray of y-coordinates, that must have the same size and same type as x.
magnitude
IOutputArrayOutput array of magnitudes of the same size and type as x.
angle
IOutputArrayOutput array of angles that has the same size and type as x; the angles are measured in radians (from 0 to 2*Pi) or in degrees (0 to 360 degrees).
angleInDegrees
boolA flag indicating whether the angles are measured in radians (the default) or in degrees.
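A small worked example using `Matrix<float>` inputs (any `IInputArray` of single- or double-precision floats works); the values below follow directly from magnitude(I)=sqrt(x^2+y^2) and angle(I)=atan2(y, x):

```csharp
using Emgu.CV;

Matrix<float> x = new Matrix<float>(new float[,] { { 1f, 0f, -1f } });
Matrix<float> y = new Matrix<float>(new float[,] { { 1f, 2f,  0f } });
Matrix<float> mag = new Matrix<float>(1, 3);
Matrix<float> ang = new Matrix<float>(1, 3);

CvInvoke.CartToPolar(x, y, mag, ang, angleInDegrees: true);
// (1, 1)  -> magnitude ~1.414, angle ~45 degrees
// (0, 2)  -> magnitude  2.0,   angle ~90 degrees
// (-1, 0) -> magnitude  1.0,   angle ~180 degrees
```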
CheckRange(IInputArray, bool, ref Point, double, double)
Checks that every array element is neither NaN nor +/- infinity. The function also checks that each value is between minVal and maxVal. In the case of multi-channel arrays, each channel is processed independently. If some values are out of range, the position of the first outlier is stored in pos, and the function either returns false (when quiet=true) or throws an exception.
public static bool CheckRange(IInputArray arr, bool quiet, ref Point pos, double minVal, double maxVal)
Parameters
arr
IInputArrayThe array to check
quiet
boolThe flag indicating whether the function quietly returns false when the array elements are out of range, or throws an exception
pos
PointThis will be filled with the position of the first outlier
minVal
doubleThe inclusive lower boundary of valid values range
maxVal
doubleThe exclusive upper boundary of valid values range
Returns
- bool
If quiet, return true if all values are in range
Circle(IInputOutputArray, Point, int, MCvScalar, int, LineType, int)
Draws a simple or filled circle with given center and radius. The circle is clipped by ROI rectangle.
public static void Circle(IInputOutputArray img, Point center, int radius, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage where the circle is drawn
center
PointCenter of the circle
radius
intRadius of the circle.
color
MCvScalarColor of the circle
thickness
intThickness of the circle outline if positive, otherwise indicates that a filled circle has to be drawn
lineType
LineTypeLine type
shift
intNumber of fractional bits in the center coordinates and radius value
ClipLine(Rectangle, ref Point, ref Point)
Calculates a part of the line segment which is entirely in the rectangle.
public static bool ClipLine(Rectangle rectangle, ref Point pt1, ref Point pt2)
Parameters
rectangle
RectangleThe rectangle
pt1
PointFirst ending point of the line segment. It is modified by the function
pt2
PointSecond ending point of the line segment. It is modified by the function.
Returns
- bool
It returns false if the line segment is completely outside the rectangle and true otherwise.
ColorChange(IInputArray, IInputArray, IOutputArray, float, float, float)
Given an original color image, two differently colored versions of this image can be mixed seamlessly.
public static void ColorChange(IInputArray src, IInputArray mask, IOutputArray dst, float redMul = 1, float greenMul = 1, float blueMul = 1)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
mask
IInputArrayInput 8-bit 1 or 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src .
redMul
floatR-channel multiplication factor, between 0.5 and 2.5.
greenMul
floatG-channel multiplication factor, between 0.5 and 2.5.
blueMul
floatB-channel multiplication factor, between 0.5 and 2.5.
Compare(IInputArray, IInputArray, IOutputArray, CmpType)
Compares the corresponding elements of two arrays and fills the destination mask array: dst(I)=src1(I) op src2(I), dst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size)
public static void Compare(IInputArray src1, IInputArray src2, IOutputArray dst, CmpType cmpOp)
Parameters
src1
IInputArrayThe first image to compare with
src2
IInputArrayThe second image to compare with
dst
IOutputArraydst(I) is set to 0xff (all '1'-bits) if the particular relation between the elements is true and 0 otherwise.
cmpOp
CmpTypeThe comparison operator type
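A small worked example of the element-wise comparison; the outputs follow from dst(I) being 0xff where the relation holds and 0 elsewhere:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

Matrix<byte> a = new Matrix<byte>(new byte[,] { { 1, 5, 9 } });
Matrix<byte> b = new Matrix<byte>(new byte[,] { { 3, 5, 2 } });
Matrix<byte> dst = new Matrix<byte>(1, 3);

CvInvoke.Compare(a, b, dst, CmpType.GreaterThan);
// dst is now { 0, 0, 255 }: only the last element satisfies a > b.
```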
CompareHist(IInputArray, IInputArray, HistogramCompMethod)
Compares two histograms.
public static double CompareHist(IInputArray h1, IInputArray h2, HistogramCompMethod method)
Parameters
h1
IInputArrayFirst compared histogram.
h2
IInputArraySecond compared histogram of the same size as H1 .
method
HistogramCompMethodComparison method
Returns
- double
The distance between the histogram
Compute(IStereoMatcher, IInputArray, IInputArray, IOutputArray)
Computes disparity map for the specified stereo pair
public static void Compute(this IStereoMatcher matcher, IInputArray left, IInputArray right, IOutputArray disparity)
Parameters
matcher
IStereoMatcherThe stereo matcher
left
IInputArrayLeft 8-bit single-channel image.
right
IInputArrayRight image of the same size and the same type as the left one.
disparity
IOutputArrayOutput disparity map. It has the same size as the input images. Some algorithms, like StereoBM or StereoSGBM compute 16-bit fixed-point disparity map (where each disparity value has 4 fractional bits), whereas other algorithms output 32-bit floating-point disparity map
ComputeCorrespondEpilines(IInputArray, int, IInputArray, IOutputArray)
For every point in one of the two images of a stereo pair, the function finds the equation of the line that contains the corresponding point (i.e. the projection of the same 3D point) in the other image. Each line is encoded by a vector of 3 elements l=[a,b,c]^T, so that l^T*[x, y, 1]^T=0, or ax + by + c = 0. From the fundamental matrix definition (see cvFindFundamentalMatrix), the line l2 for a point p1 in the first image (whichImage=1) is computed as l2=F*p1, and the line l1 for a point p2 in the second image (whichImage=2) is computed as l1=F^T*p2. Line coefficients are defined up to a scale; they are normalized (a^2+b^2=1) and stored into correspondentLines.
public static void ComputeCorrespondEpilines(IInputArray points, int whichImage, IInputArray fundamentalMatrix, IOutputArray correspondentLines)
Parameters
points
IInputArrayThe input points. 2xN, Nx2, 3xN or Nx3 array (where N number of points). Multi-channel 1xN or Nx1 array is also acceptable.
whichImage
intIndex of the image (1 or 2) that contains the points
fundamentalMatrix
IInputArrayFundamental matrix
correspondentLines
IOutputArrayComputed epilines, 3xN or Nx3 array
ConnectedComponents(IInputArray, IOutputArray, LineType, DepthType, ConnectedComponentsAlgorithmsTypes)
Computes the connected components labeled image of boolean image
public static int ConnectedComponents(IInputArray image, IOutputArray labels, LineType connectivity = LineType.EightConnected, DepthType labelType = DepthType.Cv32S, ConnectedComponentsAlgorithmsTypes cclType = ConnectedComponentsAlgorithmsTypes.Default)
Parameters
image
IInputArrayThe boolean image
labels
IOutputArrayThe connected components labeled image of boolean image
connectivity
LineType4 or 8 way connectivity
labelType
DepthTypeSpecifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image
cclType
ConnectedComponentsAlgorithmsTypesconnected components algorithm type
Returns
- int
N, the total number of labels [0, N-1] where 0 represents the background label.
ConnectedComponentsWithStats(IInputArray, IOutputArray, IOutputArray, IOutputArray, LineType, DepthType, ConnectedComponentsAlgorithmsTypes)
Computes the connected components labeled image of boolean image
public static int ConnectedComponentsWithStats(IInputArray image, IOutputArray labels, IOutputArray stats, IOutputArray centroids, LineType connectivity = LineType.EightConnected, DepthType labelType = DepthType.Cv32S, ConnectedComponentsAlgorithmsTypes cclType = ConnectedComponentsAlgorithmsTypes.Default)
Parameters
image
IInputArrayThe boolean image
labels
IOutputArrayThe connected components labeled image of boolean image
stats
IOutputArrayStatistics output for each label, including the background label, see below for available statistics. Statistics are accessed via stats(label, COLUMN) where COLUMN is one of cv::ConnectedComponentsTypes. The data type is CV_32S
centroids
IOutputArrayCentroid output for each label, including the background label. Centroids are accessed via centroids(label, 0) for x and centroids(label, 1) for y. The data type CV_64F.
connectivity
LineType4 or 8 way connectivity
labelType
DepthTypeSpecifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image
cclType
ConnectedComponentsAlgorithmsTypesconnected components algorithm type
Returns
- int
N, the total number of labels [0, N-1] where 0 represents the background label.
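A sketch of labeling blobs in a binary image and reading back the per-label statistics; the file name is hypothetical:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

// binary is assumed to be an 8-bit image with non-zero foreground pixels.
using (Mat binary = CvInvoke.Imread("blobs.png", ImreadModes.Grayscale))
using (Mat labels = new Mat())
using (Mat stats = new Mat())
using (Mat centroids = new Mat())
{
    int n = CvInvoke.ConnectedComponentsWithStats(binary, labels, stats, centroids);
    // n labels in [0, n-1]; label 0 is the background.
    // Row i of stats holds left/top/width/height/area as CV_32S;
    // row i of centroids holds the (x, y) centroid as CV_64F.
}
```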
ContourArea(IInputArray, bool)
Calculates area of the whole contour or contour section.
public static double ContourArea(IInputArray contour, bool oriented = false)
Parameters
contour
IInputArrayInput vector of 2D points (contour vertices), stored in std::vector or Mat.
oriented
boolOriented area flag. If it is true, the function returns a signed area value, depending on the contour orientation (clockwise or counter-clockwise). Using this feature you can determine orientation of a contour by taking the sign of an area. By default, the parameter is false, which means that the absolute value is returned.
Returns
- double
The area of the whole contour or contour section
ConvertMaps(IInputArray, IInputArray, IOutputArray, IOutputArray, DepthType, int, bool)
Converts image transformation maps from one representation to another.
public static void ConvertMaps(IInputArray map1, IInputArray map2, IOutputArray dstmap1, IOutputArray dstmap2, DepthType dstmap1Depth, int dstmap1Channels, bool nninterpolation = false)
Parameters
map1
IInputArrayThe first input map of type CV_16SC2 , CV_32FC1 , or CV_32FC2 .
map2
IInputArrayThe second input map of type CV_16UC1 , CV_32FC1 , or none (empty matrix), respectively.
dstmap1
IOutputArrayThe first output map that has the type dstmap1type and the same size as src .
dstmap2
IOutputArrayThe second output map.
dstmap1Depth
DepthTypeDepth type of the first output map that should be CV_16SC2 , CV_32FC1 , or CV_32FC2.
dstmap1Channels
intThe number of channels in the dst map.
nninterpolation
boolFlag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.
ConvertPointsFromHomogeneous(IInputArray, IOutputArray)
Converts points from homogeneous to Euclidean space.
public static void ConvertPointsFromHomogeneous(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayInput vector of N-dimensional points.
dst
IOutputArrayOutput vector of N-1-dimensional points.
ConvertPointsToHomogeneous(IInputArray, IOutputArray)
Converts points from Euclidean to homogeneous space.
public static void ConvertPointsToHomogeneous(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayInput vector of N-dimensional points.
dst
IOutputArrayOutput vector of N+1-dimensional points.
ConvertScaleAbs(IInputArray, IOutputArray, double, double)
Similar to cvCvtScale but it stores absolute values of the conversion results: dst(I)=abs(src(I)*scale + (shift,shift,...)) The function supports only destination arrays of 8u (8-bit unsigned integers) type, for other types the function can be emulated by combination of cvConvertScale and cvAbs functions.
public static void ConvertScaleAbs(IInputArray src, IOutputArray dst, double scale, double shift)
Parameters
src
IInputArraySource array
dst
IOutputArrayDestination array (should have 8u depth).
scale
doubleScaleAbs factor
shift
doubleValue added to the scaled source array elements
ConvexHull(IInputArray, IOutputArray, bool, bool)
The function cvConvexHull2 finds the convex hull of a 2D point set using Sklansky's algorithm.
public static void ConvexHull(IInputArray points, IOutputArray hull, bool clockwise = false, bool returnPoints = true)
Parameters
points
IInputArrayInput 2D point set
hull
IOutputArrayOutput convex hull. It is either an integer vector of indices or vector of points. In the first case, the hull elements are 0-based indices of the convex hull points in the original array (since the set of convex hull points is a subset of the original point set). In the second case, hull elements are the convex hull points themselves.
clockwise
boolOrientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.
returnPoints
boolOperation flag. In case of a matrix, when the flag is true, the function returns convex hull points. Otherwise, it returns indices of the convex hull points. When the output array is std::vector, the flag is ignored, and the output depends on the type of the vector
ConvexHull(PointF[], bool)
Finds convex hull of 2D point set using Sklansky's algorithm
public static PointF[] ConvexHull(PointF[] points, bool clockwise = false)
Parameters
points
PointF[]The points to find convex hull from
clockwise
boolOrientation flag. If it is true, the output convex hull is oriented clockwise. Otherwise, it is oriented counter-clockwise. The assumed coordinate system has its X axis pointing to the right, and its Y axis pointing upwards.
Returns
- PointF[]
The convex hull of the points
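A small worked example of this overload; the interior point is excluded because a convex hull keeps only the outermost points:

```csharp
using System.Drawing;
using Emgu.CV;

PointF[] pts =
{
    new PointF(0, 0), new PointF(2, 0), new PointF(2, 2),
    new PointF(0, 2), new PointF(1, 1)   // interior point
};
PointF[] hull = CvInvoke.ConvexHull(pts);
// hull contains the four square corners; the interior point (1,1) is dropped.
```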
ConvexityDefects(IInputArray, IInputArray, IOutputArray)
Finds the convexity defects of a contour.
public static void ConvexityDefects(IInputArray contour, IInputArray convexhull, IOutputArray convexityDefects)
Parameters
contour
IInputArrayInput contour
convexhull
IInputArrayConvex hull obtained using ConvexHull that should contain pointers or indices to the contour points, not the hull points themselves, i.e. return_points parameter in cvConvexHull2 should be 0
convexityDefects
IOutputArrayThe output vector of convexity defects. Each convexity defect is represented as a 4-element integer vector (a.k.a. cv::Vec4i): (start_index, end_index, farthest_pt_index, fixpt_depth), where the indices are 0-based indices in the original contour of the convexity defect beginning, end and the farthest point, and fixpt_depth is a fixed-point approximation (with 8 fractional bits) of the distance between the farthest contour point and the hull. That is, the floating-point value of the depth is fixpt_depth/256.0.
CopyMakeBorder(IInputArray, IOutputArray, int, int, int, int, BorderType, MCvScalar)
Copies the source 2D array into interior of destination array and makes a border of the specified type around the copied area. The function is useful when one needs to emulate border type that is different from the one embedded into a specific algorithm implementation. For example, morphological functions, as well as most of other filtering functions in OpenCV, internally use replication border type, while the user may need zero border or a border, filled with 1's or 255's
public static void CopyMakeBorder(IInputArray src, IOutputArray dst, int top, int bottom, int left, int right, BorderType bordertype, MCvScalar value = default)
Parameters
src
IInputArrayThe source image
dst
IOutputArrayThe destination image
top
intParameter specifying how many pixels in each direction from the source image rectangle to extrapolate.
bottom
intParameter specifying how many pixels in each direction from the source image rectangle to extrapolate.
left
intParameter specifying how many pixels in each direction from the source image rectangle to extrapolate.
right
intParameter specifying how many pixels in each direction from the source image rectangle to extrapolate.
bordertype
BorderTypeType of the border to create around the copied source image rectangle
value
MCvScalarValue of the border pixels if bordertype=CONSTANT
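A short sketch of padding an image with a constant black border, assuming the Emgu.CV package; the sizes are illustrative:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat src = new Mat(100, 100, DepthType.Cv8U, 3))
using (Mat dst = new Mat())
{
    src.SetTo(new MCvScalar(128, 128, 128));
    // Add a 10-pixel constant (black) border on every side.
    CvInvoke.CopyMakeBorder(src, dst, 10, 10, 10, 10,
        BorderType.Constant, new MCvScalar(0, 0, 0));
    // dst is (100 + 10 + 10) x (100 + 10 + 10) = 120 x 120.
}
```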
CornerHarris(IInputArray, IOutputArray, int, int, double, BorderType)
Runs the Harris edge detector on image. Similarly to cvCornerMinEigenVal and cvCornerEigenValsAndVecs, for each pixel it calculates 2x2 gradient covariation matrix M over block_size x block_size neighborhood. Then, it stores det(M) - k*trace(M)^2 to the destination image. Corners in the image can be found as local maxima of the destination image.
public static void CornerHarris(IInputArray image, IOutputArray harrisResponse, int blockSize, int apertureSize = 3, double k = 0.04, BorderType borderType = BorderType.Default)
Parameters
image
IInputArrayInput image
harrisResponse
IOutputArrayImage to store the Harris detector responses. Should have the same size as image
blockSize
intNeighborhood size
apertureSize
intAperture parameter for the Sobel operator (see cvSobel). In the case of floating-point input format this parameter is the number of the fixed float filter used for differencing.
k
doubleHarris detector free parameter.
borderType
BorderTypePixel extrapolation method.
CornerSubPix(IInputArray, IInputOutputArray, Size, Size, MCvTermCriteria)
Iterates to find the sub-pixel accurate location of corners, or radial saddle points
public static void CornerSubPix(IInputArray image, IInputOutputArray corners, Size win, Size zeroZone, MCvTermCriteria criteria)
Parameters
image
IInputArrayInput image
corners
IInputOutputArrayInitial coordinates of the input corners and refined coordinates on output
win
SizeHalf sizes of the search window. For example, if win=(5,5) then a 5*2+1 x 5*2+1 = 11 x 11 search window is used
zeroZone
SizeHalf size of the dead region in the middle of the search zone over which the summation in formulae below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size
criteria
MCvTermCriteriaCriteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after certain number of iteration or when a required accuracy is achieved. The criteria may specify either of or both the maximum number of iteration and the required accuracy
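A self-contained sketch, assuming the Emgu.CV package: a synthetic image with a single corner at (10,10) and an initial guess of (9,9) that CornerSubPix refines. The win value of (5,5) gives the 11 x 11 search window described above:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Util;

using (Image<Gray, byte> img = new Image<Gray, byte>(20, 20))
{
    // White top-left quadrant creates a corner at (10, 10).
    img.ROI = new Rectangle(0, 0, 10, 10);
    img.SetValue(new Gray(255));
    img.ROI = Rectangle.Empty;

    PointF[] initial = { new PointF(9f, 9f) };  // rough initial estimate
    using (VectorOfPointF corners = new VectorOfPointF(initial))
    {
        CvInvoke.CornerSubPix(img, corners,
            new Size(5, 5),                  // 5*2+1 x 5*2+1 = 11x11 window
            new Size(-1, -1),                // no dead zone
            new MCvTermCriteria(30, 0.01));  // max 30 iterations or eps 0.01
        PointF refined = corners[0];         // refined near the true corner
    }
}
```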
CorrectMatches(IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray)
Refines coordinates of corresponding points.
public static void CorrectMatches(IInputArray f, IInputArray points1, IInputArray points2, IOutputArray newPoints1, IOutputArray newPoints2)
Parameters
f
IInputArray3x3 fundamental matrix.
points1
IInputArray1xN array containing the first set of points.
points2
IInputArray1xN array containing the second set of points.
newPoints1
IOutputArrayThe optimized points1.
newPoints2
IOutputArrayThe optimized points2.
CountNonZero(IInputArray)
Returns the number of non-zero elements in arr: result = sum over I of (arr(I) != 0). In case of IplImage both ROI and COI are supported.
public static int CountNonZero(IInputArray arr)
Parameters
arr
IInputArrayThe image
Returns
- int
the number of non-zero elements in image
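A minimal sketch, assuming the Emgu.CV package:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> img = new Image<Gray, byte>(4, 4))
{
    img[1, 2] = new Gray(255);  // one non-zero pixel
    img[3, 0] = new Gray(7);    // any non-zero value counts
    int count = CvInvoke.CountNonZero(img);
    // count == 2
}
```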
CreateHanningWindow(IOutputArray, Size, DepthType)
This function computes a Hanning window coefficients in two dimensions.
public static void CreateHanningWindow(IOutputArray dst, Size winSize, DepthType type)
Parameters
dst
IOutputArrayDestination array to place Hann coefficients in
winSize
SizeThe window size specifications
type
DepthTypeCreated array type
CvArrToMat(nint, bool, bool, int)
Converts CvMat, IplImage , or CvMatND to Mat.
public static Mat CvArrToMat(nint arr, bool copyData = false, bool allowND = true, int coiMode = 0)
Parameters
arr
nintInput CvMat, IplImage , or CvMatND.
copyData
boolWhen false (default value), no data is copied and only the new header is created, in this case, the original array should not be deallocated while the new matrix header is used; if the parameter is true, all the data is copied and you may deallocate the original array right after the conversion.
allowND
boolWhen true (default value), CvMatND is converted to 2-dimensional Mat, if it is possible (see the discussion below); if it is not possible, or when the parameter is false, the function will report an error
coiMode
intParameter specifying how the IplImage COI (when set) is handled. If coiMode=0 and COI is set, the function reports an error. If coiMode=1 , the function never reports an error. Instead, it returns the header to the whole original image and you will have to check and process COI manually.
Returns
- Mat
The Mat header
CvtColor(IInputArray, IOutputArray, ColorConversion, int)
Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout).
public static void CvtColor(IInputArray src, IOutputArray dst, ColorConversion code, int dstCn = 0)
Parameters
src
IInputArrayThe source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image
dst
IOutputArrayThe destination image of the same data type as the source one. The number of channels may be different
code
ColorConversionColor conversion operation that can be specified using CV_src_color_space2dst_color_space constants
dstCn
intnumber of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code .
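A minimal sketch of a BGR-to-grayscale conversion, assuming the Emgu.CV package. Because OpenCV stores color images in BGR order, Bgr2Gray (not Rgb2Gray) is the right code for images loaded by Imread:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat bgr = new Mat(8, 8, DepthType.Cv8U, 3))
using (Mat gray = new Mat())
{
    bgr.SetTo(new MCvScalar(255, 0, 0));  // pure blue in BGR order
    CvInvoke.CvtColor(bgr, gray, ColorConversion.Bgr2Gray);
    // gray has 1 channel; pure blue maps to roughly 0.114 * 255 = 29
}
```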
CvtColor(IInputArray, IOutputArray, Type, Type)
Converts input image from one color space to another. The function ignores colorModel and channelSeq fields of IplImage header, so the source image color space should be specified correctly (including order of the channels in case of RGB space, e.g. BGR means 24-bit format with B0 G0 R0 B1 G1 R1 ... layout, whereas RGB means 24-bit format with R0 G0 B0 R1 G1 B1 ... layout).
public static void CvtColor(IInputArray src, IOutputArray dest, Type srcColor, Type destColor)
Parameters
src
IInputArrayThe source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image
dest
IOutputArrayThe destination image of the same data type as the source one. The number of channels may be different
srcColor
TypeSource color type.
destColor
TypeDestination color type
CvtColorTwoPlane(IInputArray, IInputArray, IOutputArray, ColorConversion)
Converts an image from one color space to another where the source image is stored in two planes.
public static void CvtColorTwoPlane(IInputArray src1, IInputArray src2, IOutputArray dst, ColorConversion code)
Parameters
src1
IInputArray8-bit image (CV_8U) of the Y plane.
src2
IInputArrayImage containing interleaved U/V plane.
dst
IOutputArrayOutput image.
code
ColorConversionSpecifies the type of conversion. It can take any of the following values: COLOR_YUV2BGR_NV12 COLOR_YUV2RGB_NV12 COLOR_YUV2BGRA_NV12 COLOR_YUV2RGBA_NV12 COLOR_YUV2BGR_NV21 COLOR_YUV2RGB_NV21 COLOR_YUV2BGRA_NV21 COLOR_YUV2RGBA_NV21
Dct(IInputArray, IOutputArray, DctType)
Performs forward or inverse transform of 1D or 2D floating-point array
public static void Dct(IInputArray src, IOutputArray dst, DctType flags)
Parameters
src
IInputArraySource array, real 1D or 2D array
dst
IOutputArrayDestination array of the same size and same type as the source
flags
DctTypeTransformation flags
Decolor(IInputArray, IOutputArray, IOutputArray)
Transforms a color image to a grayscale image. It is a basic tool in digital printing, stylized black-and-white photograph rendering, and in many single channel image processing applications
public static void Decolor(IInputArray src, IOutputArray grayscale, IOutputArray colorBoost)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
grayscale
IOutputArrayOutput 8-bit 1-channel image.
colorBoost
IOutputArrayOutput 8-bit 3-channel image.
DecomposeEssentialMat(IInputArray, IOutputArray, IOutputArray, IOutputArray)
Decompose an essential matrix to possible rotations and translation.
public static void DecomposeEssentialMat(IInputArray e, IOutputArray r1, IOutputArray r2, IOutputArray t)
Parameters
e
IInputArrayThe input essential matrix.
r1
IOutputArrayOne possible rotation matrix.
r2
IOutputArrayAnother possible rotation matrix.
t
IOutputArrayOne possible translation.
Remarks
This function decomposes the essential matrix E using singular value decomposition (SVD). In general, four possible poses exist for the decomposition of E: [R1,t], [R1,−t], [R2,t], [R2,−t]
DecomposeHomographyMat(IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, IOutputArrayOfArrays)
Decompose a homography matrix to rotation(s), translation(s) and plane normal(s).
public static int DecomposeHomographyMat(IInputArray h, IInputArray k, IOutputArrayOfArrays rotations, IOutputArrayOfArrays translations, IOutputArrayOfArrays normals)
Parameters
h
IInputArrayThe input homography matrix between two images.
k
IInputArrayThe input camera intrinsic matrix.
rotations
IOutputArrayOfArraysArray of rotation matrices.
translations
IOutputArrayOfArraysArray of translation matrices.
normals
IOutputArrayOfArraysArray of plane normal matrices.
Returns
- int
Number of solutions
DecomposeProjectionMatrix(IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
public static void DecomposeProjectionMatrix(IInputArray projMatrix, IOutputArray cameraMatrix, IOutputArray rotMatrix, IOutputArray transVect, IOutputArray rotMatrixX = null, IOutputArray rotMatrixY = null, IOutputArray rotMatrixZ = null, IOutputArray eulerAngles = null)
Parameters
projMatrix
IInputArray3x4 input projection matrix P.
cameraMatrix
IOutputArrayOutput 3x3 camera intrinsic matrix A
rotMatrix
IOutputArrayOutput 3x3 external rotation matrix R.
transVect
IOutputArrayOutput 4x1 translation vector T.
rotMatrixX
IOutputArrayOptional 3x3 rotation matrix around x-axis.
rotMatrixY
IOutputArrayOptional 3x3 rotation matrix around y-axis.
rotMatrixZ
IOutputArrayOptional 3x3 rotation matrix around z-axis.
eulerAngles
IOutputArrayOptional three-element vector containing three Euler angles of rotation in degrees.
DefaultLoadUnmanagedModules(string[], string)
Attempts to load opencv modules from the specific location
public static bool DefaultLoadUnmanagedModules(string[] modules, string loadDirectory = null)
Parameters
modules
string[]The names of opencv modules. e.g. "opencv_core.dll" on windows.
loadDirectory
stringThe path to load the opencv modules. If null, will use the default path.
Returns
- bool
True if all the modules has been loaded successfully
Demosaicing(IInputArray, IOutputArray, ColorConversion, int)
Main function for all demosaicing processes
public static void Demosaicing(IInputArray src, IOutputArray dst, ColorConversion code, int dstCn = 0)
Parameters
src
IInputArrayInput image: 8-bit unsigned or 16-bit unsigned
dst
IOutputArrayOutput image of the same size and depth as src
code
ColorConversionColor space conversion code
dstCn
intNumber of channels in the destination image; if the parameter is 0, the number of the channels is derived automatically from src and code.
DenoiseTVL1(Mat[], Mat, double, int)
The primal-dual algorithm is an algorithm for solving special types of variational problems (that is, finding a function that minimizes some functional). Since image denoising, in particular, may be seen as a variational problem, the primal-dual algorithm can be used to perform denoising, and this is exactly what is implemented.
public static void DenoiseTVL1(Mat[] observations, Mat result, double lambda, int niters)
Parameters
observations
Mat[]This array should contain one or more noised versions of the image that is to be restored.
result
MatHere the denoised image will be stored. There is no need to do pre-allocation of storage space, as it will be automatically allocated, if necessary.
lambda
doubleCorresponds to λ in the original formulation. As it is enlarged, smooth (blurred) images are treated more favorably than detailed (but possibly noisier) ones. Roughly speaking, as it becomes smaller, the result will be blurrier but more severe outliers will be removed.
niters
intNumber of iterations that the algorithm will run. More iterations produce better results, but it is hard to quantify this precisely, so just use the default and increase it if the results are poor.
DestroyAllWindows()
Destroys all of the HighGUI windows.
public static void DestroyAllWindows()
DestroyWindow(string)
Destroys the window with a given name
public static void DestroyWindow(string name)
Parameters
name
stringName of the window to be destroyed
DetailEnhance(IInputArray, IOutputArray, float, float)
This filter enhances the details of a particular image.
public static void DetailEnhance(IInputArray src, IOutputArray dst, float sigmaS = 10, float sigmaR = 0.15)
Parameters
src
IInputArrayInput 8-bit 3-channel image
dst
IOutputArrayOutput image with the same size and type as src
sigmaS
floatRange between 0 and 200
sigmaR
floatRange between 0 and 1
Determinant(IInputArray)
Returns determinant of the square matrix mat. The direct method is used for small matrices and Gaussian elimination is used for larger matrices. For symmetric positive-definite matrices it is also possible to run SVD with U=V=NULL and then calculate determinant as a product of the diagonal elements of W
public static double Determinant(IInputArray mat)
Parameters
mat
IInputArrayThe pointer to the matrix
Returns
- double
determinant of the square matrix mat
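A minimal sketch, assuming the Emgu.CV package; for a diagonal matrix the determinant is the product of the diagonal entries:

```csharp
using Emgu.CV;

using (Matrix<double> m = new Matrix<double>(new double[,] {
    { 2, 0 },
    { 0, 3 } }))
{
    double det = CvInvoke.Determinant(m);
    // det == 6.0 (product of the diagonal: 2 * 3)
}
```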
Dft(IInputArray, IOutputArray, DxtType, int)
Performs forward or inverse transform of 1D or 2D floating-point array. In case of real (single-channel) data, the packed format, borrowed from IPL, is used to represent a result of forward Fourier transform or input for inverse Fourier transform
public static void Dft(IInputArray src, IOutputArray dst, DxtType flags = DxtType.Forward, int nonzeroRows = 0)
Parameters
src
IInputArraySource array, real or complex
dst
IOutputArrayDestination array of the same size and same type as the source
flags
DxtTypeTransformation flags
nonzeroRows
intNumber of nonzero rows in the source array (in case of forward 2d transform), or a number of rows of interest in the destination array (in case of inverse 2d transform). If the value is negative, zero, or greater than the total number of rows, it is ignored. The parameter can be used to speed up 2d convolution/correlation when computing them via DFT. See the sample below
Dilate(IInputArray, IOutputArray, IInputArray, Point, int, BorderType, MCvScalar)
Dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken The function supports the in-place mode. Dilation can be applied several (iterations) times. In case of color image each channel is processed independently
public static void Dilate(IInputArray src, IOutputArray dst, IInputArray element, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
element
IInputArrayStructuring element used for dilation. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used
anchor
PointPosition of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
iterations
intNumber of times dilation is applied
borderType
BorderTypePixel extrapolation method
borderValue
MCvScalarBorder value in case of a constant border
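A self-contained sketch, assuming the Emgu.CV package: one bright pixel dilated once with a 3x3 rectangular element grows into a 3x3 block:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Image<Gray, byte> mask = new Image<Gray, byte>(21, 21))
using (Mat kernel = CvInvoke.GetStructuringElement(
    ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1)))
using (Mat dst = new Mat())
{
    mask[10, 10] = new Gray(255);  // single bright pixel
    CvInvoke.Dilate(mask, dst, kernel,
        new Point(-1, -1),         // anchor at element center
        1,                         // one iteration
        BorderType.Constant, CvInvoke.MorphologyDefaultBorderValue);
    int grown = CvInvoke.CountNonZero(dst);
    // grown == 9: the pixel expands to a 3x3 block
}
```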
DistanceTransform(IInputArray, IOutputArray, IOutputArray, DistType, int, DistLabelType)
Calculates distance to closest zero pixel for all non-zero pixels of source image
public static void DistanceTransform(IInputArray src, IOutputArray dst, IOutputArray labels, DistType distanceType, int maskSize, DistLabelType labelType = DistLabelType.CComp)
Parameters
src
IInputArraySource 8-bit single-channel (binary) image.
dst
IOutputArrayOutput image with calculated distances (32-bit floating-point, single-channel).
labels
IOutputArrayThe optional output 2d array of labels of integer type and the same size as src and dst. Can be null if not needed
distanceType
DistTypeType of distance
maskSize
intSize of distance transform mask; can be 3 or 5. In case of CV_DIST_L1 or CV_DIST_C the parameter is forced to 3, because 3x3 mask gives the same result as 5x5 yet it is faster.
labelType
DistLabelTypeType of the label array to build. If labelType==CCOMP then each connected component of zeros in src (as well as all the non-zero pixels closest to the connected component) will be assigned the same label. If labelType==PIXEL then each zero pixel (and all the non-zero pixels closest to it) gets its own label.
Divide(IInputArray, IInputArray, IOutputArray, double, DepthType)
Divides one array by another: dst(I)=scale * src1(I)/src2(I), if src1!=IntPtr.Zero; dst(I)=scale/src2(I), if src1==IntPtr.Zero; All the arrays must have the same type, and the same size (or ROI size)
public static void Divide(IInputArray src1, IInputArray src2, IOutputArray dst, double scale = 1, DepthType dtype = DepthType.Default)
Parameters
src1
IInputArrayThe first source array. If the pointer is IntPtr.Zero, the array is assumed to be all 1s.
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
scale
doubleOptional scale factor
dtype
DepthTypeOptional depth of the output array
DrawChessboardCorners(IInputOutputArray, Size, IInputArray, bool)
Draws the individual chessboard corners detected (as red circles) if the board was not found (pattern_was_found=0), or the colored corners connected with lines when the board was found (pattern_was_found != 0).
public static void DrawChessboardCorners(IInputOutputArray image, Size patternSize, IInputArray corners, bool patternWasFound)
Parameters
image
IInputOutputArrayThe destination image; it must be 8-bit color image
patternSize
SizeThe number of inner corners per chessboard row and column
corners
IInputArrayThe array of corners detected
patternWasFound
boolIndicates whether the complete board was found (!=0) or not (=0). One may just pass the return value of cvFindChessboardCorners here.
DrawContours(IInputOutputArray, IInputArrayOfArrays, int, MCvScalar, int, LineType, IInputArray, int, Point)
Draws contours outlines or filled contours.
public static void DrawContours(IInputOutputArray image, IInputArrayOfArrays contours, int contourIdx, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, IInputArray hierarchy = null, int maxLevel = 2147483647, Point offset = default)
Parameters
image
IInputOutputArrayImage where the contours are to be drawn. Like in any other drawing function, the contours are clipped with the ROI
contours
IInputArrayOfArraysAll the input contours. Each contour is stored as a point vector.
contourIdx
intParameter indicating a contour to draw. If it is negative, all the contours are drawn.
color
MCvScalarColor of the contours
thickness
intThickness of lines the contours are drawn with. If it is negative the contour interiors are drawn
lineType
LineTypeType of the contour segments
hierarchy
IInputArrayOptional information about hierarchy. It is only needed if you want to draw only some of the contours
maxLevel
intMaximal level for drawn contours. If 0, only contour is drawn. If 1, the contour and all contours after it on the same level are drawn. If 2, all contours after and all contours one level below the contours are drawn, etc. If the value is negative, the function does not draw the contours following after contour but draws child contours of contour up to abs(maxLevel)-1 level.
offset
PointShift all the point coordinates by the specified value. It is useful if the contours were retrieved from some image ROI and the ROI offset needs to be taken into account during the rendering.
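A self-contained sketch, assuming the Emgu.CV package: a filled rectangle produces one external contour, which is then drawn with contourIdx = -1 (all contours) and a positive thickness (outline only):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

using (Image<Gray, byte> binary = new Image<Gray, byte>(50, 50))
using (Image<Bgr, byte> canvas = new Image<Bgr, byte>(50, 50))
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    // A filled white rectangle yields a single external contour.
    CvInvoke.Rectangle(binary, new Rectangle(10, 10, 20, 20),
        new MCvScalar(255), -1);
    CvInvoke.FindContours(binary, contours, null,
        RetrType.External, ChainApproxMethod.ChainApproxSimple);

    // contourIdx = -1 draws every contour; thickness 2 outlines them.
    CvInvoke.DrawContours(canvas, contours, -1,
        new MCvScalar(0, 255, 0), 2);
}
```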
DrawMarker(IInputOutputArray, Point, MCvScalar, MarkerTypes, int, int, LineType)
Draws a marker on a predefined position in an image.
public static void DrawMarker(IInputOutputArray img, Point position, MCvScalar color, MarkerTypes markerType, int markerSize = 20, int thickness = 1, LineType lineType = LineType.EightConnected)
Parameters
img
IInputOutputArrayImage.
position
PointThe point where the crosshair is positioned.
color
MCvScalarLine color.
markerType
MarkerTypesThe specific type of marker you want to use
markerSize
intThe length of the marker axis [default = 20 pixels]
thickness
intLine thickness.
lineType
LineTypeType of the line
EMD(IInputArray, IInputArray, DistType, IInputArray, float[], IOutputArray)
Computes the 'minimal work' distance between two weighted point configurations.
public static float EMD(IInputArray signature1, IInputArray signature2, DistType distType, IInputArray cost = null, float[] lowerBound = null, IOutputArray flow = null)
Parameters
signature1
IInputArrayFirst signature, a size1 x dims + 1 floating-point matrix. Each row stores the point weight followed by the point coordinates. The matrix is allowed to have a single column (weights only) if the user-defined cost matrix is used.
signature2
IInputArraySecond signature of the same format as signature1 , though the number of rows may be different. The total weights may be different. In this case an extra 'dummy' point is added to either signature1 or signature2
distType
DistTypeUsed metric. CV_DIST_L1, CV_DIST_L2 , and CV_DIST_C stand for one of the standard metrics. CV_DIST_USER means that a pre-calculated cost matrix cost is used.
cost
IInputArrayUser-defined size1 x size2 cost matrix. Also, if a cost matrix is used, lower boundary lowerBound cannot be calculated because it needs a metric function.
lowerBound
float[]Optional input/output parameter: lower boundary of a distance between the two signatures that is a distance between mass centers. The lower boundary may not be calculated if the user-defined cost matrix is used, the total weights of point configurations are not equal, or if the signatures consist of weights only (the signature matrices have a single column).
flow
IOutputArrayResultant size1 x size2 flow matrix
Returns
- float
The 'minimal work' distance between two weighted point configurations.
EdgePreservingFilter(IInputArray, IOutputArray, EdgePreservingFilterFlag, float, float)
Filtering is the fundamental operation in image and video processing. Edge-preserving smoothing filters are used in many different applications.
public static void EdgePreservingFilter(IInputArray src, IOutputArray dst, EdgePreservingFilterFlag flags = EdgePreservingFilterFlag.RecursFilter, float sigmaS = 60, float sigmaR = 0.4)
Parameters
src
IInputArrayInput 8-bit 3-channel image
dst
IOutputArrayOutput 8-bit 3-channel image
flags
EdgePreservingFilterFlagEdge preserving filters
sigmaS
floatRange between 0 and 200
sigmaR
floatRange between 0 and 1
Eigen(IInputArray, IOutputArray, IOutputArray)
Computes eigenvalues and eigenvectors of a symmetric matrix
public static void Eigen(IInputArray src, IOutputArray eigenValues, IOutputArray eigenVectors = null)
Parameters
src
IInputArrayThe input symmetric square matrix, modified during the processing
eigenValues
IOutputArrayThe output vector of eigenvalues, stored in the descending order (order of eigenvalues and eigenvectors is synchronized, of course)
eigenVectors
IOutputArrayThe output matrix of eigenvectors, stored as subsequent rows
Examples
To calculate the largest eigenvector/-value set lowindex = highindex = 1. For legacy reasons this function always returns a square matrix the same size as the source matrix with eigenvectors and a vector the length of the source matrix with eigenvalues. The selected eigenvectors/-values are always in the first highindex - lowindex + 1 rows.
Remarks
Currently the function is slower than cvSVD yet less accurate, so if A is known to be positive-definite (for example, it is a covariance matrix) it is recommended to use cvSVD to find eigenvalues and eigenvectors of A, especially if eigenvectors are not required.
Ellipse(IInputOutputArray, RotatedRect, MCvScalar, int, LineType)
Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees.
public static void Ellipse(IInputOutputArray img, RotatedRect box, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected)
Parameters
img
IInputOutputArrayImage
box
RotatedRectAlternative ellipse representation via RotatedRect. This means that the function draws an ellipse inscribed in the rotated rectangle.
color
MCvScalarEllipse color
thickness
intThickness of the ellipse arc outline, if positive. Otherwise, this indicates that a filled ellipse sector is to be drawn.
lineType
LineTypeType of the ellipse boundary
Ellipse(IInputOutputArray, Point, Size, double, double, double, MCvScalar, int, LineType, int)
Draws a simple or thick elliptic arc or fills an ellipse sector. The arc is clipped by ROI rectangle. A piecewise-linear approximation is used for antialiased arcs and thick arcs. All the angles are given in degrees.
public static void Ellipse(IInputOutputArray img, Point center, Size axes, double angle, double startAngle, double endAngle, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage
center
PointCenter of the ellipse
axes
SizeLength of the ellipse axes
angle
doubleRotation angle
startAngle
doubleStarting angle of the elliptic arc
endAngle
doubleEnding angle of the elliptic arc
color
MCvScalarEllipse color
thickness
intThickness of the ellipse arc
lineType
LineTypeType of the ellipse boundary
shift
intNumber of fractional bits in the center coordinates and axes' values
EqualizeHist(IInputArray, IOutputArray)
The algorithm normalizes brightness and increases contrast of the image
public static void EqualizeHist(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayThe input 8-bit single-channel image
dst
IOutputArrayThe output image of the same size and the same data type as src
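A minimal sketch, assuming the Emgu.CV package: a low-contrast ramp confined to values 100..131 is stretched toward the full intensity range:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> gray = new Image<Gray, byte>(32, 32))
using (Mat equalized = new Mat())
{
    // Low-contrast ramp: column x gets intensity 100 + x.
    for (int x = 0; x < 32; x++)
        for (int y = 0; y < 32; y++)
            gray[y, x] = new Gray(100 + x);

    CvInvoke.EqualizeHist(gray, equalized);
    // Output keeps the input size and depth, with intensities
    // spread across (nearly) the full 0..255 range.
}
```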
Erode(IInputArray, IOutputArray, IInputArray, Point, int, BorderType, MCvScalar)
Erodes the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the minimum is taken: dst = erode(src, element): dst(x,y) = min over (x',y') in element of src(x+x', y+y'). The function supports the in-place mode. Erosion can be applied several (iterations) times. In case of color image each channel is processed independently.
public static void Erode(IInputArray src, IOutputArray dst, IInputArray element, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image
element
IInputArrayStructuring element used for erosion. If it is IntPtr.Zero, a 3x3 rectangular structuring element is used.
anchor
PointPosition of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
iterations
intNumber of times erosion is applied.
borderType
BorderTypePixel extrapolation method
borderValue
MCvScalarBorder value in case of a constant border, use Constant for default
ErrorStr(int)
Returns the textual description for the specified error status code. In case of unknown status the function returns a NULL pointer.
public static string ErrorStr(int status)
Parameters
status
intThe error status
Returns
- string
the textual description for the specified error status code.
EstimateAffine2D(IInputArray, IInputArray, IOutputArray, RobustEstimationAlgorithm, double, int, double, int)
Computes an optimal affine transformation between two 2D point sets.
public static Mat EstimateAffine2D(IInputArray from, IInputArray to, IOutputArray inliners = null, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.Ransac, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10)
Parameters
from
IInputArrayFirst input 2D point set containing (X,Y).
to
IInputArraySecond input 2D point set containing (x,y).
inliners
IOutputArrayOutput vector indicating which points are inliers (1-inlier, 0-outlier).
method
RobustEstimationAlgorithmRobust method used to compute transformation.
ransacReprojThreshold
doubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
intThe maximum number of robust method iterations.
confidence
doubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineIters
intMaximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be output of robust method.
Returns
- Mat
Output 2D affine transformation matrix 2×3 or empty matrix if transformation could not be estimated.
EstimateAffine2D(PointF[], PointF[], IOutputArray, RobustEstimationAlgorithm, double, int, double, int)
Computes an optimal affine transformation between two 2D point sets.
public static Mat EstimateAffine2D(PointF[] from, PointF[] to, IOutputArray inliners = null, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.Ransac, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10)
Parameters
from
PointF[]First input 2D point set containing (X,Y).
to
PointF[]Second input 2D point set containing (x,y).
inliners
IOutputArrayOutput vector indicating which points are inliers (1-inlier, 0-outlier).
method
RobustEstimationAlgorithmRobust method used to compute transformation.
ransacReprojThreshold
doubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
intThe maximum number of robust method iterations.
confidence
doubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineIters
intMaximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be output of robust method.
Returns
- Mat
Output 2D affine transformation matrix 2×3 or empty matrix if transformation could not be estimated.
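A minimal sketch using the PointF[] overload, assuming the Emgu.CV package: four points translated by (10, 20) recover a near-identity rotation with that translation in the last column:

```csharp
using System.Drawing;
using Emgu.CV;

PointF[] from = { new PointF(0, 0), new PointF(1, 0),
                  new PointF(0, 1), new PointF(1, 1) };
// The same points translated by (10, 20).
PointF[] to = { new PointF(10, 20), new PointF(11, 20),
                new PointF(10, 21), new PointF(11, 21) };

using (Mat affine = CvInvoke.EstimateAffine2D(from, to))
{
    // affine is a 2x3 matrix; for this data it is approximately
    // [ 1 0 10 ]
    // [ 0 1 20 ]
}
```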
EstimateAffine3D(IInputArray, IInputArray, IOutputArray, IOutputArray, double, double)
Computes an optimal affine transformation between two 3D point sets.
public static int EstimateAffine3D(IInputArray src, IInputArray dst, IOutputArray affineEstimate, IOutputArray inliers, double ransacThreshold = 3, double confidence = 0.99)
Parameters
src
IInputArrayFirst input 3D point set.
dst
IInputArraySecond input 3D point set.
affineEstimate
IOutputArrayOutput 3D affine transformation matrix 3 x 4
inliers
IOutputArrayOutput vector indicating which points are inliers.
ransacThreshold
doubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier.
confidence
doubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
Returns
- int
the result
EstimateAffine3D(MCvPoint3D32f[], MCvPoint3D32f[], out Matrix<double>, out byte[], double, double)
Computes an optimal affine transformation between two 3D point sets.
public static int EstimateAffine3D(MCvPoint3D32f[] src, MCvPoint3D32f[] dst, out Matrix<double> estimate, out byte[] inliers, double ransacThreshold, double confidence)
Parameters
src
MCvPoint3D32f[]First input 3D point set.
dst
MCvPoint3D32f[]Second input 3D point set.
estimate
Matrix<double>Output 3D affine transformation matrix.
inliers
byte[]Output vector indicating which points are inliers.
ransacThreshold
doubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier.
confidence
doubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
Returns
- int
The result
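A minimal sketch of the MCvPoint3D32f[] overload above (untested; the correspondences are a fabricated pure translation by (1, 2, 3) for illustration):

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Hypothetical 3D correspondences: dst = src + (1, 2, 3)
MCvPoint3D32f[] src = {
    new MCvPoint3D32f(0, 0, 0), new MCvPoint3D32f(1, 0, 0),
    new MCvPoint3D32f(0, 1, 0), new MCvPoint3D32f(0, 0, 1) };
MCvPoint3D32f[] dst = {
    new MCvPoint3D32f(1, 2, 3), new MCvPoint3D32f(2, 2, 3),
    new MCvPoint3D32f(1, 3, 3), new MCvPoint3D32f(1, 2, 4) };

int result = CvInvoke.EstimateAffine3D(
    src, dst, out Matrix<double> estimate, out byte[] inliers,
    3,      // ransacThreshold
    0.99);  // confidence

// estimate holds the 3x4 affine matrix; inliers flags each correspondence (1 = inlier)
```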
EstimateAffinePartial2D(IInputArray, IInputArray, IOutputArray, RobustEstimationAlgorithm, double, int, double, int)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
public static Mat EstimateAffinePartial2D(IInputArray from, IInputArray to, IOutputArray inliners, RobustEstimationAlgorithm method, double ransacReprojThreshold, int maxIters, double confidence, int refineIters)
Parameters
from
IInputArrayFirst input 2D point set.
to
IInputArraySecond input 2D point set.
inliners
IOutputArrayOutput vector indicating which points are inliers.
method
RobustEstimationAlgorithmRobust method used to compute transformation.
ransacReprojThreshold
doubleMaximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
intThe maximum number of robust method iterations.
confidence
doubleConfidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineIters
intMaximum number of iterations of refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be output of robust method.
Returns
- Mat
Output 2D affine transformation (4 degrees of freedom) matrix 2×3 or empty matrix if transformation could not be estimated.
EstimateChessboardSharpness(IInputArray, Size, IInputArray, float, bool, IOutputArray)
Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, is a critical parameter for accurate camera calibration. To access these parameters for filtering out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. Based on this, the number of pixels required to transition from black to white is calculated. This width of the transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels.
public static MCvScalar EstimateChessboardSharpness(IInputArray image, Size patternSize, IInputArray corners, float riseDistance = 0.8, bool vertical = false, IOutputArray sharpness = null)
Parameters
image
IInputArrayGray image used to find chessboard corners
patternSize
SizeSize of a found chessboard pattern
corners
IInputArrayCorners found by findChessboardCorners(SB)
riseDistance
floatRise distance 0.8 means 10% ... 90% of the final signal strength
vertical
boolBy default edge responses for horizontal lines are calculated
sharpness
IOutputArrayOptional output array with a sharpness value for calculated edge responses
Returns
- MCvScalar
Scalar(average sharpness, average min brightness, average max brightness,0)
Exp(IInputArray, IOutputArray)
Calculates exponent of every element of input array: dst(I)=exp(src(I)) Maximum relative error is 7e-6. Currently, the function converts denormalized values to zeros on output
public static void Exp(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayThe source array
dst
IOutputArrayThe destination array, it should have double type or the same type as the source
ExtractChannel(IInputArray, IOutputArray, int)
Extract the specific channel from the image
public static void ExtractChannel(IInputArray src, IOutputArray dst, int coi)
Parameters
src
IInputArrayThe source image
dst
IOutputArrayThe channel
coi
int0 based index of the channel to be extracted
FastNlMeansDenoising(IInputArray, IOutputArray, float, int, int)
Perform image denoising using the Non-local Means Denoising algorithm: http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. The noise is expected to be Gaussian white noise.
public static void FastNlMeansDenoising(IInputArray src, IOutputArray dst, float h = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
src
IInputArrayInput 8-bit 1-channel, 2-channel or 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src.
h
floatParameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
templateWindowSize
intSize in pixels of the template patch that is used to compute weights. Should be odd.
searchWindowSize
intSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize means greater denoising time.
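A minimal usage sketch of FastNlMeansDenoising with the documented default parameters (untested; the image file name is hypothetical):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

// Denoise a grayscale image with the documented defaults: h = 3,
// templateWindowSize = 7, searchWindowSize = 21
using (Mat noisy = CvInvoke.Imread("noisy.png", ImreadModes.Grayscale))
using (Mat denoised = new Mat())
{
    CvInvoke.FastNlMeansDenoising(noisy, denoised, 3f, 7, 21);
}
```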
FastNlMeansDenoisingColored(IInputArray, IOutputArray, float, float, int, int)
Perform image denoising using the Non-local Means Denoising algorithm (modified for color images): http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ with several computational optimizations. The noise is expected to be Gaussian white noise. The function converts the image to CIELAB colorspace and then separately denoises the L and AB components with the given h parameters using the fastNlMeansDenoising function.
public static void FastNlMeansDenoisingColored(IInputArray src, IOutputArray dst, float h = 3, float hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21)
Parameters
src
IInputArrayInput 8-bit 1-channel, 2-channel or 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src.
h
floatParameter regulating filter strength. A big h value perfectly removes noise but also removes image details; a smaller h value preserves details but also preserves some noise.
hColor
floatThe same as h but for color components. For most images a value of 10 will be enough to remove colored noise without distorting colors.
templateWindowSize
intSize in pixels of the template patch that is used to compute weights. Should be odd.
searchWindowSize
intSize in pixels of the window that is used to compute the weighted average for a given pixel. Should be odd. Affects performance linearly: greater searchWindowSize means greater denoising time.
FillConvexPoly(IInputOutputArray, IInputArray, MCvScalar, LineType, int)
Fills a convex polygon's interior. This function is much faster than cvFillPoly and can fill not only convex polygons but any monotonic polygon, i.e. a polygon whose contour intersects every horizontal line (scan line) at most twice.
public static void FillConvexPoly(IInputOutputArray img, IInputArray points, MCvScalar color, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage
points
IInputArrayArray of pointers to a single polygon
color
MCvScalarPolygon color
lineType
LineTypeType of the polygon boundaries
shift
intNumber of fractional bits in the vertex coordinates
FillPoly(IInputOutputArray, IInputArray, MCvScalar, LineType, int, Point)
Fills the area bounded by one or more polygons.
public static void FillPoly(IInputOutputArray img, IInputArray points, MCvScalar color, LineType lineType = LineType.EightConnected, int shift = 0, Point offset = default)
Parameters
img
IInputOutputArrayImage.
points
IInputArrayArray of polygons where each polygon is represented as an array of points.
color
MCvScalarPolygon color
lineType
LineTypeType of the polygon boundaries.
shift
intNumber of fractional bits in the vertex coordinates.
offset
PointOptional offset of all points of the contours.
Filter2D(IInputArray, IOutputArray, IInputArray, Point, double, BorderType)
Applies an arbitrary linear filter to the image. In-place operation is supported. When the aperture is partially outside the image, the function interpolates outlier pixel values from the nearest pixels that are inside the image.
public static void Filter2D(IInputArray src, IOutputArray dst, IInputArray kernel, Point anchor, double delta = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayThe source image
dst
IOutputArrayThe destination image
kernel
IInputArrayConvolution kernel, single-channel floating point matrix. If you want to apply different kernels to different channels, split the image using cvSplit into separate color planes and process them individually
anchor
PointThe anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center.
delta
doubleThe optional value added to the filtered pixels before storing them in dst
borderType
BorderTypeThe pixel extrapolation method.
FilterSpeckles(IInputOutputArray, double, int, double, IInputOutputArray)
Filters off small noise blobs (speckles) in the disparity map.
public static void FilterSpeckles(IInputOutputArray img, double newVal, int maxSpeckleSize, double maxDiff, IInputOutputArray buf = null)
Parameters
img
IInputOutputArrayThe input 16-bit signed disparity image
newVal
doubleThe disparity value used to paint-off the speckles
maxSpeckleSize
intThe maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm
maxDiff
doubleMaximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM and possibly other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.
buf
IInputOutputArrayThe optional temporary buffer to avoid memory allocation within the function.
Find4QuadCornerSubpix(IInputArray, IInputOutputArray, Size)
Finds subpixel-accurate positions of the chessboard corners
public static bool Find4QuadCornerSubpix(IInputArray image, IInputOutputArray corners, Size regionSize)
Parameters
image
IInputArraySource chessboard view; it must be 8-bit grayscale or color image
corners
IInputOutputArrayPointer to the output array of corners(PointF) detected
regionSize
Sizeregion size
Returns
- bool
True if successful
FindChessboardCorners(IInputArray, Size, IOutputArray, CalibCbType)
Attempts to determine whether the input image is a view of the chessboard pattern and locate internal chessboard corners
public static bool FindChessboardCorners(IInputArray image, Size patternSize, IOutputArray corners, CalibCbType flags = CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage)
Parameters
image
IInputArraySource chessboard view; it must be 8-bit grayscale or color image
patternSize
SizeThe number of inner corners per chessboard row and column
corners
IOutputArrayPointer to the output array of corners(PointF) detected
flags
CalibCbTypeVarious operation flags
Returns
- bool
True if all the corners have been found and placed in a certain order (row by row, left to right in every row); otherwise, if the function fails to find all the corners or to reorder them, it returns false
Remarks
The coordinates detected are approximate, and to determine their position more accurately, the user may use the function cvFindCornerSubPix
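A minimal sketch of the detect-then-refine workflow that the remark above describes, using FindChessboardCorners followed by CornerSubPix (untested; the file name, pattern size, and refinement window are hypothetical):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Size patternSize = new Size(9, 6); // inner corners per row and column (assumption)
using (Mat gray = CvInvoke.Imread("chessboard.png", ImreadModes.Grayscale))
using (VectorOfPointF corners = new VectorOfPointF())
{
    bool found = CvInvoke.FindChessboardCorners(
        gray, patternSize, corners,
        CalibCbType.AdaptiveThresh | CalibCbType.NormalizeImage);

    if (found)
    {
        // Refine the approximate corners to sub-pixel accuracy
        CvInvoke.CornerSubPix(
            gray, corners, new Size(11, 11), new Size(-1, -1),
            new MCvTermCriteria(30, 0.01));
    }
}
```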
FindChessboardCornersSB(IInputArray, Size, IOutputArray, CalibCbType)
Finds the positions of internal corners of the chessboard using a sector based approach.
public static bool FindChessboardCornersSB(IInputArray image, Size patternSize, IOutputArray corners, CalibCbType flags = CalibCbType.Default)
Parameters
image
IInputArraySource chessboard view. It must be an 8-bit grayscale or color image.
patternSize
SizeNumber of inner corners per a chessboard row and column ( patternSize = cv::Size(points_per_row,points_per_colum) = cv::Size(columns,rows) ).
corners
IOutputArrayOutput array of detected corners.
flags
CalibCbTypeVarious operation flags
Returns
- bool
True if chessboard corners found
FindCirclesGrid(IInputArray, Size, IOutputArray, CalibCgType, Feature2D)
Finds centers in the grid of circles
public static bool FindCirclesGrid(IInputArray image, Size patternSize, IOutputArray centers, CalibCgType flags, Feature2D featureDetector)
Parameters
image
IInputArraySource chessboard view
patternSize
SizeThe number of inner circles per chessboard row and column
centers
IOutputArrayOutput array of detected centers.
flags
CalibCgTypeVarious operation flags
featureDetector
Feature2DThe feature detector. Use a SimpleBlobDetector for default
Returns
- bool
True if grid found.
FindCirclesGrid(Image<Gray, byte>, Size, CalibCgType, Feature2D)
Finds centers in the grid of circles
public static PointF[] FindCirclesGrid(Image<Gray, byte> image, Size patternSize, CalibCgType flags, Feature2D featureDetector)
Parameters
image
Image<Gray, byte>Source chessboard view
patternSize
SizeThe number of inner circles per chessboard row and column
flags
CalibCgTypeVarious operation flags
featureDetector
Feature2DThe feature detector. Use a SimpleBlobDetector for default
Returns
- PointF[]
The center of circles detected if the chess board pattern is found, otherwise null is returned
FindContourTree(IInputOutputArray, IOutputArray, ChainApproxMethod, Point)
Retrieves contours from the binary image as a contour tree. It is provided as a convenient way to obtain the hierarchy value as int[,]. The function modifies the source image content
public static int[,] FindContourTree(IInputOutputArray image, IOutputArray contours, ChainApproxMethod method, Point offset = default)
Parameters
image
IInputOutputArrayThe source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is image treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content
contours
IOutputArrayDetected contours. Each contour is stored as a vector of points.
method
ChainApproxMethodApproximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation).
offset
PointOffset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context
Returns
- int[,]
The contour hierarchy
FindContours(IInputOutputArray, IOutputArray, IOutputArray, RetrType, ChainApproxMethod, Point)
Retrieves contours from the binary image. The sample in the cvDrawContours discussion shows how to use contours for connected component detection. Contours can also be used for shape analysis and object recognition - see squares.c in the OpenCV sample directory. The function modifies the source image content
public static void FindContours(IInputOutputArray image, IOutputArray contours, IOutputArray hierarchy, RetrType mode, ChainApproxMethod method, Point offset = default)
Parameters
image
IInputOutputArrayThe source 8-bit single channel image. Non-zero pixels are treated as 1s, zero pixels remain 0s - that is image treated as binary. To get such a binary image from grayscale, one may use cvThreshold, cvAdaptiveThreshold or cvCanny. The function modifies the source image content
contours
IOutputArrayDetected contours. Each contour is stored as a vector of points.
hierarchy
IOutputArrayOptional output vector, containing information about the image topology.
mode
RetrTypeRetrieval mode
method
ChainApproxMethodApproximation method (for all the modes, except CV_RETR_RUNS, which uses built-in approximation).
offset
PointOffset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context
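A minimal usage sketch of FindContours (untested; the input file name is hypothetical and is assumed to already be a binary image):

```csharp
using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

using (Mat binary = CvInvoke.Imread("binary.png", ImreadModes.Grayscale))
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
using (Mat hierarchy = new Mat())
{
    // FindContours modifies its input, so clone first if the original is still needed
    CvInvoke.FindContours(binary, contours, hierarchy,
        RetrType.List, ChainApproxMethod.ChainApproxSimple);
    Console.WriteLine($"Found {contours.Size} contours");
}
```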
FindEssentialMat(IInputArray, IInputArray, IInputArray, FmType, double, double, IOutputArray)
Calculates an essential matrix from the corresponding points in two images.
public static Mat FindEssentialMat(IInputArray points1, IInputArray points2, IInputArray cameraMatrix, FmType method = FmType.Ransac, double prob = 0.999, double threshold = 1, IOutputArray mask = null)
Parameters
points1
IInputArrayArray of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
IInputArrayArray of the second image points of the same size and format as points1
cameraMatrix
IInputArrayCamera matrix K=[[fx 0 cx][0 fy cy][0 0 1]]. Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix.
method
FmTypeMethod for computing a fundamental matrix. RANSAC for the RANSAC algorithm. LMEDS for the LMedS algorithm
prob
doubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
threshold
doubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
mask
IOutputArrayOutput array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.
Returns
- Mat
The essential mat
FindFundamentalMat(IInputArray, IInputArray, FmType, double, double, IOutputArray)
Calculates the fundamental matrix using one of four methods and returns the found fundamental matrix, or an empty matrix if no matrix is found.
public static Mat FindFundamentalMat(IInputArray points1, IInputArray points2, FmType method = FmType.Ransac, double param1 = 3, double param2 = 0.99, IOutputArray mask = null)
Parameters
points1
IInputArrayArray of N points from the first image. The point coordinates should be floating-point (single or double precision).
points2
IInputArrayArray of the second image points of the same size and format as points1
method
FmTypeMethod for computing the fundamental matrix
param1
doubleParameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
param2
doubleParameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
mask
IOutputArrayThe optional pointer to output array of N elements, every element of which is set to 0 for outliers and to 1 for the "inliers", i.e. points that comply well with the estimated epipolar geometry. The array is computed only in RANSAC and LMedS methods. For other methods it is set to all 1.
Returns
- Mat
The calculated fundamental matrix
FindHomography(IInputArray, IInputArray, RobustEstimationAlgorithm, double, IOutputArray)
Finds perspective transformation H=||hij|| between the source and the destination planes
public static Mat FindHomography(IInputArray srcPoints, IInputArray dstPoints, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.AllPoints, double ransacReprojThreshold = 3, IOutputArray mask = null)
Parameters
srcPoints
IInputArrayPoint coordinates in the original plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates), where N is the number of points.
dstPoints
IInputArrayPoint coordinates in the destination plane, 2xN, Nx2, 3xN or Nx3 array (the latter two are for representation in homogeneous coordinates)
method
RobustEstimationAlgorithmThe type of the method
ransacReprojThreshold
doubleThe maximum allowed re-projection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3
mask
IOutputArrayThe optional output mask set by a robust method (RANSAC or LMEDS).
Returns
- Mat
Output 3x3 homography matrix. Homography matrix is determined up to a scale, thus it is normalized to make h33=1
FindHomography(PointF[], PointF[], RobustEstimationAlgorithm, double, IOutputArray)
Finds perspective transformation H=||h_ij|| between the source and the destination planes
public static Mat FindHomography(PointF[] srcPoints, PointF[] dstPoints, RobustEstimationAlgorithm method = RobustEstimationAlgorithm.AllPoints, double ransacReprojThreshold = 3, IOutputArray mask = null)
Parameters
srcPoints
PointF[]Point coordinates in the original plane
dstPoints
PointF[]Point coordinates in the destination plane
method
RobustEstimationAlgorithmFindHomography method
ransacReprojThreshold
doubleThe maximum allowed reprojection error to treat a point pair as an inlier. The parameter is only used in RANSAC-based homography estimation. E.g. if dst_points coordinates are measured in pixels with pixel-accurate precision, it makes sense to set this parameter somewhere in the range ~1..3
mask
IOutputArrayOptional output mask set by a robust method ( CV_RANSAC or CV_LMEDS ). Note that the input mask values are ignored.
Returns
- Mat
The 3x3 homography matrix if found. Null if not found.
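A minimal sketch of the PointF[] overload above (untested; the point pairs are a fabricated scale-and-translate mapping standing in for real feature matches):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// Hypothetical matched point pairs, e.g. from a feature matcher
PointF[] srcPts = { new PointF(0, 0), new PointF(1, 0), new PointF(0, 1), new PointF(1, 1) };
PointF[] dstPts = { new PointF(2, 2), new PointF(4, 2), new PointF(2, 4), new PointF(4, 4) };

using (Mat homography = CvInvoke.FindHomography(
    srcPts, dstPts, RobustEstimationAlgorithm.Ransac, 3))
{
    // homography is the 3x3 matrix normalized so that h33 = 1,
    // or null if the transformation could not be found
}
```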
FindNonZero(IInputArray, IOutputArray)
Find the location of the non-zero pixel
public static void FindNonZero(IInputArray src, IOutputArray idx)
Parameters
src
IInputArrayThe source array
idx
IOutputArrayThe output array where the locations of the non-zero pixels are stored
FindTransformECC(IInputArray, IInputArray, IInputOutputArray, MotionType, MCvTermCriteria, IInputArray)
Finds the geometric transform (warp) between two images in terms of the ECC criterion
public static double FindTransformECC(IInputArray templateImage, IInputArray inputImage, IInputOutputArray warpMatrix, MotionType motionType, MCvTermCriteria criteria, IInputArray inputMask = null)
Parameters
templateImage
IInputArraysingle-channel template image; CV_8U or CV_32F array.
inputImage
IInputArraysingle-channel input image which should be warped with the final warpMatrix in order to provide an image similar to templateImage, same type as templateImage.
warpMatrix
IInputOutputArrayfloating-point 2×3 or 3×3 mapping matrix (warp).
motionType
MotionTypeSpecifying the type of motion. Use Affine for default
criteria
MCvTermCriteriaSpecifies the termination criteria of the ECC algorithm; criteria.epsilon defines the threshold of the increment in the correlation coefficient between two iterations (a negative criteria.epsilon makes criteria.maxcount the only termination criterion). Typical default values are 50 iterations and 0.001 eps.
inputMask
IInputArrayAn optional mask to indicate valid values of inputImage.
Returns
- double
The final enhanced correlation coefficient, that is the correlation coefficient between the template image and the final warped input image.
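A minimal sketch of FindTransformECC for affine alignment (untested; the file names are hypothetical and the identity warp is just a common initial guess):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat template = CvInvoke.Imread("template.png", ImreadModes.Grayscale))
using (Mat input = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat warp = Mat.Eye(2, 3, DepthType.Cv32F, 1)) // 2x3 identity as initial guess
{
    double cc = CvInvoke.FindTransformECC(
        template, input, warp,
        MotionType.Affine,
        new MCvTermCriteria(50, 0.001)); // the documented default-style criteria

    // warp now holds the estimated 2x3 affine mapping; cc is the final
    // correlation coefficient between template and warped input
}
```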
FitEllipse(IInputArray)
Fits an ellipse around a set of 2D points.
public static RotatedRect FitEllipse(IInputArray points)
Parameters
points
IInputArrayInput 2D point set
Returns
- RotatedRect
The ellipse that fits best (in least-squares sense) to a set of 2D points
FitEllipseAMS(IInputArray)
The function calculates the ellipse that fits a set of 2D points. The Approximate Mean Square (AMS) is used.
public static RotatedRect FitEllipseAMS(IInputArray points)
Parameters
points
IInputArrayInput 2D point set
Returns
- RotatedRect
The rotated rectangle in which the ellipse is inscribed
FitEllipseDirect(IInputArray)
The function calculates the ellipse that fits a set of 2D points. The Direct least square (Direct) method by [58] is used.
public static RotatedRect FitEllipseDirect(IInputArray points)
Parameters
points
IInputArrayInput 2D point set
Returns
- RotatedRect
The rotated rectangle in which the ellipse is inscribed
FitLine(IInputArray, IOutputArray, DistType, double, double, double)
Fits line to 2D or 3D point set
public static void FitLine(IInputArray points, IOutputArray line, DistType distType, double param, double reps, double aeps)
Parameters
points
IInputArrayInput vector of 2D or 3D points, stored in std::vector or Mat.
line
IOutputArrayOutput line parameters. In case of 2D fitting, it should be a vector of 4 elements (like Vec4f) - (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is a point on the line. In case of 3D fitting, it should be a vector of 6 elements (like Vec6f) - (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is a point on the line.
distType
DistTypeThe distance used for fitting
param
doubleNumerical parameter (C) for some types of distances, if 0 then some optimal value is chosen
reps
doubleSufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default
aeps
doubleSufficient accuracy for angle, 0.01 would be a good default
FitLine(PointF[], out PointF, out PointF, DistType, double, double, double)
Fits line to 2D or 3D point set
public static void FitLine(PointF[] points, out PointF direction, out PointF pointOnLine, DistType distType, double param, double reps, double aeps)
Parameters
points
PointF[]Input vector of 2D points.
direction
PointFA normalized vector collinear to the line
pointOnLine
PointFA point on the line.
distType
DistTypeThe distance used for fitting
param
doubleNumerical parameter (C) for some types of distances, if 0 then some optimal value is chosen
reps
doubleSufficient accuracy for radius (distance between the coordinate origin and the line), 0.01 would be a good default
aeps
doubleSufficient accuracy for angle, 0.01 would be a good default
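A minimal sketch of the PointF[] overload above, fitting a line through noisy points (untested; the point values are fabricated):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// Hypothetical points roughly on the line y = x
PointF[] pts = {
    new PointF(0, 0), new PointF(1, 1.1f),
    new PointF(2, 1.9f), new PointF(3, 3.05f) };

CvInvoke.FitLine(pts, out PointF direction, out PointF pointOnLine,
    DistType.L2,
    0,      // param = 0 lets an optimal value be chosen
    0.01,   // reps, per the suggested default above
    0.01);  // aeps, per the suggested default above

// direction is a normalized vector collinear to the fitted line;
// pointOnLine is a point on it
```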
Flip(IInputArray, IOutputArray, FlipType)
Flips the array in one of 3 different ways (row and column indices are 0-based)
public static void Flip(IInputArray src, IOutputArray dst, FlipType flipType)
Parameters
src
IInputArraySource array.
dst
IOutputArrayDestination array.
flipType
FlipTypeSpecifies how to flip the array.
FlipND(IInputArray, IOutputArray, int)
Flips an n-dimensional array along the given axis
public static void FlipND(IInputArray src, IOutputArray dst, int axis)
Parameters
src
IInputArrayInput array
dst
IOutputArrayOutput array that has the same shape of src
axis
intAxis to perform the flip on. 0 &lt;= axis &lt; src.dims.
FloodFill(IInputOutputArray, IInputOutputArray, Point, MCvScalar, out Rectangle, MCvScalar, MCvScalar, Connectivity, FloodFillType)
Fills a connected component with given color.
public static int FloodFill(IInputOutputArray src, IInputOutputArray mask, Point seedPoint, MCvScalar newVal, out Rectangle rect, MCvScalar loDiff, MCvScalar upDiff, Connectivity connectivity = Connectivity.FourConnected, FloodFillType flags = FloodFillType.Default)
Parameters
src
IInputOutputArrayInput 1- or 3-channel, 8-bit or floating-point image. It is modified by the function unless CV_FLOODFILL_MASK_ONLY flag is set.
mask
IInputOutputArrayOperation mask, should be a single-channel 8-bit image, 2 pixels wider and 2 pixels taller than image. If not IntPtr.Zero, the function uses and updates the mask, so the user takes responsibility for initializing the mask content. Flood-filling can't go across non-zero pixels in the mask; for example, an edge detector output can be used as a mask to stop filling at edges. It is also possible to use the same mask in multiple calls to the function to make sure the filled areas do not overlap. Note: because the mask is larger than the filled image, a pixel in the mask that corresponds to the (x,y) pixel in the image will have coordinates (x+1,y+1).
seedPoint
PointThe starting point.
newVal
MCvScalarNew value of repainted domain pixels.
rect
RectangleOutput parameter set by the function to the minimum bounding rectangle of the repainted domain.
loDiff
MCvScalarMaximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value.
upDiff
MCvScalarMaximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or the seed pixel, for the pixel to be added to the component. In case of 8-bit color images it is a packed value.
connectivity
ConnectivityFlood fill connectivity
flags
FloodFillTypeThe operation flags.
Returns
- int
The area of the connected component
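A minimal sketch of FloodFill (untested; the file name, seed point, fill color, and tolerances are hypothetical). Note the mask sizing rule described above:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat img = CvInvoke.Imread("scene.png", ImreadModes.Color))
using (Mat mask = new Mat(img.Rows + 2, img.Cols + 2, DepthType.Cv8U, 1))
{
    mask.SetTo(new MCvScalar(0)); // mask must be 2 px taller and wider than the image

    int area = CvInvoke.FloodFill(
        img, mask,
        new Point(10, 10),            // seed point
        new MCvScalar(0, 0, 255),     // repaint with red (BGR)
        out Rectangle rect,
        new MCvScalar(20, 20, 20),    // loDiff
        new MCvScalar(20, 20, 20));   // upDiff

    // area is the number of repainted pixels; rect bounds the repainted domain
}
```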
GaussianBlur(IInputArray, IOutputArray, Size, double, double, BorderType)
Blurs an image using a Gaussian filter.
public static void GaussianBlur(IInputArray src, IOutputArray dst, Size ksize, double sigmaX, double sigmaY = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayinput image; the image can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
dst
IOutputArrayoutput image of the same size and type as src.
ksize
SizeGaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd. Or, they can be zeros and then they are computed from sigma*.
sigmaX
doubleGaussian kernel standard deviation in X direction.
sigmaY
doubleGaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height , respectively (see getGaussianKernel() for details); to fully control the result regardless of possible future modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
borderType
BorderTypePixel extrapolation method
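A minimal usage sketch of GaussianBlur (untested; the file name and kernel size are hypothetical):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
using (Mat dst = new Mat())
{
    // 5x5 kernel; sigmaX = 0 lets the sigma be computed from the kernel size,
    // as described in the sigmaY parameter notes above
    CvInvoke.GaussianBlur(src, dst, new Size(5, 5), 0);
}
```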
Gemm(IInputArray, IInputArray, double, IInputArray, double, IOutputArray, GemmType)
Performs generalized matrix multiplication: dst = alpha*op(src1)*op(src2) + beta*op(src3), where op(X) is X or X^T
public static void Gemm(IInputArray src1, IInputArray src2, double alpha, IInputArray src3, double beta, IOutputArray dst, GemmType tAbc = GemmType.Default)
Parameters
src1
IInputArrayThe first source array.
src2
IInputArrayThe second source array.
alpha
doubleThe scalar
src3
IInputArrayThe third source array (shift). Can be null, if there is no shift.
beta
doubleThe scalar
dst
IOutputArrayThe destination array.
tAbc
GemmTypeThe Gemm operation type
GetAffineTransform(IInputArray, IOutputArray)
Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2.
public static Mat GetAffineTransform(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayPointer to an array of PointF, Coordinates of 3 triangle vertices in the source image.
dst
IOutputArrayPointer to an array of PointF, Coordinates of the 3 corresponding triangle vertices in the destination image
Returns
- Mat
The destination 2x3 matrix
GetAffineTransform(PointF[], PointF[])
Calculates the matrix of an affine transform such that: (x'_i,y'_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..2.
public static Mat GetAffineTransform(PointF[] src, PointF[] dest)
Parameters
src
PointF[]Coordinates of 3 triangle vertices in the source image. If the array contains more than 3 points, only the first 3 will be used
dest
PointF[]Coordinates of the 3 corresponding triangle vertices in the destination image. If the array contains more than 3 points, only the first 3 will be used
Returns
- Mat
The 2x3 rotation matrix that defines the Affine transform
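GetAffineTransform amounts to solving a small linear system. As a minimal pure-Python sketch (no Emgu CV involved; the helper name is hypothetical), the 2x3 matrix can be recovered from three point pairs by inverting the 2x2 matrix of source edge vectors:

```python
# Sketch of what GetAffineTransform computes: the 2x3 matrix M with
# dst_i = M @ [x_i, y_i, 1]^T for three point pairs. Pure Python, no OpenCV.
def get_affine_transform(src, dst):
    (sx0, sy0), (sx1, sy1), (sx2, sy2) = src
    (dx0, dy0), (dx1, dy1), (dx2, dy2) = dst
    # Columns of A are the source edge vectors; B holds the destination ones.
    a11, a12 = sx1 - sx0, sx2 - sx0
    a21, a22 = sy1 - sy0, sy2 - sy0
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("source points are collinear")
    b11, b12 = dx1 - dx0, dx2 - dx0
    b21, b22 = dy1 - dy0, dy2 - dy0
    # Linear part L = B * A^-1
    l11 = (b11 * a22 - b12 * a21) / det
    l12 = (b12 * a11 - b11 * a12) / det
    l21 = (b21 * a22 - b22 * a21) / det
    l22 = (b22 * a11 - b21 * a12) / det
    # Translation t = d0 - L * s0
    tx = dx0 - (l11 * sx0 + l12 * sy0)
    ty = dy0 - (l21 * sx0 + l22 * sy0)
    return [[l11, l12, tx], [l21, l22, ty]]
```

For a pure translation of the unit triangle, the result is the identity linear part plus the translation column.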
GetCvStructSizes()
This function retrieves the OpenCV structure sizes in unmanaged code
public static CvStructSizes GetCvStructSizes()
Returns
- CvStructSizes
The structure that holds the OpenCV structure sizes
GetDefaultNewCameraMatrix(IInputArray, Size, bool)
Returns the default new camera matrix.
public static Mat GetDefaultNewCameraMatrix(IInputArray cameraMatrix, Size imgsize = default, bool centerPrincipalPoint = false)
Parameters
cameraMatrix
IInputArrayInput camera matrix.
imgsize
SizeCamera view image size in pixels.
centerPrincipalPoint
boolLocation of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not.
Returns
- Mat
The default new camera matrix.
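The behavior of GetDefaultNewCameraMatrix can be sketched in a few lines of pure Python (the helper name is hypothetical; the centering formula is assumed from the OpenCV documentation): the input matrix is returned unchanged unless centerPrincipalPoint is set, in which case cx and cy are moved to the image center.

```python
# Sketch of GetDefaultNewCameraMatrix's behavior: copy the 3x3 camera matrix
# and, when requested, move the principal point (cx, cy) to the image center.
def default_new_camera_matrix(k, img_w, img_h, center_principal_point=False):
    out = [row[:] for row in k]          # copy so the input stays untouched
    if center_principal_point:
        out[0][2] = (img_w - 1) * 0.5    # cx -> horizontal center
        out[1][2] = (img_h - 1) * 0.5    # cy -> vertical center
    return out
```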
GetDepthType(DepthType)
Get the corresponding depth type
public static Type GetDepthType(DepthType t)
Parameters
t
DepthTypeThe opencv depth type
Returns
- Type
The equivalent depth type
GetDepthType(Type)
Get the corresponding opencv depth type
public static DepthType GetDepthType(Type t)
Parameters
t
TypeThe element type
Returns
- DepthType
The equivalent opencv depth type
GetDerivKernels(IOutputArray, IOutputArray, int, int, int, bool, DepthType)
Returns filter coefficients for computing spatial image derivatives.
public static void GetDerivKernels(IOutputArray kx, IOutputArray ky, int dx, int dy, int ksize, bool normalize = false, DepthType ktype = DepthType.Cv32F)
Parameters
kx
IOutputArrayOutput matrix of row filter coefficients.
ky
IOutputArrayOutput matrix of column filter coefficients.
dx
intDerivative order in respect of x.
dy
intDerivative order in respect of y.
ksize
intAperture size. It can be FILTER_SCHARR, 1, 3, 5, or 7.
normalize
boolFlag indicating whether to normalize (scale down) the filter coefficients or not.
ktype
DepthTypeType of filter coefficients. It can be CV_32F or CV_64F.
GetErrMode()
Returns the current error mode
public static extern int GetErrMode()
Returns
- int
The error mode
GetErrStatus()
Returns the current error status - the value set with the last cvSetErrStatus call. Note that in Leaf mode the program terminates immediately after an error occurs, so to always get control after the function call, one should call cvSetErrMode and set the Parent or Silent error mode.
public static extern int GetErrStatus()
Returns
- int
The current error status
GetGaborKernel(Size, double, double, double, double, double, DepthType)
Returns Gabor filter coefficients.
public static Mat GetGaborKernel(Size ksize, double sigma, double theta, double lambd, double gamma, double psi = 1.5707963267948966, DepthType ktype = DepthType.Cv64F)
Parameters
ksize
SizeSize of the filter returned.
sigma
doubleStandard deviation of the gaussian envelope.
theta
doubleOrientation of the normal to the parallel stripes of a Gabor function.
lambd
doubleWavelength of the sinusoidal factor.
gamma
doubleSpatial aspect ratio.
psi
doublePhase offset.
ktype
DepthTypeType of filter coefficients. It can be CV_32F or CV_64F .
Returns
- Mat
Gabor filter coefficients.
GetGaussianKernel(int, double, DepthType)
Returns Gaussian filter coefficients.
public static Mat GetGaussianKernel(int ksize, double sigma, DepthType ktype = DepthType.Cv64F)
Parameters
ksize
intAperture size. It should be odd and positive.
sigma
doubleGaussian standard deviation. If it is non-positive, it is computed from ksize.
ktype
DepthTypeType of filter coefficients. It can be CV_32F or CV_64F
Returns
- Mat
Gaussian filter coefficients.
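The coefficients GetGaussianKernel returns follow a closed-form formula (taken from the OpenCV documentation; the pure-Python helper below is only an illustration): G_i = alpha * exp(-(i - (ksize-1)/2)^2 / (2*sigma^2)), with alpha chosen so the coefficients sum to 1, and sigma derived from ksize when it is non-positive.

```python
import math

# Sketch of the coefficients GetGaussianKernel returns.
def gaussian_kernel(ksize, sigma):
    if sigma <= 0:  # sigma computed from ksize, as the docs describe
        sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8
    center = (ksize - 1) * 0.5
    coeffs = [math.exp(-((i - center) ** 2) / (2 * sigma ** 2))
              for i in range(ksize)]
    total = sum(coeffs)
    return [c / total for c in coeffs]  # normalize so the sum is 1
```

The kernel is symmetric about its center, which holds the largest coefficient.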
GetModuleFormatString()
Get the module format string.
public static string GetModuleFormatString()
Returns
- string
On Windows, "{0}.dll" will be returned; on Linux, "lib{0}.so" will be returned; otherwise, "{0}" is returned.
GetOptimalDFTSize(int)
Returns the minimum number N that is greater than or equal to vecsize, such that the DFT of a vector of size N can be computed fast. In the current implementation N = 2^p x 3^q x 5^r for some p, q, r.
public static int GetOptimalDFTSize(int vecsize)
Parameters
vecsize
intVector size
Returns
- int
The minimum number N that is greater than or equal to vecsize, such that the DFT of a vector of size N can be computed fast. In the current implementation N = 2^p x 3^q x 5^r for some p, q, r.
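The N = 2^p x 3^q x 5^r condition means GetOptimalDFTSize returns the smallest "5-smooth" number at or above the requested size. A naive pure-Python sketch of that search (illustrative only; OpenCV's actual implementation uses a precomputed table):

```python
# Smallest N >= vecsize whose only prime factors are 2, 3 and 5.
def optimal_dft_size(vecsize):
    n = vecsize
    while True:
        m = n
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        if m == 1:       # n fully factored into 2s, 3s and 5s
            return n
        n += 1
```

For example, 100 = 2^2 x 5^2 is already smooth, while for 101 the next smooth number is 108 = 2^2 x 3^3.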
GetOptimalNewCameraMatrix(IInputArray, IInputArray, Size, double, Size, ref Rectangle, bool)
Returns the new camera matrix based on the free scaling parameter.
public static Mat GetOptimalNewCameraMatrix(IInputArray cameraMatrix, IInputArray distCoeffs, Size imageSize, double alpha, Size newImgSize, ref Rectangle validPixROI, bool centerPrincipalPoint = false)
Parameters
cameraMatrix
IInputArrayInput camera matrix.
distCoeffs
IInputArrayInput vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,taux,tauy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
imageSize
SizeOriginal image size.
alpha
doubleFree scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image).
newImgSize
SizeImage size after rectification. By default, it is set to imageSize.
validPixROI
RectangleOutput rectangle that outlines the all-good-pixels region in the undistorted image.
centerPrincipalPoint
boolIndicates whether in the new camera matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image.
Returns
- Mat
The new camera matrix based on the free scaling parameter.
GetPerspectiveTransform(IInputArray, IInputArray)
Calculates the matrix of a perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3.
public static Mat GetPerspectiveTransform(IInputArray src, IInputArray dst)
Parameters
src
IInputArrayCoordinates of 4 quadrangle vertices in the source image
dst
IInputArrayCoordinates of the 4 corresponding quadrangle vertices in the destination image
Returns
- Mat
The perspective transform matrix
GetPerspectiveTransform(PointF[], PointF[])
Calculates the matrix of a perspective transform such that: (t_i x'_i,t_i y'_i,t_i)^T=map_matrix (x_i,y_i,1)^T where dst(i)=(x'_i,y'_i), src(i)=(x_i,y_i), i=0..3.
public static Mat GetPerspectiveTransform(PointF[] src, PointF[] dest)
Parameters
src
PointF[]Coordinates of 4 quadrangle vertices in the source image
dest
PointF[]Coordinates of the 4 corresponding quadrangle vertices in the destination image
Returns
- Mat
The 3x3 Homography matrix
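The equation above expands to a linear system of 8 equations in the 8 unknown entries of H (with h33 fixed to 1). A pure-Python sketch of that solve (illustrative only, not how OpenCV implements it internally):

```python
def get_perspective_transform(src, dst):
    # Build the 8x8 system: for each pair (x,y) -> (X,Y),
    #   X = (h11*x + h12*y + h13) / (h31*x + h32*y + 1), similarly for Y.
    a, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.append(X)
        a.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.append(Y)
    # Gaussian elimination with partial pivoting.
    n = 8
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    h = [0.0] * n
    for r in range(n - 1, -1, -1):
        h[r] = (b[r] - sum(a[r][c] * h[c] for c in range(r + 1, n))) / a[r][r]
    return [h[0:3], h[3:6], h[6:8] + [1.0]]
```

Mapping the unit square to a translated copy of itself recovers a pure-translation homography with a zero projective row.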
GetRectSubPix(IInputArray, Size, PointF, IOutputArray, DepthType)
Extracts pixels from src: dst(x, y) = src(x + center.x - (width(dst)-1)*0.5, y + center.y - (height(dst)-1)*0.5) where the values of pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multiple-channel images is processed independently. Whereas the rectangle center must be inside the image, the whole rectangle may be partially occluded. In this case, the replication border mode is used to get pixel values beyond the image boundaries.
public static void GetRectSubPix(IInputArray image, Size patchSize, PointF center, IOutputArray patch, DepthType patchType = DepthType.Default)
Parameters
image
IInputArraySource image
patchSize
SizeSize of the extracted patch.
center
PointFFloating point coordinates of the extracted rectangle center within the source image. The center must be inside the image.
patch
IOutputArrayExtracted rectangle
patchType
DepthTypeDepth of the extracted pixels. By default, they have the same depth as image.
GetRotationMatrix2D(PointF, double, double, IOutputArray)
Calculates rotation matrix
public static void GetRotationMatrix2D(PointF center, double angle, double scale, IOutputArray mapMatrix)
Parameters
center
PointFCenter of the rotation in the source image.
angle
doubleThe rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be at the top-left corner).
scale
doubleIsotropic scale factor
mapMatrix
IOutputArrayPointer to the destination 2x3 matrix
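The 2x3 matrix GetRotationMatrix2D fills in has a closed form (per the OpenCV documentation; the pure-Python helper below is only an illustration): with alpha = scale*cos(angle) and beta = scale*sin(angle), the translation column is chosen so that `center` maps to itself.

```python
import math

# Sketch of the 2x3 matrix GetRotationMatrix2D computes.
def rotation_matrix_2d(center, angle_deg, scale):
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]
```

A quick sanity check: applying the matrix to the rotation center returns the center unchanged, and angle 0 with scale 1 gives the identity.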
GetStructuringElement(ElementShape, Size, Point)
Returns a structuring element of the specified size and shape for morphological operations.
public static Mat GetStructuringElement(ElementShape shape, Size ksize, Point anchor)
Parameters
shape
ElementShapeElement shape
ksize
SizeSize of the structuring element.
anchor
PointAnchor position within the element. The value (-1, -1) means that the anchor is at the center. Note that only the shape of a cross-shaped element depends on the anchor position. In other cases the anchor just regulates how much the result of the morphological operation is shifted.
Returns
- Mat
The structuring element
GetTextSize(string, FontFace, double, int, ref int)
Calculates the width and height of a text string.
public static Size GetTextSize(string text, FontFace fontFace, double fontScale, int thickness, ref int baseLine)
Parameters
text
stringInput text string.
fontFace
FontFaceFont to use
fontScale
doubleFont scale factor that is multiplied by the font-specific base size.
thickness
intThickness of lines used to render the text.
baseLine
intY-coordinate of the baseline relative to the bottom-most text point.
Returns
- Size
The size of a box that contains the specified text.
GetWindowProperty(string, WindowPropertyFlags)
Provides parameters of a window.
public static double GetWindowProperty(string name, WindowPropertyFlags propId)
Parameters
name
stringName of the window.
propId
WindowPropertyFlagsWindow property to retrieve.
Returns
- double
Value of the window property
GrabCut(IInputArray, IInputOutputArray, Rectangle, IInputOutputArray, IInputOutputArray, int, GrabcutInitType)
The grab cut algorithm for segmentation
public static void GrabCut(IInputArray img, IInputOutputArray mask, Rectangle rect, IInputOutputArray bgdModel, IInputOutputArray fgdModel, int iterCount, GrabcutInitType type)
Parameters
img
IInputArrayThe 8-bit 3-channel image to be segmented
mask
IInputOutputArrayInput/output 8-bit single-channel mask. The mask is initialized by the function when mode is set to GC_INIT_WITH_RECT. Its elements may have one of the following values: 0 (GC_BGD) defines an obvious background pixel. 1 (GC_FGD) defines an obvious foreground (object) pixel. 2 (GC_PR_BGD) defines a possible background pixel. 3 (GC_PR_FGD) defines a possible foreground pixel.
rect
RectangleThe rectangle to initialize the segmentation
bgdModel
IInputOutputArrayTemporary array for the background model. Do not modify it while you are processing the same image.
fgdModel
IInputOutputArrayTemporary arrays for the foreground model. Do not modify it while you are processing the same image.
iterCount
intThe number of iterations
type
GrabcutInitTypeThe initialization type
GroupRectangles(VectorOfRect, VectorOfInt, VectorOfDouble, int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(VectorOfRect rectList, VectorOfInt rejectLevels, VectorOfDouble levelWeights, int groupThreshold, double eps = 0.2)
Parameters
rectList
VectorOfRectInput/output vector of rectangles. Output vector includes retained and grouped rectangles.
rejectLevels
VectorOfIntReject levels
levelWeights
VectorOfDoubleLevel weights
groupThreshold
intMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
eps
doubleRelative difference between sides of the rectangles to merge them into a group.
GroupRectangles(VectorOfRect, VectorOfInt, int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(VectorOfRect rectList, VectorOfInt weights, int groupThreshold, double eps = 0.2)
Parameters
rectList
VectorOfRectInput/output vector of rectangles. Output vector includes retained and grouped rectangles.
weights
VectorOfIntWeights
groupThreshold
intMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
eps
doubleRelative difference between sides of the rectangles to merge them into a group.
GroupRectangles(VectorOfRect, int, double)
Groups the object candidate rectangles.
public static void GroupRectangles(VectorOfRect rectList, int groupThreshold, double eps = 0.2)
Parameters
rectList
VectorOfRectInput/output vector of rectangles. Output vector includes retained and grouped rectangles.
groupThreshold
intMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
eps
doubleRelative difference between sides of the rectangles to merge them into a group.
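The eps parameter drives a side-by-side similarity test between rectangle pairs. A pure-Python sketch of that predicate (assumed from the OpenCV sources; the helper name is hypothetical): two rectangles fall into the same group when each corresponding side coordinate differs by no more than eps times half the sum of their smaller dimensions.

```python
# Sketch of the similarity test behind GroupRectangles' eps parameter.
# Rectangles are (x, y, w, h) tuples.
def similar_rects(r1, r2, eps):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    delta = eps * 0.5 * (min(w1, w2) + min(h1, h2))
    return (abs(x1 - x2) <= delta and abs(y1 - y2) <= delta and
            abs(x1 + w1 - x2 - w2) <= delta and
            abs(y1 + h1 - y2 - h2) <= delta)
```

Groups with at least groupThreshold + 1 members are then retained and averaged into a single rectangle.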
GroupRectangles(VectorOfRect, int, double, VectorOfInt, VectorOfDouble)
Groups the object candidate rectangles.
public static void GroupRectangles(VectorOfRect rectList, int groupThreshold, double eps, VectorOfInt weights, VectorOfDouble levelWeights)
Parameters
rectList
VectorOfRectInput/output vector of rectangles. Output vector includes retained and grouped rectangles.
groupThreshold
intMinimum possible number of rectangles minus 1. The threshold is used in a group of rectangles to retain it.
eps
doubleRelative difference between sides of the rectangles to merge them into a group.
weights
VectorOfIntWeights
levelWeights
VectorOfDoubleLevel weights
GroupRectanglesMeanshift(VectorOfRect, VectorOfDouble, VectorOfDouble, double, Size)
Groups the object candidate rectangles.
public static void GroupRectanglesMeanshift(VectorOfRect rectList, VectorOfDouble foundWeights, VectorOfDouble foundScales, double detectThreshold, Size winDetSize)
Parameters
rectList
VectorOfRectInput/output vector of rectangles. Output vector includes retained and grouped rectangles.
foundWeights
VectorOfDoubleFound weights
foundScales
VectorOfDoubleFound scales
detectThreshold
doubleDetection threshold; use 0 for the default
winDetSize
SizeWindow detection size; use (64, 128) for the default
HConcat(IInputArray, IInputArray, IOutputArray)
Horizontally concatenate two images
public static void HConcat(IInputArray src1, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayThe first image
src2
IInputArrayThe second image
dst
IOutputArrayThe result image
HConcat(IInputArrayOfArrays, IOutputArray)
Horizontally concatenate images
public static void HConcat(IInputArrayOfArrays srcs, IOutputArray dst)
Parameters
srcs
IInputArrayOfArraysInput array or vector of matrices. All of the matrices must have the same number of rows and the same depth.
dst
IOutputArrayOutput array. It has the same number of rows and the same depth as the sources, and the sum of their cols.
HConcat(Mat[], IOutputArray)
Horizontally concatenate images
public static void HConcat(Mat[] srcs, IOutputArray dst)
Parameters
srcs
Mat[]Input array or vector of matrices. All of the matrices must have the same number of rows and the same depth.
dst
IOutputArrayOutput array. It has the same number of rows and the same depth as the sources, and the sum of their cols.
HasNonZero(IInputArray)
Checks for the presence of at least one non-zero array element.
public static bool HasNonZero(IInputArray arr)
Parameters
arr
IInputArraySingle-channel array.
Returns
- bool
Whether there are non-zero elements in src
HaveImageReader(string)
Returns true if the specified image can be decoded by OpenCV.
public static bool HaveImageReader(string fileName)
Parameters
fileName
stringFile name of the image
Returns
- bool
True if the specified image can be decoded by OpenCV.
HaveImageWriter(string)
Returns true if an image with the specified filename can be encoded by OpenCV.
public static bool HaveImageWriter(string fileName)
Parameters
fileName
stringFile name of the image
Returns
- bool
True if an image with the specified filename can be encoded by OpenCV.
HoughCircles(IInputArray, HoughModes, double, double, double, double, int, int)
Finds circles in a grayscale image using the Hough transform
public static CircleF[] HoughCircles(IInputArray image, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)
Parameters
image
IInputArray8-bit, single-channel, grayscale input image.
method
HoughModesDetection method to use. Currently, the only implemented method is CV_HOUGH_GRADIENT, which is basically 21HT
dp
doubleInverse ratio of the accumulator resolution to the image resolution. For example, if dp=1 , the accumulator has the same resolution as the input image. If dp=2 , the accumulator has half as big width and height.
minDist
doubleMinimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1
doubleFirst method-specific parameter. In case of CV_HOUGH_GRADIENT, it is the higher threshold of the two passed to the Canny() edge detector (the lower one is half as large).
param2
doubleSecond method-specific parameter. In case of CV_HOUGH_GRADIENT , it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first.
minRadius
intMinimum circle radius.
maxRadius
intMaximum circle radius.
Returns
- CircleF[]
The circles detected
HoughCircles(IInputArray, IOutputArray, HoughModes, double, double, double, double, int, int)
Finds circles in grayscale image using some modification of Hough transform
public static void HoughCircles(IInputArray image, IOutputArray circles, HoughModes method, double dp, double minDist, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0)
Parameters
image
IInputArrayThe input 8-bit single-channel grayscale image
circles
IOutputArrayThe storage for the circles detected. It can be a memory storage (in this case a sequence of circles is created in the storage and returned by the function) or a single-row/single-column matrix (CvMat*) of type CV_32FC3, to which the circles' parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of circles detected. If circles is a matrix and the actual number of circles exceeds the matrix size, the maximum possible number of circles is returned. Every circle is encoded as 3 floating-point numbers: center coordinates (x,y) and the radius
method
HoughModesCurrently, the only implemented method is CV_HOUGH_GRADIENT
dp
doubleResolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image; if it is 2, the accumulator will have half the width and height; etc.
minDist
doubleMinimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed
param1
doubleThe first method-specific parameter. In case of CV_HOUGH_GRADIENT it is the higher threshold of the two passed to the Canny edge detector (the lower one will be half as large).
param2
doubleThe second method-specific parameter. In case of CV_HOUGH_GRADIENT it is accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first
minRadius
intMinimal radius of the circles to search for
maxRadius
intMaximal radius of the circles to search for. By default the maximal radius is set to max(image_width, image_height).
HoughLines(IInputArray, IOutputArray, double, double, int, double, double)
Finds lines in a binary image using the standard Hough transform.
public static void HoughLines(IInputArray image, IOutputArray lines, double rho, double theta, int threshold, double srn = 0, double stn = 0)
Parameters
image
IInputArray8-bit, single-channel binary source image. The image may be modified by the function.
lines
IOutputArrayOutput vector of lines. Each line is represented by a two-element vector
rho
doubleDistance resolution of the accumulator in pixels.
theta
doubleAngle resolution of the accumulator in radians.
threshold
intAccumulator threshold parameter. Only those lines are returned that get enough votes (> threshold)
srn
doubleFor the multi-scale Hough transform, it is a divisor for the distance resolution rho . The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn . If both srn=0 and stn=0 , the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn
doubleFor the multi-scale Hough transform, it is a divisor for the distance resolution theta
HoughLinesP(IInputArray, IOutputArray, double, double, int, double, double)
Finds line segments in a binary image using the probabilistic Hough transform.
public static void HoughLinesP(IInputArray image, IOutputArray lines, double rho, double theta, int threshold, double minLineLength = 0, double maxGap = 0)
Parameters
image
IInputArray8-bit, single-channel binary source image. The image may be modified by the function.
lines
IOutputArrayOutput vector of lines. Each line is represented by a 4-element vector (x1, y1, x2, y2)
rho
doubleDistance resolution of the accumulator in pixels
theta
doubleAngle resolution of the accumulator in radians
threshold
intAccumulator threshold parameter. Only those lines are returned that get enough votes
minLineLength
doubleMinimum line length. Line segments shorter than that are rejected.
maxGap
doubleMaximum allowed gap between points on the same line to link them.
HoughLinesP(IInputArray, double, double, int, double, double)
Finds line segments in a binary image using the probabilistic Hough transform.
public static LineSegment2D[] HoughLinesP(IInputArray image, double rho, double theta, int threshold, double minLineLength = 0, double maxGap = 0)
Parameters
image
IInputArray8-bit, single-channel binary source image. The image may be modified by the function.
rho
doubleDistance resolution of the accumulator in pixels
theta
doubleAngle resolution of the accumulator in radians
threshold
intAccumulator threshold parameter. Only those lines are returned that get enough votes
minLineLength
doubleMinimum line length. Line segments shorter than that are rejected.
maxGap
doubleMaximum allowed gap between points on the same line to link them.
Returns
- LineSegment2D[]
The found line segments
HuMoments(Moments)
Calculates seven Hu invariants
public static double[] HuMoments(Moments m)
Parameters
m
MomentsThe image moment
Returns
- double[]
The output Hu moments.
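The Hu invariants are polynomial combinations of normalized central moments that stay constant under translation, scaling and rotation. A pure-Python sketch of the first two invariants for a set of unit-mass points (illustrative only; OpenCV computes them from a Moments structure):

```python
# First two Hu invariants from normalized central moments of a point set.
def hu12(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n   # centroid
    my = sum(y for _, y in points) / n
    def mu(p, q):  # central moment mu_pq
        return sum((x - mx) ** p * (y - my) ** q for x, y in points)
    m00 = mu(0, 0)
    def eta(p, q):  # normalized central moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Translating every point by the same offset leaves both invariants unchanged, which is the property these descriptors are used for.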
HuMoments(Moments, IOutputArray)
Calculates seven Hu invariants
public static void HuMoments(Moments m, IOutputArray hu)
Parameters
m
MomentsThe image moment
hu
IOutputArrayThe output Hu moments. e.g. a Mat can be passed here.
IlluminationChange(IInputArray, IInputArray, IOutputArray, float, float)
Applying an appropriate non-linear transformation to the gradient field inside the selection and then integrating back with a Poisson solver locally modifies the apparent illumination of an image.
public static void IlluminationChange(IInputArray src, IInputArray mask, IOutputArray dst, float alpha = 0.2, float beta = 0.4)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
mask
IInputArrayInput 8-bit 1 or 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src.
alpha
floatValue ranges between 0-2.
beta
floatValue ranges between 0-2.
Imdecode(IInputArray, ImreadModes, Mat)
Decode image stored in the buffer
public static void Imdecode(IInputArray buf, ImreadModes loadType, Mat dst)
Parameters
buf
IInputArrayThe buffer
loadType
ImreadModesThe image loading type
dst
MatThe output placeholder for the decoded matrix.
Imdecode(byte[], ImreadModes, Mat)
Decode image stored in the buffer
public static void Imdecode(byte[] buf, ImreadModes loadType, Mat dst)
Parameters
buf
byte[]The buffer
loadType
ImreadModesThe image loading type
dst
MatThe output placeholder for the decoded matrix.
Imencode(string, IInputArray, VectorOfByte, params KeyValuePair<ImwriteFlags, int>[])
Encode image and store the result as a byte vector.
public static bool Imencode(string ext, IInputArray image, VectorOfByte buf, params KeyValuePair<ImwriteFlags, int>[] parameters)
Parameters
ext
stringThe image format
image
IInputArrayThe image
buf
VectorOfByteOutput buffer resized to fit the compressed image.
parameters
KeyValuePair<ImwriteFlags, int>[]The parameters for encoding; leave empty for the defaults
Returns
- bool
True if successfully encoded the image into the buffer.
Imencode(string, IInputArray, params KeyValuePair<ImwriteFlags, int>[])
Encode image and return the result as a byte vector.
public static byte[] Imencode(string ext, IInputArray image, params KeyValuePair<ImwriteFlags, int>[] parameters)
Parameters
ext
stringThe image format
image
IInputArrayThe image
parameters
KeyValuePair<ImwriteFlags, int>[]The parameters for encoding; leave empty for the defaults
Returns
- byte[]
Byte array that contains the image in the specific image format. If failed to encode, return null
Imread(string, ImreadModes)
Loads an image from the specified file and returns the pointer to the loaded image. Currently the following file formats are supported: Windows bitmaps - BMP, DIB; JPEG files - JPEG, JPG, JPE; Portable Network Graphics - PNG; Portable image format - PBM, PGM, PPM; Sun rasters - SR, RAS; TIFF files - TIFF, TIF; OpenEXR HDR images - EXR; JPEG 2000 images - jp2.
public static Mat Imread(string filename, ImreadModes loadType = ImreadModes.Color)
Parameters
filename
stringThe name of the file to be loaded
loadType
ImreadModesThe image loading type
Returns
- Mat
The loaded image
Imreadmulti(string, ImreadModes)
The function imreadmulti loads a multi-page image from the specified file into a vector of Mat objects.
public static Mat[] Imreadmulti(string filename, ImreadModes flags = ImreadModes.AnyColor)
Parameters
filename
stringName of file to be loaded.
flags
ImreadModesRead flags
Returns
- Mat[]
Null if the reading fails, otherwise, an array of Mat from the file
Imshow(string, IInputArray)
Shows the image in the specified window
public static void Imshow(string name, IInputArray image)
Parameters
name
stringName of the window
image
IInputArrayImage to be shown
Imwrite(string, IInputArray, params KeyValuePair<ImwriteFlags, int>[])
Saves the image to the specified file. The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see cv::imread for the list of extensions).
public static bool Imwrite(string filename, IInputArray image, params KeyValuePair<ImwriteFlags, int>[] parameters)
Parameters
filename
stringThe name of the file to be saved to
image
IInputArrayThe image to be saved
parameters
KeyValuePair<ImwriteFlags, int>[]The parameters
Returns
- bool
true if success
Remarks
In general, only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function, with these exceptions: 16-bit unsigned (CV_16U) images can be saved in the case of PNG, JPEG 2000, and TIFF formats; 32-bit float (CV_32F) images can be saved in PFM, TIFF, OpenEXR, and Radiance HDR formats; 3-channel (CV_32FC3) TIFF images will be saved using the LogLuv high dynamic range encoding (4 bytes per pixel); PNG images with an alpha channel can be saved using this function. To do this, create an 8-bit (or 16-bit) 4-channel image BGRA, where the alpha channel goes last. Fully transparent pixels should have alpha set to 0, fully opaque pixels should have alpha set to 255/65535. Multiple images (vector of Mat) can be saved in TIFF format. If the image format is not supported, the image will be converted to 8-bit unsigned (CV_8U) and saved that way. If the format, depth or channel order is different, use Mat::convertTo and cv::cvtColor to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
Imwritemulti(string, IInputArrayOfArrays, params KeyValuePair<ImwriteFlags, int>[])
Save multiple images to a specified file (e.g. ".tiff" that support multiple images).
public static bool Imwritemulti(string filename, IInputArrayOfArrays images, params KeyValuePair<ImwriteFlags, int>[] parameters)
Parameters
filename
stringName of the file.
images
IInputArrayOfArraysImages to be saved.
parameters
KeyValuePair<ImwriteFlags, int>[]The parameters
Returns
- bool
true if success
InRange(IInputArray, IInputArray, IInputArray, IOutputArray)
Performs range check for every element of the input array: dst(I)=lower(I)_0 <= src(I)_0 <= upper(I)_0 For single-channel arrays, dst(I)=lower(I)_0 <= src(I)_0 <= upper(I)_0 && lower(I)_1 <= src(I)_1 <= upper(I)_1 For two-channel arrays etc. dst(I) is set to 0xff (all '1'-bits) if src(I) is within the range and 0 otherwise. All the arrays must have the same type, except the destination, and the same size (or ROI size)
public static void InRange(IInputArray src, IInputArray lower, IInputArray upper, IOutputArray dst)
Parameters
src
IInputArrayThe source image
lower
IInputArrayThe lower values stored in an image of the same type and size as src
upper
IInputArrayThe upper values stored in an image of the same type and size as src
dst
IOutputArrayThe resulting mask
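For a single-channel array the range check above reduces to a simple per-element comparison. A pure-Python sketch on nested lists (illustrative only; the real function operates on arrays of any supported depth and channel count):

```python
# Sketch of InRange on a single-channel 2D array:
# mask is 255 where lower <= src <= upper, and 0 elsewhere.
def in_range(src, lower, upper):
    return [[255 if lo <= v <= up else 0
             for v, lo, up in zip(row, lo_row, up_row)]
            for row, lo_row, up_row in zip(src, lower, upper)]
```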
Init()
Check to make sure all the unmanaged libraries are loaded
public static bool Init()
Returns
- bool
True if library loaded
InitCameraMatrix2D(IInputArrayOfArrays, IInputArrayOfArrays, Size, double)
Finds an initial camera matrix from 3D-2D point correspondences.
public static Mat InitCameraMatrix2D(IInputArrayOfArrays objectPoints, IInputArrayOfArrays imagePoints, Size imageSize, double aspectRatio = 1)
Parameters
objectPoints
IInputArrayOfArraysVector of vectors of the calibration pattern points in the calibration pattern coordinate space.
imagePoints
IInputArrayOfArraysVector of vectors of the projections of the calibration pattern points.
imageSize
SizeImage size in pixels used to initialize the principal point.
aspectRatio
doubleIf it is zero or negative, both fx and fy are estimated independently. Otherwise, fx=fy*aspectRatio.
Returns
- Mat
An initial camera matrix for the camera calibration process.
Remarks
Currently, the function only supports planar calibration patterns, which are patterns where each object point has z-coordinate =0.
InitUndistortRectifyMap(IInputArray, IInputArray, IInputArray, IInputArray, Size, DepthType, int, IOutputArray, IOutputArray)
This function is an extended version of cvInitUndistortMap. That is, in addition to the correction of lens distortion, the function can also apply arbitrary perspective transformation R and finally it can scale and shift the image according to the new camera matrix
public static void InitUndistortRectifyMap(IInputArray cameraMatrix, IInputArray distCoeffs, IInputArray R, IInputArray newCameraMatrix, Size size, DepthType depthType, int channels, IOutputArray map1, IOutputArray map2)
Parameters
cameraMatrix
IInputArrayThe camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1]
distCoeffs
IInputArrayThe vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5
R
IInputArrayThe rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used
newCameraMatrix
IInputArrayThe new camera matrix A'=[fx' 0 cx'; 0 fy' cy'; 0 0 1]
size
SizeUndistorted image size.
depthType
DepthTypeDepth type of the first output map. The combination with channels can be one of CV_32FC1, CV_32FC2 or CV_16SC2.
channels
intNumber of channels of the first output map. The combination with depthType can be one of CV_32FC1, CV_32FC2 or CV_16SC2.
map1
IOutputArrayThe first output map.
map2
IOutputArrayThe second output map.
Inpaint(IInputArray, IInputArray, IOutputArray, double, InpaintType)
Reconstructs the selected image area from the pixel near the area boundary. The function may be used to remove dust and scratches from a scanned photo, or to remove undesirable objects from still images or video.
public static void Inpaint(IInputArray src, IInputArray mask, IOutputArray dst, double inpaintRadius, InpaintType flags)
Parameters
src
IInputArrayThe input 8-bit 1-channel or 3-channel image
mask
IInputArrayThe inpainting mask, 8-bit 1-channel image. Non-zero pixels indicate the area that needs to be inpainted
dst
IOutputArrayThe output image of the same format and the same size as input
inpaintRadius
doubleThe radius of circular neighborhood of each point inpainted that is considered by the algorithm
flags
InpaintTypeThe inpainting method
InsertChannel(IInputArray, IInputOutputArray, int)
Inserts the specified channel into the image
public static void InsertChannel(IInputArray src, IInputOutputArray dst, int coi)
Parameters
src
IInputArrayThe source channel
dst
IInputOutputArrayThe destination image where the channel will be inserted into
coi
int0-based index of the channel to be inserted
Integral(IInputArray, IOutputArray, IOutputArray, IOutputArray, DepthType, DepthType)
Calculates one or more integral images for the source image. Using these integral images, one may calculate the sum, mean, and standard deviation over an arbitrary up-right or rotated rectangular region of the image in constant time. This makes it possible to do fast blurring or fast block correlation with a variable window size, etc. In case of multi-channel images, sums for each channel are accumulated independently.
public static void Integral(IInputArray image, IOutputArray sum, IOutputArray sqsum = null, IOutputArray tiltedSum = null, DepthType sdepth = DepthType.Default, DepthType sqdepth = DepthType.Default)
Parameters
image
IInputArrayThe source image, WxH, 8-bit or floating-point (32f or 64f) image.
sum
IOutputArrayThe integral image, W+1xH+1, 32-bit integer or double precision floating-point (64f).
sqsum
IOutputArrayThe integral image for squared pixel values, W+1xH+1, double precision floating-point (64f).
tiltedSum
IOutputArrayThe integral for the image rotated by 45 degrees, W+1xH+1, the same data type as sum.
sdepth
DepthTypeDesired depth of the integral and the tilted integral images, CV_32S, CV_32F, or CV_64F.
sqdepth
DepthTypeDesired depth of the integral image of squared pixel values, CV_32F or CV_64F.
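As an illustration of what the function computes (a NumPy sketch, not the Emgu CV call itself), the W+1 x H+1 sum image is a summed-area table, and any rectangular sum can then be read off with four lookups:

```python
import numpy as np

def integral_image(img):
    """Build the (H+1)x(W+1) summed-area table: s[y, x] holds the
    sum of img[0:y, 0:x], so row 0 and column 0 are all zeros."""
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    s[1:, 1:] = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return s

img = np.arange(12).reshape(3, 4)          # tiny 3x4 test image
s = integral_image(img)

# Sum over rows 1..2, cols 1..3 in constant time:
y0, y1, x0, x1 = 1, 3, 1, 4
rect_sum = s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]
assert rect_sum == img[y0:y1, x0:x1].sum()
```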
IntersectConvexConvex(IInputArray, IInputArray, IOutputArray, bool)
Finds the intersection of two convex polygons
public static float IntersectConvexConvex(IInputArray p1, IInputArray p2, IOutputArray p12, bool handleNested = true)
Parameters
p1
IInputArrayThe first convex polygon
p2
IInputArrayThe second convex polygon
p12
IOutputArrayThe intersection of the two convex polygons
handleNested
boolIf true, nested (fully enclosed) polygons are handled
Returns
- float
Absolute value of area of intersecting polygon.
Invert(IInputArray, IOutputArray, DecompMethod)
Finds the inverse or pseudo-inverse of a matrix. This function inverts the matrix src and stores the result in dst . When the matrix src is singular or non-square, the function calculates the pseudo-inverse matrix (the dst matrix) so that norm(src*dst - I) is minimal, where I is an identity matrix.
public static double Invert(IInputArray src, IOutputArray dst, DecompMethod method)
Parameters
src
IInputArrayThe input floating-point M x N matrix.
dst
IOutputArrayThe output matrix of N x M size and the same type as src.
method
DecompMethodInversion method
Returns
- double
In case of the DECOMP_LU method, the function returns a non-zero value if the inverse has been successfully calculated and 0 if src is singular. In case of the DECOMP_SVD method, the function returns the inverse condition number of src (the ratio of the smallest singular value to the largest singular value) and 0 if src is singular. The SVD method calculates a pseudo-inverse matrix if src is singular. Similarly to DECOMP_LU, the method DECOMP_CHOLESKY works only with non-singular square matrices that should also be symmetric and positive definite. In this case, the function stores the inverted matrix in dst and returns non-zero. Otherwise, it returns 0.
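The pseudo-inverse behavior for a non-square src can be sketched in NumPy (an illustration of the computation, not the Emgu CV call): the result minimizes norm(src*dst - I), and the SVD path's return value is the reciprocal condition number.

```python
import numpy as np

# Pseudo-inverse of a non-square matrix, as the SVD method computes:
src = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])            # 3x2 input, so dst is 2x3
dst = np.linalg.pinv(src)               # minimizes norm(src @ dst - I)

# The value the SVD method returns: smallest/largest singular value.
singular = np.linalg.svd(src, compute_uv=False)
rcond = singular.min() / singular.max()

assert dst.shape == (2, 3)
assert 0.0 < rcond < 1.0
```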
InvertAffineTransform(IInputArray, IOutputArray)
Inverts an affine transformation
public static void InvertAffineTransform(IInputArray m, IOutputArray im)
Parameters
m
IInputArrayOriginal affine transformation
im
IOutputArrayOutput reverse affine transformation.
IsContourConvex(IInputArray)
The function tests whether the input contour is convex or not. The contour must be simple, that is, without self-intersections. Otherwise, the function output is undefined.
public static bool IsContourConvex(IInputArray contour)
Parameters
contour
IInputArrayInput vector of 2D points
Returns
- bool
true if input is convex
Kmeans(IInputArray, int, IOutputArray, MCvTermCriteria, int, KMeansInitType, IOutputArray)
Implements the k-means algorithm that finds the centers of k clusters and groups the input samples around the clusters. On output, labels(i) contains a cluster index for the sample stored in the i-th row of the samples matrix
public static double Kmeans(IInputArray data, int k, IOutputArray bestLabels, MCvTermCriteria termcrit, int attempts, KMeansInitType flags, IOutputArray centers = null)
Parameters
data
IInputArrayFloating-point matrix of input samples, one row per sample
k
intNumber of clusters to split the set by.
bestLabels
IOutputArrayOutput integer vector storing cluster indices for every sample
termcrit
MCvTermCriteriaSpecifies maximum number of iterations and/or accuracy (distance the centers move by between the subsequent iterations)
attempts
intThe number of attempts. Use 2 if not sure
flags
KMeansInitTypeFlags, use 0 if not sure
centers
IOutputArrayPointer to array of centers, use IntPtr.Zero if not sure
Returns
- double
The function returns the compactness measure. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function.
LUT(IInputArray, IInputArray, IOutputArray)
Fills the destination array with values from the look-up table. Indices of the entries are taken from the source array. That is, the function processes each element of src as follows: dst(I)=lut[src(I)+DELTA] where DELTA=0 if src has depth CV_8U, and DELTA=128 if src has depth CV_8S
public static void LUT(IInputArray src, IInputArray lut, IOutputArray dst)
Parameters
src
IInputArraySource array of 8-bit elements
lut
IInputArrayLook-up table of 256 elements; should have the same depth as the destination array. In case of multi-channel source and destination arrays, the table should either have a single-channel (in this case the same table is used for all channels), or the same number of channels as the source/destination array
dst
IOutputArrayDestination array of arbitrary depth and of the same number of channels as the source array
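For an 8-bit source the formula dst(I)=lut[src(I)] (DELTA=0 for CV_8U) is plain table indexing, which a NumPy sketch makes concrete (an illustration, not the Emgu CV call):

```python
import numpy as np

# dst(I) = lut[src(I)] for an 8-bit source (DELTA = 0 for CV_8U):
src = np.array([[0, 64, 128, 255]], dtype=np.uint8)
lut = (255 - np.arange(256)).astype(np.uint8)   # 256-entry inversion table

dst = lut[src]                                  # NumPy fancy indexing

assert dst.tolist() == [[255, 191, 127, 0]]
```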
Laplacian(IInputArray, IOutputArray, DepthType, int, double, double, BorderType)
Calculates the Laplacian of the source image by summing the second x- and y- derivatives calculated using the Sobel operator: dst(x,y) = d2src/dx2 + d2src/dy2. Specifying ksize=1 gives the fastest variant, which is equal to convolving the image with the following kernel: |0 1 0| |1 -4 1| |0 1 0|. Similar to the Sobel function, no scaling is done and the same combinations of input and output formats are supported.
public static void Laplacian(IInputArray src, IOutputArray dst, DepthType ddepth, int ksize = 1, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image. Should have type of float
ddepth
DepthTypeDesired depth of the destination image.
ksize
intAperture size used to compute the second-derivative filters.
scale
doubleOptional scale factor for the computed Laplacian values. By default, no scaling is applied.
delta
doubleOptional delta value that is added to the results prior to storing them in dst.
borderType
BorderTypePixel extrapolation method.
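The ksize=1 kernel above can be applied directly with NumPy slicing (a sketch of the computation on interior pixels, ignoring the border extrapolation that the real function performs):

```python
import numpy as np

# Convolve with the ksize=1 Laplacian kernel |0 1 0; 1 -4 1; 0 1 0|
# on interior pixels only (border handling omitted for brevity):
def laplacian(img):
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return out

flat = np.full((5, 5), 7.0)
assert laplacian(flat)[1:-1, 1:-1].max() == 0.0  # constant image -> zero response
```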
Line(IInputOutputArray, Point, Point, MCvScalar, int, LineType, int)
Draws the line segment between pt1 and pt2 points in the image. The line is clipped by the image or ROI rectangle. For non-antialiased lines with integer coordinates the 8-connected or 4-connected Bresenham algorithm is used. Thick lines are drawn with rounded endings. Antialiased lines are drawn using Gaussian filtering.
public static void Line(IInputOutputArray img, Point pt1, Point pt2, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayThe image
pt1
PointFirst point of the line segment
pt2
PointSecond point of the line segment
color
MCvScalarLine color
thickness
intLine thickness.
lineType
LineTypeType of the line: 8 (or 0) - 8-connected line. 4 - 4-connected line. CV_AA - antialiased line.
shift
intNumber of fractional bits in the point coordinates
LinearPolar(IInputArray, IOutputArray, PointF, double, Inter, Warp)
The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc.
public static void LinearPolar(IInputArray src, IOutputArray dst, PointF center, double maxRadius, Inter interpolationType = Inter.Linear, Warp warpType = Warp.FillOutliers)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
center
PointFThe transformation center, where the output precision is maximal
maxRadius
doubleMaximum radius
interpolationType
InterInterpolation method
warpType
WarpWarp method
LoadUnmanagedModules(string, params string[])
Attempts to load opencv modules from the specified location
public static bool LoadUnmanagedModules(string loadDirectory, params string[] unmanagedModules)
Parameters
loadDirectory
stringThe directory where the unmanaged modules will be loaded. If it is null, the default location will be used.
unmanagedModules
string[]The names of opencv modules. e.g. "opencv_core.dll" on windows.
Returns
- bool
True if all the modules have been loaded successfully
Remarks
If loadDirectory
is null, the default location on Windows is the dll's path appended with either "x64" or "x86", depending on whether the application is running in 64-bit or 32-bit mode.
Log(IInputArray, IOutputArray)
Calculates the natural logarithm of the absolute value of every element of the input array: dst(I)=log(abs(src(I))) if src(I)!=0, dst(I)=C if src(I)=0, where C is a large negative number (-700 in the current implementation)
public static void Log(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayThe source array
dst
IOutputArrayThe destination array, it should have double type or the same type as the source
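The piecewise definition above can be reproduced element-wise in NumPy (an illustration of the computation, not the Emgu CV call):

```python
import numpy as np

# dst(I) = log(abs(src(I))) for src(I) != 0, and the large negative
# constant C (-700, matching the description) where src(I) == 0:
C = -700.0
src = np.array([-np.e, 0.0, 1.0, np.e ** 2])

# The inner where() substitutes 1 at zero entries to avoid log(0);
# those entries are then overwritten with C by the outer where().
dst = np.where(src != 0, np.log(np.abs(np.where(src != 0, src, 1))), C)

assert np.allclose(dst, [1.0, C, 0.0, 2.0])
```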
LogPolar(IInputArray, IOutputArray, PointF, double, Inter, Warp)
The function emulates the human "foveal" vision and can be used for fast scale and rotation-invariant template matching, for object tracking etc.
public static void LogPolar(IInputArray src, IOutputArray dst, PointF center, double M, Inter interpolationType = Inter.Linear, Warp warpType = Warp.FillOutliers)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
center
PointFThe transformation center, where the output precision is maximal
M
doubleMagnitude scale parameter
interpolationType
InterInterpolation method
warpType
Warpwarp method
Mahalanobis(IInputArray, IInputArray, IInputArray)
Calculates the weighted distance between two vectors and returns it
public static double Mahalanobis(IInputArray v1, IInputArray v2, IInputArray iconvar)
Parameters
v1
IInputArrayThe first 1D source vector
v2
IInputArrayThe second 1D source vector
iconvar
IInputArrayThe inverse covariation matrix
Returns
- double
the Mahalanobis distance
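The weighted distance is sqrt((v1-v2)^T * icovar * (v1-v2)); a NumPy sketch (not the Emgu CV call) shows that with an identity inverse covariance it reduces to the Euclidean distance:

```python
import numpy as np

# Mahalanobis distance: sqrt((v1 - v2)^T @ icovar @ (v1 - v2)),
# where icovar is the inverse covariance matrix.
def mahalanobis(v1, v2, icovar):
    d = np.asarray(v1, dtype=np.float64) - np.asarray(v2, dtype=np.float64)
    return float(np.sqrt(d @ icovar @ d))

icovar = np.eye(2)          # identity: reduces to Euclidean distance
assert mahalanobis([0, 0], [3, 4], icovar) == 5.0
```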
MakeType(DepthType, int)
This function performs the same as MakeType macro
public static int MakeType(DepthType depth, int channels)
Parameters
depth
DepthTypeThe depth type
channels
intThe number of channels
Returns
- int
An integer that represents a Mat type
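The standard OpenCV CV_MAKETYPE packing, which this function mirrors, keeps the depth in the low 3 bits and channels-1 in the remaining bits (shown as a Python sketch for illustration):

```python
# CV_MAKETYPE packing: low 3 bits hold the depth enum,
# the remaining bits hold (channels - 1).
CV_8U, CV_32F, CV_64F = 0, 5, 6

def make_type(depth, channels):
    return (depth & 7) + ((channels - 1) << 3)

assert make_type(CV_8U, 3) == 16     # CV_8UC3
assert make_type(CV_32F, 2) == 13    # CV_32FC2
```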
MatchShapes(IInputArray, IInputArray, ContoursMatchType, double)
Compares two shapes. The 3 implemented methods all use Hu moments
public static double MatchShapes(IInputArray contour1, IInputArray contour2, ContoursMatchType method, double parameter = 0)
Parameters
contour1
IInputArrayFirst contour or grayscale image
contour2
IInputArraySecond contour or grayscale image
method
ContoursMatchTypeComparison method
parameter
doubleMethod-specific parameter (is not used now)
Returns
- double
The result of the comparison
MatchTemplate(IInputArray, IInputArray, IOutputArray, TemplateMatchingType, IInputArray)
This function is similar to cvCalcBackProjectPatch. It slides through image, compares overlapped patches of size wxh against templ using the specified method and stores the comparison results to result
public static void MatchTemplate(IInputArray image, IInputArray templ, IOutputArray result, TemplateMatchingType method, IInputArray mask = null)
Parameters
image
IInputArrayImage where the search is running. It should be 8-bit or 32-bit floating-point
templ
IInputArraySearched template; must be not greater than the source image and the same data type as the image
result
IOutputArrayA map of comparison results; single-channel 32-bit floating-point. If image is WxH and templ is wxh then result must be (W-w+1)x(H-h+1).
method
TemplateMatchingTypeSpecifies the way the template must be compared with image regions
mask
IInputArrayMask of searched template. It must have the same datatype and size with templ. It is not set by default.
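The (W-w+1)x(H-h+1) result size follows from there being one score per template placement; a NumPy sketch of the squared-difference method (an illustration of the idea, not the Emgu CV call or its exact method set) makes this concrete:

```python
import numpy as np

# One score per placement of the template over the image, so the
# result map is (H-h+1) x (W-w+1). Sum-of-squared-differences method:
def match_template_sqdiff(image, templ):
    H, W = image.shape
    h, w = templ.shape
    result = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            result[y, x] = ((image[y:y+h, x:x+w] - templ) ** 2).sum()
    return result

image = np.arange(20, dtype=np.float64).reshape(4, 5)
templ = image[1:3, 2:4].copy()            # a 2x2 patch of the image
result = match_template_sqdiff(image, templ)

assert result.shape == (3, 4)             # (H-h+1) x (W-w+1)
assert result[1, 2] == 0.0                # exact match at the patch origin
```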
Max(IInputArray, IInputArray, IOutputArray)
Calculates per-element maximum of two arrays: dst(I)=max(src1(I), src2(I)) All the arrays must have a single channel, the same data type and the same size (or ROI size).
public static void Max(IInputArray src1, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array.
dst
IOutputArrayThe destination array
Mean(IInputArray, IInputArray)
Calculates the average value M of array elements, independently for each channel: N = number of elements I with mask(I) != 0; Mc = (1/N) * sum of arr(I)c over those I. If the array is IplImage and COI is set, the function processes the selected channel only and stores the average to the first scalar component (S0).
public static MCvScalar Mean(IInputArray arr, IInputArray mask = null)
Parameters
arr
IInputArrayThe array
mask
IInputArrayThe optional operation mask
Returns
- MCvScalar
average (mean) of array elements
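The masked-mean formula above amounts to averaging only the elements where the mask is non-zero, as this NumPy sketch shows (an illustration, not the Emgu CV call):

```python
import numpy as np

# N = #{I : mask(I) != 0}; mean = (1/N) * sum of arr(I) over those I.
arr = np.array([[1.0, 2.0],
                [3.0, 100.0]])
mask = np.array([[1, 1],
                 [1, 0]], dtype=np.uint8)   # exclude the outlier

sel = mask != 0
mean = arr[sel].sum() / sel.sum()

assert mean == 2.0                          # (1 + 2 + 3) / 3
```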
MeanShift(IInputArray, ref Rectangle, MCvTermCriteria)
Iterates to find the object center given its back projection and initial position of search window. The iterations are made until the search window center moves by less than the given value and/or until the function has done the maximum number of iterations.
public static int MeanShift(IInputArray probImage, ref Rectangle window, MCvTermCriteria criteria)
Parameters
probImage
IInputArrayBack projection of object histogram
window
RectangleInitial search window
criteria
MCvTermCriteriaCriteria applied to determine when the window search should be finished.
Returns
- int
The number of iterations made
MeanStdDev(IInputArray, IOutputArray, IOutputArray, IInputArray)
Calculates a mean and standard deviation of array elements.
public static void MeanStdDev(IInputArray arr, IOutputArray mean, IOutputArray stdDev, IInputArray mask = null)
Parameters
arr
IInputArrayInput array that should have from 1 to 4 channels so that the results can be stored in MCvScalar
mean
IOutputArrayCalculated mean value
stdDev
IOutputArrayCalculated standard deviation
mask
IInputArrayOptional operation mask
MeanStdDev(IInputArray, ref MCvScalar, ref MCvScalar, IInputArray)
The function cvAvgSdv calculates the average value and standard deviation of array elements, independently for each channel
public static void MeanStdDev(IInputArray arr, ref MCvScalar mean, ref MCvScalar stdDev, IInputArray mask = null)
Parameters
arr
IInputArrayThe array
mean
MCvScalarPointer to the mean value
stdDev
MCvScalarPointer to the standard deviation
mask
IInputArrayThe optional operation mask
Remarks
If the array is IplImage and COI is set, the function processes the selected channel only and stores the average and standard deviation to the first components of the output scalars (M0 and S0).
MedianBlur(IInputArray, IOutputArray, int)
Blurs an image using the median filter.
public static void MedianBlur(IInputArray src, IOutputArray dst, int ksize)
Parameters
src
IInputArrayInput 1-, 3-, or 4-channel image; when ksize is 3 or 5, the image depth should be CV_8U, CV_16U, or CV_32F, for larger aperture sizes, it can only be CV_8U.
dst
IOutputArrayDestination array of the same size and type as src.
ksize
intAperture linear size; it must be odd and greater than 1, for example: 3, 5, 7 ...
Merge(IInputArrayOfArrays, IOutputArray)
This function is the opposite of cvSplit. If the destination array has N channels then, if the first N input channels are not IntPtr.Zero, they are all copied to the destination array; otherwise, if only a single source channel of the first N is not IntPtr.Zero, this particular channel is copied into the destination array; otherwise an error is raised. The rest of the source channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to insert a single channel into the image.
public static void Merge(IInputArrayOfArrays mv, IOutputArray dst)
Parameters
mv
IInputArrayOfArraysInput vector of matrices to be merged; all the matrices in mv must have the same size and the same depth.
dst
IOutputArrayoutput array of the same size and the same depth as mv[0]; The number of channels will be the total number of channels in the matrix array.
Min(IInputArray, IInputArray, IOutputArray)
Calculates per-element minimum of two arrays: dst(I)=min(src1(I),src2(I)) All the arrays must have a single channel, the same data type and the same size (or ROI size).
public static void Min(IInputArray src1, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
MinAreaRect(IInputArray)
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
public static RotatedRect MinAreaRect(IInputArray points)
Parameters
points
IInputArrayInput vector of 2D points
Returns
- RotatedRect
a circumscribed rectangle of the minimal area for 2D point set
MinAreaRect(PointF[])
Finds the bounding rectangle for the specified array of points
public static RotatedRect MinAreaRect(PointF[] points)
Parameters
points
PointF[]The collection of points
Returns
- RotatedRect
The bounding rectangle for the array of points
MinEnclosingCircle(IInputArray)
Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm.
public static CircleF MinEnclosingCircle(IInputArray points)
Parameters
points
IInputArraySequence or array of 2D points
Returns
- CircleF
The minimal circumscribed circle for 2D point set
MinEnclosingCircle(PointF[])
Finds the minimal circumscribed circle for a 2D point set using an iterative algorithm.
public static CircleF MinEnclosingCircle(PointF[] points)
Parameters
points
PointF[]Sequence or array of 2D points
Returns
- CircleF
The minimal circumscribed circle for 2D point set
MinEnclosingTriangle(IInputArray, IOutputArray)
Finds a triangle of minimum area enclosing a 2D point set and returns its area.
public static double MinEnclosingTriangle(IInputArray points, IOutputArray triangles)
Parameters
points
IInputArrayInput vector of 2D points with depth CV_32S or CV_32F
triangles
IOutputArrayOutput vector of three 2D points defining the vertices of the triangle. The depth of the OutputArray must be CV_32F.
Returns
- double
The triangle's area
MinMaxIdx(IInputArray, out double, out double, int[], int[], IInputArray)
Finds the global minimum and maximum in an array
public static void MinMaxIdx(IInputArray src, out double minVal, out double maxVal, int[] minIdx, int[] maxIdx, IInputArray mask = null)
Parameters
src
IInputArrayInput single-channel array.
minVal
doubleThe returned minimum value
maxVal
doubleThe returned maximum value
minIdx
int[]The returned minimum location
maxIdx
int[]The returned maximum location
mask
IInputArrayThe extremums are searched across the whole array if mask is IntPtr.Zero. Otherwise, the search is performed in the specified array region.
MinMaxLoc(IInputArray, ref double, ref double, ref Point, ref Point, IInputArray)
Finds minimum and maximum element values and their positions. The extremums are searched over the whole array, selected ROI (in case of IplImage) or, if mask is not IntPtr.Zero, in the specified array region. If the array has more than one channel, it must be IplImage with COI set. In case of multi-dimensional arrays, minLoc.X and maxLoc.X will contain raw (linear) positions of the extremums
public static void MinMaxLoc(IInputArray arr, ref double minVal, ref double maxVal, ref Point minLoc, ref Point maxLoc, IInputArray mask = null)
Parameters
arr
IInputArrayThe source array, single-channel or multi-channel with COI set
minVal
doublePointer to returned minimum value
maxVal
doublePointer to returned maximum value
minLoc
PointPointer to returned minimum location
maxLoc
PointPointer to returned maximum location
mask
IInputArrayThe optional mask that is used to select a subarray. Use IntPtr.Zero if not needed
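A NumPy sketch of locating the extrema (an illustration, not the Emgu CV call); note the returned points use the (x, y) column-first convention:

```python
import numpy as np

# Locate the extrema of a single-channel array, as MinMaxLoc does.
arr = np.array([[9.0, 2.0, 7.0],
                [4.0, 1.0, 8.0]])

min_row, min_col = np.unravel_index(arr.argmin(), arr.shape)
max_row, max_col = np.unravel_index(arr.argmax(), arr.shape)

min_loc = (min_col, min_row)    # Point(x, y): column first
max_loc = (max_col, max_row)

assert (arr.min(), min_loc) == (1.0, (1, 1))
assert (arr.max(), max_loc) == (9.0, (0, 0))
```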
MixChannels(IInputArrayOfArrays, IInputOutputArray, int[])
The function cvMixChannels is a generalized form of cvSplit and cvMerge and some forms of cvCvtColor. It can be used to change the order of the planes, add/remove alpha channel, extract or insert a single plane or multiple planes etc.
public static void MixChannels(IInputArrayOfArrays src, IInputOutputArray dst, int[] fromTo)
Parameters
src
IInputArrayOfArraysThe array of input arrays.
dst
IInputOutputArrayThe array of output arrays
fromTo
int[]The array of pairs of indices of the planes copied. fromTo[k*2] is the 0-based index of the input plane, and fromTo[k*2+1] is the index of the output plane, where the continuous numbering of the planes over all the input and over all the output arrays is used. When fromTo[k*2] is negative, the corresponding output plane is filled with 0's.
Remarks
Unlike many other new-style C++ functions in OpenCV, mixChannels requires the output arrays to be pre-allocated before calling the function.
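The fromTo pair semantics can be sketched over separate planes in NumPy (an illustration of the index pairing, not the Emgu CV call):

```python
import numpy as np

# fromTo holds (input plane index, output plane index) pairs over the
# continuous numbering of all input, then all output planes.
def mix_channels(planes, n_out, from_to):
    out = [np.zeros_like(planes[0]) for _ in range(n_out)]
    for k in range(0, len(from_to), 2):
        src_idx, dst_idx = from_to[k], from_to[k + 1]
        if src_idx >= 0:
            out[dst_idx] = planes[src_idx].copy()
        # a negative input index leaves the output plane zero-filled
    return out

planes = [np.full((2, 2), v) for v in (10, 20, 30)]
out = mix_channels(planes, 3, [0, 2, 1, 1, 2, 0])   # reverse plane order

assert [p[0, 0] for p in out] == [30, 20, 10]
```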
Moments(IInputArray, bool)
Calculates spatial and central moments up to the third order and writes them to moments. The moments may then be used to calculate the gravity center of the shape, its area, main axes and various shape characteristics, including the 7 Hu invariants.
public static Moments Moments(IInputArray arr, bool binaryImage = false)
Parameters
arr
IInputArrayImage (1-channel or 3-channel with COI set) or polygon (CvSeq of points or a vector of points)
binaryImage
bool(For images only) If the flag is true, all the zero pixel values are treated as zeroes, all the others are treated as 1s
Returns
- Moments
The moment
MorphologyEx(IInputArray, IOutputArray, MorphOp, IInputArray, Point, int, BorderType, MCvScalar)
Performs advanced morphological transformations.
public static void MorphologyEx(IInputArray src, IOutputArray dst, MorphOp operation, IInputArray kernel, Point anchor, int iterations, BorderType borderType, MCvScalar borderValue)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image.
operation
MorphOpType of morphological operation.
kernel
IInputArrayStructuring element.
anchor
PointAnchor position with the kernel. Negative values mean that the anchor is at the kernel center.
iterations
intNumber of times erosion and dilation are applied.
borderType
BorderTypePixel extrapolation method.
borderValue
MCvScalarBorder value in case of a constant border.
MulSpectrums(IInputArray, IInputArray, IOutputArray, MulSpectrumsType, bool)
Performs per-element multiplication of the two CCS-packed or complex matrices that are results of real or complex Fourier transform.
public static void MulSpectrums(IInputArray src1, IInputArray src2, IOutputArray dst, MulSpectrumsType flags, bool conjB = false)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array of the same type and the same size of the sources
flags
MulSpectrumsTypeOperation flags; currently, the only supported flag is DFT_ROWS, which indicates that each row of src1 and src2 is an independent 1D Fourier spectrum.
conjB
boolOptional flag that conjugates the second input array before the multiplication (true) or not (false).
MulTransposed(IInputArray, IOutputArray, bool, IInputArray, double, DepthType)
Calculates the product of src and its transposition. The function evaluates dst=scale*(src-delta)*(src-delta)^T if aTa is false, and dst=scale*(src-delta)^T*(src-delta) otherwise.
public static void MulTransposed(IInputArray src, IOutputArray dst, bool aTa, IInputArray delta = null, double scale = 1, DepthType dtype = DepthType.Default)
Parameters
src
IInputArrayThe source matrix
dst
IOutputArrayThe destination matrix
aTa
boolOrder of multipliers
delta
IInputArrayAn optional array, subtracted from src before the multiplication
scale
doubleAn optional scaling factor
dtype
DepthTypeOptional depth type of the output array
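The two orders of multiplication can be sketched in NumPy (an illustration of the formula, not the Emgu CV call); note how the output shape depends on the order:

```python
import numpy as np

# dst = scale * (src - delta)^T @ (src - delta) when a_t_a is true,
# and scale * (src - delta) @ (src - delta)^T otherwise.
def mul_transposed(src, a_t_a, delta=0.0, scale=1.0):
    d = src - delta
    return scale * (d.T @ d if a_t_a else d @ d.T)

src = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [5.0, 6.0]])                       # 3x2 input

assert mul_transposed(src, True).shape == (2, 2)   # src^T @ src
assert mul_transposed(src, False).shape == (3, 3)  # src @ src^T
```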
Multiply(IInputArray, IInputArray, IOutputArray, double, DepthType)
Calculates per-element product of two arrays: dst(I)=scale*src1(I)*src2(I) All the arrays must have the same type, and the same size (or ROI size)
public static void Multiply(IInputArray src1, IInputArray src2, IOutputArray dst, double scale = 1, DepthType dtype = DepthType.Default)
Parameters
src1
IInputArrayThe first source array.
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
scale
doubleOptional scale factor
dtype
DepthTypeOptional depth of the output array
NamedWindow(string, WindowFlags)
Creates a window which can be used as a placeholder for images and trackbars. Created windows are referred to by their names. If a window with the same name already exists, the function does nothing.
public static void NamedWindow(string name, WindowFlags flags = WindowFlags.AutoSize)
Parameters
name
stringName of the window which is used as window identifier and appears in the window caption
flags
WindowFlagsFlags of the window.
Norm(IInputArray, NormType, IInputArray)
Returns the calculated norm. Multiple-channel arrays are treated as single-channel; that is, the results for all channels are combined.
public static double Norm(IInputArray arr1, NormType normType = NormType.L2, IInputArray mask = null)
Parameters
arr1
IInputArrayThe first source image
normType
NormTypeType of norm
mask
IInputArrayThe optional operation mask
Returns
- double
The calculated norm
Norm(IInputArray, IInputOutputArray, NormType, IInputArray)
Returns the calculated norm. Multiple-channel arrays are treated as single-channel; that is, the results for all channels are combined.
public static double Norm(IInputArray arr1, IInputOutputArray arr2, NormType normType = NormType.L2, IInputArray mask = null)
Parameters
arr1
IInputArrayThe first source image
arr2
IInputOutputArrayThe second source image. If it is null, the absolute norm of arr1 is calculated, otherwise absolute or relative norm of arr1-arr2 is calculated
normType
NormTypeType of norm
mask
IInputArrayThe optional operation mask
Returns
- double
The calculated norm
Normalize(IInputArray, IOutputArray, double, double, NormType, DepthType, IInputArray)
Normalizes the input array so that its norm or value range takes a certain value(s).
public static void Normalize(IInputArray src, IOutputArray dst, double alpha = 1, double beta = 0, NormType normType = NormType.L2, DepthType dType = DepthType.Default, IInputArray mask = null)
Parameters
src
IInputArrayThe input array
dst
IOutputArrayThe output array; in-place operation is supported
alpha
doubleThe minimum/maximum value of the output array or the norm of output array
beta
doubleThe maximum/minimum value of the output array
normType
NormTypeThe normalization type
dType
DepthTypeOptional depth type for the dst array
mask
IInputArrayThe operation mask. Makes the function consider and normalize only certain array elements
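For the min-max normalization type, alpha and beta become the output range bounds; a NumPy sketch of that case (an illustration of the arithmetic, not the Emgu CV call):

```python
import numpy as np

# Range normalization (the MinMax norm type): stretch values so the
# output spans [beta, alpha]. With an L2 norm type, the array would
# instead be scaled so its L2 norm equals alpha.
def normalize_minmax(src, alpha=1.0, beta=0.0):
    lo, hi = src.min(), src.max()
    return (src - lo) * (alpha - beta) / (hi - lo) + beta

src = np.array([2.0, 4.0, 6.0])
dst = normalize_minmax(src, alpha=1.0, beta=0.0)

assert np.allclose(dst, [0.0, 0.5, 1.0])
```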
OclFinish()
Finishes OpenCL queue.
public static void OclFinish()
OclGetPlatformsSummary()
Get the OpenCL platform summary as a string
public static string OclGetPlatformsSummary()
Returns
- string
An OpenCL platform summary
OclSetDefaultDevice(string)
Set the default opencl device
public static void OclSetDefaultDevice(string deviceName)
Parameters
deviceName
stringThe name of the opencl device
PCABackProject(IInputArray, IInputArray, IInputArray, IOutputArray)
Reconstructs vectors from their PC projections.
public static void PCABackProject(IInputArray data, IInputArray mean, IInputArray eigenvectors, IOutputArray result)
Parameters
data
IInputArrayCoordinates of the vectors in the principal component subspace
mean
IInputArrayThe mean.
eigenvectors
IInputArrayThe eigenvectors.
result
IOutputArrayThe result.
PCACompute(IInputArray, IInputOutputArray, IOutputArray, double)
Performs Principal Component Analysis of the supplied dataset.
public static void PCACompute(IInputArray data, IInputOutputArray mean, IOutputArray eigenvectors, double retainedVariance)
Parameters
data
IInputArrayInput samples stored as the matrix rows or as the matrix columns.
mean
IInputOutputArrayOptional mean value; if the matrix is empty, the mean is computed from the data.
eigenvectors
IOutputArrayThe eigenvectors.
retainedVariance
doublePercentage of variance that PCA should retain. Using this parameter lets PCA decide how many components to retain, but it will always keep at least 2.
PCACompute(IInputArray, IInputOutputArray, IOutputArray, int)
Performs Principal Component Analysis of the supplied dataset.
public static void PCACompute(IInputArray data, IInputOutputArray mean, IOutputArray eigenvectors, int maxComponents = 0)
Parameters
data
IInputArrayInput samples stored as the matrix rows or as the matrix columns.
mean
IInputOutputArrayOptional mean value; if the matrix is empty, the mean is computed from the data.
eigenvectors
IOutputArrayThe eigenvectors.
maxComponents
intMaximum number of components that PCA should retain; by default, all the components are retained.
PCAProject(IInputArray, IInputArray, IInputArray, IOutputArray)
Projects vector(s) to the principal component subspace.
public static void PCAProject(IInputArray data, IInputArray mean, IInputArray eigenvectors, IOutputArray result)
Parameters
data
IInputArrayInput vector(s); must have the same dimensionality and the same layout as the input data used at PCA phase
mean
IInputArrayThe mean.
eigenvectors
IInputArrayThe eigenvectors.
result
IOutputArrayThe result.
PSNR(IInputArray, IInputArray)
Computes PSNR image/video quality metric
public static double PSNR(IInputArray src1, IInputArray src2)
Parameters
src1
IInputArrayThe first source image
src2
IInputArrayThe second source image
Returns
- double
the quality metric
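The standard PSNR computation for 8-bit images, 10*log10(255^2 / MSE), can be sketched in NumPy (an illustration of the metric, not the Emgu CV call):

```python
import numpy as np

# PSNR for 8-bit images: 10 * log10(max_val^2 / MSE), where MSE is
# the mean squared error between the two images.
def psnr(src1, src2, max_val=255.0):
    mse = np.mean((src1.astype(np.float64) - src2.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 16, dtype=np.uint8)   # uniform error of 16
c = np.full((4, 4), 32, dtype=np.uint8)   # uniform error of 32

assert psnr(a, c) < psnr(a, b)            # larger error -> lower PSNR
```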
PatchNaNs(IInputOutputArray, double)
Converts NaN's to the given number
public static void PatchNaNs(IInputOutputArray a, double val = 0)
Parameters
a
IInputOutputArrayThe array where NaN needs to be converted
val
doubleThe value to convert to
PencilSketch(IInputArray, IOutputArray, IOutputArray, float, float, float)
Pencil-like non-photorealistic line drawing
public static void PencilSketch(IInputArray src, IOutputArray dst1, IOutputArray dst2, float sigmaS = 60, float sigmaR = 0.07, float shadeFactor = 0.02)
Parameters
src
IInputArrayInput 8-bit 3-channel image
dst1
IOutputArrayOutput 8-bit 1-channel image
dst2
IOutputArrayOutput image with the same size and type as src
sigmaS
floatRange between 0 to 200
sigmaR
floatRange between 0 to 1
shadeFactor
floatRange between 0 to 0.1
PerspectiveTransform(IInputArray, IOutputArray, IInputArray)
Transforms every element of src (by treating it as 2D or 3D vector) in the following way: (x, y, z) -> (x'/w, y'/w, z'/w) or (x, y) -> (x'/w, y'/w), where (x', y', z', w') = mat4x4 * (x, y, z, 1) or (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise
public static void PerspectiveTransform(IInputArray src, IOutputArray dst, IInputArray mat)
Parameters
src
IInputArrayThe source three-channel floating-point array
dst
IOutputArrayThe destination three-channel floating-point array
mat
IInputArray3x3 or 4x4 floating-point transformation matrix.
PerspectiveTransform(PointF[], IInputArray)
Transforms every element of src in the following way: (x, y) -> (x'/w, y'/w), where (x', y', w') = mat3x3 * (x, y, 1) and w = w' if w'!=0, inf otherwise
public static PointF[] PerspectiveTransform(PointF[] src, IInputArray mat)
Parameters
src
PointF[]The source points
mat
IInputArray3x3 floating-point transformation matrix.
Returns
- PointF[]
The destination points
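For the 2D case, the transform multiplies each point by the 3x3 matrix and divides by the resulting w. A pure-Python sketch of that arithmetic (an illustration of the formula, not the Emgu CV call):

```python
def perspective_transform(points, m):
    """Apply a 3x3 homography to 2D points: (x, y) -> (x'/w, y'/w),
    with (x', y', w') = m * (x, y, 1); w'=0 maps to infinity."""
    out = []
    for x, y in points:
        xp = m[0][0] * x + m[0][1] * y + m[0][2]
        yp = m[1][0] * x + m[1][1] * y + m[1][2]
        w = m[2][0] * x + m[2][1] * y + m[2][2]
        out.append((xp / w, yp / w) if w != 0 else (float("inf"), float("inf")))
    return out

# A pure translation by (5, -2) leaves w = 1 for every point
h = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]
print(perspective_transform([(0, 0), (1, 1)], h))  # [(5.0, -2.0), (6.0, -1.0)]
```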
PhaseCorrelate(IInputArray, IInputArray, IInputArray, out double)
The function is used to detect translational shifts that occur between two images. The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation.
public static MCvPoint2D64f PhaseCorrelate(IInputArray src1, IInputArray src2, IInputArray window, out double response)
Parameters
src1
IInputArraySource floating point array (CV_32FC1 or CV_64FC1)
src2
IInputArraySource floating point array (CV_32FC1 or CV_64FC1)
window
IInputArrayFloating point array with windowing coefficients to reduce edge effects (optional).
response
doubleSignal power within the 5x5 centroid around the peak, between 0 and 1
Returns
- MCvPoint2D64f
The translational shifts that occur between two images
PointPolygonTest(IInputArray, PointF, bool)
Determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). It returns a positive, negative or zero value, correspondingly
public static double PointPolygonTest(IInputArray contour, PointF pt, bool measureDist)
Parameters
contour
IInputArrayInput contour
pt
PointFThe point tested against the contour
measureDist
boolIf true, the function estimates the signed distance from the point to the nearest contour edge
Returns
- double
When measureDist = false, the return value is >0 (inside), <0 (outside) and =0 (on edge), respectively. When measureDist = true, it is a signed distance between the point and the nearest contour edge
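The sign-only behaviour (measureDist = false) can be approximated with an even-odd ray-casting test. The sketch below is a simplified pure-Python stand-in that omits the on-edge case:

```python
def point_polygon_sign(contour, pt):
    """Even-odd ray-casting test: returns 1 if pt is inside the polygon,
    -1 if outside. A simplified stand-in for the measureDist=false case;
    the =0 (on-edge) result is not handled here."""
    x, y = pt
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray through pt
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xc:
                inside = not inside
    return 1 if inside else -1

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_polygon_sign(square, (2, 2)))  # 1  (inside)
print(point_polygon_sign(square, (5, 5)))  # -1 (outside)
```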
PolarToCart(IInputArray, IInputArray, IOutputArray, IOutputArray, bool)
Calculates either x-coordinate, y-coordinate or both of every vector magnitude(I)* exp(angle(I)*j), j=sqrt(-1): x(I)=magnitude(I)*cos(angle(I)), y(I)=magnitude(I)*sin(angle(I))
public static void PolarToCart(IInputArray magnitude, IInputArray angle, IOutputArray x, IOutputArray y, bool angleInDegrees = false)
Parameters
magnitude
IInputArrayInput floating-point array of magnitudes of 2D vectors; it can be an empty matrix (=Mat()), in this case, the function assumes that all the magnitudes are =1; if it is not empty, it must have the same size and type as angle
angle
IInputArrayinput floating-point array of angles of 2D vectors.
x
IOutputArrayOutput array of x-coordinates of 2D vectors; it has the same size and type as angle.
y
IOutputArrayOutput array of y-coordinates of 2D vectors; it has the same size and type as angle.
angleInDegrees
boolThe flag indicating whether the angles are measured in radians or in degrees
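A pure-Python sketch of the per-element formula x(I) = magnitude(I)·cos(angle(I)), y(I) = magnitude(I)·sin(angle(I)) (illustrative, not the Emgu CV API):

```python
import math

def polar_to_cart(magnitude, angle, angle_in_degrees=False):
    """Per-element polar-to-Cartesian conversion, mirroring the
    formulas documented for PolarToCart."""
    xs, ys = [], []
    for m, a in zip(magnitude, angle):
        if angle_in_degrees:
            a = math.radians(a)
        xs.append(m * math.cos(a))
        ys.append(m * math.sin(a))
    return xs, ys

xs, ys = polar_to_cart([1.0, 2.0], [0.0, 90.0], angle_in_degrees=True)
print(xs, ys)  # approximately [1.0, 0.0] and [0.0, 2.0]
```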
PollKey()
Polls for a key event without waiting.
public static int PollKey()
Returns
- int
The code of the pressed key or -1 if no key was pressed since the last invocation.
Polylines(IInputOutputArray, IInputArray, bool, MCvScalar, int, LineType, int)
Draws a single or multiple polygonal curves
public static void Polylines(IInputOutputArray img, IInputArray pts, bool isClosed, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage
pts
IInputArrayArray of pointers to polylines
isClosed
boolIndicates whether the polylines must be drawn closed. If true, the function draws the line from the last vertex of every contour to the first vertex.
color
MCvScalarPolyline color
thickness
intThickness of the polyline edges
lineType
LineTypeType of the line segments, see cvLine description
shift
intNumber of fractional bits in the vertex coordinates
Polylines(IInputOutputArray, Point[], bool, MCvScalar, int, LineType, int)
Draws a single or multiple polygonal curves
public static void Polylines(IInputOutputArray img, Point[] pts, bool isClosed, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage
pts
Point[]Array points
isClosed
boolIndicates whether the polylines must be drawn closed. If true, the function draws the line from the last vertex of every contour to the first vertex.
color
MCvScalarPolyline color
thickness
intThickness of the polyline edges
lineType
LineTypeType of the line segments, see cvLine description
shift
intNumber of fractional bits in the vertex coordinates
Pow(IInputArray, double, IOutputArray)
Raises every element of the input array to p: dst(I)=src(I)^p if p is an integer; dst(I)=|src(I)|^p otherwise. That is, for a non-integer power exponent the absolute values of input array elements are used. However, it is possible to get true values for negative inputs using some extra operations, as the following sample, computing the cube root of array elements, shows: CvSize size = cvGetSize(src); CvMat* mask = cvCreateMat( size.height, size.width, CV_8UC1 ); cvCmpS( src, 0, mask, CV_CMP_LT ); /* find negative elements */ cvPow( src, dst, 1./3 ); cvSubRS( dst, cvScalarAll(0), dst, mask ); /* negate the results of negative inputs */ cvReleaseMat( &mask ); For some values of power, such as integer values, 0.5 and -0.5, specialized faster algorithms are used.
public static void Pow(IInputArray src, double power, IOutputArray dst)
Parameters
src
IInputArrayThe source array
power
doubleThe exponent of power
dst
IOutputArrayThe destination array, should be the same type as the source
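The integer/non-integer exponent convention can be sketched in pure Python (an illustration of the rule, not the Emgu CV call):

```python
def cv_pow(src, power):
    """dst(I) = src(I)^p when p is an integer, |src(I)|^p otherwise,
    matching the convention documented for Pow."""
    if power == int(power):
        return [x ** int(power) for x in src]
    return [abs(x) ** power for x in src]

print(cv_pow([-2.0, 3.0], 2))      # [4.0, 9.0]
print(cv_pow([-8.0, 27.0], 1 / 3))  # absolute values are raised, so both positive
```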
ProjectPoints(IInputArray, IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, double)
Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters. Note that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just the extrinsic transformation or just the intrinsic transformation (i.e. distortion of a sparse set of points).
public static void ProjectPoints(IInputArray objectPoints, IInputArray rvec, IInputArray tvec, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray imagePoints, IOutputArray jacobian = null, double aspectRatio = 0)
Parameters
objectPoints
IInputArrayThe array of object points, 3xN or Nx3, where N is the number of points in the view
rvec
IInputArrayThe rotation vector, 1x3 or 3x1
tvec
IInputArrayThe translation vector, 1x3 or 3x1
cameraMatrix
IInputArrayThe camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].
distCoeffs
IInputArrayThe vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's
imagePoints
IOutputArrayThe output array of image points, 2xN or Nx2, where N is the total number of points in the view
jacobian
IOutputArrayOptional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
aspectRatio
doubleAspect ratio
ProjectPoints(MCvPoint3D32f[], IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, double)
Computes projections of 3D points to the image plane given intrinsic and extrinsic camera parameters. Optionally, the function computes jacobians - matrices of partial derivatives of image points as functions of all the input parameters w.r.t. the particular parameters, intrinsic and/or extrinsic. The jacobians are used during the global optimization in cvCalibrateCamera2 and cvFindExtrinsicCameraParams2. The function itself is also used to compute the back-projection error with the current intrinsic and extrinsic parameters.
public static PointF[] ProjectPoints(MCvPoint3D32f[] objectPoints, IInputArray rvec, IInputArray tvec, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray jacobian = null, double aspectRatio = 0)
Parameters
objectPoints
MCvPoint3D32f[]The array of object points.
rvec
IInputArrayThe rotation vector, 1x3 or 3x1
tvec
IInputArrayThe translation vector, 1x3 or 3x1
cameraMatrix
IInputArrayThe camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].
distCoeffs
IInputArrayThe vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's
jacobian
IOutputArrayOptional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.
aspectRatio
doubleAspect ratio
Returns
- PointF[]
The output array of image points, 2xN or Nx2, where N is the total number of points in the view
Remarks
Note, that with intrinsic and/or extrinsic parameters set to special values, the function can be used to compute just extrinsic transformation or just intrinsic transformation (i.e. distortion of a sparse set of points)
PutText(IInputOutputArray, string, Point, FontFace, double, MCvScalar, int, LineType, bool)
Renders the text in the image with the specified font and color. The printed text is clipped by ROI rectangle. Symbols that do not belong to the specified font are replaced with the rectangle symbol.
public static void PutText(IInputOutputArray img, string text, Point org, FontFace fontFace, double fontScale, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, bool bottomLeftOrigin = false)
Parameters
img
IInputOutputArrayInput image
text
stringString to print
org
PointCoordinates of the bottom-left corner of the first letter
fontFace
FontFaceFont type.
fontScale
doubleFont scale factor that is multiplied by the font-specific base size.
color
MCvScalarText color
thickness
intThickness of the lines used to draw a text.
lineType
LineTypeLine type
bottomLeftOrigin
boolWhen true, the image data origin is at the bottom-left corner. Otherwise, it is at the top-left corner.
PyrDown(IInputArray, IOutputArray, BorderType)
Performs downsampling step of Gaussian pyramid decomposition. First it convolves source image with the specified filter and then downsamples the image by rejecting even rows and columns.
public static void PyrDown(IInputArray src, IOutputArray dst, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayThe source image.
dst
IOutputArrayThe destination image, should have 2x smaller width and height than the source.
borderType
BorderTypeBorder type
PyrMeanShiftFiltering(IInputArray, IOutputArray, double, double, int, MCvTermCriteria)
Filters image using meanshift algorithm
public static void PyrMeanShiftFiltering(IInputArray src, IOutputArray dst, double sp, double sr, int maxLevel, MCvTermCriteria termcrit)
Parameters
src
IInputArraySource image
dst
IOutputArrayResult image
sp
doubleThe spatial window radius.
sr
doubleThe color window radius.
maxLevel
intMaximum level of the pyramid for the segmentation. Use 1 as default value
termcrit
MCvTermCriteriaTermination criteria: when to stop meanshift iterations. Use new MCvTermCriteria(5, 1) as default value
PyrUp(IInputArray, IOutputArray, BorderType)
Performs up-sampling step of Gaussian pyramid decomposition. First it upsamples the source image by injecting even zero rows and columns and then convolves result with the specified filter multiplied by 4 for interpolation. So the destination image is four times larger than the source image.
public static void PyrUp(IInputArray src, IOutputArray dst, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayThe source image.
dst
IOutputArrayThe destination image, should have 2x larger width and height than the source.
borderType
BorderTypeBorder type
RQDecomp3x3(IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray)
Computes an RQ decomposition of 3x3 matrices.
public static MCvPoint3D64f RQDecomp3x3(IInputArray src, IOutputArray mtxR, IOutputArray mtxQ, IOutputArray Qx = null, IOutputArray Qy = null, IOutputArray Qz = null)
Parameters
src
IInputArray3x3 input matrix.
mtxR
IOutputArrayOutput 3x3 upper-triangular matrix.
mtxQ
IOutputArrayOutput 3x3 orthogonal matrix.
Qx
IOutputArrayOptional output 3x3 rotation matrix around x-axis.
Qy
IOutputArrayOptional output 3x3 rotation matrix around y-axis.
Qz
IOutputArrayOptional output 3x3 rotation matrix around z-axis.
Returns
- MCvPoint3D64f
The euler angles
RandShuffle(IInputOutputArray, double, ulong)
Shuffles the matrix by swapping randomly chosen pairs of the matrix elements on each iteration (where each element may contain several components in case of multi-channel arrays)
public static void RandShuffle(IInputOutputArray mat, double iterFactor, ulong rng)
Parameters
mat
IInputOutputArrayThe input/output matrix. It is shuffled in-place.
iterFactor
doubleThe relative parameter that characterizes intensity of the shuffling performed. The number of iterations (i.e. pairs swapped) is round(iter_factor*rows(mat)*cols(mat)), so iter_factor=0 means that no shuffling is done, iter_factor=1 means that the function swaps rows(mat)*cols(mat) random pairs etc
rng
ulongPointer to MCvRNG random number generator. Use 0 if not sure
Randn(IInputOutputArray, IInputArray, IInputArray)
Fills the array with normally distributed random numbers.
public static void Randn(IInputOutputArray dst, IInputArray mean, IInputArray stddev)
Parameters
dst
IInputOutputArrayOutput array of random numbers; the array must be pre-allocated and have 1 to 4 channels.
mean
IInputArrayMean value (expectation) of the generated random numbers.
stddev
IInputArrayStandard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix.
Randn(IInputOutputArray, MCvScalar, MCvScalar)
Fills the array with normally distributed random numbers.
public static void Randn(IInputOutputArray dst, MCvScalar mean, MCvScalar stddev)
Parameters
dst
IInputOutputArrayOutput array of random numbers; the array must be pre-allocated and have 1 to 4 channels.
mean
MCvScalarMean value (expectation) of the generated random numbers.
stddev
MCvScalarStandard deviation of the generated random numbers; it can be either a vector (in which case a diagonal standard deviation matrix is assumed) or a square matrix.
Randu(IInputOutputArray, IInputArray, IInputArray)
Generates a single uniformly-distributed random number or an array of random numbers.
public static void Randu(IInputOutputArray dst, IInputArray low, IInputArray high)
Parameters
dst
IInputOutputArrayOutput array of random numbers; the array must be pre-allocated.
low
IInputArrayInclusive lower boundary of the generated random numbers.
high
IInputArrayExclusive upper boundary of the generated random numbers.
Randu(IInputOutputArray, MCvScalar, MCvScalar)
Generates a single uniformly-distributed random number or an array of random numbers.
public static void Randu(IInputOutputArray dst, MCvScalar low, MCvScalar high)
Parameters
dst
IInputOutputArrayOutput array of random numbers; the array must be pre-allocated.
low
MCvScalarInclusive lower boundary of the generated random numbers.
high
MCvScalarExclusive upper boundary of the generated random numbers.
ReadCloud(string, IOutputArray, IOutputArray)
Read point cloud from file
public static Mat ReadCloud(string file, IOutputArray colors = null, IOutputArray normals = null)
Parameters
file
stringThe point cloud file
colors
IOutputArrayThe color of the points
normals
IOutputArrayThe normal of the points
Returns
- Mat
The points
Rectangle(IInputOutputArray, Rectangle, MCvScalar, int, LineType, int)
Draws a rectangle specified by a CvRect structure
public static void Rectangle(IInputOutputArray img, Rectangle rect, MCvScalar color, int thickness = 1, LineType lineType = LineType.EightConnected, int shift = 0)
Parameters
img
IInputOutputArrayImage
rect
RectangleThe rectangle to be drawn
color
MCvScalarLine color
thickness
intThickness of lines that make up the rectangle. Negative values make the function to draw a filled rectangle.
lineType
LineTypeType of the line
shift
intNumber of fractional bits in the point coordinates
RedirectError(CvErrorCallback, nint, nint)
Sets a new error handler that can be one of the standard handlers or a custom handler with a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision.
public static extern nint RedirectError(CvInvoke.CvErrorCallback errorHandler, nint userdata, nint prevUserdata)
Parameters
errorHandler
CvInvoke.CvErrorCallbackThe new error handler
userdata
nintArbitrary pointer that is transparently passed to the error handler.
prevUserdata
nintPointer to the previously assigned user data pointer.
Returns
- nint
Pointer to the old error handler
RedirectError(nint, nint, nint)
Sets a new error handler that can be one of the standard handlers or a custom handler with a certain interface. The handler takes the same parameters as the cvError function. If the handler returns a non-zero value, the program is terminated; otherwise, it continues. The error handler may check the current error mode with cvGetErrMode to make a decision.
public static extern nint RedirectError(nint errorHandler, nint userdata, nint prevUserdata)
Parameters
errorHandler
nintPointer to the new error handler
userdata
nintArbitrary pointer that is transparently passed to the error handler.
prevUserdata
nintPointer to the previously assigned user data pointer.
Returns
- nint
Pointer to the old error handler
Reduce(IInputArray, IOutputArray, ReduceDimension, ReduceType, DepthType)
Reduces matrix to a vector by treating the matrix rows/columns as a set of 1D vectors and performing the specified operation on the vectors until a single row/column is obtained.
public static void Reduce(IInputArray src, IOutputArray dst, ReduceDimension dim = ReduceDimension.Auto, ReduceType type = ReduceType.ReduceSum, DepthType dtype = DepthType.Default)
Parameters
src
IInputArrayThe input matrix
dst
IOutputArrayThe output single-row/single-column vector that accumulates somehow all the matrix rows/columns
dim
ReduceDimensionThe dimension index along which the matrix is reduce.
type
ReduceTypeThe reduction operation type
dtype
DepthTypeOptional depth type of the output array
Remarks
The function can be used to compute horizontal and vertical projections of a raster image. In the case of CV_REDUCE_SUM and CV_REDUCE_AVG the output may have a larger element bit-depth to preserve accuracy. Multi-channel arrays are also supported in these two reduction modes
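A pure-Python sketch of the ReduceSum case, collapsing the matrix to a single row (dim = 0) or a single column (dim = 1); illustrative only, not the Emgu CV call:

```python
def reduce_sum(src, dim):
    """Reduce a 2D matrix by summing: dim=0 collapses all rows into one
    row, dim=1 collapses all columns into one column."""
    if dim == 0:
        return [sum(col) for col in zip(*src)]  # one row of column sums
    return [sum(row) for row in src]            # one column of row sums

m = [[1, 2], [3, 4]]
print(reduce_sum(m, 0))  # [4, 6]
print(reduce_sum(m, 1))  # [3, 7]
```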
Remap(IInputArray, IOutputArray, IInputArray, IInputArray, Inter, BorderType, MCvScalar)
Applies a generic geometrical transformation to an image.
public static void Remap(IInputArray src, IOutputArray dst, IInputArray map1, IInputArray map2, Inter interpolation, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
map1
IInputArrayThe first map of either (x,y) points or just x values having the type CV_16SC2 , CV_32FC1 , or CV_32FC2 . See convertMaps() for details on converting a floating point representation to fixed-point for speed.
map2
IInputArrayThe second map of y values having the type CV_16UC1 , CV_32FC1 , or none (empty map if map1 is (x,y) points), respectively.
interpolation
InterInterpolation method (see resize() ). The method 'Area' is not supported by this function.
borderMode
BorderTypePixel extrapolation method
borderValue
MCvScalarA value used to fill outliers
Repeat(IInputArray, int, int, IOutputArray)
Fills the destination array with the tiled source array: dst(i,j)=src(i mod rows(src), j mod cols(src)). The destination array may be larger or smaller than the source array
public static void Repeat(IInputArray src, int ny, int nx, IOutputArray dst)
Parameters
src
IInputArraySource array, image or matrix
ny
intFlag to specify how many times the src is repeated along the vertical axis.
nx
intFlag to specify how many times the src is repeated along the horizontal axis.
dst
IOutputArrayDestination array, image or matrix
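A pure-Python sketch of the tiling rule dst(i,j) = src(i mod rows, j mod cols) (an illustration of the formula, not the Emgu CV call):

```python
def repeat(src, ny, nx):
    """Tile a 2D list ny times vertically and nx times horizontally,
    following dst[i][j] = src[i % rows][j % cols]."""
    rows, cols = len(src), len(src[0])
    return [[src[i % rows][j % cols] for j in range(cols * nx)]
            for i in range(rows * ny)]

print(repeat([[1, 2]], 2, 2))  # [[1, 2, 1, 2], [1, 2, 1, 2]]
```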
ReprojectImageTo3D(IInputArray, IOutputArray, IInputArray, bool, DepthType)
Transforms 1-channel disparity map to 3-channel image, a 3D surface.
public static void ReprojectImageTo3D(IInputArray disparity, IOutputArray image3D, IInputArray q, bool handleMissingValues = false, DepthType ddepth = DepthType.Default)
Parameters
disparity
IInputArrayDisparity map
image3D
IOutputArray3-channel, 16-bit integer or 32-bit floating-point image - the output map of 3D points
q
IInputArrayThe reprojection 4x4 matrix, can be arbitrary, e.g. the one, computed by cvStereoRectify
handleMissingValues
boolIndicates, whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed to 3D points with a very large Z value (currently set to 10000).
ddepth
DepthTypeThe optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F.
Resize(IInputArray, IOutputArray, Size, double, double, Inter)
Resizes the image src down to or up to the specified size
public static void Resize(IInputArray src, IOutputArray dst, Size dsize, double fx = 0, double fy = 0, Inter interpolation = Inter.Linear)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image
dsize
SizeOutput image size; if it equals zero, it is computed as: dsize=Size(round(fx*src.cols), round(fy * src.rows)). Either dsize or both fx and fy must be non-zero.
fx
doubleScale factor along the horizontal axis
fy
doubleScale factor along the vertical axis;
interpolation
InterInterpolation method
ResizeForFrame(IInputArray, IOutputArray, Size, Inter, bool)
Resize an image such that it fits in a given frame, keeping the aspect ratio.
public static void ResizeForFrame(IInputArray src, IOutputArray dst, Size frameSize, Inter interpolationMethod = Inter.Linear, bool scaleDownOnly = true)
Parameters
src
IInputArrayThe source image
dst
IOutputArrayThe result image
frameSize
SizeThe size of the frame
interpolationMethod
InterThe interpolation method
scaleDownOnly
boolIf true, it will not try to scale up the image to fit the frame
Rodrigues(IInputArray, IOutputArray, IOutputArray)
Converts a rotation vector to a rotation matrix or vice versa. A rotation vector is a compact representation of a rotation matrix: its direction is the rotation axis and its length is the rotation angle around that axis.
public static void Rodrigues(IInputArray src, IOutputArray dst, IOutputArray jacobian = null)
Parameters
src
IInputArrayThe input rotation vector (3x1 or 1x3) or rotation matrix (3x3).
dst
IOutputArrayThe output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively
jacobian
IOutputArrayOptional output Jacobian matrix, 3x9 or 9x3 - partial derivatives of the output array components w.r.t the input array components
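The vector-to-matrix direction follows the Rodrigues formula R = I + sin(θ)·K + (1-cos(θ))·K², where θ is the vector length and K is the cross-product matrix of the unit axis. A pure-Python sketch of that formula (illustrative, not the Emgu CV call, and without the Jacobian):

```python
import math

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix via the Rodrigues formula."""
    t = math.sqrt(sum(v * v for v in rvec))
    if t == 0:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # zero rotation -> identity
    kx, ky, kz = (v / t for v in rvec)            # unit rotation axis
    c, s = math.cos(t), math.sin(t)
    return [
        [c + kx * kx * (1 - c),      kx * ky * (1 - c) - kz * s,  kx * kz * (1 - c) + ky * s],
        [ky * kx * (1 - c) + kz * s, c + ky * ky * (1 - c),       ky * kz * (1 - c) - kx * s],
        [kz * kx * (1 - c) - ky * s, kz * ky * (1 - c) + kx * s,  c + kz * kz * (1 - c)],
    ]

# A 90-degree rotation about z maps the x-axis onto the y-axis
r = rodrigues([0.0, 0.0, math.pi / 2])
print([round(v, 6) for v in r[0]])  # [0.0, -1.0, 0.0]
```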
Rotate(IInputArray, IOutputArray, RotateFlags)
Rotates a 2D array in multiples of 90 degrees.
public static void Rotate(IInputArray src, IOutputArray dst, RotateFlags rotateCode)
Parameters
src
IInputArrayInput array.
dst
IOutputArrayOutput array of the same type as src. The size is the same with ROTATE_180, and the rows and cols are switched for ROTATE_90 and ROTATE_270.
rotateCode
RotateFlagsA flag to specify how to rotate the array
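The 90-degree clockwise case amounts to reversing the rows and transposing, which is why rows and cols are switched in the output. A pure-Python sketch on a 2D list (one case of the rotate codes, illustrative only):

```python
def rotate90_cw(src):
    """Rotate a 2D list 90 degrees clockwise: reverse the rows, then
    transpose. The output has rows and cols switched."""
    return [list(row) for row in zip(*src[::-1])]

m = [[1, 2, 3],
     [4, 5, 6]]
print(rotate90_cw(m))  # [[4, 1], [5, 2], [6, 3]]
```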
RotatedRectangleIntersection(RotatedRect, RotatedRect, IOutputArray)
Finds out if there is any intersection between two rotated rectangles.
public static RectIntersectType RotatedRectangleIntersection(RotatedRect rect1, RotatedRect rect2, IOutputArray intersectingRegion)
Parameters
rect1
RotatedRectFirst rectangle
rect2
RotatedRectSecond rectangle
intersectingRegion
IOutputArrayThe output array of the vertices of the intersecting region. It returns at most 8 vertices. Stored as VectorOfPointF or Mat as Mx1 of type CV_32FC2.
Returns
- RectIntersectType
The intersect type
SVBackSubst(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)
Performs a singular value back substitution.
public static void SVBackSubst(IInputArray w, IInputArray u, IInputArray vt, IInputArray rhs, IOutputArray dst)
Parameters
w
IInputArraySingular values
u
IInputArrayLeft singular vectors
vt
IInputArrayTransposed matrix of right singular vectors.
rhs
IInputArrayRight-hand side of a linear system
dst
IOutputArrayFound solution of the system.
SVDecomp(IInputArray, IOutputArray, IOutputArray, IOutputArray, SvdFlag)
Decomposes matrix A into a product of a diagonal matrix and two orthogonal matrices: A = U W V^T, where W is a diagonal matrix of singular values that can be coded as a 1D vector of singular values, and U and V are orthogonal matrices. All the singular values are non-negative and sorted (together with U and V columns) in descending order.
public static void SVDecomp(IInputArray src, IOutputArray w, IOutputArray u, IOutputArray v, SvdFlag flags)
Parameters
src
IInputArraySource MxN matrix
w
IOutputArrayResulting singular value matrix (MxN or NxN) or vector (Nx1).
u
IOutputArrayOptional left orthogonal matrix (MxM or MxN). If CV_SVD_U_T is specified, the number of rows and columns in the sentence above should be swapped
v
IOutputArrayOptional right orthogonal matrix (NxN)
flags
SvdFlagOperation flags
Remarks
SVD algorithm is numerically robust and its typical applications include:
- accurate eigenvalue problem solution when matrix A is a square, symmetric and positive definite matrix, for example, when it is a covariance matrix. W in this case will be a vector of eigenvalues, and U=V is the matrix of eigenvectors (thus, only one of U or V needs to be calculated if the eigenvectors are required)
- accurate solution of poor-conditioned linear systems
- least-squares solution of overdetermined linear systems. This and the previous are done by the cvSolve function with the CV_SVD method
- accurate calculation of different matrix characteristics such as rank (number of non-zero singular values), condition number (ratio of the largest singular value to the smallest one), determinant (absolute value of determinant is equal to the product of singular values). All the things listed in this item do not require calculation of U and V matrices.
SanityCheck()
Check if the size of the C structures match those of C#
public static bool SanityCheck()
Returns
- bool
True if the size matches
ScaleAdd(IInputArray, double, IInputArray, IOutputArray)
Calculates the sum of a scaled array and another array.
public static void ScaleAdd(IInputArray src1, double alpha, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayFirst input array
alpha
doubleScale factor for the first array
src2
IInputArraySecond input array of the same size and type as src1
dst
IOutputArrayOutput array of the same size and type as src1
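The operation is the element-wise dst(I) = alpha·src1(I) + src2(I); a minimal pure-Python sketch (illustrative, not the Emgu CV call):

```python
def scale_add(src1, alpha, src2):
    """Element-wise dst(I) = alpha * src1(I) + src2(I), mirroring the
    behaviour described for ScaleAdd."""
    return [alpha * a + b for a, b in zip(src1, src2)]

print(scale_add([1.0, 2.0], 3.0, [10.0, 20.0]))  # [13.0, 26.0]
```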
Scharr(IInputArray, IOutputArray, DepthType, int, int, double, double, BorderType)
Calculates the first x- or y- image derivative using Scharr operator.
public static void Scharr(IInputArray src, IOutputArray dst, DepthType ddepth, int dx, int dy, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayinput image.
dst
IOutputArrayoutput image of the same size and the same number of channels as src.
ddepth
DepthTypeoutput image depth
dx
intorder of the derivative x.
dy
intorder of the derivative y.
scale
doubleoptional scale factor for the computed derivative values; by default, no scaling is applied
delta
doubleoptional delta value that is added to the results prior to storing them in dst.
borderType
BorderTypepixel extrapolation method
SeamlessClone(IInputArray, IInputArray, IInputArray, Point, IOutputArray, CloningMethod)
Image editing tasks concern either global changes (color/intensity corrections, filters, deformations) or local changes confined to a selection. Here we are interested in achieving local changes, ones that are restricted to a manually selected region (ROI), in a seamless and effortless manner. The extent of the changes ranges from slight distortions to complete replacement by novel content
public static void SeamlessClone(IInputArray src, IInputArray dst, IInputArray mask, Point p, IOutputArray blend, CloningMethod flags)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
dst
IInputArrayInput 8-bit 3-channel image.
mask
IInputArrayInput 8-bit 1 or 3-channel image.
p
PointPoint in dst image where object is placed.
blend
IOutputArrayOutput image with the same size and type as dst.
flags
CloningMethodCloning method
SegmentMotion(IInputArray, IOutputArray, VectorOfRect, double, double)
Finds all the motion segments and marks them in segMask with individual values (1,2,...). It also returns a sequence of CvConnectedComp structures, one per motion component. After that, the motion direction for every component can be calculated with cvCalcGlobalOrientation using the extracted mask of the particular component (using cvCmp)
public static void SegmentMotion(IInputArray mhi, IOutputArray segMask, VectorOfRect boundingRects, double timestamp, double segThresh)
Parameters
mhi
IInputArrayMotion history image
segMask
IOutputArrayImage where the mask found should be stored, single-channel, 32-bit floating-point
boundingRects
VectorOfRectVector containing ROIs of motion connected components.
timestamp
doubleCurrent time in milliseconds or other units
segThresh
doubleSegmentation threshold; recommended to be equal to the interval between motion history "steps" or greater
SelectROI(string, IInputArray, bool, bool)
Selects a ROI on the given image. The function creates a window and allows the user to select a ROI using the mouse. Controls: use space or enter to finish the selection, use key c to cancel the selection (the function will return the zero cv::Rect).
public static Rectangle SelectROI(string windowName, IInputArray img, bool showCrosshair = true, bool fromCenter = false)
Parameters
windowName
stringName of the window where selection process will be shown.
img
IInputArrayImage to select a ROI.
showCrosshair
boolIf true, a crosshair of the selection rectangle will be shown.
fromCenter
boolIf true, the center of the selection will match the initial mouse position. Otherwise, a corner of the selection rectangle will correspond to the initial mouse position.
Returns
- Rectangle
Selected ROI or empty rect if selection canceled.
SelectROIs(string, IInputArray, bool, bool)
Selects ROIs on the given image. The function creates a window and allows the user to select ROIs using the mouse. Controls: use space or enter to finish the current selection and start a new one, use esc to terminate the multiple ROI selection process.
public static Rectangle[] SelectROIs(string windowName, IInputArray img, bool showCrosshair = true, bool fromCenter = false)
Parameters
windowName
stringName of the window where selection process will be shown.
img
IInputArrayImage to select a ROI.
showCrosshair
boolIf true, a crosshair of the selection rectangle will be shown.
fromCenter
boolIf true, the center of the selection will match the initial mouse position. Otherwise, a corner of the selection rectangle will correspond to the initial mouse position.
Returns
- Rectangle[]
Selected ROIs.
SepFilter2D(IInputArray, IOutputArray, DepthType, IInputArray, IInputArray, Point, double, BorderType)
The function applies a separable linear filter to the image. That is, first, every row of src is filtered with the 1D kernel kernelX. Then, every column of the result is filtered with the 1D kernel kernelY. The final result shifted by delta is stored in dst .
public static void SepFilter2D(IInputArray src, IOutputArray dst, DepthType ddepth, IInputArray kernelX, IInputArray kernelY, Point anchor, double delta = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image of the same size and the same number of channels as src.
ddepth
DepthTypeDestination image depth
kernelX
IInputArrayCoefficients for filtering each row.
kernelY
IInputArrayCoefficients for filtering each column.
anchor
PointAnchor position within the kernel. The value (-1,-1) means that the anchor is at the kernel center.
delta
doubleValue added to the filtered results before storing them.
borderType
BorderTypePixel extrapolation method
SetBreakOnError(bool)
When the break-on-error mode is set, the default error handler issues a hardware exception, which can make debugging more convenient.
public static extern bool SetBreakOnError(bool flag)
Parameters
flag
boolThe flag
Returns
- bool
The previous state
SetErrMode(int)
Sets the specified error mode.
public static extern int SetErrMode(int errorMode)
Parameters
errorMode
intThe error mode
Returns
- int
The previous error mode
SetErrStatus(ErrorCodes)
Sets the error status to the specified value. Mostly, the function is used to reset the error status (set it to CV_StsOk) to recover after an error. In other cases it is more natural to call cvError or CV_ERROR.
public static extern void SetErrStatus(ErrorCodes code)
Parameters
code
ErrorCodesThe error status.
SetIdentity(IInputOutputArray, MCvScalar)
Initializes scaled identity matrix: arr(i,j)=value if i=j, 0 otherwise
public static void SetIdentity(IInputOutputArray mat, MCvScalar value)
Parameters
mat
IInputOutputArrayThe matrix to initialize (not necessarily square).
value
MCvScalarThe value to assign to the diagonal elements.
SetParallelForBackend(string, bool)
Replace OpenCV parallel_for backend.
public static bool SetParallelForBackend(string backendName, bool propagateNumThreads = true)
Parameters
backendName
stringThe name of the backend.
propagateNumThreads
boolIf true, the number of threads of the current environment will be passed to the new backend.
Returns
- bool
True if backend is set
Remarks
This call is not thread-safe. Consider calling this function from the main() before any other OpenCV processing functions (and without any other created threads).
SetWindowProperty(string, WindowPropertyFlags, double)
Changes parameters of a window dynamically.
public static void SetWindowProperty(string name, WindowPropertyFlags propId, double propValue)
Parameters
name
stringName of the window.
propId
WindowPropertyFlagsWindow property to edit.
propValue
doubleNew value of the window property.
SetWindowTitle(string, string)
Updates window title
public static void SetWindowTitle(string winname, string title)
Parameters
winname
stringName of the window.
title
stringNew title of the window.
Sobel(IInputArray, IOutputArray, DepthType, int, int, int, double, double, BorderType)
The Sobel operators combine Gaussian smoothing and differentiation so the result is more or less robust to noise. Most often, the function is called with (xorder=1, yorder=0, aperture_size=3) or (xorder=0, yorder=1, aperture_size=3) to calculate the first x- or y- image derivative. The first case corresponds to the
|-1 0 1|
|-2 0 2|
|-1 0 1|
kernel and the second one corresponds to the
|-1 -2 -1|
| 0 0 0|
| 1 2 1|
or
| 1 2 1|
| 0 0 0|
|-1 -2 -1|
kernel, depending on the image origin (origin field of the IplImage structure). No scaling is done, so the destination image usually contains numbers that are larger in absolute value than those in the source image. To avoid overflow, the function requires a 16-bit destination image if the source image is 8-bit. The result can be converted back to 8-bit using the cvConvertScale or cvConvertScaleAbs functions. Besides 8-bit images, the function can process 32-bit floating-point images. Both source and destination must be single-channel images of equal size or ROI size
public static void Sobel(IInputArray src, IOutputArray dst, DepthType ddepth, int xorder, int yorder, int kSize = 3, double scale = 1, double delta = 0, BorderType borderType = BorderType.Default)
Parameters
src
IInputArraySource image.
dst
IOutputArrayDestination image
ddepth
DepthTypeoutput image depth; the following combinations of src.depth() and ddepth are supported:
src.depth() = CV_8U, ddepth = -1/CV_16S/CV_32F/CV_64F
src.depth() = CV_16U/CV_16S, ddepth = -1/CV_32F/CV_64F
src.depth() = CV_32F, ddepth = -1/CV_32F/CV_64F
src.depth() = CV_64F, ddepth = -1/CV_64F
when ddepth=-1, the destination image will have the same depth as the source; in the case of 8-bit input images it will result in truncated derivatives.
xorder
intOrder of the derivative x
yorder
intOrder of the derivative y
kSize
intSize of the extended Sobel kernel, must be 1, 3, 5 or 7.
scale
doubleOptional scale factor for the computed derivative values
delta
doubleOptional delta value that is added to the results prior to storing them in
dst
borderType
BorderTypePixel extrapolation method
Solve(IInputArray, IInputArray, IOutputArray, DecompMethod)
Solves linear system (src1)*(dst) = (src2)
public static bool Solve(IInputArray src1, IInputArray src2, IOutputArray dst, DecompMethod method)
Parameters
src1
IInputArrayThe source matrix in the LHS
src2
IInputArrayThe source matrix in the RHS
dst
IOutputArrayThe result
method
DecompMethodThe method for solving the equation
Returns
- bool
False if src1 is singular and the CV_LU method is used; true otherwise
SolveCubic(IInputArray, IOutputArray)
finds real roots of a cubic equation: coeffs[0]*x^3 + coeffs[1]*x^2 + coeffs[2]*x + coeffs[3] = 0 (if coeffs is 4-element vector) or x^3 + coeffs[0]*x^2 + coeffs[1]*x + coeffs[2] = 0 (if coeffs is 3-element vector)
public static int SolveCubic(IInputArray coeffs, IOutputArray roots)
Parameters
coeffs
IInputArrayThe equation coefficients, array of 3 or 4 elements
roots
IOutputArrayThe output array of real roots. Should have 3 elements. Padded with zeros if there is only one root
Returns
- int
the number of real roots found
SolveLP(Mat, Mat, Mat)
Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method). What we mean here by “linear programming problem” (or LP problem, for short) can be formulated as: Maximize c x subject to: Ax <= b and x >= 0
public static SolveLPResult SolveLP(Mat functionMatrix, Mat constraintMatrix, Mat zMatrix)
Parameters
functionMatrix
MatThis row-vector corresponds to c in the LP problem formulation (see above). It should contain 32- or 64-bit floating point numbers. As a convenience, column-vector may be also submitted, in the latter case it is understood to correspond to c^T.
constraintMatrix
Matm-by-n+1 matrix, whose rightmost column corresponds to b in the formulation above and the remaining columns to A. It should contain 32- or 64-bit floating point numbers.
zMatrix
MatThe solution will be returned here as a column-vector - it corresponds to x in the formulation above. It will contain 64-bit floating point numbers.
Returns
- SolveLPResult
The return codes
SolveP3P(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, SolvePnpMethod)
Finds an object pose from 3 3D-2D point correspondences.
public static int SolveP3P(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, SolvePnpMethod flags)
Parameters
objectPoints
IInputArrayArray of object points in the object coordinate space, 3x3 1-channel or 1x3/3x1 3-channel. VectorOfPoint3f can be also passed here.
imagePoints
IInputArrayArray of corresponding image points, 3x2 1-channel or 1x3/3x1 2-channel. VectorOfPoint2f can be also passed here.
cameraMatrix
IInputArrayInput camera matrix A=[[fx 0 0] [0 fy 0] [cx cy 1]] .
distCoeffs
IInputArrayInput vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
rvecs
IOutputArrayOfArraysOutput rotation vectors (see Rodrigues ) that, together with tvecs , brings points from the model coordinate system to the camera coordinate system. A P3P problem has up to 4 solutions.
tvecs
IOutputArrayOfArraysOutput translation vectors.
flags
SolvePnpMethodMethod for solving a P3P problem: either P3P or AP3P
Returns
- int
Number of solutions
SolvePnP(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, bool, SolvePnpMethod)
Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of 3D object points and their correspondent 2D projections must be specified. This function also minimizes back-projection error
public static bool SolvePnP(IInputArray objectPoints, IInputArray imagePoints, IInputArray intrinsicMatrix, IInputArray distortionCoeffs, IOutputArray rotationVector, IOutputArray translationVector, bool useExtrinsicGuess = false, SolvePnpMethod flags = SolvePnpMethod.Iterative)
Parameters
objectPoints
IInputArrayThe array of object points, 3xN or Nx3, where N is the number of points in the view
imagePoints
IInputArrayThe array of corresponding image points, 2xN or Nx2, where N is the number of points in the view
intrinsicMatrix
IInputArrayThe camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].
distortionCoeffs
IInputArrayThe vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's.
rotationVector
IOutputArrayThe output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2).
translationVector
IOutputArrayThe output 3x1 or 1x3 translation vector
useExtrinsicGuess
boolUse the input rotation and translation parameters as a guess
flags
SolvePnpMethodMethod for solving a PnP problem
Returns
- bool
True if successful
SolvePnP(MCvPoint3D32f[], PointF[], IInputArray, IInputArray, IOutputArray, IOutputArray, bool, SolvePnpMethod)
Estimates extrinsic camera parameters using known intrinsic parameters and extrinsic parameters for each view. The coordinates of 3D object points and their correspondent 2D projections must be specified. This function also minimizes back-projection error.
public static bool SolvePnP(MCvPoint3D32f[] objectPoints, PointF[] imagePoints, IInputArray intrinsicMatrix, IInputArray distortionCoeffs, IOutputArray rotationVector, IOutputArray translationVector, bool useExtrinsicGuess = false, SolvePnpMethod method = SolvePnpMethod.Iterative)
Parameters
objectPoints
MCvPoint3D32f[]The array of object points
imagePoints
PointF[]The array of corresponding image points
intrinsicMatrix
IInputArrayThe camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].
distortionCoeffs
IInputArrayThe vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2]. If it is IntPtr.Zero, all distortion coefficients are considered 0's.
rotationVector
IOutputArrayThe output 3x1 or 1x3 rotation vector (compact representation of a rotation matrix, see cvRodrigues2).
translationVector
IOutputArrayThe output 3x1 or 1x3 translation vector
useExtrinsicGuess
boolUse the input rotation and translation parameters as a guess
method
SolvePnpMethodMethod for solving a PnP problem
Returns
- bool
True if successful
SolvePnPGeneric(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, bool, SolvePnpMethod, IInputArray, IInputArray, IOutputArray)
Finds an object pose from 3D-2D point correspondences.
public static int SolvePnPGeneric(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, bool useExtrinsicGuess = false, SolvePnpMethod flags = SolvePnpMethod.Iterative, IInputArray rvec = null, IInputArray tvec = null, IOutputArray reprojectionError = null)
Parameters
objectPoints
IInputArrayArray of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.
imagePoints
IInputArrayArray of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.
cameraMatrix
IInputArrayInput camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].
distCoeffs
IInputArrayInput vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
rvecs
IOutputArrayOfArraysVector of output rotation vectors (see Rodrigues ) that, together with tvecs, brings points from the model coordinate system to the camera coordinate system.
tvecs
IOutputArrayOfArraysVector of output translation vectors.
useExtrinsicGuess
boolParameter used for SolvePnpMethod.Iterative. If true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flags
SolvePnpMethodMethod for solving a PnP problem
rvec
IInputArrayRotation vector used to initialize an iterative PnP refinement algorithm, when flag is SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.
tvec
IInputArrayTranslation vector used to initialize an iterative PnP refinement algorithm, when flag is SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.
reprojectionError
IOutputArrayOptional vector of reprojection error, that is the RMS error between the input image points and the 3D object points projected with the estimated pose.
Returns
- int
The number of solutions
SolvePnPRansac(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray, IOutputArray, bool, int, float, double, IOutputArray, SolvePnpMethod)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
public static bool SolvePnPRansac(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IOutputArray rvec, IOutputArray tvec, bool useExtrinsicGuess = false, int iterationsCount = 100, float reprojectionError = 8, double confident = 0.99, IOutputArray inliers = null, SolvePnpMethod flags = SolvePnpMethod.Iterative)
Parameters
objectPoints
IInputArrayArray of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3D32f can be also passed here.
imagePoints
IInputArrayArray of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPointF can be also passed here.
cameraMatrix
IInputArrayInput camera matrix
distCoeffs
IInputArrayInput vector of distortion coefficients of 4, 5, 8 or 12 elements. If the vector is null/empty, the zero distortion coefficients are assumed.
rvec
IOutputArrayOutput rotation vector
tvec
IOutputArrayOutput translation vector.
useExtrinsicGuess
boolIf true, the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
intNumber of iterations.
reprojectionError
floatInlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confident
doubleThe probability that the algorithm produces a useful result.
inliers
IOutputArrayOutput vector that contains indices of inliers in objectPoints and imagePoints .
flags
SolvePnpMethodMethod for solving a PnP problem
Returns
- bool
True if successful
SolvePnPRefineLM(IInputArray, IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, MCvTermCriteria)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from a 3D-2D point correspondences and starting from an initial solution
public static void SolvePnPRefineLM(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IInputOutputArray rvec, IInputOutputArray tvec, MCvTermCriteria criteria)
Parameters
objectPoints
IInputArrayArray of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.
imagePoints
IInputArrayArray of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.
cameraMatrix
IInputArrayInput camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].
distCoeffs
IInputArrayInput vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
rvec
IInputOutputArrayInput/Output rotation vector (see Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
IInputOutputArrayInput/Output translation vector. Input values are used as an initial solution.
criteria
MCvTermCriteriaCriteria when to stop the Levenberg-Marquardt iterative algorithm.
SolvePnPRefineVVS(IInputArray, IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, MCvTermCriteria, double)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from a 3D-2D point correspondences and starting from an initial solution.
public static void SolvePnPRefineVVS(IInputArray objectPoints, IInputArray imagePoints, IInputArray cameraMatrix, IInputArray distCoeffs, IInputOutputArray rvec, IInputOutputArray tvec, MCvTermCriteria criteria, double VVSlambda)
Parameters
objectPoints
IInputArrayArray of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. VectorOfPoint3f can also be passed here.
imagePoints
IInputArrayArray of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. VectorOfPoint2f can also be passed here.
cameraMatrix
IInputArrayInput camera matrix A=[[fx,0,0],[0,fy,0],[cx,cy,1]].
distCoeffs
IInputArrayInput vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
rvec
IInputOutputArrayInput/Output rotation vector (see Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
IInputOutputArrayInput/Output translation vector. Input values are used as an initial solution.
criteria
MCvTermCriteriaCriteria when to stop the Levenberg-Marquardt iterative algorithm.
VVSlambda
doubleGain for the virtual visual servoing control law, equivalent to the α gain in the Damped Gauss-Newton formulation.
SolvePoly(IInputArray, IOutputArray, int)
Finds all real and complex roots of any degree polynomial with real coefficients
public static double SolvePoly(IInputArray coeffs, IOutputArray roots, int maxiter = 300)
Parameters
coeffs
IInputArrayThe (degree + 1)-length array of equation coefficients (CV_32FC1 or CV_64FC1)
roots
IOutputArrayThe degree-length output array of real or complex roots (CV_32FC2 or CV_64FC2)
maxiter
intThe maximum number of iterations
Returns
- double
The max difference.
Sort(IInputArray, IOutputArray, SortFlags)
Sorts each matrix row or each matrix column in ascending or descending order. You should pass two operation flags to get the desired behaviour.
public static void Sort(IInputArray src, IOutputArray dst, SortFlags flags)
Parameters
src
IInputArrayinput single-channel array.
dst
IOutputArrayoutput array of the same size and type as src.
flags
SortFlagsoperation flags
SortIdx(IInputArray, IOutputArray, SortFlags)
Sorts each matrix row or each matrix column in ascending or descending order. You should pass two operation flags to get the desired behaviour. Instead of reordering the elements themselves, it stores the indices of the sorted elements in the output array.
public static void SortIdx(IInputArray src, IOutputArray dst, SortFlags flags)
Parameters
src
IInputArrayinput single-channel array.
dst
IOutputArrayoutput integer array of the same size as src.
flags
SortFlagsoperation flags
SpatialGradient(IInputArray, IOutputArray, IOutputArray, int, BorderType)
Calculates the first order image derivative in both x and y using a Sobel operator. Equivalent to calling: Sobel(src, dx, CV_16SC1, 1, 0, 3 ); Sobel(src, dy, CV_16SC1, 0, 1, 3 );
public static void SpatialGradient(IInputArray src, IOutputArray dx, IOutputArray dy, int ksize = 3, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayinput image.
dx
IOutputArrayoutput image with first-order derivative in x.
dy
IOutputArrayoutput image with first-order derivative in y.
ksize
intsize of Sobel kernel. It must be 3.
borderType
BorderTypepixel extrapolation method
Split(IInputArray, IOutputArray)
Divides a multi-channel array into separate single-channel arrays. Two modes are available for the operation. If the source array has N channels then if the first N destination channels are not IntPtr.Zero, they are all extracted from the source array; otherwise, if only a single destination channel of the first N is not IntPtr.Zero, this particular channel is extracted; otherwise an error is raised. The rest of the destination channels (beyond the first N) must always be IntPtr.Zero. For IplImage, cvCopy with COI set can also be used to extract a single channel from the image
public static void Split(IInputArray src, IOutputArray mv)
Parameters
src
IInputArrayInput multi-channel array
mv
IOutputArrayOutput array or vector of arrays
SqrBoxFilter(IInputArray, IOutputArray, DepthType, Size, Point, bool, BorderType)
Calculates the normalized sum of squares of the pixel values overlapping the filter. For every pixel (x, y) in the source image, the function calculates the sum of squares of those neighboring pixel values which overlap the filter placed over the pixel (x, y). The unnormalized square box filter can be useful in computing local image statistics such as the local variance and standard deviation around the neighborhood of a pixel.
public static void SqrBoxFilter(IInputArray src, IOutputArray dst, DepthType ddepth, Size ksize, Point anchor, bool normalize = true, BorderType borderType = BorderType.Default)
Parameters
src
IInputArrayinput image
dst
IOutputArrayoutput image of the same size and type as src
ddepth
DepthTypethe output image depth (-1 to use src.depth())
ksize
Sizekernel size
anchor
Pointkernel anchor point. The default value of Point(-1, -1) denotes that the anchor is at the kernel center
normalize
boolflag, specifying whether the kernel is to be normalized by its area or not.
borderType
BorderTypeborder mode used to extrapolate pixels outside of the image
Sqrt(IInputArray, IOutputArray)
Calculates the square root of each source array element. In the case of multi-channel arrays, each channel is processed independently. The function accuracy is approximately the same as of the built-in std::sqrt.
public static void Sqrt(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayThe source floating-point array
dst
IOutputArrayThe destination array; will have the same size and the same type as src
StackBlur(IInputArray, IOutputArray, Size)
The function applies a stack blur to an image. stackBlur can generate similar results to a Gaussian blur, and the time consumption does not increase with the kernel size. It creates a kind of moving stack of colors whilst scanning through the image. Thereby it just has to add one new block of color to the right side of the stack and remove the leftmost color. The remaining colors on the topmost layer of the stack are either added on or reduced by one, depending on if they are on the right or on the left side of the stack. The only supported borderType is BORDER_REPLICATE. The original algorithm was proposed by Mario Klingemann and can be found at http://underdestruction.com/2004/02/25/stackblur-2004.
public static void StackBlur(IInputArray src, IOutputArray dst, Size ksize)
Parameters
src
IInputArrayInput image. The number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S or CV_32F.
dst
IOutputArrayOutput image of the same size and type as src.
ksize
SizeStack-blurring kernel size. The ksize.width and ksize.height can differ but they both must be positive and odd.
StereoCalibrate(IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IInputOutputArray, IInputOutputArray, IOutputArray, IOutputArray, IOutputArrayOfArrays, IOutputArrayOfArrays, IOutputArray, CalibType, MCvTermCriteria)
Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2), respectively (that can be done with cvFindExtrinsicCameraParams2), obviously those poses will relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T
public static double StereoCalibrate(IInputArray objectPoints, IInputArray imagePoints1, IInputArray imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IInputOutputArray r, IInputOutputArray t, IOutputArray e, IOutputArray f, IOutputArrayOfArrays rvecs, IOutputArrayOfArrays tvecs, IOutputArray perViewErrors, CalibType flags, MCvTermCriteria termCrit)
Parameters
objectPoints
IInputArrayThe 3D location of the object points. The first index is the index of image, second index is the index of the point
imagePoints1
IInputArrayThe 2D image location of the points for camera 1. The first index is the index of the image, second index is the index of the point
imagePoints2
IInputArrayThe 2D image location of the points for camera 2. The first index is the index of the image, second index is the index of the point
cameraMatrix1
IInputOutputArrayThe input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs1
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
cameraMatrix2
IInputOutputArrayThe input/output camera matrices [fxk 0 cxk; 0 fyk cyk; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs2
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
imageSize
SizeSize of the image, used only to initialize intrinsic camera matrix
r
IInputOutputArrayOutput rotation matrix. Together with the translation vector T, this matrix brings points given in the first camera's coordinate system to points in the second camera's coordinate system. In more technical terms, the tuple of R and T performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Due to its duality, this tuple is equivalent to the position of the first camera with respect to the second camera coordinate system.
t
IInputOutputArrayOutput translation vector, see description for "r".
e
IOutputArrayThe optional output essential matrix
f
IOutputArrayThe optional output fundamental matrix
rvecs
IOutputArrayOfArraysOutput vector of rotation vectors ( Rodrigues ) estimated for each pattern view in the coordinate system of the first camera of the stereo pair (e.g. std::vector < cv::Mat >). More in detail, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space of the first camera of the stereo pair.
tvecs
IOutputArrayOfArraysOutput vector of translation vectors estimated for each pattern view, see parameter description of previous output parameter ( rvecs ).
perViewErrors
IOutputArrayOutput vector of the RMS re-projection error estimated for each pattern view.
flags
CalibTypeThe calibration flags
termCrit
MCvTermCriteriaTermination criteria for the iterative optimization algorithm
Returns
- double
The final value of the re-projection error.
StereoCalibrate(IInputArray, IInputArray, IInputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IOutputArray, IOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)
Estimates the transformation between the 2 cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the 2 cameras is fixed, and if we computed poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2), respectively (that can be done with cvFindExtrinsicCameraParams2), obviously those poses will relate to each other, i.e. given (R1, T1) it should be possible to compute (R2, T2) - we only need to know the position and orientation of the 2nd camera relative to the 1st camera. That's what the described function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T
public static double StereoCalibrate(IInputArray objectPoints, IInputArray imagePoints1, IInputArray imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IOutputArray r, IOutputArray t, IOutputArray e, IOutputArray f, CalibType flags, MCvTermCriteria termCrit)
Parameters
objectPoints
IInputArrayThe joint matrix of object points, 3xN or Nx3, where N is the total number of points in all views
imagePoints1
IInputArrayThe joint matrix of corresponding image points in the views from the 1st camera, 2xN or Nx2, where N is the total number of points in all views
imagePoints2
IInputArrayThe joint matrix of corresponding image points in the views from the 2nd camera, 2xN or Nx2, where N is the total number of points in all views
cameraMatrix1
IInputOutputArrayThe input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs1
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
cameraMatrix2
IInputOutputArrayThe input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs2
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
imageSize
SizeSize of the image, used only to initialize intrinsic camera matrix
r
IOutputArrayThe rotation matrix between the 1st and the 2nd cameras' coordinate systems
t
IOutputArrayThe translation vector between the cameras' coordinate systems
e
IOutputArrayThe optional output essential matrix
f
IOutputArrayThe optional output fundamental matrix
flags
CalibTypeThe calibration flags
termCrit
MCvTermCriteriaTermination criteria for the iterative optimization algorithm
Returns
- double
The final value of the re-projection error.
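A minimal sketch of calling this overload, assuming the object/image point correspondences have already been collected from calibration-pattern detections (e.g. with CvInvoke.FindChessboardCorners); the empty vectors, the 640x480 image size, and the termination criteria values are placeholders:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// objectPoints/imagePoints1/imagePoints2 must already hold matched
// pattern points per view; the empty vectors here are placeholders.
using (VectorOfVectorOfPoint3D32F objectPoints = new VectorOfVectorOfPoint3D32F())
using (VectorOfVectorOfPointF imagePoints1 = new VectorOfVectorOfPointF())
using (VectorOfVectorOfPointF imagePoints2 = new VectorOfVectorOfPointF())
using (Mat cameraMatrix1 = new Mat()) using (Mat distCoeffs1 = new Mat())
using (Mat cameraMatrix2 = new Mat()) using (Mat distCoeffs2 = new Mat())
using (Mat r = new Mat()) using (Mat t = new Mat())
using (Mat e = new Mat()) using (Mat f = new Mat())
{
    double rms = CvInvoke.StereoCalibrate(
        objectPoints, imagePoints1, imagePoints2,
        cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
        new Size(640, 480),                // image size used for calibration
        r, t, e, f,
        CalibType.Default,
        new MCvTermCriteria(100, 1e-5));   // max 100 iterations or eps 1e-5
    // rms is the final re-projection error; values near or below one
    // pixel usually indicate a good calibration.
}
```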
StereoCalibrate(MCvPoint3D32f[][], PointF[][], PointF[][], IInputOutputArray, IInputOutputArray, IInputOutputArray, IInputOutputArray, Size, IOutputArray, IOutputArray, IOutputArray, IOutputArray, CalibType, MCvTermCriteria)
Estimates the transformation between the two cameras making a stereo pair. If we have a stereo camera, where the relative position and orientation of the two cameras is fixed, and if we have computed the poses of an object relative to the first camera and to the second camera, (R1, T1) and (R2, T2) respectively (which can be done with cvFindExtrinsicCameraParams2), those poses are related: given (R1, T1) it should be possible to compute (R2, T2), since we only need to know the position and orientation of the second camera relative to the first. That is what this function does. It computes (R, T) such that: R2=R*R1, T2=R*T1 + T
public static double StereoCalibrate(MCvPoint3D32f[][] objectPoints, PointF[][] imagePoints1, PointF[][] imagePoints2, IInputOutputArray cameraMatrix1, IInputOutputArray distCoeffs1, IInputOutputArray cameraMatrix2, IInputOutputArray distCoeffs2, Size imageSize, IOutputArray r, IOutputArray t, IOutputArray e, IOutputArray f, CalibType flags, MCvTermCriteria termCrit)
Parameters
objectPoints
MCvPoint3D32f[][]The 3D location of the object points. The first index is the index of image, second index is the index of the point
imagePoints1
PointF[][]The 2D image location of the points for camera 1. The first index is the index of the image, second index is the index of the point
imagePoints2
PointF[][]The 2D image location of the points for camera 2. The first index is the index of the image, second index is the index of the point
cameraMatrix1
IInputOutputArrayThe input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs1
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
cameraMatrix2
IInputOutputArrayThe input/output camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]. If CV_CALIB_USE_INTRINSIC_GUESS or CV_CALIB_FIX_ASPECT_RATIO are specified, some or all of the elements of the matrices must be initialized
distCoeffs2
IInputOutputArrayThe input/output vectors of distortion coefficients for each camera, 4x1, 1x4, 5x1 or 1x5
imageSize
SizeSize of the image, used only to initialize intrinsic camera matrix
r
IOutputArrayThe rotation matrix between the 1st and the 2nd cameras' coordinate systems
t
IOutputArrayThe translation vector between the cameras' coordinate systems
e
IOutputArrayThe optional output essential matrix
f
IOutputArrayThe optional output fundamental matrix
flags
CalibTypeThe calibration flags
termCrit
MCvTermCriteriaTermination criteria for the iterative optimization algorithm
Returns
- double
The final value of the re-projection error.
StereoRectify(IInputArray, IInputArray, IInputArray, IInputArray, Size, IInputArray, IInputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, IOutputArray, StereoRectifyType, double, Size, ref Rectangle, ref Rectangle)
Computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. On input the function takes the matrices computed by cvStereoCalibrate, and on output it gives two rotation matrices and two projection matrices in the new coordinates. The function is normally called after cvStereoCalibrate, which computes both camera matrices, the distortion coefficients, R and T
public static void StereoRectify(IInputArray cameraMatrix1, IInputArray distCoeffs1, IInputArray cameraMatrix2, IInputArray distCoeffs2, Size imageSize, IInputArray r, IInputArray t, IOutputArray r1, IOutputArray r2, IOutputArray p1, IOutputArray p2, IOutputArray q, StereoRectifyType flags, double alpha, Size newImageSize, ref Rectangle validPixRoi1, ref Rectangle validPixRoi2)
Parameters
cameraMatrix1
IInputArrayThe camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]
distCoeffs1
IInputArrayThe vectors of distortion coefficients for first camera, 4x1, 1x4, 5x1 or 1x5
cameraMatrix2
IInputArrayThe camera matrices [fx_k 0 cx_k; 0 fy_k cy_k; 0 0 1]
distCoeffs2
IInputArrayThe vectors of distortion coefficients for second camera, 4x1, 1x4, 5x1 or 1x5
imageSize
SizeSize of the image used for stereo calibration
r
IInputArrayThe rotation matrix between the 1st and the 2nd cameras' coordinate systems
t
IInputArrayThe translation vector between the cameras' coordinate systems
r1
IOutputArray3x3 rectification transform (rotation matrix) for the first camera
r2
IOutputArray3x3 rectification transform (rotation matrix) for the second camera
p1
IOutputArray3x4 projection matrix in the new (rectified) coordinate system for the first camera
p2
IOutputArray3x4 projection matrix in the new (rectified) coordinate system for the second camera
q
IOutputArrayThe optional output disparity-to-depth mapping matrix, 4x4, see cvReprojectImageTo3D.
flags
StereoRectifyTypeThe operation flags, use ZeroDisparity for default
alpha
doubleUse -1 for default
newImageSize
SizeUse Size.Empty for default
validPixRoi1
RectangleThe valid pixel ROI for image1
validPixRoi2
RectangleThe valid pixel ROI for image2
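A hedged usage sketch, assuming cameraMatrix1/2, distCoeffs1/2, r and t come from a prior CvInvoke.StereoCalibrate call; only the output arrays are created here and the 640x480 size is a placeholder:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

// Outputs: per-camera rectification rotations, rectified projection
// matrices, and the disparity-to-depth mapping matrix q.
Mat r1 = new Mat(), r2 = new Mat();
Mat p1 = new Mat(), p2 = new Mat(), q = new Mat();
Rectangle roi1 = Rectangle.Empty, roi2 = Rectangle.Empty;

CvInvoke.StereoRectify(
    cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
    new Size(640, 480), r, t,
    r1, r2, p1, p2, q,
    StereoRectifyType.Default,
    -1,              // alpha = -1: default pixel-scaling behaviour
    Size.Empty,      // keep the original image size
    ref roi1, ref roi2);
// q can later be passed to CvInvoke.ReprojectImageTo3D to turn a
// disparity map into 3D points.
```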
StereoRectifyUncalibrated(IInputArray, IInputArray, IInputArray, Size, IOutputArray, IOutputArray, double)
Computes the rectification transformations without knowing intrinsic parameters of the cameras and their relative position in space, hence the suffix "Uncalibrated". Another related difference from cvStereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations, encoded by the homography matrices H1 and H2. The function implements the following algorithm [Hartley99].
public static bool StereoRectifyUncalibrated(IInputArray points1, IInputArray points2, IInputArray f, Size imgSize, IOutputArray h1, IOutputArray h2, double threshold = 5)
Parameters
points1
IInputArrayThe array of 2D points
points2
IInputArrayThe array of 2D points
f
IInputArrayFundamental matrix. It can be computed using the same set of point pairs points1 and points2 using cvFindFundamentalMat
imgSize
SizeSize of the image
h1
IOutputArrayThe rectification homography matrix for the first image
h2
IOutputArrayThe rectification homography matrix for the second image
threshold
doubleIf the parameter is greater than zero, then all the point pairs that do not comply with the epipolar geometry well enough (that is, the points for which fabs(points2[i]^T * F * points1[i]) > threshold) are rejected prior to computing the homographies
Returns
- bool
True if successful
Remarks
Note that while the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have significant distortion, it is better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of the stereo camera separately by using cvCalibrateCamera2, and then the images can be corrected using cvUndistort2
Stylization(IInputArray, IOutputArray, float, float)
Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features.
public static void Stylization(IInputArray src, IOutputArray dst, float sigmaS = 60, float sigmaR = 0.45)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src.
sigmaS
floatRange between 0 to 200.
sigmaR
floatRange between 0 to 1.
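A minimal sketch of applying the stylization filter; "photo.jpg" and "stylized.jpg" are placeholder paths for any 8-bit 3-channel image:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat src = CvInvoke.Imread("photo.jpg", ImreadModes.Color))
using (Mat dst = new Mat())
{
    // sigmaS controls the spatial extent of the smoothing (0..200);
    // sigmaR controls how strongly edges are preserved (0..1).
    CvInvoke.Stylization(src, dst, 60f, 0.45f);
    CvInvoke.Imwrite("stylized.jpg", dst);
}
```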
Subtract(IInputArray, IInputArray, IOutputArray, IInputArray, DepthType)
Subtracts one array from another one: dst(I)=src1(I)-src2(I) if mask(I)!=0 All the arrays must have the same type, except the mask, and the same size (or ROI size)
public static void Subtract(IInputArray src1, IInputArray src2, IOutputArray dst, IInputArray mask = null, DepthType dtype = DepthType.Default)
Parameters
src1
IInputArrayThe first source array
src2
IInputArrayThe second source array
dst
IOutputArrayThe destination array
mask
IInputArrayOperation mask, 8-bit single channel array; specifies elements of destination array to be changed
dtype
DepthTypeOptional depth of the output array
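A small sketch of an element-wise subtraction on two constant 8-bit matrices:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat a = new Mat(3, 3, DepthType.Cv8U, 1))
using (Mat b = new Mat(3, 3, DepthType.Cv8U, 1))
using (Mat diff = new Mat())
{
    a.SetTo(new MCvScalar(200));
    b.SetTo(new MCvScalar(50));
    // No mask: every element of diff becomes 200 - 50 = 150. Note that
    // 8-bit subtraction saturates: swapping a and b yields 0, not -150.
    CvInvoke.Subtract(a, b, diff);
}
```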
Sum(IInputArray)
Calculates the sum S of array elements, independently for each channel: S_c = sum_I arr(I)_c. If the array is an IplImage and COI is set, the function processes the selected channel only and stores the sum in the first scalar component (S_0).
public static MCvScalar Sum(IInputArray src)
Parameters
src
IInputArrayThe array
Returns
- MCvScalar
The sum of array elements
Swap(Mat, Mat)
Swaps two matrices
public static void Swap(Mat m1, Mat m2)
Parameters
Swap(UMat, UMat)
Swaps two matrices
public static void Swap(UMat m1, UMat m2)
Parameters
TempFile(string)
Get a temporary file name
public static string TempFile(string suffix)
Parameters
suffix
stringThe suffix of the temporary file name
Returns
- string
A temporary file name
TextureFlattening(IInputArray, IInputArray, IOutputArray, float, float, int)
By retaining only the gradients at edge locations, before integrating with the Poisson solver, one washes out the texture of the selected region, giving its contents a flat aspect. Here Canny Edge Detector is used.
public static void TextureFlattening(IInputArray src, IInputArray mask, IOutputArray dst, float lowThreshold = 30, float highThreshold = 45, int kernelSize = 3)
Parameters
src
IInputArrayInput 8-bit 3-channel image.
mask
IInputArrayInput 8-bit 1 or 3-channel image.
dst
IOutputArrayOutput image with the same size and type as src.
lowThreshold
floatRange from 0 to 100.
highThreshold
floatValue > 100
kernelSize
intThe size of the Sobel kernel to be used.
Threshold(IInputArray, IOutputArray, double, double, ThresholdType)
Applies a fixed-level threshold to each array element. The function applies fixed-level thresholding to a multiple-channel array. The function is typically used to get a bi-level (binary) image out of a grayscale image (Compare could also be used for this purpose) or for removing noise, that is, filtering out pixels with too small or too large values. There are several types of thresholding supported by the function; they are determined by the type parameter.
public static double Threshold(IInputArray src, IOutputArray dst, double threshold, double maxValue, ThresholdType thresholdType)
Parameters
src
IInputArrayInput array (multiple-channel, 8-bit or 32-bit floating point).
dst
IOutputArrayOutput array of the same size and type and the same number of channels as src.
threshold
doubleThreshold value
maxValue
doubleMaximum value to use with CV_THRESH_BINARY and CV_THRESH_BINARY_INV thresholding types
thresholdType
ThresholdTypeThresholding type
Returns
- double
The computed threshold value if Otsu's or Triangle methods used.
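A minimal sketch of binarizing a grayscale image; "input.png" is a placeholder path, and the source must be single-channel 8-bit for Otsu's method to apply:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale))
using (Mat binary = new Mat())
{
    // Combining Binary with Otsu lets OpenCV pick the threshold itself;
    // the explicit threshold argument (0 here) is then ignored.
    double otsu = CvInvoke.Threshold(gray, binary, 0, 255,
        ThresholdType.Binary | ThresholdType.Otsu);
    // otsu holds the automatically selected threshold value.
}
```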
Trace(IInputArray)
Returns the sum of the diagonal elements of the matrix mat.
public static MCvScalar Trace(IInputArray mat)
Parameters
mat
IInputArraythe matrix
Returns
- MCvScalar
The sum of the diagonal elements of the matrix mat
Transform(IInputArray, IOutputArray, IInputArray)
Performs matrix transformation of every element of array src and stores the results in dst. Both source and destination arrays should have the same depth and the same size or selected ROI size. The transformation matrix m should be a real floating-point matrix.
public static void Transform(IInputArray src, IOutputArray dst, IInputArray m)
Parameters
src
IInputArrayThe first source array
dst
IOutputArrayThe destination array
m
IInputArraytransformation 2x2 or 2x3 floating-point matrix.
Transpose(IInputArray, IOutputArray)
Transposes matrix src: dst(i,j)=src(j,i). Note that no complex conjugation is done in case of a complex matrix. Conjugation should be done separately: look at the sample code in cvXorS for example
public static void Transpose(IInputArray src, IOutputArray dst)
Parameters
src
IInputArrayThe source matrix
dst
IOutputArrayThe destination matrix
TriangulatePoints(IInputArray, IInputArray, IInputArray, IInputArray, IOutputArray)
Reconstructs points by triangulation.
public static void TriangulatePoints(IInputArray projMat1, IInputArray projMat2, IInputArray projPoints1, IInputArray projPoints2, IOutputArray points4D)
Parameters
projMat1
IInputArray3x4 projection matrix of the first camera.
projMat2
IInputArray3x4 projection matrix of the second camera.
projPoints1
IInputArray2xN array of feature points in the first image. It can be also a vector of feature points or two-channel matrix of size 1xN or Nx1
projPoints2
IInputArray2xN array of corresponding points in the second image. It can be also a vector of feature points or two-channel matrix of size 1xN or Nx1.
points4D
IOutputArray4xN array of reconstructed points in homogeneous coordinates.
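A hedged sketch, assuming projMat1/projMat2 are the 3x4 projection matrices of the two cameras (e.g. p1 and p2 from StereoRectify) and projPoints1/projPoints2 the matched 2xN feature coordinates in each view:

```csharp
using Emgu.CV;

using (Mat points4D = new Mat())
{
    CvInvoke.TriangulatePoints(projMat1, projMat2,
        projPoints1, projPoints2, points4D);
    // points4D is 4xN homogeneous: divide rows 0..2 by row 3 to obtain
    // Euclidean (X, Y, Z) coordinates for each point.
}
```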
Undistort(IInputArray, IOutputArray, IInputArray, IInputArray, IInputArray)
Transforms the image to compensate radial and tangential lens distortion.
public static void Undistort(IInputArray src, IOutputArray dst, IInputArray cameraMatrix, IInputArray distortionCoeffs, IInputArray newCameraMatrix = null)
Parameters
src
IInputArrayThe input (distorted) image
dst
IOutputArrayThe output (corrected) image
cameraMatrix
IInputArrayThe camera matrix (A) [fx 0 cx; 0 fy cy; 0 0 1].
distortionCoeffs
IInputArrayThe vector of distortion coefficients, 4x1 or 1x4 [k1, k2, p1, p2].
newCameraMatrix
IInputArrayCamera matrix of the distorted image. By default it is the same as cameraMatrix, but you may additionally scale and shift the result by using some different matrix
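A minimal sketch of undistorting one frame. The 3x3 camera matrix and the four distortion coefficients below carry illustrative values only; in practice both come from a camera calibration (e.g. CvInvoke.CalibrateCamera), and "frame.png" is a placeholder path:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Matrix<double> cameraMatrix = new Matrix<double>(new double[,]
       { { 800, 0, 320 }, { 0, 800, 240 }, { 0, 0, 1 } }))
using (Matrix<double> distCoeffs = new Matrix<double>(new double[]
       { -0.25, 0.12, 0, 0 }))   // k1, k2, p1, p2
using (Mat distorted = CvInvoke.Imread("frame.png", ImreadModes.Color))
using (Mat corrected = new Mat())
{
    CvInvoke.Undistort(distorted, corrected, cameraMatrix, distCoeffs);
}
```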
UndistortPoints(IInputArray, IOutputArray, IInputArray, IInputArray, IInputArray, IInputArray)
Similar to cvInitUndistortRectifyMap and is opposite to it at the same time. The functions are similar in that they both are used to correct lens distortion and to perform the optional perspective (rectification) transformation. They are opposite because the function cvInitUndistortRectifyMap does actually perform the reverse transformation in order to initialize the maps properly, while this function does the forward transformation.
public static void UndistortPoints(IInputArray src, IOutputArray dst, IInputArray cameraMatrix, IInputArray distCoeffs, IInputArray R = null, IInputArray P = null)
Parameters
src
IInputArrayThe observed point coordinates
dst
IOutputArrayThe ideal point coordinates, after undistortion and reverse perspective transformation.
cameraMatrix
IInputArrayThe camera matrix A=[fx 0 cx; 0 fy cy; 0 0 1]
distCoeffs
IInputArrayThe vector of distortion coefficients, 4x1, 1x4, 5x1 or 1x5.
R
IInputArrayThe rectification transformation in object space (3x3 matrix). R1 or R2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used.
P
IInputArrayThe new camera matrix (3x3) or the new projection matrix (3x4). P1 or P2, computed by cvStereoRectify can be passed here. If the parameter is IntPtr.Zero, the identity matrix is used.
UpdateMotionHistory(IInputArray, IInputOutputArray, double, double)
Updates the motion history image as follows: mhi(x,y) = timestamp if silhouette(x,y) != 0; mhi(x,y) = 0 if silhouette(x,y) = 0 and mhi(x,y) < timestamp - duration; mhi(x,y) is unchanged otherwise. That is, MHI pixels where motion occurs are set to the current timestamp, while pixels where motion happened long ago are cleared.
public static void UpdateMotionHistory(IInputArray silhouette, IInputOutputArray mhi, double timestamp, double duration)
Parameters
silhouette
IInputArraySilhouette mask that has non-zero pixels where the motion occurs.
mhi
IInputOutputArrayMotion history image, that is updated by the function (single-channel, 32-bit floating-point)
timestamp
doubleCurrent time in milliseconds or other units.
duration
doubleMaximal duration of motion track in the same units as timestamp.
VConcat(IInputArray, IInputArray, IOutputArray)
Vertically concatenate two images
public static void VConcat(IInputArray src1, IInputArray src2, IOutputArray dst)
Parameters
src1
IInputArrayThe first image
src2
IInputArrayThe second image
dst
IOutputArrayThe result image
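A small sketch of vertical concatenation of two matrices:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat top = new Mat(2, 4, DepthType.Cv8U, 1))
using (Mat bottom = new Mat(3, 4, DepthType.Cv8U, 1))
using (Mat stacked = new Mat())
{
    // The two inputs share the column count (4) and depth, so they can
    // be stacked vertically; stacked ends up 5 rows by 4 columns.
    CvInvoke.VConcat(top, bottom, stacked);
}
```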
VConcat(IInputArrayOfArrays, IOutputArray)
The function vertically concatenates two or more matrices
public static void VConcat(IInputArrayOfArrays srcs, IOutputArray dst)
Parameters
srcs
IInputArrayOfArraysInput array or vector of matrices. All of the matrices must have the same number of columns and the same depth
dst
IOutputArrayOutput array. It has the same number of columns and the same depth as the sources, and its number of rows is the sum of the rows of the sources.
VConcat(Mat[], IOutputArray)
The function vertically concatenates two or more matrices
public static void VConcat(Mat[] srcs, IOutputArray dst)
Parameters
srcs
Mat[]Input array or vector of matrices. All of the matrices must have the same number of columns and the same depth
dst
IOutputArrayOutput array. It has the same number of columns and the same depth as the sources, and its number of rows is the sum of the rows of the sources.
WaitKey(int)
Waits for a key event indefinitely (when delay <= 0) or for "delay" milliseconds.
public static int WaitKey(int delay = 0)
Parameters
delay
intDelay in milliseconds.
Returns
- int
The code of the pressed key, or -1 if no key was pressed before the specified timeout elapsed
WarpAffine(IInputArray, IOutputArray, IInputArray, Size, Inter, Warp, BorderType, MCvScalar)
Applies an affine transformation to an image.
public static void WarpAffine(IInputArray src, IOutputArray dst, IInputArray mapMatrix, Size dsize, Inter interMethod = Inter.Linear, Warp warpMethod = Warp.Default, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
mapMatrix
IInputArray2x3 transformation matrix
dsize
SizeSize of the output image.
interMethod
InterInterpolation method
warpMethod
WarpWarp method
borderMode
BorderTypePixel extrapolation method
borderValue
MCvScalarA value used to fill outliers
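A minimal sketch of rotating an image with an affine warp; "input.png" is a placeholder path and the 30-degree angle is arbitrary:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

using (Mat src = CvInvoke.Imread("input.png", ImreadModes.Color))
using (Mat rotation = new Mat())
using (Mat rotated = new Mat())
{
    // Build the 2x3 affine matrix for a 30-degree rotation about the
    // image centre, then warp into an image of the same size.
    PointF center = new PointF(src.Width / 2f, src.Height / 2f);
    CvInvoke.GetRotationMatrix2D(center, 30, 1.0, rotation);
    CvInvoke.WarpAffine(src, rotated, rotation, src.Size);
}
```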
WarpPerspective(IInputArray, IOutputArray, IInputArray, Size, Inter, Warp, BorderType, MCvScalar)
Applies a perspective transformation to an image
public static void WarpPerspective(IInputArray src, IOutputArray dst, IInputArray mapMatrix, Size dsize, Inter interpolationType = Inter.Linear, Warp warpType = Warp.Default, BorderType borderMode = BorderType.Constant, MCvScalar borderValue = default)
Parameters
src
IInputArraySource image
dst
IOutputArrayDestination image
mapMatrix
IInputArray3x3 transformation matrix
dsize
SizeSize of the output image
interpolationType
InterInterpolation method
warpType
WarpWarp method
borderMode
BorderTypePixel extrapolation method
borderValue
MCvScalarvalue used in case of a constant border
Watershed(IInputArray, IInputOutputArray)
Implements one of the variants of the watershed, non-parametric marker-based segmentation algorithm described in [Meyer92]. Before passing the image to the function, the user has to roughly outline the desired regions in the image markers with positive (>0) indices, i.e. every region is represented as one or more connected components with the pixel values 1, 2, 3, etc. Those components will be the "seeds" of the future image regions. All the other pixels in markers, whose relation to the outlined regions is not known and should be defined by the algorithm, should be set to 0. On output, each pixel in markers is set to one of the values of the "seed" components, or to -1 at boundaries between the regions.
public static void Watershed(IInputArray image, IInputOutputArray markers)
Parameters
image
IInputArrayThe input 8-bit 3-channel image
markers
IInputOutputArrayThe input/output Int32 depth single-channel image (map) of markers.
Remarks
Note, that it is not necessary that every two neighbor connected components are separated by a watershed boundary (-1's pixels), for example, in case when such tangent components exist in the initial marker image.
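A hedged sketch of seeding and running the watershed; "objects.png", the seed positions, and the label values are all placeholders (real code would typically derive seeds from prior knowledge or a distance transform of a binary mask):

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (Mat image = CvInvoke.Imread("objects.png", ImreadModes.Color))
using (Mat markers = new Mat(image.Size, DepthType.Cv32S, 1))
{
    markers.SetTo(new MCvScalar(0));   // 0 = "unknown, let watershed decide"
    // Seed two regions with labels 1 and 2 as small filled circles.
    CvInvoke.Circle(markers, new Point(50, 50), 5, new MCvScalar(1), -1);
    CvInvoke.Circle(markers, new Point(200, 150), 5, new MCvScalar(2), -1);

    CvInvoke.Watershed(image, markers);
    // markers now holds a region label per pixel, or -1 on boundaries.
}
```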
WriteCloud(string, IInputArray, IInputArray, IInputArray)
Write point cloud to file
public static void WriteCloud(string file, IInputArray cloud, IInputArray colors = null, IInputArray normals = null)
Parameters
file
stringThe point cloud file name
cloud
IInputArrayThe point cloud
colors
IInputArrayThe color
normals
IInputArrayThe normals
cvCheckArr(nint, CheckType, double, double)
Checks that every array element is neither NaN nor Infinity. If CV_CHECK_RANGE is set, it also checks that every element is greater than or equal to minVal and less than maxVal.
public static extern int cvCheckArr(nint arr, CheckType flags, double minVal, double maxVal)
Parameters
arr
nintThe array to check.
flags
CheckTypeThe operation flags: CHECK_NAN_INFINITY, or a combination with CHECK_RANGE - if set, the function checks that every value of the array is within the [minVal, maxVal) range, otherwise it just checks that every element is neither NaN nor Infinity. CHECK_QUIET - if set, the function does not raise an error if an element is invalid or out of range
minVal
doubleThe inclusive lower boundary of valid values range. It is used only if CHECK_RANGE is set.
maxVal
doubleThe exclusive upper boundary of valid values range. It is used only if CHECK_RANGE is set.
Returns
- int
Returns nonzero if the check succeeded, i.e. all elements are valid and within the range, and zero otherwise. In the latter case, if the CV_CHECK_QUIET flag is not set, the function raises a runtime error.
cvClearND(nint, int[])
Clears (sets to zero) the particular element of a dense array or deletes the element of a sparse array. If the element does not exist, the function does nothing
public static extern void cvClearND(nint arr, int[] idx)
Parameters
cvConvertScale(nint, nint, double, double)
This function has several different purposes, and thus has several synonyms. It copies one array to another with optional scaling, which is performed first, and/or optional type conversion, performed after: dst(I) = src(I)*scale + (shift, shift, ...). All the channels of multi-channel arrays are processed independently. The type conversion is done with rounding and saturation, that is, if the result of scaling + conversion cannot be represented exactly by a value of the destination element type, it is set to the nearest representable value on the real axis. In the case scale=1, shift=0 no prescaling is done; this is a specially optimized case and it has the appropriate cvConvert synonym. If the source and destination arrays have the same type, this is also a special case that can be used to scale and shift a matrix or an image, which fits the cvScale synonym.
public static extern void cvConvertScale(nint src, nint dst, double scale, double shift)
Parameters
src
nintSource array
dst
nintDestination array
scale
doubleScale factor
shift
doubleValue added to the scaled source array elements
cvCopy(nint, nint, nint)
Copies selected elements from input array to output array: dst(I)=src(I) if mask(I)!=0. If any of the passed arrays is of IplImage type, then its ROI and COI fields are used. Both arrays must have the same type, the same number of dimensions and the same size. The function can also copy sparse arrays (mask is not supported in this case).
public static extern void cvCopy(nint src, nint des, nint mask)
Parameters
src
nintThe source array
des
nintThe destination array
mask
nintOperation mask, 8-bit single channel array; specifies elements of destination array to be changed
cvCreateImage(Size, IplDepth, int)
Creates the header and allocates data.
public static nint cvCreateImage(Size size, IplDepth depth, int channels)
Parameters
size
SizeImage width and height.
depth
IplDepthBit depth of image elements
channels
intNumber of channels per element(pixel). Can be 1, 2, 3 or 4. The channels are interleaved, for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ...
Returns
- nint
A pointer to IplImage
cvCreateImageHeader(Size, IplDepth, int)
Allocates, initializes, and returns the structure IplImage.
public static nint cvCreateImageHeader(Size size, IplDepth depth, int channels)
Parameters
size
SizeImage width and height.
depth
IplDepthBit depth of image elements
channels
intNumber of channels per element(pixel). Can be 1, 2, 3 or 4. The channels are interleaved, for example the usual data layout of a color image is: b0 g0 r0 b1 g1 r1 ...
Returns
- nint
The structure IplImage
cvCreateMat(int, int, DepthType)
Allocates header for the new matrix and underlying data, and returns a pointer to the created matrix. Matrices are stored row by row. All the rows are aligned by 4 bytes.
public static extern nint cvCreateMat(int rows, int cols, DepthType type)
Parameters
rows
intNumber of rows in the matrix.
cols
intNumber of columns in the matrix.
type
DepthTypeType of the matrix elements.
Returns
- nint
A pointer to the created matrix
cvCreateSparseMat(int, nint, DepthType)
The function allocates a multi-dimensional sparse array. Initially the array contains no elements, that is, Get or GetReal returns zero for every index
public static extern nint cvCreateSparseMat(int dims, nint sizes, DepthType type)
Parameters
dims
intNumber of array dimensions
sizes
nintArray of dimension sizes
type
DepthTypeType of array elements
Returns
- nint
Pointer to the array header
cvGet1D(nint, int)
Return the particular array element
public static MCvScalar cvGet1D(nint arr, int idx0)
Parameters
arr
nintInput array. Must have a single channel
idx0
intThe first zero-based component of the element index
Returns
- MCvScalar
the particular array element
cvGet2D(nint, int, int)
Return the particular array element
public static MCvScalar cvGet2D(nint arr, int idx0, int idx1)
Parameters
arr
nintInput array. Must have a single channel
idx0
intThe first zero-based component of the element index
idx1
intThe second zero-based component of the element index
Returns
- MCvScalar
the particular array element
cvGet3D(nint, int, int, int)
Return the particular array element
public static MCvScalar cvGet3D(nint arr, int idx0, int idx1, int idx2)
Parameters
arr
nintInput array. Must have a single channel
idx0
intThe first zero-based component of the element index
idx1
intThe second zero-based component of the element index
idx2
intThe third zero-based component of the element index
Returns
- MCvScalar
the particular array element
cvGetCol(nint, nint, int)
Return the header, corresponding to a specified column of the input array
public static nint cvGetCol(nint arr, nint submat, int col)
Parameters
arr
nintInput array
submat
nintPointer to the preallocated memory of the resulting sub-array header
col
intZero-based index of the selected column
Returns
- nint
The header, corresponding to a specified column of the input array
cvGetCols(nint, nint, int, int)
Return the header, corresponding to a specified col span of the input array
public static extern nint cvGetCols(nint arr, nint submat, int startCol, int endCol)
Parameters
arr
nintInput array
submat
nintPointer to the preallocated memory of the resulting sub-array header
startCol
intZero-based index of the selected column
endCol
intZero-based index of the ending column (exclusive) of the span
Returns
- nint
The header, corresponding to a specified col span of the input array
cvGetDiag(nint, nint, int)
returns the header, corresponding to a specified diagonal of the input array
public static extern nint cvGetDiag(nint arr, nint submat, int diag)
Parameters
arr
nintInput array
submat
nintPointer to the resulting sub-array header
diag
intArray diagonal. Zero corresponds to the main diagonal, -1 to the diagonal above the main, 1 to the diagonal below the main, and so forth
Returns
- nint
Pointer to the resulting sub-array header
cvGetImage(nint, nint)
Returns the image header for the input array, which can be a matrix (CvMat*) or an image (IplImage*).
public static extern nint cvGetImage(nint arr, nint imageHeader)
Parameters
Returns
- nint
Returns image header for the input array
cvGetImageCOI(nint)
Returns channel of interest of the image (it returns 0 if all the channels are selected).
public static extern int cvGetImageCOI(nint image)
Parameters
image
nintImage header.
Returns
- int
channel of interest of the image (it returns 0 if all the channels are selected)
cvGetImageROI(nint)
Returns the image ROI. If there is no ROI set, the whole image rectangle is returned.
public static Rectangle cvGetImageROI(nint image)
Parameters
image
nintImage header.
Returns
- Rectangle
The image ROI rectangle, or the whole image rectangle if no ROI is set
cvGetMat(nint, nint, out int, int)
Returns the matrix header for the input array, which can be a matrix (CvMat*), an image (IplImage*) or a multi-dimensional dense array (CvMatND*; the latter case is allowed only if allowNd != 0). In the case of a matrix, the function simply returns the input pointer. In the case of IplImage* or CvMatND* it initializes the header structure with the parameters of the current image ROI and returns a pointer to this temporary structure. Because COI is not supported by CvMat, it is returned separately.
public static extern nint cvGetMat(nint arr, nint header, out int coi, int allowNd)
Parameters
arr
nintInput array
header
nintPointer to CvMat structure used as a temporary buffer
coi
intOptional output parameter for storing COI
allowNd
intIf non-zero, the function accepts multi-dimensional dense arrays (CvMatND*) and returns 2D (if CvMatND has two dimensions) or 1D matrix (when CvMatND has 1 dimension or more than 2 dimensions). The array must be continuous
Returns
- nint
Returns matrix header for the input array
cvGetRawData(nint, out nint, out int, out Size)
Fills output variables with low-level information about the array data. All output parameters are optional, so some of the pointers may be set to NULL. If the array is IplImage with ROI set, parameters of ROI are returned.
public static extern void cvGetRawData(nint arr, out nint data, out int step, out Size roiSize)
Parameters
arr
nintArray header
data
nintOutput pointer to the whole image origin or ROI origin if ROI is set
step
intOutput full row length in bytes
roiSize
SizeOutput ROI size
cvGetReal1D(nint, int)
Returns the particular element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multiple-channel arrays, though they are a bit slower.
public static extern double cvGetReal1D(nint arr, int idx0)
Parameters
arr
nint: Input array. Must have a single channel
idx0
int: The first zero-based component of the element index
Returns
- double
the specified element of the single-channel array
cvGetReal2D(nint, int, int)
Returns the specified element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multi-channel arrays, though they are a bit slower.
public static extern double cvGetReal2D(nint arr, int idx0, int idx1)
Parameters
arr
nint: Input array. Must have a single channel
idx0
int: The first zero-based component of the element index
idx1
int: The second zero-based component of the element index
Returns
- double
the specified element of the single-channel array
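A minimal sketch of reading one element through this legacy accessor; it assumes `img.Ptr` exposes an IplImage* the legacy functions accept, which may not hold in newer Emgu CV builds:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> img = new Image<Gray, byte>(4, 4, new Gray(7)))
{
    // Note the index order: idx0 is the row (y), idx1 the column (x).
    double v = CvInvoke.cvGetReal2D(img.Ptr, 1, 2); // value at row 1, column 2
}
```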
cvGetReal3D(nint, int, int, int)
Returns the specified element of a single-channel array. If the array has multiple channels, a runtime error is raised. Note that the cvGet*D functions can be used safely for both single-channel and multi-channel arrays, though they are a bit slower.
public static extern double cvGetReal3D(nint arr, int idx0, int idx1, int idx2)
Parameters
arr
nint: Input array. Must have a single channel
idx0
int: The first zero-based component of the element index
idx1
int: The second zero-based component of the element index
idx2
int: The third zero-based component of the element index
Returns
- double
the specified element of the single-channel array
cvGetRow(nint, nint, int)
Returns the header corresponding to the specified row of the input array
public static nint cvGetRow(nint arr, nint submat, int row)
Parameters
arr
nint: Input array
submat
nint: Pointer to preallocated memory for the resulting sub-array header
row
int: Zero-based index of the selected row
Returns
- nint
The header corresponding to the specified row of the input array
cvGetRows(nint, nint, int, int, int)
Returns the header corresponding to the specified row span of the input array
public static extern nint cvGetRows(nint arr, nint submat, int startRow, int endRow, int deltaRow)
Parameters
arr
nint: Input array
submat
nint: Pointer to preallocated memory for the resulting sub-array header
startRow
int: Zero-based index of the starting row (inclusive) of the span
endRow
int: Zero-based index of the ending row (exclusive) of the span
deltaRow
int: Index step in the row span. That is, the function extracts every deltaRow-th row from startRow up to (but not including) endRow
Returns
- nint
The header corresponding to the specified row span of the input array
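A sketch of taking a strided row span, assuming `Matrix<TDepth>.Ptr` yields a CvMat* and that the `MCvMat` structure is available for sizing the caller-supplied header buffer (both legacy Emgu CV details that may differ in recent releases):

```csharp
using System;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;

using (Matrix<byte> m = new Matrix<byte>(10, 4))
{
    // The caller provides the memory that receives the sub-array header.
    nint header = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MCvMat)));
    try
    {
        // Header over rows 2, 4 and 6: every 2nd row in [2, 8).
        nint span = CvInvoke.cvGetRows(m.Ptr, header, 2, 8, 2);
        // 'span' aliases m's data; nothing is copied.
    }
    finally
    {
        Marshal.FreeHGlobal(header);
    }
}
```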
cvGetSize(nint)
Returns the number of rows (CvSize::height) and columns (CvSize::width) of the input matrix or image. In the case of an image, the size of its ROI is returned.
public static Size cvGetSize(nint arr)
Parameters
arr
nint: Array header
Returns
- Size
the number of rows (CvSize::height) and columns (CvSize::width) of the input matrix or image. In the case of an image, the size of its ROI is returned.
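A minimal usage sketch; as with the other legacy wrappers, it assumes `img.Ptr` is a pointer the C functions accept:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> img = new Image<Gray, byte>(640, 480))
{
    Size s = CvInvoke.cvGetSize(img.Ptr);
    // s.Width == 640 and s.Height == 480 here, unless an ROI narrows them.
}
```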
cvGetSubRect(nint, nint, Rectangle)
Returns header, corresponding to a specified rectangle of the input array. In other words, it allows the user to treat a rectangular part of input array as a stand-alone array. ROI is taken into account by the function so the sub-array of ROI is actually extracted.
public static nint cvGetSubRect(nint arr, nint submat, Rectangle rect)
Parameters
arr
nint: Input array
submat
nint: Pointer to the resultant sub-array header
rect
Rectangle: Zero-based coordinates of the rectangle of interest
Returns
- nint
the resultant sub-array header
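A sketch of viewing a rectangular region without copying, under the same assumptions as above (`img.Ptr` accepted by the legacy functions, `MCvMat` available to size the header buffer):

```csharp
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> img = new Image<Gray, byte>(100, 100))
{
    // The caller supplies the memory that holds the resulting CvMat header.
    nint header = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MCvMat)));
    try
    {
        nint sub = CvInvoke.cvGetSubRect(img.Ptr, header, new Rectangle(10, 10, 20, 20));
        // 'sub' views the 20x20 region in place; no pixel data is copied.
    }
    finally
    {
        Marshal.FreeHGlobal(header);
    }
}
```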
cvInitImageHeader(nint, Size, IplDepth, int, int, int)
Initializes the image header structure, pointer to which is passed by the user, and returns the pointer.
public static nint cvInitImageHeader(nint image, Size size, IplDepth depth, int channels, int origin, int align)
Parameters
image
nint: Image header to initialize
size
Size: Image width and height
depth
IplDepth: Image depth
channels
int: Number of channels
origin
int: IPL_ORIGIN_TL or IPL_ORIGIN_BL
align
int: Alignment for image rows, typically 4 or 8 bytes
Returns
- nint
Pointer to the image header
cvInitMatHeader(nint, int, int, int, nint, int)
Initializes already allocated CvMat structure. It can be used to process raw data with OpenCV matrix functions.
public static extern nint cvInitMatHeader(nint mat, int rows, int cols, int type, nint data, int step)
Parameters
mat
nint: Pointer to the matrix header to be initialized
rows
int: Number of rows in the matrix
cols
int: Number of columns in the matrix
type
int: Type of the matrix elements
data
nint: Optional data pointer assigned to the matrix header
step
int: Full row width in bytes of the assigned data. By default, the minimal possible step is used, i.e. no gaps are assumed between subsequent rows of the matrix
Returns
- nint
Pointer to the CvMat
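A sketch of wrapping an existing managed buffer as a CvMat view. The `MCvMat` structure is assumed available for sizing the header, and the hard-coded element-type constant is the legacy `CV_32FC1` value (5), stated here as an assumption rather than pulled from an Emgu constant:

```csharp
using System;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;

// Wrap an existing managed buffer as a 3x3 single-precision CvMat.
float[] raw = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
GCHandle pin = GCHandle.Alloc(raw, GCHandleType.Pinned);
nint header = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MCvMat)));
try
{
    const int CV_32FC1 = 5; // legacy element-type constant: 32-bit float, 1 channel
    nint mat = CvInvoke.cvInitMatHeader(
        header, 3, 3, CV_32FC1, pin.AddrOfPinnedObject(), 3 * sizeof(float));
    // 'mat' (== header) now views 'raw' without copying it.
}
finally
{
    Marshal.FreeHGlobal(header);
    pin.Free();
}
```

The pin must outlive every use of the header, since the CvMat only borrows the managed memory.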
cvInitMatNDHeader(nint, int, int[], DepthType, nint)
Initializes CvMatND structure allocated by the user
public static extern nint cvInitMatNDHeader(nint mat, int dims, int[] sizes, DepthType type, nint data)
Parameters
mat
nint: Pointer to the array header to be initialized
dims
int: Number of array dimensions
sizes
int[]: Array of dimension sizes
type
DepthType: Type of array elements
data
nint: Optional data pointer assigned to the matrix header
Returns
- nint
Pointer to the array header
cvMaxRect(Rectangle, Rectangle)
Finds minimum area rectangle that contains both input rectangles inside
public static Rectangle cvMaxRect(Rectangle rect1, Rectangle rect2)
Parameters
rect1
Rectangle: First input rectangle
rect2
Rectangle: Second input rectangle
Returns
- Rectangle
The minimum area rectangle that contains both input rectangles inside
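This one takes plain `Rectangle` values, so no native array handles are involved:

```csharp
using System.Drawing;
using Emgu.CV;

Rectangle a = new Rectangle(0, 0, 10, 10);
Rectangle b = new Rectangle(5, 5, 10, 10);
Rectangle bound = CvInvoke.cvMaxRect(a, b);
// bound is the bounding box of both inputs: (0, 0, 15, 15).
```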
cvRange(nint, double, double)
Initializes the matrix as follows: arr(i,j) = (end - start) * (i*cols(arr) + j) / (cols(arr) * rows(arr))
public static extern void cvRange(nint mat, double start, double end)
Parameters
mat
nint: The matrix to initialize. It should be single-channel, 32-bit integer or floating-point
start
double: The lower inclusive boundary of the range
end
double: The upper exclusive boundary of the range
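A sketch filling a small matrix with an evenly spaced ramp, assuming `Matrix<TDepth>.Ptr` yields a CvMat* the legacy function accepts:

```csharp
using Emgu.CV;

using (Matrix<float> m = new Matrix<float>(2, 5))
{
    // Fill the 10 elements, in row-major order, with evenly
    // spaced values in [0, 10): 0, 1, ..., 9.
    CvInvoke.cvRange(m.Ptr, 0, 10);
}
```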
cvReleaseImage(ref nint)
Releases the header and the image data.
public static extern void cvReleaseImage(ref nint image)
Parameters
image
nint: Double pointer to the header of the deallocated image
cvReleaseImageHeader(ref nint)
Releases the header.
public static extern void cvReleaseImageHeader(ref nint image)
Parameters
image
nint: Pointer to the deallocated header
cvReleaseMat(ref nint)
Decrements the matrix data reference counter and releases matrix header
public static extern void cvReleaseMat(ref nint mat)
Parameters
mat
nint: Double pointer to the matrix
cvReleaseSparseMat(ref nint)
The function releases the sparse array and clears the array pointer upon exit.
public static extern void cvReleaseSparseMat(ref nint mat)
Parameters
mat
nint: Reference to the pointer to the array
cvResetImageROI(nint)
Releases image ROI. After that the whole image is considered selected.
public static extern void cvResetImageROI(nint image)
Parameters
image
nint: Image header
cvReshape(nint, nint, int, int)
Initializes the CvMat header so that it points to the same data as the original array but has a different shape: a different number of channels, a different number of rows, or both
public static extern nint cvReshape(nint arr, nint header, int newCn, int newRows)
Parameters
arr
nint: Input array
header
nint: Output header to be filled
newCn
int: New number of channels. newCn = 0 means the number of channels remains unchanged
newRows
int: New number of rows. newRows = 0 means the number of rows remains unchanged unless it needs to be changed according to the newCn value
Returns
- nint
The CvMat header
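A sketch of flattening an interleaved 3-channel image into a single-channel view, under the same legacy-pointer and `MCvMat` assumptions as the earlier examples:

```csharp
using System;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>(4, 4))
{
    nint header = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(MCvMat)));
    try
    {
        // View the 4x4 3-channel image as a single-channel matrix;
        // the row count is unchanged (newRows = 0), so each row now
        // holds 12 single-channel elements instead of 4 BGR triples.
        nint flat = CvInvoke.cvReshape(img.Ptr, header, 1, 0);
    }
    finally
    {
        Marshal.FreeHGlobal(header);
    }
}
```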
cvSet2D(nint, int, int, MCvScalar)
Assigns a new value to the specified element of the array
public static void cvSet2D(nint arr, int idx0, int idx1, MCvScalar value)
Parameters
arr
nint: Input array
idx0
int: The first zero-based component of the element index
idx1
int: The second zero-based component of the element index
value
MCvScalar: The assigned value
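A minimal write counterpart to cvGetReal2D, again assuming `img.Ptr` is accepted by the legacy functions:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Bgr, byte> img = new Image<Bgr, byte>(4, 4))
{
    // Write pure blue (scalar channel order B, G, R) at row 2, column 3.
    CvInvoke.cvSet2D(img.Ptr, 2, 3, new MCvScalar(255, 0, 0));
}
```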
cvSetData(nint, nint, int)
Assigns user data to the array header.
public static extern void cvSetData(nint arr, nint data, int step)
Parameters
arr
nint: Array header
data
nint: User data
step
int: Full row length in bytes
cvSetImageCOI(nint, int)
Sets the channel of interest to a given value. Value 0 means that all channels are selected, 1 means that the first channel is selected etc. If ROI is NULL and coi != 0, ROI is allocated.
public static extern void cvSetImageCOI(nint image, int coi)
Parameters
image
nint: Image header
coi
int: The channel of interest. 0 selects all channels
cvSetImageROI(nint, Rectangle)
Sets the image ROI to a given rectangle. If ROI is NULL and the value of the parameter rect is not equal to the whole image, ROI is allocated.
public static void cvSetImageROI(nint image, Rectangle rect)
Parameters
image
nint: Image header
rect
Rectangle: The ROI rectangle
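A sketch pairing this call with cvResetImageROI (documented above), under the usual assumption that `img.Ptr` yields an IplImage* the legacy functions accept:

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

using (Image<Gray, byte> img = new Image<Gray, byte>(100, 100))
{
    CvInvoke.cvSetImageROI(img.Ptr, new Rectangle(10, 10, 30, 30));
    // Legacy calls such as cvGetSize now see only the 30x30 region.
    CvInvoke.cvResetImageROI(img.Ptr); // back to the full image
}
```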
cvSetReal1D(nint, int, double)
Assigns a new value to the specified element of a single-channel array
public static extern void cvSetReal1D(nint arr, int idx0, double value)
Parameters
arr
nint: Input array
idx0
int: The first zero-based component of the element index
value
double: The assigned value
cvSetReal2D(nint, int, int, double)
Assigns a new value to the specified element of a single-channel array
public static extern void cvSetReal2D(nint arr, int idx0, int idx1, double value)
Parameters
arr
nint: Input array
idx0
int: The first zero-based component of the element index
idx1
int: The second zero-based component of the element index
value
double: The assigned value
cvSetReal3D(nint, int, int, int, double)
Assigns a new value to the specified element of a single-channel array
public static extern void cvSetReal3D(nint arr, int idx0, int idx1, int idx2, double value)
Parameters
arr
nint: Input array
idx0
int: The first zero-based component of the element index
idx1
int: The second zero-based component of the element index
idx2
int: The third zero-based component of the element index
value
double: The assigned value
cvSetRealND(nint, int[], double)
Assigns a new value to the specified element of a single-channel array
public static extern void cvSetRealND(nint arr, int[] idx, double value)