Commit 616dbd5 — Merge pull request opencv#1107 from abidrahmank:master

Authored by Roman Donchenko, committed by OpenCV Buildbot.
2 parents: 3b8a13a + 1923d87

7 files changed: +62 −8 lines

modules/features2d/doc/common_interfaces_of_descriptor_extractors.rst (+4)

@@ -57,6 +57,8 @@ Computes the descriptors for a set of keypoints detected in an image (first vari
 .. ocv:function:: void DescriptorExtractor::compute( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints, vector<Mat>& descriptors ) const

+.. ocv:pyfunction:: cv2.DescriptorExtractor_create.compute(image, keypoints[, descriptors]) -> keypoints, descriptors
+
 :param image: Image.

 :param images: Image set.

@@ -72,6 +74,8 @@ Creates a descriptor extractor by name.
 .. ocv:function:: Ptr<DescriptorExtractor> DescriptorExtractor::create( const String& descriptorExtractorType )

+.. ocv:pyfunction:: cv2.DescriptorExtractor_create(descriptorExtractorType) -> retval
+
 :param descriptorExtractorType: Descriptor extractor type.

 The current implementation supports the following types of a descriptor extractor:

modules/features2d/doc/common_interfaces_of_feature_detectors.rst (+4)

@@ -44,6 +44,8 @@ Detects keypoints in an image (first variant) or image set (second variant).
 .. ocv:function:: void FeatureDetector::detect( const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints, const vector<Mat>& masks=vector<Mat>() ) const

+.. ocv:pyfunction:: cv2.FeatureDetector_create.detect(image[, mask]) -> keypoints
+
 :param image: Image.

 :param images: Image set.

@@ -60,6 +62,8 @@ Creates a feature detector by its name.
 .. ocv:function:: Ptr<FeatureDetector> FeatureDetector::create( const String& detectorType )

+.. ocv:pyfunction:: cv2.FeatureDetector_create(detectorType) -> retval
+
 :param detectorType: Feature detector type.

 The following detector types are supported:

modules/features2d/doc/drawing_function_of_keypoints_and_matches.rst (+7)

@@ -11,6 +11,10 @@ Draws the found matches of keypoints from two images.
 .. ocv:function:: void drawMatches( const Mat& img1, const vector<KeyPoint>& keypoints1, const Mat& img2, const vector<KeyPoint>& keypoints2, const vector<vector<DMatch> >& matches1to2, Mat& outImg, const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1), const vector<vector<char> >& matchesMask=vector<vector<char> >(), int flags=DrawMatchesFlags::DEFAULT )

+.. ocv:pyfunction:: cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg
+
+.. ocv:pyfunction:: cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) -> outImg
+
 :param img1: First source image.

@@ -67,6 +71,8 @@ Draws keypoints.
 .. ocv:function:: void drawKeypoints( const Mat& image, const vector<KeyPoint>& keypoints, Mat& outImage, const Scalar& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT )

+.. ocv:pyfunction:: cv2.drawKeypoints(image, keypoints[, outImage[, color[, flags]]]) -> outImage
+
 :param image: Source image.

 :param keypoints: Keypoints from the source image.

@@ -77,3 +83,4 @@ Draws keypoints.
 :param flags: Flags setting drawing features. Possible ``flags`` bit values are defined by ``DrawMatchesFlags``. See details above in :ocv:func:`drawMatches`.

+.. note:: In the Python API, the flags are exposed as ``cv2.DRAW_MATCHES_FLAGS_DEFAULT``, ``cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS``, ``cv2.DRAW_MATCHES_FLAGS_DRAW_OVER_OUTIMG``, and ``cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS``.

modules/features2d/doc/feature_detection_and_description.rst (+25)

@@ -10,6 +10,11 @@ Detects corners using the FAST algorithm
 .. ocv:function:: void FAST( InputArray image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression=true )
 .. ocv:function:: void FAST( InputArray image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression, int type )

+.. ocv:pyfunction:: cv2.FastFeatureDetector([, threshold[, nonmaxSuppression]]) -> <FastFeatureDetector object>
+.. ocv:pyfunction:: cv2.FastFeatureDetector(threshold, nonmaxSuppression, type) -> <FastFeatureDetector object>
+.. ocv:pyfunction:: cv2.FastFeatureDetector.detect(image[, mask]) -> keypoints
+
 :param image: grayscale image where keypoints (corners) are detected.

 :param keypoints: keypoints detected on the image.

@@ -22,6 +27,9 @@ Detects corners using the FAST algorithm
 Detects corners using the FAST algorithm by [Rosten06]_.

+.. note:: In the Python API, the types are given as ``cv2.FAST_FEATURE_DETECTOR_TYPE_5_8``, ``cv2.FAST_FEATURE_DETECTOR_TYPE_7_12`` and ``cv2.FAST_FEATURE_DETECTOR_TYPE_9_16``. For corner detection, use the ``cv2.FAST.detect()`` method.
+
 .. [Rosten06] E. Rosten. Machine Learning for High-speed Corner Detection, 2006.

@@ -65,6 +73,9 @@ The ORB constructor
 .. ocv:function:: ORB::ORB(int nfeatures = 500, float scaleFactor = 1.2f, int nlevels = 8, int edgeThreshold = 31, int firstLevel = 0, int WTA_K=2, int scoreType=ORB::HARRIS_SCORE, int patchSize=31)

+.. ocv:pyfunction:: cv2.ORB([, nfeatures[, scaleFactor[, nlevels[, edgeThreshold[, firstLevel[, WTA_K[, scoreType[, patchSize]]]]]]]]) -> <ORB object>
+
 :param nfeatures: The maximum number of features to retain.

 :param scaleFactor: Pyramid decimation ratio, greater than 1. ``scaleFactor==2`` means the classical pyramid, where each next level has 4x less pixels than the previous, but such a big scale factor will degrade feature matching scores dramatically. On the other hand, too close to 1 scale factor will mean that to cover certain scale range you will need more pyramid levels and so the speed will suffer.

@@ -87,6 +98,11 @@ Finds keypoints in an image and computes their descriptors
 .. ocv:function:: void ORB::operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false ) const

+.. ocv:pyfunction:: cv2.ORB.detect(image[, mask]) -> keypoints
+.. ocv:pyfunction:: cv2.ORB.compute(image, keypoints[, descriptors]) -> keypoints, descriptors
+.. ocv:pyfunction:: cv2.ORB.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
+
 :param image: The input 8-bit grayscale image.

 :param mask: The operation mask.

@@ -96,6 +112,7 @@ Finds keypoints in an image and computes their descriptors
 :param descriptors: The output descriptors. Pass ``cv::noArray()`` if you do not need it.

 :param useProvidedKeypoints: If it is true, then the method will use the provided vector of keypoints instead of detecting them.
+
 BRISK
 -----

@@ -111,6 +128,8 @@ The BRISK constructor
 .. ocv:function:: BRISK::BRISK(int thresh=30, int octaves=3, float patternScale=1.0f)

+.. ocv:pyfunction:: cv2.BRISK([, thresh[, octaves[, patternScale]]]) -> <BRISK object>
+
 :param thresh: FAST/AGAST detection threshold score.

 :param octaves: detection octaves. Use 0 to do single scale.

@@ -123,6 +142,8 @@ The BRISK constructor for a custom pattern
 .. ocv:function:: BRISK::BRISK(std::vector<float> &radiusList, std::vector<int> &numberList, float dMax=5.85f, float dMin=8.2f, std::vector<int> indexChange=std::vector<int>())

+.. ocv:pyfunction:: cv2.BRISK(radiusList, numberList[, dMax[, dMin[, indexChange]]]) -> <BRISK object>
+
 :param radiusList: defines the radii (in pixels) where the samples around a keypoint are taken (for keypoint scale 1).

 :param numberList: defines the number of sampling points on the sampling circle. Must be the same size as radiusList.

@@ -139,6 +160,10 @@ Finds keypoints in an image and computes their descriptors
 .. ocv:function:: void BRISK::operator()(InputArray image, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false ) const

+.. ocv:pyfunction:: cv2.BRISK.detect(image[, mask]) -> keypoints
+.. ocv:pyfunction:: cv2.BRISK.compute(image, keypoints[, descriptors]) -> keypoints, descriptors
+.. ocv:pyfunction:: cv2.BRISK.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
+
 :param image: The input 8-bit grayscale image.

 :param mask: The operation mask.

modules/features2d/include/opencv2/features2d.hpp (+4 −4)

@@ -1404,15 +1404,15 @@ CV_EXPORTS_W void drawKeypoints( const Mat& image, const std::vector<KeyPoint>&
                                  const Scalar& color=Scalar::all(-1), int flags=DrawMatchesFlags::DEFAULT );

 // Draws matches of keypints from two images on output image.
-CV_EXPORTS void drawMatches( const Mat& img1, const std::vector<KeyPoint>& keypoints1,
+CV_EXPORTS_W void drawMatches( const Mat& img1, const std::vector<KeyPoint>& keypoints1,
                              const Mat& img2, const std::vector<KeyPoint>& keypoints2,
-                             const std::vector<DMatch>& matches1to2, Mat& outImg,
+                             const std::vector<DMatch>& matches1to2, CV_OUT Mat& outImg,
                              const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1),
                              const std::vector<char>& matchesMask=std::vector<char>(), int flags=DrawMatchesFlags::DEFAULT );

-CV_EXPORTS void drawMatches( const Mat& img1, const std::vector<KeyPoint>& keypoints1,
+CV_EXPORTS_AS(drawMatchesKnn) void drawMatches( const Mat& img1, const std::vector<KeyPoint>& keypoints1,
                              const Mat& img2, const std::vector<KeyPoint>& keypoints2,
-                             const std::vector<std::vector<DMatch> >& matches1to2, Mat& outImg,
+                             const std::vector<std::vector<DMatch> >& matches1to2, CV_OUT Mat& outImg,
                              const Scalar& matchColor=Scalar::all(-1), const Scalar& singlePointColor=Scalar::all(-1),
                              const std::vector<std::vector<char> >& matchesMask=std::vector<std::vector<char> >(), int flags=DrawMatchesFlags::DEFAULT );

modules/nonfree/doc/feature_detection.rst (+12 −1)

@@ -16,6 +16,8 @@ The SIFT constructors.
 .. ocv:function:: SIFT::SIFT( int nfeatures=0, int nOctaveLayers=3, double contrastThreshold=0.04, double edgeThreshold=10, double sigma=1.6)

+.. ocv:pyfunction:: cv2.SIFT([, nfeatures[, nOctaveLayers[, contrastThreshold[, edgeThreshold[, sigma]]]]]) -> <SIFT object>
+
 :param nfeatures: The number of best features to retain. The features are ranked by their scores (measured in SIFT algorithm as the local contrast)

 :param nOctaveLayers: The number of layers in each octave. 3 is the value used in D. Lowe paper. The number of octaves is computed automatically from the image resolution.

@@ -33,6 +35,12 @@ Extract features and computes their descriptors using SIFT algorithm
 .. ocv:function:: void SIFT::operator()(InputArray img, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false)

+.. ocv:pyfunction:: cv2.SIFT.detect(image[, mask]) -> keypoints
+
+.. ocv:pyfunction:: cv2.SIFT.compute(image, keypoints[, descriptors]) -> keypoints, descriptors
+
+.. ocv:pyfunction:: cv2.SIFT.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors
+
 :param img: Input 8-bit grayscale image

 :param mask: Optional input mask that marks the regions where we should detect features.

@@ -43,6 +51,7 @@ Extract features and computes their descriptors using SIFT algorithm
 :param useProvidedKeypoints: Boolean flag. If it is true, the keypoint detector is not run. Instead, the provided vector of keypoints is used and the algorithm just computes their descriptors.

+.. note:: The Python API provides three functions: the first finds keypoints only, the second computes descriptors for the keypoints you provide, and the third both detects the keypoints and computes their descriptors. If you want both keypoints and descriptors, call the third one directly, as in ``kp, des = cv2.SIFT.detectAndCompute(image, None)``.

 SURF
 ----

@@ -105,6 +114,8 @@ Detects keypoints and computes SURF descriptors for them.
 .. ocv:function:: void SURF::operator()(InputArray img, InputArray mask, vector<KeyPoint>& keypoints, OutputArray descriptors, bool useProvidedKeypoints=false)

 .. ocv:pyfunction:: cv2.SURF.detect(image[, mask]) -> keypoints
+.. ocv:pyfunction:: cv2.SURF.compute(image, keypoints[, descriptors]) -> keypoints, descriptors
+.. ocv:pyfunction:: cv2.SURF.detectAndCompute(image, mask[, descriptors[, useProvidedKeypoints]]) -> keypoints, descriptors

 .. ocv:cfunction:: void cvExtractSURF( const CvArr* image, const CvArr* mask, CvSeq** keypoints, CvSeq** descriptors, CvMemStorage* storage, CvSURFParams params )

@@ -325,4 +336,4 @@ The ``descriptors`` matrix is :math:`\texttt{nFeatures} \times \texttt{descripto
 The class ``SURF_OCL`` uses some buffers and provides access to it. All buffers can be safely released between function calls.

-.. seealso:: :ocv:class:`SURF`
+.. seealso:: :ocv:class:`SURF`

modules/python/src2/cv2.cpp (+6 −3)

@@ -97,6 +97,7 @@ using namespace cv;
 typedef cv::softcascade::ChannelFeatureBuilder softcascade_ChannelFeatureBuilder;

 typedef std::vector<uchar> vector_uchar;
+typedef std::vector<char> vector_char;
 typedef std::vector<int> vector_int;
 typedef std::vector<float> vector_float;
 typedef std::vector<double> vector_double;

@@ -112,6 +113,8 @@ typedef std::vector<KeyPoint> vector_KeyPoint;
 typedef std::vector<Mat> vector_Mat;
 typedef std::vector<DMatch> vector_DMatch;
 typedef std::vector<String> vector_String;
+
+typedef std::vector<std::vector<char> > vector_vector_char;
 typedef std::vector<std::vector<Point> > vector_vector_Point;
 typedef std::vector<std::vector<Point2f> > vector_vector_Point2f;
 typedef std::vector<std::vector<Point3f> > vector_vector_Point3f;

@@ -830,7 +833,7 @@ template<typename _Tp> struct pyopencvVecConverter
     }
 };

-template <typename _Tp>
+template<typename _Tp>
 bool pyopencv_to(PyObject* obj, std::vector<_Tp>& value, const ArgInfo info)
 {
     return pyopencvVecConverter<_Tp>::to(obj, value, info);

@@ -888,9 +891,9 @@ template<typename _Tp> static inline PyObject* pyopencv_from_generic_vec(const s
 template<typename _Tp> struct pyopencvVecConverter<std::vector<_Tp> >
 {
-    static bool to(PyObject* obj, std::vector<std::vector<_Tp> >& value, const char* name="<unknown>")
+    static bool to(PyObject* obj, std::vector<std::vector<_Tp> >& value, const ArgInfo info)
     {
-        return pyopencv_to_generic_vec(obj, value, name);
+        return pyopencv_to_generic_vec(obj, value, info);
     }

     static PyObject* from(const std::vector<std::vector<_Tp> >& value)
