Detect calibration boards in observed camera images
$ mrgingham /tmp/image*.jpg

# filename x y level
/tmp/image1.jpg - - -
/tmp/image2.jpg 1385.433000 1471.719000 0
/tmp/image2.jpg 1483.597000 1469.825000 0
/tmp/image2.jpg 1582.086000 1467.561000 1
...

$ mrgingham /tmp/image.jpg |
    vnl-filter -p x,y |
    feedgnuplot --domain --lines --points --image /tmp/image.jpg

[ image pops up with the detected grid plotted on top ]

$ mrgingham /tmp/image.jpg |
    vnl-filter -p x,y,level |
    feedgnuplot --domain --with 'linespoints pt 7 ps 2 palette' --tuplesizeall 3 --image /tmp/image.jpg

[ fancy image pops up with the detected grid plotted on top, detections colored
  by their decimation level ]
As of today (2022-02-27), mrgingham is included in the bleeding-edge versions of Debian and Ubuntu. So if you’re running at least Debian/testing or the not-yet-published Debian 12 (bookworm) or Ubuntu 22.04 (jammy), you can simply
apt install mrgingham libmrgingham-dev
to install the standalone tool and the development library respectively. For older Debian and Ubuntu distros, packages are available in the mrcal repositories. Please see the mrcal installation page.
If running any other distro, you should build mrgingham from source. First, make sure all the build-time dependencies are installed. These are listed in the debian/control file:
Build-Depends: ..., libopencv-dev, libboost-dev, pkg-config, mawk, perl, python3-all, python3-all-dev, python3-numpy
On a Debian-based distro you can:
sudo apt install \
libopencv-dev \
libboost-dev \
pkg-config \
mawk \
perl \
python3-all \
python3-all-dev \
python3-numpy
Replace the package names with their analogues on other distros. Once the dependencies are installed, build by running
make
and then the tool can be invoked with
./mrgingham
Both chessboards and a square grid of circles are supported. Chessboards are the strongly preferred choice, since circles cannot produce accurate results: we care about the center point, which we are not directly observing. Thus with closeup and oblique views, the reported circle center and the real circle center could be very far away from each other. Because of this, more work was put into the chessboard detector. Use that one. Really.
These are both nominally supported by OpenCV, but those implementations are slow and not at all robust, in my experience. The implementations here are much faster and work much better. I do use OpenCV here, but only for some core functionality.
Currently mrgingham looks for a square grid of points, with some user-requestable width. Rectangular grids are not supported at this time.
These tools work in two passes:
- Look for “interesting” points in the image. The goal is to find all the
points we care about, in any order. It is assumed that
- there will be many outliers
- there will be no outliers interspersed throughout the points we do care about (this isn’t an unreasonable requirement: areas between chessboard corners have a solid color)
- Run a geometric analysis to find a grid in this set of “interesting” points. This will throw out the outliers and it will order the output
If we return any data, that means we found a full grid. The geometric search is fairly anal, so if we found a full grid, it’s extremely likely that it is “right”.
Once again: the current implementation of mrgingham detects only complete chessboards.
The chessboard detector is based on the feature detector described in this paper: https://arxiv.org/abs/1301.5491
The authors provide a simple MIT-licensed implementation here: http://www-sigproc.eng.cam.ac.uk/Main/SB476Chess
This produces an image of detector response. This library then aggregates these responses by looking at local neighborhoods of high responses, and computing the mean of the position of the points in each candidate neighborhood, weighted by the detector response.
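The aggregation step can be sketched in code. This is not the mrgingham implementation, just a minimal illustration of the idea under stated assumptions: candidate neighborhoods are taken to be local maxima of the response above some cutoff, and both the cutoff and the neighborhood radius are made-up parameters.

// Sketch only: response-weighted corner aggregation; NOT mrgingham's actual code.
// 'response' is a CV_32F image of per-pixel corner-detector responses.
#include <opencv2/core.hpp>
#include <vector>

struct Corner { double x, y; };

static std::vector<Corner>
aggregate_responses(const cv::Mat& response,   // CV_32F detector response
                    float  threshold,          // hypothetical response cutoff
                    int    r = 3)              // hypothetical neighborhood radius
{
    std::vector<Corner> corners;
    for(int y = r; y < response.rows - r; y++)
        for(int x = r; x < response.cols - r; x++)
        {
            float v = response.at<float>(y, x);
            if(v < threshold) continue;

            // Keep only local maxima of the response: these define the
            // candidate neighborhoods
            bool is_max = true;
            for(int dy = -r; dy <= r && is_max; dy++)
                for(int dx = -r; dx <= r; dx++)
                    if(response.at<float>(y+dy, x+dx) > v) { is_max = false; break; }
            if(!is_max) continue;

            // Response-weighted mean position over the neighborhood
            double sum = 0.0, sx = 0.0, sy = 0.0;
            for(int dy = -r; dy <= r; dy++)
                for(int dx = -r; dx <= r; dx++)
                {
                    double w = response.at<float>(y+dy, x+dx);
                    if(w <= 0.0) continue;         // ignore non-positive responses
                    sum += w;
                    sx  += w * (x + dx);
                    sy  += w * (y + dy);
                }
            if(sum <= 0.0) continue;
            corners.push_back( Corner{ sx/sum, sy/sum } );
        }
    return corners;
}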
As noted earlier, I look for a square grid, 10x10 points by default. Here that means 10x10 internal corners, meaning a chessboard with 11 squares per side. To ensure robust detections, it is recommended to make the outer squares of the chessboard wider than the inner squares. This would ensure that we see exactly 10 points in a row with the expected spacing. If the outer squares have the same size, the edge of the board might be picked up, and we would see 11 or 12 points instead.
A recommended 10x10 pattern can be printed from this file: chessboard.10x10.pdf.
And a recommended 14x14 pattern can be printed from this file: chessboard.14x14.pdf. The denser chessboard contains more data, so fewer observations will be required for convergence of the calibration algorithm. But a higher-resolution camera is required to reliably detect the corners.

A simple tool to generate these .pdf files is included in the mrgingham repository. To make an NxM chessboard figure, run

make chessboard.NxM.pdf

in the mrgingham source tree. N and M must be even integers. Note: today mrgingham requires that N = M, but eventually this may be lifted, so this tool does not have this requirement.
The circle detector isn't recommended, and exists for legacy compatibility only.
The circle finder does mostly what the first stage of the OpenCV circle detector does:
- Find a reasonable intensity threshold
- Threshold the image
- Find blobs
- Return centroid of the blobs
This is relatively slow, can get confused by uneven lighting (although CLAHE can take care of that), and is inaccurate: nothing says that the centroid of a blob came from the center of the circle on the calibration board.
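For illustration only, the steps above map roughly onto stock OpenCV calls as shown below. This is not mrgingham's circle-finder code; the Otsu threshold, the dark-circles-on-light-background assumption, and the minimum blob area are assumptions of this sketch.

// Sketch only: threshold -> blobs -> centroids; NOT mrgingham's actual circle finder
#include <opencv2/imgproc.hpp>
#include <vector>

static std::vector<cv::Point2d>
find_blob_centroids(const cv::Mat& gray)      // 8-bit grayscale image
{
    // Steps 1,2: pick a threshold (Otsu, as an assumption) and binarize.
    //            Assuming dark circles on a light background, so invert.
    cv::Mat binary;
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

    // Step 3: find blobs as connected contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Step 4: report the centroid of each sufficiently-large blob
    std::vector<cv::Point2d> centers;
    for(const auto& c : contours)
    {
        cv::Moments m = cv::moments(c);
        if(m.m00 < 20.0) continue;            // hypothetical minimum blob area
        centers.push_back( cv::Point2d(m.m10/m.m00, m.m01/m.m00) );
    }
    return centers;
}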
The mrgingham tool produces its output in a vnlog text table. The columns are:

- filename: path to the image on disk
- x, y: detected pixel coordinates of the chessboard corner
- level: image level used in detecting this corner. Level 0 means "full-resolution". Level 1 means "half-resolution" and that the noise on this detection has double the standard deviation. Level 2 means "quarter-resolution" and so on.
If no chessboard was found in an image, a single record is output:
filename - - -
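As a consumer-side illustration, here is a minimal sketch of reading this table: '#' lines are the vnlog header and comments, fields are whitespace-separated, and a '-' marks an image with no detection. The struct and function names are hypothetical; vnl-filter and the other vnlog tools are the usual way to process this output.

// Sketch only: reading the "filename x y level" vnlog table produced by mrgingham
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct Detection { std::string filename; double x, y; int level; };

static std::vector<Detection> read_mrgingham_vnl(std::istream& in)
{
    std::vector<Detection> out;
    std::string line;
    while(std::getline(in, line))
    {
        if(line.empty() || line[0] == '#') continue;     // vnlog header/comment lines

        std::istringstream fields(line);
        std::string filename, x, y, level;
        if(!(fields >> filename >> x >> y >> level)) continue;

        if(x == "-") continue;                           // "filename - - -": no chessboard found

        out.push_back( Detection{ filename,
                                  std::stod(x), std::stod(y),
                                  std::stoi(level) } );
    }
    return out;
}

int main(void)
{
    // e.g.  mrgingham /tmp/image*.jpg > corners.vnl;  ./a.out < corners.vnl
    for(const Detection& d : read_mrgingham_vnl(std::cin))
        std::cout << d.filename << ": (" << d.x << "," << d.y << ") level " << d.level << "\n";
    return 0;
}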
The corners are output in a consistent order: starting at the top-left, traversing the grid, in the horizontal direction first. Usually, the chessboard is observed by multiple cameras mounted at a similar orientation, so this consistent order is consistent across cameras.
However, if some cameras in the set are rotated, their observed chessboard
corners will not be consistent anymore: the first corner will be “top-left” in
pixel coordinates for both, which is at the top of the chessboard for
rightside-up cameras, but the bottom of the chessboard for upside-down cameras.
This situation is resolved with the mrgingham-rotate-corners tool. It post-processes mrgingham output to reorder detections from rotated cameras. See the manpage of that tool for more detail. Eventually the implementation could be extended to be able to uniquely identify each corner, obviating the need for mrgingham-rotate-corners, but we're not there today.
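To see why a reordering is enough, consider the 180-degree case: an upside-down camera reports the same NxN corners, just traversed from the other end, so reversing the flattened row-major list restores the standard order. The sketch below shows only this idea; it is not the mrgingham-rotate-corners implementation, and the 90-degree cases need a transpose-style remap whose direction depends on the rotation sense.

// Sketch only: reordering a full NxN detection from a camera rotated by 180 degrees.
// Corners are stored row-major: top-left first, horizontal direction first.
#include <algorithm>
#include <vector>

struct Corner { double x, y; };                      // hypothetical corner type

static void reorder_for_180deg_camera(std::vector<Corner>& corners /* N*N entries */)
{
    // Grid position (r,c) seen by the rotated camera is (N-1-r, N-1-c) on the
    // board, which in flattened row-major order is simply the reversed sequence
    std::reverse(corners.begin(), corners.end());
}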
The user-facing functions live in mrgingham.hh. Everything is in C++, mostly because some of the underlying libraries are in C++. All functions return a bool to indicate success/failure. All functions put the destination arguments first. All functions return the output points in std::vector<mrgingham::PointDouble>& points_out, an ordered list of found points. The inputs are one of

- An image filename
- An OpenCV matrix: cv::Mat& image
- A set of detected points, that are unordered, and are a superset of the points we’re seeking
The prototypes:

namespace mrgingham
{
    bool find_circle_grid_from_image_array( std::vector<mrgingham::PointDouble>& points_out,
                                             const cv::Mat& image );

    bool find_circle_grid_from_image_file( std::vector<mrgingham::PointDouble>& points_out,
                                            const char* filename );

    bool find_chessboard_from_image_array( std::vector<mrgingham::PointDouble>& points_out,
                                            const cv::Mat& image,
                                            int image_pyramid_level = -1 );

    bool find_chessboard_from_image_file( std::vector<mrgingham::PointDouble>& points_out,
                                           const char* filename,
                                           int image_pyramid_level = -1 );

    bool find_grid_from_points( std::vector<mrgingham::PointDouble>& points_out,
                                const std::vector<mrgingham::Point>& points );
}
The arguments should be clear. The only one that needs an explanation is image_pyramid_level:

- if image_pyramid_level is 0 then we just use the image as is
- if image_pyramid_level > 0 then we cut down the image by a factor of 2 that many times. So for example, level 3 means each dimension is cut down by a factor of 2^3 = 8
- if image_pyramid_level < 0 then we try several levels, taking the first one that produces results
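A minimal usage sketch follows. It relies only on the prototypes listed above; the header path, the sample image filename, and the assumption that mrgingham::PointDouble exposes x and y members are illustrative, not guaranteed by this document.

// Sketch only: calling the documented mrgingham C++ API
#include "mrgingham.hh"   // header name from the docs; the installed path may differ
#include <vector>
#include <cstdio>

int main(void)
{
    std::vector<mrgingham::PointDouble> corners;

    // image_pyramid_level = -1: try several downsampling levels, take the
    // first one that produces a full grid
    if( !mrgingham::find_chessboard_from_image_file(corners,
                                                    "/tmp/image.jpg",
                                                    -1) )
    {
        fprintf(stderr, "No chessboard found\n");
        return 1;
    }

    // corners is the full ordered grid: top-left first, horizontal direction first
    for(const mrgingham::PointDouble& p : corners)
        printf("%f %f\n", p.x, p.y);       // assumes PointDouble has .x/.y fields
    return 0;
}

Passing image_pyramid_level = -1 mirrors the default behavior of the mrgingham commandline tool.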
There’re several included applications that exercise the library. The mrgingham-... tools are distributed, and their manpages appear below. The test-... tools are internal.

- mrgingham takes in images as globs (with some optional manipulation given on the cmdline), finds the grids, and returns them on stdout, as a vnlog
- mrgingham-observe-pixel-uncertainty evaluates the distribution of corner detections from repeated observations of a stationary scene
- mrgingham-rotate-corners corrects chessboard detections produced by rotated cameras by reordering the points in the detection stream
- test-find-grid-from-points ingests a file that contains an unordered set of points with outliers. It then finds the grid, and returns it on stdout
- test-dump-chessboard-corners is a lower-level tool that just finds the chessboard corner features and returns them on stdout. No geometric search is done.
- test-dump-blobs similarly is a lower-level tool that just finds the blob center features and returns them on stdout. No geometric search is done.
There’s a test suite in test/test.sh. It checks all images in test/data/*, and reports which ones produced no data. Currently I don’t ship any actual data. I will at some point.
NAME
    mrgingham - Extract chessboard corners from a set of images

SYNOPSIS
    $ mrgingham image*.jpg

    # filename x y level
    image1.jpg - - -
    image2.jpg 1385.433000 1471.719000 0
    image2.jpg 1483.597000 1469.825000 0
    image2.jpg 1582.086000 1467.561000 1
    ...

    $ mrgingham image.jpg |
        vnl-filter -p x,y,level |
        feedgnuplot --domain                               \
                    --with 'linespoints pt 7 ps 2 palette' \
                    --tuplesizeall 3                       \
                    --image image.jpg

    [ image pops up with the detected grid plotted on top, detections
      color-coded by their decimation level ]

DESCRIPTION
    The mrgingham tool detects chessboard corners from images stored on disk.
    The images are given on the commandline, as globs. Each glob is expanded,
    and each image is processed, possibly in parallel if -j was given. The
    output is a vnlog text table (https://www.github.com/dkogan/vnlog)
    containing columns:

    - filename: path to the image on disk
    - x, y: detected pixel coordinates of the chessboard corner
    - level: image level used in detecting this corner. Level 0 means
      "full-resolution". Level 1 means "half-resolution" and that the noise on
      this detection has double the standard deviation. Level 2 means
      "quarter-resolution" and so on.

    If no chessboard was found in an image, a single record is output:

    filename - - -

    The corners are output in a consistent order: starting at the top-left,
    traversing the grid, in the horizontal direction first. Usually, the
    chessboard is observed by multiple cameras mounted at a similar
    orientation, so this consistent order is consistent across cameras.

    By default we look for a CHESSBOARD, not a grid of circles or Apriltags or
    anything else.

    By default we apply adaptive histogram equalization (CLAHE), then blur with
    a radius of 1. We then use an adaptive level of downsampling when looking
    for the chessboard. These defaults work very well in practice.

    For debugging, pass in --debug. This will dump the various intermediate
    results into /tmp and it will report more stuff on the console. Most of the
    intermediate results are self-plotting data files. Run them.

    For debugging sequence candidates, pass in --debug-sequence x,y where 'x,y'
    are the approximate image coordinates of the start of a given sequence (a
    corner on the edge of the chessboard). This doesn't need to be exact;
    mrgingham will report on the nearest corner.

    See the mrgingham project documentation for more detail:
    https://github.com/dkogan/mrgingham/

OPTIONS
  POSITIONAL ARGUMENTS

    imageglobs
        Globs specifying the images to process. May be given more than once

  OPTIONAL ARGUMENTS

    --blobs
        Finds circle centers instead of chessboard corners. Not recommended

    --gridn N
        Requests detections of an NxN grid of corners. If omitted, N defaults
        to 10

    --noclahe
        Controls image preprocessing. Unless given, we will apply adaptive
        histogram equalization (CLAHE algorithm) to the images. This is
        EXTREMELY helpful if the images aren't illuminated evenly, which
        applies to most real-world images.

    --blur RADIUS
        Controls image preprocessing. Applies a gaussian blur to the image
        after the histogram equalization. A light blurring is very helpful with
        CLAHE, since it produces noisy images. By default we will blur with
        radius = 1. Set to <= 0 to disable

    --level LEVEL
        Controls image preprocessing. Applies a downsampling to the image
        (after CLAHE and --blur, if those are given). Level 0 means 'use the
        original image'. Level > 0 means downsample by 2**level. Level < 0
        means 'try several different levels until we find one that works'. This
        is the default.

    --no-refine
        Disables corner refinement. By default, the coordinates of reported
        corners are re-detected at less-downsampled zoom levels to improve
        their accuracy. If we do not want to do that, pass --no-refine

    --jobs N
        Parallelizes the processing N-ways. -j is a synonym. This is just like
        GNU make, except you're required to explicitly specify a job count.

    --debug
        If given, mrgingham will dump various intermediate results into /tmp
        and it will report more stuff on the console. The output is
        self-documenting

    --debug-sequence
        If given, we report details about sequence matching. Do this if --debug
        reports correct-looking corners (all corners detected, no doubled-up
        detections, no detections inside the chessboard but not on a corner)
NAME
    mrgingham-observe-pixel-uncertainty - Evaluate observed point distribution
    from stationary observations

SYNOPSIS
    $ observe-pixel-uncertainty '*.png'
    Evaluated 49 observations
    mean 1-sigma for independent x,y: 0.26

    $ mrcal-calibrate-cameras --observed-pixel-uncertainty 0.26 .....
    [ mrcal computes a camera calibration ]

DESCRIPTION
    mrgingham has finite precision, so repeated observations of the same board
    will produce slightly different corner coordinates. This tool takes in a
    set of images (assumed observing a chessboard, with both the camera and
    board stationary). It then outputs the 1-standard-deviation statistic for
    the distribution of detected corners. This can then be passed in to mrcal:
    'mrcal-calibrate-cameras --observed-pixel-uncertainty ...'

    The distribution of the detected corners is assumed to be gaussian, and
    INDEPENDENT in the horizontal and vertical directions. If the x and y
    distributions are each s, then the LENGTH of the deviation of each pixel is
    a Rayleigh distribution with expected value s*sqrt(pi/2) ~ s*1.25

    THIS TOOL PERFORMS VERY LIGHT OUTLIER REJECTION; IT IS ASSUMED THAT THE
    SCENE IS STATIONARY

OPTIONS
  POSITIONAL ARGUMENTS

    input
        Either 1: A glob that matches images observing a stationary calibration
        target. This must be a GLOB. So in the shell pass in '*.png' and NOT
        *.png. These are processed by 'mrgingham' and the arguments passed in
        with --mrgingham. Or 2: a vnlog representing corner detections from
        these images. This is assumed to be a file with a filename ending in
        .vnl, formatted like 'mrgingham' output: 3 columns: filename,x,y

  OPTIONAL ARGUMENTS

    -h, --help
        show this help message and exit

    --show {geometry,histograms}
        Visualize something. Arguments can be: "geometry": show the 1-stdev
        ellipses of the distribution for each chessboard corner separately.
        "histograms": show the distribution of all the x- and y-deviations off
        the mean

    --mrgingham MRGINGHAM
        If we're processing images, these are the arguments given to mrgingham.
        If we are reading a pre-computed file, this does nothing

    --num-corners NUM_CORNERS
        How many corners to expect in each image. If this is wrong I will throw
        an error. Defaults to 100

    --imagersize IMAGERSIZE IMAGERSIZE
        Optional imager dimensions: width and height. This is optional. If
        given, we use it to size the "--show geometry" plot
NAME
    mrgingham-rotate-corners - Adjust mrgingham corner detections from rotated
    cameras

SYNOPSIS
    # camera A is rightside-up
    # camera B is mounted sideways
    # cameras C,D are upside-down
    mrgingham --gridn N        \
      'frame*-cameraA.jpg'     \
      'frame*-cameraB.jpg'     \
      'frame*-cameraC.jpg'     \
      'frame*-cameraD.jpg' |   \
    mrgingham-rotate-corners --gridn N \
      --90 cameraB --180 'camera[CD]'

DESCRIPTION
    The mrgingham chessboard detector finds a chessboard in an image, but it
    has no way to know whether the detected chessboard was upside-down or
    otherwise rotated: the chessboard itself has no detectable marking to make
    this clear.

    In the usual case, the cameras are all mounted in the same orientation, so
    they all detect the same orientation of the chessboard, and there is no
    problem. However, if some cameras are mounted sideways or upside-down, the
    sequence of corners will correspond to different corners between the
    cameras with different orientations. This can be addressed by this tool.

    This tool ingests mrgingham detections, and outputs them after correcting
    the chessboard observations produced by rotated cameras. Each rotation
    option is an awk regular expression used to select images from specific
    cameras. The regular expression is tested against the image filenames. Each
    rotation option may be given multiple times. Any files not matched by any
    rotation option are passed through unrotated.
This is maintained by Dima Kogan <[email protected]>. Please let Dima know if something is unclear/broken/missing.
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
Copyright 2017-2021 California Institute of Technology
Copyright 2017-2021 Dima Kogan ([email protected])