[opencv] Monocular measurement and problems encountered

This follows the space coordinate calculation under binocular vision: http://blog.csdn.net/qq_15947787/article/details/53366592

Ordinary camera calibration process: http://blog.csdn.net/qq_15947787/article/details/51471535

Monocular measurement here simply means using a single camera to measure the size of objects on the desktop (the calibration board plane serves as a fixed reference plane).

A point on the image, the corresponding point in the world coordinate system, and the camera's intrinsic and extrinsic parameters satisfy the following relationship:
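Zc * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

(K is the intrinsic matrix; R and t are the extrinsic rotation and translation.)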

where Zc is the point's Z coordinate in the camera coordinate system, and it can eventually be eliminated.

Combining the intrinsic and extrinsic parameters in the formula above gives the following form:
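Zc * [u, v, 1]^T = M * [X, Y, Z, 1]^T,   where M = K * [R | t]

(M is a 3x4 matrix; below its entries are written as m_ij with 0-based indices, i.e. m_ij = camM.at<double>(i, j) in the code further down.)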

So, for a point (X, Y, Z) in space, the coordinates of the corresponding point (u, v) on the image can be computed. Going the other way, the image coordinates (u, v) alone only constrain the point to a ray in space (the Z value is undetermined): two equations, three unknowns.

With a binocular (two-camera) system there are four equations and three unknowns, so (X, Y, Z) can be obtained by least squares.

Writing the above formula out as explicit equations:
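(u*m20 - m00)*X + (u*m21 - m01)*Y + (u*m22 - m02)*Z = m03 - u*m23
(v*m20 - m10)*X + (v*m21 - m11)*Y + (v*m22 - m12)*Z = m13 - v*m23

(obtained by substituting Zc = m20*X + m21*Y + m22*Z + m23 from the third row into the first two rows)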

Since we only want (X, Y) on the fixed plane, Z is a known constant, so the equations above can be rearranged as:
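(u*m20 - m00)*X + (u*m21 - m01)*Y = m03 - u*m23 - Z*(u*m22 - m02)
(v*m20 - m10)*X + (v*m21 - m11)*Y = m13 - v*m23 - Z*(v*m22 - m12)

This is a 2x2 linear system A * [X, Y]^T = B, which is exactly what the code below builds and solves.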

Solving for X and Y gives the spatial coordinates.

————————————————————————————————

The above is the basic solution process. First, OpenCV monocular calibration is used to obtain the camera's intrinsic parameters and, for each calibration board image, the rotation matrix and translation vector; the reprojection error is also computed. I establish the spatial coordinate system on the image with the smallest reprojection error. Since the calibration board lies flat on the table, the Z axis points along the height direction of the object. Edge points are then located on the object, and their spatial coordinates are calculated with the formulas above.
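A rough sketch of this calibration step follows (the board size, square size, and helper name are illustrative assumptions; the globals camIntrinsic, camRotation and camTranslation are the ones used by uv2xyz() further down):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

Mat camIntrinsic, camDistortion, camRotation, camTranslation;

//Hypothetical helper: calibrate from a set of color board images and keep one pose
void calibrateFromImages(const vector<Mat>& images)
{
    Size boardSize(9, 6);        //inner corners per row/column (assumed)
    float squareSize = 41.0f;    //one board square in mm

    //Board corner coordinates in the world frame, Z = 0 on the board plane
    vector<Point3f> board;
    for (int i = 0; i < boardSize.height; i++)
        for (int j = 0; j < boardSize.width; j++)
            board.push_back(Point3f(j * squareSize, i * squareSize, 0));

    vector<vector<Point3f>> objectPoints;
    vector<vector<Point2f>> imagePoints;
    for (const Mat& img : images)
    {
        vector<Point2f> corners;
        if (findChessboardCorners(img, boardSize, corners))
        {
            Mat gray;
            cvtColor(img, gray, COLOR_BGR2GRAY);
            cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
                         TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 30, 0.01));
            objectPoints.push_back(board);
            imagePoints.push_back(corners);
        }
    }

    //Intrinsics plus one rvec/tvec per board image; the overall RMS reprojection
    //error is returned (per-image errors can be computed with projectPoints)
    vector<Mat> rvecs, tvecs;
    double rms = calibrateCamera(objectPoints, imagePoints, images[0].size(),
                                 camIntrinsic, camDistortion, rvecs, tvecs);

    //The post uses the image with the smallest reprojection error as the world
    //frame; image 0 is taken here for brevity
    Rodrigues(rvecs[0], camRotation);    //3x1 rotation vector -> 3x3 rotation matrix
    camTranslation = tvecs[0];
}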

All units are in mm; one square of the calibration board is 41 mm. According to this blog post: http://blog.csdn.net/shenxiaolu1984/article/details/50165635

one can see how the world coordinate system of this calibration image should be laid out. In my tests, however, it did not match. But even setting aside the mismatch with the expected world coordinate system, and without pinning down exactly where that world coordinate origin lies, the calculated distances are very accurate.

//Calculate space coordinates from an image point on a known-height plane
Point3f uv2xyz(Point2f uv)
{
    //Two points on the checkerboard:
    //uv.x = 711; uv.y = 917;  computed point (-48.9, -176.7)
    //uv.x = 630; uv.y = 932;  computed point (-89.1, -184.6)
    //The checkerboard square is 41mm and the measured distance is 40.9689mm,
    //an error of about 0.03mm

    //     |u|          |X|
    //  Zc*|v| = camM * |Y|
    //     |1|          |Z|
    //                  |1|
    Mat camRT = Mat(3, 4, CV_64F);               //extrinsic matrix [R|t]
    hconcat(camRotation, camTranslation, camRT);
    Mat camM = camIntrinsic * camRT;             //3x4 projection matrix M = K*[R|t]

    //Least squares A matrix
    Mat A = Mat(2, 2, CV_64F);
    A.at<double>(0, 0) = uv.x * camM.at<double>(2, 0) - camM.at<double>(0, 0);
    A.at<double>(0, 1) = uv.x * camM.at<double>(2, 1) - camM.at<double>(0, 1);
    A.at<double>(1, 0) = uv.y * camM.at<double>(2, 0) - camM.at<double>(1, 0);
    A.at<double>(1, 1) = uv.y * camM.at<double>(2, 1) - camM.at<double>(1, 1);

    //Least squares B matrix
    Mat B = Mat(2, 1, CV_64F);
    double Z = 200;                              //object height in mm (known constant)
    B.at<double>(0, 0) = camM.at<double>(0, 3) - uv.x * camM.at<double>(2, 3) - Z * (uv.x * camM.at<double>(2, 2) - camM.at<double>(0, 2));
    B.at<double>(1, 0) = camM.at<double>(1, 3) - uv.y * camM.at<double>(2, 3) - Z * (uv.y * camM.at<double>(2, 2) - camM.at<double>(1, 2));

    //Solve A*[X Y]^T = B by SVD least squares
    Mat XY = Mat(2, 1, CV_64F);
    solve(A, B, XY, DECOMP_SVD);

    //Coordinates in the world coordinate system
    Point3f world;
    world.x = XY.at<double>(0, 0);
    world.y = XY.at<double>(1, 0);
    world.z = Z;

    return world;
}
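A quick usage check against the two checkerboard corners noted in the comments above (for points on the board plane itself, Z inside uv2xyz should be set to 0 rather than 200):

Point3f p1 = uv2xyz(Point2f(711, 917));
Point3f p2 = uv2xyz(Point2f(630, 932));
double dist = norm(p1 - p2);   //distance between the two corners, ~40.97 mm vs. the 41 mm square size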