Image Analysis: Connected Component Labeling of Binary Images

1. Preface

A binary image, as the name implies, is an image whose pixels take only two brightness values: black (0) and white (255). Because of their simple structure and their strong expressiveness for the spatial relationships between pixels, binary images play an important role in image analysis and recognition. In practice, the analysis of many images is ultimately reduced to the analysis of a binary image, for example in medical image analysis, foreground detection, character recognition, and shape recognition. Binarization plus mathematical morphology can solve the target-extraction problem in many recognition projects.

The most important tool in binary image analysis is connected region labeling, which is the basis of all binary image analysis. It assigns a distinct label to each individual connected region formed by the white pixels (the targets) in the binary image. Once the blocks are identified, we can further compute their contours, bounding rectangles, centroids, invariant moments, and other geometric parameters.

The figure below shows a binary image after labeling, displayed for comparison with the original image; producing this result is the goal of this article.

[Figure: a binary image and its labeled result]

2. Connected Regions

Before discussing connected region labeling algorithms, we must first define what a connected region is and what kind of pixel adjacency counts as connected. In an image the smallest unit is the pixel, and every pixel has 8 neighboring pixels around it. Two kinds of adjacency are common: 4-adjacency and 8-adjacency. 4-adjacency involves 4 neighbors in total, namely the points above, below, to the left, and to the right, as shown below on the left. 8-adjacency involves 8 neighbors in total, including the points in the diagonal positions, as shown below on the right.

[Figure: 4-adjacency (left) and 8-adjacency (right)]
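
In code, the two adjacencies simply correspond to two sets of neighbor offsets. The following is only a minimal sketch; the array and function names are my own, assuming (row, col) indexing:

// A minimal sketch of the two adjacencies as neighbor offsets (names are mine).
const int neigh4[4][2] = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 0, 1 } };          // up, down, left, right
const int neigh8[8][2] = { { -1, -1 }, { -1, 0 }, { -1, 1 }, { 0, -1 },
                           { 0, 1 },   { 1, -1 }, { 1, 0 },  { 1, 1 } };

// Two distinct pixels are 8-adjacent if both coordinate distances are at most 1,
// and 4-adjacent if exactly one coordinate differs, by exactly 1.
bool isAdjacent8(int r1, int c1, int r2, int c2)
{
    int dr = r1 > r2 ? r1 - r2 : r2 - r1;
    int dc = c1 > c2 ? c1 - c2 : c2 - c1;
    return dr <= 1 && dc <= 1 && dr + dc > 0;
}

bool isAdjacent4(int r1, int c1, int r2, int c2)
{
    int dr = r1 > r2 ? r1 - r2 : r2 - r1;
    int dc = c1 > c2 ? c1 - c2 : c2 - c1;
    return dr + dc == 1;
}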

If pixels A and B are adjacent, we say that A and B are connected, and we have the following conclusion (stated without proof):

If A and B are connected and B and C are connected, then A and C are connected.

Visually, points that are connected to one another form a region, while points that are not connected form different regions. A set of points in which every point is connected to every other point is called a connected region.

In the following figure, if 4-adjacency is used there are 3 connected regions; if 8-adjacency is used there are 2 connected regions. (Note: the image is shown magnified; the squares are individual pixels, and the figure actually contains only 4 of them.)

[Figure: a magnified example used to count connected regions]
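
For reference only (this is not the method of this article): OpenCV 3.0 and later ship a ready-made cv::connectedComponents function whose connectivity parameter exposes exactly this difference. A tiny example of my own (two diagonally touching white pixels, not the figure above):

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    // Two white pixels that touch only diagonally.
    Mat bw = (Mat_<uchar>(2, 2) << 255, 0,
                                   0, 255);
    Mat labels;
    int n4 = connectedComponents(bw, labels, 4);  // returned count includes the background label 0
    int n8 = connectedComponents(bw, labels, 8);
    cout << "4-adjacency regions: " << n4 - 1 << endl;  // 2
    cout << "8-adjacency regions: " << n8 - 1 << endl;  // 1
    return 0;
}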

3. Connected Region Labeling Algorithms

There are many connected region labeling algorithms. Some finish the labeling in a single pass over the image, others need two or more passes, which leads to differences in running time between them. Here we introduce two algorithms.

The first is the algorithm used by the connected region labeling function bwlabel in MATLAB. It traverses the image once, recording the runs of consecutive foreground pixels in each row (or column) together with equivalence pairs between their labels, and then relabels the original image according to the equivalences. This is the most efficient algorithm I have tried, but bwlabel resolves the equivalence pairs with a sparse matrix and the Dulmage-Mendelsohn decomposition, whose principle is cumbersome, so that decomposition is not introduced in this article; instead, a depth-first traversal of a graph replaces the equivalence-pair resolution.

The second is the labeling algorithm used in the open source library cvBlob. It labels the whole image by locating the inner and outer contours of the connected regions. The core of this algorithm is the contour tracing algorithm, which we will describe in detail below. Compared with the first method this algorithm is less efficient, although when the number of connected regions is below 100 there is almost no difference between the two; when the number of connected regions reaches the order of 10^3, the first algorithm is more than 10 times faster than this one.

4. Run-Based Labeling

We first give a description of the algorithm, and then illustrate its steps on an actual image.

1. Scan the image row by row. We call a sequence of consecutive white pixels in a row a run, and record its starting column start, its ending column end, and the row number it lies in.

2. For every run in each row other than the first: if it does not overlap any run in the previous row, give it a new label; if it overlaps exactly one run in the previous row, give it the label of that run; if it overlaps two or more runs in the previous row, give the current run the smallest of their labels, and write the labels of those overlapping runs of the previous row into the equivalence pairs, to record that they belong to the same class.

3. Convert the equivalence pairs into equivalence sequences; every label in a sequence must receive the same label, since they are all equivalent. Starting from 1, give each equivalence sequence a new label.

4. Traverse the labels of the runs, look up the equivalence sequence each one belongs to, and replace it with the new label.

5. Fill every pixel of each run with the run's label.

6. End.

We now walk through these operations on a three-row image.

[Figure: three-row example image]

Row 1: we get two runs, [2,6] and [10,13], and label them 1 and 2 respectively.

Row 2: we get two more runs, [6,7] and [9,10]. Each of them overlaps a run of the previous row, so each takes the label of the run it overlaps, namely 1 and 2.

Row 3: two runs, [2,4] and [7,8]. The run [2,4] does not overlap anything in the previous row, so it gets the new label 3. The run [7,8] overlaps two runs of the previous row, so it gets the smaller of their two labels, namely 1, and the pair (1, 2) is written into the equivalence pairs.

After the whole image has been traversed, we have the starting column, ending column, row number and label of every run, and we also have a list of equivalence pairs.

Below we implement the above process in C++. The first two steps are split into two functions:

1) The fillRunVectors function finds and records all runs;

void fillRunVectors(const Mat& bwImage, int& NumberOfRuns, vector<int>& stRun, vector<int>& enRun, vector<int>& rowRun)
{
    for (int i = 0; i < bwImage.rows; i++)
    {
        const uchar* rowData = bwImage.ptr<uchar>(i);

        if (rowData[0] == 255)  // a run starts at the first column
        {
            NumberOfRuns++;
            stRun.push_back(0);
            rowRun.push_back(i);
        }
        for (int j = 1; j < bwImage.cols; j++)
        {
            if (rowData[j - 1] == 0 && rowData[j] == 255)  // black-to-white transition: a run starts
            {
                NumberOfRuns++;
                stRun.push_back(j);
                rowRun.push_back(i);
            }
            else if (rowData[j - 1] == 255 && rowData[j] == 0)  // white-to-black transition: a run ends
            {
                enRun.push_back(j - 1);
            }
        }
        if (rowData[bwImage.cols - 1])  // a run ends at the last column
        {
            enRun.push_back(bwImage.cols - 1);
        }
    }
}
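
As a quick sanity check, fillRunVectors can be run on the first two rows of the three-row example above. This is only a usage sketch of mine, assumed to sit inside a main function with the usual OpenCV and std headers and using-directives, as in the rest of the article's code:

// Usage sketch for fillRunVectors (the test image reproduces rows 1-2 of the example above).
Mat bw = Mat::zeros(2, 14, CV_8UC1);
bw(Rect(2, 0, 5, 1)) = 255;    // row 0: run [2, 6]
bw(Rect(10, 0, 4, 1)) = 255;   // row 0: run [10, 13]
bw(Rect(6, 1, 2, 1)) = 255;    // row 1: run [6, 7]
bw(Rect(9, 1, 2, 1)) = 255;    // row 1: run [9, 10]

int NumberOfRuns = 0;
vector<int> stRun, enRun, rowRun;
fillRunVectors(bw, NumberOfRuns, stRun, enRun, rowRun);

for (int k = 0; k < NumberOfRuns; k++)
    cout << "run " << k << " in row " << rowRun[k]
         << ": [" << stRun[k] << ", " << enRun[k] << "]" << endl;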

2) The firstPass function generates the labels of the runs together with the list of equivalence pairs. Compared with the first function, this one is a bit harder to understand.

void firstPass(vector<int>& stRun, vector<int>& enRun, vector<int>& rowRun, int NumberOfRuns,
    vector<int>& runLabels, vector<pair<int, int>>& equivalences, int offset)
{
    runLabels.assign(NumberOfRuns, 0);
    int idxLabel = 1;
    int curRowIdx = 0;
    int firstRunOnCur = 0;
    int firstRunOnPre = 0;
    int lastRunOnPre = -1;
    for (int i = 0; i < NumberOfRuns; i++)
    {
        if (rowRun[i] != curRowIdx)  // a new row begins: remember which runs belong to the previous row
        {
            curRowIdx = rowRun[i];
            firstRunOnPre = firstRunOnCur;
            lastRunOnPre = i - 1;
            firstRunOnCur = i;

        }
        for (int j = firstRunOnPre; j <= lastRunOnPre; j++)
        {
            // overlap test: offset = 0 corresponds to 4-adjacency, offset = 1 to 8-adjacency
            if (stRun[i] <= enRun[j] + offset && enRun[i] >= stRun[j] - offset && rowRun[i] == rowRun[j] + 1)
            {
                if (runLabels[i] == 0) // not labeled yet
                    runLabels[i] = runLabels[j];
                else if (runLabels[i] != runLabels[j]) // already labeled
                    equivalences.push_back(make_pair(runLabels[i], runLabels[j])); // record the equivalence pair
            }
        }
        if (runLabels[i] == 0) // does not overlap any run of the previous row
        {
            runLabels[i] = idxLabel++;
        }

    }
}
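
Continuing the sketch above, firstPass is called right after fillRunVectors. Judging from the overlap test, offset = 0 requires the runs to share a column (4-adjacency), while offset = 1 also accepts a diagonal touch (8-adjacency); this reading of the parameter is my own:

// Continuation of the usage sketch: provisional labels and equivalence pairs.
vector<int> runLabels;
vector<pair<int, int>> equivalences;
firstPass(stRun, enRun, rowRun, NumberOfRuns, runLabels, equivalences, 1);  // 1 = 8-adjacency

for (int k = 0; k < NumberOfRuns; k++)
    cout << "run " << k << " -> provisional label " << runLabels[k] << endl;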

Next comes the key part: the processing of the equivalence pairs, which we must convert into a number of equivalence sequences. For example, suppose we have the following equivalence pairs:

(1,2), (1,6), (3,7), (9,3), (8,1), (8,10), (11,5), (11,8), (11,12), (11,13), (11,14), (15,11)

We need to obtain the following final sequences:

list1:

1-2-5-6-8-10-11-12-13-14-15

list2:

3-7-9

list3:

4

One idea is to regard the labels 1-15 as the nodes of a graph, where an equivalence pair (1, 2) says that there is an edge between node 1 and node 2. The resulting graph is undirected, i.e. (1, 2) effectively also contains (2, 1). We need to traverse the graph and find all of its connected subgraphs, so we use the idea of depth-first graph traversal to search for the equivalence sequences.

Starting from node 1, it has 3 edges: 1-2, 1-6 and 1-8. Nodes 2 and 6 lead nowhere further; node 8 has 2 edges leading to 10 and 11; 10 has no further edges; 11 has 5 edges leading to 5, 12, 13, 14 and 15. Equivalence list 1 has been found.

The second equivalence list starts from 3, which has only 2 edges, leading to 7 and 9; 7 and 9 have no further edges, so equivalence list 2 is found.

Finally only 4 is left. It does not appear in any equivalence pair, so it forms a sequence on its own (here we assume that the largest label produced in step 2 is 15).

[Figures: the three equivalence graphs]

Below is a C++ implementation of this process. Each equivalence list is collected in a vector<int>; the equivalence pairs are passed in as a vector<pair<int, int>> and expanded into a boolean adjacency table.

void replaceSameLabel(vector<int>& runLabels, vector<pair<int, int>>&
    equivalence)
{
    int maxLabel = *max_element(runLabels.begin(), runLabels.end());
    // adjacency table of the equivalence graph: eqTab[a-1][b-1] == true means labels a and b are equivalent
    vector<vector<bool>> eqTab(maxLabel, vector<bool>(maxLabel, false));
    vector<pair<int, int>>::iterator vecPairIt = equivalence.begin();
    while (vecPairIt != equivalence.end())
    {
        eqTab[vecPairIt->first - 1][vecPairIt->second - 1] = true;
        eqTab[vecPairIt->second - 1][vecPairIt->first - 1] = true;
        vecPairIt++;
    }
    vector<int> labelFlag(maxLabel, 0);   // labelFlag[a-1]: final label assigned to old label a (0 = not visited yet)
    vector<vector<int>> equaList;         // the equivalence sequences found so far
    vector<int> tempList;                 // the sequence currently being expanded
    cout << maxLabel << endl;             // debug output: number of provisional labels
    for (int i = 1; i <= maxLabel; i++)
    {
        if (labelFlag[i - 1])
        {
            continue;                     // this label already belongs to some sequence
        }
        labelFlag[i - 1] = equaList.size() + 1;
        tempList.push_back(i);
        // expand the sequence by walking the equivalence graph starting from label i
        for (vector<int>::size_type j = 0; j < tempList.size(); j++)
        {
            for (vector<bool>::size_type k = 0; k != eqTab[tempList[j] - 1].size(); k++)
            {
                if (eqTab[tempList[j] - 1][k] && !labelFlag[k])
                {
                    tempList.push_back(k + 1);
                    labelFlag[k] = equaList.size() + 1;
                }
            }
        }
        equaList.push_back(tempList);
        tempList.clear();
    }
    cout << equaList.size() << endl;      // debug output: number of equivalence sequences
    // replace every run's provisional label with the label of its sequence
    for (vector<int>::size_type i = 0; i != runLabels.size(); i++)
    {
        runLabels[i] = labelFlag[runLabels[i] - 1];
    }
}
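
Finally, still in the same usage sketch, the equivalences are resolved and the labels are written back into a label image, which is step 5 of the algorithm. The article's code stops before this fill step, so the short version below is my own:

// Continuation of the usage sketch: final labels and the label image (step 5, my own sketch).
replaceSameLabel(runLabels, equivalences);

Mat labelImg = Mat::zeros(bw.size(), CV_32SC1);   // one int label per pixel
for (int k = 0; k < NumberOfRuns; k++)
{
    for (int col = stRun[k]; col <= enRun[k]; col++)
    {
        labelImg.at<int>(rowRun[k], col) = runLabels[k];
    }
}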

5. Contour-Based Labeling

Again, I first give a description of the algorithm:

1. Traverse the image from top to bottom and from left to right.

2. As shown in figure A below, A is an outer contour point (in fact, the first white point met during the traversal is always an outer contour point). If it has not been labeled yet, give A a new label. Starting from A, follow a fixed rule (described in detail later) to trace all the outer contour points of A, return to point A, and mark every point on the path with A's label.

3. As shown in figure B below, if we meet an already labeled outer contour point A', then starting from A' we move to the right and mark each point with the label of A' until a black pixel is met.

4. As shown in figure C below, if we meet a point B that has already been labeled and is an inner contour point (the pixel below it is a black, unmarked pixel; see the discussion of unmarked points further down), then starting from B we trace the inner contour and set every point on the path to B's label. Since B was already labeled the same as A, the inner contour and the outer contour end up with the same label.

5. As shown in figure D below, if we traverse to a point on an inner contour, we likewise mark the points to its right with the contour's label until a black pixel is met.

6. End.

[Figure: cases A-D of the contour-based labeling algorithm]

Throughout the whole procedure we scan the image only once, and every pixel we label is handled either by giving it a new label or by copying the label of the pixel to its left in the same row. Below is a more detailed description of the algorithm.

For an image I that needs to be labeled, we define a corresponding label image L to store the label information. At the start all values in L are set to 0, and we keep a label counter C, initialized to 1. We then begin scanning image I; whenever we meet a white pixel, call it P, we handle the following cases:

Case 1: if P(i, j) is a white pixel, its position in L is not yet marked, and the pixel above P is black, then P is a new outer contour point. We write the current value of C into L at P's position, i.e. L(i, j) = C, then start contour tracing from P and mark every point on that contour as C in L. After the whole outer contour has been searched and marked, we return to point P. Finally, do not forget to add 1 to C. This process is shown as S1 in the image below.

[Figure: case S1, tracing a new outer contour]

Case 2: if the point below P is an unmarked black point (what an unmarked point is will be defined after case 3), then P must be a point on an inner contour. There are two sub-cases. In the first, P has already been marked in L, which means P is also a point on an outer contour. In the second, P has not been marked in L yet; then, by the steps above, the point to the left of P must already be marked (this may not be easy to see at first; draw a simple picture and label it point by point and it becomes clear), so we mark P with the label of the point to its left. We then trace the inner contour starting from P and mark every point on it with P's label.

The image below shows the two sub-cases of P analyzed above. The point P on the left is both a point on the outer contour and a point on the inner contour.

[Figures: the two sub-cases of the inner contour point P]

Case 3: if a point P falls into neither of the two cases above, then the point to the left of P must already be marked (if this is not clear, label the two images above by hand); we only need to copy the label of the point to its left onto P in L.

Only one question is left now: what exactly is an unmarked point? We know that below an inner contour point P there must be a black pixel, but is a black pixel below enough, or must it be an unmarked point? Clearly a black pixel alone is not enough: for the point Q in the image below, the pixel below it is also black, yet Q is not a point on an inner contour.

In fact, during contour tracing, besides marking the contour points themselves, the black points that are examined around each contour point during the search (see the contour tracing algorithm below) are marked with a negative value in L (as shown in the image on the right below). So the point below Q has been marked with a negative value, and therefore it is not an unmarked point. An unmarked point is a point whose value in L has never been modified, i.e. is still 0.

[Figures: the point Q and the negative marks written during contour tracing]

Clearly, the focus of this algorithm is the search and marking of contours. Searching a contour comes down to choosing a search strategy; below we define the tracing rules for the outer and inner contours.

We number the 8 neighbors of a pixel 0-7. Since the first white point met while traversing the image is necessarily an outer contour point, we first determine the search strategy for the outer contour. For an outer contour point P there are two cases:

[Figure: the 8 neighbor positions of a pixel, numbered 0-7]

1) If P is the starting point of the outer contour, that is, we begin tracing from P, then we start from neighbor position 7 (upper right). If position 7 is a white point, it is added to the outer contour as the next point P1 and labeled the same as P. If position 7 is black, we search clockwise in the order 7-0-1-2-3-4-5-6 until a white point is met; that point is taken as P1, added to the outer contour, and labeled the same as P. One important detail: suppose the white point is found at position 2; then positions 7, 0 and 1 have been examined, and these points must be marked with a negative value in L. The figure below on the right shows the result of this marking.

[Figures: searching clockwise from position 7; the examined black positions marked negative]

2) If P is not the starting point of the outer contour, that is, P is a point somewhere along the outer contour path, then it must have been entered from some point, which we call P−1. Let x (0 <= x <= 7) be the neighbor position of P−1 relative to P; then the search for the next contour point starts from position (x + 2) mod 8. The meaning of "plus 2 modulo 8" is that, in the image, we start two positions clockwise of P−1. Once the starting search position is determined, the search proceeds exactly as in the previous step.

Once the tracing rule for outer contour points is fixed, the tracing rule for inner contour points is similar; the only difference is that if P is the first point of an inner contour, its starting search position is position 3 rather than position 7. Everything else is exactly the same as for the outer contour.
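
To make the rule concrete, here is a minimal sketch of a single tracing step, not the complete tracer. It assumes position 0 is the right-hand neighbor and the indices increase clockwise (consistent with position 7 being the upper right), that the label image lbl is of type CV_32S, and that the image has a background border so the neighbor accesses stay in range; all names are mine:

// Clockwise neighbor offsets numbered 0-7, with 0 = right and 7 = upper right (an assumption).
const Point offs[8] = { Point(1, 0), Point(1, 1), Point(0, 1), Point(-1, 1),
                        Point(-1, 0), Point(-1, -1), Point(0, -1), Point(1, -1) };

// One tracing step: P is the current contour point and prevPos is the position of the
// previous contour point P-1 relative to P. Pass 5 when starting a fresh outer contour
// (so the search begins at position 7) and 1 for a fresh inner contour (start at position 3).
int nextContourPoint(const Mat& bw, Mat& lbl, Point P, int prevPos, Point& nextP)
{
    int start = (prevPos + 2) % 8;   // start two positions clockwise of P-1
    for (int k = 0; k < 8; k++)
    {
        int pos = (start + k) % 8;
        Point q = P + offs[pos];
        if (bw.at<uchar>(q) != 0)    // white: q is the next contour point
        {
            nextP = q;
            return pos;              // seen from nextP, P sits at position (pos + 4) % 8
        }
        lbl.at<int>(q) = -1;         // examined black pixel: mark it negative in L
    }
    return -1;                       // P is an isolated point; no white neighbor exists
}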

If the search rules above still feel unintuitive, try tracing the image below by hand; it should become clear. The condition for ending a path search is that we are back at the starting point S and there is no unmarked point left around S.

In the middle image shown below, the path formed from point S is STUTSVWV.

[Figures: contour tracing examples, including the path STUTSVWV]

OpenCV already provides a function for finding contours, findContours, which can also return the hierarchy relationship between the contours. Such a function is not difficult to implement according to the algorithm above, so it is not re-implemented here; if you are interested, you can look at the OpenCV source code (contours.cpp). The bwLabel function below uses findContours to carry out the labeling:

void bwLabel(const Mat& imgBw, Mat & imgLabeled)
{
    // pad the image with one background pixel on the top and left
    Mat imgClone = Mat(imgBw.rows + 1, imgBw.cols + 1, imgBw.type(), Scalar(0));
    imgBw.copyTo(imgClone(Rect(1, 1, imgBw.cols, imgBw.rows)));

    // note: labels are stored in the same type as the input (uchar for an 8-bit image),
    // so at most 255 connected regions can be distinguished
    imgLabeled.create(imgClone.size(), imgClone.type());
    imgLabeled.setTo(Scalar::all(0));

    vector<vector<Point>> contours;
    vector<Vec4i> hierarchy;
    findContours(imgClone, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);

    vector<int> contoursLabel(contours.size(), 0);
    int numlab = 1;
    // label the outer contours
    for (vector<vector<Point>>::size_type i = 0; i < contours.size(); i++)
    {
        if (hierarchy[i][3] >= 0) // has a parent contour, i.e. this is an inner contour
        {
            continue;
        }
        for (vector<Point>::size_type k = 0; k != contours[i].size(); k++)
        {
            imgLabeled.at<uchar>(contours[i][k].y, contours[i][k].x) = numlab;
        }
        contoursLabel[i] = numlab++;
    }
    // label the inner contours with the label of their parent outer contour
    for (vector<vector<Point>>::size_type i = 0; i < contours.size(); i++)
    {
        if (hierarchy[i][3] < 0)
        {
            continue;
        }
        for (vector<Point>::size_type k = 0; k != contours[i].size(); k++)
        {
            imgLabeled.at<uchar>(contours[i][k].y, contours[i][k].x) = contoursLabel[hierarchy[i][3]];
        }
    }
    // label the non-contour pixels: copy the label of the pixel to the left
    for (int i = 0; i < imgLabeled.rows; i++)
    {
        for (int j = 0; j < imgLabeled.cols; j++)
        {
            if (imgClone.at<uchar>(i, j) != 0 && imgLabeled.at<uchar>(i, j) == 0)
            {
                imgLabeled.at<uchar>(i, j) = imgLabeled.at<uchar>(i, j - 1);
            }
        }
    }
    imgLabeled = imgLabeled(Rect(1, 1, imgBw.cols, imgBw.rows)).clone(); // crop the 1-pixel border away
}
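
A possible way to call bwLabel and look at the result; the file name, the threshold value, and the display scaling are illustrative choices of mine:

// Hypothetical usage of bwLabel ("test.png" and the constants are for illustration only).
int main()
{
    Mat img = imread("test.png", 0);          // 0 = load as grayscale
    Mat bw;
    threshold(img, bw, 128, 255, THRESH_BINARY);

    Mat labeled;
    bwLabel(bw, labeled);

    Mat show;
    normalize(labeled, show, 0, 255, NORM_MINMAX, CV_8UC1);  // stretch the small label values to visible gray levels
    imshow("labels", show);
    waitKey(0);
    return 0;
}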