By combining a mean shift filter with Canny edge detection, we can produce a simple cartoon effect on an image. In short, the Canny edge map (converted back to a 3-channel image) is subtracted from the mean-shift-filtered image: because the Canny output is white edges on a black background, the subtraction turns the edge pixels dark, drawing outlines on top of the flattened, posterized colors.
The code is as follows:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main()
{
    cv::Mat img = cv::imread("../lenna.jpg");
    cv::namedWindow("image");
    cv::imshow("image", img);

    // mean shift filter: flatten colors into "posterized" regions
    cv::Mat img1 = img.clone();
    cv::pyrMeanShiftFiltering(img1.clone(), img1, 10, 30);
    cv::namedWindow("image1");
    cv::imshow("image1", img1);

    // Canny edge detection on the grayscale image, then back to 3 channels
    cv::Mat img2;
    cv::Mat img3;
    cv::Mat img4;
    cv::cvtColor(img, img2, CV_BGR2GRAY);
    cv::Canny(img2, img3, 150, 150);
    cv::cvtColor(img3, img4, CV_GRAY2BGR);
    cv::namedWindow("image4");
    cv::imshow("image4", img4);

    // cartoonized image: subtract the (white-on-black) edge image
    img4 = img1 - img4;
    cv::namedWindow("image4_1");
    cv::imshow("image4_1", img4);

    cv::waitKey(0);
    return 0;
}
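A side note (not from the original post): the program above uses the OpenCV 2.x headers and the legacy CV_* conversion codes. If you build it against OpenCV 3 or 4, those codes are typically replaced by the cv::COLOR_* constants, roughly:

// OpenCV 3/4 equivalents of the legacy conversion codes used above
cv::cvtColor(img, img2, cv::COLOR_BGR2GRAY);
cv::cvtColor(img3, img4, cv::COLOR_GRAY2BGR);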
Below are, in order, the original image, the image after mean shift filtering, the Canny edge image, and the final cartoonized image.
In OpenCV, the mean shift filter is implemented by pyrMeanShiftFiltering; its C++ signature is:

void pyrMeanShiftFiltering(InputArray src, OutputArray dst, double sp, double sr, int maxLevel=1, TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1))

Parameters:
src – source 8-bit, 3-channel image.
dst – destination image of the same size and format as the source.
sp – spatial window radius.
sr – color window radius.
maxLevel – maximum level of the pyramid used for the segmentation.
termcrit – termination criteria: when to stop the mean-shift iterations.
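As a quick, hypothetical illustration of the two radii (reusing img from the program above; the values and window names are arbitrary), larger sp and sr flatten colors more aggressively:

cv::Mat mild, strong;
cv::pyrMeanShiftFiltering(img, mild, 5, 15);     // small spatial/color radii: subtle smoothing
cv::pyrMeanShiftFiltering(img, strong, 20, 45);  // large radii: strongly "posterized" regions
cv::imshow("mild", mild);
cv::imshow("strong", strong);
cv::waitKey(0);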
The function implements the filtering stage of meanshift segmentation, that is, the output of the function is the filtered "posterized" image with color gradients and fine-grain texture flattened. At every pixel (X,Y) of the input image (or down-sized input image, see below) the function executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

(x,y): X-sp ≤ x ≤ X+sp, Y-sp ≤ y ≤ Y+sp, ||(R,G,B)-(r,g,b)|| ≤ sr

where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X',Y') and average color vector (R',G',B') are found, and they act as the neighborhood center on the next iteration:

(X,Y) ~ (X',Y'), (R,G,B) ~ (R',G',B')

After the iterations are over, the color components of the initial pixel (that is, the pixel from where the iterations started) are set to the final value (the average color at the last iteration):

I(X,Y) <- (R*,G*,B*)

When maxLevel > 0, a Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes the boundaries of color regions sharper. Note that the results will be actually different from the ones obtained by running the meanshift procedure on the whole original image (i.e. when maxLevel==0).
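To make the iteration above concrete, here is a minimal illustrative sketch (my own, not OpenCV's implementation, and without the epsilon test of the default termination criteria) that runs the described mean-shift iterations for a single pixel of an 8-bit BGR image; the helper name meanShiftPixel is hypothetical:

#include <opencv2/core/core.hpp>
#include <algorithm>

// Run the mean-shift iterations for one pixel (X,Y) of an 8-bit BGR image.
// pyrMeanShiftFiltering does the equivalent for every pixel, plus the pyramid step.
static cv::Vec3b meanShiftPixel(const cv::Mat& src, int X, int Y,
                                int sp, double sr, int maxIter = 5)
{
    cv::Vec3b c0 = src.at<cv::Vec3b>(Y, X);
    double cr = c0[2], cg = c0[1], cb = c0[0];   // current color center (R',G',B')
    double cx = X, cy = Y;                       // current spatial center (X',Y')

    for (int it = 0; it < maxIter; it++)
    {
        double sumR = 0, sumG = 0, sumB = 0, sumX = 0, sumY = 0;
        int count = 0;

        // neighborhood in the joint space-color hyperspace:
        // |x - X'| <= sp, |y - Y'| <= sp and ||(R,G,B)-(r,g,b)|| <= sr
        for (int y = std::max(0, (int)cy - sp); y <= std::min(src.rows - 1, (int)cy + sp); y++)
            for (int x = std::max(0, (int)cx - sp); x <= std::min(src.cols - 1, (int)cx + sp); x++)
            {
                cv::Vec3b p = src.at<cv::Vec3b>(y, x);
                double dr = p[2] - cr, dg = p[1] - cg, db = p[0] - cb;
                if (dr * dr + dg * dg + db * db <= sr * sr)
                {
                    sumR += p[2]; sumG += p[1]; sumB += p[0];
                    sumX += x; sumY += y;
                    count++;
                }
            }
        if (count == 0)
            break;

        // the averages become the new neighborhood center for the next iteration
        cr = sumR / count; cg = sumG / count; cb = sumB / count;
        cx = sumX / count; cy = sumY / count;
    }

    // the final average color is what gets assigned to the initial pixel
    return cv::Vec3b(cv::saturate_cast<uchar>(cb),
                     cv::saturate_cast<uchar>(cg),
                     cv::saturate_cast<uchar>(cr));
}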
Original article: http://www.cnblogs.com/mikewolf2002/p/5615209.html