I'm looking to apply a median filter to a UIImage in my iOS application.
Due to company restrictions, I cannot use OpenGL filters.
Any ideas or existing implementations would be very welcome.
Thanks.
Apple's Core Image framework may be your solution. To be precise, you need a CIFilter that implements a median filter; you are probably interested in CIMedianFilter (or have a look at the Core Image Filter Reference).
CIImage *inputImage = //...
CIFilter *filter = [CIFilter filterWithName:@"CIMedianFilter"];
[filter setDefaults];
[filter setValue:inputImage forKey:@"inputImage"];
CIImage *outputImage = [filter outputImage];
To convert the CIImage to UIImage and vice versa:
// Note: UIImage's CIImage property is non-nil only if the UIImage was created from a CIImage;
// for bitmap-backed images, use [CIImage imageWithCGImage:uiImage.CGImage] instead.
CIImage *ciImage = [UIImage imageNamed:@"test.png"].CIImage;
UIImage *uiImage = [[UIImage alloc] initWithCIImage:ciImage];
Look at the github project:
https://github.com/BradLarson/GPUImage
It contains a lot of CIFilter-like filters that are GPU accelerated.
Another alternative: use a convolution filter or roll your own median filter using Core Image's kernel language.
For those of you who can use OpenGL ES in your iOS app, this is how you calculate the median in a pixel neighborhood radius of your choosing:
kernel vec4 medianUnsharpKernel(sampler u) {
vec4 pixel = unpremultiply(sample(u, samplerCoord(u)));
vec2 xy = destCoord();
int radius = 3;
int bounds = (radius - 1) / 2;
vec4 sum = vec4(0.0);
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
sum += unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
}
}
vec4 mean = vec4(sum / vec4(pow(float(radius), 2.0)));
float mean_avg = float(mean);
float comp_avg = 0.0;
vec4 comp = vec4(0.0);
vec4 median = mean;
for (int i = (0 - bounds); i <= bounds; i++)
{
for (int j = (0 - bounds); j <= bounds; j++ )
{
comp = unpremultiply(sample(u, samplerTransform(u, vec2(xy + vec2(i, j)))));
comp_avg = float(comp);
median = (comp_avg < mean_avg) ? max(median, comp) : median;
}
}
return premultiply(vec4(vec3(abs(pixel.rgb - median.rgb)), 1.0));
}
A brief description of the steps:
1. Calculate the mean of the values of the pixels surrounding the source pixel in a 3x3 neighborhood.
2. Find the maximum pixel value of all pixels in the same neighborhood that are less than the mean.
3. [OPTIONAL] Subtract the median pixel value from the source pixel value for edge detection.
If you're using the median value for edge detection, there are a couple of ways to modify the above code for better results, namely hybrid median filtering and truncated median filtering (a substitute for, and improvement on, 'mode' filtering). If you're interested, please ask.
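In case it helps, here is a rough, unoptimized CPU sketch of 3x3 hybrid median filtering using OpenCV on a single-channel 8-bit image. It is only meant to illustrate the idea (take the median of the plus-shaped neighbourhood, the median of the X-shaped neighbourhood, and the centre pixel, then output the median of those three values), not a drop-in implementation; the function names are my own.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <array>

static uchar median5(std::array<uchar, 5> v)
{
    // median of 5 values via partial sort
    std::nth_element(v.begin(), v.begin() + 2, v.end());
    return v[2];
}

cv::Mat hybridMedian3x3(const cv::Mat &src) // expects CV_8UC1
{
    cv::Mat padded;
    cv::Mat dst = src.clone();
    cv::copyMakeBorder(src, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);
    for (int y = 0; y < src.rows; ++y)
    {
        for (int x = 0; x < src.cols; ++x)
        {
            const int py = y + 1, px = x + 1; // coordinates in the padded image
            const uchar c = padded.at<uchar>(py, px);
            // median of the plus-shaped (N, S, W, E, centre) neighbourhood
            uchar mPlus = median5({c,
                                   padded.at<uchar>(py - 1, px), padded.at<uchar>(py + 1, px),
                                   padded.at<uchar>(py, px - 1), padded.at<uchar>(py, px + 1)});
            // median of the X-shaped (diagonals, centre) neighbourhood
            uchar mCross = median5({c,
                                    padded.at<uchar>(py - 1, px - 1), padded.at<uchar>(py - 1, px + 1),
                                    padded.at<uchar>(py + 1, px - 1), padded.at<uchar>(py + 1, px + 1)});
            // final value: median of the two medians and the centre pixel
            std::array<uchar, 3> m = {mPlus, mCross, c};
            std::nth_element(m.begin(), m.begin() + 1, m.end());
            dst.at<uchar>(y, x) = m[1];
        }
    }
    return dst;
}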
If someone would like to do it in Xamarin (for iOS), here is the code:
public static UIImage MedianFilter(UIImage image)
{
CIImage ciImage = new CIImage(image);
var medianFilter = new CIMedianFilter() { Image = ciImage };
CIImage output = medianFilter.OutputImage;
var context = CIContext.FromOptions(null);
var cgimage = context.CreateCGImage(output, output.Extent);
var uiImage = UIImage.FromImage(cgimage);
return uiImage;
}
I have a pointcloud generated by scanning a planar surface using stereo cameras. I have generated features such as normals, fpfh etc and using this information I want to classify areas in the pointcloud. To enable the use of more traditional CNN approaches I want to convert this pointcloud to a multi-channel image in opencv. I have the pointcloud collapsed to the XY plane, and aligned to the X and Y axes so that I can create a bounding box for the image.
I am looking for ideas on how to proceed further with the mapping from points to pixels. Specifically, I am confused about the image size and how to go about filling in each pixel with the appropriate data (overlapping points would be averaged out, empty ones would be labelled accordingly). Since this is an unorganized point cloud, I do not have camera parameters to use, and I guess PCL's RangeImage class would not work in my case.
Any help is appreciated!
Try creating an empty cv::Mat of predetermined size first. Then iterate through every pixel of that Mat to determine what value it should take.
Here is some code which does something similar to what you were describing:
cv::Mat makeImageFromPointCloud(pcl::PointCloud<pcl::PointXYZI>::Ptr cloud, std::string dimensionToRemove, float stepSize1, float stepSize2)
{
pcl::PointXYZI cloudMin, cloudMax;
pcl::getMinMax3D(*cloud, cloudMin, cloudMax);
std::string dimen1, dimen2;
float dimen1Max, dimen1Min, dimen2Min, dimen2Max;
if (dimensionToRemove == "x")
{
dimen1 = "y";
dimen2 = "z";
dimen1Min = cloudMin.y;
dimen1Max = cloudMax.y;
dimen2Min = cloudMin.z;
dimen2Max = cloudMax.z;
}
else if (dimensionToRemove == "y")
{
dimen1 = "x";
dimen2 = "z";
dimen1Min = cloudMin.x;
dimen1Max = cloudMax.x;
dimen2Min = cloudMin.z;
dimen2Max = cloudMax.z;
}
else if (dimensionToRemove == "z")
{
dimen1 = "x";
dimen2 = "y";
dimen1Min = cloudMin.x;
dimen1Max = cloudMax.x;
dimen2Min = cloudMin.y;
dimen2Max = cloudMax.y;
}
std::vector<std::vector<int>> pointCountGrid;
int maxPoints = 0;
std::vector<pcl::PointCloud<pcl::PointXYZI>::Ptr> grid;
for (float i = dimen1Min; i < dimen1Max; i += stepSize1)
{
pcl::PointCloud<pcl::PointXYZI>::Ptr slice = passThroughFilter1D(cloud, dimen1, i, i + stepSize1);
grid.push_back(slice);
std::vector<int> slicePointCount;
for (float j = dimen2Min; j < dimen2Max; j += stepSize2)
{
pcl::PointCloud<pcl::PointXYZI>::Ptr grid_cell = passThroughFilter1D(slice, dimen2, j, j + stepSize2);
int gridSize = grid_cell->size();
slicePointCount.push_back(gridSize);
if (gridSize > maxPoints)
{
maxPoints = gridSize;
}
}
pointCountGrid.push_back(slicePointCount);
}
cv::Mat mat(static_cast<int>(pointCountGrid.size()), static_cast<int>(pointCountGrid.at(0).size()), CV_8UC1);
mat = cv::Scalar(0);
for (int i = 0; i < mat.rows; ++i)
{
for (int j = 0; j < mat.cols; ++j)
{
int pointCount = pointCountGrid.at(i).at(j);
float percentOfMax = (pointCount + 0.0) / (maxPoints + 0.0);
int intensity = percentOfMax * 255;
mat.at<uchar>(i, j) = intensity;
}
}
return mat;
}
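Note that the code above relies on a passThroughFilter1D helper that isn't shown. Here is a minimal sketch of what such a helper might look like, assuming it simply wraps pcl::PassThrough to keep the points whose coordinate along the given field lies between the two limits:
#include <pcl/filters/passthrough.h>

pcl::PointCloud<pcl::PointXYZI>::Ptr passThroughFilter1D(pcl::PointCloud<pcl::PointXYZI>::Ptr cloud,
                                                         const std::string &field, float low, float high)
{
    pcl::PointCloud<pcl::PointXYZI>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZI>);
    pcl::PassThrough<pcl::PointXYZI> pass;
    pass.setInputCloud(cloud);
    pass.setFilterFieldName(field); // "x", "y" or "z"
    pass.setFilterLimits(low, high);
    pass.filter(*filtered);
    return filtered;
}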
To give this question some context (ho ho):
I am subclassing CIFilter under iOS for the purpose of creating some custom photo-effect filters. As per the documentation, this means creating a "compound" filter that encapsulates one or more pre-existing CIFilters within the umbrella of my custom CIFilter subclass.
All well and good. No problems there. For the sake of example, let's say I encapsulate a single CIColorMatrix filter which has been preset with certain rgba input vectors.
When applying my custom filter (or indeed CIColorMatrix alone), I see radically different results when using a CIContext with colour management on versus off. I am creating my contexts as follows:
Colour management on:
CIContext * context = [CIContext contextWithOptions:nil];
Colour management off:
NSDictionary *options = @{kCIContextWorkingColorSpace: [NSNull null], kCIContextOutputColorSpace: [NSNull null]};
CIContext * context = [CIContext contextWithOptions:options];
Now, this is no great surprise. However, I have noticed that all of the pre-built CIPhotoEffect CIFilters, e.g. CIPhotoEffectInstant, are essentially invariant under those same two colour management conditions.
Can anyone lend any insight as to what gives them this property? For example, do they themselves encapsulate particular CIFilters that may be applied with similar invariance?
My goal is to create some custom filters with the same property, without being limited to chaining only CIPhotoEffect filters.
--
Edit: Thanks to YuAo, I have assembled some working code examples which I post here to help others:
Programmatically generated CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
self.filter = [CIFilter filterWithName:@"CIColorCubeWithColorSpace"];
[self.filter setDefaults];
int cubeDimension = 2; // Must be power of 2, max 128
int cubeDataSize = 4 * cubeDimension * cubeDimension * cubeDimension; // number of float components (RGBA per cube entry)
float cubeDataBytes[8*4] = {
0.0, 0.0, 0.0, 1.0,
0.1, 0.0, 1.0, 1.0,
0.0, 0.5, 0.5, 1.0,
1.0, 1.0, 0.0, 1.0,
0.5, 0.0, 0.5, 1.0,
1.0, 0.0, 1.0, 1.0,
0.0, 1.0, 1.0, 1.0,
1.0, 1.0, 1.0, 1.0
};
NSData *cubeData = [NSData dataWithBytes:cubeDataBytes length:cubeDataSize * sizeof(float)];
[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];
CIImage *filteredImageCore = [self.filter outputImage];
CGColorSpaceRelease(colorSpace);
The docs state:
To provide a CGColorSpaceRef object as the input parameter, cast it to type id. With the default color space (null), which is equivalent to kCGColorSpaceGenericRGBLinear, this filter’s effect is identical to that of CIColorCube.
I wanted to go further and be able to read in cubeData from a file. So-called Hald colour look-up tables, or Hald CLUT images, may be used to define a mapping from input colour to output colour.
With help from this answer, I assembled the code to do this also, reposted here for convenience.
Hald CLUT image based CIColorCubeWithColorSpace CIFilter, invariant under different colour management schemes / working colour space:
Usage:
NSData *cubeData = [self colorCubeDataFromLUT:@"LUTImage.png"];
int cubeDimension = 64;
[self.filter setValue:@(cubeDimension) forKey:@"inputCubeDimension"];
[self.filter setValue:cubeData forKey:@"inputCubeData"];
CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); // or whatever your image's colour space
[self.filter setValue:(__bridge id)colorSpace forKey:@"inputColorSpace"];
[self.filter setValue:sourceImageCore forKey:@"inputImage"];
Helper Methods (which use Accelerate Framework):
- (nullable NSData *) colorCubeDataFromLUT:(nonnull NSString *)name
{
UIImage *image = [UIImage imageNamed:name inBundle:[NSBundle bundleForClass:self.class] compatibleWithTraitCollection:nil];
static const int kDimension = 64;
if (!image) return nil;
NSInteger width = CGImageGetWidth(image.CGImage);
NSInteger height = CGImageGetHeight(image.CGImage);
NSInteger rowNum = height / kDimension;
NSInteger columnNum = width / kDimension;
if ((width % kDimension != 0) || (height % kDimension != 0) || (rowNum * columnNum != kDimension)) {
NSLog(@"Invalid colorLUT %@", name);
return nil;
}
float *bitmap = [self createRGBABitmapFromImage:image.CGImage];
if (bitmap == NULL) return nil;
// Convert bitmap data written in row,column order to cube data written in x:r, y:g, z:b representation where z varies > y varies > x.
NSInteger size = kDimension * kDimension * kDimension * sizeof(float) * 4;
float *data = malloc(size);
int bitmapOffset = 0;
int z = 0;
for (int row = 0; row < rowNum; row++)
{
for (int y = 0; y < kDimension; y++)
{
int tmp = z;
for (int col = 0; col < columnNum; col++) {
NSInteger dataOffset = (z * kDimension * kDimension + y * kDimension) * 4;
const float divider = 255.0;
vDSP_vsdiv(&bitmap[bitmapOffset], 1, &divider, &data[dataOffset], 1, kDimension * 4); // Vector scalar divide; single precision. Divides bitmap values by 255.0 and puts them in data, processes each column (kDimension * 4 values) at once.
bitmapOffset += kDimension * 4; // shift bitmap offset to the next set of values, each values vector has (kDimension * 4) values.
z++;
}
z = tmp;
}
z += columnNum;
}
free(bitmap);
return [NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES];
}
- (float *)createRGBABitmapFromImage:(CGImageRef)image {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
unsigned char *bitmap;
NSInteger bitmapSize;
NSInteger bytesPerRow;
size_t width = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
bytesPerRow = (width * 4);
bitmapSize = (bytesPerRow * height);
bitmap = malloc( bitmapSize );
if (bitmap == NULL) return NULL;
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) {
free(bitmap);
return NULL;
}
context = CGBitmapContextCreate (bitmap,
width,
height,
8,
bytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease( colorSpace );
if (context == NULL) {
free (bitmap);
return NULL;
}
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
float *convertedBitmap = malloc(bitmapSize * sizeof(float));
vDSP_vfltu8(bitmap, 1, convertedBitmap, 1, bitmapSize); // Converts an array of unsigned 8-bit integers to single-precision floating-point values.
free(bitmap);
return convertedBitmap;
}
One may create a Hald CLUT image by obtaining an identity image (Google it) and then applying to it the same image-processing chain applied to the image used for visualising the "look" in any image-editing program. Just make sure you set the cubeDimension in the example code to the correct dimension for the LUT image. If the dimension d is the number of elements along one side of the 3D LUT cube, the Hald CLUT image's width and height are d*sqrt(d) pixels and the image has d^3 pixels in total; for example, d = 64 gives a 512 x 512 image containing 262,144 = 64^3 pixels.
The CIPhotoEffect filters internally use the CIColorCubeWithColorSpace filter.
All the color cube data is stored within CoreImage.framework.
You can find the simulator's CoreImage.framework here (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk/System/Library/Frameworks/CoreImage.framework/).
The color cube data files are named with the scube path extension, e.g. CIPhotoEffectChrome.scube.
CIColorCubeWithColorSpace internally converts the color cube color values to match the working color space of the current Core Image context by using these private methods:
-[CIImage _imageByMatchingWorkingSpaceToColorSpace:];
-[CIImage _imageByMatchingColorSpaceToWorkingSpace:];
Here's how CIPhotoEffect/CIColorCubeWithColorSpace should work with color management on vs. off.
With color management ON, here is what CI should do:
1. Color match from the input space to the cube space. If these two are equal, this is a no-op.
2. Apply the color cube.
3. Color match from the cube space to the output space. If these two are equal, this is a no-op.
With color management OFF, here is what CI should do:
1. Apply the color cube.
I need to convert a matrix representing a b/w image to UIImage.
For example:
A matrix like this (just a representation); this image would be the symbol '+':
1 0 1
0 0 0
1 0 1
This matrix represents an image in black and white, where black is 0 and white is 1. I need to convert this matrix to a UIImage. In this case the width would be 3 and the height would be 3.
I use this method to create an image for my Game of Life app. The advantage over drawing to a graphics context is that this is ridiculously fast.
This was all written a long time ago, so it's a bit messier than what I might do now, but the method would stay the same. For some reason I defined these outside the method...
{
unsigned int length_in_bytes;
unsigned char *cells;
unsigned char *temp_cells;
unsigned char *changes;
unsigned char *temp_changes;
GLubyte *buffer;
CGImageRef imageRef;
CGDataProviderRef provider;
int ar, ag, ab, dr, dg, db;
float arf, agf, abf, drf, dgf, dbf, blah;
}
You won't need all of these for the image. Note that buffer must be allocated (width * height * 4 bytes) and provider created from it (for example with CGDataProviderCreateWithData) before this method is called; that setup isn't shown here.
The method itself...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
//translate colours into rgb components
if ([deadColor isEqual:[UIColor whiteColor]]) {
dr = dg = db = 255;
} else if ([deadColor isEqual:[UIColor blackColor]]) {
dr = dg = db = 0;
} else {
[deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
dr = drf * 255;
dg = dgf * 255;
db = dbf * 255;
}
if ([aliveColor isEqual:[UIColor whiteColor]]) {
ar = ag = ab = 255;
} else if ([aliveColor isEqual:[UIColor blackColor]]) {
ar = ag = ab = 0;
} else {
[aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
ar = arf * 255;
ag = agf * 255;
ab = abf * 255;
}
// dr = 255, dg = 255, db = 255;
// ar = 0, ag = 0, ab = 0;
//create bytes of image from the cell map
int yRef, cellRef;
unsigned char *cell_ptr = cells;
for (int y=0; y<self.height; y++)
{
yRef = y * (self.width * 4);
int x = 0;
do
{
cellRef = yRef + 4 * x;
if (*cell_ptr & 0x01) {
//alive colour
buffer[cellRef] = ar;
buffer[cellRef + 1] = ag;
buffer[cellRef + 2] = ab;
buffer[cellRef + 3] = 255;
} else {
//dead colour
buffer[cellRef] = dr;
buffer[cellRef + 1] = dg;
buffer[cellRef + 2] = db;
buffer[cellRef + 3] = 255;
}
cell_ptr++;
} while (++x < self.width);
}
//create image
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// render the byte array into an image ref
imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, colorSpace, kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
// convert image ref to UIImage
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
//return image
return image;
}
You should be able to adapt this to create an image from your matrix.
In order to convert a matrix to UIImage :
CGSize size = CGSizeMake(columns, lines);
UIGraphicsBeginImageContextWithOptions(size, YES, 0);
for (int i = 0; i < lines; i++)
{
for (int j = 0; j < columns; j++)
{
// Choose color to draw
if ( matrixDraw[i*columns + j] == 1 ) {
[[UIColor whiteColor] setFill];
} else {
// Draw black pixel
[[UIColor blackColor] setFill];
}
// Draw just one pixel at row i, column j
UIRectFill(CGRectMake(j, i, 1, 1));
}
}
// Create UIImage with the current context that we have just created
UIImage *imageFinal = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Basically what we are doing is:
Create a context with the size of our image.
Loop over each pixel to read its value. Black is 0 and white is 1, so depending on the value we set the fill color.
The most important function:
UIRectFill(CGRectMake(j, i, 1, 1));
This function lets us fill a single pixel at column j, row i with a width and height of 1 (1 in both cases to fill one single pixel).
Finally we create a UIImage from the current context and call to finish the image context.
Hope it helps someone!
Code:
cv::Point2f src_vertices[4];
src_vertices[0] = c1[0];
src_vertices[1] = c1[1];
src_vertices[2] = c1[2];
src_vertices[3] = c1[3];
cv::Point2f dst_vertices[4];
dst_vertices[0] = c2[0];
dst_vertices[1] = c2[1];
dst_vertices[2] = c2[2];
dst_vertices[3] = c2[3];
cv::Mat warpMatrix = getPerspectiveTransform(src_vertices,dst_vertices);
cv::Mat output = cv::Mat::zeros(original.cols,original.rows , CV_32FC3);
cv::warpPerspective(original, output, warpMatrix,cv::Size(606,606));
UIImage *_adjustedImage = [MAOpenCV UIImageFromCVMat:output];
Below is the original image
After apply straightening, output is below image
Issue
The output image we get after straightening is cropped a bit at the corner, and that output comes from the OpenCV framework itself.
How can this issue be resolved? Please let me know if anybody has found a solution. Thank you.
Since this question is asked quite often, I've written a few lines of code which should save some time for many others.
Try this:
cv::Rect computeWarpedContourRegion(const std::vector<cv::Point> & points, const cv::Mat & homography)
{
std::vector<cv::Point2f> transformed_points(points.size());
for(unsigned int i=0; i<points.size(); ++i)
{
// warp the points
transformed_points[i].x = points[i].x * homography.at<double>(0,0) + points[i].y * homography.at<double>(0,1) + homography.at<double>(0,2) ;
transformed_points[i].y = points[i].x * homography.at<double>(1,0) + points[i].y * homography.at<double>(1,1) + homography.at<double>(1,2) ;
}
// dehomogenization necessary?
if(homography.rows == 3)
{
float homog_comp;
for(unsigned int i=0; i<transformed_points.size(); ++i)
{
homog_comp = points[i].x * homography.at<double>(2,0) + points[i].y * homography.at<double>(2,1) + homography.at<double>(2,2) ;
transformed_points[i].x /= homog_comp;
transformed_points[i].y /= homog_comp;
}
}
// now find the bounding box for these points:
cv::Rect boundingBox = cv::boundingRect(transformed_points);
return boundingBox;
}
cv::Rect computeWarpedImageRegion(const cv::Mat & image, const cv::Mat & homography)
{
std::vector<cv::Point> imageBorder;
imageBorder.push_back(cv::Point(0,0));
imageBorder.push_back(cv::Point(image.cols,0));
imageBorder.push_back(cv::Point(image.cols,image.rows));
imageBorder.push_back(cv::Point(0,image.rows));
return computeWarpedContourRegion(imageBorder, homography);
}
cv::Mat adjustHomography(const cv::Rect & transformedRegion, const cv::Mat & homography)
{
if(homography.rows == 2) throw("homography adjustment for affine matrix not implemented yet");
// unit matrix
cv::Mat correctionHomography = cv::Mat::eye(3,3,CV_64F);
// correction translation
correctionHomography.at<double>(0,2) = -transformedRegion.x;
correctionHomography.at<double>(1,2) = -transformedRegion.y;
return correctionHomography * homography;
}
int main()
{
// straightening algorithm without cropping:
cv::Mat original = cv::imread("straightening_src.png");
cv::Mat output;
cv::Point2f src_vertices[4];
cv::Point2f dst_vertices[4];
// I have to add them manually, you can just use your old code here.
// my result will look different, since I don't use your original point correspondences, but system is the same...
src_vertices[0] = cv::Point2f(108,190);
src_vertices[1] = cv::Point2f(273,178);
src_vertices[2] = cv::Point2f(389,322);
src_vertices[3] = cv::Point2f(183,355);
dst_vertices[0] = cv::Point2f(172,190);
dst_vertices[1] = cv::Point2f(374,193);
dst_vertices[2] = cv::Point2f(380,362);
dst_vertices[3] = cv::Point2f(171,366);
// compute homography
cv::Mat warpMatrix = getPerspectiveTransform(src_vertices,dst_vertices);
// now you have to find out, whether the warped image will fit to the output image or whether it will be cropped.
// if it will be cropped you will most probably have to
// 1. find out how big your output image must be and the coordinates it will be warped to.
// 2. modify your transformation (by a translation) so that the warped image will be placed properly inside the output image
// part 1: find the region that will hold the new image.
cv::Rect warpedImageRegion = computeWarpedImageRegion(original, warpMatrix);
// part 2: modify the transformation.
cv::Mat adjustedHomography = adjustHomography(warpedImageRegion, warpMatrix);
cv::Size transformedImageSize = cv::Size(warpedImageRegion.width,warpedImageRegion.height);
cv::warpPerspective(original, output, adjustedHomography, transformedImageSize);
cv::imshow("output", output);
cv::imwrite("straightening_result.png", output);
cv::waitKey(-1);
}
For this input (1) and the given transformation correspondences you will get this result (2):
(1)
(2)
After the image is warped, it should be possible to remove the extra black part of the image, for example as sketched below.
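One way to do that (just a sketch, assuming the warped image is in output as in the code above, and that pure black only occurs in the unfilled border) is to build a mask of the non-black pixels and crop to its bounding rectangle; these lines could go right before the imshow call:
cv::Mat gray, mask;
cv::cvtColor(output, gray, cv::COLOR_BGR2GRAY);   // assuming a 3-channel BGR image; use COLOR_BGRA2GRAY for 4 channels
cv::threshold(gray, mask, 0, 255, cv::THRESH_BINARY); // anything that is not pure black -> 255
std::vector<cv::Point> nonBlack;
cv::findNonZero(mask, nonBlack);                  // locations of all non-black pixels
cv::Rect content = cv::boundingRect(nonBlack);    // tight bounding box around the content
cv::Mat cropped = output(content).clone();        // crop away the black border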
I'm currently trying to get this bokeh shader to work with GPUImage: http://blenderartists.org/forum/showthread.php?237488-GLSL-depth-of-field-with-bokeh-v2-4-(update)
This is what I've got at the moment:
precision mediump float;
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform float inputImageTextureWidth;
uniform float inputImageTextureHeight;
#define PI 3.14159265
float width = inputImageTextureWidth; //texture width
float height = inputImageTextureHeight; //texture height
vec2 texel = vec2(1.0/width,1.0/height);
//uniform variables from external script
uniform float focalDepth; //focal distance value in meters, but you may use autofocus option below
uniform float focalLength; //focal length in mm
uniform float fstop; //f-stop value
bool showFocus = false; //show debug focus point and focal range (red = focal point, green = focal range)
float znear = 0.1; //camera clipping start
float zfar = 5.0; //camera clipping end
//------------------------------------------
//user variables
int samples = 3; //samples on the first ring
int rings = 3; //ring count
bool manualdof = false; //manual dof calculation
float ndofstart = 1.0; //near dof blur start
float ndofdist = 2.0; //near dof blur falloff distance
float fdofstart = 1.0; //far dof blur start
float fdofdist = 3.0; //far dof blur falloff distance
float CoC = 0.03;//circle of confusion size in mm (35mm film = 0.03mm)
bool vignetting = false; //use optical lens vignetting?
float vignout = 1.3; //vignetting outer border
float vignin = 0.0; //vignetting inner border
float vignfade = 22.0; //f-stops till vignete fades
bool autofocus = false; //use autofocus in shader? disable if you use external focalDepth value
vec2 focus = vec2(0.5, 0.5); // autofocus point on screen (0.0,0.0 - left lower corner, 1.0,1.0 - upper right)
float maxblur = 1.0; //clamp value of max blur (0.0 = no blur,1.0 default)
float threshold = 0.5; //highlight threshold;
float gain = 2.0; //highlight gain;
float bias = 0.5; //bokeh edge bias
float fringe = 0.7; //bokeh chromatic aberration/fringing
bool noise = false; //use noise instead of pattern for sample dithering
float namount = 0.0001; //dither amount
bool depthblur = false; //blur the depth buffer?
float dbsize = 1.25; //depthblursize
/*
next part is experimental
not looking good with small sample and ring count
looks okay starting from samples = 4, rings = 4
*/
bool pentagon = false; //use pentagon as bokeh shape?
float feather = 0.4; //pentagon shape feather
//------------------------------------------
float penta(vec2 coords) //pentagonal shape
{
float scale = float(rings) - 1.3;
vec4 HS0 = vec4( 1.0, 0.0, 0.0, 1.0);
vec4 HS1 = vec4( 0.309016994, 0.951056516, 0.0, 1.0);
vec4 HS2 = vec4(-0.809016994, 0.587785252, 0.0, 1.0);
vec4 HS3 = vec4(-0.809016994,-0.587785252, 0.0, 1.0);
vec4 HS4 = vec4( 0.309016994,-0.951056516, 0.0, 1.0);
vec4 HS5 = vec4( 0.0 ,0.0 , 1.0, 1.0);
vec4 one = vec4( 1.0 );
vec4 P = vec4((coords),vec2(scale, scale));
vec4 dist = vec4(0.0);
float inorout = -4.0;
dist.x = dot( P, HS0 );
dist.y = dot( P, HS1 );
dist.z = dot( P, HS2 );
dist.w = dot( P, HS3 );
dist = smoothstep( -feather, feather, dist );
inorout += dot( dist, one );
dist.x = dot( P, HS4 );
dist.y = HS5.w - abs( P.z );
dist = smoothstep( -feather, feather, dist );
inorout += dist.x;
return clamp( inorout, 0.0, 1.0 );
}
float bdepth(vec2 coords) //blurring depth
{
float d = 0.0;
float kernel[9];
vec2 offset[9];
vec2 wh = vec2(texel.x, texel.y) * dbsize;
offset[0] = vec2(-wh.x,-wh.y);
offset[1] = vec2( 0.0, -wh.y);
offset[2] = vec2( wh.x, -wh.y);
offset[3] = vec2(-wh.x, 0.0);
offset[4] = vec2( 0.0, 0.0);
offset[5] = vec2( wh.x, 0.0);
offset[6] = vec2(-wh.x, wh.y);
offset[7] = vec2( 0.0, wh.y);
offset[8] = vec2( wh.x, wh.y);
kernel[0] = 1.0/16.0; kernel[1] = 2.0/16.0; kernel[2] = 1.0/16.0;
kernel[3] = 2.0/16.0; kernel[4] = 4.0/16.0; kernel[5] = 2.0/16.0;
kernel[6] = 1.0/16.0; kernel[7] = 2.0/16.0; kernel[8] = 1.0/16.0;
for( int i=0; i<9; i++ )
{
float tmp = texture2D(inputImageTexture2, coords + offset[i]).r;
d += tmp * kernel[i];
}
return d;
}
vec3 color(vec2 coords,float blur) //processing the sample
{
vec3 col = vec3(0.0);
col.r = texture2D(inputImageTexture, coords + vec2(0.0,1.0)*texel*fringe*blur).r;
col.g = texture2D(inputImageTexture, coords + vec2(-0.866,-0.5)*texel*fringe*blur).g;
col.b = texture2D(inputImageTexture, coords + vec2(0.866,-0.5)*texel*fringe*blur).b;
vec3 lumcoeff = vec3(0.299,0.587,0.114);
float lum = dot(col.rgb, lumcoeff);
float thresh = max((lum-threshold)*gain, 0.0);
return col+mix(vec3(0.0),col,thresh*blur);
}
vec2 rand(vec2 coord) //generating noise/pattern texture for dithering
{
float noiseX = ((fract(1.0-coord.s*(width/2.0))*0.25)+(fract(coord.t*(height/2.0))*0.75))*2.0-1.0;
float noiseY = ((fract(1.0-coord.s*(width/2.0))*0.75)+(fract(coord.t*(height/2.0))*0.25))*2.0-1.0;
if (noise)
{
noiseX = clamp(fract(sin(dot(coord ,vec2(12.9898,78.233))) * 43758.5453),0.0,1.0)*2.0-1.0;
noiseY = clamp(fract(sin(dot(coord ,vec2(12.9898,78.233)*2.0)) * 43758.5453),0.0,1.0)*2.0-1.0;
}
return vec2(noiseX,noiseY);
}
vec3 debugFocus(vec3 col, float blur, float depth)
{
float edge = 0.002*depth; //distance based edge smoothing
float m = clamp(smoothstep(0.0,edge,blur),0.0,1.0);
float e = clamp(smoothstep(1.0-edge,1.0,blur),0.0,1.0);
col = mix(col,vec3(1.0,1.0,0.0),(1.0-m)*0.6);
col = mix(col,vec3(0.0,1.0,1.0),((1.0-e)-(1.0-m))*0.2);
return col;
}
float linearize(float depth)
{
return -zfar * znear / (depth * (zfar - znear) - zfar);
}
float vignette()
{
float dist = distance(textureCoordinate.xy, vec2(0.5,0.5));
dist = smoothstep(vignout+(fstop/vignfade), vignin+(fstop/vignfade), dist);
return clamp(dist,0.0,1.0);
}
void main()
{
//scene depth calculation
float depth = linearize(texture2D(inputImageTexture2, textureCoordinate2.xy).x);
if (depthblur)
{
depth = linearize(bdepth(textureCoordinate2.xy));
}
//focal plane calculation
float fDepth = focalDepth;
if (autofocus)
{
fDepth = linearize(texture2D(inputImageTexture2, focus).x);
}
//dof blur factor calculation
float blur = 0.0;
if (manualdof)
{
float a = depth-fDepth; //focal plane
float b = (a-fdofstart)/fdofdist; //far DoF
float c = (-a-ndofstart)/ndofdist; //near Dof
blur = (a>0.0)?b:c;
}
else
{
float f = focalLength; //focal length in mm
float d = fDepth*1000.0; //focal plane in mm
float o = depth*1000.0; //depth in mm
float a = (o*f)/(o-f);
float b = (d*f)/(d-f);
float c = (d-f)/(d*fstop*CoC);
blur = abs(a-b)*c;
}
blur = clamp(blur,0.0,1.0);
// calculation of pattern for ditering
vec2 noise = rand(textureCoordinate.xy)*namount*blur;
// getting blur x and y step factor
float w = (1.0/width)*blur*maxblur+noise.x;
float h = (1.0/height)*blur*maxblur+noise.y;
// calculation of final color
vec3 col = vec3(0.0);
if(blur < 0.05) //some optimization thingy
{
col = texture2D(inputImageTexture, textureCoordinate.xy).rgb;
}
else
{
col = texture2D(inputImageTexture, textureCoordinate.xy).rgb;
float s = 1.0;
int ringsamples;
for (int i = 1; i <= rings; i += 1)
{
ringsamples = i * samples;
for (int j = 0 ; j < ringsamples ; j += 1)
{
float step = PI*2.0 / float(ringsamples);
float pw = (cos(float(j)*step)*float(i));
float ph = (sin(float(j)*step)*float(i));
float p = 1.0;
if (pentagon)
{
p = penta(vec2(pw,ph));
}
col += color(textureCoordinate.xy + vec2(pw*w,ph*h),blur)*mix(1.0,(float(i))/(float(rings)),bias)*p;
s += 1.0*mix(1.0,(float(i))/(float(rings)),bias)*p;
}
}
col /= s; //divide by sample count
}
if (showFocus)
{
col = debugFocus(col, blur, depth);
}
if (vignetting)
{
col *= vignette();
}
gl_FragColor.rgb = col;
gl_FragColor.a = 1.0;
}
This is my bokeh filter, a subclass of GPUImageTwoInputFilter:
@implementation GPUImageBokehFilter
- (id)init;
{
NSString *fragmentShaderPathname = [[NSBundle mainBundle] pathForResource:@"BokehShader" ofType:@"fsh"];
NSString *fragmentShaderString = [NSString stringWithContentsOfFile:fragmentShaderPathname encoding:NSUTF8StringEncoding error:nil];
if (!(self = [super initWithFragmentShaderFromString:fragmentShaderString]))
{
return nil;
}
focalDepthUniform = [filterProgram uniformIndex:@"focalDepth"];
focalLengthUniform = [filterProgram uniformIndex:@"focalLength"];
fStopUniform = [filterProgram uniformIndex:@"fstop"];
[self setFocalDepth:1.0];
[self setFocalLength:35.0];
[self setFStop:2.2];
return self;
}
#pragma mark -
#pragma mark Accessors
- (void)setFocalDepth:(float)focalDepth {
_focalDepth = focalDepth;
[self setFloat:_focalDepth forUniform:focalDepthUniform program:filterProgram];
}
- (void)setFocalLength:(float)focalLength {
_focalLength = focalLength;
[self setFloat:_focalLength forUniform:focalLengthUniform program:filterProgram];
}
- (void)setFStop:(CGFloat)fStop {
_fStop = fStop;
[self setFloat:_fStop forUniform:fStopUniform program:filterProgram];
}
@end
And finally, this is how I use said filter:
@implementation ViewController {
GPUImageBokehFilter *bokehFilter;
GPUImagePicture *bokehMap;
UIImage *inputImage;
}
- (void)viewDidLoad
{
[super viewDidLoad];
inputImage = [UIImage imageNamed:@"stones"];
bokehMap = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"bokehmask"]];
_backgroundImage.image = inputImage;
bokehFilter = [[GPUImageBokehFilter alloc] init];
[self processImage];
}
- (IBAction)dataInputUpdated:(id)sender {
[self processImage];
}
- (void)processImage {
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
GPUImagePicture *gpuPicture = [[GPUImagePicture alloc] initWithImage:inputImage];
[gpuPicture addTarget:bokehFilter];
[gpuPicture processImage];
[bokehMap addTarget:bokehFilter];
[bokehMap processImage];
[bokehFilter useNextFrameForImageCapture];
[bokehFilter setFloat:inputImage.size.width forUniformName:@"inputImageTextureWidth"];
[bokehFilter setFloat:inputImage.size.height forUniformName:@"inputImageTextureHeight"];
UIImage *blurredImage = [bokehFilter imageFromCurrentFramebuffer];
dispatch_async(dispatch_get_main_queue(), ^{
[self displayNewImage:blurredImage];
});
});
}
- (void)displayNewImage:(UIImage*)newImage {
[UIView transitionWithView:_backgroundImage
duration:.6f
options:UIViewAnimationOptionTransitionCrossDissolve
animations:^{
_backgroundImage.image = newImage;
} completion:nil];
}
...
The first image is the one I'm trying to blur, the second one is a random gradient to test the shader's depth map thingy:
When I start the app on my iPhone, I get this:
After moving the slider (which triggers the dataInputUpdated: method), I get this:
While that admittedly looks much better than the first image, I still have some problems with this:
There's a diagonal noisy line (inside the red lines I put on the picture) that appears to be unblurred.
The top left of the image is blurry, even though it shouldn't be.
Why do I get this weird behavior? Shouldn't the shader output be the same every time?
Also, how do I get it to respect the depth map? My GLSL shader knowledge is very limited, so please be patient.
The diagonal artifact appears to be caused by your test gradient. You can see that it occurs at about the same place as where your gradient goes to completely white. Try spreading out the gradient so it only reaches 1.0 or 0.0 at the very corners of the image.
It's a pretty big question, and I can't make a full answer because I would really need to test the thing out.
But a few points: The final image that you put up is hard to work with. Because the image has been upscaled so much, I can't tell if it's actually blurred or if it just appears blurry because of the resolution. Regardless, the amount of blur that you're getting (when compared to the original link that you provided) suggests that something isn't working with the shader.
Another thing that concerns me is the //some optimization thingy comment that you've got in there. This is the sort of thing that's going to be responsible for an ugly line in your final output. Saying that you won't have any blur when blur < 0.05 isn't necessarily something that you can do! I would expect a nasty artifact as the shader transitions from the blurred branch into the 'optimized' part.
Hope that sheds some light, and good luck!
Have you tried enabling showFocus? This should show the focal point in red and the focal range in green which should help with debugging. You could also try enabling autofocus to ensure that the centre of the image is in focus, because at the moment it's not obvious which distance should be in focus, due to the linearize function changing coordinate systems. After that try tweaking fstop to get the desired amount of blur. You will probably also find that you will need greater than samples = 3 and rings = 3 to produce a smooth bokeh effect.
Your answers helped me get on the right track, and after a few hours of fiddling around with my code and the shader, I managed to get all bugs fixed. Here's what caused them and how I fixed them:
The ugly diagonal line was caused by the linearize() method, so I removed it and made the shader use the RGB values (or, to be more precise: only the R value) from the depth map without processing them first.
The blue-ish image I got from the shader was caused by my own incompetence. These two lines had to be put before the calls to processImage:
[bokehFilter setFloat:inputImage.size.width forUniformName:@"inputImageTextureWidth"];
[bokehFilter setFloat:inputImage.size.height forUniformName:@"inputImageTextureHeight"];
In hindsight, it's obvious why I only got results the second time I used the shader. After fixing those bugs, I went on to optimize it a bit to keep the execution time as low as possible, and now I can tell it to render 8 samples/4 rings and it does so in less than a second. Here's what that looks like:
Thanks for the answers, everyone, I probably wouldn't have gotten those bugs fixed without you.