I'm using CGLayers to implement a "painting" technique similar to a Photoshop airbrush, and have run into something strange. When I use transparency and overpaint an area, the color never reaches full intensity if the alpha value is below 0.5. My application uses a circular "airbrush" pattern with opacity fall-off at the edges, but I have reproduced the problem using just a semi-transparent white square. When the opacity level is less than 0.5, the overpainted area never reaches the pure white of the source layer. I probably wouldn't have noticed, but I'm using the result of the painting as a mask, and not being able to get pure white causes problems. Any ideas what's going on here? Target: iOS SDK 5.1.
Below is the resultant color after drawing the semi-transparent square many times over a black background:
opacity    color
-------    -----
1.0        255
0.9        255
0.8        255
0.7        255
0.6        255
0.5        255
0.4        254
0.3        253
0.2        252
0.1        247
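My current theory is that this is an 8-bit quantization fixed point rather than a blending bug: each pass computes roughly dst' = alpha * 255 + (1 - alpha) * dst, and once dst gets close enough to white, the remaining increment rounds away to nothing. A toy simulation of that idea (an assumption about what Core Graphics does, not its actual code path; the exact stopping value depends on the rounding mode and premultiplied storage):

// Toy simulation of repeated 8-bit src-over compositing with white at
// alpha 0.3 (assumed behavior, not Core Graphics' actual code).
unsigned char dst = 0;
for (int pass = 0; pass < 1000; pass++) {
    double blended = 0.3 * 255.0 + 0.7 * dst; // src-over, src = white
    dst = (unsigned char)(blended + 0.5);     // assume round-to-nearest
}
// dst stalls at 254 here: the per-pass increment still needed, 0.3 * (255 - 254),
// quantizes to zero in 8 bits, so the result never reaches pure white.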
Simplified code that shows the issue:
- (void)drawRect:(CGRect)rect
{
    CGContextRef viewContext = UIGraphicsGetCurrentContext();

    // Create grey gradient to compare final blend color
    CGRect lineRect = CGRectMake(20, 20, 1, 400);
    float greyLevel = 1.0;
    for (int i = 0; i < 728; i++)
    {
        CGContextSetRGBFillColor(viewContext, greyLevel, greyLevel, greyLevel, 1);
        CGContextFillRect(viewContext, lineRect);
        lineRect.origin.x += 1;
        greyLevel -= 0.0001;
    }

    // Create semi-transparent white square
    CGSize whiteSquareSize = CGSizeMake(40, 40);
    CGLayerRef whiteSquareLayer = CGLayerCreateWithContext(viewContext, whiteSquareSize, NULL);
    CGContextRef whiteSquareContext = CGLayerGetContext(whiteSquareLayer);
    CGContextSetAlpha(whiteSquareContext, 1.0f); // just to make sure
    CGContextSetRGBFillColor(whiteSquareContext, 1, 1, 1, 0.3); // ??? color never reaches pure white if alpha < 0.5
    CGRect whiteSquareRect = CGRectMake(0, 0, whiteSquareSize.width, whiteSquareSize.height);
    CGContextFillRect(whiteSquareContext, whiteSquareRect);

    // "Paint" with layer a bazillion times
    CGContextSetBlendMode(viewContext, kCGBlendModeNormal); // just to make sure
    CGContextSetAlpha(viewContext, 1.0); // just to make sure
    for (int strokeNum = 0; strokeNum < 100; strokeNum++)
    {
        CGPoint drawPoint = CGPointMake(0, 400);
        for (int x = 0; x < 730; x++)
        {
            CGContextDrawLayerAtPoint(viewContext, drawPoint, whiteSquareLayer);
            drawPoint.x++;
        }
    }

    CGLayerRelease(whiteSquareLayer); // balance CGLayerCreateWithContext
}
Currently, points are drawn with the following code:
// SETUP FOR VERTICES
GLfloat points[graph->vertexCount * 6];
for (int i = 0; i < graph->vertexCount; i++)
{
    points[i*6]   = (graph->vertices[i].x / (backingWidth/2)) - 1;
    points[i*6+1] = -(graph->vertices[i].y / (backingHeight/2)) + 1;
    points[i*6+2] = 1.0;
    points[i*6+3] = 0.0;
    points[i*6+4] = 0.0;
    points[i*6+5] = 1.0;
}

glEnable(GL_POINT_SMOOTH);
glPointSize(DOT_SIZE*scale);
glVertexPointer(2, GL_FLOAT, 24, points);
glColorPointer(4, GL_FLOAT, 24, &points[2]);
glDrawArrays(GL_POINTS, 0, graph->vertexCount);
The points are rendered in red, and I want to add a white outline around each point. How can I draw the outlines?
Follow-up question for better display
Following @BDL's suggestion, I added bigger points under the red points as outlines, and they look good:
outlinePoints[i*6]   = (graph->vertices[i].x / (backingWidth/2)) - 1;
outlinePoints[i*6+1] = -(graph->vertices[i].y / (backingHeight/2)) + 1;
outlinePoints[i*6+2] = 0.9;
outlinePoints[i*6+3] = 0.9;
outlinePoints[i*6+4] = 0.9;
outlinePoints[i*6+5] = 1.0;
But when one point overlaps another, its outline is covered by the red points, since all the outline points are rendered before all the red points.
I think the right solution is to render each outline point and red point one by one. How can I do that?
If you want to render outlines for each point separately, then you can simply render a slightly larger white point first and then render the red point over it. With depth-testing enabled, you might have to adjust the polygon offset when rendering the red point to prevent them from getting hidden behind the white ones.
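A minimal sketch of that per-point ordering, staying with the fixed-function ES 1.1 calls from the question (OUTLINE_WIDTH is a hypothetical constant for the outline thickness):

// Hedged sketch: draw each point's white outline, then its red fill,
// so a later point's outline can cover an earlier point's fill.
// Assumes the same interleaved (x, y, r, g, b, a) layout as above.
for (int i = 0; i < graph->vertexCount; i++)
{
    // Slightly larger white point first...
    glPointSize((DOT_SIZE + OUTLINE_WIDTH) * scale);
    glVertexPointer(2, GL_FLOAT, 24, &outlinePoints[i*6]);
    glColorPointer(4, GL_FLOAT, 24, &outlinePoints[i*6+2]);
    glDrawArrays(GL_POINTS, 0, 1);

    // ...then the red point on top of it.
    glPointSize(DOT_SIZE * scale);
    glVertexPointer(2, GL_FLOAT, 24, &points[i*6]);
    glColorPointer(4, GL_FLOAT, 24, &points[i*6+2]);
    glDrawArrays(GL_POINTS, 0, 1);
}

Note that one draw call per point gets expensive for large vertex counts; the two-pass approach described above (all outlines, then all red points, with depth testing) scales better.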
I am trying to make a VHS effect for an iOS app, just like in this video:
https://www.youtube.com/watch?v=8ipML-T5yDk
I want to achieve this effect with as few filters as possible, to keep the CPU load low.
Basically what I need is to crank up the color levels to create a "chromatic aberration", change Sharpen parameters, and add some gaussian blur + add some noise.
I am using GPUImage. The sharpen and Gaussian blur are easy to apply.
I am having two problems:
1) For the "chromatic aberration", the way they do it usually is to duplicate three times the video, and put Red to 0 on one video, blue to 0 on another one, and green to 0 on the last one, and blend them together (just like in the tutorial). But doing this in an iPhone would be too CPU consuming.
Any idea how to achieve the same effect withtout having to duplicate the video and blend it =
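From what I've read, a single-pass alternative is a custom fragment shader that samples each color channel at a slightly shifted coordinate; GPUImage supports custom shaders through GPUImageFilter's initWithFragmentShaderFromString:. A rough sketch of what I mean, with made-up offset values:

// Hedged sketch: one-pass channel shift in a custom GPUImage filter.
// The 0.003 offsets are arbitrary values to tune by eye.
NSString *const kChromaticAberrationShader =
    @"varying highp vec2 textureCoordinate;"
    @"uniform sampler2D inputImageTexture;"
    @"void main() {"
    @"    highp float r = texture2D(inputImageTexture, textureCoordinate + vec2(0.003, 0.0)).r;"
    @"    highp float g = texture2D(inputImageTexture, textureCoordinate).g;"
    @"    highp float b = texture2D(inputImageTexture, textureCoordinate - vec2(0.003, 0.0)).b;"
    @"    gl_FragColor = vec4(r, g, b, 1.0);"
    @"}";

GPUImageFilter *aberration =
    [[GPUImageFilter alloc] initWithFragmentShaderFromString:kChromaticAberrationShader];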
2) I also want to add some noise, but I don't know which GPUImage effect to use. Any idea on this one too?
Thanks a lot,
Sébastian
(I'm not an iOS developer but I hope this can help someone.)
I wrote a VHS filter on Windows, this is what I did:
1) Crop the video frame to a 4:3 aspect ratio and lower the resolution to 360x270.
2) Lower the color saturation, and apply a color matrix to reduce the green channel to 93% (so the video will look purple).
3) Apply a convolve matrix to sharpen the video frame directionally. This is the kernel I used:
 0    -0.5   0    0
-0.5   2.9   0   -0.5
 0    -0.5   0    0
4) Blend a real blank VHS footage into your video for the noise (search for "VHS overlay" on YouTube).
Video: Before After
Screenshot: Before After
The CPU and GPU consumption is OK. I apply this filter to a real-time camera preview on my old Windows phone (with a Snapdragon 808), and it works fine.
Code (C#, using the Win2D library for GPU acceleration, implementing the Windows.Media.Effects.IBasicVideoEffect interface):
public void ProcessFrame(ProcessVideoFrameContext context) // This method is called each frame
{
    int outputWidth = 360; // Output resolution
    int outputHeight = 270;

    IDirect3DSurface inputSurface = context.InputFrame.Direct3DSurface;
    IDirect3DSurface outputSurface = context.OutputFrame.Direct3DSurface;

    using (CanvasBitmap inputFrame = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, inputSurface)) // The video frame to be processed
    using (CanvasRenderTarget outputFrame = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, outputSurface)) // The video frame after processing
    using (CanvasDrawingSession outputFrameDrawingSession = outputFrame.CreateDrawingSession())
    using (CanvasRenderTarget croppedFrame = new CanvasRenderTarget(canvasDevice, outputWidth, outputHeight, outputFrame.Dpi))
    using (CanvasDrawingSession croppedFrameDrawingSession = croppedFrame.CreateDrawingSession())
    using (CanvasBitmap overlay = Task.Run(async () => { return await CanvasBitmap.LoadAsync(canvasDevice, overlayFrames[new Random().Next(0, overlayFrames.Count)]); }).Result) // "overlayFrames" is a list of video frames from https://youtu.be/SHhRFU2Jyfs; randomly pick one frame to blend (Next's upper bound is exclusive)
    {
        double inputWidth = inputFrame.Size.Width;
        double inputHeight = inputFrame.Size.Height;
        Rect rectangle;

        // Crop the inputFrame to 4:3 and scale it down into the 360x270 "croppedFrame"
        if (3 * inputWidth > 4 * inputHeight)
        {
            double x = (inputWidth - inputHeight / 3 * 4) / 2;
            rectangle = new Rect(x, 0, inputWidth - 2 * x, inputHeight);
        }
        else
        {
            double y = (inputHeight - inputWidth / 4 * 3) / 2;
            rectangle = new Rect(0, y, inputWidth, inputHeight - 2 * y);
        }
        croppedFrameDrawingSession.DrawImage(inputFrame, new Rect(0, 0, outputWidth, outputHeight), rectangle, 1, CanvasImageInterpolation.HighQualityCubic);

        // Apply the effects from steps 2, 3 and 4 to "croppedFrame"
        BlendEffect vhsEffect = new BlendEffect
        {
            Background = new ConvolveMatrixEffect
            {
                Source = new ColorMatrixEffect
                {
                    Source = new SaturationEffect
                    {
                        Source = croppedFrame,
                        Saturation = 0.4f
                    },
                    ColorMatrix = new Matrix5x4
                    {
                        M11 = 1f,
                        M22 = 0.93f,
                        M33 = 1f,
                        M44 = 1f
                    }
                },
                KernelHeight = 3,
                KernelWidth = 4,
                KernelMatrix = new float[]
                {
                     0,    -0.5f, 0,  0,
                    -0.5f,  2.9f, 0, -0.5f,
                     0,    -0.5f, 0,  0,
                }
            },
            Foreground = overlay,
            Mode = BlendEffectMode.Screen
        };

        // And draw the result to "outputFrame"
        outputFrameDrawingSession.DrawImage(vhsEffect, rectangle, new Rect(0, 0, outputWidth, outputHeight));
    }
}
I'm generating random UIColors. I want to avoid light colors like yellow, light green, etc. Here's my code:
+ (UIColor *)generateRandom {
    CGFloat hue        = ( arc4random() % 256 / 256.0 );       // 0.0 to 1.0
    CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5; // 0.5 to 1.0, away from white
    CGFloat brightness = ( arc4random() % 128 / 256.0 ) + 0.5; // 0.5 to 1.0, away from black
    return [UIColor colorWithHue:hue saturation:saturation brightness:brightness alpha:1];
}
I'm using this for the UITableViewCell background color. The cell's textLabel color is white, so if the background is light green or some other light color, the text isn't clearly visible.
How can I fix this? Can I avoid generating light colors, or detect when a generated color is light?
If I can detect that a color is light, I can change the text color to something else.
It sounds like you want to avoid colors close to white. Since you're already in HSV space, this should be a simple matter of setting a distance from white to avoid. A simple implementation would limit the saturation and brightness to be no closer than some threshold. Something like:
if (saturation < kSatThreshold)
{
    saturation = kSatThreshold;
}
if (brightness > kBrightnessThreshold)
{
    brightness = kBrightnessThreshold;
}
Something more sophisticated would be to check the distance from white and if it's too close, push it back out:
CGFloat deltaH = hue - kWhiteHue;
CGFloat deltaS = saturation - kWhiteSaturation;
CGFloat deltaB = brightness - kWhiteBrightness;
CGFloat distance = sqrt(deltaH * deltaH + deltaS * deltaS + deltaB * deltaB);
if (distance < kDistanceThreshold)
{
    // normalize distance vector
    deltaH /= distance;
    deltaS /= distance;
    deltaB /= distance;
    hue = kWhiteHue + deltaH * kDistanceThreshold;
    saturation = kWhiteSaturation + deltaS * kDistanceThreshold;
    brightness = kWhiteBrightness + deltaB * kDistanceThreshold;
}
Light colors are those with high brightness (or lightness, luminosity...).
Generate colors with random hue and saturation, but limit the brightness to low values, like 0 to 0.5, or keep the brightness constant. If you are showing the colors side by side, the aesthetic impact is usually better if you vary only 2 of the 3 components in HSB (HSV, HSL).
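A minimal sketch of that idea, reusing the question's arc4random pattern (the 0.5 cap is an arbitrary cut-off to tune):

// Hedged variant of generateRandom that only produces darker colors,
// so white text stays readable on top of it.
+ (UIColor *)generateRandomDark {
    CGFloat hue        = ( arc4random() % 256 / 256.0 );       // 0.0 to 1.0
    CGFloat saturation = ( arc4random() % 128 / 256.0 ) + 0.5; // 0.5 to 1.0
    CGFloat brightness = ( arc4random() % 128 / 256.0 );       // 0.0 to ~0.5, never light
    return [UIColor colorWithHue:hue saturation:saturation brightness:brightness alpha:1];
}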
I'm developing an app that has to draw 320 vertical gradient lines on a portrait iPhone screen, where each gradient line is either 1px or 2px wide (non-retina vs. retina). Each gradient line has 1000 positions, and each position can have a unique color. These 1000 colors (floats) sit in a C-style 2D array (an array of arrays: 320 arrays of 1000 colors).
Currently, the gradient lines are drawn in a for loop inside the drawRect method of a custom UIView. The problem I'm having is that it takes longer than ONE second to cycle through the loop and draw all 320 lines. Within that ONE second, another thread is updating the color arrays, but since drawing takes longer than ONE second, I don't see every update. I see every second or third update.
I'm using the exact same procedure in my Android code, which has no problem drawing 640 gradient lines (double the amount) multiple times per second using a SurfaceView. My Android app never misses an update.
If you look at the Android code, it actually draws the gradient lines to TWO separate canvases. The array size is dynamic and can be up to half the landscape resolution width of an Android phone (e.g. 1280 width / 2 = 640 lines). Since the Android app is fast enough, I allow landscape mode. Even with double the data and two canvases, the Android code runs multiple times per second. The iPhone code, with half the number of lines and only a single context, cannot finish a draw in under a second.
Is there a faster way to draw 320 vertical gradient lines (each with 1000 positions) on an iPhone?
Is there a hardware accelerated SurfaceView equivalent for iOS that can draw many gradients really fast?
// IPHONE - drawRect method
int totalNumberOfColors = 1000;
int i;
CGFloat *locations = malloc(totalNumberOfColors * sizeof locations[0]);
for (i = 0; i < totalNumberOfColors; i++) {
    float division = (float)1 / (float)(totalNumberOfColors - 1);
    locations[i] = i * division;
}

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
for (int k = 0; k < 320; k++) {
    CGFloat *colorComponents = arrayOfFloatArrays[k];
    CGGradientRef gradient = CGGradientCreateWithColorComponents(
        colorSpace,
        colorComponents,
        locations,
        (size_t)(totalNumberOfColors));

    CGRect newRect;
    if (currentPositionOffset >= 320) {
        newRect = CGRectMake(0, 0, 1, CGRectGetMaxY(rect));
    } else {
        newRect = CGRectMake(319 - (k * 1), 0, 1, CGRectGetMaxY(rect));
    }

    CGContextSaveGState(ctx);
    // NO CLIPPING STATE
    CGContextAddRect(ctx, newRect);
    CGContextClip(ctx);
    // CLIPPING STATE
    CGContextDrawLinearGradient(
        ctx,
        gradient,
        CGPointMake(0, 0),
        CGPointMake(0, CGRectGetMaxY(rect)),
        (CGGradientDrawingOptions)NULL);
    CGContextRestoreGState(ctx);
    // RESTORE TO NO CLIPPING STATE
    CGGradientRelease(gradient);
}
// ANDROID - public void run() method on SurfaceView
for (i = 0; i < sonarData.arrayOfColorIntColumns.size() - currentPositionOffset; i++) {
    Paint paint = new Paint();
    int[] currentColors = sonarData.arrayOfColorIntColumns.get(currentPositionOffset + i);
    //Log.d("currentColors.toString()", currentColors.toString());
    LinearGradient linearGradient;
    if (currentScaleFactor > 1.0) {
        int numberOfColorsToUse = (int)(1000.0 / currentScaleFactor);
        int tmpTopOffset = currentTopOffset;
        if (currentTopOffset + numberOfColorsToUse > 1000) {
            // shift tmpTopOffset
            tmpTopOffset = 1000 - numberOfColorsToUse - 1;
        }
        int[] subsetOfCurrentColors = new int[numberOfColorsToUse];
        System.arraycopy(currentColors, tmpTopOffset, subsetOfCurrentColors, 0, numberOfColorsToUse);
        linearGradient = new LinearGradient(0, tmpTopOffset, 0, getHeight(), subsetOfCurrentColors, null, Shader.TileMode.MIRROR);
        //Log.d("getHeight()", "" + getHeight());
        //Log.d("subsetOfCurrentColors.length", "" + subsetOfCurrentColors.length);
    } else {
        // use all colors
        linearGradient = new LinearGradient(0, 0, 0, getHeight(), currentColors, null, Shader.TileMode.MIRROR);
        //Log.d("getHeight()", "" + getHeight());
        //Log.d("currentColors.length", "" + currentColors.length);
    }
    paint.setShader(linearGradient);
    sonarData.checkAndAddPaint(paint);
    numberOfColumnsToDraw = i + 1;
}
//Log.d(TAG, "numberOfColumnsToDraw " + numberOfColumnsToDraw);

currentPositionOffset = currentPositionOffset + i;
if (currentPositionOffset >= sonarData.getMaxNumberOfColumns()) {
    currentPositionOffset = sonarData.getMaxNumberOfColumns() - 1;
}

if (numberOfColumnsToDraw > 0) {
    Canvas canvas = surfaceHolder.lockCanvas();
    if (AppInstanceData.sonarBackgroundImage != null && canvas != null) {
        canvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight() - AppInstanceData.sonarBackgroundImage.getHeight(), null);
        if (cacheCanvas != null) {
            cacheCanvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight() - AppInstanceData.sonarBackgroundImage.getHeight(), null);
        }
    }
    for (i = drawOffset; i < sizeToDraw + drawOffset; i++) {
        Paint p = sonarData.paintArray.get(i - dataStartOffset);
        p.setStrokeWidth(2);
        //Log.d("drawGradientLines", "canvas.getHeight() " + canvas.getHeight());
        canvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
        if (cacheCanvas != null) {
            cacheCanvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
        }
    }
    surfaceHolder.unlockCanvasAndPost(canvas);
}
No comment on the CG code — it's been a while since I've drawn any gradients — but a couple of notes:
You shouldn't be doing that in drawRect because it's called a lot. Draw into an image and display it.
There's no matching free for the malloc, so you're leaking memory like crazy.
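A minimal sketch of the draw-into-an-image idea (run it whenever the color data changes, off the drawRect path; names follow the question):

// Hedged sketch: render the 320 gradients once into an offscreen image,
// then hand that image to the layer instead of redrawing in drawRect.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// ... the same gradient-drawing loop as above, targeting ctx ...
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.layer.contents = (__bridge id)rendered.CGImage; // no drawRect needed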
It'll have a learning curve, but implement this using OpenGL ES 2.0. I previously took something that was drawing a large number of gradients as well, and reimplemented it using OpenGL ES 2.0 and custom vertex and fragment shaders. It is way faster than the equivalent drawing done using Core Graphics, so you will probably see a big speed boost as well.
If you don't know any OpenGL yet, I would suggest finding some tutorials for working with OpenGL ES 2.0 on iOS (it has to be 2.0, because that's what offers the ability to write custom shaders) to learn the basics. Once you do, you should be able to significantly increase the performance of your drawing, well above that of the Android version, and it might even be an incentive to make the Android version use OpenGL as well.
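To give a flavor of the shader route, assuming the 320x1000 colors are uploaded as a texture (one possible structure, not necessarily how the reimplementation mentioned above worked), the fragment shader can be tiny:

// Hedged sketch: with the color columns in a 2D texture, the fragment
// shader just samples it and the GPU handles all the interpolation.
static NSString *const kGradientFragmentShader =
    @"varying highp vec2 vTexCoord;"
    @"uniform sampler2D uColorTable;" // hypothetical 320x1000 RGBA texture
    @"void main() {"
    @"    gl_FragColor = texture2D(uColorTable, vTexCoord);"
    @"}";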
I'm trying to draw up to 200,000 squares on the screen, or basically a lot of squares. I believe I'm making way too many draw calls, and it's crippling the performance of the app. The squares only update when I press a button, so I don't necessarily have to update them every frame.
Here's the code I have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    //static float transY = 0.0f;
    //float y = sinf(transY)/2.0f;
    //transY += 0.175f;

    GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
    effect.transform.modelviewMatrix = modelview;

    //GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
    GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
    effect.transform.projectionMatrix = projection;

    _isOpenGLViewReady = YES;
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    if (_model.updateView && _isOpenGLViewReady)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        [effect prepareToDraw];

        int pixelSize = _model.pixelSize;
        if (!_model.isReady)
            return;

        //NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
        for (int i = 0; i < _model.rows; i++)
        {
            for (int ii = 0; ii < _model.columns; ii++)
            {
                ColorModel *color = [_model getColorAtRow:i andColumn:ii];
                CGRect rect = CGRectMake(ii * pixelSize, i * pixelSize, pixelSize, pixelSize);
                //[self drawRectWithRect:rect withColor:c];
                GLubyte squareColors[] = {
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255,
                    color.red, color.green, color.blue, 255
                };
                //NSLog(@"Drawing color with red: %d", color.red);
                int xVal = rect.origin.x;
                int yVal = rect.origin.y;
                int width = rect.size.width;
                int height = rect.size.height;
                GLfloat squareVertices[] = {
                    xVal,         yVal,          1,
                    xVal + width, yVal,          1,
                    xVal,         yVal + height, 1,
                    xVal + width, yVal + height, 1
                };
                glEnableVertexAttribArray(GLKVertexAttribPosition);
                glEnableVertexAttribArray(GLKVertexAttribColor);
                glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
                glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glDisableVertexAttribArray(GLKVertexAttribPosition);
                glDisableVertexAttribArray(GLKVertexAttribColor);
            }
        }
        _model.updateView = YES;
    }
}
First, do you really need to draw 200,000 squares? Your viewport only has 786,000 pixels total. You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send all that in a single draw call. This will significantly reduce your OpenGL ES API call overhead.
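A rough sketch of that batching, with the buffer-filling loop elided (two triangles, i.e. six vertices, per square):

// Hedged sketch: build one big vertex/color array for every square,
// then issue a single draw call instead of one per square.
GLsizei quadCount = _model.rows * _model.columns;
GLfloat *verts  = malloc(quadCount * 6 * 3 * sizeof(GLfloat)); // 6 vertices x (x, y, z)
GLubyte *colors = malloc(quadCount * 6 * 4 * sizeof(GLubyte)); // 6 vertices x RGBA
// ... fill verts/colors: two triangles per square, same corner math as above ...
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, verts);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, colors);
glDrawArrays(GL_TRIANGLES, 0, quadCount * 6);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribColor);
free(verts);
free(colors);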
One other thing you could look into would be to use vertex buffer objects to store this geometry. If the geometry is static, you can avoid sending it on each drawn frame, or only update the part of it that has changed. Even if you just change out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on modern iOS devices.
Can you not try to optimize it somehow? I'm not terribly familiar with graphics programming, but I'd imagine that if you are drawing 200,000 squares, the chances that all of them are actually visible are slim. Could you add some sort of isVisible flag to your mySquare class that determines whether the square you want to draw is actually visible? Then the obvious next step is to modify your draw function so that it simply skips squares that aren't visible.
Or are you asking for someone to improve the current code you have? Because if your performance is as bad as you say, I don't think making small changes to the above code will solve your problem. You'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once.
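A minimal sketch of that texture path, where colorBytes is a hypothetical tightly packed rows-by-columns RGB buffer:

// Hedged sketch: push the color grid to the GPU once per update and let
// texturing do the upscaling, instead of drawing each square separately.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// GL_NEAREST keeps the hard square edges when the texture is magnified
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of 3-byte texels aren't 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _model.columns, _model.rows, 0,
             GL_RGB, GL_UNSIGNED_BYTE, colorBytes);
// ...then draw a single textured quad of columns*pixelSize x rows*pixelSize.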
Alternatively, if scaling up the contents of your ColorModel is the only reason you're using OpenGL ES and GLKit, you might be better off wrapping your color values into a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?
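If that route fits, a sketch of the CGImage wrapping (view and colorBytes are again placeholders; kCAFilterNearest keeps the squares crisp when scaled up):

// Hedged sketch: wrap the raw RGB bytes in a CGImage and let Core
// Animation scale it, with no OpenGL involved.
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, colorBytes, columns * rows * 3, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(columns, rows, 8, 24, columns * 3,
                                 colorSpace, kCGBitmapByteOrderDefault,
                                 provider, NULL, false, kCGRenderingIntentDefault);
view.layer.magnificationFilter = kCAFilterNearest; // crisp squares when upscaled
view.layer.contents = (__bridge id)image;
CGImageRelease(image);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);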