How to create VHS effect on iOS using GPUImage or another library - ios

I am trying to make a VHS effect for an iOS app, just like in this video:
https://www.youtube.com/watch?v=8ipML-T5yDk
I want to achieve this effect with as few filters as possible, to keep the CPU load low.
Basically what I need is to crank up the color levels to create a "chromatic aberration", change the sharpen parameters, and add some Gaussian blur plus some noise.
I am using GPUImage. The sharpen and Gaussian blur are easy to apply.
I am having two problems:
1) For the "chromatic aberration", the way it is usually done is to duplicate the video three times, set red to 0 on one copy, blue to 0 on another, and green to 0 on the last one, then blend them together (just like in the tutorial). But doing this on an iPhone would be too CPU-intensive.
Any idea how to achieve the same effect without having to duplicate the video and blend it?
2) I also want to add some noise but do not know which GPUImage effect to use. Any idea on this one too?
Thanks a lot,
Sébastian
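A note on 1): the channel split does not have to mean three full video pipelines; on the GPU it can stay a single small filter graph, because each "copy" is just a per-pixel channel mask. As a reference, here is a minimal Core Image sketch of the idea (my assumption, not a GPUImage recipe; the filter names and keys are standard Core Image, and the pixel offsets are arbitrary):
import CoreImage
import CoreGraphics

// Isolate R/G/B with CIColorMatrix, shift two of the copies slightly, and add them back.
// Everything stays on the GPU as one lazily evaluated CIImage graph.
func chromaticAberration(_ input: CIImage, offset: CGFloat = 2) -> CIImage? {
    // Keep one channel of `input`, zero the others (alpha is preserved by default).
    func channel(_ r: CGFloat, _ g: CGFloat, _ b: CGFloat) -> CIImage? {
        guard let m = CIFilter(name: "CIColorMatrix") else { return nil }
        m.setValue(input, forKey: kCIInputImageKey)
        m.setValue(CIVector(x: r, y: 0, z: 0, w: 0), forKey: "inputRVector")
        m.setValue(CIVector(x: 0, y: g, z: 0, w: 0), forKey: "inputGVector")
        m.setValue(CIVector(x: 0, y: 0, z: b, w: 0), forKey: "inputBVector")
        return m.outputImage
    }
    // Component-wise addition of two images.
    func add(_ a: CIImage, _ b: CIImage) -> CIImage? {
        guard let f = CIFilter(name: "CIAdditionCompositing") else { return nil }
        f.setValue(a, forKey: kCIInputImageKey)
        f.setValue(b, forKey: kCIInputBackgroundImageKey)
        return f.outputImage
    }
    guard
        let red = channel(1, 0, 0)?.transformed(by: CGAffineTransform(translationX: offset, y: 0)),
        let green = channel(0, 1, 0),
        let blue = channel(0, 0, 1)?.transformed(by: CGAffineTransform(translationX: -offset, y: 0)),
        let redPlusGreen = add(red, green)
    else { return nil }
    return add(blue, redPlusGreen)
}
With GPUImage specifically, the same idea should fit in one custom fragment shader that samples the frame three times with slightly different texture coordinates, which avoids running three separate filter chains over the video.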

(I'm not an iOS developer but I hope this can help someone.)
I wrote a VHS filter on Windows; this is what I did:
Crop the video frame to a 4:3 aspect ratio and lower the resolution to 360×270.
Lower the color saturation, and apply a color matrix to reduce the green channel to 93% (so the video will look purple).
Apply a convolve matrix to sharpen the video frame directionally. This is the kernel I used:
0 -0.5 0 0
-0.5 2.9 0 -0.5
0 -0.5 0 0
Blend real blank VHS footage into your video for the noise (search for "VHS overlay" on YouTube).
Video: Before After
Screenshot: Before After
The CPU and GPU consumption is OK. I apply this filter to a real-time camera preview on my old Windows phone (with a Snapdragon 808), and it works fine.
Code (C#, using Win2D library for GPU acceleration, implementing Windows.Media.Effects.IBasicVideoEffect interface):
public void ProcessFrame(ProcessVideoFrameContext context) //This method is called each frame
{
int outputWidth = 360; //Output Resolution
int outputHeight = 270;
IDirect3DSurface inputSurface = context.InputFrame.Direct3DSurface;
IDirect3DSurface outputSurface = context.OutputFrame.Direct3DSurface;
using (CanvasBitmap inputFrame = CanvasBitmap.CreateFromDirect3D11Surface(canvasDevice, inputSurface)) //The video frame to be processed
using (CanvasRenderTarget outputFrame = CanvasRenderTarget.CreateFromDirect3D11Surface(canvasDevice, outputSurface)) //The video frame after processing
using (CanvasDrawingSession outputFrameDrawingSession = outputFrame.CreateDrawingSession())
using (CanvasRenderTarget croppedFrame = new CanvasRenderTarget(canvasDevice, outputWidth, outputHeight, outputFrame.Dpi))
using (CanvasDrawingSession croppedFrameDrawingSession = croppedFrame.CreateDrawingSession())
using (CanvasBitmap overlay = Task.Run(async () => { return await CanvasBitmap.LoadAsync(canvasDevice, overlayFrames[new Random().Next(0, overlayFrames.Count)]); }).Result) //"overlayFrames" is a list containing video frames from https://youtu.be/SHhRFU2Jyfs; here we just randomly pick one frame to blend (note that Random.Next's upper bound is exclusive)
{
double inputWidth = inputFrame.Size.Width;
double inputHeight = inputFrame.Size.Height;
Rect rectangle;
//Crop the inputFrame to a centered 4:3 rectangle and scale it to 360*270, saving it to "croppedFrame"
if (3 * inputWidth > 4 * inputHeight)
{
double x = (inputWidth - inputHeight / 3 * 4) / 2;
rectangle = new Rect(x, 0, inputWidth - 2 * x, inputHeight);
}
else
{
double y = (inputHeight - inputWidth / 4 * 3) / 2;
rectangle = new Rect(0, y, inputWidth, inputHeight - 2 * y);
}
croppedFrameDrawingSession.DrawImage(inputFrame, new Rect(0, 0, outputWidth, outputHeight), rectangle, 1, CanvasImageInterpolation.HighQualityCubic);
//Apply a bunch of effects (mentioned in step 2,3,4) to "croppedFrame"
BlendEffect vhsEffect = new BlendEffect
{
Background = new ConvolveMatrixEffect
{
Source = new ColorMatrixEffect
{
Source = new SaturationEffect
{
Source = croppedFrame,
Saturation = 0.4f
},
ColorMatrix = new Matrix5x4
{
M11 = 1f,
M22 = 0.93f,
M33 = 1f,
M44 = 1f
}
},
KernelHeight = 3,
KernelWidth = 4,
KernelMatrix = new float[]
{
0, -0.5f, 0, 0,
-0.5f, 2.9f, 0, -0.5f,
0, -0.5f, 0, 0,
}
},
Foreground = overlay,
Mode = BlendEffectMode.Screen
};
//And draw the result to "outputFrame"
outputFrameDrawingSession.DrawImage(vhsEffect, rectangle, new Rect(0, 0, outputWidth, outputHeight));
}
}

Related

Decrease noise while detecting lines from webcam

I'm trying to implement an online document scanner that automatically detects the document's edges and takes a photo when the area of the detected rectangle is big enough. I implemented the following pipeline in opencv.js:
// Grayscale image
var imgray = new cv.Mat();
cv.cvtColor(srcMat, imgray, cv.COLOR_RGBA2GRAY, 0);
// Blurring
let blurred = new cv.Mat();
let ksize = new cv.Size(7, 7);
cv.GaussianBlur(imgray, blurred, ksize, 0, 0, cv.BORDER_DEFAULT);
// Canny
var canny = new cv.Mat();
let low_threshold = 50;
let high_threshold = 150; // match the values actually passed to Canny below
cv.Canny(blurred, canny, low_threshold, high_threshold, 3, false);
// Hough
let rho = 1; // distance resolution in pixels of the Hough grid
let theta = Math.PI / 180; // angular resolution in radians of the Hough grid
let threshold = 2; // minimum number of votes (intersections in a Hough grid cell)
let min_line_length = 100; // minimum number of pixels making up a line
let max_line_gap = 10; // maximum gap in pixels between connectable line segments
let lines = new cv.Mat();
// Run Hough on edge detected image
// Output "lines" is an array containing endpoints of detected line segments
cv.HoughLinesP(canny, lines, rho, theta, threshold, min_line_length, max_line_gap);
// draw lines
for (let i = 0; i < lines.rows; ++i) {
let startPoint = new cv.Point(lines.data32S[i * 4], lines.data32S[i * 4 + 1]);
let endPoint = new cv.Point(lines.data32S[i * 4 + 2], lines.data32S[i * 4 + 3]);
cv.line(canny, startPoint, endPoint, new cv.Scalar(255, 255, 255, 0), 5);
}
document.getElementById("lines").textContent=lines.rows;
imgray.delete();
blurred.delete();
lines.delete();
return canny;
The result is what you see in the video footage.
The problem is that while the Canny edges are quite "stable", the lines detected by HoughLinesP change position continuously and are not easy to track. Where am I wrong? Can you suggest some enhancements to this pipeline?

Save frames of background subtraction capture

I am working on a background subtraction capture demo but I have run into difficulties. I have already got the pixels of the silhouette extraction and I intend to draw them into a buffer through createGraphics(). I set the new background to 100% transparent so that I only get the foreground extraction. Then I use the saveFrame() function to get a PNG file of each frame. However, it doesn't work as I expected: I want a series of PNGs of the silhouette extraction
with a 100% transparent background, but I only get plain PNGs of the frames from the camera feed. Could anyone help me see what the problem with this code is? Thanks a lot in advance. Any help will be appreciated.
import processing.video.*;
Capture video;
PGraphics pg;
PImage backgroundImage;
float threshold = 30;
void setup() {
size(320, 240);
video = new Capture(this, width, height);
video.start();
backgroundImage = createImage(video.width, video.height, RGB);
pg = createGraphics(320, 240);
}
void captureEvent(Capture video) {
video.read();
}
void draw() {
pg.beginDraw();
loadPixels();
video.loadPixels();
backgroundImage.loadPixels();
image(video, 0, 0);
for (int x = 0; x < video.width; x++) {
for (int y = 0; y < video.height; y++) {
int loc = x + y * video.width;
color fgColor = video.pixels[loc];
color bgColor = backgroundImage.pixels[loc];
float r1 = red(fgColor); float g1 = green(fgColor); float b1 = blue(fgColor);
float r2 = red(bgColor); float g2 = green(bgColor); float b2 = blue(bgColor);
float diff = dist(r1, g1, b1, r2, g2, b2);
if (diff > threshold) {
pixels[loc] = fgColor;
} else {
pixels[loc] = color(0, 0);
}
}}
pg.updatePixels();
pg.endDraw();
saveFrame("line-######.png");
}
void mousePressed() {
backgroundImage.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height);
backgroundImage.updatePixels();
}
Re:
Then I use saveFrame() function in order to get png file of each frame. However, it doesn't work as I expected. I intend to get a series of png of the silhouette extraction with 100% transparent background but now I only get the general png of frames from the camera feed.
This won't work, because saveFrame() saves the canvas, and the canvas doesn't support transparency. For example, from the reference:
It is not possible to use the transparency alpha parameter with background colors on the main drawing surface. It can only be used along with a PGraphics object and createGraphics(). https://processing.org/reference/background_.html
If you want to dump a frame with transparency you need to use .save() to dump it directly from a PImage / PGraphics.
https://processing.org/reference/PImage_save_.html
If you need to clear your PImage / PGraphics and reuse it each frame, either use pg.clear() or pg.background(0,0,0,0) (set all pixels to transparent black).

SceneKit 3D Marker Augmented Reality iOS

For the last couple of weeks I've been working on developing a simple proof-of-concept application in which a 3D model is projected over a specific augmented reality marker (in my case I am using ArUco markers) on iOS (with Swift and Objective-C).
I calibrated an iPad camera with a specific fixed lens position and used that to estimate the pose of the AR marker (which, from my debug analysis, seems pretty accurate). The problem appears (surprise, surprise) when I try to use a SceneKit scene to project a model over the marker.
I am aware that the axes in OpenCV and SceneKit are different (Y and Z) and I have already made this correction, as well as handled the row-order/column-order difference between the two libraries.
After constructing the projection matrix, I apply that same transform to the 3D model, and from my debug analysis the object seems to be translated to the desired position and with the desired rotation. The problem is that it never overlaps the marker's specific image pixel position. I am using an AVCaptureVideoPreviewLayer to put the video in the background; it has the same bounds as my SceneKit view.
Does anyone have a clue why this happens? I tried to play with the camera's FOV, but with no real impact on the results.
Thank you all for your time.
EDIT 1: I will post some of the code here to show what I am currently doing.
I have two subviews inside the main view: one is a background AVCaptureVideoPreviewLayer and the other is a SceneKit view. Both have the same bounds as the main view.
At each frame I use an opencv wrapper which outputs the pose of each marker:
std::vector<int> ids;
std::vector<std::vector<cv::Point2f>> corners, rejected;
cv::aruco::detectMarkers(frame, _dictionary, corners, ids, _detectorParams, rejected);
if (ids.size() > 0 ){
cv::aruco::drawDetectedMarkers(frame, corners, ids);
cv::Mat rvecs, tvecs;
cv::aruco::estimatePoseSingleMarkers(corners, 2.6, _intrinsicMatrix, _distCoeffs, rvecs, tvecs);
// Let's protect ourselves against multiple markers
if (rvecs.total() > 1)
return;
_markerFound = true;
cv::Rodrigues(rvecs, _currentR);
_currentT = tvecs;
for (int row = 0; row < _currentR.rows; row++){
for (int col = 0; col < _currentR.cols; col++){
_currentExtrinsics.at<double>(row, col) = _currentR.at<double>(row, col);
}
_currentExtrinsics.at<double>(row, 3) = _currentT.at<double>(row);
}
_currentExtrinsics.at<double>(3,3) = 1;
std::cout << tvecs << std::endl;
// Convert the coordinate system of openCV to openGL (SceneKit)
// Note that in openCV z goes away from the camera (in openGL it goes into the camera)
// and y points down (in openGL it points up)
// Another note: openCV has a column-order matrix representation, while SceneKit
// has a row-order matrix, but we'll take care of that later.
cv::Mat cvToGl = cv::Mat::zeros(4, 4, CV_64F);
cvToGl.at<double>(0,0) = 1.0f;
cvToGl.at<double>(1,1) = -1.0f; // invert the y axis
cvToGl.at<double>(2,2) = -1.0f; // invert the z axis
cvToGl.at<double>(3,3) = 1.0f;
_currentExtrinsics = cvToGl * _currentExtrinsics;
cv::aruco::drawAxis(frame, _intrinsicMatrix, _distCoeffs, rvecs, tvecs, 5);
Then in each frame I convert the OpenCV matrix to an SCNMatrix4:
- (SCNMatrix4) transformToSceneKit:(cv::Mat&) openCVTransformation{
SCNMatrix4 mat = SCNMatrix4Identity;
// Transpose
openCVTransformation = openCVTransformation.t();
// copy the rotationRows
mat.m11 = (float) openCVTransformation.at<double>(0, 0);
mat.m12 = (float) openCVTransformation.at<double>(0, 1);
mat.m13 = (float) openCVTransformation.at<double>(0, 2);
mat.m14 = (float) openCVTransformation.at<double>(0, 3);
mat.m21 = (float)openCVTransformation.at<double>(1, 0);
mat.m22 = (float)openCVTransformation.at<double>(1, 1);
mat.m23 = (float)openCVTransformation.at<double>(1, 2);
mat.m24 = (float)openCVTransformation.at<double>(1, 3);
mat.m31 = (float)openCVTransformation.at<double>(2, 0);
mat.m32 = (float)openCVTransformation.at<double>(2, 1);
mat.m33 = (float)openCVTransformation.at<double>(2, 2);
mat.m34 = (float)openCVTransformation.at<double>(2, 3);
//copy the translation row
mat.m41 = (float)openCVTransformation.at<double>(3, 0);
mat.m42 = (float)openCVTransformation.at<double>(3, 1)+2.5;
mat.m43 = (float)openCVTransformation.at<double>(3, 2);
mat.m44 = (float)openCVTransformation.at<double>(3, 3);
return mat;
}
At each frame in which the AR marker is found I add a box to the scene and apply the transformation to the object node:
SCNBox *box = [SCNBox boxWithWidth:5.0 height:5.0 length:5.0 chamferRadius:0.0];
_boxNode = [SCNNode nodeWithGeometry:box];
if (found){
[self.delegate returnExtrinsicsMat:extrinsicMatrixOfTheMarker];
Mat R, T;
[self.delegate returnRotationMat:R];
[self.delegate returnTranslationMat:T];
SCNMatrix4 Transformation;
Transformation = [self transformToSceneKit:extrinsicMatrixOfTheMarker];
//_cameraNode.transform = SCNMatrix4Invert(Transformation);
[_sceneKitScene.rootNode addChildNode:_cameraNode];
//_cameraNode.camera.projectionTransform = SCNMatrix4Identity;
//_cameraNode.camera.zNear = 0.0;
_sceneKitView.pointOfView = _cameraNode;
_boxNode.transform = Transformation;
[_sceneKitScene.rootNode addChildNode:_boxNode];
//_boxNode.position = SCNVector3Make(Transformation.m41, Transformation.m42, Transformation.m43);
std::cout << (_boxNode.position.x) << " " << (_boxNode.position.y) << " " << (_boxNode.position.z) << std::endl << std::endl;
}
For example, if the translation vector is (-1, 5, 20), the object appears in the scene at position (-1, -5, -20), and the rotation is also correct. The problem is that it never appears at the correct position in the background image. I will add some images to show the result.
Does anyone know why this is happening?
I found the solution. Instead of applying the transform to the object's node, I applied the inverted transformation matrix to the camera node. Then, for the camera's projection transform, I applied the following matrix:
var projection = SCNMatrix4Identity
projection.m11 = (2 * Float(cameraMatrix[0])) / -(ImageWidth * 0.5)
projection.m12 = (-2 * Float(cameraMatrix[1])) / (ImageWidth * 0.5)
projection.m13 = (width - (2 * Float(cameraMatrix[2]))) / (ImageWidth * 0.5)
projection.m22 = (2 * Float(cameraMatrix[4])) / (ImageHeight * 0.5)
projection.m23 = (-height + (2 * Float(cameraMatrix[5]))) / (ImageHeight * 0.5)
projection.m33 = (-far - near) / (far - near)
projection.m34 = (-2 * far * near) / (far - near)
projection.m43 = -1
projection.m44 = 0
where far and near are the z clipping planes.
I also had to correct the box initial position to center it on the marker.
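For anyone reproducing this, here is a minimal Swift sketch of that final camera setup (markerTransform stands for the extrinsic matrix already converted to an SCNMatrix4, and projection for the intrinsics-based matrix built above; both names are my own, not from the original code):
import SceneKit

// Drive the SceneKit camera from the marker pose instead of moving the model.
func updateCamera(_ cameraNode: SCNNode,
                  in scnView: SCNView,
                  markerTransform: SCNMatrix4,
                  projection: SCNMatrix4) {
    if cameraNode.camera == nil { cameraNode.camera = SCNCamera() }
    if cameraNode.parent == nil { scnView.scene?.rootNode.addChildNode(cameraNode) }
    scnView.pointOfView = cameraNode
    // The model stays where it is; the camera gets the inverse of the marker pose.
    cameraNode.transform = SCNMatrix4Invert(markerTransform)
    // Replace SceneKit's default projection with the one built from the OpenCV intrinsics.
    cameraNode.camera?.projectionTransform = projection
}
The model node itself then keeps a fixed transform (plus the small centering offset mentioned above) instead of receiving the marker transform every frame.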

Detecting HeartBeat Using WebCam?

I am trying to create an application which can detect your heartbeat using your computer's webcam. I have been working on the code for two weeks and this is how far I have got.
How does it work? Illustrated below...
Detecting face using opencv
Getting image of forehead
Applying filter to convert it into grayscale image [you can skip it]
Finding the average intensity of the green pixels per frame
Saving the averages into an array
Applying FFT (I have used the minim library)
Extracting the heartbeat from the FFT spectrum (here I need some help)
Here I need help extracting the heartbeat from the FFT spectrum. Can anyone help me? Here is a similar application developed in Python, but I am not able to understand that code, so I am developing the same thing in Processing. Can anyone help me understand the part of the Python code where it extracts the heartbeat?
//---------import required libraries -----------
import gab.opencv.*;
import processing.video.*;
import java.awt.*;
import java.util.*;
import ddf.minim.analysis.*;
import ddf.minim.*;
//----------create objects---------------------------------
Capture video; // camera object
OpenCV opencv; // opencv object
Minim minim;
FFT fft;
//IIRFilter filt;
//--------- Create ArrayList--------------------------------
ArrayList<Float> poop = new ArrayList();
float[] sample;
int bufferSize = 128;
int sampleRate = 512;
int bandWidth = 20;
int centerFreq = 80;
//---------------------------------------------------
void setup() {
size(640, 480); // size of the window
minim = new Minim(this);
fft = new FFT( bufferSize, sampleRate);
video = new Capture(this, 640/2, 480/2); // initializing video object
opencv = new OpenCV(this, 640/2, 480/2); // initializing opencv object
opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // loading haar cscade file for face detection
video.start(); // start video
}
void draw() {
background(0);
// image(video, 0, 0 ); // show video in the background
opencv.loadImage(video);
Rectangle[] faces = opencv.detect();
video.loadPixels();
//------------ Finding faces in the video -----------
float gavg = 0;
for (int i = 0; i < faces.length; i++) {
noFill();
stroke(#FFB700); // yellow rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height); // creating rectangle around the face (YELLOW)
stroke(#0070FF); //blue rectangle
rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height-2*faces[i].height/3); // creating a blue rectangle around the forehead
//-------------------- storing forehead white rectangle part into an image -------------------
stroke(0, 255, 255);
rect(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15);
PImage img = video.get(faces[i].x+faces[i].width/2-15, faces[i].y+15, 30, 15); // storing the forehead aera into a image
img.loadPixels();
img.filter(GRAY); // converting capture image rgb to gray
img.updatePixels();
int numPixels = img.width*img.height;
for (int px = 0; px < numPixels; px++) { // For each pixel in the video frame...
final color c = img.pixels[px];
final color luminG = c>>010 & 0xFF;
final float luminRangeG = luminG/255.0;
gavg = gavg + luminRangeG;
}
//--------------------------------------------------------
gavg = gavg/numPixels;
if (poop.size() < bufferSize) {
poop.add(gavg);
}
else {
poop.remove(0); // buffer is full: drop the oldest sample...
poop.add(gavg); // ...and append the newest one
}
}
sample = new float[poop.size()];
for (int i=0;i<poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
if (sample.length>=bufferSize) {
//fft.window(FFT.NONE);
fft.forward(sample, 0);
// bpf = new BandPass(centerFreq, bandwidth, sampleRate);
// in.addEffect(bpf);
float bw = fft.getBandWidth(); // returns the width of each frequency band in the spectrum (in Hz).
println(bw); // returns 21.5332031 Hz for spectrum [0] & [512]
for (int i = 0; i < fft.specSize(); i++)
{
// println( " Freq" + max(sample));
stroke(0, 255, 0);
float x = map(i, 0, fft.specSize(), 0, width);
line( x, height, x, height - fft.getBand(i)*100);
// text("FFT FREQ " + fft.getFreq(i), width/2-100, 10*(i+1));
// text("FFT BAND " + fft.getBand(i), width/2+100, 10*(i+1));
}
}
else {
println(sample.length + " " + poop.size());
}
}
void captureEvent(Capture c) {
c.read();
}
The FFT is applied in a window with 128 samples.
int bufferSize = 128;
During the draw() method the samples are stored in an array until the buffer is full, so that the FFT can be applied. After that the buffer is kept full: to insert a new sample, the oldest one is removed. gavg is the average gray channel color.
gavg = gavg/numPixels;
if (poop.size() < bufferSize) {
poop.add(gavg);
}
else {
poop.remove(0);
poop.add(gavg);
}
Copying poop to sample:
sample = new float[poop.size()];
for (int i=0;i < poop.size();i++) {
Float f = (float) poop.get(i);
sample[i] = f;
}
Now it is possible to apply the FFT to the sample array:
fft.forward(sample, 0);
In the code, only the spectrum result is shown; the heartbeat frequency still has to be calculated.
Across the FFT bands you have to find the maximum, and the position (index) of that maximum gives the heartbeat frequency:
float peakAmplitude = 0;
int peakBand = 0;
for (int i = 0; i < fft.specSize(); i++) {
// track the band with the largest magnitude; its index marks the peak position
if (fft.getBand(i) > peakAmplitude) { peakAmplitude = fft.getBand(i); peakBand = i; }
}
Then get the bandwidth, which is needed to convert a band index into a frequency:
float bw = fft.getBandWidth();
Converting the peak position into the heartbeat frequency:
float heartBeatFrequency = bw * peakBand; // index of the strongest band * band width = frequency in Hz (multiply by 60 for beats per minute)
Once the samples array reaches bufferSize (128) or more, forward the FFT with the samples array and then take the peak of the spectrum as described above; that peak gives the heart rate.
The following papers explain the same approach:
Measuring Heart Rate from Video - Isabel Bush - Stanford - link (Page 4 paragraphs below Figure 2 explain this.)
Real Time Heart Rate Monitoring From Facial RGB Color Video Using Webcam - H. Rahman, M.U. Ahmed, S. Begum, P. Funk - link (Page 4)
After looking at your question, I thought I'd get my hands on this, and I tried making a repository for it.
I'm still having some issues; it would be great if someone could have a look at it.
Thank you David Clifte for this answer, it helped a lot.

How to draw many gradient lines quickly

I'm developing an app that has to draw 320 vertical gradient lines on a portrait iPhone screen, where each gradient line is either 1 px or 2 px wide (non-retina vs retina). Each gradient line has 1000 positions, and each position can have a unique color. These 1000 colors (floats) sit in a C-style 2D array (an array of arrays: 320 arrays of 1000 colors).
Currently, the gradient lines are drawn in a for loop inside the drawRect method of a custom UIView. The problem I'm having is that it takes longer than ONE second to cycle through the loop and draw all 320 lines. Within that ONE second, I have another thread updating the color arrays, but since it takes longer than ONE second to draw, I don't see every update. I see only every second or third update.
I'm using the exact same procedure in my Android code, which has no problem drawing 640 gradient lines (double the amount) multiple times per second using a SurfaceView. My Android app never misses an update.
If you look at the Android code, it actually draws gradient lines to TWO separate canvases. The array size is dynamic and can be up to half the landscape resolution width of an Android phone (e.g. 1280 width = 1280/2 = 640 lines). Since the Android app is fast enough, I allow landscape mode. Even with double the data of the iPhone version and drawing to two separate canvases, the Android code runs multiple times a second. The iPhone code, with half the number of lines and only drawing to a single context, cannot draw in under a second.
Is there a faster way to draw 320 vertical gradient lines (each with 1000 positions) on an iPhone?
Is there a hardware accelerated SurfaceView equivalent for iOS that can draw many gradients really fast?
//IPHONE - drawRect method
int totalNumberOfColors = 1000;
int i;
CGFloat *locations = malloc(totalNumberOfColors * sizeof locations[0]);
for (i = 0; i < totalNumberOfColors; i++) {
float division = (float)1 / (float)(totalNumberOfColors - 1);
locations[i] = i * division;
}
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
for (int k = 0; k < 320; k++) {
CGFloat * colorComponents = arrayOfFloatArrays[k];
CGGradientRef gradient = CGGradientCreateWithColorComponents(
colorSpace,
colorComponents,
locations,
(size_t)(totalNumberOfColors));
CGRect newRect;
if (currentPositionOffset >=320) {
newRect = CGRectMake(0, 0, 1, CGRectGetMaxY(rect));
} else {
newRect = CGRectMake(319 - (k * 1), 0, 1, CGRectGetMaxY(rect));
}
CGContextSaveGState(ctx);
//NO CLIPPING STATE
CGContextAddRect(ctx, newRect);
CGContextClip(ctx);
//CLIPPING STATE
CGContextDrawLinearGradient(
ctx,
gradient,
CGPointMake(0, 0),
CGPointMake(0, CGRectGetMaxY(rect)),
(CGGradientDrawingOptions)NULL);
CGContextRestoreGState(ctx);
//RESTORE TO NO CLIPPING STATE
CGGradientRelease(gradient);
}
//ANDROID - public void run() method on SurfaceView
for (i = 0; i < sonarData.arrayOfColorIntColumns.size() - currentPositionOffset; i++) {
Paint paint = new Paint();
int[] currentColors = sonarData.arrayOfColorIntColumns.get(currentPositionOffset + i);
//Log.d("currentColors.toString()",currentColors.toString());
LinearGradient linearGradient;
if (currentScaleFactor > 1.0) {
int numberOfColorsToUse = (int)(1000.0/currentScaleFactor);
int tmpTopOffset = currentTopOffset;
if (currentTopOffset + numberOfColorsToUse > 1000) {
//shift tmpTopOffset
tmpTopOffset = 1000 - numberOfColorsToUse - 1;
}
int[] subsetOfCurrentColors = new int[numberOfColorsToUse];
System.arraycopy(currentColors, tmpTopOffset, subsetOfCurrentColors, 0, numberOfColorsToUse);
linearGradient = new LinearGradient(0, tmpTopOffset, 0, getHeight(), subsetOfCurrentColors, null, Shader.TileMode.MIRROR);
//Log.d("getHeight()","" + getHeight());
//Log.d("subsetOfCurrentColors.length","" + subsetOfCurrentColors.length);
} else {
//use all colors
linearGradient = new LinearGradient(0, 0, 0, getHeight(), currentColors, null, Shader.TileMode.MIRROR);
//Log.d("getHeight()","" + getHeight());
//Log.d("currentColors.length","" + currentColors.length);
}
paint.setShader(linearGradient);
sonarData.checkAndAddPaint(paint);
numberOfColumnsToDraw = i + 1;
}
//Log.d(TAG,"numberOfColumnsToDraw " + numberOfColumnsToDraw);
currentPositionOffset = currentPositionOffset + i;
if (currentPositionOffset >= sonarData.getMaxNumberOfColumns()) {
currentPositionOffset = sonarData.getMaxNumberOfColumns() - 1;
}
if (numberOfColumnsToDraw > 0) {
Canvas canvas = surfaceHolder.lockCanvas();
if (AppInstanceData.sonarBackgroundImage != null && canvas != null) {
canvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight()- AppInstanceData.sonarBackgroundImage.getHeight(), null);
if (cacheCanvas != null) {
cacheCanvas.drawBitmap(AppInstanceData.sonarBackgroundImage, 0, getHeight()- AppInstanceData.sonarBackgroundImage.getHeight(), null);
}
}
for (i = drawOffset; i < sizeToDraw + drawOffset; i++) {
Paint p = sonarData.paintArray.get(i - dataStartOffset);
p.setStrokeWidth(2);
//Log.d("drawGradientLines", "canvas.getHeight() " + canvas.getHeight());
canvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
if (cacheCanvas != null) {
cacheCanvas.drawLine(getWidth() - (i - drawOffset) * 2, 0, getWidth() - (i - drawOffset) * 2, canvas.getHeight(), p);
}
}
surfaceHolder.unlockCanvasAndPost(canvas);
}
No comment on the CG code — it's been a while since I've drawn any gradients — but a couple of notes:
You shouldn't be doing that in drawRect because it's called a lot. Draw into an image and display it (see the sketch after these notes).
There's no matching free for the malloc, so you're leaking memory like crazy.
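Following up on the first note, here is a minimal sketch of the "draw into an image" approach (assuming iOS 10+ for UIGraphicsImageRenderer; drawAllGradientLines is a stand-in for the existing Core Graphics loop from the question):
import UIKit

// Render the 320 gradients into a bitmap once per data update, off the drawRect path.
func renderGradientImage(size: CGSize,
                         drawAllGradientLines: (CGContext, CGSize) -> Void) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        // Same CGGradient drawing code as in the question, just targeting this context.
        drawAllGradientLines(ctx.cgContext, size)
    }
}
You can rebuild the image whenever the color arrays change (even on a background queue) and then assign it to a UIImageView or the view's layer contents on the main thread, instead of triggering a full custom drawRect pass each time.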
It'll have a learning curve, but implement this using OpenGL ES 2.0. I previously took something that was drawing a large number of gradients as well, and reimplemented it using OpenGL ES 2.0 and custom vertex and fragment shaders. It is way faster than the equivalent drawing done using Core Graphics, so you will probably see a big speed boost as well.
If you don't know any OpenGL yet, I would suggest finding some tutorials for working with OpenGL ES 2.0 on iOS (it has to be 2.0 because that's what offers the ability to write custom shaders) to learn the basics. Once you do that, you should be able to significantly increase the performance of your drawing, well above that of the Android version, and it might even be an incentive to make the Android version use OpenGL as well.
