I'm using the EZAudio library for iOS to handle playback of an audio file and draw its waveform. I'd like to create a view containing the entire waveform (an EZAudioPlotGL view, which is a subclass of UIView) and then save it as a PNG.
I'm having a couple of problems with this:
The temporary audio plot I'm creating to save the snapshot image is drawing to the view, which I don't understand, because I never add it as a subview.
The tempPlot only draws the top half of the waveform (not "mirrored", even though I set shouldMirror in the code).
The UIImage saved from the tempPlot only captures a short portion of the beginning of the waveform.
The problems can be seen in these images:
How the screen should look after (the original audio plot):
How the screen does look (showing the tempPlot I don't want to draw to the screen):
The saved image I get out that should be a copy of tempPlot:
The EZAudio library can be found here: https://github.com/syedhali/EZAudio
And my project can be found here, if you want to see the problem for yourself: https://www.dropbox.com/sh/8ilfaofvaa8aq3p/AADU5rOwqzCtEmJz-ePRXIDZa
I'm not very experienced with OpenGL graphics, so a lot of the work going on inside the EZAudioPlotGL class is a bit over my head.
Here's the relevant code:
ViewController.m:
@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Customizing the audio plot's look
    self.audioPlot.backgroundColor = [UIColor blueColor];
    self.audioPlot.color = [UIColor whiteColor];
    self.audioPlot.plotType = EZPlotTypeBuffer;
    self.audioPlot.shouldFill = YES;
    self.audioPlot.shouldMirror = YES;

    // Try opening the sample file
    [self openFileWithFilePathURL:[NSURL fileURLWithPath:kAudioFileDefault]];
}

- (void)openFileWithFilePathURL:(NSURL *)filePathURL {
    self.audioFile = [EZAudioFile audioFileWithURL:filePathURL];

    // Plot the whole waveform
    [self.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
        [self.audioPlot updateBuffer:waveformData withBufferSize:length];
    }];

    // Save whole waveform as image
    [self.audioPlot fullWaveformImageForSender:self];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"waveformImage.png"];
    [UIImagePNGRepresentation(self.snapshotImage) writeToFile:filePath atomically:YES];
}

@end
My Category of EZAudioPlotGL:
- (void)fullWaveformImageForSender:(ViewController *)sender {
    EZAudioPlotGL *tempPlot = [[EZAudioPlotGL alloc] initWithFrame:self.frame];
    [tempPlot setPlotType:EZPlotTypeBuffer];
    [tempPlot setShouldFill:YES];
    [tempPlot setShouldMirror:YES];
    [tempPlot setBackgroundColor:[UIColor redColor]];
    [tempPlot setColor:[UIColor greenColor]];

    // Plot full waveform on tempPlot
    [sender.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
        [tempPlot updateBuffer:waveformData withBufferSize:length];
        // tempPlot.glkVC is a getter for the private EZAudioPlotGLKViewController
        // property in tempPlot (added by me in the EZAudioPlotGL class)
        sender.snapshotImage = [((GLKView *)tempPlot.glkVC.view) snapshot];
    }];
}
drawViewHierarchyInRect only works for capturing CoreGraphics-based view drawing. (CG drawing happens on the CPU and renders into a buffer in main memory, so CG, aka UIGraphics, can just slurp an image out of there.) It won't help you if your view draws its content using OpenGL. (OpenGL drawing happens on the GPU, so you need to use GL to read pixels back from the GPU to main memory before you can build an image out of them.)
It looks like your library does its drawing with an instance of GLKView. (Poking around in the source, EZAudioPlotGL uses EZAudioPlotGLKViewController, which creates its own GLKView.) That class, in turn, has a snapshot method that does all the heavy lifting to get pixels back from the GPU and put them in a UIImage.
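Putting those two observations together, the capture could be done straight from that underlying GLKView. A minimal sketch, assuming the glkVC accessor the question adds to EZAudioPlotGL; -display forces a synchronous render pass so there are pixels to read back:

// Sketch: capture the GL-backed plot after its buffer has been updated.
// Assumes the glkVC accessor added to EZAudioPlotGL in the question's category.
[sender.audioFile getWaveformDataWithCompletionBlock:^(float *waveformData, UInt32 length) {
    [tempPlot updateBuffer:waveformData withBufferSize:length];
    dispatch_async(dispatch_get_main_queue(), ^{
        GLKView *glView = (GLKView *)tempPlot.glkVC.view;
        [glView display];                       // force a render pass before reading pixels
        sender.snapshotImage = glView.snapshot; // GLKView reads back from the GPU here
    });
}];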
I've now filed a bug for the issue below. Anyone with a good workaround?
I try to save an SKTexture to file, and load it back again, but I don't succeed. The following code snippet can be copied to GameScene.m in the Xcode startup project.
I use textureFromNode in generateTexture, and that seems to be the root cause of my problem. If I use a texture from a sprite, the code works, and two spaceships are visible.
This code worked in iOS 8 but stopped working in Xcode 7 and iOS 9. I just want to verify that this is a bug before I file a bug report. My worry is that I'm doing something wrong with NSKeyedArchiver.
It happens both in simulator and on device.
#import "GameScene.h"

@implementation GameScene

// Generates a texture
- (SKTexture *)generateTexture
{
    SKScene *scene = [[SKScene alloc] initWithSize:CGSizeMake(100, 100)];
    SKShapeNode *shapeNode = [SKShapeNode shapeNodeWithRectOfSize:CGSizeMake(50, 50)];
    shapeNode.position = CGPointMake(50, 50);
    shapeNode.strokeColor = SKColor.redColor;
    shapeNode.lineWidth = 10;
    [scene addChild:shapeNode];
    SKTexture *texture = [self.view textureFromNode:scene];
    //SKTexture *texture = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"].texture; // This works!
    return texture;
}

// Just generate a path
- (NSString *)fullDocumentsPath
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *yourFileName = [documentsDirectory stringByAppendingPathComponent:@"fileName"];
    return yourFileName;
}

- (void)didMoveToView:(SKView *)view
{
    self.scaleMode = SKSceneScaleModeResizeFill;

    // Verify that the generateTexture method indeed produces a valid texture.
    SKSpriteNode *s1 = [SKSpriteNode spriteNodeWithTexture:[self generateTexture]];
    s1.position = CGPointMake(100, 100);
    [self addChild:s1];

    // Start with saving the texture.
    NSString *fullName = [self fullDocumentsPath];
    NSError *error;
    NSFileManager *fileMgr = [NSFileManager defaultManager];
    if ([fileMgr fileExistsAtPath:fullName])
    {
        [fileMgr removeItemAtPath:fullName error:&error];
        assert(error == nil);
    }
    NSDictionary *dict1 = [NSDictionary dictionaryWithObject:[self generateTexture] forKey:@"object"];
    BOOL ok = [NSKeyedArchiver archiveRootObject:dict1 toFile:fullName];
    assert(ok);

    // Read back the texture and place it in a sprite. This sprite is not shown. Why?
    NSData *data = [NSData dataWithContentsOfFile:fullName];
    NSDictionary *dict2 = [NSKeyedUnarchiver unarchiveObjectWithData:data];
    SKTexture *loadedTexture = [dict2 objectForKey:@"object"];
    SKSpriteNode *s2 = [SKSpriteNode spriteNodeWithTexture:loadedTexture];
    NSLog(@"t(%f, %f)", loadedTexture.size.width, loadedTexture.size.height); // Size of sprite & texture is zero. Why?
    s2.position = CGPointMake(200, 100);
    [self addChild:s2];
}

@end
Update for Yudong:
This might be a more relevant example, but imagine that the scene consists of 4 layers, with lots of sprites. When the game play is over I want to store a thumbnail image of the end scene of the match. The image will be used as a texture on a button. Pressing that button will start a replay movie of the match. There will be lots of buttons with images of old games so I need to store each image on file.
- (SKTexture *)generateTexture
{
    SKScene *scene = [[SKScene alloc] initWithSize:CGSizeMake(100, 100)];
    SKSpriteNode *ship = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
    ship.position = CGPointMake(50, 50);
    [scene addChild:ship];
    SKTexture *texture = [self.view textureFromNode:scene];
    NSLog(@"texture: %@", texture);
    return texture;
}
The solution/work around:
Inspired by Russell's code, I did the following. It works!
CGImageRef cgImg = texture.CGImage;
SKTexture *newText = [SKTexture textureWithCGImage:cgImg];
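Applied to the generateTexture method from the question, the round trip might look like this (a sketch based only on the workaround above):

- (SKTexture *)generateTexture
{
    SKScene *scene = [[SKScene alloc] initWithSize:CGSizeMake(100, 100)];
    SKSpriteNode *ship = [SKSpriteNode spriteNodeWithImageNamed:@"Spaceship"];
    ship.position = CGPointMake(50, 50);
    [scene addChild:ship];

    SKTexture *texture = [self.view textureFromNode:scene];
    // Round-trip through a CGImage so the pixel data is actually resident
    // before NSKeyedArchiver serializes the texture.
    return [SKTexture textureWithCGImage:texture.CGImage];
}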
I've done a lot of experimenting/hacking with SKTextures. My game relies on them and is written in Swift. Specifically, I've had many problems with textureFromNode and textureFromNode:crop:, and with creating SKPhysicsBodies from textures. These methods worked fine in iOS 8, but Apple broke them in iOS 9.0: the textures came back as nil, and those nil textures broke the SKPhysicsBodies built from them.
I recently worked on serialization/deserialization of SKTextures.
Some key ideas/clues you might investigate are:
1. Run iOS 9.2. Apple staff mentioned that a lot of issues have been fixed: https://forums.developer.apple.com/thread/17463. I've found iOS 9.2 helps with SKTextures but doesn't solve every issue, especially the serialization issues.
2. Try PrefersOpenGL (set it to "YES" as a Boolean custom property in your config). Here is a post about PrefersOpenGL in the Apple Dev Forums by Apple staff: https://forums.developer.apple.com/thread/19683. I've observed that iOS 9.x seems to use Metal by default rather than OpenGL. I've found PrefersOpenGL helps with SKTexture issues, but it still doesn't make my SKShaders work (written in GLSL).
3. When I tried to serialize/deserialize nodes with SKTextures on iOS 9.2, I got white boxes instead of visible textures. The Apple SKTexture docs say: "The texture data is loaded when:

   - The size method on the texture object is called.
   - Another method is called that requires the texture's size, such as creating a new SKSpriteNode object that uses the texture object.
   - One of the preload methods is called (see Preloading the Texture Data).

   The texture data is prepared for rendering when:

   - A sprite or particle that uses the texture is part of a node tree that is being rendered."
Inspired by that, I've hacked a workaround that creates a secondary texture from the CGImage() call:
// ios 9.2 workaround for white boxes on serialization
let img = texture!.CGImage()
let uimg = UIImage(CGImage: img)
let ntex = SKTexture(image: uimg)
let sprite = SKSpriteNode(texture: ntex, size: texture!.size())
So now my SKSpriteNodes created this way seem to serialize/deserialize fine. BTW, just invoking size() or creating an SKSpriteNode with the original texture does not seem to be enough to reify the texture into memory.
You didn't ask about textureFromNode:crop:, but I'm adding observations anyway in case they help: in iOS 8 this method worked (although the crop parameters were very tricky and seemed to require normalization with UIScreen.mainScreen().scale). In iOS 9.0 it didn't work at all (it returned nil). In iOS 9.2 it works again (it returns a non-nil texture), but nodes subsequently created from the texture no longer need the size normalization. And to make serialization/deserialization work, I found you ultimately have to apply #3 above.
I hope this helps you. I imagine I've struggled more than most with SKTextures since my app is so dependent on them.
I tested your code in Xcode 7 and found that the texture returned by generateTexture was null. That's why you can't load anything from the file: nothing was ever saved in the first place.
Try to use NSLog to log the description of your texture or sprite. E.g. add this line in generateTexture:
NSLog(@"texture: %@", texture);
What you will get in console:
texture: '(null)' (300 x 300)
And same for s1 and dict1 in your code:
s1: name:'(null)' texture:['(null)' (300 x 300)] position:{100, 100} scale:{1.00, 1.00} size:{100, 100} anchor:{0.5, 0.5} rotation:0.00

dict1: {
    object = "'(null)' (300 x 300)";
}
You may do these tests on both iOS 8 and iOS 9 and you will probably get different results.
I'm not sure why you add the SKShapeNode to a scene and then save a texture from the scene. One workaround is to set a texture on your SKShapeNode, and your code should work fine.
shapeNode.fillTexture = [SKTexture textureWithImageNamed:@"Spaceship"];
SKTexture *texture = shapeNode.fillTexture;
return texture;
Update:
It's quite annoying that textureFromNode doesn't work as expected in iOS 9. I tried to solve it by trial and error, but no luck so far. That's why I asked whether you would consider taking a snapshot of the whole screen and using it as your thumbnail. Here's the progress I made today; I hope you can take inspiration from it.
I created a scene containing an SKLabelNode and an SKSpriteNode in didMoveToView. After I clicked anywhere on screen, snapshot was invoked and the down-scaled screenshot was saved in the Documents folder. I used the code here.
- (UIImage *)snapshot
{
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.5);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}
Since the thumbnail is saved as a UIImage, loading it back as the sprite's texture is easy (see the sketch below). A sample project demonstrates the whole process, and it works on both iOS 8 and 9.
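A minimal sketch of that round trip, assuming the snapshot method above ("thumbnail.png" is just an illustrative file name):

// Save the snapshot as a PNG in Documents.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *thumbPath = [paths[0] stringByAppendingPathComponent:@"thumbnail.png"];
[UIImagePNGRepresentation([self snapshot]) writeToFile:thumbPath atomically:YES];

// Later: load it back and wrap it in a texture for the button sprite.
UIImage *thumb = [UIImage imageWithContentsOfFile:thumbPath];
SKTexture *thumbTexture = [SKTexture textureWithImage:thumb];
SKSpriteNode *button = [SKSpriteNode spriteNodeWithTexture:thumbTexture];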
I'm trying to animate an array of UIImage with CAKeyframeAnimation. Easy in theory.
Sample code at the bottom of the post.
My problem is that after the animation finishes, I've got a huge leak that I can't get rid of.
Code to init CAKeyframeAnimation:
- (void)animateImages
{
    CAKeyframeAnimation *keyframeAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
    keyframeAnimation.values = self.imagesArray; // array with images
    keyframeAnimation.repeatCount = 1.0f;
    keyframeAnimation.duration = 5.0;
    keyframeAnimation.removedOnCompletion = YES;

    CALayer *layer = self.animationImageView.layer;
    [layer addAnimation:keyframeAnimation forKey:@"flingAnimation"];
}
Adding a delegate to the animation and removing the animation manually causes the same leak effect:
... // Code to change
keyframeAnimation.delegate = self;
// keyframeAnimation.removedOnCompletion = YES;
keyframeAnimation.removedOnCompletion = NO;
keyframeAnimation.fillMode = kCAFillModeForwards;
...
Then:
- (void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag
{
    if (flag)
    {
        [self.animationImageView.layer removeAllAnimations];
        [self.animationImageView.layer removeAnimationForKey:@"flingAnimation"]; // just in case
    }
}
The result is always a huge allocation; the amount of memory allocated is proportional to the size of the images.
I uploaded an example to GitHub to check the code.
SOLVED
I found the problem.
As gabbler said, there was no leak; the problem was the high memory allocation for the images. I was releasing the array holding the images, yet the images did not disappear from memory.
So finally I found the problem:
[UIImage imageNamed:@""];
From the method documentation:
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method locates and loads the image data from disk or asset catalog, and then returns the resulting object. You can not assume that this method is thread safe.
So imageNamed: stores the image in a private cache.
- The first problem is that you cannot control the cache size.
- The second problem is that the cache is not cleaned promptly, so if you allocate a lot of images with imageNamed:, your app will probably crash.
SOLUTION:
Load images directly from the bundle:
NSString *imageName = [NSString stringWithFormat:@"imageName.png"];
NSString *path = [[NSBundle mainBundle] pathForResource:imageName ofType:nil];
// Images loaded with imageWithContentsOfFile: are not cached.
UIImage *image = [UIImage imageWithContentsOfFile:path];
Small problem:
Images inside Images.xcassets can never be loaded this way. So move your images outside Images.xcassets to load them directly from the bundle.
Example project with solution here.
Is it possible to repeat an image in iOS, similar to the CSS rules
background-image: url(imageurl);
background-repeat: repeat-x;
so that an image is perfectly scaled for iPhone and iPad screen sizes?
You could try this:
- (UIImage *)imageFromAssetImageNamed:(NSString *)name {
    NSString *fullKeyPath = [[NSBundle mainBundle] pathForResource:name
                                                            ofType:@"png"
                                                       inDirectory:@"assets"];
    return [UIImage imageWithContentsOfFile:fullKeyPath];
}

- (UIColor *)colorPatternFromAssetImageNamed:(NSString *)name {
    return [UIColor colorWithPatternImage:[self imageFromAssetImageNamed:name]];
}
You can then set the background color, for example, using:
self.window.backgroundColor = [self colorPatternFromAssetImageNamed:@"my-bg-color"];
You will still need to adjust the frame to control how much of the width/height is covered.
You have loads of options.
Core Graphics gives you CGContextDrawTiledImage().
UIImage gives you drawAsPatternInRect: (probably a wrapper of the above).
But the most useful thing is to look at transformations.
CGAffineTransform in the Quartz 2D Drawing guide is the thing you want to read about.
It's pretty cheap and easy, in drawRect:, to iterate and draw the same image at a bunch of locations, which in CG terms are translations of the image: the same image drawn at another place, as in the sketch below.
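For instance, a drawRect: along those lines that repeats an image across the view's width, CSS repeat-x style (a sketch; tileImage is a hypothetical UIImage property on the view):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat tileWidth = self.tileImage.size.width;
    // Draw the same image repeatedly, translating the context along x each time.
    for (CGFloat x = 0; x < CGRectGetWidth(self.bounds); x += tileWidth) {
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, x, 0);
        [self.tileImage drawAtPoint:CGPointZero];
        CGContextRestoreGState(ctx);
    }
}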
You can even draw to an image context before drawing to a view and get a cached representation, so you don't need to always redraw everything.
Core Animation has transforms as well.
I want to generate a good-looking PDF in my iOS 6 app.
I've tried:
UIView render in context
Using CoreText
Using NSString drawInRect
Using UILabel drawRect
Here is a code example:
- (CGContextRef)createPDFContext:(CGRect)inMediaBox path:(NSString *)path
{
    CGContextRef myOutContext = NULL;
    NSURL *url = [NSURL fileURLWithPath:path];
    if (url != NULL) {
        myOutContext = CGPDFContextCreateWithURL((__bridge CFURLRef)url,
                                                 &inMediaBox,
                                                 NULL);
    }
    return myOutContext;
}

- (void)savePdf:(NSString *)outputPath
{
    if (!pageViews.count)
        return;

    UIView *first = [pageViews objectAtIndex:0];
    CGContextRef pdfContext = [self createPDFContext:CGRectMake(0, 0, first.frame.size.width, first.frame.size.height)
                                                path:outputPath];

    for (UIView *v in pageViews)
    {
        CGContextBeginPage(pdfContext, nil);

        CGAffineTransform transform = CGAffineTransformIdentity;
        transform = CGAffineTransformMakeTranslation(0, (int)(v.frame.size.height));
        transform = CGAffineTransformScale(transform, 1, -1);
        CGContextConcatCTM(pdfContext, transform);

        CGContextSetFillColorWithColor(pdfContext, [UIColor whiteColor].CGColor);
        CGContextFillRect(pdfContext, v.frame);
        [v.layer renderInContext:pdfContext];

        CGContextEndPage(pdfContext);
    }
    CGContextRelease(pdfContext);
}
The UIViews that are rendered only contain a UIImageView + a bunch of UILabels (some with and some without borders).
I also tried a suggestion found on stackoverflow: subclassing UILabel and doing this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    BOOL isPDF = !CGRectIsEmpty(UIGraphicsGetPDFContextBounds());
    if (!layer.shouldRasterize && isPDF)
        [self drawRect:self.bounds]; // draw unrasterized
    else
        [super drawLayer:layer inContext:ctx];
}
But that didn't change anything either.
No matter what I do, when I open the PDF in Preview the text parts are selectable as a block, but not character by character, and zooming in shows the PDF is actually a bitmap image.
Any suggestions?
This tutorial from Ray Wenderlich saved my day. Hope it works for you too.
http://www.raywenderlich.com/6818/how-to-create-a-pdf-with-quartz-2d-in-ios-5-tutorial-part-2
My experience when I did this last year was that Apple didn't provide any library for it, so I ended up importing an open-source C library (libHaru). Then I added a function for outputting to it in each class in my view hierarchy. Any view with subviews would call render on its subviews. My UILabels, UITextFields, UIImageViews, UISwitches, etc. would output their content either as text or as graphics accordingly. I also rendered background colors for some views.
It wasn't very daunting, but libHaru gave me some problems with fonts, so IIRC I ended up just using the default font and font size.
It works well with UILabels, except that you have to work around a bug:
Rendering a UIView into a PDF as vectors on an iPad - Sometimes renders as bitmap, sometimes as vectors
I am trying to implement my own map engine by using CATiledLayer + UIScrollView.
In drawLayer:inContext: method of my implementation, if I have a certain tile image needed for the current bounding box, I immediately draw it in the context.
However, when I do not have one available in the local cache, the tile image is asynchronously requested/downloaded from a tile server, and nothing is drawn in the context.
The problem is that when I don't draw anything in the context, that part of the view shows a blank tile. The expected behavior is to show a scaled-up tile from the previous zoom level.
If you guys have faced any similar problem and found any solution for this, please let me know.
You have to call setNeedsDisplayInRect: for the tile as soon as you have the data. You have to live with the tile being blank until it is available, because you have no way to affect which tiles CATiledLayer is creating.
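In code, that pattern might look roughly like this (a sketch; cachedTileForRect: and fetchTileForRect:completion: are hypothetical stand-ins for your own cache and download code):

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    CGRect tileRect = CGContextGetClipBoundingBox(ctx);
    UIImage *tile = [self cachedTileForRect:tileRect]; // hypothetical cache lookup
    if (tile) {
        UIGraphicsPushContext(ctx);
        [tile drawInRect:tileRect];
        UIGraphicsPopContext();
    } else {
        [self fetchTileForRect:tileRect completion:^{
            // Invalidate just this tile once the data has been cached.
            dispatch_async(dispatch_get_main_queue(), ^{
                [layer setNeedsDisplayInRect:tileRect];
            });
        }];
    }
}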
I do the same, blocking the thread until the tile has been downloaded. The performance is good, it runs smoothly. I'm using a queue to store every tile request, so I can also cancel those tile requests that are not useful anymore.
To do so, use a lock to stop the thread just after you launch your async tile request, and unlock it as soon as you have your tile cached.
Sound good to you? It worked for me!
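A minimal sketch of that blocking approach, using a semaphore as the lock (again, cachedTileForRect:, cacheTile:forRect: and downloadTileForRect:completion: are hypothetical stand-ins for your own code):

- (UIImage *)tileForRect:(CGRect)tileRect
{
    UIImage *tile = [self cachedTileForRect:tileRect]; // hypothetical cache lookup
    if (!tile) {
        dispatch_semaphore_t sema = dispatch_semaphore_create(0);
        [self downloadTileForRect:tileRect completion:^(UIImage *downloaded) {
            [self cacheTile:downloaded forRect:tileRect];
            dispatch_semaphore_signal(sema); // unlock: the tile is cached now
        }];
        // Block this drawing thread until the tile arrives (CATiledLayer draws
        // on background threads, so this does not freeze the UI).
        dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
        tile = [self cachedTileForRect:tileRect];
    }
    return tile;
}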
Your CATiledLayer should be providing this tile from the previous zoom level as you expect. What are your levelsOfDetail and levelsOfDetailBias set to for the tiled layer?
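For reference, both are properties on CATiledLayer itself; an illustrative configuration might be:

CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
tiledLayer.levelsOfDetail = 4;     // keep content for four zoom-out levels
tiledLayer.levelsOfDetailBias = 2; // two extra levels of detail when zooming in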
You have to call setNeedsDisplay or setNeedsDisplayInRect:. The problem is that this redraws every tile in the scroll view. So try using a subclass of UIView instead of a CATiledLayer subclass, and implement TiledView (a subclass of UIView) like this:
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)r {
    CGRect tile = r;
    int x = tile.origin.x / TILESIZE;
    int y = tile.origin.y / TILESIZE;
    NSString *tileName = [NSString stringWithFormat:@"Shed_1000_%i_%i", x, y];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName ofType:@"png"];
    UIImage *image = [UIImage imageWithContentsOfFile:path];
    [image drawAtPoint:tile.origin];

    // uncomment the following to see the tile boundaries
    /*
    UIBezierPath *bp = [UIBezierPath bezierPathWithRect:r];
    [[UIColor whiteColor] setStroke];
    [bp stroke];
    */
}
and, for the scroll view:
UIScrollView *sv = [[UIScrollView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];
sv.backgroundColor = [UIColor whiteColor];
self.view = sv;

CGRect f = CGRectMake(0, 0, 3 * TILESIZE, 3 * TILESIZE);
TiledView *content = [[TiledView alloc] initWithFrame:f];
float tsz = TILESIZE * content.layer.contentsScale;
[(CATiledLayer *)content.layer setTileSize:CGSizeMake(tsz, tsz)];
[self.view addSubview:content];
[sv setContentSize:f.size];