iOS GIF CocoaPod that allows variable durations and loop counts

I'm looking for a CocoaPod that lets me play an animation (a GIF file of up to 200 frames) in an efficient manner.
Further, it needs to be able to:
start/stop the animation
set a duration that the animation should run for
set the number of iterations the animation should run for
So far I have tried FLAnimatedImage and YLGIFImage, and neither appears to satisfy the above constraints, although both are excellent for running through GIFs efficiently.

I was able to use FLAnimatedImage to handle my requirements. See below:
Start/stop animation: handled by implementing the suggestions outlined in the comments here: https://github.com/Flipboard/FLAnimatedImage/issues/52
Set a duration that the animation should run for: fixed by using an online GIF editor
Set the number of iterations the animation should run for: handled by using https://github.com/Flipboard/FLAnimatedImage/pull/60
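For reference, here is a minimal Swift sketch of the start/stop part, assuming the GIF ships in the app bundle; the file name and the 3-second delay are placeholders, and the duration/loop-count handling still comes from the GIF metadata and the PR linked above:
import UIKit
import FLAnimatedImage

// Sketch: play a bundled GIF in an FLAnimatedImageView and stop it after a delay.
func showSpinner(in containerView: UIView) {
    guard let url = Bundle.main.url(forResource: "spinner", withExtension: "gif"),
          let gifData = try? Data(contentsOf: url) else { return }

    let imageView = FLAnimatedImageView(frame: containerView.bounds)
    imageView.animatedImage = FLAnimatedImage(animatedGIFData: gifData)
    containerView.addSubview(imageView)

    // startAnimating/stopAnimating are inherited from UIImageView.
    imageView.startAnimating()
    DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) {
        imageView.stopAnimating()
    }
}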

Related

UISlider to change UIImage brightness/contrast in Swift Playground is very laggy.

I am currently creating a filter app. The user has the option to change brightness, contrast, etc.
I have implemented the ability to change these attributes of the image, and it currently works, but when I test it in Xcode Playgrounds the slider is extremely slow and makes Xcode very laggy.
I assume this is because I have written this in a very inefficient way. I don't want to copy and paste all of my code to Stack Overflow (I want to avoid making this too complicated), so I have uploaded it to GitHub (located here). If you download the repo and open the playground, it runs, and if you test the slider you will see it is very laggy.
I think what is making it so laggy is that I am resetting the image view every time the slider value changes, resulting in a new UIImage being assigned to it every fraction of a second. Here is a tiny snippet of what I just said, in code. You will probably still have to look at the code I posted on GitHub, since I made some protocols and classes.
slider.addTarget(c, action: #selector(c.updateBrightness(sender:)), for: .valueChanged)

@objc func updateBrightness(sender: UISlider) {
    controls.brightness(sender.value)
    img.image = controls.outputUIImage()
}
I genuinely have no intention of asking for a solution without having tried myself; I have searched all over the web to figure this out with no success.
Thanks, Stack Overflow homies!
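No answer is quoted here, but for illustration: a common way to cut this cost, assuming Core Image does the filtering (the class and property names below are hypothetical), is to reuse a single CIContext instead of rebuilding the rendering pipeline on every slider tick:
import UIKit
import CoreImage

// Hypothetical stand-in for the "controls" object: one shared CIContext,
// reused for every brightness change instead of being re-created each time.
final class FilterControls {
    private let context = CIContext()
    var inputImage: CIImage?

    func brightnessImage(_ brightness: Float) -> UIImage? {
        guard let inputImage = inputImage else { return nil }
        let filter = CIFilter(name: "CIColorControls")!
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        filter.setValue(brightness, forKey: kCIInputBrightnessKey)
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}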

UICollectionView & AVPlayer Performance

I have a UICollectionView whose cells each have an AVPlayerLayer, all playing a video at the same time. About 9 cells fit on the screen at a time, and it's pretty laggy, so I need ways to boost the performance and achieve smooth scrolling. There are a few things I've tried already:
1) Each cell has only one instance of AVPlayer. It doesn't get re-created; instead, player.replaceCurrentItemWithPlayerItem: is called when the video URL changes.
2) Because I'm using ReactiveCocoa, it's trivial to skip repeat urls to avoid playing the same video twice.
What else can I do to speed up the scroll and performance?
First, I want to say it's a bit crazy to see several players in action at the same time: it's a heavy rendering task no matter what.
As far as I know, simply scaling/re-framing the video to a smaller size doesn't give it less content to render: if you have a 100x100 picture and you render it in a 10x10 frame, it still consumes the same amount of memory as the original picture would; the same goes for videos. So, try to make your video assets have a resolution similar to the frame in which you will present them.
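One way to prepare such downscaled copies ahead of time is a one-off export pass; a rough sketch (the preset and output location are just examples):
import AVFoundation

// Export a downscaled copy of a video so the asset roughly matches its on-screen size.
func exportDownscaledCopy(of sourceURL: URL, to outputURL: URL,
                          completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPreset640x480) else {
        completion(false)
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}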
Store each cell in an NSCache or NSMapTable, using the NSIndexPath as the key for each; then, whenever you need a cell, pull it directly from the cache or map table.
Obviously, you'll have to create all the cells at once, but you'll get the same scrolling performance as you do with nothing in the cells at all.
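As a sketch of the same idea, here is a slightly safer variant: cache the expensive AVPlayer objects keyed by index path rather than the cells themselves, and let normally dequeued cells reattach them (all names here are hypothetical):
import AVFoundation
import UIKit

// Keep prepared players around so scrolling back to a cell never rebuilds one.
final class PlayerCache {
    private let cache = NSCache<NSIndexPath, AVPlayer>()

    func player(for url: URL, at indexPath: IndexPath) -> AVPlayer {
        if let cached = cache.object(forKey: indexPath as NSIndexPath) {
            return cached
        }
        let player = AVPlayer(playerItem: AVPlayerItem(url: url))
        player.isMuted = true
        cache.setObject(player, forKey: indexPath as NSIndexPath)
        return player
    }
}

// In cellForItemAt, a (hypothetical) VideoCell would then do something like:
//   cell.playerLayer.player = playerCache.player(for: url, at: indexPath)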
Need sample code?
Here's my latest iteration of a perfectly smooth-scrolling collection view with real-time video previews (up to 16 at a time):
https://youtu.be/7QlaO7WxjGg
It even uses a cover flow custom layout and "reflection" view that mirrors the video preview perfectly. The source code is here:
http://www.mediafire.com/download/ivecygnlhqxwynr/VideoWallCollectionView.zip
Unfortunately, UIKit will hit bottlenecks and drop frames when put under pressure in this use case. If you can muster refactoring your code to use Facebook's AsyncDisplayKit (Texture), that would be your best bet.
For Objective-C, this project is a good reference:
https://github.com/TextureGroup/Texture/tree/master/examples/ASDKTube
// ASTableDataSource callback from the linked example: build one cell node per video.
- (ASCellNode *)tableNode:(ASTableNode *)tableNode nodeForRowAtIndexPath:(NSIndexPath *)indexPath
{
    VideoModel *videoObject = [_videoFeedData objectAtIndex:indexPath.row];
    VideoContentCell *cellNode = [[VideoContentCell alloc] initWithVideoObject:videoObject];
    return cellNode;
}
There is Swift code available if you dig deep enough on GitHub.
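For reference, a rough Swift rendering of the data-source method above (VideoModel and VideoContentCell come from the linked example; the controller and property names here are assumptions, and exact signatures depend on the Texture version):
import AsyncDisplayKit

extension VideoFeedViewController: ASTableDataSource {
    func tableNode(_ tableNode: ASTableNode, nodeForRowAt indexPath: IndexPath) -> ASCellNode {
        // Same idea as the Objective-C version: one cell node per video model.
        let videoObject = videoFeedData[indexPath.row]
        return VideoContentCell(videoObject: videoObject)
    }
}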

Is there a way to create a scrolling background(right to left) using Core Animator for Mac? (Swift 2 / Xcode)

I have the Core Animator software and I was wondering if there was any way to animate a scrolling background to export into Xcode?
Thank you!
Yes, there is. Have you run through the tutorials or the sample projects? They might help you get started.
What you would want to do is make your background loop animation and then modify the generated source so the animation loops forever.
backgroundAnimation.repeatCount = .infinity
It's not clear from your question whether it's about Core Animator itself or about the code needed to make the animation loop forever.
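Core Animator's generated source differs per project, but the gist of a forever-looping, right-to-left scroll expressed directly in Core Animation looks roughly like this (the layer, distance, and duration are placeholders):
import UIKit

// Scroll a wider-than-screen background layer from right to left, repeating forever.
func startScrollingBackground(_ backgroundLayer: CALayer, distance: CGFloat = 500) {
    let backgroundAnimation = CABasicAnimation(keyPath: "position.x")
    backgroundAnimation.byValue = -distance        // move left by `distance` points
    backgroundAnimation.duration = 4.0             // seconds per pass
    backgroundAnimation.repeatCount = .infinity    // the line mentioned above
    backgroundLayer.add(backgroundAnimation, forKey: "scroll")
}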

'Capture GPU frame' first frame for iOS app

My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames but not updated after that. I am trying to debug some of the draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.
Unfortunately, I can't seem to find a way to capture the first frame. For example, this option is unavailable when stopped in the debugger (at a breakpoint set before the first frame). The Xcode behaviors also don't seem to allow for capturing the frame once debugging starts. There also doesn't appear to be an API for performing GPU captures, either in the Metal APIs or in CAMetalLayer.
Has anybody done this successfully?
I've come across this again, and figured it out properly now. I'll add this as a separate answer, since it's a completely different approach from my other answer.
First, some background. There are three components to capturing a GPU frame:
1) Telling Xcode that you want to capture a GPU frame. In typical documented use, you do this manually by clicking the GPU Frame Capture "camera" button in Xcode.
2) Indicating the start of the next frame to capture. Normally, this occurs at the next occurrence of MTLCommandBuffer presentDrawable:, which is invoked to present the framebuffer to the underlying view.
3) Indicating the end of the frame being captured. Normally, this occurs at the next-but-one occurrence of MTLCommandBuffer presentDrawable:.
In capturing the first frame, or activity before the first frame, only the third of these is available, so we need an alternate way to perform the first two items:
1) To tell Xcode to begin capturing a frame, add a breakpoint in Xcode at a line in your code somewhere before the point at which you want to start capturing a frame. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint.
2) To indicate the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, you can use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue. Make sure the breakpoint in item 1 will be triggered before the point where this code is invoked.
3) To indicate the end of the captured frame, you can either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or you can add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.
Finally, the MTLCommandQueue insertDebugCaptureBoundary method does not actually cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
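Put together, the queue-side part might look like this Swift sketch (insertDebugCaptureBoundary is deprecated on newer SDKs in favor of MTLCaptureManager, covered in another answer below):
import Metal

// Mark a capture boundary as soon as the queue exists, so the capture started by
// the breakpoint action covers everything submitted from the very first frame.
func makeCommandQueue(device: MTLDevice) -> MTLCommandQueue {
    let queue = device.makeCommandQueue()!
    #if DEBUG
    queue.insertDebugCaptureBoundary()   // start-of-"frame" marker for the capture tool
    #endif
    return queue
}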
Try...
[myMTLCommandEncoder insertDebugSignpost:@"com.apple.GPUTools.event.debug-frame"];
To be honest, I haven't tried it myself, but it's analogous to the
glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame")
documented for OpenGL ES, and there is some mention on the web of it working for Metal.
In my case, I usually use Metal for parallel compute, and the GPU Capture Frame button is always greyed out. So far I have found two ways that work.
In iOS 11:
you can use [[MTLCaptureManager sharedCaptureManager] startCaptureWithDevice:m_Device]; to capture a frame, so you can profile compute shader performance.
Below iOS 11 (MTLCaptureManager and MTLCaptureScope are new in iOS 11.0):
you can use a breakpoint and edit its action to add a Capture GPU Frame action.
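A Swift sketch of the programmatic route (startCapture(device:) and stopCapture() are the iOS 11-era API; newer SDKs prefer a MTLCaptureDescriptor):
import Metal

// Programmatically capture the GPU work submitted between start and stop (iOS 11+).
func captureComputePass(device: MTLDevice) {
    let captureManager = MTLCaptureManager.shared()
    captureManager.startCapture(device: device)   // everything submitted from here is captured

    // ... encode and commit the compute command buffers you want to inspect ...

    captureManager.stopCapture()                   // end of the captured "frame"
}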

UISlider min and max image "Ringer and Alerts" in Settings -> Sound

I want the min and max images from the "Ringer and Alerts" slider in Settings -> Sound. With a UIBarButtonItem, I can set the Identifier so that an image appears for things like reply and add. On this basis, can I do the same for a UISlider so it looks like the one in "Ringer and Alerts", either programmatically or through Interface Builder?
This isn't really something you need to ask on SO. The good old way of taking a screenshot and then selecting and copying the icons you need works perfectly fine. For better results, download the UIKit Artwork Extractor: https://github.com/0xced/UIKit-Artwork-Extractor
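Once you have the extracted speaker icons in your asset catalog, attaching them to the slider is just the minimumValueImage/maximumValueImage properties; a small sketch (asset names are placeholders):
import UIKit

// Give a UISlider "quiet" and "loud" end images, like the Ringer and Alerts slider.
func configureVolumeSlider(_ slider: UISlider) {
    slider.minimumValueImage = UIImage(named: "speaker-quiet")   // placeholder asset name
    slider.maximumValueImage = UIImage(named: "speaker-loud")    // placeholder asset name
}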
