I have the following code where I create a sprite node that displays an animated GIF. I want to create another function that darkens the GIF when called. I should still be able to watch the animation, but the content would be visibly darker. I'm not sure how to approach this. Should I individually darken every texture or frame used to create the animation? If so, how do I darken a texture or frame in the first place?
// Extract frames and duration
guard let imageData = try? Data(contentsOf: url as URL) else {
return
}
guard let source = CGImageSourceCreateWithData(imageData as CFData, nil) else {
return
}
var images = [CGImage]()
let count = CGImageSourceGetCount(source)
var delays = [Int]()
// Fill arrays
for i in 0..<count {
// Add image
if let image = CGImageSourceCreateImageAtIndex(source, i, nil) {
images.append(image)
}
// Add its delay in ms
let delaySeconds = UIImage.delayForImageAtIndex(i,
source: source)
delays.append(Int(delaySeconds * 1000.0)) // Seconds to ms
}
// Calculate full duration
let duration: Int = {
var sum = 0
for val: Int in delays {
sum += val
}
return sum
}()
// Get frames
let gcd = SKScene.gcdForArray(delays)
var frames = [SKTexture]()
var frame: SKTexture
var frameCount: Int
for i in 0..<count {
frame = SKTexture(cgImage: images[i])
frameCount = delays[i] / gcd
for _ in 0..<frameCount {
frames.append(frame)
}
}
let gifNode = SKSpriteNode(texture: frames[0])
gifNode.position = CGPoint(x: skScene.size.width / 2.0, y: skScene.size.height / 2.0)
gifNode.name = "content"
// Add animation
let gifAnimation = SKAction.animate(with: frames, timePerFrame: ((Double(duration) / 1000.0)) / Double(frames.count))
gifNode.run(SKAction.repeatForever(gifAnimation))
skScene.addChild(gifNode)
I would recommend using the colorize(with:colorBlendFactor:duration:) method. It is an SKAction that animates a color change of a whole node. That way you don't have to darken the individual textures or frames, and it also gives you a nice transition from the normal to the darkened state. Once the action ends, the node stays darkened until you un-darken it, so any changes to the node's texture will also appear darkened to the user.
Choose whatever color and colorBlendFactor work best for the darkening effect you need; e.g. you could set the color to .black and the colorBlendFactor to 0.3. To un-darken, set the color back to .clear and the colorBlendFactor to 0.
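For illustration, here is a minimal sketch of both directions, assuming gifNode is the node created in your code above (the function names and the 0.5 second fade durations are my own, adjust as needed):
// A minimal sketch, assuming gifNode from the code above.
func darken(_ node: SKSpriteNode) {
    // Blend 30% black into whatever texture the node is currently showing.
    node.run(SKAction.colorize(with: .black, colorBlendFactor: 0.3, duration: 0.5))
}

func undarken(_ node: SKSpriteNode) {
    // Fade the blend back out; .clear with factor 0 restores the original look.
    node.run(SKAction.colorize(with: .clear, colorBlendFactor: 0.0, duration: 0.5))
}
Because the blend applies to the node rather than to any one texture, the animation keeps running normally while darkened.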
See Apple's documentation for colorize(with:colorBlendFactor:duration:).
Hope this helps!
Here is my problem: I want to display a pixel buffer that I have computed in an MTKView. I searched for MTLTexture, MTLBuffer and other Metal objects, but I can't find any way to simply present a pixel buffer.
Every tutorial I've seen is about presenting 3D objects with vertex and fragment shaders.
I think the buffer has to be presented within the draw(in:) delegate function (maybe with an MTLRenderCommandEncoder), but again, I can't find any information about this.
I hope I'm not asking an obvious question.
Thanks
Welcome!
I recommend you use Core Image for rendering the content of the pixel buffer into the view. This requires the least manual Metal setup.
Set up the MTKView and some required objects as follows (assuming you have a view controller and a storyboard set up):
import UIKit
import MetalKit
import CoreImage
class PreviewViewController: UIViewController {
@IBOutlet weak var metalView: MTKView!
var device: MTLDevice!
var commandQueue: MTLCommandQueue!
var ciContext: CIContext!
var pixelBuffer: CVPixelBuffer?
override func viewDidLoad() {
super.viewDidLoad()
self.device = MTLCreateSystemDefaultDevice()
self.commandQueue = self.device.makeCommandQueue()
self.metalView.delegate = self
self.metalView.device = self.device
// this allows us to render into the view's drawable
self.metalView.framebufferOnly = false
self.ciContext = CIContext(mtlDevice: self.device)
}
}
In the delegate method you use Core Image to transform the pixel buffer to fit the contents of the view (this is a bonus, adapt it to your use case) and render it using the CIContext:
extension PreviewViewController: MTKViewDelegate {
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
// required by MTKViewDelegate; nothing to do here
}
func draw(in view: MTKView) {
guard let pixelBuffer = self.pixelBuffer,
let currentDrawable = view.currentDrawable,
let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }
// turn the pixel buffer into a CIImage so we can use Core Image for rendering into the view
let image = CIImage(cvPixelBuffer: pixelBuffer)
// bonus: transform the image to aspect-fit the view's bounds
let drawableSize = view.drawableSize
let scaleX = drawableSize.width / image.extent.width
let scaleY = drawableSize.height / image.extent.height
let scale = min(scaleX, scaleY)
let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
// center in the view
let originX = max(drawableSize.width - scaledImage.extent.size.width, 0) / 2
let originY = max(drawableSize.height - scaledImage.extent.size.height, 0) / 2
let centeredImage = scaledImage.transformed(by: CGAffineTransform(translationX: originX, y: originY))
// Create a render destination that allows to lazily fetch the target texture
// which allows the encoder to process all CI commands _before_ the texture is actually available.
// This gives a nice speed boost because the CPU doesn't need to wait for the GPU to finish
// before starting to encode the next frame.
// Also note that we don't pass a command buffer here, because according to Apple:
// "Rendering to a CIRenderDestination initialized with a commandBuffer requires encoding all
// the commands to render an image into the specified buffer. This may impact system responsiveness
// and may result in higher memory usage if the image requires many passes to render."
let destination = CIRenderDestination(width: Int(drawableSize.width),
height: Int(drawableSize.height),
pixelFormat: view.colorPixelFormat,
commandBuffer: nil,
mtlTextureProvider: { () -> MTLTexture in
return currentDrawable.texture
})
// render into the view's drawable
let _ = try! self.ciContext.startTask(toRender: centeredImage, to: destination)
// present the drawable
commandBuffer.present(currentDrawable)
commandBuffer.commit()
}
}
There is a slightly simpler way to render into the drawable's texture directly instead of using a CIRenderDestination, but the approach above is recommended if you want to achieve high frame rates (see the comment in the code).
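For completeness, a sketch of that simpler variant, replacing the CIRenderDestination part of draw(in:) above (it reuses centeredImage, currentDrawable and commandBuffer from that method):
// Simpler, but the CPU may have to wait on the drawable's texture before encoding.
self.ciContext.render(centeredImage,
                      to: currentDrawable.texture,
                      commandBuffer: commandBuffer,
                      bounds: CGRect(origin: .zero, size: drawableSize),
                      colorSpace: CGColorSpaceCreateDeviceRGB())
commandBuffer.present(currentDrawable)
commandBuffer.commit()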
I think I found a solution: https://developer.apple.com/documentation/metal/creating_and_sampling_textures?language=objc.
In this example, they show how to render an image to a Metal view, using just a few vertices and a fragment shader to render the texture onto a 2D square.
I'll go from there. I'm not sure whether there is a better (simpler?) way to do it, but I guess that's how Metal wants us to do this.
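For a rough idea of what the shader side of that sample boils down to, here is a hypothetical minimal Metal shader pair for drawing a texture onto a full-screen quad, embedded as a Swift string for brevity (the pipeline and draw-call setup from Apple's sample are omitted):
let shaderSource = """
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
};

// Four hard-coded vertices forming a full-screen triangle strip.
vertex VertexOut quadVertex(uint vid [[vertex_id]]) {
    float2 positions[4] = { float2(-1, -1), float2(1, -1), float2(-1, 1), float2(1, 1) };
    VertexOut out;
    out.position = float4(positions[vid], 0, 1);
    // Map clip space (-1...1) to texture space (0...1), flipping y.
    out.texCoord = positions[vid] * float2(0.5, -0.5) + 0.5;
    return out;
}

fragment float4 quadFragment(VertexOut in [[stage_in]],
                             texture2d<float> tex [[texture(0)]]) {
    constexpr sampler s(mag_filter::linear, min_filter::linear);
    return tex.sample(s, in.texCoord);
}
"""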
I am trying to create a specific type of animation that requires a sound and an animation to occur for exactly 1 second per texture cycle.
Each time someone clicks a button, it appends a new texture to an array of SKTextures, then I have an SKSpriteNode animate through each of the textures once. The only problem is, I can't seem to figure out a way to play a sound file for each texture as it animates. I would also like to run an animation on each texture the SKSpriteNode changes to as it goes through the textures.
Here is an example of the flow:
Button Clicked
Append SKTexture array
SKSpriteNode.animateUsingTextures(SKTextureArray, timePerFrame: 1)
While each individual texture displays, animate it, play sound.
Here is what I have that is not working (the code is just an abstraction of what I have):
var textures: [SKTexture] = []
let sprite = SKSpriteNode()
let scaleUp = SKAction.scaleTo(300, duration: 0.2)
let scaleDown = SKAction.scaleTo(200, duration: 0.2)
let popAnimation = SKAction.sequence([scaleUp, scaleDown])
func buttonClicked() {
textures.append(SKTexture(imageNamed: "Specific Texture"))
animation() // Calls Animation Function
}
func animation() {
let sound = SKAction.playSoundFileNamed("DisplayGesture.mp3", waitForCompletion: true)
let animation = SKAction.animateWithTextures(textures, timePerFrame: 1)
let completeAnimation = SKAction.group([sound, animation, popAnimation])
sprite.runAction(completeAnimation)
}
I think what is happening is that you play the sound once and then animate all of the textures. What you could try instead is building a sequence that shows each texture, plays its sound, and then waits out the rest of the second:
func animation() {
var steps: [SKAction] = []
for myTexture in textures {
let sound = SKAction.playSoundFileNamed("DisplayGesture.mp3", waitForCompletion: false)
let showTexture = SKAction.setTexture(myTexture)
// Show the texture, play its sound and pop it, then hold for the rest of the second.
steps.append(SKAction.sequence([SKAction.group([showTexture, sound, popAnimation]), SKAction.waitForDuration(1)]))
}
sprite.runAction(SKAction.sequence(steps))
}
I have a situation which I do not understand.
I'd like to create CATextLayers in a background task and show them in my view, because creating them takes some time.
This works perfectly without a background task; I can see the text immediately in my view worldmapview:
@IBOutlet var worldmapview: Worldmapview! // This view is just an empty view.
override func viewDidLoad(){
view.addSubview(worldmapview);
}
func addlayerandrefresh_direct() {
let calayer = CALayer()
let newtextlayer: CATextLayer = create_text_layer(0, y1: 0, width: 1000, heigth: 1000, text: "My Text ....", fontsize: 5)
self.worldmapview.layer.addSublayer(newtextlayer)
calayer.addSublayer(newtextlayer)
self.worldmapview.layer.addSublayer(calayer)
self.worldmapview.setNeedsDisplay()
}
When doing this in a background task, the text does not appear in my view. Sometimes (not always) it appears after some seconds (10, for example):
func addlayerandrefresh_background() {
let qualityOfServiceClass = QOS_CLASS_BACKGROUND
let backgroundQueue = dispatch_get_global_queue(qualityOfServiceClass, 0)
dispatch_async(backgroundQueue, {
let calayer = CALayer()
let newtextlayer: CATextLayer = create_text_layer(0, y1: 0, width: 1000, heigth: 1000, text: "My Text ....", fontsize: 5)
dispatch_async(dispatch_get_main_queue(), {
self.worldmapview.layer.addSublayer(newtextlayer)
calayer.addSublayer(newtextlayer)
self.worldmapview.layer.addSublayer(calayer)
self.worldmapview.setNeedsDisplay()
})
})
}
func create_text_layer(x1: CGFloat, y1: CGFloat, width: CGFloat, heigth: CGFloat, text: String, fontsize: CGFloat) -> CATextLayer {
let textLayer = CATextLayer()
textLayer.frame = CGRectMake(x1, y1, width, heigth)
textLayer.string = text
let fontName: CFStringRef = "ArialMT"
textLayer.font = CTFontCreateWithName(fontName, fontsize, nil)
textLayer.fontSize = fontsize
textLayer.foregroundColor = UIColor.darkGrayColor().CGColor
textLayer.wrapped = true
textLayer.alignmentMode = kCAAlignmentLeft
textLayer.contentsScale = UIScreen.mainScreen().scale
return textLayer
}
Does someone see what is wrong?
What is very confusing: doing the same with CAShapeLayer works fine in the background.
It looks like setNeedsDisplay does not cause sublayers to redraw; we need to call it on every layer that needs drawing, in this case the newly added text layers:
func addlayerandrefresh_background(){
let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
let calayer = CALayer()
calayer.frame = self.worldmapview.bounds
dispatch_async(backgroundQueue, {
for var i = 0; i < 100; i += 10 {
let newtextlayer: CATextLayer = self.create_text_layer(0, y1: CGFloat(i), width: 200, heigth: 200, text: "My Text ....", fontsize: 5)
calayer.addSublayer(newtextlayer)
}
dispatch_async(dispatch_get_main_queue(),{
self.worldmapview.layer.addSublayer(calayer)
for l in calayer.sublayers! {
l.setNeedsDisplay()
}
})
})
}
All changes to the UI must be performed on the main thread; you cannot update the user interface from a background thread. According to Apple's documentation:
"Work involving views, Core Animation, and many other UIKit classes usually must occur on the app's main thread. There are some exceptions to this rule—for example, image-based manipulations can often occur on background threads—but when in doubt, assume that work needs to happen on the main thread."
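Applied to the question, a rough sketch could look like this (Swift 2 GCD syntax to match the code above, reusing the create_text_layer helper): do only the non-UI computation in the background, then create and attach the layers on the main queue.
func addlayerandrefresh_background_safe() {
    let backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0)
    dispatch_async(backgroundQueue, {
        // Expensive non-UI work only: precompute the layer frames here.
        var frames = [CGRect]()
        for i in 0.stride(to: 100, by: 10) {
            frames.append(CGRectMake(0, CGFloat(i), 200, 200))
        }
        dispatch_async(dispatch_get_main_queue(), {
            // All layer creation and attachment happens on the main thread.
            for frame in frames {
                let textLayer = self.create_text_layer(frame.origin.x, y1: frame.origin.y, width: frame.size.width, heigth: frame.size.height, text: "My Text ....", fontsize: 5)
                self.worldmapview.layer.addSublayer(textLayer)
            }
        })
    })
}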
I am making a simple game in SpriteKit, and I have a scrolling background. A few background images are placed adjacent to each other when the game scene is loaded, and then each image is moved horizontally when it scrolls out of the screen. Here is the code for that, from my game scene's didMoveToView method.
// self.gameSpeed is 1.0 and gradually increases during the game
let backgroundTexture = SKTexture(imageNamed: "Background")
let moveBackground = SKAction.moveByX(-self.frame.size.width, y: 0, duration: (20 / self.gameSpeed))
let replaceBackground = SKAction.moveByX(self.frame.size.width, y: 0, duration: 0)
let moveBackgroundForever = SKAction.repeatActionForever(SKAction.sequence([moveBackground, replaceBackground]))
for i in 0..<2 {
let background = SKSpriteNode(texture: backgroundTexture)
background.position = CGPoint(x: self.frame.size.width / 2 + self.frame.size.width * CGFloat(i), y: CGRectGetMidY(self.frame))
background.size = self.frame.size
background.zPosition = -100
background.runAction(moveBackgroundForever)
self.addChild(background)
}
Now I want to increase the speed of the scrolling background at certain points of the game. You can see that the duration of the background's horizontal scroll is set to (20 / self.gameSpeed). Obviously this does not work, because this code is only run once, and therefore the movement speed is never updated to account for a new value of the self.gameSpeed variable.
So, my question is simply: how do I increase the speed (reduce the duration) of my background images' movements according to the self.gameSpeed variable?
Thanks!
You could use the gameSpeed variable to set the velocity of the background. For this to work, firstly, you need to have a reference to your two background pieces (or more if you so wanted):
class GameScene: SKScene {
lazy var backgroundPieces: [SKSpriteNode] = [SKSpriteNode(imageNamed: "Background"),
SKSpriteNode(imageNamed: "Background")]
// ...
}
Now you need your gameSpeed variable:
var gameSpeed: CGFloat = 0.0 {
// Using a property observer means you can easily update the speed of the
// background just by setting gameSpeed.
didSet {
for background in backgroundPieces {
// Minus, because the background is scrolling from right to left.
background.physicsBody!.velocity.dx = -gameSpeed
}
}
}
Then position each piece correctly in didMoveToView. Also, for this method to work, each background piece needs a physics body so you can easily change its velocity.
override func didMoveToView(view: SKView) {
for (index, background) in backgroundPieces.enumerate() {
// Setup the position, zPosition, size, etc...
background.physicsBody = SKPhysicsBody(rectangleOfSize: background.size)
background.physicsBody!.affectedByGravity = false
background.physicsBody!.linearDamping = 0
background.physicsBody!.friction = 0
self.addChild(background)
}
// If you wanted to give the background an initial speed,
// here's the place to do it.
gameSpeed = 1.0
}
You could then update gameSpeed in update, for example with gameSpeed += 0.5.
Finally, in update you need to check if a background piece has gone offscreen (to the left). If it has it needs to be moved to the end of the chain of background pieces:
override func update(currentTime: CFTimeInterval) {
for background in backgroundPieces {
if background.frame.maxX <= 0 {
let maxX = backgroundPieces.map { $0.frame.maxX }.maxElement()!
// I'm assuming the anchor of the background is (0.5, 0.5)
background.position.x = maxX + background.size.width / 2
}
}
}
You could also make use of something like this: run a repeating action that waits for a certain period and then checks the updated values in a block.
let wait = SKAction.waitForDuration(1.0) // the period after which to check for updated values
let check = SKAction.runBlock {
// check the values and adjust them accordingly
}
runAction(SKAction.repeatActionForever(SKAction.sequence([wait, check])))
CGFloat start = 0; // The start value of X for an animation
CGFloat distance = 100; // The distance X will have traveled when the animation completes
CAMediaTimingFunction* tf = [CAMediaTimingFunction functionWithControlPoints:0 :0 :1 :1]; // Linear easing for simplicity
CGFloat percent = [tf valueAtTime:0.4]; // Returns 0.4
CGFloat x = start + (percent * distance); // Returns 40, which is the value of X 40% through the animation
How can I implement the method valueAtTime: in a category on CAMediaTimingFunction so that it works as described in the code above?
Please note: this is a contrived example. I will actually be using non-linear timing functions with a UIPanGestureRecognizer for a non-linear drag effect. Thanks.
A timing function is a very simple Bezier curve - two endpoints, (0,0) and (1,1), with one control point each - graphing time (x) against percentage of the animation completed (y). So all you have to do is the Bezier curve math: given x, find the corresponding y. Google for it and you'll readily find the necessary formulas. Here's a decent place to start: http://pomax.github.io/bezierinfo/
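Here is a minimal Swift sketch of that math: read the curve's control points with getControlPoint(at:values:) and invert x(t) numerically (bisection works because x is monotonic in t):
import QuartzCore

extension CAMediaTimingFunction {
    // Sketch: returns the progress y for a given time x, both in 0...1.
    func valueAtTime(x: Double) -> Double {
        // P0 = (0,0) and P3 = (1,1) are fixed; read control points P1 and P2.
        var c1: [Float] = [0, 0]
        var c2: [Float] = [0, 0]
        getControlPoint(at: 1, values: &c1)
        getControlPoint(at: 2, values: &c2)
        // One component of a cubic Bezier whose endpoint values are 0 and 1.
        func bezier(_ t: Double, _ p1: Double, _ p2: Double) -> Double {
            let u = 1 - t
            return 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t
        }
        // Bisect to find the t whose x(t) matches the requested x.
        var lo = 0.0, hi = 1.0
        for _ in 0..<30 {
            let mid = (lo + hi) / 2
            if bezier(mid, Double(c1[0]), Double(c2[0])) < x { lo = mid } else { hi = mid }
        }
        let t = (lo + hi) / 2
        return bezier(t, Double(c1[1]), Double(c2[1]))
    }
}
With a linear curve (control points at (0,0) and (1,1)) this returns 0.4 for x = 0.4, matching the contrived example above.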
A timing function has two axes (progress and time). Taking your example, we can handle the whole animation ourselves and change the animatable property on each frame. Here is how we can achieve that using a display link.
final class CustomAnimation {
private let timingFunction: CAMediaTimingFunction
private var displayLink: CADisplayLink?
private let frameAmount = 30 // for an animation duration of 0.5 sec at 60 fps
private var frameCount = 0
init(timingFunction: CAMediaTimingFunction) {
self.timingFunction = timingFunction
}
func startAnimation() {
displayLink = CADisplayLink(target: self, selector: #selector(updateValue))
displayLink?.add(to: .current, forMode: .default)
}
func endAnimation() {
displayLink?.invalidate()
displayLink = nil
}
@objc
private func updateValue() {
guard frameCount < frameAmount else {
endAnimation()
return
}
frameCount += 1
let frameProgress = Double(frameCount) / Double(frameAmount)
let animationProgress = timingFunction.valueAtTime(x: frameProgress)
// compute your value here: change the position of a UI element, etc.
let value = ....
}
}
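And a hypothetical usage, assuming the valueAtTime(x:) extension sketched in the first answer:
let animation = CustomAnimation(timingFunction: CAMediaTimingFunction(name: .easeInEaseOut))
animation.startAnimation()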