How to animate GMSURLTileLayer - ios

I have an array of the last 10 epoch times, and I want to animate a GMSURLTileLayer through these epochs. I tried doing it with a for loop, but it doesn't work.
This is my code:
let epochs = [10, 20, 30, 40, 50]

private func configureRadarForGoogle(epoch: Int) {
    UIView.animate(withDuration: 1.0) {
        let url: GMSTileURLConstructor = { (x, y, zoom) in
            let urltemplate = "https://tilecache.rainviewer.com/v2/radar/\(epoch)/512/\(zoom)/\(x)/\(y)/2/1_1.png"
            return URL(string: urltemplate)
        }
        let layer = GMSURLTileLayer(urlConstructor: url)
        layer.zIndex = 5
        layer.map = self.mapView
    }
}

private func startAnimation() {
    for epoch in self.epochs {
        sleep(1)
        configureRadarForGoogle(epoch: epoch)
    }
}
private func startAnimation() {
for epoch in self.epochs {
sleep(1)
configureRadarForGoogle(epoch: epoch)
}
}
Does anyone know a better solution? Thanks a lot.

I think you can use GMSMarkerLayer, a subclass of GMSOverlayLayer available on a per-marker basis, which allows animation of several properties of its associated GMSMarker.
To animate, you can calculate the points and add them to two separate arrays, one for the latitude values (y) and one for the longitude values (x), and then use the values property of CAKeyframeAnimation to animate, as explained here.
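Note, however, that a GMSURLTileLayer is not a marker. A simpler approach may be to pre-create one tile layer per epoch and cycle their map property with a timer; the sleep(1) in the original code blocks the main thread, which is why nothing animates. This is only a sketch, assuming the epochs and mapView properties from the question:

```swift
private var radarLayers: [GMSURLTileLayer] = []
private var frameIndex = 0
private var animationTimer: Timer?

private func startAnimation() {
    // Build one tile layer per epoch up front so tiles can start loading early.
    radarLayers = epochs.map { epoch in
        let constructor: GMSTileURLConstructor = { (x, y, zoom) in
            URL(string: "https://tilecache.rainviewer.com/v2/radar/\(epoch)/512/\(zoom)/\(x)/\(y)/2/1_1.png")
        }
        let layer = GMSURLTileLayer(urlConstructor: constructor)
        layer.zIndex = 5
        return layer
    }
    // Show the next frame once per second without blocking the main thread.
    animationTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { [weak self] _ in
        guard let self = self else { return }
        self.radarLayers.forEach { $0.map = nil }            // hide the previous frame
        self.radarLayers[self.frameIndex].map = self.mapView // show the current frame
        self.frameIndex = (self.frameIndex + 1) % self.radarLayers.count
    }
}
```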


How can I get an entity to constantly look at the camera in RealityKit similar to Billboard Constraint from SceneKit

I have an app that I am trying to update from SceneKit to RealityKit, and one of the features that I am having a hard time replicating in RealityKit is making an entity constantly look at the camera. In SceneKit, this was accomplished by adding the following billboard constraints to the node:
let billboardConstraint = SCNBillboardConstraint()
billboardConstraint.freeAxes = [.X, .Y]
startLabelNode.constraints = [billboardConstraint]
This allowed the startLabelNode to rotate freely so that it constantly faced the camera, without the startLabelNode changing its position.
However, I can't seem to figure out a way to do this in RealityKit. I have tried the look(at:) method, which doesn't seem to offer a way to constantly face the camera. Here is a short sample app where I have tried to implement a version of this in RealityKit, but it doesn't keep the entity constantly facing the camera the way SceneKit did:
import UIKit
import RealityKit
import ARKit

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet weak var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self
        arView.environment.sceneUnderstanding.options = []
        // Display a debug visualization of the mesh.
        arView.debugOptions.insert(.showSceneUnderstanding)
        // For performance, disable render options that are not required for this app.
        arView.renderOptions = [.disablePersonOcclusion, .disableDepthOfField, .disableMotionBlur]
        arView.automaticallyConfigureSession = false
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        } else {
            print("Mesh Classification not available on this device")
            configuration.worldAlignment = .gravity
            configuration.planeDetection = [.horizontal, .vertical]
        }
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
        // Prevent the screen from being dimmed to avoid interrupting the AR experience.
        UIApplication.shared.isIdleTimerDisabled = true
    }

    @IBAction func buttonPressed(_ sender: Any) {
        let screenWidth = arView.bounds.width
        let screenHeight = arView.bounds.height
        let centerOfScreen = CGPoint(x: screenWidth / 2, y: screenHeight / 2)
        if let raycastResult = arView.raycast(from: centerOfScreen, allowing: .estimatedPlane, alignment: .any).first {
            addStartLabel(at: raycastResult.worldTransform)
        }
    }

    func addStartLabel(at result: simd_float4x4) {
        let resultAnchor = AnchorEntity(world: result)
        resultAnchor.addChild(clickToStartLabel())
        arView.scene.addAnchor(resultAnchor)
    }

    func clickToStartLabel() -> ModelEntity {
        let text = "Click to Start Here"
        let textMesh = MeshResource.generateText(text, extrusionDepth: 0.001, font: UIFont.boldSystemFont(ofSize: 0.01))
        let textMaterial = UnlitMaterial(color: .black)
        let textModelEntity = ModelEntity(mesh: textMesh, materials: [textMaterial])
        textModelEntity.generateCollisionShapes(recursive: true)
        textModelEntity.position.x -= textMesh.width / 2
        textModelEntity.position.y -= textMesh.height / 2

        let planeMesh = MeshResource.generatePlane(width: textMesh.width + 0.01, height: textMesh.height + 0.01)
        let planeMaterial = UnlitMaterial(color: .white)
        let planeModelEntity = ModelEntity(mesh: planeMesh, materials: [planeMaterial])
        planeModelEntity.generateCollisionShapes(recursive: true)
        // Move the plane up to make it sit on the anchor instead of in the middle of the anchor.
        planeModelEntity.position.y += planeMesh.height / 2
        planeModelEntity.addChild(textModelEntity)
        // This does not always keep the planeModelEntity facing the camera.
        planeModelEntity.look(at: arView.cameraTransform.translation, from: planeModelEntity.position, relativeTo: nil)
        return planeModelEntity
    }
}

extension MeshResource {
    var width: Float {
        return bounds.max.x - bounds.min.x
    }
    var height: Float {
        return bounds.max.y - bounds.min.y
    }
}
extension MeshResource {
var width: Float
{
return (bounds.max.x - bounds.min.x)
}
var height: Float
{
return (bounds.max.y - bounds.min.y)
}
}
Is the look(at:) method the best way to get this missing feature working in RealityKit, or is there a better way to have an Entity constantly face the camera?
I haven't messed with RealityKit much, but assuming entity is a SceneKit node, you can set constraints on it and it will be forced to face targetNode at all times. Provided that works the way you want, you may then need to experiment with how the node is initially created, i.e. what direction it is facing.
func setTarget() {
    node.constraints = []
    let vConstraint = SCNLookAtConstraint(target: targetNode)
    vConstraint.isGimbalLockEnabled = true
    node.constraints = [vConstraint]
}
I was able to figure out an answer to my question. Adding the following block of code allowed the entity to constantly look at the camera:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    planeModelEntity.look(at: arView.cameraTransform.translation, from: planeModelEntity.position(relativeTo: nil), relativeTo: nil)
}
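With more than one label, the same idea generalizes: keep a reference to every entity that should billboard and re-orient them all on each frame. A minimal sketch, assuming a hypothetical billboardEntities property on the view controller that addStartLabel appends each label to:

```swift
var billboardEntities: [Entity] = []

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let cameraPosition = arView.cameraTransform.translation
    for entity in billboardEntities {
        // Re-orient each entity toward the camera without moving it.
        entity.look(at: cameraPosition,
                    from: entity.position(relativeTo: nil),
                    relativeTo: nil)
    }
}
```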

Filtering audio in AudioKit

What I need to do:
record an audio file;
since it's recorded from the iPhone/iPad microphone it can be quiet, so I need to filter it to make it louder;
save the filtered recording.
I'm new to audio programming, but as far as I understand I need an "All Pass" filter (if not, please correct me).
For this task I found two libraries: Novocaine and AudioKit. Novocaine is written in C, so it's harder to use from Swift, and I decided to use AudioKit, but I couldn't find an "All Pass" filter there.
Does anybody know how to implement this in AudioKit and save the filtered file? Thank you!
You have a few choices. For musical recordings I recommend AKBooster, as it purely boosts the audio; you have to be careful how much you boost, otherwise you might cause clipping.
For spoken-word audio I recommend AKPeakLimiter. It will give you the maximum volume without clipping. Set the attackTime and decayTime to lower values to hear a more pronounced effect.
Note that the sliders' positions won't represent the parameter values until you move them.
import UIKit
import AudioKit

class ViewController: UIViewController {

    let mic = AKMicrophone()
    let boost = AKBooster()
    let limiter = AKPeakLimiter()

    override func viewDidLoad() {
        super.viewDidLoad()

        mic >>> boost >>> limiter
        AudioKit.output = limiter
        AudioKit.start()

        let inset: CGFloat = 10.0
        let width = view.bounds.width - inset * 2
        for i in 0..<4 {
            let y = CGFloat(100 + i * 50)
            let slider = UISlider(frame: CGRect(x: inset, y: y, width: width, height: 30))
            slider.tag = i
            slider.addTarget(self, action: #selector(sliderAction(slider:)), for: .valueChanged)
            view.addSubview(slider)
        }
        boost.gain = 1
    }

    @objc func sliderAction(slider: UISlider) {
        switch slider.tag {
        case 0:
            boost.gain = slider.value * 40
        case 1:
            limiter.preGain = slider.value * 40
        case 2:
            limiter.attackTime = max(0.001, slider.value * 0.03)
        case 3: // was "case 4", which no slider tag ever reaches
            limiter.decayTime = max(0.001, slider.value * 0.06)
        default:
            break
        }
    }
}

How to darken texture of sprite node

I have the following code where I create a sprite node that displays an animated GIF. I want to create another function that darkens the GIF when called. I would still be able to watch the animation, but the content would be visibly darker. I'm not sure how to approach this. Should I individually darken every texture or frame used to create the animation? If so, how do I darken a texture or frame in the first place?
// Extract frames and duration
guard let imageData = try? Data(contentsOf: url as URL) else {
    return
}
let source = CGImageSourceCreateWithData(imageData as CFData, nil)
var images = [CGImage]()
let count = CGImageSourceGetCount(source!)
var delays = [Int]()

// Fill arrays
for i in 0..<count {
    // Add the image
    if let image = CGImageSourceCreateImageAtIndex(source!, i, nil) {
        images.append(image)
    }
    // Add its delay
    let delaySeconds = UIImage.delayForImageAtIndex(Int(i), source: source)
    delays.append(Int(delaySeconds * 1000.0)) // Seconds to ms
}

// Calculate full duration
let duration: Int = {
    var sum = 0
    for val: Int in delays {
        sum += val
    }
    return sum
}()

// Get frames
let gcd = SKScene.gcdForArray(delays)
var frames = [SKTexture]()
var frame: SKTexture
var frameCount: Int
for i in 0..<count {
    frame = SKTexture(cgImage: images[Int(i)])
    frameCount = Int(delays[Int(i)] / gcd)
    for _ in 0..<frameCount {
        frames.append(frame)
    }
}

let gifNode = SKSpriteNode(texture: frames[0])
gifNode.position = CGPoint(x: skScene.size.width / 2.0, y: skScene.size.height / 2.0)
gifNode.name = "content"

// Add animation
let gifAnimation = SKAction.animate(with: frames, timePerFrame: (Double(duration) / 1000.0) / Double(frames.count))
gifNode.run(SKAction.repeatForever(gifAnimation))
skScene.addChild(gifNode)
I would recommend using the colorize(with:colorBlendFactor:duration:) method. It is an SKAction that animates a color change across a whole node. That way you don't have to darken the individual textures or frames, and it also gives you a nice transition from the normal to the darkened color. Once the action ends, the node stays darkened until you undarken it, so any changes to the node's texture will also appear darkened to the user.
Choose whatever color and colorBlendFactor give you the darkening effect you need; e.g. you could set the color to .black and colorBlendFactor to 0.3. To undarken, just set the color to .clear and colorBlendFactor to 0.
Documentation here.
Hope this helps!
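Applied to the gifNode from the question, a minimal sketch (the color and blend factor are just the example values above):

```swift
// Darken: blend the node's texture 30% toward black over half a second.
let darken = SKAction.colorize(with: .black, colorBlendFactor: 0.3, duration: 0.5)
gifNode.run(darken)

// Later, undarken: animate the blend back out again.
let undarken = SKAction.colorize(with: .clear, colorBlendFactor: 0.0, duration: 0.5)
gifNode.run(undarken)
```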

Adding Array to move group of images in for loop in Swift : SpriteKit

How can I add my starSqArray to a for loop in my update function that grabs all the SKSpriteNodes for _starsSq1, so that all of the stars move together and not separately?
Right now my Swift class keeps returning an error saying that starSqArray doesn't have a position (my code is bugged out). My goal is to grab the plotted stars and move them downward all at once.
import SpriteKit

class Stars: SKNode {

    // Images
    var _starsSq1: SKSpriteNode?

    // Variables
    var starSqArray = Array<SKSpriteNode>()
    var _starSpeed1: Float = 5

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override init() {
        super.init()
        println("stars plotted")
        createStarPlot()
    }

    /*-------------------------------------
    ## MARK: Update
    -------------------------------------*/
    func update() {
        for (var i = 0; i < starSqArray.count; i++) {
            _starsSq1!.position = CGPoint(x: self.position.x, y: self.position.y + CGFloat(_starSpeed1))
        }
    }

    /*-------------------------------------
    ## MARK: Create Star Plot
    -------------------------------------*/
    func createStarPlot() {
        let screenSize: CGRect = UIScreen.mainScreen().bounds
        let screenWidth = screenSize.width
        let screenHeight = screenSize.height
        for (var i = 0; i < 150; i++) {
            _starsSq1 = SKSpriteNode(imageNamed: "starSq1")
            addChild(_starsSq1!)
            //starSqArray.addChild(_starsSq1)
            var x = arc4random_uniform(UInt32(screenSize.width + 400))
            var y = arc4random_uniform(UInt32(screenSize.height + 400))
            _starsSq1!.position = CGPoint(x: CGFloat(x), y: CGFloat(y))
        }
    }
}
A couple of suggestions from a design point of view:
You already have all your stars (I guess so) grouped together inside a common parent node (which you correctly named Stars). So you just need to move your Stars node, and all its child nodes will move automatically.
Manually changing the coordinates of a node inside an update method does work, but (imho) it is not the best way to move it. You should use an SKAction instead.
So, if you want to move the stars forever with a common speed and direction, you can add this method to Stars:
func moveForever() {
    let distance = 500 // change this value as you prefer
    let seconds: NSTimeInterval = 5 // change this value as you prefer
    let direction = CGVector(dx: 0, dy: distance)
    let move = SKAction.moveBy(direction, duration: seconds)
    let moveForever = SKAction.repeatActionForever(move)
    self.runAction(moveForever)
}
Now you can remove the code inside the update method and call moveForever when you want the stars to start moving.
Finally, at some point the stars will leave the screen. I don't know the effect you want to achieve but you will need to deal with this.
for spriteNode in starSqArray {
    spriteNode.position = CGPoint(x: spriteNode.position.x, y: spriteNode.position.y + CGFloat(_starSpeed1))
}
Use the above code in the update() function.
Hope this helps...
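Either way, once a star moves past the screen edge it is gone for good. One common fix (only a sketch, assuming the stars move upward as in the loop above and that screenHeight has been stored as a property on the class) is to wrap each star back to the opposite edge:

```swift
func update() {
    for spriteNode in starSqArray {
        spriteNode.position.y += CGFloat(_starSpeed1)
        // Wrap the star back below the screen once it leaves the top,
        // so the field scrolls forever with a fixed number of nodes.
        if spriteNode.position.y > screenHeight {
            spriteNode.position.y = 0
        }
    }
}
```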

Get percentage at specific time in a CAMediaTimingFunction?

CGFloat start = 0; // The start value of X for an animation
CGFloat distance = 100; // The distance X will have traveled when the animation completes
CAMediaTimingFunction* tf = [CAMediaTimingFunction functionWithControlPoints:0 :0 :1 :1]; // Linear easing for simplicity
CGFloat percent = [tf valueAtTime:0.4]; // Returns 0.4
CGFloat x = start + (percent * distance); // Returns 40, which is the value of X 40% through the animation
How can I implement the method valueAtTime: in a category on CAMediaTimingFunction so that it works as described in the code above?
Please note: this is a contrived example. I will actually be using a non-linear timing function with a UIPanGestureRecognizer for a non-linear drag effect. Thanks.
A timing function is a very simple Bezier curve: two fixed endpoints, (0,0) and (1,1), with one control point each, graphing time (x) against the percentage of the animation completed (y). So all you have to do is the Bezier-curve math: given x, find the corresponding y. Google for it and you'll readily find the necessary formulas. Here's a decent place to start: http://pomax.github.io/bezierinfo/
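A sketch of that math as a Swift extension. The bisection search and the name value(atTime:) are my own choices, not an existing API; it relies on x(t) being monotonic in t, which holds for valid timing-function control points:

```swift
import QuartzCore

extension CAMediaTimingFunction {
    /// Returns the curve's y (animation progress) for a given x (time), both in 0...1.
    /// Solves x(t) = x numerically, then evaluates y(t) at the same t.
    func value(atTime x: CGFloat) -> CGFloat {
        // Extract the two control points; endpoints are fixed at (0,0) and (1,1).
        var p1 = [Float](repeating: 0, count: 2)
        var p2 = [Float](repeating: 0, count: 2)
        getControlPoint(at: 1, values: &p1)
        getControlPoint(at: 2, values: &p2)
        let (x1, y1) = (CGFloat(p1[0]), CGFloat(p1[1]))
        let (x2, y2) = (CGFloat(p2[0]), CGFloat(p2[1]))

        // Cubic Bezier with endpoint coordinates 0 and 1:
        // B(t) = 3(1-t)^2 t a + 3(1-t) t^2 b + t^3
        func bezier(_ t: CGFloat, _ a: CGFloat, _ b: CGFloat) -> CGFloat {
            let u = 1 - t
            return 3 * u * u * t * a + 3 * u * t * t * b + t * t * t
        }

        // Bisect on t until x(t) is close enough to the requested x.
        var lo: CGFloat = 0
        var hi: CGFloat = 1
        for _ in 0..<30 {
            let mid = (lo + hi) / 2
            if bezier(mid, x1, x2) < x { lo = mid } else { hi = mid }
        }
        let t = (lo + hi) / 2
        return bezier(t, y1, y2)
    }
}
```

For the linear curve in the question (control points 0,0 and 1,1), value(atTime: 0.4) comes out as 0.4, matching the example.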
A timing function has two axes (time and progress). Taking your example, we can drive the animation ourselves and change the animatable property each frame. Here is how we can achieve that using a display link:
final class CustomAnimation {

    private let timingFunction: CAMediaTimingFunction
    private var displayLink: CADisplayLink?
    private let frameAmount = 30 // for an animation duration of ~0.5 sec
    private var frameCount = 0   // must be var, it is incremented each frame

    init(timingFunction: CAMediaTimingFunction) {
        self.timingFunction = timingFunction
    }

    func startAnimation() {
        displayLink = CADisplayLink(target: self, selector: #selector(updateValue))
        displayLink?.add(to: .current, forMode: .default)
    }

    func endAnimation() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc
    private func updateValue() {
        guard frameCount < frameAmount else {
            endAnimation()
            return
        }
        frameCount += 1
        let frameProgress = Double(frameCount) / Double(frameAmount)
        let animationProgress = timingFunction.valueAtTime(x: frameProgress)
        // Compute your value: change the position of a UI element, etc.
        let value = ....
    }
}
