Jerky moving image or text on DirectX or OpenGL

Background: I'm developing a component that works like digital signage, with a moving ticker of text running from left to right in a loop, laid over a playing video or still image.
But the ticker text is not fluid enough; it jerks badly, especially when a video is being loaded from inside or outside the application.
I have been stuck on this issue for years; the technologies I have tested include D3D and WPF.
I have never tried OpenGL, but personally I suspect it would behave much the same as D3D.
Can you give me some guidelines or even samples?

Related

iOS timecode-synced downloadable animation system

As an introduction and context, I'm currently a novice iOS app developer and I want to make sure I'm not reinventing the wheel too much as I make this app (reinventing wheels can get very expensive.)
The app will allow the user to download our videos off the internet and will allow storage for offline usage. The problem with storing these videos on the device is that many of them will be too long and thus too big to be practical to store.
The videos are quite simple however, consisting of a couple short "real" video clips at the beginning and end, with the bulk of the video being still images animated around the screen. The animations would consist solely of opacity and simple transformation keyframes (translate, scale, rotate around static anchor point), and would require a variety of easing functions for each transition.
The hardest part likely would be that the "video" player will also have to be able to track with an audio player's timecode, and will have to support seeking to any arbitrary point like a normal video player.
So, now that I've described the problem, here's the solution I've come up with so far. Hopefully doing it this way will reduce the probability of XY problems. :)
The idea is to basically do a dumbed-down version of what Final Cut and other editing programs do with animations—have a bunch of clips, sometimes overlapping, and be able to animate the position, scale, rotation, and opacity of each using keyframes.
My first instinct, as far as implementation goes, is to use some of iOS's game-engine tooling to do the animations (maybe SceneKit, because it seems to allow animations to use scene time as opposed to real time, despite being primarily 3D while I am doing 2D animations). I would manually handle syncing time with the audio player, as well as adding and removing nodes from the scene when seeking through the video and when clips begin/end.
What are some built-in systems, plugins, etc. that I can take advantage of to make this easier and faster to develop and maintain? Double points if I don't have to transcode the animations by hand to some custom format.
As I mentioned in my comment, your question is rather broad and contains multiple questions in one, so I will address what you mentioned as likely the hardest part:
https://developer.apple.com/documentation/avfoundation/avplayeritem
https://developer.apple.com/documentation/avfoundation/avasset
Instead of SceneKit, take a look at SpriteKit and its SKVideoNode.
Also, research Metal video processing. There are quite a few example projects available that you could use as a starting point.
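
To make the timecode-sync part concrete, here is a minimal Swift sketch of one way to do it: drive the SpriteKit nodes from an AVPlayer's periodic time observer, so each node's pose is a pure function of the timecode and seeking stays in sync. The asset names, the single 5-second clip, and the smoothstep easing are illustrative assumptions, not part of the original question.

```swift
import AVFoundation
import SpriteKit

// Sketch: evaluate animation state from the player's timecode instead of
// accumulating real-time deltas, so seeking the audio also "seeks" the scene.
final class SyncedScene: SKScene {
    private let player = AVPlayer(url: URL(fileURLWithPath: "audio.m4a")) // placeholder asset
    private let still = SKSpriteNode(imageNamed: "slide1")                // placeholder image
    private var timeObserver: Any?

    override func didMove(to view: SKView) {
        addChild(still)
        player.play()

        // Fire roughly once per frame and re-pose the node from the timecode.
        let interval = CMTime(value: 1, timescale: 60)
        timeObserver = player.addPeriodicTimeObserver(forInterval: interval,
                                                      queue: .main) { [weak self] time in
            self?.apply(at: time.seconds)
        }
    }

    // Stateless keyframe evaluation: the same t always produces the same pose.
    private func apply(at t: Double) {
        let progress = min(max(t / 5.0, 0), 1)                // one 5-second clip, for illustration
        let eased = progress * progress * (3 - 2 * progress)  // smoothstep easing
        still.position.x = CGFloat(-200 + 400 * eased)
        still.alpha = CGFloat(eased)
    }
}
```

Seeking then reduces to player.seek(to:); the next observer callback re-poses every node for the new time, and clip start/end handling becomes a lookup rather than bookkeeping.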

iOS SpriteKit split screen for lazy eye exercise

A Little Background
My son Seth has a Lazy Eye and there is evidence that his condition can be improved by playing video games.
Tetris Video Game Helps Treat Lazy Eye
Basically he has poor focus in one eye and perfect focus in the other eye. Over time his brain has started the process of shutting off and ignoring the bad eye. He wears a contact now to correct the focus issue, but his brain is still in the habit of ignoring that eye.
So not just any game will work. He needs something that forces his eyes to collaborate to bring together and track images.
I can use Durovis Dive or Google Cardboard to separate the images he will be processing.
He is a fan of Flappy Bird so a clone of this would be a good start.
My thought is to have the bird visible only to his left eye and the pipes only visible to his right eye. The background is visible to both eyes to give his brain a reference to bring the images together.
So Here is the actual question
I have run into an issue trying to get a scrolling background and scrolling pipes working in two screens that clip at the right point.
How can I create a screen like the one below where:
1. The background scrolls in sync in both windows?
2. The pipes clip at the center?
Thanks!
A crop node (or two) might do the trick so that it masks out the other half of the screen.
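
For reference, here is a minimal Swift sketch of that idea: an SKCropNode whose mask covers only the right half of the scene, so pipes added under it clip at the center line, while the shared background sits outside any crop node and stays visible to both eyes. Texture names, sizes, and the move action are placeholders.

```swift
import SpriteKit

// Sketch: pipes live under a crop node masked to the right half of the
// scene, so they disappear at the center line; the background is a normal
// child of the scene and is visible in both halves.
func layoutSplitScene(in scene: SKScene) {
    let background = SKSpriteNode(imageNamed: "background") // placeholder texture
    background.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
    background.zPosition = -1
    scene.addChild(background)

    let rightCrop = SKCropNode()
    let mask = SKSpriteNode(color: .white,
                            size: CGSize(width: scene.size.width / 2,
                                         height: scene.size.height))
    mask.position = CGPoint(x: scene.size.width * 0.75, y: scene.size.height / 2)
    rightCrop.maskNode = mask // children draw only where the mask is opaque
    scene.addChild(rightCrop)

    let pipe = SKSpriteNode(imageNamed: "pipe") // placeholder texture
    pipe.position = CGPoint(x: scene.size.width, y: scene.size.height / 2)
    rightCrop.addChild(pipe)
    pipe.run(SKAction.moveTo(x: scene.size.width / 4, duration: 3)) // clips once it crosses center
}
```

A mirrored crop node over the left half would do the same for the bird, and since the background is shared, scrolling it keeps both windows in sync automatically.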

Is iOS using dirty regions by default?

Is iOS using some form of dirty region rendering by default while rendering applications?
For example, if I have a graph paper background graphic on a text input field, and the user scrolls through it, will the whole graphic be redrawn every time, or only the part that has changed? (In the graph paper example: if a pixel was white before and will be white on the next rendering, will it be redrawn or left alone?)
Again, this question is mostly out of curiosity and not out of programming needs. I did not find anything in the developer manuals about this.
Not only is Richard J. Ross III's comment evidence; Apple have also reserved the right to do partial redraws even though the iPhone has had full on-GPU composition from day one, so it isn't a legacy thing. Beyond that, the advice always handed out in OpenGL sessions is to avoid compositing OpenGL on top of OpenGL, or any two rapidly changing views, since it causes the compositor to do a lot of extra work that you can usually avoid (in that specific case, by moving all the content into one OpenGL view).
I would therefore assume the compositor has some notion of updating only dirty parts of the layer hierarchy.
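
For what it's worth, you can cooperate with that behaviour from the app side too. Here is a small Swift sketch (the view and its grid spacing are made up for illustration): draw(_:) receives the invalidated rect, which may be a sub-rectangle of the bounds, and setNeedsDisplay(_:) lets you invalidate only the region that actually changed.

```swift
import UIKit

// Sketch: redraw only what intersects the dirty rect, the app-level
// analogue of the compositor's dirty-region handling.
final class GraphPaperView: UIView {
    private let spacing: CGFloat = 10 // illustrative grid spacing

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.lightGray.cgColor)
        // Stroke only the vertical grid lines that cross the dirty rect.
        var x = (rect.minX / spacing).rounded(.down) * spacing
        while x <= rect.maxX {
            ctx.move(to: CGPoint(x: x, y: rect.minY))
            ctx.addLine(to: CGPoint(x: x, y: rect.maxY))
            x += spacing
        }
        ctx.strokePath()
    }

    // Invalidate just the changed region rather than the whole view.
    func markDirty(_ dirtyRect: CGRect) {
        setNeedsDisplay(dirtyRect)
    }
}
```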

cacheAsBitmap has no effect on a Sprite masked with a scrollRect in AIR for iOS

I'm developing a simple kinetic menu UI component for AIR for iPad. It's basically a lightweight fill-in for a combobox that matches the style of iOS. I have a sprite containing anywhere from 2 to 60 item buttons that pops up and lets you flick/scroll through them, showing only about 7 items at any given time.
My first attempt at this used a mask over my sprite, moving my menu sprite up and down under the stationary mask. This produced lackluster results on the test device (< 20 fps).
I then tried a blitting solution, leaving the menu sprite off the display list and using BitmapData.draw() to render to the display list only the part I needed visible. This produced the best results on my Windows dev platform, but this time the framerate dropped below 10 fps on iPad; I am assuming I was incurring either taxing CPU usage or a GPU readback penalty. Originally I had hoped to be able to run my app at 60 fps, but I've since ratcheted my goal down to a more humble 30 fps.
Which brings me to my third attempt at this UI component, using the sprite's .scrollRect masking function in conjunction with .cacheAsBitmap. Again, the observed behaviors differ wildly between AIR on Windows and on iOS. On Windows it redraws only the part of the menu sprite bounded by the dimensions of the scrollRect, as it should. On iOS I can touch the area of the screen above or below the visible area of the menu sprite and still drag the menu, even though my finger is over "empty" space! The performance here is decent, hovering between 19 and 25 fps, and would almost certainly be a solid 30 if it worked as it does on Windows.
Does anyone have any ideas either about the scrollRect feature's behavior on AIR for iOS or a better way of implementing an iOS native style gliding menu in AIR for iOS?
Note: the above methods were tried in both CPU and GPU mode, but CPU mode performed vastly better. I used AIR 2.7 installed on top of Flash Pro CS 5.5, with FlashDevelop as my IDE.
http://esdot.ca/site/2011/fast-rendering-in-air-3-0-ios-android#comment-10
Really nice guy from the above link: "Ya, scrollRect is basically a no-go on mobile, basically forget that API even exists. Believe it or not… old school masking is the way to go. Round and round we go!"

Image partly off screen killing as3 frame rate on IOS

I'm developing a game in AS3 for iPhone, and I've gotten it running reasonably well (consistently 24 fps on iPhone 3G), but I've noticed that when the "character" goes partly off the screen, the frame rate drops to 10-12 fps. Does anyone know why this is and what I can do to remedy it?
Update - I've been through the code pretty thoroughly, and even made a new project just to test animations. I started an image offscreen and moved it across the screen and back off. Any time the image is offscreen, even partially, the frame rates are terrible; once the image is fully on the screen, things pick back up to a solid 24 fps. I'm using cacheAsBitmap, I've tried masking the stage, and I've tried placing the image in a movieclip and using scrollRect. I would keep objects from going off the screen, except that the nature of the game I'm working on has objects dropping from the top down (yes, I'm using object pooling; no, I'm not scaling anything, strictly x,y translations). And yes, I realize that Obj-C is probably the best answer, but I'd really like to avoid that if I can. AS3 is so much nicer to write in.
Take a look at the 'blitmasking' technique: http://www.greensock.com/blitmask
From Doyle himself:
A BlitMask is basically a rectangular Sprite that acts as a high-performance mask for a DisplayObject by caching a bitmap version of it and blitting only the pixels that should be visible at any given time, although its bitmapMode can be turned off to restore interactivity in the DisplayObject whenever you want. When scrolling very large images or text blocks, BlitMask can greatly improve performance, especially on mobile devices that have weaker processors.
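
BlitMask itself is AS3, but the caching-and-blitting idea is portable. Here is a rough Swift/UIKit sketch of the same technique (the class and its members are hypothetical, not the GreenSock API): rasterize the content once up front, then each frame copy out only the slice that should be visible instead of re-rendering the whole display list.

```swift
import UIKit

// Sketch of the blit-mask technique: cache a bitmap of the full content,
// then draw only the visible sub-rectangle on each frame.
final class BlitMaskView: UIView {
    private var cached: CGImage? // one-time bitmap cache of the content

    var scrollOffset: CGFloat = 0 { // vertical scroll position in view points
        didSet { setNeedsDisplay() }
    }

    func cache(content: UIView) {
        // Rasterize the content view once at its full size.
        let renderer = UIGraphicsImageRenderer(size: content.bounds.size)
        cached = renderer.image { ctx in
            content.layer.render(in: ctx.cgContext)
        }.cgImage
    }

    override func draw(_ rect: CGRect) {
        guard let cached = cached else { return }
        // Blit only the visible window out of the cached bitmap.
        let scale = CGFloat(cached.width) / bounds.width
        let window = CGRect(x: 0, y: scrollOffset * scale,
                            width: CGFloat(cached.width),
                            height: bounds.height * scale)
        if let slice = cached.cropping(to: window) {
            UIImage(cgImage: slice).draw(in: bounds)
        }
    }
}
```

The trade-off is the same one BlitMask makes: the cached bitmap is stale until you re-cache it, so interactive content inside the mask needs the cache refreshed (or bypassed) while it changes.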
