Why is a Print Screen capture different from what is actually displayed on the monitor? - nvidia

I'm working on an application that captures a monitor in real time, encodes the frames, sends them over Ethernet, decodes them, and then displays that monitor's content in an application.
I put the decoder application on the same monitor that is being captured. I then open a timer application and put it next to the decoder application. I can then start the timer and see the latency between the main instance of the timer and the timer shown inside the decoder application.
What's weird is that if I take a picture of the monitor with a camera, I get one latency measurement (almost always ~100 ms), but if I take a Print Screen of the monitor, the latency between the two timers is much lower (~30-60 ms).
Why is that? How does Print Screen work? Why would it result in a 40+ ms difference? Which latency measurement should I trust?

Print Screen saves the screenshot to your clipboard, which lives in RAM (the highest-speed storage in your computer), whereas your capture path probably writes the screenshot data to your HDD/SSD and reads it back before sending it over the network, which takes a lot longer.
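For what it's worth, a screen grab can stay entirely in RAM. Below is a minimal sketch on Windows using plain GDI, purely as an illustration of a disk-free capture path; the asker's NVIDIA-based capture may well use something else (NVFBC, Desktop Duplication, etc.):

    // Minimal GDI screen grab straight into a RAM buffer (illustrative sketch only).
    #include <windows.h>
    #include <vector>

    std::vector<BYTE> GrabScreenToMemory(int &width, int &height)
    {
        HDC screenDC = GetDC(NULL);                      // device context for the whole screen
        width  = GetSystemMetrics(SM_CXSCREEN);
        height = GetSystemMetrics(SM_CYSCREEN);

        HDC memDC   = CreateCompatibleDC(screenDC);      // in-memory device context
        HBITMAP bmp = CreateCompatibleBitmap(screenDC, width, height);
        HGDIOBJ old = SelectObject(memDC, bmp);

        // Copy the current framebuffer contents into the memory bitmap (no disk involved).
        BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
        SelectObject(memDC, old);                        // deselect before reading the bits out

        BITMAPINFO bi = {};
        bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth       = width;
        bi.bmiHeader.biHeight      = -height;            // negative height = top-down rows
        bi.bmiHeader.biPlanes      = 1;
        bi.bmiHeader.biBitCount    = 32;
        bi.bmiHeader.biCompression = BI_RGB;

        std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
        GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

        DeleteObject(bmp);
        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
        return pixels;                                   // BGRA pixels, ready to encode and send
    }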

Related

ActionScript: performance of creating text fields and fills on iOS

I am curious to know what sort of performance I can expect from AIR on iOS. I have a class which creates 30 display objects, each with several text fields and a fill.
Each display object takes about 0.011 seconds to create on my PC. This rises to 0.056 seconds on an iPad Retina (A7).
From my debugging, it takes around 0.004 seconds to create and format a text field.
When I get to 30 display objects, the 0.056 seconds becomes 1.68 seconds.
Is this typical?
Can anything be done?
I have traced each and every stage of the class, and every function is taking about the same time to execute, so I do not think one specific stage has an issue.
Welcome to the world of mobile devices. They all have very little CPU power compared to desktop PCs, while Flash normally uses the CPU to render its content.
The larger the area you need to redraw on the screen, the worse the performance.
The more objects there are to draw, the worse the performance.
Shapes, strokes, fills, fonts - they are all vector data that Flash needs to render and draw, so they take a heavy toll on the CPU, which also results in heavy battery drain. That's why Apple never allowed the Flash plugin on its mobile devices.
Adobe then introduced Stage3D, which (with a certain amount of work) lets you take advantage of GPU rendering. It is faster even on desktop computers, and it practically saves the day for Flash/AIR applications on mobile devices.
Long story short, slow performance is the way things are for native Flash content. If you want better performance and faster applications on mobile devices, you need to move to a GPU-enabled framework such as Gamua's Starling.

iOS AudioQueue stuttering

I am building a streaming system for audio playback, and every so often, the audio either glitches or starts stuttering for a second or two.
I am running a single output AudioQueue with 3 allocated buffers of 1024 samples each, at a sample rate of 22050 Hz.
I hold a separate list of buffers ready to stream, and that list is never empty (logs always show at least one filled buffer there whenever playback_callback is called). playback_callback just memcpy-s a ready buffer into one of the three AudioBuffers, with no locks or other weirdness.
playback_callback takes at most 0.9 ms to run (measured via mach_absolute_time), which is far below the buffer duration of 1024/22050 ≈ 46.4 ms.
I initiate the queue with either CFRunLoopGetMain() or NULL (which should use an "internal thread") and get the same behavior in both cases.
If the buffer size is turned absurdly high (16384 instead of 1024), the glitches go away. If the number of AudioBuffers is raised from 3 to 8, the problem practically goes away (it happens about 20x less often). However, neither of those settings is workable for me, as it is not OK for the system to take a second to react to a stream switch (0.1-0.2 s would still be tolerable).
Any help and ideas on the matter would be greatly appreciated.
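For reference, here is roughly what the setup described above looks like in code - a hedged sketch, assuming 16-bit mono PCM, with a hypothetical fill_from_ready_list() standing in for the asker's ready-buffer list (here it just writes silence so the sketch compiles and runs):

    // Sketch of the described AudioQueue setup: 22050 Hz, three 1024-frame buffers.
    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    static const int kNumBuffers      = 3;
    static const int kFramesPerBuffer = 1024;

    // Hypothetical stand-in for the "ready buffer" list; here it just outputs silence.
    static size_t fill_from_ready_list(void *dst, size_t capacityBytes)
    {
        memset(dst, 0, capacityBytes);
        return capacityBytes;
    }

    // Matches AudioQueueOutputCallback: copy one ready buffer in, then re-enqueue.
    static void playback_callback(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer)
    {
        buffer->mAudioDataByteSize =
            (UInt32)fill_from_ready_list(buffer->mAudioData, buffer->mAudioDataBytesCapacity);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }

    void start_queue(void)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 22050;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;                 // mono, 16-bit -> 2 bytes per frame
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;

        AudioQueueRef queue = NULL;
        // NULL run loop -> the queue delivers callbacks on its own internal thread.
        AudioQueueNewOutput(&fmt, playback_callback, NULL, NULL, NULL, 0, &queue);

        for (int i = 0; i < kNumBuffers; ++i) {
            AudioQueueBufferRef buf = NULL;
            AudioQueueAllocateBuffer(queue, kFramesPerBuffer * fmt.mBytesPerFrame, &buf);
            playback_callback(NULL, queue, buf);   // prime and enqueue each buffer
        }
        AudioQueueStart(queue, NULL);
    }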

Fast display of image using openCV

I have written an image processing application using Visual C++ Forms and OpenCV on a Windows machine. Everything seems to work OK, but displaying the images is very slow - only a few fps. I would like to be able to get to 30 or so. I am currently using the standard imshow(...) followed by waitKey(1).
My question is: is there a better (i.e. faster) way to get an image from memory to the monitor?
The Mat structure used by OpenCV is essentially a fancy header pointing to a contiguous block of unsigned char values.
Edit:
I tested my code with the VS2013 profiler and it claims that I am spending 50% of the execution time in imshow/waitKey.
I've seen several discussions on this in the OpenCV Q&A forum, and they always end with "you shouldn't be using imshow except for debugging", but nobody suggests anything else to use, so I thought I'd try here.
Without seeing what you have, here is the approach I would take to achieve what you want.
Have a dedicated thread for frame acquisition from the camera. Insert the acquired frames into a synchronized queue that is consumed by:
Image processing thread. It takes frames from the queue and processes them into images suitable for display. It updates a synchronized output image and notifies the GUI about it.
Main (GUI) thread, dedicated only to display. When it is notified of an image update, it swaps the synchronized output image with its current working image. (To avoid copying and extra allocations, we just reuse those two image buffers.) Then it invalidates the window, and in the WM_PAINT handler it displays the image using BitBlt. (A rough code sketch of this structure follows the notes below.)
Some notes:
Minimize allocation/deallocation of buffers. For acquisition, you could have a pre-allocated pool of buffers to cycle through.
Prepare the output images in a format and size that suit the display.
Keep track of the number of frames in the queue and set some upper limit. Define an algorithm for dropping excess frames so that you don't run out of memory and don't lag too much.
If you just want to ditch the sleep in waitKey and want something simpler, have a look at this question.
Instrument your code - add timing of the crucial parts using a high-resolution timer. Log the timings, and/or keep statistics and history.
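A rough, simplified sketch of that structure, using standard C++ threads and a small synchronized queue. cv::VideoCapture and cvtColor stand in for your real acquisition and processing, and the Win32 WM_PAINT/BitBlt display is only indicated in comments:

    // Simplified sketch of the acquisition -> processing -> display structure above.
    #include <atomic>
    #include <chrono>
    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <thread>
    #include <opencv2/opencv.hpp>

    struct FrameQueue {                                  // synchronized queue with an upper limit
        std::mutex m;
        std::condition_variable cv;
        std::deque<cv::Mat> frames;
        size_t maxSize = 4;

        void push(cv::Mat f) {
            std::lock_guard<std::mutex> lock(m);
            if (frames.size() >= maxSize) frames.pop_front();   // drop the oldest frame, don't lag
            frames.push_back(std::move(f));
            cv.notify_one();
        }
        cv::Mat pop() {                                  // blocks until a frame is available
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return !frames.empty(); });
            cv::Mat f = std::move(frames.front());
            frames.pop_front();
            return f;
        }
    };

    int main() {
        FrameQueue queue;
        std::mutex outputMutex;
        cv::Mat outputImage;                             // shared with the GUI thread
        std::atomic<bool> running{true};

        std::thread acquisition([&] {                    // 1. acquisition thread
            cv::VideoCapture cap(0);
            cv::Mat frame;
            while (running && cap.read(frame)) queue.push(frame.clone());
        });

        std::thread processing([&] {                     // 2. processing thread
            cv::Mat work;                                // reused buffer, no per-frame allocation
            while (running) {
                cv::Mat frame = queue.pop();
                if (frame.empty()) break;                // empty sentinel: shut down
                cv::cvtColor(frame, work, cv::COLOR_BGR2GRAY);  // stand-in for real processing
                {
                    std::lock_guard<std::mutex> lock(outputMutex);
                    cv::swap(outputImage, work);         // hand the finished image to the GUI
                }
                // ...then notify the GUI (e.g. PostMessage + InvalidateRect); the WM_PAINT
                // handler BitBlt-s outputImage to the window instead of calling imshow().
            }
        });

        // 3. main (GUI) thread: in the real application this runs the Win32 message loop.
        std::this_thread::sleep_for(std::chrono::seconds(10));
        running = false;
        queue.push(cv::Mat());                           // wake the processing thread so it exits
        acquisition.join();
        processing.join();
        return 0;
    }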

How fast can I send a UIImage between an iPhone and Apple Watch, with watchOS 2?

I'm building an Apple watchOS 2 app which is continuously animated with generated images.
Because these can't be bundled with the app, they're generated in InterfaceController, and then set to display on the watch like so:
self.imageGroup?.setBackgroundImage(self.image)
Until this point, I've been generating these at a rate of 1 image per second, which feels fairly safe but obviously gives a very low frame rate of 1 fps. Now I'm wondering how much this could be improved.
I measured the speed at which the UIImages themselves are generated, which is a fairly low 0.017 seconds per image. The size of these images is fairly consistent, too, at about 10000 bytes. If there were no further delay, that would give me a much more acceptable performance of about 58 fps.
My question is: is there a typical speed at which Bluetooth communicates with my phone, which I could compare against that image size to determine a realistic frame rate?
Or - I presume that calling setBackgroundImage doesn't block the main thread while the transfer happens. Is there a way I can find out how long it takes for the image to actually be set?
Apple doesn't document this speed because so much of it depends on connection strength. And since the watch and phone don't need to be right next to each other, the farther apart they are (and depending on what objects sit between them), the slower the transfer will be.
Your images are 10 KB, and you want to send 58 images per second, so 580 KB or 0.58 MB per second? That amount of data doesn't sound unrealistic (though it will be a battery drain). However, each network call between the two devices has some overhead. Do these images need to be sent in real time? If not, you would likely get better performance by delaying 1-2 seconds initially and then batching a group of 58 images together, which you would then animate on the watch. You would only have one network call every second, which is much more manageable for the devices than 58 calls per second.
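A generic sketch of the batching idea - this is not the WatchKit/WatchConnectivity API, just an illustration of trading 58 transfers per second for one batched transfer per second; send_batch() is a hypothetical stand-in for the actual phone-to-watch transfer:

    // Accumulate ~1 second of frames, then hand them off as a single payload
    // instead of one transfer per frame.
    #include <cstdio>
    #include <vector>

    using Image = std::vector<unsigned char>;            // ~10 KB of encoded image data

    void send_batch(const std::vector<Image> &batch)     // hypothetical transfer call
    {
        std::printf("sending %zu images in one call\n", batch.size());
    }

    int main()
    {
        const int framesPerBatch = 58;                   // ~1 second of animation at 58 fps
        std::vector<Image> batch;
        batch.reserve(framesPerBatch);

        for (int i = 0; i < 174; ++i) {                  // generate 3 seconds' worth of frames
            batch.push_back(Image(10000, 0));            // stand-in for a generated frame
            if ((int)batch.size() == framesPerBatch) {   // one call per second...
                send_batch(batch);                       // ...instead of 58 calls per second
                batch.clear();
            }
        }
        if (!batch.empty()) send_batch(batch);
        return 0;
    }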

In image processing, what is real time?

In image processing applications, what is considered real time? Is 33 fps real time? Is 20 fps real time? If 33 and 20 fps are considered real time, then is 1 or 2 fps also real time?
Can anyone shed some light on this?
In my experience, it's a pretty vague term. Often, what is meant is that the algorithm will run at the rate of the source (e.g. a camera) supplying the images; however, I would prefer to state this explicitly ("the algorithm can process images at the frame rate of the camera").
Real time image processing = produce output simultaneously with the input.
The input may be 25 fps, but you may choose to process 1 of every 5 frames (that makes 5 fps processing) and your application is still real time.
TV streaming software: all the frames are processed.
Security application where the input is CCTV security cams: you may choose to skip some frames to fit the performance.
3D game or simulation: fps changes depending on the current scene.
And they are all real time.
Strictly speaking, I would say real-time means that the application is generating images based on user input as it occurs, e.g. a mouse movement which changes the facing of an avatar.
How successful it is at this task - 1 fps, 10 fps, 100 fps, etc - is actually another question.
Real-time describes an approach, not a performance metric.
If, however, you ask what the slowest frame rate is that still passes as usable to a human, the answer is about 15 fps, I think.
I think it depends on what the real-time application is. If the app is showing a slideshow with one picture every 3 seconds, and the app can process each picture and show it within those 3 seconds, then it is real-time processing.
If the movie is 29.97 frames per second, and the app can process all 29.97 frames within that second, then it is also real time.
For example, if an app can take video from a VCR or a cable box's analog output, compress it into 29.97 frames per second video, and send all that data to a remote location for another person to watch, then it is real-time processing.
(Hard) Real time is when an outcome has no value when delivered too early or too late.
Any FPS is real time provided that displayed frames represent what should be displayed at the very instant they are displayed.
The notion of real-time display is not really tied to a specific frame rate - it could be defined as the minimum frame rate at which movement is perceived as being continuous. So for slow moving objects in a visual frame (e.g. ships in a harbour, or stars in the night sky) a relatively slow frame rate might suffice, whereas for rapid movement (e.g. a racing car simulator) a much higher frame rate would be needed.
There is also a secondary consideration of latency. A real-time display must have sufficiently low latency in relation to other events (e.g. behaviour of a real-time simulation) that there is no perceptible lag in display updates.
That's not actually an easy question (even without taking into account differences between individuals).
Wikipedia has a good article explaining why. For what it's worth, I think cinema films run at 24fps so, if you're happy with that, that's what I'd consider realtime.
It depends on what exactly you are trying to do. For some purposes 1 fps or even 2 spf (seconds per frame) could be considered real time. For others, that's way too slow...
That said, real-time means that it takes as long (or less) to process x frames as it would take to just present those x frames.
It depends.
automatic aircraft cannon - 1000 fps
monitoring - 10 - 15 fps
authentication - 1 fps
medical devices - 1 fph
I guess the term is used with different meanings in different contexts. In industrial image processing, real-time processing is usually the opposite of offline processing. In offline processing applications, you record images (many of them) and process them at a later time. In real-time processing, the system that acquires the images also processes them, at the same time, so the processing has to keep pace with the acquisition frame rate.
Real-time means your implementation is fast enough to meet some deadline. The deadline is part of your system's specification. If it's an interactive UI and the users are not too picky, a 15 Hz update can be OK, although it can feel laggy. If you're using it to drive a car along the motorway, 30 Hz is about right. If it's a missile, well, maybe 100 Hz?
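To make the deadline idea concrete, a tiny sketch of the kind of check involved (the 33 ms budget and the simulated 20 ms of work per frame are made-up numbers):

    // Does each frame's processing fit inside the frame budget (the deadline)?
    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main()
    {
        using clock = std::chrono::steady_clock;
        const auto budget = std::chrono::milliseconds(33);        // e.g. a ~30 Hz deadline

        for (int frame = 0; frame < 100; ++frame) {
            auto start = clock::now();

            // Stand-in for the real processing: pretend the work takes 20 ms.
            std::this_thread::sleep_for(std::chrono::milliseconds(20));

            auto elapsed = clock::now() - start;
            if (elapsed > budget)
                std::printf("frame %d missed its deadline\n", frame);  // no longer real-time

            if (elapsed < budget)                                  // pace output to the deadline
                std::this_thread::sleep_for(budget - elapsed);
        }
        return 0;
    }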
