There's a Stack Overflow thread elsewhere that points out that FireMonkey has to display video through the primary thread. I am trying to use a DirectX camera to snag a series of images (in Windows 8.1 for now; other OSes can wait). So I use the SampleBufferReady and SampleBufferSync approaches from the Embarcadero example code (which just has a TImage on a form), but with enough changes that I never see anything. I need to do my display in a TImageViewer; pointing the TBitmap in SampleBufferSync at that viewer's bitmap is easy, but nothing displays. From a procedural viewpoint, pseudocode of what I want is:
setup whatever
camera.startcapture
repeat
  repeat until framecaptured {what SampleBufferReady should do -- only fire when ready}
  imageviewer.repaint {inside SampleBufferReady?}
  inc(mycounter) {inside SampleBufferReady?}
until (mycounter > mylimit) or (user interrupts video input)
camera.stopcapture
One could add a TTimer to slow things down. What I don't "get" is:
1.) Must I define my own TEvent to find out that the camera has grabbed an image, or does this already exist? I would have thought that SampleBufferReady would respond to the arrival of an image and that I could process whatever I need inside that event.
2.) To display an image in something other than a TImage, will I need to turn off the camera, paint the bitmap, and then turn the camera back on? If so, will I need to have SampleBufferReady contain a command to turn the camera off? Boy, does that sound clunky!
Suggestions?
Here is a complete code source; I tested the C++ version, which is the same as the Pascal one in terms of function calls and mechanism, only the syntax differs.
Download the Pascal version here.
The code works fine for both Android and desktop (I tested the C++ version), so download it, test it, and confirm the Pascal code for me.
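In case the download links become unavailable, here is a minimal sketch of the key mechanism in C++Builder (FMX) syntax. FCamera, ImageViewer1, FFrameCount and FFrameLimit are illustrative names, and it assumes the TVideoCaptureDevice has been obtained and configured as in the Embarcadero example:
#include <fmx.h>            // TThread and general FMX types
#include <FMX.Media.hpp>    // TVideoCaptureDevice, TMediaTime
#include <FMX.ExtCtrls.hpp> // TImageViewer

void __fastcall TForm1::StartButtonClick(TObject *Sender)
{
    FFrameCount = 0;
    FCamera->OnSampleBufferReady = CameraSampleBufferReady;
    FCamera->StartCapture();   // frames now arrive via the event; no polling loop is needed
}

// OnSampleBufferReady already fires once per captured frame, so no extra TEvent is required.
void __fastcall TForm1::CameraSampleBufferReady(TObject *Sender, const TMediaTime ATime)
{
    // The event fires on a capture thread; marshal the UI work back to the main thread.
    TThread::Synchronize(NULL, &DisplayCurrentFrame);
}

void __fastcall TForm1::DisplayCurrentFrame()
{
    // Copy the latest sample straight into the viewer's bitmap; the camera does not
    // have to be stopped just to paint.
    FCamera->SampleBufferToBitmap(ImageViewer1->Bitmap, true);
    ImageViewer1->Repaint();
    if (++FFrameCount > FFrameLimit)
        FCamera->StopCapture();
}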
I am working with OpenCV in Visual C++. I am new to OpenCV.
What I am hoping to find is an Image Viewer that works like a modeless dialog box so that I can display a continuous stream of images in it.
I was trying to use the OpenCV namedWindow function for this, but it does not appear to be suitable for the task, for the following reason:
Any viewer created with namedWindow goes out of scope when the invoking function exits, and the user has to block with waitKey in order to prevent the viewer from closing immediately.
What would be really helpful is if there was a viewer window in OpenCV that persists until the user explicitly closes it and to which images can be streamed sequentially using imshow or a similar mechanism. Or perhaps, there is a way of using namedwindow like this that I have not discovered.
Just create the window in a scope that ends later.
Calling cv::waitKey(1) inside your processing loop will give the window just enough time to update while your program continues running.
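For example (a minimal sketch; the window name "viewer" and the camera-based VideoCapture source are just placeholders, any sequence of cv::Mat images can be streamed the same way):
#include <opencv2/opencv.hpp>

int main()
{
    // Create the window once, in a scope that outlives the per-frame code.
    cv::namedWindow("viewer", cv::WINDOW_AUTOSIZE);

    cv::VideoCapture cap(0);           // any frame source works; a camera is used here
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))
    {
        cv::imshow("viewer", frame);   // stream the next image into the same window
        if (cv::waitKey(1) == 27)      // ~1 ms of event processing; Esc ends the loop
            break;
    }
    cv::destroyWindow("viewer");       // the window persists until you explicitly close it
    return 0;
}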
First of all, this is my first question on SO - if I made any mistakes, please do not tar and feather me ;)
I have a simple test application to play with the Mitov AudioLab components (www.mitov.com), version 7, in Delphi XE6. On my form there is a TALWavePlayer, a TALSpeexCompressor, a TALSpeexDecompressor, a TALAudioMixer and a TALAudioOut, building a simple audio processing chain. I can connect the inputs and outputs visually at design time (in the OpenWire view). When I run my test application, I can hear the wave file through the speaker, without a single line of code. That's the easy (working) part.
(grrrr... can't post images, would have made things much clearer ;)
Now I disconnect the TALSpeexDecompressor output pin from the TALAudioMixer input pin visually at design time (OpenWire view). I want to replace this same connection in code at run time. (For the sake of simplicity I keep the single input pin and channel of the TALAudioMixer, so they do not need to be created in code).
I tried exactly the same options that work to connect other AudioLab components at run time (audio output pin -> audio input pin):
1.) decomp.OutputPin.Connect(mixer.InputPins[0]);
2.) decomp.OutputPin.Connect(mixer.Channels.Items[0].InputPin);
But with the TALSpeexDecompressor, this does not work - there is no signal leaving the decompressor. I do not have the source code of the components, so I cannot debug the application to find out what's going wrong.
Solution:
Stop and then start the wave player again after connecting the decompressor and the mixer dynamically. This somehow solves the issue. I do not know what happens under the hood, but after restarting the TALWavePlayer, the signal leaves the TALSpeexDecompressor and enters the TALAudioMixer. I stumbled over the solution when I set the "filename" property of the TALWavePlayer component in code, not in the property editor. Because of another (default) setting "RestartOnNewFile" = True, the wave player was restarted internally and the signal flow worked.
procedure TForm1.Button1Click(Sender: TObject);
var
  channel: TALAudioMixerChannelItem;
begin
  channel := mixer.Channels.Add;                // add a mixer channel to connect to
  waveplayer.Stop;                              // restart the player around the new connection
  channel.InputPin.Connect(decomp.OutputPin);   // wire the decompressor output to the new channel
  waveplayer.Start;
end;
It is obvious that the AudioLab components can make simple tasks even simpler, but due to the poor documentation in their DocuWiki you often have to follow the trial-and-error path, sometimes for days. Unfortunately, my real issue is more complicated than the simple test case I provided: I have a UDP client and server in the chain, so I have no control over the wave player on the client side when I dynamically connect the decompressor to the mixer on the server side. Obviously a deeper knowledge of these components is required, perhaps coming from experience. So this will be my next question here on SO.
Apologies to everyone for the insufficient documentation in the components :-( .
We are working to get a new release out in the next 3-4 weeks that will again contain the F1 help, and we are working to make it as complete as possible.
Unfortunately we had to release 7.0 without documentation in order to have it available in time for RAD Studio XE6 :-( .
Please contact me directly at mitov@mitov.com so I can help you with the Speex issue and with connecting the pins.
With best regards,
Boian Mitov
Is it possible to use NetStream to continuously publish the stage to an FMS?
I have tried attaching a camera to the NetStream, which works perfectly. However, I want to publish a stream showing the stage and all its elements/objects, including the case where a user interacts with the elements and changes their position/appearance.
Thank you very much.
As far as I know, it's not possible this way.
You can't use a custom input for the NetStream to encode it.
You have the following options:
if you can reproduce the same elements on the other side, create an API that only passes the interactions (e.g. drawLine(startX, startY, endX, endY), loadImage(url), etc.). This way everything will be shown on both PCs, with much less data traffic and CPU usage
if you have a very complex stage and it is somehow impossible to reproduce it on the other side, you can create bitmap shots and send them through FMS, JPEG-encoded (not too nice)
use a webcam splitter that grabs the stage so it can act as a webcam source (not too nice)
I am trying to display a JPEG image as it downloads, using part of the data, similar to what many web browsers and the Facebook app do:
a low-quality version of the image (built from just part of the data) is shown first, and then the full image is displayed in full quality.
this is best shown in the VIDEO HERE
I followed this SO question:
How do I display a progressive JPEG in an UIImageView while it is being downloaded?
but all I got was an image view that is rendered as data keeps coming in; there is no low-quality version first, no true progressive download and render.
Can anyone share a code snippet or point me to where I can find more info on how this can be implemented in an iOS app?
I tried this link, for example, which shows JPEG info; it identifies the image as progressive:
http://www.webpagetest.org/jpeginfo/jpeginfo.php?url=http://cetus.sakura.ne.jp/softlab/software/spibench/pic_22p.jpg
and I used the correct code sequence:
-(void)connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
    /// Append the data
    [_dataTemp appendData:data];
    /// Get the total bytes downloaded
    const NSUInteger totalSize = [_dataTemp length];
    /// Update the data source; we must pass ALL the data, not just the new bytes
    CGImageSourceUpdateData(_imageSource, (CFDataRef)_dataTemp, (totalSize == _expectedSize) ? true : false);
    /// We know the expected size of the image
    if (_fullHeight > 0 && _fullWidth > 0)
    {
        /// Build an image from whatever has been decoded so far and display it
        CGImageRef image = CGImageSourceCreateImageAtIndex(_imageSource, 0, NULL);
        if (image != NULL)
        {
            [_imageView setImage:[UIImage imageWithCGImage:image]];
            CGImageRelease(image);
        }
    }
}
But the code only shows the image when it has finished loading. With other images it will show the image as it downloads, but only from top to bottom; there is no low-quality version first that progressively gains detail the way browsers do.
DEMO PROJECT HERE
This is a topic I've had some interest in for a while: there appears to be no way to do what you want using Apple's APIs, but if you can invest time in this you can probably make it work.
First, you are going to need a JPEG decoding library: libjpeg or libjpeg-turbo. You will then need to integrate it into something you can use with Objective-C. There is an open source project that uses this library, PhotoScrollerNetwork, which leverages the turbo library to decode very large JPEGs "on the fly" as they download, so they can be panned and zoomed (PhotoScroller is an Apple project that does the panning and zooming, but it requires pre-tiled images).
While the above project is not exactly what you want, you should be able to lift much of the libjpeg-turbo interface to decode progressive images and return the low quality images as they are received. It would appear that your images are quite large, otherwise there would be little need for progressive images, so you may find the panning/zooming capability of the above project of use as well.
Some users of PhotoScrollerNetwork have requested support for progressive images, but it seems there is very little general use of them on the web.
EDIT: A second idea: if it's your own site that you would use to vend progressive images (and I assume this, since there are so few of them to be found normally), you could take a completely different tack.
In this case, you would construct a binary file of your own design, one that had say 4 images inside it. The first four bytes would provide the length of the data following them (and each subsequent image would use the same 4-byte prefix). Then, on the iOS side, as the download starts, once you have the full bytes of the first image, you can use them to build a small low-res UIImage and show it while the next image is being received. When the next one fully arrives, you update the low-res image with the newer, higher-res image. It's possible you could use a zip container and do on-the-fly decompression; I'm not 100% sure. In any case, the above is a standard solution to your problem, and it would provide near-identical performance to libjpeg with much, much less work.
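The framing logic is language-agnostic; here is a minimal sketch in plain C++ of that hypothetical container format. The 4-byte big-endian prefix, the FrameExtractor class and the onImageReady callback are all invented for illustration; the payload handed to the callback is where the iOS code would build and display a UIImage:
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Accumulates downloaded bytes and emits each complete image as soon as its
// 4-byte (big-endian) length prefix and payload have fully arrived.
class FrameExtractor
{
public:
    explicit FrameExtractor(std::function<void(const std::vector<uint8_t>&)> onImageReady)
        : onImageReady_(std::move(onImageReady)) {}

    // Call this from the download callback with each newly received chunk.
    void append(const uint8_t *data, size_t length)
    {
        buffer_.insert(buffer_.end(), data, data + length);
        for (;;)
        {
            if (buffer_.size() < 4)
                return;                        // length prefix not complete yet
            const uint32_t payloadSize =
                (uint32_t(buffer_[0]) << 24) | (uint32_t(buffer_[1]) << 16) |
                (uint32_t(buffer_[2]) << 8)  |  uint32_t(buffer_[3]);
            if (buffer_.size() < 4 + payloadSize)
                return;                        // this image not fully downloaded yet
            std::vector<uint8_t> image(buffer_.begin() + 4,
                                       buffer_.begin() + 4 + payloadSize);
            buffer_.erase(buffer_.begin(), buffer_.begin() + 4 + payloadSize);
            onImageReady_(image);              // low-res first, then each higher-res version
        }
    }

private:
    std::vector<uint8_t> buffer_;
    std::function<void(const std::vector<uint8_t>&)> onImageReady_;
};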
I have implemented a progressive loading solution for an app I am currently working on. It does not use progressive Jpeg as I needed more flexibility loading different-res versions, but I get the same result (and it works really well, definitely worth implementing).
It's a camera app working in tandem with a server. So the images originate with the iPhone's camera and are stored remotely. When the server gets the image, it gets processed (using imageMagick, but could be any suitable library) and stored in 3 sizes - small thumb (~160 x 120), large thumb (~400x300) and full-size (~ double retina screensize). Target devices are retina iPhones.
I have an ImageStore class which is responsible for loading images asynchronously from wherever they happen to be, trying the fastest location first (live cache, local filesystem cache, asset library, network server).
typedef void (^RetrieveImage)(UIImage *image);
- (void) fullsizeImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
- (void)largeThumbImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
- (void)smallThumbImageFromPath:(NSString*)path
completion:(RetrieveImage)completionBlock;
Each of these methods will also attempt to load lower-res versions. The completion block actually loads the image into its imageView.
Thus fullsizeImageFromPath will get the full-sized version and also call largeThumbImageFromPath, largeThumbImageFromPath will get the large thumb and also call smallThumbImageFromPath, and smallThumbImageFromPath will just get the small thumb.
These methods invoke calls that are wrapped in cancellable NSOperations. If a higher-res version arrives before any of its lower-res siblings, those lower-res calls are cancelled. The net result is that fullsizeImageFromPath may end up applying the small thumb, then the large thumb, and finally the full-res image to a single imageView, depending on which arrives first. The result is really smooth.
Here is a gist showing the basic idea
This may not suit you as you may not be in control of the server side of the process. Before I had implemented this, I was pursuing the solution that David H describes. This would have been a lot more work, and less useful once I realised I also needed access to lower-res images in their own right.
Another approach which might be closer to your requirements is explained here
This has evolved into NYXProgressiveImageView, a subclass of UIImageView which is distributed as part of NYXImagesKit
Finally... for a really hacky solution, you could use a UIWebView to display progressive PNGs (progressive JPEGs do not appear to be supported).
update
After recommending NYXProgressiveImageView, I realised that this is what you have been using. Unfortunately you did not mention this in your original post, so I feel I have been on a bit of a runaround. In fact, reading your post again, I feel you have been a little dishonest. From the text of your post, it looks as if the "DEMO" is a project that you created. In fact you didn't create it, you copied it from here:
http://cocoaintheshell.com/2011/05/progressive-images-download-imageio/ProgressiveImageDownload.zip
which accompanies this blog entry from cocoaintheshell
The only changes you have made are one NSLog line and the JPG test URL.
The code snippet that you posted isn't yours, it is copied from this project without attribution. If you had mentioned this in your post it would have saved me a whole heap of time.
Anyway, returning to the post... as you are using this code, you should probably be using the current version, which is on github:
https://github.com/Nyx0uf/NYXImagesKit
see also this blog entry
To keep your life simple, you only need these files from the project:
NYXProgressiveImageView.h
NYXProgressiveImageView.m
NYXImagesHelper.h
NYXImagesHelper.m
Next you need to be sure you are testing with GOOD images
For example, this PNG works well:
http://www.libpng.org/pub/png/img_png/pnglogo-grr.png
You also need to pay attention to this cryptic comment:
/// Note: Progressive JPEG are not supported see #32
There seems to be an issue with JPEG tempImage rendering which I haven't been able to work out - maybe you can. That is the reason why your "Demo" is not working correctly, anyway.
update 2
added gist
I believe this is what you are looking for:
https://github.com/contentful-labs/Concorde
A framework for downloading and decoding progressive JPEGs on iOS and OS X, that uses libjpeg-turbo as underlying JPEG implementation.
Try these; maybe they will be useful for you:
https://github.com/path/FastImageCache
https://github.com/rs/SDWebImage
I had the same problem, then I found something tricky. It's not a proper solution, but it works.
You have to load a low-resolution/thumbnail image first and, once it has loaded, then load the actual image.
This is an example for Android; I hope you can transform it into an iOS version.
Good afternoon all!
I'm experiencing a rather annoying issue with one of my current projects, in which I'm working with a hardware library (the NVAPI Pascal header translation by Andreas Hausladen). This lib allows me to retrieve information from an NVIDIA GPU. I'm using it to retrieve temperatures, and with the help of FireMonkey's TAnimateFloat I'm adjusting the angle of a custom-made dial to indicate the temperature.
As FMX defaults to Direct2D on Windows, I can monitor the FPS with any of the various "gamer" tools out there (MSI Afterburner, FRAPS, etc.).
The issue I'm having is that when I put the system into sleep mode (suspend to RAM/S3) and then start it up again, the interface of my application is blacked out (partially or completely), and nothing on the UI is visibly refreshing. I'm calling the initialization for the NVAPI library regularly and checking the result via a timer, but this doesn't fix the issue. I'm also running ProcessMessages and Repaint on the parent dial and its child controls (since I can't seem to find a Repaint for the form or even an equivalent).
I tried various versions of the library, and each one presents the same issue. The next paragraph indicates that this was in fact NOT the issue, and that it's actually the renderer at fault.
I have one solution, but I want to know if there's something more... elegant, available. The solution I have involves adding FMX.Types.GlobalUseDirect2D := False; before Application.Initialize in my project's source. However, this forces FMX to use GDI+ rather than Direct2D. It works, of course, but I'd like to keep D2D open as an option if I can. I can use FindCmdLineSwitch to toggle this on/off depending on parameters, but this still requires me to restart the application to change from D2D to GDI+ or vice versa.
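For anyone on the C++ personality, the equivalent of what I'm doing looks roughly like this (a sketch; the "gdi" switch name is just an example, and the fragment sits in the project source before initialization):
#include <FMX.Types.hpp>        // Fmx::Types::GlobalUseDirect2D
#include <FMX.Forms.hpp>        // Application
#include <System.SysUtils.hpp>  // FindCmdLineSwitch

// ... inside the project's entry point, before Application->Initialize():
if (FindCmdLineSwitch("gdi"))
    Fmx::Types::GlobalUseDirect2D = false;  // fall back to GDI+ for this run only
Application->Initialize();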
What's weird about it is that the FPS counter (from FRAPS in my case) indicates that there's still activity happening in the UI (as the value changes as would be expected), but the UI itself isn't visibly refreshing.
Is this an issue related to Direct2D, or a bug in FireMonkey's implementation? More importantly, is there a better way of fixing it than disabling D2D? Lastly, and related, is it possible to "reinitialize" an application without terminating it first (so perhaps I could allow the user to switch between GDI+ and D2D without needing to restart the application)?
This may be one of the issues with FM prior to the Update 4 hotfix (26664 / QC 104210).
That update fixes the issue of a FireMonkey HD form being unresponsive after the user unlocks the machine; installing it might resolve the issue for you.
The update should be part of your registered user downloads from the EDN (direct link http://cc.embarcadero.com/item/28881).