Image Stream in ImageReader from Camera2 API - opencv

I want to capture images as a sequential stream in the OnImageAvailableListener of the ImageReader.
My aim is to process these images on a background thread.
In the basic camera2 example, this OnImageAvailableListener is called only once, when I take a picture by clicking a button.
I need onImageAvailable to be called continuously, in real time.

You need to add the ImageReader's Surface to the targets of the repeating preview request as well, instead of only to the still image capture request.
You likely also want to use the YUV_420_888 format instead of JPEG, depending on your use case.
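
A minimal sketch of that change, assuming the already-opened CameraDevice, preview Surface, and background Handler from the basic camera2 sample; the 640x480 reader size and the class and method names here are illustrative:

    import android.graphics.ImageFormat;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.media.Image;
    import android.media.ImageReader;
    import android.os.Handler;
    import android.view.Surface;
    import java.util.Arrays;

    class StreamingPreview {

        void startStreaming(final CameraDevice camera, final Surface previewSurface,
                            final Handler backgroundHandler) throws CameraAccessException {

            // YUV_420_888 is much cheaper to produce per frame than JPEG.
            final ImageReader reader = ImageReader.newInstance(640, 480,
                    ImageFormat.YUV_420_888, /* maxImages */ 2);

            reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
                @Override
                public void onImageAvailable(ImageReader r) {
                    // Called for every frame delivered to the reader's Surface.
                    Image image = r.acquireLatestImage();
                    if (image != null) {
                        // ... process the YUV planes on the background thread ...
                        image.close();   // always close, or the reader stalls after maxImages
                    }
                }
            }, backgroundHandler);

            // The reader's Surface must be a target of the *repeating* request,
            // not only of the still-capture request.
            final CaptureRequest.Builder builder =
                    camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(previewSurface);
            builder.addTarget(reader.getSurface());

            camera.createCaptureSession(
                    Arrays.asList(previewSurface, reader.getSurface()),
                    new CameraCaptureSession.StateCallback() {
                        @Override
                        public void onConfigured(CameraCaptureSession session) {
                            try {
                                session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(CameraCaptureSession session) {
                        }
                    },
                    backgroundHandler);
        }
    }

With this setup the listener fires for every preview frame, on the thread of the Handler you pass in, which matches the goal of processing frames in the background.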

Related

How should I implement video transitions with Metal Kit and AVFoundation?

I am making a video editor, and so far I have been able to apply filters to a single frame. The way I currently have everything set up works perfectly. It's a lot of code to show and I don't really need help with the code specifically, so I'll just explain what I do. I use a video composition WITHOUT a custom compositor; it has one AVMutableVideoCompositionInstruction, and my AVMutableComposition contains multiple AVCompositionTracks (one for each asset). Each track has its own layer instructions that handle scale, orientation, and position for each video. I then extract frames using a player item video output and render the frames with Metal to apply filters and effects. This works really well and the performance is great.

Now I am faced with applying transitions, which requires me to overlap tracks in the AVMutableComposition. The problem with this is that the video output can't extract frames from specific trackIDs and will only extract the top layer. Also, when I overlap tracks, the video doesn't show at all. So I came to the conclusion that I need a custom compositor.

I implemented the compositor, but there are a few problems. I can't use layer instructions, but I know this can easily be solved by handling my transforms directly in my vertex shader. The biggest issue is that I need filters to be applied to each frame before the transition is rendered. For example, for a transition between A and B: when I extract frames from track A for the transition, I need all of track A's filters and effects to be applied; when I extract frames from track B, I need all of track B's filters to be applied. Then I need to render the transition with the filtered frames from A and B. I can do this in the compositor, but I won't be able to make live updates. I need live updates for my app; for example, changing the intensity of track A's filter with a slider should show every single increment updated live on the player. This solution doesn't allow for that, since I would have to rebuild the entire video composition to change the properties of the instructions and/or video compositor.
I've also looked into using an AVAssetReader; however, I am not sure whether it will be fast enough or able to handle seeking through videos efficiently.
So to recap, what I need is a way to extract frames from specific tracks that are overlapped and allow for live updates of any filters. If anyone can lead me in the right direction I'd appreciate it. Thank you.

How to add a real-time timestamp to streamed and recorded video for every frame in Swift?

My app uses the VideoCore project for live streaming to a Wowza server and storing the video. It also uses AVCaptureMovieFileOutput to record video offline.
I want to embed the capture timestamp at the top-left of the video, and it is not a static time: not just a static watermark, but a display of the actual capture time as it advances.
For the streaming case, I have no idea for now. For the offline case, I tried to use AVCaptureVideoDataOutput to get every frame and add a time-text overlay, but this causes the preview screen to freeze.
Any tips are helpful.
Thank you.
My platform is Xcode 7.3 + Swift 2.
I did something similar using transcoding on Wowza. The transcoder menu enables an image overlay, and this image can be refreshed every second (or less), so if you create an image with the timestamp every second, Wowza picks it up and puts it on the stream every second. You can define where to put the image, its size, and its transparency.
To create the image I use PHP, but you could use any other tool that can generate images.
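
As one example of such a tool, here is a minimal Java sketch that regenerates the overlay image once per second. The output path, image size, and font are assumptions; the Wowza transcoder overlay would be pointed at the written file.

    import java.awt.Color;
    import java.awt.Font;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.text.SimpleDateFormat;
    import java.util.Date;
    import javax.imageio.ImageIO;

    public class TimestampOverlay {
        public static void main(String[] args) throws Exception {
            SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
            // Path the Wowza overlay is configured to read from (assumption).
            File target = new File("/path/to/overlay/timestamp.png");

            while (true) {
                BufferedImage image = new BufferedImage(320, 40, BufferedImage.TYPE_INT_ARGB);
                Graphics2D g = image.createGraphics();
                g.setColor(new Color(0, 0, 0, 128));           // semi-transparent background
                g.fillRect(0, 0, image.getWidth(), image.getHeight());
                g.setColor(Color.WHITE);
                g.setFont(new Font("SansSerif", Font.BOLD, 24));
                g.drawString(format.format(new Date()), 8, 28);
                g.dispose();

                // In practice you may want to write to a temporary file and rename it,
                // so the server never reads a half-written image.
                ImageIO.write(image, "png", target);
                Thread.sleep(1000);                            // refresh once per second
            }
        }
    }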

How to get live output while processing images in ImageJ?

Everything I am describing here takes place in the 32-bit version of ImageJ (Java 1.6.0_20). The plugin I am using to get the capture stream from the camera is Civil Capture.
I got it working so that I receive image slices into a stack from a live video capture stream from a camera. Afterwards I process them; let's say I feed the values of one horizontal line of pixels of an image into an array and then plot that line with the methods from Plot and PlotWindow.
So now my problem:
When I run my plugin, it first captures a picture, which then gets processed. A window is opened for the image stack and one for the plot.
Each next incoming image is processed as well and the plot is updated. But only at the end, once as many images as I wanted have been processed, are the plot and the images actually displayed. While the processing is happening, both windows are white and don't show anything.
Is it possible to make ImageJ display the plot and the images after each processing step, and how?
I have found the Dynamic Profiler, but I didn't get it to work the way I wanted it to, especially since you have to input an image and not an array.
This is my first question, so don't be too hard on me if I forgot to give you some information.
You should be able to use ImagePlus#updateAndDraw() to update the image display.
For the Plot class, use:
Plot#show()
to display it for the first time, and:
Plot#updateImage()
to update the plot window.
Solution:
The problem was that the entire program ran in a single thread. To fix it, the plotting had to be done in a separate thread.
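
A minimal sketch of that arrangement, assuming the capture-into-a-stack setup from the question: grabNextFrame() is a hypothetical stand-in for the Civil Capture call, the stack window is refreshed with ImagePlus#updateAndDraw(), and the plot window is redrawn with PlotWindow#drawPlot() after the initial Plot#show(). With a single reused Plot instance, Plot#updateImage(), as named above, serves the same purpose.

    import ij.ImagePlus;
    import ij.gui.Plot;
    import ij.gui.PlotWindow;
    import ij.process.ImageProcessor;

    public abstract class LiveLinePlotter {

        /** Stand-in for the Civil Capture call that delivers the next camera frame. */
        protected abstract ImageProcessor grabNextFrame();

        public void start(final ImagePlus stackImage, final int frameCount) {
            // Run the capture/processing loop on its own thread so the ImageJ
            // windows can repaint while the loop is still working.
            new Thread(new Runnable() {
                public void run() {
                    PlotWindow plotWindow = null;
                    for (int f = 1; f <= frameCount; f++) {
                        ImageProcessor ip = grabNextFrame();
                        stackImage.getStack().addSlice("frame " + f, ip);
                        stackImage.setSlice(stackImage.getStackSize());
                        stackImage.updateAndDraw();                    // refresh the stack window

                        double[] y = lineProfile(ip, ip.getHeight() / 2);
                        double[] x = new double[y.length];
                        for (int i = 0; i < x.length; i++) x[i] = i;

                        Plot plot = new Plot("Line profile", "x", "value", x, y);
                        if (plotWindow == null) {
                            plotWindow = plot.show();                  // open the plot window once
                        } else {
                            plotWindow.drawPlot(plot);                 // redraw it for each new frame
                        }
                    }
                }
            }).start();
        }

        /** Values of one horizontal pixel line, as described in the question. */
        private double[] lineProfile(ImageProcessor ip, int row) {
            double[] values = new double[ip.getWidth()];
            for (int col = 0; col < values.length; col++) {
                values[col] = ip.getPixelValue(col, row);
            }
            return values;
        }
    }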

iOS AVPlayer: How to slow down a 30fps video to 1fps

I have a 30fps QuickTime .mov of still images that I created with AVAssetWriter (it's only about 10 frames long). I would like the user to be able to slow it down to about 1fps using a UISlider, but when I adjust the AVPlayer .rate property from 1 down toward 0, it doesn't get anywhere near 1fps; it just stops playback (because a rate of 0 effectively stops/pauses it, which makes sense). But how can I slow the player down to about 1fps? I think I'd need to do some math to calculate the actual rate, but that's where I'm stuck. Would it end up being something like 0.000000000000001?
Thanks!
If this were a requirement of mine, I would approach it as follows (as also suggested by Inafziger in the comments): use AVAssetReader and roll my own viewer for the images. This gives you precise control using a timer, as stated in your comments. Make sure you reuse some preallocated image memory (you can probably get away with space for a single image). I would probably take a pull approach like Core Audio: when you need an image, pull it from some image-buffer-manager class that calls AVAssetReader's read function. This way you can have N buffers that are always available. That may be a little overkill, though; I do believe AVAssetReader pre-decodes some amount of the movie upon initialization, which is why I say you can more than likely get away with a single buffer to read image data into.
From your comment about memory issues: I do believe there are some functions in AVAssetReader and its associated classes that follow the create rule.
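
As for the rate arithmetic the question asks about: the AVPlayer rate is a multiplier on normal playback speed, so (assuming the movie really plays at 30fps) showing one frame per second works out to

    \[ \text{rate} = \frac{\text{target fps}}{\text{source fps}} = \frac{1}{30} \approx 0.033 \]

which is far larger than the 0.000000000000001 guessed above.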

How to find the source video size using VMR9 renderless mode

My application uses VMR9 Renderless mode to play a WMV file. I build a filter graph with IGraphBuilder::RenderFile and control playback with IMediaControl. Everything plays okay, but I can't figure out how to determine the source video size. Any ideas?
Note: This question was asked before in How can I adjust the video to a specified size in VMR9 renderless mode?. But the solution was to use Windowless mode instead of Renderless mode, which would require rewriting my code.
First, find the video renderer filter; you can do this by using EnumFilters on the IGraphBuilder interface. Then call EnumPins on that filter to find its input pin, and call ConnectionMediaType to get the media type being fed into the filter. Depending on what formattype is set to, you can cast the pbFormat pointer to the relevant structure (e.g. VIDEOINFOHEADER or VIDEOINFOHEADER2) and read the video size from there. If you want the size further upstream (to see whether any scaling is going on), work your way back across the connection using "ConnectedTo" to get the next filter back, find its input pins, and repeat the ConnectionMediaType call. Repeat until you get to the pin of the filter you want.
You could use the MediaInfo project at http://mediainfo.sourceforge.net/hr/Download/Windows and, through the C# wrapper included in the VCS2010 or VCS2008 folders, get all the information about a video that you need.
EDIT: Sorry, I thought you were on managed code. But in either case MediaInfo can be used, so maybe it helps.
