MJPEG Stream Information

I am receiving an MJPEG stream from my camera. When I look at the video data with a hex editor, it seems that it doesn't contain any streaming information. I just see one raw JPEG after another, but no information about the framerate etc.
Is the lack of any meta information normal for MJPEG, or is it just related to the camera I am using? If there is no information about the stream, how can a player know how fast to play the video?

The lack of metadata is normal. IP cameras typically send MJPEG as just that: one JPEG image after another as a stream. This is the most basic valid MJPEG file. If you were to take a bunch of JPEGs, concatenate them into one giant file, and feed it to ffmpeg, it would see it as a valid MJPEG file. Some cameras will add an additional header to carry audio data, but that is not required for the stream to be considered valid Motion JPEG.
Many cameras will include a header such as X-Framerate in the HTTP response when the stream is initially sent, or you can set the rate as part of the camera configuration. However, when a camera sends only JPEGs, there is no way to tell from the stream itself what the framerate is.
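As an illustration of how little structure there is, here is a minimal sketch (Swift, with hypothetical names) that splits such a raw stream into individual JPEG frames simply by scanning for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers, since that is all a bare MJPEG stream gives you:

    import Foundation

    // Minimal sketch: split a raw MJPEG byte buffer into individual JPEG frames
    // by scanning for SOI (FF D8) and EOI (FF D9) markers. Assumes the camera
    // sends plain back-to-back JPEGs; embedded thumbnails carrying their own EOI
    // markers would confuse this naive scan.
    func splitMJPEG(_ stream: Data) -> [Data] {
        let bytes = [UInt8](stream)
        var frames: [Data] = []
        var start: Int?
        var i = 0
        while i + 1 < bytes.count {
            if bytes[i] == 0xFF && bytes[i + 1] == 0xD8 {         // start of image
                start = i
                i += 2
            } else if bytes[i] == 0xFF && bytes[i + 1] == 0xD9,   // end of image
                      let s = start {
                frames.append(Data(bytes[s..<(i + 2)]))
                start = nil
                i += 2
            } else {
                i += 1
            }
        }
        return frames
    }

Note that nothing in those bytes carries timing; the framerate has to come from the HTTP headers (e.g. X-Framerate) or the camera's configuration, if it is available at all.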

To add to what has already been answered: an IP camera is a live video source, and frames are typically presented as soon as they arrive from the camera. An IP camera rarely attaches extra per-frame information other than the frame size (some don't even do that; they send only data and separators). Still, some do attach timestamps and extra data such as motion detection state.
Most IP cameras don't produce a constant frame rate. That is, the frame rate may vary, in particular dropping in low-light conditions. It is the responsibility of the receiving side to attach per-frame timestamps when multiplexing the data into a container format. A timestamp might be recovered from metadata (which rarely exists) or, more frequently, the receiver stamps each frame with the local receive time.
This is what lets a player play the video sequence back at the proper rate. A live feed is typically presented on a "show the received frame as soon as possible" basis.
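To make the "receiver stamps a frame with local receive time" idea concrete, here is a hedged Swift sketch (the types and names are hypothetical) of attaching a host-clock timestamp to each frame as it arrives, so a muxer or player can later reproduce the variable rate:

    import Foundation
    import CoreMedia

    // Sketch: the camera gives no timing, so stamp each frame with the local
    // receive time. The host clock is monotonic, which is what a muxer wants.
    struct TimedFrame {
        let jpegData: Data   // one complete JPEG pulled from the stream
        let pts: CMTime      // presentation timestamp assigned on arrival
    }

    func stamp(frame jpegData: Data) -> TimedFrame {
        let now = CMClockGetTime(CMClockGetHostTimeClock())
        return TimedFrame(jpegData: jpegData, pts: now)
    }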

Normally MJPEG data is sent within a streaming media wrapper such as AVI or MOV (QuickTime). The wrapper format will contain the framerate and information about the optional audio data.

AVSampleBufferDisplayLayer plays too fast

So I have put together a sample project, https://github.com/liuxuan30/TestH264.git, that uses VideoToolbox to run an H.264 sample decoder and display a stream file captured from a camera.
The H.264 decoder using VideoToolbox is copied from the internet; I didn't write it. When I try to play my H.264 stream file, it plays too fast compared to ffmpeg or ffplay, both of which play it back at normal speed.
I wanted to ask how to fix this behaviour. Thanks.
This happens because of this constant kCMSampleAttachmentKey_DisplayImmediately:
If this key is present, the sample should be displayed as soon as possible rather than
according to its presentation timestamp. Use this attachment at run time to request this
behavior from a display pipeline such as the AVSampleBufferDisplayLayer class.
This attachment is not written to media files.
from the Apple documentation
So you have two options of displaying:
Display immediately, which is probably good for a real-time stream, when you need to show each frame as soon as possible
Display frames at their presentation timestamps
"compared to ffmpeg or ffplay, which both played back at a normal speed"
ffplay and ffmpeg most likely use the presentation timestamps at this point.
I get the same result as you from your test H.264 file, but that happens because all the decoded frames arrive at once, so the decoder displays them immediately.
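For reference, a hedged sketch of how the second option can look in code when feeding an AVSampleBufferDisplayLayer (variable names are hypothetical, and this is one way among several): clear the "display immediately" attachment and give the layer a control timebase so it honours presentation timestamps.

    import AVFoundation
    import CoreMedia

    // Sketch: make AVSampleBufferDisplayLayer pace frames by timestamp instead
    // of showing them as fast as they are decoded.
    func enqueuePaced(_ sample: CMSampleBuffer, on layer: AVSampleBufferDisplayLayer) {
        // Clear the "display immediately" attachment if the decode path set it,
        // so the sample's presentation timestamp is honoured.
        if let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: true),
           CFArrayGetCount(attachments) > 0 {
            let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
            CFDictionarySetValue(dict,
                                 Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                                 Unmanaged.passUnretained(kCFBooleanFalse).toOpaque())
        }
        layer.enqueue(sample)
    }

    // Give the layer a control timebase so "now" advances at 1x from the first PTS.
    func startTimebase(for layer: AVSampleBufferDisplayLayer, firstPTS: CMTime) {
        var timebase: CMTimebase?
        CMTimebaseCreateWithSourceClock(allocator: kCFAllocatorDefault,
                                        sourceClock: CMClockGetHostTimeClock(),
                                        timebaseOut: &timebase)
        if let tb = timebase {
            CMTimebaseSetTime(tb, time: firstPTS)
            CMTimebaseSetRate(tb, rate: 1.0)
            layer.controlTimebase = tb
        }
    }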
You can watch this video for more information about the VideoToolbox framework:
Direct Access to Video Encoding and Decoding

Transcoding fMP4 to HLS while writing on iOS using FFmpeg

TL;DR
I want to use FFmpeg on an iOS device to convert fMP4 fragments into TS segments (for HLS) as the fragments are being written.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder create gaps.
2) Since video frames are 33.33 ms long and audio frames are about 0.022 s long, it's possible for them not to line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on Mac OS but not available on iOS. Details Here
FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What Went Wrong - Every once in a while an audio-only file would get uploaded, with no video whatsoever. We were never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking in the final segments, almost as if the TS segments were incorrectly timestamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016), and I hadn't looked into it much at all before that. Since an fMP4 file can be read and played while it's being written, I thought it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly in attempt #2.
What I need from you
Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
How can I take data from a file on disk, a chunk at a time, pass it into FFmpeg, and have it spit out TS segments?
Strictly speaking, you don't need to transcode the fMP4 if it contains H.264 + AAC; you just need to repackage the sample data as TS (using ffmpeg -codec copy, or GPAC).
Regarding alignment (issues 1 and 2), I suppose this all depends on your encoder settings (frame rate, sample rate, and GOP size). It is certainly possible to make sure that audio and video align exactly at fragment boundaries (see, for example, this table). If you're targeting iOS, I would recommend using HLS protocol version 3 (or 4), which allows timing to be represented more accurately. This also allows you to stream audio and video separately (non-multiplexed).
I believe ffmpeg should be capable of pushing a live fMP4 stream (i.e. using a long-running HTTP POST), but playout requires origin software to do something meaningful with it (i.e. stream to HLS).
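On the question of knowing when AVFoundation has finished a fragment, one hedged approach (not something the 2016-era AVFoundation API reports directly, as far as this discussion establishes) is to tail the growing file and parse top-level MP4 box headers; a fragment is complete once a full 'moof' + 'mdat' pair is on disk. A rough Swift sketch, with all names hypothetical:

    import Foundation

    // Sketch: scan bytes of a growing fMP4 file and report each complete
    // top-level box ('ftyp', 'moov', 'moof', 'mdat', ...). A fragment is ready
    // once a full moof+mdat pair has been seen. Extended (64-bit) box sizes and
    // size-0 ("to end of file") boxes are not handled here.
    struct MP4Box {
        let type: String
        let range: Range<Int>
    }

    func completeBoxes(in data: Data) -> [MP4Box] {
        var boxes: [MP4Box] = []
        var offset = 0
        while offset + 8 <= data.count {
            // 32-bit big-endian size followed by a 4-character type code.
            let size = data.subdata(in: offset..<offset + 4)
                .reduce(0) { ($0 << 8) | Int($1) }
            guard size >= 8, offset + size <= data.count else { break }  // box not fully written yet
            let type = String(data: data.subdata(in: offset + 4..<offset + 8), encoding: .ascii) ?? "????"
            boxes.append(MP4Box(type: type, range: offset..<offset + size))
            offset += size
        }
        return boxes
    }

Once a complete moof + mdat pair has been seen, those bytes could be handed to whatever is driving FFmpeg for repackaging into TS.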

Identify audio sample in MDAT Atom without MOOV Atom

I am trying to write a live video broadcaster over RTSP from an iOS device. I am utilizing AVAssetWriter so I can take advantage of hardware encoding. To send over RTSP I have to get the avcC information out of the MOOV box; however, the MOOV box is only written by AVAssetWriter when you finish the session, which of course never happens here since I am streaming live.
I have gotten around this for video by encoding, writing, and then finishing a single sample buffer to a file, and then parsing that file to get the avcC information out. That works just fine.
After that, for the live stream, since AVAssetWriter will only write to a file, I write to a file and then read from that file with a chasing file offset. When I do this with video only, I can read the NALUs from the MDAT atom in the written file without any MOOV information, because the size of each NALU is given in its first 4 bytes. So I can read that amount, process it, and send it on its way over an RTSP stream. With video only, everything works perfectly fine and I get a really good HD stream to a streaming server.
The problem I am now having is when I try to incorporate audio into the stream from the mic. I can encode it just fine with AVAssetWriter, and I get a properly interleaved MP4 file to read from. However, unlike the H.264 NALUs, the audio samples in the file do not carry their size in their first bytes. So far the only way I can see to determine the sizes is with the STSZ and STCO atoms in the MOOV, which of course I don't have because it is a live stream.
With all that in mind, does anyone know a way to identify audio sample segments in an MDAT atom without the information from the MOOV atom? As soon as I figure that out, I'm home free.
Thanks in advance for any insight.
After a lot of research and emails to people, I at least have an answer, and the answer is: I can't do it this way. Normally, AAC samples in streams that don't have an index are wrapped in ADTS headers, which hold the length field for each packet. However, since I am using AVAssetWriter for the audio, and AVAssetWriter writes directly to an MP4 file, the ADTS wrapper is stripped off because of the index that will be in the MOOV atom.
Therefore I will have to encode the audio differently, probably through Audio Queue Services, and merge it into the video packets when building the RTSP stream.
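For context on why ADTS solves the framing problem, here is a hedged sketch of the standard 7-byte ADTS header that normally precedes each raw AAC frame; its 13-bit frame-length field is what lets a receiver find packet boundaries without an MP4 index (the profile and sample-rate values below are illustrative, not taken from the question):

    import Foundation

    // Sketch: build the 7-byte ADTS header (no CRC) for one raw AAC frame.
    // Assumes AAC-LC; sampleRateIndex 4 = 44.1 kHz (adjust for your stream).
    func adtsHeader(payloadLength: Int, sampleRateIndex: Int = 4, channels: Int = 2) -> [UInt8] {
        let profile = 1                       // AAC-LC = audio object type 2, stored as (2 - 1)
        let frameLength = payloadLength + 7   // total length, including this header
        var h = [UInt8](repeating: 0, count: 7)
        h[0] = 0xFF                                               // syncword (high 8 bits)
        h[1] = 0xF1                                               // syncword low bits, MPEG-4, layer 0, no CRC
        h[2] = UInt8((profile << 6) | (sampleRateIndex << 2) | ((channels >> 2) & 0x1))
        h[3] = UInt8(((channels & 0x3) << 6) | ((frameLength >> 11) & 0x3))
        h[4] = UInt8((frameLength >> 3) & 0xFF)
        h[5] = UInt8(((frameLength & 0x7) << 5) | 0x1F)           // low length bits + buffer fullness
        h[6] = 0xFC                                               // buffer fullness + 0 raw data blocks
        return h
    }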
Maybe this will help someone else in the future looking down this same road.
Many thanks to Geraint Davies at http://www.gdcl.co.uk for leading me down the right path.

Match a sound in recorded audio stream

I have a PCM stream coming in from the microphone. I am analyzing short chunks of it (in Java) to detect short spikes in sound loudness (amplitude). I have a known sound that plays periodically, and I need to know whether a detected spike is in fact a recording of this sound. I have the PCM data for the sound that is played; it's completely determined.
I have no clue where to start: should I perform some comparison in the time domain or in the frequency domain? It would be great if someone could give me some insight into how this is done and where I should dig.
Thanks.
It sounds like you want to compare an incoming set of pulses to a reference set of pulses. Cross-correlation is probably what you want to use. You may need to precondition your data first, e.g. create an envelope instead of using raw data, or the cross-correlation may fail unless the match is perfect.
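As a rough sketch of the idea (the question is in Java, but the shape is the same in any language; shown in Swift for consistency with the rest of this page, with illustrative names and normalization): slide the known reference over the incoming chunk and look for a correlation peak.

    import Foundation

    // Sketch: normalized cross-correlation of a reference pulse against an
    // incoming PCM chunk. A score near 1.0 at some lag suggests the reference
    // sound starts at that offset. Preconditioning (e.g. an amplitude envelope)
    // usually makes this more robust, as noted above.
    func crossCorrelate(signal: [Float], reference: [Float]) -> (bestLag: Int, bestScore: Float) {
        precondition(signal.count >= reference.count)
        let refEnergy = sqrt(reference.map { $0 * $0 }.reduce(0, +))
        var bestLag = 0
        var bestScore: Float = -1
        for lag in 0...(signal.count - reference.count) {
            var dot: Float = 0
            var energy: Float = 0
            for i in 0..<reference.count {
                dot += signal[lag + i] * reference[i]
                energy += signal[lag + i] * signal[lag + i]
            }
            let denom = sqrt(energy) * refEnergy
            let score = denom > 0 ? dot / denom : 0
            if score > bestScore {
                bestScore = score
                bestLag = lag
            }
        }
        return (bestLag, bestScore)
    }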

Is it possible for AVAssetWriter to output to memory

I would like to write an iPhone app that continuously captures video, H.264-encodes it in 10-second intervals, and uploads the segments to a storage server. This can be done with AVAssetWriter, and I can keep deleting the old files as I create new ones. However, since flash memory has a limited number of write cycles, this scheme will wear out the flash after a few thousand cycles. Is there a way to redirect AVAssetWriter to memory, or to create a RAM drive on the iPhone?
Thanks!
Yes, AVAssetWriter is the only way to get to the hardware encoder, and simply reading back the file while it's being written doesn't give you the MOOV atom, so AVFoundation- or MPMediaPlayer-based players won't be able to read it back. You only have a couple of choices: periodically stop the AVAssetWriter and finish the file on a background thread, effectively segmenting your movie into smaller complete files; or deal with the incomplete MP4 on the server side, where you will have to parse the raw NALUs and recreate the missing MOOV atom. If you're using FFmpeg, mov.c is the source to look at; it is also where an incomplete MP4 file would fail.
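A hedged sketch of the first option (segmenting into small, complete files), using the 10-second interval from the question; the class and method names here are hypothetical, not a documented recipe:

    import AVFoundation
    import CoreMedia

    // Sketch: roll AVAssetWriter periodically so each segment is a complete,
    // playable MP4 with its own moov atom, then upload and delete it.
    final class SegmentedWriter {
        private var writer: AVAssetWriter?
        private var videoInput: AVAssetWriterInput?
        private var segmentIndex = 0

        func startNewSegment(size: CGSize) throws {
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent("segment-\(segmentIndex).mp4")
            segmentIndex += 1
            let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
            let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
                AVVideoCodecKey: AVVideoCodecType.h264,
                AVVideoWidthKey: Int(size.width),
                AVVideoHeightKey: Int(size.height)
            ])
            input.expectsMediaDataInRealTime = true
            writer.add(input)
            self.writer = writer
            self.videoInput = input
        }

        func append(_ sampleBuffer: CMSampleBuffer) {
            guard let writer = writer, let input = videoInput else { return }
            if writer.status == .unknown {
                writer.startWriting()
                writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            }
            if input.isReadyForMoreMediaData {
                input.append(sampleBuffer)
            }
        }

        // Call roughly every 10 s: finish the current file (which writes its
        // moov atom), hand it off for upload and deletion, and start the next.
        func rollSegment(size: CGSize, completion: @escaping (URL) -> Void) {
            guard let finishing = writer else { return }
            let finishedURL = finishing.outputURL
            videoInput?.markAsFinished()
            finishing.finishWriting { completion(finishedURL) }
            try? startNewSegment(size: size)
        }
    }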
