UYVY format to be converted - image-processing

I'm grabbing frames from a video stream, and as a first step of my application I captured one frame from a video port so that I could transmit the raw UYVY video data. After running, this data is stored in a .dat file.
Meanwhile, before transmitting the raw data, I am looking for a way to display the decoded information stored in the .dat file. In other words, is there software that would convert the raw UYVY data into a picture?
Thank you for your assistance

Each of your .dat files is a UYVY 4:2:2 image that can be displayed.
Without any details about your platform and your runtime context, I would recommend using the FFmpeg suite.
Using ffplay you could display your image with
ffplay -video_size WIDTHxHEIGHT -pixel_format uyvy422 filename.dat
As you can see, the only thing you need to know is the image size!
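If you would rather write an ordinary image file than just display the frame, FFmpeg's rawvideo demuxer should also work; for example (assuming the .dat file contains exactly one frame, and treating the file name and WIDTHxHEIGHT as placeholders):
ffmpeg -f rawvideo -pixel_format uyvy422 -video_size WIDTHxHEIGHT -i filename.dat -frames:v 1 output.png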

Related

AVSampleBufferDisplayLayer plays too fast

So I have put together a sample project, https://github.com/liuxuan30/TestH264.git, that uses VideoToolbox with a sample H.264 decoder to display a stream file captured from a camera.
The VideoToolbox-based H.264 decoder is copied from the internet; I didn't write it. When I play my H.264 stream file, it plays too fast compared to ffmpeg or ffplay, which both play it back at normal speed.
I wanted to ask: how can I fix this behaviour? Thanks.
This happens because of this constant kCMSampleAttachmentKey_DisplayImmediately:
If this key is present, the sample should be displayed as soon as possible rather than
according to its presentation timestamp. Use this attachment at run time to request this
behavior from a display pipeline such as the AVSampleBufferDisplayLayer class.
This attachment is not written to media files.
from the Apple documentation
So you have two options for displaying:
Display immediately - probably the right choice for a real-time stream, when you need to show each frame as soon as possible
Display frames at their presentation timestamps
Regarding "comparing to ffmpeg or ffplay, which both played back at a normal speed":
ffplay and ffmpeg probably use the presentation timestamps at this point.
I get the same result as you with your test H.264 file, but that happens because all the decoded frames arrive at once, so the decoder displays them immediately.
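If you want timestamp-driven playback instead, you can clear that attachment on each sample buffer before enqueueing it. A minimal sketch in C (Core Media) follows; the surrounding decode loop is assumed, and the buffers must also carry valid presentation timestamps (and the layer a control timebase) for this to have any effect:

    #include <CoreMedia/CoreMedia.h>
    #include <stdbool.h>

    static void RequestTimedDisplay(CMSampleBufferRef sampleBuffer) {
        // Passing true creates the attachments array if it does not exist yet.
        CFArrayRef attachments =
            CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true);
        if (attachments && CFArrayGetCount(attachments) > 0) {
            CFMutableDictionaryRef dict =
                (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
            // kCFBooleanFalse (or removing the key) asks the display pipeline to
            // honor the buffer's presentation timestamp instead of showing the
            // frame as soon as possible.
            CFDictionarySetValue(dict,
                                 kCMSampleAttachmentKey_DisplayImmediately,
                                 kCFBooleanFalse);
        }
    }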
You can watch this video for more information about VideoToolbox framework:
Direct Access to Video Encoding and Decoding

Transcoding fMP4 to HLS while writing on iOS using FFmpeg

TL;DR
I want to convert fMP4 fragments to TS segments (for HLS) as the fragments are being written using FFmpeg on an iOS device.
Why?
I'm trying to achieve live uploading on iOS while maintaining a seamless, HD copy locally.
What I've tried
Attempt #1: Rolling AVAssetWriters where each writes for 8 seconds, then concatenating the MP4s together via FFmpeg.
What went wrong - There are blips in the audio and video at times. I've identified 3 reasons for this.
1) Priming frames for audio written by the AAC encoder creating gaps.
2) Since video frames are 33.33 ms long and audio frames are roughly 22 ms (0.022 s) long, it's possible for them not to line up at the end of a file.
3) The lack of frame-accurate encoding, which is present on Mac OS but not available on iOS. Details Here
Attempt #2: FFmpeg muxing a large video-only MP4 file with raw audio into TS segments. The work was based on the Kickflip SDK.
What Went Wrong - Every once in a while an audio only file would get uploaded, with no video whatsoever. Never able to reproduce it in-house, but it was pretty upsetting to our users when they didn't record what they thought they did. There were also issues with accurate seeking on the final segments, almost like the TS segments were incorrectly time stamped.
What I'm thinking now
Apple was pushing fMP4 at WWDC this year (2016) and I hadn't looked into it much at all before that. Since an fMP4 file can be read, and played while it's being written, I thought that it would be possible for FFmpeg to transcode the file as it's being written as well, as long as we hold off sending the bytes to FFmpeg until each fragment within the file is finished.
However, I'm not familiar enough with the FFmpeg C API; I only used it briefly in attempt #2.
What I need from you
Is this a feasible solution? Is anybody familiar enough with fMP4 to know if I can actually accomplish this?
How will I know that AVFoundation has finished writing a fragment within the file so that I can pipe it into FFmpeg?
How can I take data from a file on disk, chunk at a time, pass it into FFmpeg and have it spit out TS segments?
Strictly speaking, you don't need to transcode the fMP4 if it contains H.264 + AAC; you just need to repackage the sample data as TS (using ffmpeg -codec copy, or GPAC).
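For example, once a finished fragmented MP4 is readable on disk, a pure remux into HLS/TS segments (no re-encode) might look like this; the file names and segment length are placeholders:
ffmpeg -i fragmented.mp4 -c copy -f hls -hls_time 6 playlist.m3u8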
Regarding alignment (points 1 and 2), I suppose this all depends on your encoder settings (frame rate, sample rate and GOP size). It is certainly possible to make sure that audio and video align exactly at fragment boundaries (see for example: this table). If you're targeting iOS, I would recommend using HLS protocol version 3 (or 4), which allows timing to be represented more accurately. This also allows you to stream audio and video separately (non-multiplexed).
I believe ffmpeg should be capable of pushing a live fMP4 stream (i.e. using a long-running HTTP POST), but playout requires the origin software to do something meaningful with it (i.e. stream it out as HLS).

FFmpeg save stream to mp3

I have an iOS project that plays online radio streams; it uses FFmpeg for playback. I also added the ability to record streams: I decode the stream via the avcodec_decode_audio4 function and write the output to a .wav file. But these files are too big because WAV is an uncompressed format, so I want to encode to .mp3 instead.
I have found a couple of ways to convert audio, but only when the audio is already a complete file; I want to encode to a compressed format as soon as I get a chunk of data from the stream, not from a finished file.
Is it possible?
Can you give me some advice on how to achieve this?
You can use ffmpeg (aka libav) to encode the audio you're reading with avcodec_decode_audio4 into a file as mp3, as long as libav was configured with lame (--enable-libmp3lame).
Basically, you configure an mp3 codec, then call avcodec_encode_audio2 (who names these things?) on the progressive output of avcodec_decode_audio4.
The canonical example can be confusing because it also deals with video, but you should be able to tease the details you want out of it.
This post on transcoding audio by arashafiei is broadly helpful.
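A rough sketch of that configure-then-encode flow, using the same (old) API names the answer mentions, might look like the following. It is not production code: error handling is omitted, and real code usually also needs libswresample plus an AVAudioFifo to match the decoder's sample format and the encoder's frame size.

    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <stdio.h>

    /* Configure an MP3 encoder (assumes FFmpeg was built with --enable-libmp3lame). */
    AVCodecContext *open_mp3_encoder(int sample_rate)
    {
        AVCodec *codec = avcodec_find_encoder_by_name("libmp3lame");
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->sample_rate    = sample_rate;
        ctx->channels       = 2;
        ctx->channel_layout = AV_CH_LAYOUT_STEREO;
        ctx->sample_fmt     = AV_SAMPLE_FMT_S16P;  /* libmp3lame wants planar samples */
        ctx->bit_rate       = 128000;
        avcodec_open2(ctx, codec, NULL);           /* check the return value in real code */
        return ctx;
    }

    /* Call this for every AVFrame produced by avcodec_decode_audio4(). */
    void encode_frame_to_mp3(AVCodecContext *enc, AVFrame *frame, FILE *out)
    {
        AVPacket pkt;
        int got_packet = 0;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        if (avcodec_encode_audio2(enc, &pkt, frame, &got_packet) == 0 && got_packet) {
            fwrite(pkt.data, 1, pkt.size, out);    /* raw MP3 packets can simply be appended */
            av_packet_unref(&pkt);
        }
    }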

Audio format to choose for Big audio files

Which audio file format is best to use for large audio files? I have many large audio files to be used in my app, but their current MP3 size is hundreds of MB.
If you want to save storage on audio files, the file format may not change the file size much; reducing the bit rate (for example from 320 kbps to 128 kbps) can reduce the file size significantly.
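For example, with FFmpeg a re-encode to a lower bit rate could look like this (file names are placeholders, and note that re-encoding an already lossy MP3 costs some quality):
ffmpeg -i input.mp3 -b:a 128k output.mp3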
How can I do it using Microsoft's Audio Compression Manager? (In practice it's not well documented on MSDN.)
Windows provides codecs that specifically compress audio files. The audio files are typically in PCM format (WAVE_FORMAT_PCM) and get played using the simplest DirectSound method (check MSDN; it's at hand and it works).
To play a PCM file using DirectSound, you first create a DirectSound object, create a DirectSoundBuffer, and then pump the PCM data directly into the buffer using a keep-the-buffer-filled algorithm.
If you wish to use codecs, you write a procedure that opens a stream file and passes it through an ACM driver object, thus (de)compressing it.
The ACM (Audio Compression Manager) driver finds a codec that suits the input source and decompresses it back to WAVE_FORMAT_PCM so that your app can play it.
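A rough sketch of that ACM convert-to-PCM step, under the assumption that you already have the compressed data and its WAVEFORMATEX in memory (all names here are placeholders, and every MMRESULT should be checked in real code):

    #include <windows.h>
    #include <mmreg.h>
    #include <msacm.h>
    #pragma comment(lib, "msacm32.lib")

    BOOL DecompressChunk(WAVEFORMATEX *srcFmt, BYTE *srcData, DWORD srcLen,
                         WAVEFORMATEX *pcmFmt, BYTE *dstBuf, DWORD dstCap)
    {
        HACMSTREAM stream = NULL;
        /* Let ACM pick a driver that can convert srcFmt -> WAVE_FORMAT_PCM. */
        if (acmStreamOpen(&stream, NULL, srcFmt, pcmFmt, NULL, 0, 0,
                          ACM_STREAMOPENF_NONREALTIME) != MMSYSERR_NOERROR)
            return FALSE;

        ACMSTREAMHEADER hdr = {0};
        hdr.cbStruct    = sizeof(hdr);
        hdr.pbSrc       = srcData;
        hdr.cbSrcLength = srcLen;
        hdr.pbDst       = dstBuf;
        hdr.cbDstLength = dstCap;

        acmStreamPrepareHeader(stream, &hdr, 0);
        acmStreamConvert(stream, &hdr, ACM_STREAMCONVERTF_BLOCKALIGN);
        acmStreamUnprepareHeader(stream, &hdr, 0);
        acmStreamClose(stream, 0);
        /* hdr.cbDstLengthUsed now holds how many PCM bytes were produced. */
        return TRUE;
    }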

MJPEG Stream Information

I am receiving an MJPEG stream from my camera. When I look at the video data with a hex editor, it seems that it doesn't contain any streaming information; I just see one raw JPEG after another, but no information about the frame rate etc.
Is the lack of any meta information normal for MJPEG, or is it just related to the camera I am using? If there is no information about the stream, how can a player know how fast to play the video?
The lack of metadata is normal. IP cameras typically send MJPEG as just that: one JPEG image after another as a stream. This is the most basic valid MJPEG file. If you were to take a bunch of JPEGs, concatenate them into one giant file, and feed it to ffmpeg, it would see it as a valid MJPEG-format file. Some cameras will add an additional header to carry audio data, but it is not needed for the stream to be considered valid Motion JPEG.
Many cameras will include a header like X-Framerate in the HTTP headers when the stream is initially sent, or you can set it as part of the camera configuration. However, when a camera sends only JPEGs, there is no way to tell from the stream itself what the frame rate is.
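That is why, when saving or remuxing such a stream, you have to supply the frame rate yourself. For example, with FFmpeg's mjpeg demuxer (the file names and the 15 fps figure are assumptions):
ffmpeg -f mjpeg -framerate 15 -i camera.mjpeg -c copy output.avi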
Is the lack of any meta information normal for MJPEG, or is it just related to the camera I am using? If there is no information about the stream, how can a player know how fast to play the video?
To add to what has already been answered: an IP camera is a live video source, and frames are typically presented as soon as they arrive from the camera. An IP camera rarely attaches extra per-frame information other than frame size (some don't even do this; they send data and separators only). Still, some do attach timestamps and extra data such as motion detection state.
Most IP cameras don't produce a constant frame rate; the frame rate might vary, especially dropping in low-light conditions. It is the responsibility of the receiving side to attach per-frame timestamps when multiplexing the data into a container format. The timestamp might be recovered from metadata (which rarely exists) or, more frequently, the receiver stamps each frame with the local receive time.
This is how the player can play back the video sequence at the proper rate. A live feed is typically presented on a "show the received frame as soon as possible" basis.
Normally MJPEG data is sent within a streaming media wrapper such as AVI or MOV (QuickTime). The wrapper format will contain the frame rate and information about the optional audio data.
