I stream the webcam with FFmpeg (command line) over UDP. On the client side I use Java OpenCV; the capture line is VideoCapture.open("udp://xx.xx.xx.xx:xx"). If I send the stream as MPEG-TS (ffmpeg -f mpegts), I can display it, but if I send it as raw video (ffmpeg -f rawvideo), I can't.
Is there any parameter to set (like CvType)?
MPEG-TS has properties that are specifically designed for transmission over a one-way lossy transport like UDP or digital television. It has packets that are repeated every 100 ms to tell the reader how to bootstrap decoding, it has start-of-frame flags (the payload unit start indicator), it has a packet counter to detect skipped and out-of-order packets, and several other important features.
Raw video has none of this. It's just a bunch of bytes. If a single packet is lost (including the first packet), the decoder has no idea where frames start and end, and it cannot reconstruct the stream.
Therefore such a feature is generally not supported in video tools. If you need to send raw video, use TCP, not UDP.
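For reference, the contrast on the sending side looks roughly like this (a sketch only; the capture device and addresses are placeholders, adjust for your platform):

# MPEG-TS over UDP: the container adds the sync/timing information described above
ffmpeg -f v4l2 -i /dev/video0 -c:v libx264 -f mpegts udp://192.168.1.10:1234

# Raw video has no such framing, so if you really need it, send it over TCP instead
ffmpeg -f v4l2 -i /dev/video0 -f rawvideo -pix_fmt yuv420p tcp://192.168.1.10:1234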
I am troubleshooting an Audio Over IP network system which uses multicast streams to pass audio over ethernet.
When a customer has pops/clicks in his or her audio, it usually means multicast flooding or some other network issue. I can use Wireshark to capture the packets and see this happening.
However, I would also like to be able to listen to the audio stream if possible so that I can hear what is happening. I can do this easily when I am working with VoIP calls but this is not VoIP/SIP.
I have turned on the RTP_RTSP and RTP_UDP protocols and can isolate the streams. But when I try to play one, or save it as an .au file, I am unable to do so; the streams pretty much always look the same when I hit "Play".
Am I missing something, or is it only possible to play VoIP streams, not ordinary AoIP streams, in Wireshark?
First, the Wireshark audio player supports only a limited set of codecs; check whether your encoding is among them. Second, the number of packets in your RTP stream looks too big for any "regular" codec: 1063 packets for ~6 seconds of recording, whereas, for example, G.711 with 10 ms packets produces 100 packets per second, i.e. only ~600 packets for 6 seconds.
I have an AXIS IP camera (M1054) which sends an H.264/RTP stream via RTSP.
Unfortunately, it does not send SPS and PPS NALUs at all; it only transfers (fragmented) coded slices.
I'm trying to decode that stream with the iOS VideoToolbox framework, which needs the H.264 SPS and PPS tuple to correctly set up the CMFormatDescription.
How can I synthesize the necessary parameter sets by looking at the actual H.264 slices?
Update: I have captured, via Wireshark, an example session in which mplayer manages to display the stream. The capture file is here, and you can see the whole RTSP setup as well as a couple of seconds of RTP.
Streaming with RTP actually involves three flows:
RTP for the media
RTSP for controlling the connection
RTCP for sender reports and timestamps.
Although the SPS/PPS is often carried in band inside the stream and transported via RTP, it doesn't need to be there (and maybe shouldn't be). The SPS/PPS is transmitted as part of the setup process (RTSP), typically in the base64-encoded sprop-parameter-sets field of the SDP returned by the DESCRIBE request. I usually recommend running http://www.live555.com/ in the debugger to learn the details of the process, but http://www.live555.com/ is currently down.
In very rare circumstances you could recreate the SPS/PPS from a well-known, constrained H.264 stream, but in general you can't. The SPS/PPS are metadata of the H.264 stream that are not redundantly stored anywhere else.
So if you familiarize yourself with the setup process (RTSP), it will be pretty obvious.
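To make that concrete on the iOS side: the two comma-separated, base64-encoded fields of the SDP's sprop-parameter-sets attribute are the SPS and PPS NALUs, which is exactly what VideoToolbox needs. A minimal Swift sketch (error handling and NALU validation kept to a minimum):

import CoreMedia
import Foundation

// Pass in the raw value of the sprop-parameter-sets attribute from the a=fmtp line of the SDP.
func formatDescription(fromSprop sprop: String) -> CMFormatDescription? {
    let fields = sprop.split(separator: ",").map(String.init)
    guard fields.count >= 2,
          let sps = Data(base64Encoded: fields[0]),   // first field decodes to the SPS NALU
          let pps = Data(base64Encoded: fields[1]),   // second field decodes to the PPS NALU
          !sps.isEmpty, !pps.isEmpty
    else { return nil }

    var description: CMFormatDescription?
    let status = sps.withUnsafeBytes { spsRaw -> OSStatus in
        pps.withUnsafeBytes { ppsRaw -> OSStatus in
            let pointers: [UnsafePointer<UInt8>] = [
                spsRaw.bindMemory(to: UInt8.self).baseAddress!,
                ppsRaw.bindMemory(to: UInt8.self).baseAddress!
            ]
            let sizes = [sps.count, pps.count]
            // 4 is the NAL length-prefix size used later when feeding AVCC-style sample buffers.
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,
                formatDescriptionOut: &description)
        }
    }
    return status == noErr ? description : nil
}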
I am trying to stream RTP packets (carrying audio) from an RTP URL, e.g. rtp://#225.0.0.0.
After a lot of research I have managed to stream the URL on my device and play it with https://github.com/maknapp/vlckitSwiftSample.
This only plays the streamed data; it does not have any function to store the data.
From research and other sources I didn't find much content or simple information that would help with receiving the packets over RTP and storing them on an iOS device.
I have tried the following links:
https://github.com/kewlbear/FFmpeg-iOS-build-script
https://github.com/chrisballinger/FFmpeg-iOS
These two do not even compile, due to pod issues; other projects or guides only give me references for RTSP streams instead of RTP streams.
If anyone can give us guidance or any idea of how to implement this, it will be appreciated.
First and foremost, you need to understand how this works.
The sender, i.e. the creator of the RTP stream, is probably doing the following:
Uses a source for the data: in the case of audio, this could be the microphone, audio samples, or a file
Encodes the audio using an audio codec such as AAC or Opus
Uses an RTP packetizer to create RTP packets from the encoded audio frames
Uses a transport layer such as UDP to send these packets
Protocols such as RTSP provide the necessary signaling to describe the stream. Usually RTP by itself isn't enough, as things such as congestion control, feedback, and dynamic bit rate are handled with the help of RTCP.
Anyway, in order to store the incoming stream, you need to do the following:
Use an RTP depacketizer to get the encoded audio frames out of the stream. You can write your own or use a third-party implementation. ffmpeg is a big framework that has the necessary code for most codecs and protocols, but for your case a simple RTP depacketizer is enough (a sketch follows these steps). The payload format may add codec-specific headers, so make sure you refer to the correct RFC for the codec in use.
Once you have access to the encoded frames, you can write them into a media container such as m4a or ogg, depending on the audio codec used in the stream.
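As mentioned above, here is a minimal depacketizer sketch in Swift (it assumes no RTP header extension, no packet reordering, and it ignores codec-specific payload headers such as AAC or Opus payload framing):

import Foundation

struct RTPPacket {
    let payloadType: UInt8
    let sequenceNumber: UInt16
    let timestamp: UInt32
    let ssrc: UInt32
    let payload: Data
}

// One UDP datagram in, header fields plus the codec payload out.
func depacketize(_ datagram: Data) -> RTPPacket? {
    let b = [UInt8](datagram)
    guard b.count >= 12, (b[0] >> 6) == 2 else { return nil }  // 12-byte fixed header, RTP version 2
    let csrcCount = Int(b[0] & 0x0F)                           // optional CSRC entries, 4 bytes each
    let headerLength = 12 + csrcCount * 4
    guard b.count > headerLength else { return nil }

    return RTPPacket(
        payloadType: b[1] & 0x7F,
        sequenceNumber: UInt16(b[2]) << 8 | UInt16(b[3]),
        timestamp: UInt32(b[4]) << 24 | UInt32(b[5]) << 16 | UInt32(b[6]) << 8 | UInt32(b[7]),
        ssrc: UInt32(b[8]) << 24 | UInt32(b[9]) << 16 | UInt32(b[10]) << 8 | UInt32(b[11]),
        payload: Data(b[headerLength...]))
}

Each received UDP datagram is one RTP packet; the payload (after any payload-format header) is what you hand to the container writer or the decoder, using the sequence number and timestamp to order and time the frames.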
In order to play the stream, you need to do the following:
Use an RTP depacketizer to get the encoded audio frames out of the stream, exactly as in the storing case above.
Once you have access to the encoded frames, use an audio decoder (available as a library) to decode them, or check whether your platform supports that codec directly for playback.
Once you have the decoded frames, on iOS you can use AVFoundation to play them (see the sketch at the end of this answer).
If you are looking for an easy way to do it, maybe use a third-party implementation such as http://audiokit.io/.
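To make the last step concrete, here is a small AVFoundation sketch (the format is a placeholder; it assumes your decoder hands you float PCM samples):

import AVFoundation

// Assumed output format: 48 kHz mono float PCM. Adjust to whatever your decoder produces.
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 1)!
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: format)
do { try engine.start() } catch { print("engine start failed: \(error)") }
player.play()

// Call this for every chunk of decoded samples coming out of the decoder.
func enqueue(samples: [Float]) {
    let frameCount = AVAudioFrameCount(samples.count)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
    buffer.frameLength = frameCount
    for i in 0..<samples.count {
        buffer.floatChannelData![0][i] = samples[i]   // mono: copy into channel 0
    }
    player.scheduleBuffer(buffer, completionHandler: nil)
}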
Context
Most RTP streams (e.g. from an IP camera) need some information from an SDP in order to be decoded.
The SDP is usually fetched just in time, typically from an RTSP URL, but other means are possible (e.g. HTTP).
Specific case
We have a situation where an RTP stream (from a camera, sent over UDP at all times whether anyone listens or not) will be played using VLC, but providing VLC with an RTSP URL to fetch the SDP just in time is not an option.
There is an RTSP service, but we need to query it in advance and dump the resulting SDP file to feed to VLC later. Doing an RTSP query just in time is useless anyway, since the stream exists at all times.
How to do that with VLC?
Search before you post
Of course I've been searching Google, videolan wiki and StackExchange.
Information is difficult to find because when people talk about streaming, RTSP, and RTP, they are generally using VLC to generate an RTP stream, or outputting an SDP that VLC generates because it does the encoding, etc.
That's not the case here. The SDP to dump comes from the server via a single RTSP query.
Question
Basically, I'm looking for a command-line like:
vlc --sout...something...rtsp://sourceIP:Port/...something...out...myfile.sdp
That would dump the SDP into myfile.sdp.
Then, later, running vlc with myfile.sdp as its argument is expected to play the stream.
We did not find a solution using VLC alone (I even looked a little at the VLC source code), so we used a somewhat "brute force" solution, but hey, it works.
What we do at configuration time is ask VLC to play the stream once while Wireshark captures packets with the filter rtsp and sdp. One packet appears containing the SDP data we want. We select it, use "extract selected bytes to ...", and save to a file whose name ends in .sdp.
That gives us a file containing the SDP information we want. Job done.
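For reference, the extracted file is just a short text description of the stream, something along these lines (every value below is a placeholder; the real attributes come from the camera's DESCRIBE reply):

v=0
o=- 0 0 IN IP4 192.168.0.20
s=Camera stream
c=IN IP4 192.168.0.20
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;sprop-parameter-sets=...

Running vlc with that file as its argument, as described above, then plays the stream.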
I am trying to make an application that streams audio through a TCP connection, using Delphi 7 and Indy 9.
More clearly: how can I capture input from the client's microphone and send it to a TCP or HTTP server? Consider it real time.
Thank you
I have never done this, but I think you can start with the basics...
Set the sample rate to be used; 8000 Hz is a good choice
Choose a chunk size to capture from the microphone (1024, 2048, 4096 bytes, etc.)
Capture the audio from the microphone as short int or float32 samples (raw audio)
Put each chunk into a socket buffer, preferably UDP, and send it to the other side over the UDP connection
If you run this process in a loop, you are sending audio data over the socket
The other side just needs to read the data from each UDP datagram and play it
These steps are basic audio streaming :-)
In the future you might want to work with queuing, but that's another story
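Not Delphi/Indy, but just to make the send loop above concrete, here is a sketch over a plain UDP socket (the address, chunk size, and the raw PCM source file are all placeholders):

import Foundation
import Network

// Placeholder destination and a placeholder raw-PCM source file.
let connection = NWConnection(host: "192.168.0.10", port: 5004, using: .udp)
connection.start(queue: .global())

let chunkSize = 2048                                             // bytes per datagram
let pcm = try! Data(contentsOf: URL(fileURLWithPath: "capture.raw"))

var offset = 0
while offset < pcm.count {
    let end = min(offset + chunkSize, pcm.count)
    // Each chunk becomes one UDP datagram; the receiver plays them as they arrive.
    connection.send(content: pcm.subdata(in: offset..<end),
                    completion: .contentProcessed({ _ in }))
    offset = end
    // Pace the chunks at the sample rate: 2048 bytes of 16-bit mono at 8000 Hz is 128 ms of audio.
    Thread.sleep(forTimeInterval: 0.128)
}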