Can I listen to a non-VoIP audio stream captured in Wireshark?

I am troubleshooting an Audio Over IP network system which uses multicast streams to pass audio over Ethernet.
When a customer has pops/clicks in his or her audio, it usually means multicast flooding or some other network issue. I can use Wireshark to capture the packets and see this happening.
However, I would also like to be able to listen to the audio stream if possible, so that I can hear what is happening. I can do this easily when I am working with VoIP calls, but this is not VoIP/SIP.
I have turned on the RTP_RTSP and RTP_UDP protocols and can isolate the streams. But when I try to play one, or save it as an .au file, I am unable to do so. This is what they pretty much always look like when I hit "Play":
Am I missing something, or is it only possible to play VoIP streams, not ordinary AoIP streams, in Wireshark?

First, the Wireshark audio player supports a limited number of codecs; see here. Check whether your encoding is in that list. Second, the number of packets in your RTP stream looks too large for any "regular" codec: 1063 packets for ~6 seconds of recording (for example, G.711 with 10 ms packetization gives you only ~600 packets).
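As a quick sanity check on that packet count, you can estimate how many RTP packets a given packetization interval should produce (the helper below is illustrative, not part of Wireshark):

```python
def expected_packets(duration_s: float, packet_ms: float) -> float:
    """Rough number of RTP packets for a stream of duration_s seconds
    when each packet carries packet_ms milliseconds of audio."""
    return duration_s * 1000.0 / packet_ms

# ~6 s of G.711 at 10 ms per packet -> ~600 packets
print(expected_packets(6, 10))
```

Working backwards, 1063 packets in ~6 seconds comes out to roughly 5.6 ms of audio per packet, which suggests a non-telephony packetization and hence a codec the Wireshark player may not handle.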

Displaying UDP Multicast Rawvideo Stream

I stream a webcam with FFmpeg (command line) via UDP as above. On the client side I use Java OpenCV; the capture line is VideoCapture.open("udp://xx.xx.xx.xx:xx"). If I send the stream as mpegts (ffmpeg -f mpegts), I can display it, but if I send it as rawvideo (ffmpeg -f rawvideo), I can't.
Is there any parameter to set (like CvType)?
Mpegts has properties that are specifically designed for transmission over a one-way lossy transport like UDP, or digital television. It has packets that are repeated every 100 ms to tell the reader how to bootstrap decoding, it has start-of-frame flags (the payload unit start indicator), it has a packet counter to detect skipped and out-of-order packets, and several other important features.
Raw video has none of this. It's just a bunch of bytes. If a single packet is lost (including the first packet), the decoder has no idea where frames start and end, and is unable to reconstruct the stream.
Therefore such a feature is generally not supported in video tools. If you need to send raw video, use TCP, not UDP.
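To make those mpegts features concrete, here is a minimal sketch (field layout per the MPEG-TS header format; the helper name is my own) that pulls the payload-unit-start flag and continuity counter out of a 188-byte TS packet:

```python
def parse_ts_header(pkt: bytes) -> dict:
    # Every MPEG-TS packet is 188 bytes and begins with the sync byte 0x47,
    # which is how a reader resynchronizes mid-stream after packet loss.
    if len(pkt) != 188 or pkt[0] != 0x47:
        raise ValueError("not an MPEG-TS packet")
    return {
        # payload unit start indicator: marks the start of a new payload unit
        "pusi": bool(pkt[1] & 0x40),
        # packet identifier: which elementary stream this packet belongs to
        "pid": ((pkt[1] & 0x1F) << 8) | pkt[2],
        # 4-bit per-PID counter: gaps reveal skipped or reordered packets
        "continuity_counter": pkt[3] & 0x0F,
    }
```

Raw video has no such header, so a receiver that misses one datagram has no way to find the next frame boundary.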

Creating an RTSP client for live audio and video broadcasting in Objective-C

I am trying to create an RTSP client that live-broadcasts audio and video. I modified the iOS code at http://www.gdcl.co.uk/downloads.htm and am able to broadcast the video to the server properly. But now I am facing issues broadcasting the audio part. In the linked example, the code writes the video data to a file, then reads the data back from the file and uploads the video NALUs to the RTSP server.
For the audio part I am not sure how to proceed. So far I have tried getting the audio buffer from the mic and then broadcasting it to the server directly by adding RTP headers and ALU, but this approach is not working properly: the audio starts lagging behind, and the lag increases with time. Can someone let me know if there is a better approach to achieve this, with lip-synced audio/video?
Are you losing any packets on the client? If so, you need to leave "space" for them: if you receive packets 1, 2, 3, 4, 6, 7, you need to leave space for the missing packet (5).
The other possibility is what is known as a clock drift problem: the clocks (crystals) on your client and server are not perfectly in sync with each other.
This can be caused by environment, temperature changes, etc.
Let's say in a perfect world your server is producing 20 ms audio packets at 48000 Hz, and your client is playing them back at a sample rate of 48000 Hz. Realistically, neither clock is exactly 48000 Hz. Your server might be 48000.001 and your client might be 47999.9998, so your server might be delivering faster than your client, or vice versa. You would either consume packets too fast and underrun the buffer, or lag too far behind and overflow the client buffer. In your case, it sounds like the client is playing back too slowly and gradually lagging behind the server. You might only lag a couple of milliseconds per minute, but the issue keeps compounding, and eventually it will look like a 1970s lip-synced kung fu movie.
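A quick back-of-the-envelope helper (illustrative only, not from the original answer) shows how a tiny sample-rate mismatch turns into accumulated lag:

```python
def drift_ms(server_hz: float, client_hz: float, seconds: float) -> float:
    """Milliseconds of lag accumulated after `seconds` of playback when the
    server produces samples at server_hz but the client consumes at client_hz."""
    surplus_samples = (server_hz - client_hz) * seconds
    return surplus_samples / client_hz * 1000.0

# a server clock only 1.6 Hz fast at 48 kHz already drifts ~2 ms per minute
print(drift_ms(48001.6, 48000.0, 60))
```

Over an hour-long session that is already more than 100 ms, well into visible lip-sync territory.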
In other domains, there is often a common clock line to keep devices in sync: for example, video camera clocks, MIDI clocks, multitrack recorder clocks.
When you deliver data over IP, there is no common clock shared between the client and server. So your issue is one of syncing clocks between disparate devices with no common reference. I have successfully solved this problem using this general approach:
A) Let the client count the rate of packets that come in over a period of time.
B) Let the client count the rate that the packets are consumed (played back).
C) Adjust the sample rate of the client based on A and B.
So your client needs to adjust the sample rate of the playback: yes, you play it faster or slower. Note that the playback-rate change will be very subtle. You might set the sample rate to 48000.0001 Hz instead of 48000 Hz. The difference in pitch would be undetectable by humans, as it would only amount to a fraction of a cent. This is a very simplified explanation of the approach; there are many other nuances and edge cases that must be considered when developing such a control system. You don't just set it and forget it: you need a control system to manage the playback.
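A minimal sketch of steps A–C above (the function name and the clamping constant are my own choices, not from the answer):

```python
def adjusted_sample_rate(nominal_hz: float,
                         packets_received: int,
                         packets_played: int,
                         max_correction: float = 0.0005) -> float:
    """Nudge the playback rate toward the sender's effective clock.
    A ratio > 1 means packets arrive faster than we play them, so we
    speed playback up slightly; a ratio < 1 slows it down. The correction
    is clamped so the pitch shift stays far below the audible threshold."""
    if packets_played == 0:
        return nominal_hz
    ratio = packets_received / packets_played
    ratio = max(1.0 - max_correction, min(1.0 + max_correction, ratio))
    return nominal_hz * ratio
```

In a real control loop you would smooth the counts over a window and re-evaluate periodically rather than react to a single measurement.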
An interesting test to demonstrate this is to take two devices with the exact same file. A long recording (say 3 hours) is best. Start them at the same time. After 3 hours of playback, you will notice that one is ahead of the other.
All of this is to say: streaming audio and video in sync is NOT a trivial task.

Delphi 7, indy9 tcp audio streaming

I am trying to make an application that uses audio streaming through a TCP connection, using Delphi 7 and Indy 9.
More clearly: how can I stream input from the client's microphone and send it to a TCP or HTTP server? Consider real time.
Thank you
I never did this, but I think you can start with the basics...
Set the sample rate to be used; 8000 Hz is a good choice
Choose a chunk size to capture from the microphone (1024, 2048, 4096, etc.)
Capture the audio from the microphone as short int or float32 (raw audio)
Put each chunk in a socket buffer, preferably UDP, and send it to the other side over the UDP connection
If you run this process in a loop, you are sending audio data over the socket
The other side just needs to read the data from each UDP datagram and play it
These steps are basic audio streaming :-)
In the future you might want to work with queuing, but that's another story
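The question is about Delphi/Indy, but the flow above is language-agnostic. Here is a hedged Python sketch of the same steps, where a synthesized sine chunk stands in for microphone capture and the loopback address stands in for a real peer:

```python
import math
import socket
import struct

SAMPLE_RATE = 8000   # step 1: sample rate
CHUNK = 1024         # step 2: chunk size

def capture_chunk(freq: float = 440.0, start: int = 0) -> bytes:
    # step 3: CHUNK signed 16-bit samples (a sine wave stands in for the mic)
    samples = [int(32767 * math.sin(2 * math.pi * freq * (start + n) / SAMPLE_RATE))
               for n in range(CHUNK)]
    return struct.pack("<%dh" % CHUNK, *samples)

def send_and_receive_once() -> bytes:
    # steps 4-6: push one chunk through a UDP socket and read it back
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))            # let the OS pick a free port
    port = rx.getsockname()[1]
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(capture_chunk(), ("127.0.0.1", port))
    data, _ = rx.recvfrom(65536)
    tx.close()
    rx.close()
    return data
```

In a real application the capture/send loop and the receive/play loop would of course run on different machines, with playback handed to an audio API rather than returned.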

delphi, indy10 tcp audio streaming

I am trying to make an application that uses video/audio streaming through a TCP connection. I have already done the video streaming with Indy 10 components (IdTCPServer and IdTCPClient). Is it possible to do the same thing with audio?
Sure.
TCP is just a data channel. It is totally agnostic to what kind of data is transferred over it: HTML pages, programs, video, audio, whatever.
However, "streaming" usually means "near real time". If some frames of video or audio did not arrive within a few seconds, they are better skipped and forgotten so that newer audio or video can be played. You would not want your Skype conversation to suddenly freeze for a minute and then play that whole minute back to you, just because of a few seconds of network jam. You would rather lose a word or two and then either recover from context or ask the correspondent to repeat. Thus TCP, with its built-in retransmissions and usually not-very-large buffers, is not a perfect choice for multimedia streaming. Usually UDP plus application-implemented integrity control is a better choice.
I believe you need to use the VFW unit. With AVIStream, you join video + sound in a compressed stream.

iPhone music streaming

I'm trying to send music over bluetooth from one iOS device to another. I've been using this to build packets like in Ray Wenderlich's SNAP tutorial, but I've been having trouble reconstructing the packet information on the receiving phone. I have tried using https://github.com/abbood/iphoneAudioSyncer but I think it is too complicated for my needs (since I do not need synced playing). What is the simplest buffer approach that accounts for things like lost/out of order packets? I have read through a lot of CoreAudio stuff but it is very dense, so I would appreciate help from someone who has tackled this type of problem.
When you talk about lost/out-of-order packets, you're talking about the topic of Packet Loss Concealment (PLC), which is a very dense topic (I mean, if you think Core Audio is dense, wait till you dive into PLC).
In a nutshell, there are many ways to deal with packet loss, but the simplest (which I advise you to use) is to replace the lost packets with silence. The same goes for out-of-order packets: if a packet is out of order, just discard it.
That being said, you are dealing with audio that is streamed to you (i.e. sent via the Bluetooth/Wi-Fi network), which means in almost 100% of cases it's compressed audio you're getting (i.e. variable bit rate, VBR, audio). If you simply try to substitute lost VBR packets with silence, you'll run into problems. You'll either have to insert silence packets in the same compression format as the VBR audio you're dealing with, or convert your VBR compressed audio into uncompressed audio (lossless PCM) and then insert zeros in place of the missing packets.
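A minimal sketch of the silence-substitution idea on uncompressed PCM (the sequence numbering and the helper name are assumptions for illustration; in real RTP the sequence number comes from the packet header):

```python
def conceal(packets, chunk_bytes):
    """Reassemble PCM audio from (seq, payload) pairs: fill gaps left by
    lost packets with silence, and discard packets that arrive late."""
    out = bytearray()
    expected = None
    for seq, payload in packets:
        if expected is None:
            expected = seq
        if seq < expected:              # late / out of order: discard
            continue
        while expected < seq:           # lost: substitute one chunk of silence
            out += b"\x00" * chunk_bytes
            expected += 1
        out += payload
        expected = seq + 1
    return bytes(out)
```

As the answer notes, this only works on uncompressed audio; with VBR frames, the "silence" would itself have to be encoded in the stream's compression format.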
