Inter- & intra-stream synchronization in a multimedia synchronization system

I have a question from my multimedia course material that I need to answer in order to pass, but neither the tutor nor the course material has explained it. In addition, I have searched the internet for it and come back empty-handed.
The question is the following:
Suppose we want to design and implement a system to manage the synchronization between the following media in a multimodal system.
Knowing that the packet interval for each medium is:
1. audio medium is 40 ms
2. video medium is 20 ms
3. haptic medium is 10 ms
and that, at a certain instant, audio packet number 32, video packet number 50, and haptic packet number 41 must be synchronized.
Audio media stream
30 31 32 29 33 34 38 35 36 37
Video media stream
50 53 51 52 53 55 56 57 58 54
Haptic media stream
40 41 42 43 44 48 49 50 51 52
• What are the key points of the desired synchronization system in the two modes, elastic and inelastic (real-time) traffic? (Note: provide the calculations needed for inter- and intra-media synchronization.)
• Which of the above media has been affected more by jitter, and which has been affected more by packet loss?
Now, for anyone who may say I'm lazy or something like that, I want to say that I really tried to solve the problem, and here are my theories about the solution:
(the following is a demonstration of what I think intra-media synchronization means)
The audio packet that requires synchronization is 80 ms away (two 40 ms packets).
The video packet needed for synchronization is 0 ms away (it's the first packet in its stream).
The haptic packet is 10 ms away (it's one packet away).
So, from my point of view, audio is the slowest, and I see two possible ways to solve it:
Either omit the first two packets from the audio stream and the first haptic packet, so that all streams can begin at the sync point together,
or delay the video and the haptic streams so that they all end up synchronized (a rough calculation of these delays is sketched below).
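In case it helps show my reasoning, here is a minimal Python sketch of the calculation I have in mind. The intervals, head packet numbers, and sync packet numbers come from the question above; the "delay the faster streams" policy is just my second option, so please treat the whole thing as my assumption rather than a known-correct answer.

# Packet interval (ms), next packet at the head of the buffer, and the
# packet that must be synchronized, all taken from the question.
streams = {
    "audio":  {"interval": 40, "head": 30, "sync_packet": 32},
    "video":  {"interval": 20, "head": 50, "sync_packet": 50},
    "haptic": {"interval": 10, "head": 40, "sync_packet": 41},
}

# Intra-media view: how far away (in ms) each stream's sync packet is.
time_to_sync = {
    name: (s["sync_packet"] - s["head"]) * s["interval"]
    for name, s in streams.items()
}  # {'audio': 80, 'video': 0, 'haptic': 10}

# Inter-media view: hold the faster streams back so every stream reaches
# its sync packet at the same instant as the slowest one (audio here).
slowest = max(time_to_sync.values())
delays = {name: slowest - t for name, t in time_to_sync.items()}
# {'audio': 0, 'video': 80, 'haptic': 70}

print(time_to_sync)
print(delays)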
I'm really worried about this question, so please give me a hand with it.
Thanks in advance for your precious comments.

Related

Decode LoRaWAN data from GPS tracker Moko LW001-BG on The Things Network

I am new to LoRa. I tried to connect my LoRa gateway and GPS tracker LW001-BG to The Things Network, and it successfully connected to TTN, but how do I convert or decode the data from the GPS into lat/long format?
here is the documentation http://doc.mokotechnology.com/index.php?s=/2&page_id=143
I receive data in a format like this: 02 01 56 F8 0B 45 F4 29 32 46, and I need to convert/decode it into a readable format.
Thanks, I hope someone can help me.
The payload of the message is in bytes 3-6 (for the latitude) and 7-10 (for the longitude). The first two bytes indicate how many packages there are (two) and which the current one is (the first).
The four bytes represent a 32 bit floating point value; in your example this is 2239.5210 for the latitude. This means 22 degrees, 39 minutes, and 31.26 seconds (which is the fraction times sixty).
You can see this in an online converter: as the byte order is lowest byte first, you need to reverse it, convert it to binary, and then check the corresponding checkboxes in the binary representation:
56 F8 0B 45 becomes 45 0B F8 56, or in binary
01000101000010111111100001010110
Here the first bit is the sign, followed by 8 exponent bits and 23 mantissa bits. The decimal representation is 2239.52099609, which you round to four decimal places to get 2239.5210.
Depending on how you process this data, you might be able to simply cast the bytes to a float variable, as floats generally follow the 32-bit IEEE 754 standard.
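For completeness, here is a minimal Python sketch of that decoding, under the assumptions stated above (bytes 3-6 are the latitude, bytes 7-10 the longitude, little-endian IEEE 754 floats in NMEA-style ddmm.mmmm form; any hemisphere/sign convention the device may use is not covered here):

import struct

def nmea_to_decimal(value: float) -> float:
    # Convert ddmm.mmmm (or dddmm.mmmm) into decimal degrees.
    degrees = int(value) // 100
    minutes = value - degrees * 100
    return degrees + minutes / 60

payload = bytes.fromhex("02 01 56 F8 0B 45 F4 29 32 46")

# "<f" reads a little-endian 32-bit float, so no manual byte reversal is needed.
raw_lat = struct.unpack_from("<f", payload, 2)[0]  # ~2239.521, i.e. 22 deg 39.521 min
raw_lon = struct.unpack_from("<f", payload, 6)[0]

print(nmea_to_decimal(raw_lat), nmea_to_decimal(raw_lon))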

How to modify the timestamp range of a .pcap file?

Problem
I need to modify a .pcap file captured over a timespan of 5 minutes such that it simulates a .pcap file captured over a timespan of 20 minutes. The problem is that I don't know how to do this.
Example
To illustrate the problem, suppose I have a .pcap file with 4 captured packets p1-p4 and t as a start time such that:
p1 is sent at t+ 0 minutes
p2 is sent at t+ 1 minutes
p3 is sent at t+ 2 minutes
p4 is sent at t+ 3 minutes
I want my resulting .pcap file to contain the same four packets but with the timestamps scaled (from 5 minutes to 20 minutes) such that they represent the following:
p1 is sent at t+ 0 minutes
p2 is sent at t+ 4 minutes
p3 is sent at t+ 8 minutes
p4 is sent at t+ 12 minutes
Tried solution(s)
editcap - however, the only option I could find there is to adjust all timestamps by a fixed offset using the -t option.
Background
I am using tcpreplay to replay a scenario in which a user browses a webpage. I simultaneously inject some packets which are dependent on the .pcap file I replayed, i.e. the packets are injected by live monitoring of the replayed traffic and subsequently adjusting the packets it sends out. This entire traffic trace - i.e. both the replayed traffic and the injected packets - is captured using tcpdump. As there are a lot of large .pcap files I want to replay, I use the tcpreplay --multiplier option to speed up the process. However, this means the final capture is a time-compressed version of the original .pcap file. I would like to 'stretch' the newly created .pcap back to the same duration as the original.
This can be accomplished with Wireshark using its "Time Shift" feature.
Assuming the timestamp for packet 1 is 2017-08-17 12:00:00.000000, select packet 1 then choose "Edit -> Time Shift..." and set the time for packet 1 to 2017-08-17 12:00:00.000000 (i.e., don't change this one). Click the box next to "...then set packet" and enter 2 for the packet number and 2017-08-17 12:04:00.000000 as the timestamp. You'll notice that it also indicates, "and extrapolate the time for all other packets", which is what you want. Hit Apply.
At this point, the timestamps should be adjusted to what you want, although the sub-second component might not end up being exactly the same for all packets and for some reason even packet 1's sub-second component is not exactly what was originally specified. If you really want to retain the original sub-second component, then you'll have to adjust one packet at a time. Considering that there are only 4 packets to adjust, this should be feasible. I might suggest filing a Wireshark bug report for the erroneous sub-second adjustment though.
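If you would rather script it (e.g. for the large captures mentioned in the background), the same stretch can also be done outside Wireshark. Here is a minimal sketch with scapy, assuming a constant scaling factor of 4 and made-up file names:

from scapy.all import rdpcap, wrpcap

FACTOR = 4  # e.g. a 5-minute capture stretched to 20 minutes

packets = rdpcap("original.pcap")
t0 = packets[0].time  # anchor on the first packet's timestamp

for pkt in packets:
    # Scale each packet's offset from the first packet, keeping t0 fixed.
    pkt.time = t0 + (pkt.time - t0) * FACTOR

wrpcap("stretched.pcap", packets)

This keeps the first packet's absolute timestamp and only stretches the offsets, which matches the example in the question.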

ZeroMQ - Lifetime of connections?

We want a connectionless client-server model, but we also want to reduce the overhead of creating/closing connections on every single request.
For example, on the client side: if a connection has been idle for 5 seconds, close it, then create a new connection when you decide to send a new request.
ZeroC ICE uses this model.
The question is: can I set a lifetime for ZeroMQ connections?
E.g. if a connection has been idle for 5 seconds, it is closed automatically. Then on each request I check whether the connection is still alive; if it isn't, I reconnect to the server.
0MQ manages TCP connections for you automatically. (I assume your client/server will use TCP.) It provides very little information about connect/disconnect/reconnect status. Nor does it provide any "lifetime" or "timeout" features for sockets.
You'll need to implement the timeout logic you describe in your clients. At a high level: when the client needs to make a request it will first connect a socket, dispatch the request, get the response, then set a timer for 5 seconds. If another request is made in < 5 sec then it reuses the existing connection and resets the timer to 5 sec. If the timer fires then it closes the connection.
Be aware that 0MQ sockets are not thread safe. If your timer fires on a separate thread then it cannot safely close the 0MQ socket. Only the thread that created the socket should close it.
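To make that concrete, here is a rough single-threaded sketch in Python (pyzmq) of the client-side logic; the endpoint, the 5-second limit, and the LazyClient name are made up for the example. To sidestep the thread-safety issue just mentioned, it checks the idle time lazily at the next request instead of using a separate timer thread, which means the old connection is only torn down when the next request arrives rather than exactly at the 5-second mark.

import time
import zmq

class LazyClient:
    def __init__(self, endpoint="tcp://127.0.0.1:5555", idle_limit=5.0):
        self.ctx = zmq.Context.instance()
        self.endpoint = endpoint
        self.idle_limit = idle_limit
        self.sock = None
        self.last_used = 0.0

    def _socket(self):
        # Drop the socket if it has been idle past the limit, then
        # (re)connect on demand. Everything happens on the caller's thread,
        # so the "only the creating thread may close it" rule is respected.
        if self.sock is not None and time.time() - self.last_used > self.idle_limit:
            self.sock.close(linger=0)
            self.sock = None
        if self.sock is None:
            self.sock = self.ctx.socket(zmq.REQ)
            self.sock.connect(self.endpoint)
        return self.sock

    def request(self, payload: bytes) -> bytes:
        sock = self._socket()
        sock.send(payload)
        reply = sock.recv()
        self.last_used = time.time()
        return reply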

What is the effect of UIP_CONF_BUFFER_SIZE in contiki_conf.h file

I have been working with Contiki for some time, and recently I faced a weird problem: the Cooja mote fails to receive any data packet larger than 57 bytes; for the z1 mote the limit is around 96-97 bytes (in the Cooja simulator), and on real hardware (the mbxxx target) I've observed that the limit is 92 bytes. Has anyone else faced a similar situation? Does this have something to do with platform-specific configuration, and how do I change it? I've looked into the contiki_conf.h file and found the UIP_CONF_BUFFER_SIZE parameter. What is the effect if this parameter is changed?
I figured it out: it appears to be the maximum IP packet size handled by the uIP stack. It covers the 40-byte IP header + 8-byte UDP header + UDP payload; the same holds for TCP connections. So, for example, if UIP_CONF_BUFFER_SIZE is set to 140 and we ping the mote with an effective IP packet size of more than 140 bytes, the mote will fail to respond!
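As a back-of-envelope check (my own arithmetic, not guaranteed), the observed limits line up with that explanation if the buffer size on those platforms is around 140 bytes:

# UIP_CONF_BUFFER_SIZE holds the whole IP packet: headers plus payload.
IP_HEADER = 40   # bytes (the 40-byte IP header mentioned above)
UDP_HEADER = 8   # bytes

def max_udp_payload(uip_conf_buffer_size: int) -> int:
    return uip_conf_buffer_size - IP_HEADER - UDP_HEADER

print(max_udp_payload(140))  # 92 -- would match the ~92-byte limit seen on hardware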

How does sending tinygrams cause network congestion?

I've read advice in many places to the effect that sending a lot of small packets will lead to network congestion. I've even experienced this with a recent multi-threaded tcp app I wrote. However, I don't know if I understand the exact mechanism by which this occurs.
My initial guess is that if the MTU of the physical transmission medium is fixed, and you send a bunch of small packets, then each packet may potentially take up an entire transmission frame on the physical medium.
For example, my understanding is that even though Ethernet supports variable-length frames, most equipment uses a fixed Ethernet frame of 1500 bytes. At 100 Mbit, a 1500-byte frame "goes by" on the wire every 0.12 milliseconds. If I transmit a 1-byte message (plus TCP & IP headers) every 0.12 milliseconds, I will effectively saturate the 100 Mbit Ethernet connection while carrying only 8333 bytes of user data per second.
Is this a correct understanding of how tinygrams cause network congestion?
Do I have all my terminology correct?
In wired Ethernet at least, there is no "synchronous clock" that times the beginning of every frame. There is a minimum frame size, but it's more like 64 bytes instead of 1500. There are also minimum gaps between frames, but that might only matter on shared-access networks (modern Ethernet, like ATM, is switched, not shared-access). It is the maximum payload size that is limited to 1500 bytes on virtually all Ethernet equipment.
But the smaller your packets get, the higher the ratio of framing headers to data. Eventually you are spending 40-50 bytes of overhead for a single byte. And more for its acknowledgement.
If you could just hold on for a moment and collect another byte to send in that packet, you would have doubled your network efficiency. (This is the reason for Nagle's algorithm.)
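A small illustration of that trade-off, using standard Python sockets (the helper names are made up and none of this comes from the answer itself):

import socket

def send_per_keystroke(sock: socket.socket, keys: bytes) -> None:
    # Worst case for overhead: one write (and potentially one TCP segment
    # plus an ACK) per single byte of data.
    for b in keys:
        sock.sendall(bytes([b]))

def send_batched(sock: socket.socket, keys: bytes) -> None:
    # "Hold for a moment and collect" the pending bytes, then send once.
    # This is roughly what Nagle's algorithm does in the kernel unless the
    # application opts out with:
    #   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.sendall(keys)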
There is a tradeoff on a channel with errors, because the longer the frame you send, the more likely it is to experience an error and have to be retransmitted. Newer wireless standards load the frame up with forward error correction bits to avoid retransmissions.
The classic example of "tinygrams" is 10,000 users all sitting on a campus network, typing into their terminal sessions. Every keystroke produces a single packet (and an acknowledgement)... At a typing rate of 4 keystrokes per second, that's 80,000 packets per second just to move 40 kbytes per second. On a "classic" 10 Mbit shared-medium Ethernet this is impossible to achieve, because you can only send about 15k minimum-sized frames in one second - excluding the effect of collisions:
   96 bits inter-frame gap
+  64 bits preamble
+ 112 bits ethernet header
+ 368 bits minimum payload (46 bytes: our single data byte plus padding - and this doesn't even include the IP or TCP headers!!!)
+  32 bits trailer (FCS)
-----------------------------
= 672 bits on the wire per tinygram
10,000,000 bits/s ÷ 672 bits/frame ≈ 14,880 frames/second
Perhaps a better way to state this is that an Ethernet that is maxed out moving tinygrams can only move about 119 kbit/s of user data across a 10 Mbit/s medium, for an efficiency of roughly 1.2%.
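The same arithmetic in a few lines of Python, in case anyone wants to play with the numbers (the bit counts are the ones listed above; nothing new is assumed):

LINE_RATE = 10_000_000  # bits/s, classic 10 Mbit Ethernet

ifg, preamble, header, fcs = 96, 64, 112, 32   # bits of per-frame overhead
min_payload = 46 * 8                           # Ethernet pads payloads to 46 bytes
data_bits = 8                                  # the single keystroke byte

frame_bits = ifg + preamble + header + min_payload + fcs   # 672 bits on the wire
frames_per_sec = LINE_RATE / frame_bits                    # ~14,880 frames/s
goodput = frames_per_sec * data_bits                       # ~119 kbit/s of user data

print(frame_bits, round(frames_per_sec), f"{goodput / LINE_RATE:.2%}")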
A TCP packet transmitted over a link will have something like 40 bytes of header information. Therefore, if you break a transmission into 100 one-byte packets, each packet sent carries 40 bytes of header for 1 byte of data, so about 98% of the resources used for transmission are overhead. If instead you send it as one 100-byte packet, the total transmitted data is only 140 bytes, so only about 28% is overhead. In both cases you've transmitted 100 bytes of payload over the network, but in one case you used 140 bytes of network resources to accomplish it, and in the other you've used 4,100. In addition, it takes more resources on the intermediate routers to correctly route 100 41-byte packets than one 140-byte packet. Routing 1-byte packets is pretty much the worst-case scenario for router performance, so they will generally exhibit their worst-case behavior under this load.
In addition, especially with TCP, as performance degrades due to small packets, the machines can try to do things to compensate (like retransmit) that will actually make things worse; hence the use of Nagle's algorithm to try to avoid this.
BDK has about half the answer (+1 for him). A large part of the problem is that every message comes with 40 bytes of overhead. It's actually a little worse than that, though.
Another issue is that there is actually a minimum frame size on the wire. (This is not the MTU. The MTU is a maximum before fragmentation starts - a different issue entirely.) The minimum is pretty small (46 bytes of payload on Ethernet, which already covers the 40 bytes of IP and TCP headers), but if you don't send that much, the frame still gets padded out to that size.
Another issue is protocol overhead. Each packet sent by TCP causes an ACK packet to be sent back by the recipient as part of the protocol.
The result is that if you do something silly, like send one TCP packet every time the user hits a key, you could easily end up with a tremendous amount of wasted overhead data floating around.

Resources