In the source code of rabbit.app, the frame has a maximum size of {frame_max,131072}.
If a message's size exceeds this limit, will the message be refused, or will it be split up and then sent in pieces?
Your message will be split into several frames if it is larger than frame_max; see section 2.3.5.2 of the AMQP specification. On the receiving side it is reassembled automatically, and you are presented with the complete message.
The actual frame size used may differ from the configured frame_max, as it is negotiated with clients. I think the frame size is configurable mainly for performance tuning; see the comments in the RabbitMQ configuration docs.
Related
The standard CAN message frame size is 108 bits (correct me if I am wrong about this... I am still learning about CAN).
Would I be able to send a message that has a size of 750 bytes, 2 bytes, or 510 bytes?
Since 108 bits = 13.5 bytes, I assume I could send the 2-byte message, but what about the other message sizes?
CAN is a datalink protocol with a maximum payload of 8 bytes (64 bytes if you are using CAN-FD) per frame. If you need to send a message larger than that, you will need to make use of a transport protocol to split the message up into individual frames. Depending on the context, you can create your own ad-hoc protocol to do this, or you may look into a standard protocol such as CANopen or J1939 to provide transport services for you.
How can I read the packet size that was received in a Wireshark dissector?
Is that data available from the tvbuff_t?
If by "packet size" you mean the size of the data handed to the dissector in the tvb, then:
tvb_reported_length(tvb) is the size as seen on the wire;
tvb_captured_length(tvb) (named tvb_length(tvb) in older Wireshark releases) is the size as actually captured (which can be less than the size on the wire).
In either case, the size returned is that of the data handed to the dissector (i.e., not including any of the lower-level headers such as Ethernet).
If you want the size of the complete packet as originally seen on the wire or as saved:
pinfo->fd->pkt_len // size on the wire
pinfo->fd->cap_len // amount actually captured
(See epan/frame_data.h in the dissector source tree.)
Dissectors do not (i.e., should not) normally need to access the actual full size of the frame.
If this is the data you need, and you can indicate why you need it, then I may be able to suggest a different approach.
I was wondering about the parameters of two epoll APIs.
epoll_create(int size) - in this API, size is defined as the size of the event pool. But it seems that having more events than the size still works (I set the size to 2 and forced the event pool to hold 3 events... and it still worked!?). So I was wondering what this parameter actually means, and I am curious about its maximum value.
epoll_wait(int maxevents) - for this API, the definition of maxevents is straightforward. However, I can find little information or advice on how to determine this parameter. I would expect it to depend on the size of the epoll event pool. Any suggestions or advice would be great. Thank you!
1.
"man epoll_create"
DESCRIPTION
...
The size is not the maximum size of the backing store but just a hint
to the kernel about how to dimension internal structures. (Nowadays,
size is unused; see NOTES below.)
NOTES
Since Linux 2.6.8, the size argument is unused, but must be greater
than zero. (The kernel dynamically sizes the required data struc‐
tures without needing this initial hint.)
2.
Just determine a suitable number yourself, but be aware that
giving it a small number may reduce efficiency a little bit,
because the smaller the number assigned to maxevents, the more often you may have to call epoll_wait() to consume all the events already queued on the epoll instance.
I need to write a rate limiter that will perform some work each time X bytes have been transmitted.
The straightforward approach is to check the length of each transmitted packet, but I think that will be too slow for me.
Is there a way to use some kind of network event that will be triggered by transmitted packets/bytes?
I think you may want to look at netfilter.
Using its (kernel-level) API, you can have your custom code triggered by network events, modify received messages before passing them to the application, and so on.
http://www.netfilter.org/
It's protocol-dependent, actually. But for TCP, you can setsockopt the SO_RCVLOWAT option to define the minimum number of bytes (the low-water mark) that must be available before a read operation completes.
If you need to enforce a maximum size too, adjust the receive buffer size using SO_RCVBUF.
I need to calculate the total data transfer involved in sending a fixed-size piece of data from client to server over TCP/IP. That includes connecting to the server, sending the request and headers, receiving the response, receiving the data, etc.
More precisely, how do I get the total data transfer when using the POST and GET methods?
Is there any formula for that? Even a theoretical one will do fine (not considering packet loss, connection retries, etc.).
FYI, I tried RFC 2616 and RFC 1180, but those go over my head.
Any suggestion?
Thanks in advance.
You can't know the total transfer size in advance, even ignoring retransmits. There are several things that will stop you:
TCP options are negotiated between the hosts when the connection is established. Some options (e.g., timestamps) add additional data to every TCP header.
"total data transfer size" is not clear. Ethernet, for example, adds quite a few more bits on top of whatever IP used. 802.11 (wireless) will add even more. So do HDLC or PPP going over a T1. Don't even think about frame relay. Some links may use compression (which will reduce the total size). The total size depends on where you measure it, even for a single packet.
Assuming you're just interested in the total octet size at layer 2, and you know the TCP options that will be negotiated in advance, you still can't know the path MTU, which may change even while the connection is in progress. Or if you're not doing path MTU discovery (which would be weird), then the packet may get fragmented somewhere, and the remote end will see a different amount of data transfer than you.
I'm not sure why you need to know this, but I suggest that:
If you just want an estimate, watch a typical connection in Wireshark. Calculate the percent overhead (vs. the size of data you gave to TCP, and received from TCP). Use that number to estimate: it will be close enough, except in pathological situations.
If you need to know for sure how much data your end saw transmitted and received, use libpcap to capture the packet stream and check.
I'd say on average that the request and response have about 8 lines of headers each, at about 30 characters per line. Then allow for the size increase of converting any uploaded binary data to Base64.
You didn't say whether you also want to count TCP/IP packet headers; if so, you could assume an MTU of about 1500 bytes and add roughly 40 bytes (a 20-byte TCP header plus a 20-byte IP header, without options) per 1500 data bytes.
Finally, you could always setup a packet sniffer and count actual bytes for a sample of data.
Oh, and you may need to allow for deflate/gzip encoding as well.