RDMA memory buffer

I know that RDMA requires both the sender and the receiver to register their memory before data transfer. I am wondering whether the registered memory on the sender and the receiver must be the same. If so, I think RDMA wastes a lot of memory, since both sides are essentially storing identical data. Is there any way to reduce this waste?

After a network transfer, both the sender and the receiver hold copies of the same information. However, depending on the application and communication pattern, the buffers on both sides can be reused. For example, the initiator of a remote read operation can reuse the same buffers for the results of the next read once it is done with the previous results.
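To make the registration/reuse point concrete, here is a minimal libibverbs sketch (assuming an RDMA-capable device is present; error handling abbreviated). The buffer is registered once and can then serve as the target of many transfers; registration pins and maps the memory, it does not copy the data:

    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_SIZE (1 << 20)  /* one 1 MiB buffer, reused for every transfer */

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no RDMA device\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        void *buf = malloc(BUF_SIZE);
        /* Register once; the same region is reused for each read/write. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, BUF_SIZE,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }

        /* ... post an RDMA read into buf, consume the result, then post
         * the next read into the very same buffer, and so on ... */

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }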

Bluetooth Low Energy data transmission on iOS

I've recently been working on a project that uses Bluetooth Low Energy. I implemented most of the communication protocol, but I started having concerns that I don't actually know how the data transmission works, and whether my implementation will behave the same way with all devices.
My main concern is: what chunk of data is received when I get a notification from peripheral(_:didUpdateValueFor:error:)? Is it only as big as the negotiated MTU size? Or does iOS receive information about the chunk size and wait to receive it all before triggering peripheral(_:didUpdateValueFor:error:)?
When a peripheral sends chunks of, let's say, 100 bytes each, can I assume that I will always get 100 bytes in a single notification? Or could it be the last 50 bytes of the previous chunk and the first 50 bytes of the next one? That would make it quite tricky to detect where my frame begins.
I tried to find more information in Apple's documentation, but there is nothing about it.
My guess is that I always receive a single state of the characteristic. That would mean the chunking depends on the implementation on the peripheral side. But what if the characteristic is bigger than the MTU size?
First, keep in mind that sending streaming data over a characteristic is not what characteristics are designed for. The point of a characteristic is to represent some small (~20 byte) piece of information such as the current battery level, the device name, or the current heart rate. The idea is that a characteristic will change only when the underlying value changes. It was never designed to be a serial protocol. So your default assumption should be that it's up to you to manage everything about that.
You should not write more data to a characteristic than the value you get from maximumWriteValueLength(for:). Chunking is your job.
Each individual value you write will appear to the receiver atomically. Remember, these are intended to be individual values, not chunks out of a larger data stream, so it would make no sense to overlap values from the same characteristic. "Atomically" means it all arrives or none of it. So if your MTU can handle 100 bytes, and you write 100 bytes, the other side will receive 100 bytes or nothing.
That said, there is very little error detection in BLE, and you absolutely can drop packets. It's up to you to verify that the data arrived correctly.
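If you do roll your own chunking over GATT, a common trick is to prefix each write with a tiny header so the receiver can reassemble the message and notice drops. A minimal sketch in C (the header layout and names are purely illustrative, not a CoreBluetooth or BLE-spec format):

    /* Illustrative chunk header: lets the receiver reassemble a message
     * from per-characteristic writes and detect missing chunks. */
    #include <stdint.h>
    #include <string.h>

    #define CHUNK_PAYLOAD 96  /* user bytes per chunk; 4 bytes go to the header */

    struct chunk_hdr {
        uint16_t seq;    /* 0-based index of this chunk */
        uint16_t total;  /* total number of chunks in the message */
    };

    /* Builds chunk `seq` of `msg` into `out` (sized CHUNK_PAYLOAD + 4).
     * Returns the number of bytes to write to the characteristic.
     * A real protocol would serialize the fields with a fixed endianness
     * instead of memcpy'ing the raw struct. */
    size_t make_chunk(uint8_t *out, const uint8_t *msg, size_t msg_len,
                      uint16_t seq, uint16_t total)
    {
        struct chunk_hdr hdr = { seq, total };
        size_t off = (size_t)seq * CHUNK_PAYLOAD;
        size_t n = msg_len - off < CHUNK_PAYLOAD ? msg_len - off : CHUNK_PAYLOAD;

        memcpy(out, &hdr, sizeof hdr);
        memcpy(out + sizeof hdr, msg + off, n);
        return sizeof hdr + n;
    }

The receiver buffers chunks by seq, and a gap or a timeout tells it the message is incomplete.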
If you're able to target iOS 11+, do look at L2CAP, which is designed for serial protocols rather than using GATT.
If you can't do that, I recommend watching WWDC 2013 Session 703, which covers this use case in detail. (I am having trouble finding a link to it anymore, however.)

Could you use the internet to store data in the transmission space between countries?

Is it possible to bounce data back and forth between, let's say, a computer in the USA and a computer in Australia over the internet, and use these in-flight packets as data storage?
As I understand it, it would take some time for the data to go from A to B, let's say 100 milliseconds, so the data in transit could be considered data in storage. If both nodes had good, free bandwidth, could data be stored in this transmission space by bouncing it back and forth in a loop?
Are there any reasons why this would not work?
The idea comes from another idea I had some time ago, where I thought you could store data in empty space by shooting laser pulses between two satellites a few light-minutes apart. In the light-minutes of space between them, you could store data as the in-flight transmission.
Lost packets. Although some protocols (like TCP) have means to recover from packet loss, this involves the sender re-sending lost packets as needed. That means each node must still keep a copy of the data available to send again (or the protocol fails), so you'd still be using local storage until the communication completes.
If you have taken any networking classes, you will know the end-to-end principle, which states:
The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes
Hence, you cannot expect the routers between your two hosts to keep the data for you. They are free to discard it at any time (or they themselves may crash at any time with your data in their buffers).
For more, you can read the Wikipedia article:
End-to-end principle
I think this could actually work, since in reality you would be storing the information in the various I/O buffers of the numerous routers, switches, and network cards along the way. However, the amount of storable information would probably be too small to be of any practical use, and network administrators at all levels are unlikely to enjoy or support such a creative approach.
Storing information in a delay line is a known approach and was used to build memory devices in the past. However, those designs relied on the delay of signal propagation through a physical medium. Since the Internet mostly uses wires and electromagnetic waves, which travel at (a large fraction of) the speed of light, not much information can be stored this way. Past delay-line memories mostly used sound waves, which propagate far more slowly, so many more bits fit "in flight" per unit of distance.
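To put rough numbers on it: the data "stored" on a path is bounded by the bandwidth-delay product. At an illustrative 1 Gbit/s with a 100 ms one-way delay, that is 10^9 bit/s × 0.1 s = 100 Mbit, or about 12.5 MB in flight per direction. Even a fat intercontinental loop holds less than a cheap flash drive.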

Is there a UDP-based protocol that offers more robust sending of large data elements without datagram reliability?

On one end, you have TCP, which guarantees that packets arrive and that they arrive in order. It's also designed for the commodity Internet, with congestion control algorithms that "play nice" in traffic. On the other end of the spectrum, you have UDP, which doesn't guarantee the arrival time or order of packets, and which simply lets you send data to a receiver. Somewhere in the middle, you have reliable UDP-based protocols, such as UDT, that offer customized congestion control algorithms and reliability, but with greater speed and flexibility.
However, what I'm looking for is the capability to send large chunks of data over UDP (greater than the 64k datagram size of UDP), but without a concern for reliability of each individual datagram. The idea is that the large data is broken down into datagrams of a specified size (<= 64,000 bytes), probably with some header data stuck on the front and sent over the network. On the receiving side, these datagrams are read in and stored. If a datagram doesn't arrive, all of the datagrams associated with that transfer are simply thrown out by the client.
Most of the "reliable UDP" implementations try to maintain reliability of each datagram, but I'm only interested in the whole, and if I don't get the whole, it doesn't matter - throw it all away and wait for the next. I'd have to dig deeper, but it might be possible with custom congestion control algorithms in UDT. However, are there any protocols with this approach?
You could try ENet. Whilst it's not specifically aimed at what you're trying to do, it does have the concept of 'fragmented data blocks', whereby data larger than its MTU is sent as a sequence of datagrams of its MTU, with header details that relate one part of the sequence to the rest. The version I'm using only supports 'reliable' fragments (that is, the ENet reliability layer will kick in to resend missing fragments), but I seem to remember seeing discussion on the mailing list about unreliable fragments, which would likely do exactly what you want; i.e. deliver the whole payload if it all arrives and throw away the pieces if it doesn't.
See http://enet.bespin.org/
Alternatively take a look at the answers to this question: What do you use when you need reliable UDP?
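If you end up rolling this yourself over plain UDP sockets, the sending side can be as simple as the following sketch (the header layout and names are made up for illustration; a real format would also need to handle more than 65,535 fragments and serialize fields explicitly):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    #define FRAG_PAYLOAD 1400   /* stay under a typical path MTU */

    struct frag_hdr {
        uint32_t xfer_id;      /* identifies one large message */
        uint16_t frag_index;   /* 0 .. frag_count-1 */
        uint16_t frag_count;   /* total fragments in this message */
    };

    /* Sends `len` bytes of `msg` as a sequence of datagrams. No
     * per-datagram retransmission is attempted. */
    int send_fragments(int sock, const struct sockaddr *dst, socklen_t dlen,
                       uint32_t xfer_id, const uint8_t *msg, size_t len)
    {
        uint8_t pkt[sizeof(struct frag_hdr) + FRAG_PAYLOAD];
        uint16_t count = (uint16_t)((len + FRAG_PAYLOAD - 1) / FRAG_PAYLOAD);

        for (uint16_t i = 0; i < count; i++) {
            size_t off = (size_t)i * FRAG_PAYLOAD;
            size_t n = len - off < FRAG_PAYLOAD ? len - off : FRAG_PAYLOAD;
            struct frag_hdr hdr = { htonl(xfer_id), htons(i), htons(count) };

            memcpy(pkt, &hdr, sizeof hdr);
            memcpy(pkt + sizeof hdr, msg + off, n);
            if (sendto(sock, pkt, sizeof hdr + n, 0, dst, dlen) < 0)
                return -1;
        }
        return 0;
    }

The receiver collects fragments keyed by xfer_id and, on a timer, discards any transfer that still has gaps - exactly the all-or-nothing behaviour described above.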

Interprocess communication and cooperating processes

I know of two options for transferring data and information between cooperating processes:
message passing and shared memory.
1- Which one is suitable for small data exchanges, and why?
2- Which one is easier to implement for communication between computers?
3- Which one is faster, and why?
Below are answers that I hope help you out:
1) I would suggest going with message passing for small data exchanges. With message passing you avoid all the problems you would have to face with shared memory, like locking and synchronization.
2) Well, you can't implement shared memory across computers, so you have to go with message passing, using TCP sockets (or even UDP sockets), named pipes, etc.
3) Comparing the two, shared memory is faster, since the data is not copied between processes as it is with message passing. But I would suggest not choosing shared memory over message passing just because it is "faster", as there are other aspects on the side of message passing, like simplicity and avoiding all the locking problems.
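As a feel for why message passing is the simpler option, here is a minimal POSIX sketch using a pipe between a parent and a forked child; the kernel copies the bytes between the processes, so no shared state or locking is involved:

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {               /* child: the receiver */
            char buf[64];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            return 0;
        }

        close(fds[0]);                   /* parent: the sender */
        const char *msg = "small payload";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);
        return 0;
    }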

What is a buffer? What are buffered reads and writes?

I heard the word "buffer" today after a long time, and I'm wondering if somebody can give a good overview of buffers and some examples of how they matter in today's world.
A buffer is generally a portion of memory that holds data that has not yet been fully committed to its intended device. In the case of buffered I/O, there is generally a fast device and a slow device. The devices themselves need not have disparate speeds; perhaps the interfaces between them differ, or perhaps one side produces or consumes the data more slowly than the other.
The idea is that you temporarily store the generated data in a buffer so that it is not lost when the slower device isn't ready to handle it. Once the device is ready, another buffer may take the current buffer's place while the consuming device processes the data in the first one (a scheme known as double buffering).
In this manner, the slower device receives the data at a moderated pace rather than the fire-hose that the original data source can be.
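As a toy illustration of buffered writes, here is a C sketch of the idea behind stdio-style output buffering: small writes accumulate in memory and are flushed to the slower device in one larger operation (names and sizes are illustrative; error handling omitted):

    #include <string.h>
    #include <unistd.h>

    #define BUF_CAP 4096

    struct wbuf { int fd; size_t used; char data[BUF_CAP]; };

    static void wbuf_flush(struct wbuf *b)
    {
        if (b->used > 0) {
            write(b->fd, b->data, b->used);  /* one big write to the slow device */
            b->used = 0;
        }
    }

    static void wbuf_put(struct wbuf *b, const char *p, size_t n)
    {
        while (n > 0) {
            size_t room = BUF_CAP - b->used;
            size_t take = n < room ? n : room;
            memcpy(b->data + b->used, p, take);  /* fast: memory to memory */
            b->used += take;
            p += take; n -= take;
            if (b->used == BUF_CAP)
                wbuf_flush(b);                   /* buffer full: commit it */
        }
    }

This is why many small fwrite() calls are cheap: they usually hit the in-memory buffer, and the expensive device write happens only on flush.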
