I am a little bit confused about the interaction of the timeouts of ISO18092 (RWT = Response Waiting Time) and LLCP (LTO = Link Timeout).
I have several NFC phones (e.g. the Samsung Nexus S and the Samsung Galaxy S3) that return a WT value of 14 in the ATR_RES, although the LLCP specification requires a WT of at most 8.
My first question is how the initiator should behave when the signalled WT (14) is greater than the maximum allowed WT (8). Should it simply assume WT = 8?
If WT = 14 is taken at face value, this results in an RWT of 4949 ms. But these targets also signal an LTO of 1500 ms, even though according to the LLCP specification the LTO should be sufficiently larger than the RWT to allow one or more error-recovery cycles. How should the initiator handle this case? Has anyone else run into the same problem?
This is simply non-compliant behaviour. For LLCP communication, however, the Link Timeout LTO determines when the game is over: if the NFC-DEP Response Waiting Time RWT is not smaller than the LTO, there is unfortunately no chance of error recovery for lost NFC-DEP packets.
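For reference, here is a small sketch of where the question's 4949 ms figure comes from. It assumes the NFC-DEP timeout formula RWT = (256 * 16 / fc) * 2^WT with fc = 13.56 MHz; verify that against your copy of ISO/IEC 18092 before relying on it.

    /* Sketch of the NFC-DEP response waiting time calculation
     * (assumed formula: RWT = (256 * 16 / fc) * 2^WT, fc = 13.56 MHz). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double fc = 13.56e6;                 /* carrier frequency, Hz */
        const double t_unit = 256.0 * 16.0 / fc;   /* ~302 us base unit */
        const int wt_values[] = { 8, 14 };

        for (int i = 0; i < 2; i++) {
            double rwt_ms = t_unit * pow(2.0, wt_values[i]) * 1000.0;
            printf("WT = %2d  ->  RWT = %4.0f ms\n", wt_values[i], rwt_ms);
        }
        /* Prints roughly 77 ms for WT = 8 and 4949 ms for WT = 14, i.e. the
         * advertised WT = 14 already dwarfs the 1500 ms LTO these phones signal. */
        return 0;
    }

(Compile with -lm for pow().)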
The i7-4600U datasheet says that SA_DQ[63:0] is used for memory channel A and SB_DQ[63:0] is used for memory channel B. So my understanding is that memory channel A and memory channel B each use different processor pins for their own data bus.
Is my understanding correct?
The presence of SA_DQ[63:0] and SB_DQ[63:0] pretty much says it all.
There are two physical channels.
If you still need secondary proof, you can also check the math of this statement in the datasheet:
Theoretical maximum memory bandwidth of:
— 21.3 GB/s in dual-channel mode assuming 1333 MT/s
— 25.6 GB/s in dual-channel mode assuming 1600 MT/s
With 1333 MT/s¹ and 8 bytes per transfer we get 1333 * 8 = 10,664 MB/s, or about 10.7 GB/s, per channel.
In order to have a 21.3 GB/s peak, two channels must be used simultaneously, thus they must have separate pins.
¹ Note that mega-transfers per second is given by the internally multiplied memory clock (the multiplier is fixed at four) times two, due to the double data rate. So 1333 MT/s corresponds to a 1333/2 = 666.67 MHz I/O clock, which in turn corresponds to a 666.67/4 = 166.67 MHz memory clock.
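If you want to reproduce the datasheet numbers yourself, the arithmetic is simply transfers per second times channel width times the number of channels; the 8-byte channel width and the two channels are exactly what the SA_DQ/SB_DQ pins imply:

    /* Peak-bandwidth arithmetic for two 64-bit-wide DDR3 channels. */
    #include <stdio.h>

    int main(void)
    {
        const double bytes_per_transfer = 8.0;   /* 64-bit data bus per channel */
        const int channels = 2;
        const double rates_mts[] = { 1333.0, 1600.0 };

        for (int i = 0; i < 2; i++) {
            double per_channel = rates_mts[i] * bytes_per_transfer / 1000.0; /* GB/s */
            printf("%4.0f MT/s: %.1f GB/s per channel, %.1f GB/s dual-channel\n",
                   rates_mts[i], per_channel, per_channel * channels);
        }
        /* 1333 MT/s -> ~10.7 GB/s per channel, ~21.3 GB/s total
           1600 MT/s ->  12.8 GB/s per channel,  25.6 GB/s total */
        return 0;
    }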
I have already read a lot of specs and code about UARTs, but I cannot find any indication of how to determine through the software interface whether the transmit FIFO is full. There is an interrupt when the FIFO is empty; then I can write at least N characters, where N is the FIFO size. But by the time I have written these N characters, a number of them have already been sent, so I can in fact write more than N characters, yet there is no FIFO-full interrupt. The specs say that when the FIFO is full the TXREADY pin on the chip is inverted. Is there a way to find this out in software? The Line Status Register bit only says that the FIFO is not empty, which does not mean it is full...
Can anyone help? I want to write characters until the FIFO is full...
It looks to me too that they neglected this, but most people get by with the chip as it is. The usual way to use it is to get an interrupt, fill the FIFO (normally very fast compared to the serial data rate) and then return.
There is a situation where it seems to me that what you are asking for could be nice: if you are transmitting in polling mode and want to send 10 bytes, your polling only shows that the FIFO is not empty, so you have no way to know whether you can send them all or not. Either you wait there until it is empty, which sort of defeats the purpose of the FIFO, or you continue polling other stuff until you get back to checking for FIFO empty, and maybe that slows your overall transmission rate. I guess it is not a very usual way to operate, so nobody worries about it.
The 16550D datasheet says the following:
The transmitter holding register interrupt (02) occurs when the XMIT
FIFO is empty; it is cleared as soon as the transmitter holding
register is written to (1 to 16 characters may be written to the XMIT
FIFO while servicing this interrupt) or the IIR is read.
This means that when the Line Status Register (port base + 5) indicates the Transmitter Holding Register Empty condition (bit 5), the transmit FIFO is completely empty and you may write up to 16 bytes to the transmitter holding register (port base + 0). It is important not to write more than 16 bytes between occurrences of that bit being set.
If you don't need to write 16 bytes at the point when you received the IRQ (or saw the transmitter-empty bit set, if polling), you can either keep track of how many bytes you have written since the last transmitter-empty state, or just defer writing further bytes until the next transmitter-empty state.
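Here is a rough sketch of that second approach for a 16550-style UART on x86 port I/O. The base address and the ring-buffer helpers (tx_buf_empty, tx_buf_pop) are hypothetical placeholders, and real code would need the usual ioperm/interrupt plumbing around it:

    /* Fill the 16550 transmit FIFO only when LSR bit 5 (THRE) says the
     * FIFO is empty; at that point up to 16 bytes can be written safely. */
    #include <stdint.h>
    #include <sys/io.h>                  /* inb/outb (x86, needs ioperm/iopl) */

    #define UART_BASE   0x3F8            /* assumed COM1 base address */
    #define UART_THR    (UART_BASE + 0)  /* transmitter holding register */
    #define UART_LSR    (UART_BASE + 5)  /* line status register */
    #define LSR_THRE    0x20             /* bit 5: transmit FIFO empty */
    #define TX_FIFO_LEN 16               /* 16550 transmit FIFO depth */

    extern int tx_buf_empty(void);       /* hypothetical ring-buffer helpers */
    extern uint8_t tx_buf_pop(void);

    /* Call from the THRE interrupt handler, or from a polling loop. */
    void uart_tx_fill(void)
    {
        if (!(inb(UART_LSR) & LSR_THRE))
            return;                      /* FIFO not known to be empty: don't risk overflow */

        for (int n = 0; n < TX_FIFO_LEN && !tx_buf_empty(); n++)
            outb(tx_buf_pop(), UART_THR);
    }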
We want a connectionless client-server model, but we want to reduce the overhead of creating and closing connections on every single request.
e.g. on the client side, if the connection has been idle for 5 seconds, close it; then create a new connection when you decide to send a new request.
ZeroC ICE uses this model.
The question is: can I set a lifetime for ZeroMQ connections?
e.g. if a connection has been idle for 5 seconds, it is closed automatically. Then on each request I check whether the connection is still alive; if it isn't, I reconnect to the server.
0MQ manages TCP connections for you automatically. (I assume your client/server will use TCP.) It provides very little information about connect/disconnect/reconnect status. Nor does it provide any "lifetime" or "timeout" features for sockets.
You'll need to implement the timeout logic you describe in your clients. At a high level: when the client needs to make a request it will first connect a socket, dispatch the request, get the response, then set a timer for 5 seconds. If another request is made in < 5 sec then it reuses the existing connection and resets the timer to 5 sec. If the timer fires then it closes the connection.
Be aware that 0MQ sockets are not thread safe. If your timer fires on a separate thread then it cannot safely close the 0MQ socket. Only the thread that created the socket should close it.
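A minimal sketch of that logic with libzmq's C API, checking the idle time lazily at the next request (so the socket is only ever touched from the thread that created it). The endpoint and the 5-second limit are just example values:

    #include <zmq.h>
    #include <string.h>
    #include <time.h>
    #include <stddef.h>

    #define IDLE_LIMIT_SEC 5

    static void *ctx;
    static void *sock;                /* NULL while no connection is open */
    static time_t last_used;

    static void send_request(const char *req, char *reply, size_t reply_len)
    {
        /* Drop the connection if it has sat idle for too long. */
        if (sock && time(NULL) - last_used >= IDLE_LIMIT_SEC) {
            zmq_close(sock);
            sock = NULL;
        }
        /* (Re)connect on demand. */
        if (!sock) {
            sock = zmq_socket(ctx, ZMQ_REQ);
            zmq_connect(sock, "tcp://server.example:5555");
        }
        zmq_send(sock, req, strlen(req), 0);
        zmq_recv(sock, reply, reply_len, 0);
        last_used = time(NULL);       /* restart the idle timer */
    }

    int main(void)
    {
        char reply[256];
        ctx = zmq_ctx_new();
        send_request("hello", reply, sizeof reply);
        if (sock) zmq_close(sock);
        zmq_ctx_term(ctx);
        return 0;
    }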
I have a question about the A20 gate. I read an article about it saying that the mechanism exists to solve problems with address "wraparound" that appeared when newer CPUs got a 32-bit address bus instead of the older 20-bit bus.
It would seem to me that the correct way to handle the wraparound would be to turn off all of bits A20-A31, and not just A20.
Why is it sufficient to only turn off bit A20 to handle the problem?
The original problem had to do with x86 memory segmentation.
Memory would be accessed using segment:offset, where both segment and offset were 16-bit integers. The actual address was computed as segment*16+offset. On a 20-bit address bus, this would naturally get truncated to the lowest 20 bits.
This truncation could present a problem when the same code was run on an address bus wider than 20 bits, since instead of the wraparound, the program could access memory past the first megabyte. While not a problem per se, this could be a backwards compatibility issue.
To work around this issue, they introduced a way to force the A20 address line to zero, thereby forcing the wraparound.
Your question is: "Why just A20? What about A21-A31?"
Note that the highest location that could be addressed using the 16-bit segment:offset scheme was 0xffff * 16 + 0xffff = 0x10ffef. This fits in 21 bits. Thus, the lines A21-A31 were always zero, and it was only A20 that needed to be controlled.
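The arithmetic is easy to check. This small sketch computes the physical address with and without the A20 line forced low; the highest possible real-mode address only ever sets bit 20 above the first megabyte, so masking that single bit reproduces the old wraparound:

    #include <stdio.h>
    #include <stdint.h>

    /* Real-mode address computation, optionally with the A20 gate closed. */
    static uint32_t phys_addr(uint16_t seg, uint16_t off, int a20_enabled)
    {
        uint32_t addr = (uint32_t)seg * 16 + off;         /* at most 0x10FFEF (21 bits) */
        return a20_enabled ? addr : (addr & ~(1u << 20)); /* force A20 low */
    }

    int main(void)
    {
        printf("A20 on:  0x%05X\n", phys_addr(0xFFFF, 0xFFFF, 1));  /* 0x10FFEF */
        printf("A20 off: 0x%05X\n", phys_addr(0xFFFF, 0xFFFF, 0));  /* 0x0FFEF  */
        return 0;
    }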
I've read advice in many places to the effect that sending a lot of small packets will lead to network congestion. I've even experienced this with a recent multi-threaded tcp app I wrote. However, I don't know if I understand the exact mechanism by which this occurs.
My initial guess is that if the MTU of the physical transmission medium is fixed and you send a bunch of small packets, then each packet may potentially take up an entire transmission frame on the physical medium.
For example, my understanding is that even though Ethernet supports variable-length frames, most equipment uses a fixed Ethernet frame of 1500 bytes. At 100 Mbit/s, a 1500-byte frame "goes by" on the wire every 0.12 milliseconds. If I transmit a 1-byte message (plus TCP and IP headers) every 0.12 milliseconds, I will effectively saturate the 100 Mbit Ethernet connection while carrying only 8333 bytes of user data per second.
Is this a correct understanding of how tinygrams cause network congestion?
Do I have all my terminology correct?
In wired ethernet at least, there is no "synchronous clock" that times the beginning of every frame. There is a minimum frame size, but it's more like 64 bytes instead of 1500. There are also minimum gaps between frames, though that matters most on shared-access networks (ATM and modern ethernet are switched, not shared-access). It is the maximum size that is limited to 1500 bytes on virtually all ethernet equipment.
But the smaller your packets get, the higher the ratio of framing headers to data. Eventually you are spending 40-50 bytes of overhead for a single byte. And more for its acknowledgement.
If you could just hold on for a moment and collect another byte to send in that packet, you would double your network efficiency. (This is the reason for Nagle's algorithm.)
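For what it's worth, Nagle's behaviour is the default on TCP sockets. An application that deliberately prefers latency over efficiency (and is willing to emit tinygrams) has to opt out explicitly, roughly like this on POSIX systems:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Create a TCP socket with Nagle's algorithm disabled: every small
     * write goes out immediately instead of being coalesced while an
     * ACK is outstanding. */
    int make_tcp_socket_nodelay(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
        return fd;
    }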
There is a tradeoff on a channel with errors, because the longer the frame you send, the more likely it is to experience an error and have to be retransmitted. Newer wireless standards load up the frame with forward error correction bits to avoid retransmissions.
The classic example of "tinygrams" is 10,000 users all sitting on a campus network, typing into their terminal sessions. Every keystroke produces a single packet (and an acknowledgement)... At a typing rate of 4 keystrokes per second, that's 80,000 packets per second just to move 40 kbytes per second. On a "classic" 10 Mbit shared-medium ethernet this is impossible to achieve, because you can only send roughly 14,880 minimum-sized packets in one second - excluding the effect of collisions:
  96 bits inter-frame gap
+ 64 bits preamble
+ 112 bits ethernet header
+ 32 bits trailer
-----------------------------
= 304 bits of framing overhead per ethernet frame
+ 368 bits of payload (the 46-byte minimum - only 8 bits of it are real data,
  and that doesn't even include the IP or TCP headers!!!)
----------------------------
= 672 bits per tinygram on the wire

10,000,000 bits/s ÷ 672 bits/frame ≈ 14,880 frames/second.
Perhaps a better way to state this is that an ethernet that is maxed out moving tinygrams can only move about 119 kbit/s of real data across a 10 Mbit/s medium, for an efficiency of roughly 1.2%.
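If you want to reproduce those numbers, the arithmetic is just the overhead breakdown above divided into the link rate:

    #include <stdio.h>

    /* One byte of user data in a minimum-sized frame on 10 Mbit/s Ethernet. */
    int main(void)
    {
        const double link_bps   = 10e6;
        const double frame_bits = 96 + 64 + 112 + 368 + 32;  /* 672 bits on the wire */
        const double data_bits  = 8;                          /* one real byte */

        double frames_per_sec = link_bps / frame_bits;
        double goodput_bps    = frames_per_sec * data_bits;

        printf("%.0f frames/s, %.0f bit/s of user data (%.2f%% efficiency)\n",
               frames_per_sec, goodput_bps, 100.0 * data_bits / frame_bits);
        /* Roughly 14,880 frames/s and ~119 kbit/s of keystrokes, ~1.2%
         * efficiency -- and this still ignores IP/TCP headers and collisions. */
        return 0;
    }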
A TCP packet transmitted over a link will have something like 40 bytes of header information. Therefore, if you break a transmission into 100 1-byte packets, each packet sent carries 40 bytes of headers for 1 byte of data, so about 98% of what goes over the wire is overhead. If instead you send it as one 100-byte packet, the total transmitted data is only 140 bytes, so only about 29% overhead. In both cases you've transmitted 100 bytes of payload over the network, but in one case you used 140 bytes of network resources to accomplish it, and in the other you've used 4100 bytes. In addition, it takes more resources on the intermediate routers to correctly route 100 41-byte packets than one 140-byte packet. Routing 1-byte packets is pretty much the worst-case scenario for the routers performance-wise, so they will generally exhibit their worst-case performance in this situation.
In addition, especially with TCP, as performance degrades due to small packets, the machines can try to do things to compensate (like retransmitting) that will actually make things worse, hence the use of Nagle's algorithm to try to avoid this.
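A quick way to see these numbers is to compare the bytes on the wire for the two cases, assuming a combined TCP/IP header of 40 bytes per packet:

    #include <stdio.h>

    /* 100 bytes of payload sent as 100 one-byte segments vs. one 100-byte segment. */
    int main(void)
    {
        const int header = 40, payload = 100;

        int many = payload * (header + 1);   /* 100 packets of 1 byte each */
        int one  = header + payload;         /* a single 100-byte packet   */

        printf("100 x 1-byte packets: %d bytes on the wire (%.0f%% overhead)\n",
               many, 100.0 * payload * header / many);
        printf("1 x 100-byte packet:  %d bytes on the wire (%.0f%% overhead)\n",
               one, 100.0 * header / one);
        /* Prints 4100 bytes (98% overhead) versus 140 bytes (29% overhead). */
        return 0;
    }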
BDK has about half the answer (+1 for him). A large part of the problem is that every message comes with 40 bytes of overhead. It's actually a little worse than that, though.
Another issue is that there is effectively a minimum packet size on the wire. (This is not the MTU. The MTU is a maximum before fragmentation starts. Different issue entirely.) The minimum is pretty small (46 bytes of Ethernet payload, which your roughly 40 bytes of TCP/IP headers nearly fill on their own), but if you don't use that much, the frame still gets padded out to that size.
Another issue is protocol overhead. Each packet sent by TCP causes an ACK packet to be sent back by the recipient as part of the protocol.
The result is that if you do something silly, like sending one TCP packet every time the user hits a key, you can easily end up with a tremendous amount of wasted overhead data floating around.