I am currently working on a hobby OS, specifically the ATA driver. I am having some issues with PIO data-in commands using interrupts. I am trying to execute the READ MULTIPLE command to read multiple sectors from the drive, block by block, with an interrupt firing for each block.
If I request a read of 4 blocks (1 sector per block), I expect to get 4 interrupts, one for each data block. Upon receiving the 4th interrupt I can tell that all data has been transferred and update my request structure accordingly. However, in VirtualBox I've found that after the last data block has been transferred I receive yet another interrupt (STATUS = 0x50: READY, OVERLAPPED MODE SERVICE REQUEST). I can simply read the STATUS register to clear it, but according to the specs I don't think I should ever receive this 5th interrupt.
So what is the proper way to acknowledge an interrupt issued by an ATA device?
In this example I issue a READ MULTIPLE command, and then my ISR does the following:
Disables CPU interrupts and sets nIEN,
Reads a single data block (not sector!) from the DATA register,
If all data has been read, reads the STATUS register to clear the 'extra' interrupt,
Exits by clearing nIEN and sending EOIs to both the master and slave PICs.
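For context, here is a minimal sketch of how such a READ MULTIPLE command might be issued in the first place (illustrative only, not my exact code; it assumes the primary channel at I/O base 0x1F0, LBA28 addressing, and an outb port-write helper):

#include <stdint.h>

extern void outb(uint16_t port, uint8_t val);   /* assumed port-I/O helper */

#define ATA_BASE 0x1F0

/* Issue READ MULTIPLE (0xC4) for `count` sectors starting at `lba`.
 * SET MULTIPLE MODE (0xC6) selects the sectors-per-block granularity
 * and must have been issued once beforehand. */
static void ata_read_multiple(uint32_t lba, uint8_t count)
{
    outb(ATA_BASE + 6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive 0, LBA mode */
    outb(ATA_BASE + 2, count);                       /* sector count */
    outb(ATA_BASE + 3, lba & 0xFF);                  /* LBA low */
    outb(ATA_BASE + 4, (lba >> 8) & 0xFF);           /* LBA mid */
    outb(ATA_BASE + 5, (lba >> 16) & 0xFF);          /* LBA high */
    outb(ATA_BASE + 7, 0xC4);                        /* READ MULTIPLE */
}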
The ATA specs for the PIO data-in command protocol don't indicate that you need to read the status register. From that I assumed that when I receive an interrupt all I have to do is follow the protocol and finish by sending the EOIs to the PICs. As for the setting/clearing of nIEN, in dealing with VirtualBox I've found that if I don't do this I don't receive any interrupts past the first one. So I set nIEN when entering the ISR, then clear it before I leave. I'd think that wouldn't have any effect, but it must be related to reading/writing that specific register.
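For reference, toggling nIEN amounts to a write to the Device Control register (a sketch, assuming the primary channel's register at 0x3F6, nIEN in bit 1, and the same assumed outb helper as above):

#define ATA_DEV_CTRL 0x3F6   /* primary channel Device Control register */

static void ata_set_nien(int on)
{
    outb(ATA_DEV_CTRL, on ? 0x02 : 0x00);   /* bit 1 = nIEN */
}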
This always happens to me: I post a question I've been struggling with, only to find the answer myself shortly after.
The ATA-6 spec I've been referencing has this one line in the PIO data-in section (9.5):
When in this state, the host shall read the device Status register.
With ATA, reading the Status register has a side effect: it clears a pending interrupt. I knew this, but I didn't read this part carefully before. The spec doesn't mention why you should read the register; it just states it exactly as above.
The important part is how this works with the interrupt handler. After issuing a PIO data-in command, once INTRQ is asserted, you simply read the Status register once to clear the interrupt, then process the interrupt and return as normal (just sending EOIs to the PICs). What had me confused is that none of the documentation I read mentioned exactly how this should work with interrupts (receive an INTRQ, read Status, process the interrupt). Most online guides only deal with polled I/O.
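In code, the corrected ISR boils down to something like this (a minimal sketch, assuming the primary channel at I/O base 0x1F0, IRQ14 on a standard 8259 PIC pair, and inb/insw/outb port-I/O helpers):

#include <stdint.h>

extern uint8_t inb(uint16_t port);                      /* assumed helpers */
extern void insw(uint16_t port, void *buf, uint32_t count);
extern void outb(uint16_t port, uint8_t val);

#define ATA_DATA   0x1F0
#define ATA_STATUS 0x1F7
#define PIC1_CMD   0x20
#define PIC2_CMD   0xA0
#define PIC_EOI    0x20

struct ata_request { uint16_t *buf; uint32_t blocks_left; };

void ata_isr(struct ata_request *req)
{
    /* Reading the Status register acknowledges the interrupt (clears INTRQ). */
    uint8_t status = inb(ATA_STATUS);

    if ((status & 0x08) && req->blocks_left > 0) {  /* DRQ: a block is ready */
        insw(ATA_DATA, req->buf, 256);  /* 256 words = 512 bytes = 1 sector;
                                         * scale for larger block sizes */
        req->buf += 256;
        req->blocks_left--;
    }

    outb(PIC2_CMD, PIC_EOI);   /* IRQ14 lives on the slave PIC, */
    outb(PIC1_CMD, PIC_EOI);   /* so signal EOI to both. */
}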
This is one of the difficulties with low-level programming: key details, such as needing to read the Status register in the ISR, are often glossed over. This one was left as a single line in the protocol description. Call me picky, but I would have expected more emphasis on this point.
I am working on a CANopen architecture and have three questions:
1- After the 'synchronous window' has closed, until the next SYNC message, should we send SDO messages? Are we not allowed to send any messages during this period?
2- Is it permissible not to send a PDO message during the synchronous window?
3- What answer do the slaves give to the SYNC message?
Disclaimer: I don't have exact answers but I just wanted to share my assumptions & thoughts.
CiA 301 doesn't mention any relation between the synchronous window and SDOs. In normal operation, after the initial configuration, one may assume that SDOs aren't present on the system, or at least that they are rare compared to PDOs. Although not strictly necessary, SDOs are generally initiated by the device that has the master role, and that device also produces the SYNC messages (again, not strictly necessary, but it's the usual/common implementation). So the master device may adjust the timing of its SDO requests according to the synchronous window.
Here is a quote from CiA 301:
If the synchronous window length expires all synchronous TPDOs may be
discarded and an EMCY message may be transmitted; all synchronous
RPDOs may be discarded until the next SYNC message is received.
Synchronous RPDO processing is resumed with the next SYNC message.
CiA 301 uses the word "may" (see the quote above). So I'm not sure if it's mandatory or not. In my opinion, it makes sense to follow the advice and abort synchronous TPDO transmissions after the synchronous window and send an EMCY message. Event-driven (non-synchronous) TPDOs can be sent within or after the synchronous window.
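As a rough illustration of how a device might honor that advice (a sketch only: sync_window_us holds the value of object 1007h, now_us/last_sync_us/can_send/send_emcy are assumed application helpers, and the EMCY code 0x8100 is illustrative):

#include <stdint.h>

struct can_frame;                      /* opaque here */

extern uint32_t sync_window_us;        /* object 1007h, synchronous window length */
extern uint64_t last_sync_us;          /* timestamp of the last received SYNC */
extern uint64_t now_us(void);
extern void can_send(const struct can_frame *f);
extern void send_emcy(uint16_t error_code);

void try_send_sync_tpdo(const struct can_frame *tpdo)
{
    if ((now_us() - last_sync_us) < sync_window_us) {
        can_send(tpdo);                /* still inside the synchronous window */
    } else {
        /* Window expired: discard the synchronous TPDO and (optionally)
         * raise an EMCY; 0x8100 is the generic communication error code. */
        send_emcy(0x8100);
    }
}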
There is no direct response to SYNC messages. On SYNC reception, SYNC consumers (slaves) sample their inputs, drive their outputs according to the previous RPDOs, and start transmitting their TPDOs containing the previous samples (or the current ones? I'm not sure about this).
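As a sketch of what that might look like on the consumer side (apply_buffered_rpdos, sample_inputs, and queue_sync_tpdos are assumed application functions, not CiA-defined APIs):

extern void apply_buffered_rpdos(void);
extern void sample_inputs(void);
extern void queue_sync_tpdos(void);

/* Called when a frame with the default SYNC COB-ID (0x080) is received. */
void on_sync_received(void)
{
    apply_buffered_rpdos();   /* drive outputs from the previous RPDOs */
    sample_inputs();          /* latch inputs for this cycle */
    queue_sync_tpdos();       /* send TPDOs carrying the latched samples */
}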
Synchronous windows are for specific PDO synchronization only. For hard real-time systems, data might be required to arrive within certain fixed time intervals - not too early, not too late. That is, it acts as a real-time deadline. If such features are enabled, you need to take that into consideration when doing the specific CANopen bus implementation.
For example, if some SDO communication were to occupy the bus so that a PDO can't meet its time window, that would be a problem. But this is easily solved by giving the PDO a lower COB-ID than the SDO, which should already be the case in most default device profile setups like a "DS401 GPIO module". Other than that, you have to make sure there are no ridiculous bus loads and that nodes don't hang up or get busy doing other things.
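For reference, the default CiA 301 connection set already encodes this priority, since a lower CAN ID wins arbitration (node-ID 5 is just an example):

#define NODE_ID      5
#define COBID_TPDO1  (0x180 + NODE_ID)   /* 0x185: first transmit PDO */
#define COBID_SDO_TX (0x580 + NODE_ID)   /* 0x585: SDO server -> client */
#define COBID_SDO_RX (0x600 + NODE_ID)   /* 0x605: SDO client -> server */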
In systems with hard real-time requirements you probably don't want to allow any SDO communication during operational mode to begin with.
What answer do the slaves give to the SYNC message?
That question doesn't make any sense. You need to study what the SYNC message does and what it is for.
Message sending in Erlang is asynchronous, meaning that a send expression such as PidB ! msg evaluated by a process PidA immediately yields the result msg without blocking the sender. Naturally, its side effect is that of sending msg to PidB.
Since this mode of message passing does not provide any message delivery guarantees, the sender must itself ascertain whether a message has actually been delivered by asking the recipient to confirm accordingly. Of course, confirming whether a message has been delivered might not always be required.
This holds true in both the local and distributed cases: in the latter scenario, the sender cannot simply assume that the remote node is always available; in the local scenario, where processes live on the same Erlang node, a process may send a message to a non-existent process.
I am curious as to how the side-effecting portion of !, i.e., message sending, works at the VM level when the sender and recipient processes live on the same node. In particular, I would like to know whether the sending operation completes before returning. By "completes", I mean that for the specific case of local processes, the sender: (i) acquires a lock on the message queue of the recipient, (ii) writes the message directly into its queue, (iii) releases the lock, and (iv) finally returns.
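To make that concrete, this is the naive scheme I have in mind, in hypothetical pseudo-C (all types and functions here are made up for illustration; this is not the actual BEAM code):

typedef struct process Process;   /* hypothetical types */
typedef struct term    Term;

extern void  lock_msgq(Process *p);
extern void  unlock_msgq(Process *p);
extern Term *copy_message(Process *to, const Term *msg);
extern void  enqueue(Process *to, Term *copy);

/* Steps (i)-(iv) as described above; purely illustrative. */
void send_local(Process *recipient, const Term *msg)
{
    lock_msgq(recipient);                             /* (i)   lock the mailbox */
    enqueue(recipient, copy_message(recipient, msg)); /* (ii)  write the message */
    unlock_msgq(recipient);                           /* (iii) release the lock */
}                                                     /* (iv)  `!` returns msg  */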
I came across this post which I did not fully understand, although it seems to indicate that this could be the case.
Erik Stenman's The Beam Book, which explains many implementation details of the Erlang VM, answers your question in great detail in its "Lock Free Message Passing" section. The full answer is too long to copy here, but the short answer to your question is that yes, the sending process completely copies its message to a memory area accessible to the receiver. If you consult the book you'll find that it's more complicated than steps i-iv you describe in your question due to issues such as different send flags, whether locks are already taken by other processes, multiple memory areas, and the state of the receiving process.
Most server frameworks/examples using sockets and I/O completion ports deliver notifications in a way whose purpose I couldn't completely figure out.
Upon a read completion, packets are processed; usually they are reordered to work around thread-scheduling issues that would otherwise process packets out of order, even though IOCP ensures a FIFO queue.
The problem is when a socket is closed, either gracefully or due to an error. In both situations I've seen that, again because of the OS thread scheduler, the close notification may be delivered to the application (e.g. an HTTP server using the framework) "before" the notification of data previously read.
I think the close notification should be queued in such a way that the application receives it after the earlier read notifications.
Is this behavior intended in most of the code I saw, or could what I expect be correct depending on the situation?
What you suggest makes sense, and I would imagine that any code that handles graceful close (a read returning 0 bytes) would do so by processing it after any preceding successful read. Errors coming out of GetQueuedCompletionStatus(), such as connection-reset errors, are harder to integrate into the receive flow as they occur out of band as far as the receive data is concerned. Your question is a bit vague and depends very much on the code you're using and how you (or the people who wrote that code) want to handle these things. There is no single correct way, IMHO.
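For illustration, here is a sketch of a completion-drain loop along those lines (assuming only reads are posted, the completion key carries the socket, and handle_data/handle_close/handle_error are application callbacks you'd provide):

#include <winsock2.h>
#include <windows.h>

extern void handle_data(SOCKET s, OVERLAPPED *ov, DWORD len);
extern void handle_close(SOCKET s);
extern void handle_error(SOCKET s, DWORD err);

void drain_iocp(HANDLE iocp)
{
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED *ov;

    for (;;) {
        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE);
        if (ok && ov != NULL && bytes == 0) {
            /* 0-byte read completion: graceful close; it is dequeued only
             * after the read completions that preceded it for this socket. */
            handle_close((SOCKET)key);
        } else if (ok && ov != NULL) {
            handle_data((SOCKET)key, ov, bytes);
        } else if (ov != NULL) {
            /* A failed I/O (e.g. connection reset) arrives out of band
             * relative to the received data, as noted above. */
            handle_error((SOCKET)key, GetLastError());
        } else {
            break;   /* GetQueuedCompletionStatus itself failed */
        }
    }
}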
Requirements:
I need to generate an interrupt when a memory location changes or is written to. From an ISR, I can trigger a blue screen, which gives me a nice stack trace with method names.
Approaches:
Testing the value in the timer ISR. Obviously this doesn't give satisfying results.
I discovered the Bochs virtual machine. It has a basic built-in debugger that can set data breakpoints and stop the program, but I can't seem to generate an interrupt at that point.
Bochs allows one to connect GDB to it. I haven't been able to build it with GDB support, though.
Other thoughts:
A kind of "preview instruction" interrupt that triggers for every instruction before executing it. The set of used memory-writing instructions should be pretty manageable, but it would still be a PITA to extract the adress I think. And I think there is no such interrupt.
A kind of "preview memory access" interrupt. Again, I don't think its there.
Abuse paging. Mark the page of interest as not present and test the address in the page fault handler. One would still have to distinguish read and write operations, and I think the page fault handler doesn't get to know the exact address, just the page number.
See chapter 16 in Intel's Software Developer's Manual, Volume 3A. It gives information about using the debug registers, which provide support for raising the debug exception when a certain address is accessed, among other things. The exception is triggered after the instruction that caused it. Specifically, you will have to set one of dr0-dr3 to the address you want to watch, and dr7 with the proper values to tell the processor which types of accesses should cause the exception.
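For instance, from ring 0 you might arm dr0 like this (a sketch assuming 32-bit x86 and GCC inline assembly; the #DB exception then arrives on vector 1, where you can trigger your blue screen):

#include <stdint.h>

/* Arm DR0 as a 4-byte data-write breakpoint on `addr`.
 * DR7 layout: bit 0 (L0) locally enables DR0; bits 16-17 (R/W0) = 01b
 * break on data writes; bits 18-19 (LEN0) = 11b watch 4 bytes. */
static void set_write_watchpoint(void *addr)
{
    uint32_t dr7 = (1u << 0) | (0x1u << 16) | (0x3u << 18);

    __asm__ volatile ("mov %0, %%dr0" :: "r"(addr));
    __asm__ volatile ("mov %0, %%dr7" :: "r"(dr7));
}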
NOTE: I'll use the ssh_sftp channel as an example here, but I've noticed the same behaviour when using different channels.
After starting a channel:
{ok, ChannelPid} = ssh_sftp:start_channel(State#state.cm),
(where cm is my Connection Manager), I'm performing an operation through the channel. Say:
ssh_sftp:write_file(ChannelPid, FilePath, Content),
Then, I'm stopping the channel:
ssh_sftp:stop_channel(ChannelPid),
Since, as far as I know, the channel is implemented as a gen_server, I was expecting the requests to be serialized.
Well, after a bit of tracing, I've noticed that the channel is somehow stopped before the file write is completed and the result of the operation is sent through the channel. As a consequence, the response is not sent through the channel, since the channel doesn't exist anymore.
If I don't stop the channel explicitly, everything works fine and the file write (or any other operation performed through the channel) completes correctly. But I would prefer not to leave channels open. On the other hand, I would also prefer not to implement my own receive handler to wait for the result before the channel can be stopped.
I'm probably missing something trivial here. Do you have any idea why this is happening and/or how I could fix it?
I repeat, ssh_sftp is just an example. I'm using my own channels, implemented using the existing channels in the Erlang SSH application as a template.
As you can see in ssh_sftp.erl, it forcefully kills the channel after a 5-second timeout with exit(Pid, kill), which interrupts the process regardless of whether it is processing something or not.
A related quote from the Erlang man page:
If Reason is the atom kill, that is if exit(Pid, kill) is called, an untrappable exit signal is sent to Pid which will unconditionally exit with exit reason killed.
I had a similar issue with ssh_connection:exec/4. The problem is that these ssh sibling modules (ssh_connection, ssh_sftp, etc.) all appear to behave asynchronously, so closing the ssh channel itself will shut down the ongoing action.
The options are:
1) Do not close the connection: this may lead to a leak of resources. (That was the purpose of my question here.)
2) After the SFTP transfer, introduce a monitoring function that waits by checking the file you are transferring on the remote server (a checksum check). This can be based on ssh_connection:exec, polling the file you are transferring. Once the checksum matches what you expect, you can free the main module.