Hard disk working principle

I have 10 bytes of data to write to a file. If the power cuts out after my program has written 9 bytes and 7 bits to the hard disk, how many bytes will I be able to read from this file once power is restored: 9 bytes or 10 bytes?

You can't say anything for certain; there are too many layers of abstraction here. Your program often buffers, the OS buffers, the chipset buffers, the drive itself buffers, and at some point the data will be written.
When you force a hard sync on the data through something like fsync, all you get is confirmation that at least your data was written; there is no guarantee about anything else.
It takes a non-zero amount of time for the data to stream through all those layers and physically end up on your disk, SSD or otherwise. If power is cut at some point in this process and you haven't received a write confirmation, the safe assumption is that you do not know how much was written. You'll have to inspect whatever files you were writing to and see what data is actually present.
When your system reboots it will probably have to recover from the filesystem journal anyway, and any uncommitted changes will be rolled back. In your example, the number of bytes actually committed is then zero.
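To make the sync point concrete, here is a minimal Delphi sketch (assuming Windows, where FlushFileBuffers is the rough counterpart of POSIX fsync). Even when this call succeeds, it only tells you the OS handed the bytes to the drive; the drive's own cache may still hold them.

uses
  SysUtils, Classes, Windows;

procedure WriteAndFlush(const FileName: string; const Data: array of Byte);
var
  FS: TFileStream;
begin
  FS := TFileStream.Create(FileName, fmCreate);
  try
    FS.WriteBuffer(Data[0], Length(Data));
    // Ask the OS to push its buffers down to the device. Only after this
    // call returns may we assume the bytes have left the OS cache.
    if not FlushFileBuffers(FS.Handle) then
      RaiseLastOSError;
  finally
    FS.Free;
  end;
end;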

Flash memory raw data changes depending on the reading tool. Why?

I've been playing around with the raw data on an 8 GB memory stick, reading and writing directly to specific sectors, but for some reason the changes don't remain consistent.
I've used Active@ Disk Editor to write a string at a specific sector, and it seems consistent when I read it back through Active@ (it survives unmounting, rebooting...), but if I read it from the terminal using dd and hexdump the outcome is different.
Some time ago I was researching ways to fully and effectively erase a disk, and I read somewhere that solid-state devices such as flash drives and SSDs have more memory than is stated, and that their internals keep remapping parts of the memory to extend their lifespan (wear leveling).
I don't know if that is the cause, or even whether it's correct. Could you tell me if I'm wrong, or where to find good documentation on the subject?
Okay, I just figured it out.
When you open a disk in a hex editor there are two ways you can go: you can open it as a physical disk (the whole disk) or as a logical disk, i.e. a volume or partition.
Active@ Disk Editor was opening it as a physical disk, while dd and hexdump were dumping it as a logical disk; in other words, they were dumping the content of the only partition on the physical disk. This means there was an offset between the real physical sectors I was writing to with Active@ and the ones I was reading back, and quite a big one: 2048 sectors of 512 bytes each, i.e. 1 MiB.
So the changes were being made; I was just looking at the wrong positions. Hope this saves someone a few minutes.
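For anyone reproducing this, the difference is just where sector 0 lives. A quick way to see it from the terminal (the device names here are examples, and N is whatever sector you wrote to; substitute your own):

# Read sector N of the *physical* disk (matches what a physical-disk
# hex editor shows):
dd if=/dev/sdb bs=512 skip=N count=1 | hexdump -C

# Read the same data through the *partition* device: the partition
# starts 2048 sectors in, so subtract that offset:
dd if=/dev/sdb1 bs=512 skip=$((N - 2048)) count=1 | hexdump -C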

iOS Bluetooth BLE read data maximum size

I have an iOS app that reads from and writes to a BLE device. The device sends me data longer than 20 bytes, and I can see it gets truncated. Based on the following thread,
Bluetooth LE maximum transmission size
it looks like iOS is trimming the data. That thread shows how to write larger chunks of data, but how do we read data larger than 20 bytes?
For anyone looking at this post years later like I am: we ran into this question as well at one point, and I'd like to share some helpful hints for handling data larger than 20 bytes.
Since the data is larger than one packet can hold, you will need to send it in multiple packets. It helps significantly if your data ALWAYS ends with some sort of END byte. For us, the end byte also carries the size of the total byte array, so we can validate it when the read completes.
Create a loop that keeps checking for packets and stops when it receives that end byte (it would also be wise to put a timeout on that loop); a sketch follows below.
Make sure to clear the "buffer" when you start a new read.
It is also nice to have an "isBusy" boolean to track whether another function is currently waiting to read from the device, which prevents overlapping reads. For us, if the port is busy we wait half a second and try again.
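Here is a minimal sketch of the reassembly loop, written in Delphi-style code to match the rest of this page; on iOS the equivalent logic would live in your CoreBluetooth delegate callback. END_BYTE and ReceivePacket are placeholders for whatever your protocol and transport layer actually provide:

const
  END_BYTE = $FF; // placeholder terminator; your protocol defines its own

// ReceivePacket is assumed to return one notification of up to 20 bytes,
// or an empty array if nothing has arrived yet.
function ReadMessage(TimeoutMs: Cardinal): TBytes;
var
  Buffer, Packet: TBytes;
  OldLen: Integer;
  Start: Cardinal;
begin
  SetLength(Buffer, 0);                 // clear the buffer for a new read
  Start := GetTickCount;
  repeat
    Packet := ReceivePacket;            // placeholder transport call
    OldLen := Length(Buffer);
    SetLength(Buffer, OldLen + Length(Packet));
    if Length(Packet) > 0 then
      Move(Packet[0], Buffer[OldLen], Length(Packet));
    if GetTickCount - Start > TimeoutMs then
      raise Exception.Create('BLE read timed out');
  until (Length(Buffer) > 0) and (Buffer[High(Buffer)] = END_BYTE);
  Result := Buffer;
end;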
Hope this helps!

Read data from PLC with Delphi and libnodave library

I'm here again with a new question, this time about PLCs.
I'll start by saying I'm new to PLCs and had never seen one until a couple of months ago.
I've been asked to write a Delphi program that reads some data from a Siemens S7-300 PLC and archives it in a SQL Server database. I'm using the libnodave library.
The program is quite simple. I must poll a bit, and when it is set I have to read the data from the PLC and then clear the bit. With the library mentioned above I can read and write without problems, but the data I have to read is stored in a group of bytes (about 60 bytes), so I have to read some bytes, skip some others, and read other bytes again. Moreover, the bit I must test is at the end of this group of bytes.
So I read the entire group of bytes, put the data read into a group of variables, then test the bit and, if it is set, store the data in the database.
To skip the bytes I don't need, I use statements like these:
for i := 1 to 14 do
  daveGetU8(dc);  // discard 14 single bytes
for i := 1 to 6 do
  daveGetU16(dc); // discard 6 words (12 bytes)
My questions are these:
Is there a better way to read the data while skipping the bytes I don't need?
Is it better to read the entire group of bytes and then test the bit, or to perform two separate reads?
I ask because I've read on the internet that read operations take some time, so it's better to perform as few reads as possible.
Eros
Communicating with a PLC involves some overhead.
You send a request and after some time you receive an answer.
Often the communication is through a serial line with limited bandwidth.
The timing then involves:
Time to send the request
Time for the PLC to respond
Time to transfer the response
It is difficult to give a definite answer to your questions, since we don't know how critical the timing is.
Anyway, polling only the flag byte seems like a reasonable way to go.
When the flag is set, read the entire block in one command and then clear the flag.
Reading the data in small parts to avoid the gaps is probably more time-consuming than reading the entire block at once.
You can do the maths yourself, since you know the specifications.
Example:
Let's say the baud rate is 9600 baud. That is roughly 1 byte per millisecond of transfer time. The read command is about 10 bytes long (10 ms), the block answer about 70 bytes (70 ms, assuming the protocol is binary), and the PLC's delay is about 50 ms. That adds up to about 130 ms for a block read, while reading the flag alone comes to about 70 ms.
Only you can say whether the additional polling time of 70 ms is acceptable.
Edit: In a comment it is stated that the communication is via Ethernet on a 100+ Mbit/s line. In that case, I suggest reading all the data in one command and processing it in the PC; a sketch follows below. Timing is of little concern with that much bandwidth.
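Here is a rough Delphi sketch of that polling scheme with libnodave. The DB number and offsets are placeholders (I'm assuming the data occupies bytes 0..59 of DB1 with the flag at byte 60), and the exact signatures of your Delphi bindings may differ slightly:

const
  DB_NUM    = 1;   // assumed data block number
  DATA_LEN  = 60;  // size of the data area in bytes
  FLAG_OFFS = 60;  // assumed position of the 'data ready' flag byte
var
  res: Integer;
  flag, zero: Byte;
begin
  // 1) Poll only the flag byte: a small, cheap request.
  res := daveReadBytes(dc, daveDB, DB_NUM, FLAG_OFFS, 1, nil);
  if res = 0 then
  begin
    flag := daveGetU8(dc);
    if flag <> 0 then
    begin
      // 2) Flag is set: fetch the whole block in one request, then
      // decode it field by field with daveGetU8/daveGetU16, simply
      // discarding the values you don't need.
      res := daveReadBytes(dc, daveDB, DB_NUM, 0, DATA_LEN, nil);
      if res = 0 then
      begin
        { ... daveGetU8(dc) / daveGetU16(dc) calls go here ... }
      end;
      // 3) Clear the flag so the PLC can signal the next batch.
      zero := 0;
      daveWriteBytes(dc, daveDB, DB_NUM, FLAG_OFFS, 1, @zero);
    end;
  end;
end;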

Data memory and Instructions on PIC18F4321

We are studying the PIC18F4321, and at some point my professor drew the following diagram on the board:
He made it look like instructions (such as ADDLW 0x02, MOVWF 0x24, etc.) take up two addresses in data memory, because memory addresses in the PIC18F4321 hold only one byte each and instructions are 16 bits wide.
But in the datasheet of the PIC18F4321 I cannot find where it says that these 16-bit instructions are ever stored in data memory. Before he said that, I had it in mind that data memory was for storing register values, not full instructions. On the other hand, I know there is also program memory, but program memory is not 8 bits wide, which makes his drawing even more confusing.
1) Are 16-bit instructions ever stored in data memory?
2) One way I found of trying to explain the picture is that perhaps the memory in question is not necessarily 8 bits wide; it is just that every address can only hold 8 bits, so <8> would simply state how many bits that address can hold. Would this be a reasonable explanation?
1) Are 16-bit instructions ever stored in data memory?
No. Data memory is not used for storing instructions; you cannot execute any code from data memory. All instructions are stored in program memory, which consists of 16-bit instruction words. The datasheet details the format and layout of the different instructions: some are a single word, some require multiple words. Program memory is addressed by a 21-bit program counter, which encompasses a 2 MB space, although the PIC18F4321 has just 8 KB of program memory, which equates to 4096 single-word instructions.
Data memory consists of 8-bit bytes addressed by a 12-bit bus, which allows up to 4096 bytes of data memory, although the PIC18F4321 has just 512 bytes, split into two banks of 256 bytes. This data memory contains the SFRs (special function registers) and the general-purpose registers (GPRs) that you use in your application.
All of this is explained in greater detail in the datasheet for this device, specifically in Section 5.
The way the program counter (PC) addresses program memory enforces 16-bit instruction word alignment by fixing the least significant bit of the PC to zero, so accesses happen in multiples of two bytes. Quoting from the datasheet:
The PC addresses bytes in the program memory. To prevent the PC from becoming misaligned with word instructions, the Least Significant bit of PCL is fixed to a value of '0'. The PC increments by 2 to address sequential instructions in the program memory.
I suggest that you thoroughly read Section 5 of the linked datasheet and see if you have any remaining doubts. It contains a lot of detail, but it is well described even though it will take more than one reading to understand it completely.
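As a toy illustration of that alignment rule (a sketch only, not code that runs on the PIC itself), in Delphi-style terms the PC behaves like this:

// The LSB of the PC is hard-wired to 0, so every fetch is word-aligned,
// and sequential execution advances the PC by 2 bytes (one 16-bit word).
function NextInstructionAddress(PC: LongWord): LongWord;
begin
  Result := (PC and not LongWord(1)) + 2; // clear LSB, step one word
end;

So byte addresses 0x0000, 0x0002, 0x0004, ... each hold one 16-bit instruction word, which is why 8 KB of program memory equates to 4096 single-word instructions.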

What is the fastest way for reading huge files in Delphi?

My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes.
The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:
FileStream.Position := Offset;
MemoryStream.CopyFrom(FileStream, Size);
This works fine, but unfortunately it becomes increasingly slow as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size.
The file's content is only read by my program. It is the only program accessing the file at the time. Also the files are stored locally so this is not a network issue.
I am using Delphi 2007 on a Windows XP box.
What can I do to speed up this file access?
edit:
The file access is slow for large files, regardless of which part of the file is being read.
The program usually does not read the file sequentially. The order of the chunks is user driven and cannot be predicted.
It is always slower to read a chunk from a large file than to read an equally large chunk from a small file.
I am talking about the performance for reading a chunk from the file, not about the overall time it takes to process a whole file. The latter would obviously take longer for larger files, but that's not the issue here.
I need to apologize to everybody: after I implemented file access using a memory-mapped file as suggested, it turned out that it did not make much of a difference. After adding some more timing code, it also turned out that it is not the file access that slows the program down; the file access actually takes nearly constant time regardless of the file size. Some part of the user interface (which I have yet to identify) seems to have a performance problem with large amounts of data, and somehow I failed to see the difference when I first timed the processes.
I am sorry for being sloppy in identifying the bottleneck.
If you open the help topic for the CreateFile() WinAPI function, you will find interesting flags there such as FILE_FLAG_NO_BUFFERING and FILE_FLAG_RANDOM_ACCESS. You can play with them to gain some performance.
Next, copying the file data, even just 100 KB, is an extra step which slows operations down. It is a good idea to use the CreateFileMapping and MapViewOfFile functions to get a ready-to-use pointer to the data; as sketched below, this way you avoid the copy and possibly gain certain performance benefits (but you need to measure the speed carefully).
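A minimal Delphi sketch of the mapping approach, under two assumptions: error handling is kept to a minimum, and since a tens-of-gigabytes file cannot be mapped whole into a 32-bit address space, only a window around each chunk is mapped. MapViewOfFile requires the offset to be a multiple of the system allocation granularity (typically 64 KB), hence the alignment step:

uses Windows, SysUtils;

procedure ReadChunkMapped(const FileName: string; Offset: Int64;
  Size: Cardinal; Dest: Pointer);
var
  hFile, hMap: THandle;
  View: Pointer;
  Base: Int64;
  Delta: Cardinal;
  SI: TSystemInfo;
begin
  // Align the map base down to the allocation granularity and remember
  // how far into the view our chunk actually starts.
  GetSystemInfo(SI);
  Base := Offset - (Offset mod SI.dwAllocationGranularity);
  Delta := Cardinal(Offset - Base);

  hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_FLAG_RANDOM_ACCESS, 0);
  if hFile = INVALID_HANDLE_VALUE then RaiseLastOSError;
  try
    hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
    if hMap = 0 then RaiseLastOSError;
    try
      // Map only the window that covers the requested chunk.
      View := MapViewOfFile(hMap, FILE_MAP_READ,
        Int64Rec(Base).Hi, Int64Rec(Base).Lo, Delta + Size);
      if View = nil then RaiseLastOSError;
      try
        Move(Pointer(Cardinal(View) + Delta)^, Dest^, Size);
      finally
        UnmapViewOfFile(View);
      end;
    finally
      CloseHandle(hMap);
    end;
  finally
    CloseHandle(hFile);
  end;
end;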
Maybe you can take this approach:
Sort the entries by file position, then do the following:
Take the entries that only need the first X MB of the file (up to a certain file position).
Read those X MB from the file into a buffer (a TMemoryStream).
Now read the entries from the buffer (maybe multithreaded).
Repeat this for all the entries.
In short: cache a part of the file and read all entries that fit into it (multithreaded), then cache the next part, and so on.
Maybe you can gain speed if you just keep your original approach but sort the entries by position; a sketch of that follows below.
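For the "sort the entries by position" idea, here is a minimal sketch; TChunk is a placeholder record, and since Delphi 2007 has no generics, a hand-rolled sort over a dynamic array keeps it simple:

type
  TChunk = record
    Offset: Int64;
    Size: Integer;
  end;
  TChunkArray = array of TChunk;

// Insertion sort by offset; fine for a few thousand entries. Reading
// the chunks in ascending offset order turns random access into a
// mostly forward scan, which disks and OS read-ahead handle far better.
procedure SortByOffset(var A: TChunkArray);
var
  i, j: Integer;
  Tmp: TChunk;
begin
  for i := 1 to High(A) do
  begin
    Tmp := A[i];
    j := i - 1;
    while (j >= 0) and (A[j].Offset > Tmp.Offset) do
    begin
      A[j + 1] := A[j];
      Dec(j);
    end;
    A[j + 1] := Tmp;
  end;
end;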
The stock TMemoryStream in Delphi is slow due to the way it allocates memory. The NexusDB company has TnxMemoryStream, which is much more efficient, and there may be free alternatives out there that work better.
The stock Delphi TFileStream is also not the most efficient component. Way back, Julian Bucknall published a component named BufferedFileStream in a magazine or somewhere that worked with file streams very efficiently.
Good luck.
