I need to load a file into an FPGA board. Since the file (YUV) is around 10 MB or more, I believe I have to store it in the SDRAM (DDR3) and then load it into the FPGA. I'm quite new, so I'm unsure how exactly to load the file from the PC into the DDR3 through Vivado. I'm using a Kintex-7 KC705 board. Thank you.
I've been playing around with the raw data on an 8GB memory stick, reading and writing directly to specific sectors, but for some reason the changes don't persist consistently.
I've used Active@ Disk Editor to write a string at a specific sector, and it seems consistent when I read it back through Active@ (it survives unmounting, rebooting...), but if I read it through the terminal using dd and hexdump, the outcome is different.
Some time ago I was researching ways to fully and effectively erase a disk, and I read somewhere that solid-state devices such as flash drives and SSDs have more memory than is stated, and their internals keep remapping parts of the memory to increase lifespan (wear leveling) or something like that.
I don't know if that is the cause here, or even if it's correct. Could you tell me if I'm wrong, or where to find good documentation on the subject?
Okay, I just figured it out.
Apparently when you open a disk in a hex editor there are two ways you can go: you can open it as a physical disk (the whole disk) or as a logical disk, a.k.a. a volume or partition.
Active@ Disk Editor was opening it as a physical disk, while dd and hexdump were dumping it as a logical disk. In other words, they were dumping the contents of the only partition on the physical disk. This means there was an offset between the real physical sectors I was writing to with Active@ and the ones I was reading (quite a big offset: 2048 sectors of 512 bytes each, i.e. 1 MiB).
So the changes were being made; I was just looking at the wrong positions. Hope this saves someone a few minutes.
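To make the offset concrete, here is a minimal sketch in C, assuming a Linux box where the stick shows up as /dev/sdb (whole disk) and /dev/sdb1 (its only partition) — the device names are hypothetical, and 2048 is just the start sector from my case. The dd equivalent is adding skip=2048 when reading the whole-disk device.

/* Minimal sketch (hypothetical device names, Linux, needs root):
 * the same 512-byte sector read through the whole-disk device at
 * LBA 2048 and through the partition device at LBA 0 should match. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SECTOR 512
#define PART_START 2048               /* partition begins at sector 2048 */

int main(void)
{
    unsigned char phys[SECTOR], logi[SECTOR];

    int fd = open("/dev/sdb", O_RDONLY);          /* physical view */
    if (fd < 0) { perror("/dev/sdb"); return 1; }
    pread(fd, phys, SECTOR, (off_t)PART_START * SECTOR);
    close(fd);

    fd = open("/dev/sdb1", O_RDONLY);             /* logical view */
    if (fd < 0) { perror("/dev/sdb1"); return 1; }
    pread(fd, logi, SECTOR, 0);
    close(fd);

    puts(memcmp(phys, logi, SECTOR) == 0
             ? "identical: the partition starts 2048 sectors into the disk"
             : "differ: check the partition's real start sector");
    return 0;
}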
I'm studying how virtual memory works and I'm not sure what happens if I load a big file (smaller than the physical memory, though) with fread() and similar.
As far as I understand, the operating system might not allocate the entire corresponding physical memory. Instead, it could wait until a page fault is triggered as my program reads a specific portion of the file (a portion not yet mapped to physical memory).
This is basically the behavior of a memory mapped file. So, if my assumptions are correct, what is the benefit of using system calls like mmap()? Just to avoid the usual for-loop dance when reading with fread(), maybe?
read()/fread() will read the amount you specify into the buffer you provide. mmap() is a separate interface into the kernel's file cache. Where the two intersect is that the kernel will most likely first read the file into cache buffers, then copy selected parts of those cache buffers into your user buffer.
This double copy is often necessary because your program doesn't provide the alignment and block size the underlying device requires, and if the data needs transformation (decryption, decompression), the kernel needs a place to do that work.
This kernel cache is kept coherent with the file, so reads and writes system-wide go through it.
If you mmap() the file, you may be able to avoid the double copy, but you have to deal with changes to the file appearing unannounced.
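To illustrate, a minimal read-only mmap() sketch in C; the file name is hypothetical and error handling is trimmed to the essentials:

/* Map a file read-only and touch it page by page. The pages come straight
 * from the kernel page cache (no second copy into a user buffer) and are
 * faulted in lazily, the first time each one is touched. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("big.yuv", O_RDONLY);            /* hypothetical name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    const unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                                  MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)   /* one touch per page */
        sum += p[i];                               /* may page-fault here */
    printf("sum of first byte of each page: %lu\n", sum);

    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}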
I have a DE1-SoC Board and would like to experiment with it.
(Board description: http://www.terasic.com.tw/cgi-bin/page/archive.pl?Language=English&CategoryNo=205&No=836&PartNo=1)
My wish is to incorporate non-volatile storage.
For a start, implementing the following exercise would make me happy: when the board is turned on, the integer stored in memory should appear in readable format on the HEX LEDs.
So far I have implemented the "ability" to change the value of the HEX LEDs using extra buttons connected to the board. However, if the board is turned off, the whole "ability" is gone. I then need to reconnect the board to my PC and re-download the binary code to the FPGA. On top of that, the value shown on the LEDs is also reset to its default. I would like to avoid having to reconnect my FPGA to the computer.
How do I start working on this?
Looking at the board documentation for memory:
64 MB (32M x 16) SDRAM on FPGA
1 GB (2 x 256M x 16) DDR3 SDRAM on HPS
Micro SD Card Socket on HPS
Does this mean that the DE1-SoC has no non-volatile storage built in? If it does have some, how do I access it?
I also have all the pin assignments of the board in a single file, "de1soc_pin_assignments.qsf".
Can I put an SD card into the "Micro SD Card Socket" and use it as flash storage? Is it possible to "boot" the binary code from the SD card into the FPGA (and restore the integer shown on the LEDs as well)? If yes, which pins should I use for that?
Thanks a lot in advance for your help.
The DE1-SoC includes an EPCS128 configuration flash, which can be used to store the bitstream for your design. See page 105 of the DE1-SoC user manual ("Programming the EPCS Device") for details on how to convert a bitstream to the appropriate format and store it on the flash chip. Once you've done so, the FPGA will "boot" into that bitstream when powered on, without needing to be plugged into a computer.
The configuration flash cannot easily be used to store other data, such as the state of your LEDs. It might be possible to store this data on an SD card, but doing so will not be an easy task, as SD cards require a complex initialization process before they can be accessed.
SynMemo at SourceForge seems to be a very good text editor and code highlighter. It's a pity that it hasn't been updated in a long time. It is pure VCL. I want to know its maximum capacity: what is the largest text file it can load?
Thanks
On a 32-bit operating system you can load ~2 GB of text into the editor (not recommended). If you're running a 64-bit OS and want to load more than 2 GB of data into the SynEdit control, have a look at "Why 2 GB memory limit when running in 64 bit Windows?" and at http://cc.embarcadero.com/Item/24309.
From my experience I was able to load a couple of hundred megabytes without an issue, but the component becomes less and less responsive the more you load. ~80 MB is super fast to load and play with.
I hope this helps.
My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry and the program seeks to the offset and reads length bytes.
The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:
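// Seek to the chunk's offset in the source file, then append
// Size bytes of it to the in-memory stream: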
FileStream.Position := Offset;
MemoryStream.CopyFrom(FileStream, Size);
This works fine, but unfortunately it becomes increasingly slow as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size.
The file's content is only read by my program, and mine is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue.
I am using Delphi 2007 on a Windows XP box.
What can I do to speed up this file access?
edit:
The file access is slow for large files, regardless of which part of the file is being read.
The program usually does not read the file sequentially. The order of the chunks is user driven and cannot be predicted.
It is always slower to read a chunk from a large file than to read an equally large chunk from a small file.
I am talking about the performance for reading a chunk from the file, not about the overall time it takes to process a whole file. The latter would obviously take longer for larger files, but that's not the issue here.
I need to apologize to everybody: after I implemented file access using a memory-mapped file as suggested, it turned out that it did not make much of a difference. But after adding some more timing code, it also turned out that it is not the file access that slows down the program. The file access actually takes nearly constant time regardless of the file size. Some part of the user interface (which I have yet to identify) seems to have a performance problem with large amounts of data, and somehow I failed to see the difference when I first timed the processes.
I am sorry for being sloppy in identifying the bottleneck.
If you open the help topic for the CreateFile() WinAPI function, you will find interesting flags there, such as FILE_FLAG_NO_BUFFERING and FILE_FLAG_RANDOM_ACCESS. You can play with them to gain some performance.
Next, copying the file data, even just 100 KB at a time, is an extra step which slows down operations. It is a good idea to use the CreateFileMapping and MapViewOfFile functions to get a ready-to-use pointer to the data. This way you avoid the copy and possibly get certain performance benefits (but you need to measure the speed carefully).
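A minimal sketch of that mapping approach in C (the file name, offset, and chunk are made up; for files in the tens of gigabytes you would map a window around each chunk rather than the whole file, especially on 32-bit Windows where the address space won't hold it):

/* Map the file read-only and read a chunk in place, with no intermediate
 * buffer copy. Mapping the entire file only works while it fits in the
 * address space; otherwise map a window around the requested chunk. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("huge.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING,
                              FILE_FLAG_RANDOM_ACCESS, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE map = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (map == NULL) { CloseHandle(file); return 1; }

    const BYTE *base = MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0);
    if (base != NULL) {
        /* Hypothetical chunk from the entry list: data at offset 1 MB. */
        const BYTE *chunk = base + 1024 * 1024;
        printf("first byte of chunk: %d\n", (int)chunk[0]);
        UnmapViewOfFile(base);
    }
    CloseHandle(map);
    CloseHandle(file);
    return 0;
}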
Maybe you can take this approach:
Sort the entries on file position and then do the following:
Take the entries that only need the first X MB of the file (up to a certain file position)
Read X MB from the file into a buffer (TMemoryStream)
Now read the entries from the buffer (maybe multithreaded)
Repeat this for all the entries.
In short: cache a part of the file and read all the entries that fit into it (multithreaded), then cache the next part, etc. (a sketch of this follows below).
Maybe you can gain speed if you just take your original approach, but sort the entries on position.
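For illustration, here is a minimal sketch of that windowed strategy in C (the question's code is Delphi, so treat this as a sketch only; the Chunk type, the 16 MB window size, and the process callback are all made up):

/* Sort the requests by offset, read the file in large sequential windows,
 * and serve every request that falls entirely inside the current window
 * from memory. Assumes each chunk (~100 KB here) fits in one window. */
#include <stdio.h>
#include <stdlib.h>

#define WINDOW (16u * 1024 * 1024)        /* 16 MB cache window (made up) */

typedef struct { long long offset; size_t size; } Chunk;

static int by_offset(const void *a, const void *b)
{
    const Chunk *x = a, *y = b;
    return (x->offset > y->offset) - (x->offset < y->offset);
}

void serve_all(FILE *f, Chunk *chunks, size_t n,
               void (*process)(const unsigned char *data, size_t size))
{
    unsigned char *win = malloc(WINDOW);
    if (win == NULL) return;
    qsort(chunks, n, sizeof *chunks, by_offset);

    size_t i = 0;
    while (i < n) {
        long long start = chunks[i].offset;   /* window starts at next chunk */
        fseeko(f, start, SEEK_SET);           /* POSIX; _fseeki64 with MSVC */
        size_t got = fread(win, 1, WINDOW, f);
        if (got < chunks[i].size) break;      /* short read: give up */

        /* Serve every chunk lying fully inside [start, start + got); a chunk
         * straddling the edge begins the next window on the next pass. */
        while (i < n && chunks[i].offset + (long long)chunks[i].size
                         <= start + (long long)got) {
            process(win + (size_t)(chunks[i].offset - start), chunks[i].size);
            i++;
        }
    }
    free(win);
}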
The stock TMemoryStream in Delphi is slow due to the way it allocates memory. The NexusDB company has TnxMemoryStream which is much more efficient. There might be some free ones out there that work better.
The stock Delphi TFileStream is also not the most efficient component. Way back, Julian Bucknall published a component named BufferedFileStream, in a magazine or somewhere, that worked with file streams very efficiently.
Good luck.