STM32 - QSPI Flash Read-Only Problem in Memory-Mapped Mode

In the STM32F7, the code runs from internal flash by default, and we can read/write data from/to internal flash.
My problem is: I want to use external QSPI flash for my code execution (memory-mapped mode).
During this mode, I would also like to use the same QSPI flash for my data storage (i.e. saving some settings) while code is executing from QSPI flash. But this does not seem possible, since ST states in its application note (AN4760):
In Memory-mapped mode the QUADSPI allows the access to the external
memory for read operation through the memory mapped address region
(from 0x9000 0000 to 0x9FFF FFFF) and allows the external memory to
be seen just like an internal memory.
Is there any solution to my problem (writing data to QSPI flash) without exiting memory-mapped mode?
Is it possible to partition the QSPI flash into two parts, one used for memory-mapped code execution and the other used just for reading/writing data?
Note: I don't want to jump from external flash to internal flash to write the data, then jump back to external flash again to continue executing the code.
Any help would be appreciated.
Thanks.

No, it is not possible. Flash memory, if it was written before, has to be erased first; then you need to enter the write mode and program the memory. Flash memory is always slow to write.
Memory-mapped mode is usually used to run code from the QSPI flash, or to simplify read access.

I know this is an older post but for future reference:
You need to stop executing out of external flash to write to it, perhaps by copying a small code block to RAM or executing from internal flash, and then jumping to that code. That code could take the external flash out of memory-mapped mode, write your data to it and then switch it back to memory-mapped mode. Obviously during this time you would need to disable any relevant interrupts and make sure there were no accesses to the memory-mapped flash. Also take special care not to erase and overwrite your code in external flash, unless you want to of course!
This is a similar process to writing to internal flash when you cannot execute from it while writing to it.
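A rough sketch of that sequence, assuming a Cube-HAL-based STM32F7 project and a generic QSPI NOR flash; the opcodes (0x06/0x02/0x0B), the ".ramfunc" section name and the hqspi handle are placeholders for whatever your board support code actually defines. Note that everything this routine calls (including the HAL QSPI functions) must itself be located in RAM or internal flash, not in the external flash that is being taken offline:
```c
#include "stm32f7xx_hal.h"          /* assumes an STM32F7 Cube HAL project */

extern QSPI_HandleTypeDef hqspi;    /* assumed to be initialised elsewhere */

/* Runs from RAM so it keeps executing while memory-mapped access is off.
 * The ".ramfunc" section must exist in your linker script. */
__attribute__((section(".ramfunc")))
HAL_StatusTypeDef QSPI_WriteSettings(uint32_t addr, uint8_t *data, uint32_t len)
{
    QSPI_CommandTypeDef cmd = {0};
    QSPI_MemoryMappedTypeDef mm = {0};
    HAL_StatusTypeDef st;

    __disable_irq();                /* nothing may fetch from 0x90000000 now  */
    HAL_QSPI_Abort(&hqspi);         /* leave memory-mapped mode               */

    /* Write Enable (0x06), then Page Program (0x02) -- opcodes are part-specific */
    cmd.InstructionMode = QSPI_INSTRUCTION_1_LINE;
    cmd.Instruction     = 0x06;
    cmd.AddressMode     = QSPI_ADDRESS_NONE;
    cmd.DataMode        = QSPI_DATA_NONE;
    st = HAL_QSPI_Command(&hqspi, &cmd, HAL_QSPI_TIMEOUT_DEFAULT_VALUE);

    if (st == HAL_OK) {
        cmd.Instruction = 0x02;
        cmd.AddressMode = QSPI_ADDRESS_1_LINE;
        cmd.AddressSize = QSPI_ADDRESS_24_BITS;
        cmd.Address     = addr;     /* must NOT overlap the region holding code */
        cmd.DataMode    = QSPI_DATA_1_LINE;
        cmd.NbData      = len;
        st = HAL_QSPI_Command(&hqspi, &cmd, HAL_QSPI_TIMEOUT_DEFAULT_VALUE);
    }
    if (st == HAL_OK)
        st = HAL_QSPI_Transmit(&hqspi, data, HAL_QSPI_TIMEOUT_DEFAULT_VALUE);

    /* ...poll the flash status register here until the program cycle finishes... */

    /* Re-enter memory-mapped mode before returning to code in external flash */
    cmd.Instruction = 0x0B;         /* FAST READ -- again, part-specific */
    cmd.AddressMode = QSPI_ADDRESS_1_LINE;
    cmd.DataMode    = QSPI_DATA_1_LINE;
    cmd.DummyCycles = 8;
    cmd.NbData      = 0;
    HAL_QSPI_MemoryMapped(&hqspi, &cmd, &mm);

    __enable_irq();
    return st;
}
```
Keeping this routine short and self-contained is the whole point: while memory-mapped mode is off, any fetch from the 0x90000000 region will not work, so the less code that has to run during that window, the better.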

You can write to flash by writing to the quadspi registers. See qspi_write_cmd_addr_data() in https://github.com/micropython/micropython/blob/master/ports/stm32/qspi.c

Related

Keil memory window

I need to write data into the program's internal memory (flash), at an address starting at 0x08000000, for further processing. To do this, I open the memory window at the desired address, select the byte, enter the number and press "Enter", but nothing happens and the entered data is not saved. You can watch a video demonstrating the process here: https://2ch.hk/pr/src/1499956/15818644469750.mp4. Data at 0x20000000, by contrast, is written without problems. But I need the data to end up in flash; how can I fix this?
The debugger is not capable of writing to flash. You can get your data into that memory location in one of two ways: either by embedding it into your source code and using the linker to control its location, or by writing to the flash from software (assuming the device you're using is able to do this, most are - check the device manual).
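For the first option (embedding the data in the source and letting the linker place it), a minimal sketch using GCC syntax; the section name, the address and the array contents here are only examples:
```c
/* Place a constant table at a known flash address by giving it its own
 * section and locating that section in the linker script. */
#include <stdint.h>

__attribute__((section(".mydata"), used))
const uint8_t calibration_table[256] = { 0x11, 0x22, 0x33 /* ... */ };

/* In the linker script (.ld), something along these lines:
 *
 *   .mydata 0x08008000 : { KEEP(*(.mydata)) } > FLASH
 *
 * The address and section name are examples only -- pick a region that
 * does not collide with your code.
 */
```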

STM32F4 Flash Memory Write/Read Questions

I want to declare 4 large arrays, each of which can store 48000 float32 constants.
I am using an STM32F4, so I figured out that RAM isn't enough and I should use flash instead. From my research, it seems everyone says you should never use flash for write/read operations at runtime and should use an EEPROM instead. I am aware of the risks, but why is 1 MB of flash memory there if I should never use it? What is it used for?
As for my real problem: I just want to generate a sampling array and write it to flash once, when the device is initialized, and never touch it again. It is safe this way, but I can't find any useful and easy-to-understand tutorials on writing to flash anywhere. Is it forbidden? Should we not use the 1 MB of space available on the chip? Please point me to a nice tutorial on writing to/reading from STM32F4 flash.
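No answer is reproduced here, but for reference the usual route on the F4 is the built-in FLASH programming interface: unlock, erase the target sector, then program it word by word. A minimal sketch, assuming the Cube HAL and (arbitrarily) sector 7 of a 1 MB device as the scratch area; check your part's sector map before reusing the address:
```c
#include "stm32f4xx_hal.h"

/* Writes `count` 32-bit words into flash sector 7 (0x08060000 on many 1 MB parts).
 * The sector number and address are examples only -- check the reference manual. */
HAL_StatusTypeDef save_samples(const uint32_t *data, uint32_t count)
{
    FLASH_EraseInitTypeDef erase = {
        .TypeErase    = FLASH_TYPEERASE_SECTORS,
        .Sector       = FLASH_SECTOR_7,
        .NbSectors    = 1,
        .VoltageRange = FLASH_VOLTAGE_RANGE_3,   /* 2.7-3.6 V supply */
    };
    uint32_t sector_error = 0;
    uint32_t addr = 0x08060000U;
    HAL_StatusTypeDef st;

    HAL_FLASH_Unlock();
    st = HAL_FLASHEx_Erase(&erase, &sector_error);
    for (uint32_t i = 0; st == HAL_OK && i < count; i++, addr += 4) {
        st = HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD, addr, data[i]);
    }
    HAL_FLASH_Lock();
    return st;
}
```
On single-bank parts, erase and program operations stall code fetched from flash while they run, so doing this once at start-up, as the question describes, is the usual pattern.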

How does the cpu decide which data it puts in what memory (ram, cache, registers)?

When the CPU is executing a program, does it move all data through the memory pipeline? That is, would any piece of data be moved from RAM -> cache -> registers, so that everything the program works on passes through the CPU registers at some point? Or does it somehow select which code and data it puts in those faster memory types, and can you as a programmer choose specific code you want to keep, for example, in the cache for optimization?
The answer to this question is an entire course in itself! A very brief summary of what (usually) happens is that:
You, the programmer, specify what goes in RAM. Well, the compiler does it on your behalf, but you're in control of this through how you declare your variables (see the short example after this list).
Whenever your code accesses a variable, the CPU's cache/MMU hardware checks whether the value is in the cache and, if it is not, fetches the 'line' that contains the variable from RAM into the cache. Some CPU instruction sets may allow you to prevent it from doing so (causing a stall) for specific low-frequency operations, but that requires very low-level code. When you update a value, the hardware will eventually perform a 'cache flush' operation, committing the cached memory to RAM. Again, you can affect how and when this happens with low-level code, and it also depends on the cache configuration, such as whether the cache is write-through, etc.
If you are going to do any kind of operation on the value that requires it to be used by an ALU (Arithmetic Logic Unit) or similar, then it will be loaded from the cache into an appropriate register. Which register depends on the instruction the compiler generated.
Some CPUs support Direct Memory Access (DMA), which provides a shortcut for operations that do not really require the CPU to be involved. These include memory-to-memory copies and the transfer of data between memory and memory-mapped peripheral control blocks (such as UARTs and other I/O blocks). These cause data to be moved, read or written in RAM without involving the CPU core at all.
At a higher level, some operating systems that support multiple processes will save the RAM allocated to the current process to the hard disk when the process is swapped out, and load it back in again from the disk when the process runs again. (This is why you may find 'Page Files' on your C: drive and the options to limit their size.) This allows all of the running processes to utilise most of the available RAM, even though they can't actually share it all simultaneously. Paging is yet another subject worthy of a course on its own. (Thanks to Leeor for mentioning this.)
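To make the first point above concrete, here is a small C example of how declarations typically steer placement; the exact sections and register allocation are up to the compiler and linker, so treat it as an illustration rather than a guarantee:
```c
#include <stdint.h>

int counter = 42;                  /* initialised global: .data section, lives in RAM */
static uint8_t buffer[1024];       /* zero-initialised: .bss, also RAM                */
const int table[4] = {1, 2, 3, 4}; /* const data: usually .rodata (flash on an MCU)   */

int sum(void)
{
    int total = 0;                 /* local: a stack slot or simply a CPU register    */
    for (int i = 0; i < 4; i++)
        total += table[i];         /* each access may be served from the data cache   */
    buffer[0] = (uint8_t)total;    /* ordinary store: goes through the cache to RAM   */
    return total;                  /* the result is returned in a register            */
}
```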

Is a data segment always required in a program?

I'm in an assembly language course focusing on x86 Pentium processors, and am working on a Linux system. I understand that programs get loaded into memory and that you can perform operations directly within the registers, but I'm not sure whether you can avoid creating a data segment altogether.
A yes or no, followed by a brief explanation as to why, would be great.
It is not required. A data segment is simply a block of memory allocated for data, which can therefore be written to and read from. Code segments are read-only: if you try to write to a code segment, the hardware will generate a fault. However, assembly code can be given any address in memory, and if protected mode is disabled, the hardware won't complain.
As an example, the boot sector loads into a very restricted space on launch, and it is quite common (because space is so restricted) to place variables among the code bytes. Once I even wrote a boot sector that adjusted its own byte-code to accommodate differences in booting from different disks. So this is a case of code using code addresses as variables.
However, while you definitely can avoid creating a data segment, 99.99% of the time you do separate out a data segment.
You may also want to read up on protected mode to understand this better.

Accessing outside the memory allocated by the program. (Accessing other app's memory)

Is there a way to access (read or free) memory chunks that are outside the memory allocated to the program, without getting access-violation exceptions?
What I would actually like to understand, apart from this, is how a memory cleaner (system garbage collector) works. I've always wanted to write such a program. (The language isn't an issue.)
Thanks in advance :)
No.
Any modern operating system will prevent one process from accessing memory that belongs to another process.
In fact, if you understood virtual memory, you'd understand that this is impossible: each process has its own virtual address space.
The simple answer (unless I'm mistaken): no. Generally it's not a good idea, for two reasons. First, it causes a trust problem between your program and other programs (not to mention that we humans won't trust your application either). Second, if you were able to access another application's memory and make a change without that application knowing about it, you would cause it to crash (viruses also do this).
A garbage collector is called from a runtime. The runtime "owns" the memory space and allows other applications to "live" within that memory space; this is why the garbage collector can exist. You would have to create a runtime that the OS allocates memory to, have the runtime execute the application under its authority, and use the GC under its authority as well. You would need some instrumentation or an API that allows the application developer to "request" memory from your runtime (not the OS), and your runtime would have to not only respond to such a request but also keep track of the memory it has allocated to that application. You would probably need a framework (a set of DLLs) that makes these calls available to the application (the developer would use them to form the request inside their application).
You have to be sure that your garbage collector does not reclaim memory other than the memory used by the application being collected, as you may have more than one application running within your runtime at the same time.
Hope this helps.
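As a very small illustration of the "runtime owns the memory and keeps track of it" idea, here is a toy C allocator that records every block it hands out so a cleanup pass can free them all later. It is only a sketch of the bookkeeping (the names rt_alloc/rt_free_all are made up); a real collector would also have to work out which blocks are still reachable before freeing anything:
```c
#include <stdlib.h>

/* Every allocation is linked into a list owned by the "runtime". */
typedef struct block {
    struct block *next;
    /* user data follows this header */
} block_t;

static block_t *live_blocks = NULL;

void *rt_alloc(size_t size)
{
    block_t *b = malloc(sizeof(block_t) + size);
    if (!b)
        return NULL;
    b->next = live_blocks;      /* track it so the runtime can reclaim it later */
    live_blocks = b;
    return (void *)(b + 1);     /* hand the caller the payload, not the header  */
}

/* The "collector": frees everything the runtime ever handed out. */
void rt_free_all(void)
{
    while (live_blocks) {
        block_t *next = live_blocks->next;
        free(live_blocks);
        live_blocks = next;
    }
}
```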
Actually, the right answer is yes: there are programs that do it (and if they exist, it means it is possible).
You may need to write a kernel driver to accomplish this, but it is possible.
And here is another example: a debugger's attach command. That is one program interacting with another program's memory even though both started as different processes.
Of course, messing with another program's memory when you don't know what you're doing will probably make it crash.
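On Linux, that debugger-attach route looks roughly like the sketch below, using ptrace. It needs suitable permissions (same user with a permissive ptrace_scope setting, or root), and the PID and address on the command line are whatever target you choose:
```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);
    void *addr = (void *)strtoul(argv[2], NULL, 16);

    /* Attach: the kernel stops the target and lets us inspect it. */
    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("ptrace attach");
        return 1;
    }
    waitpid(pid, NULL, 0);                   /* wait until the target has stopped */

    errno = 0;
    long word = ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
    if (word == -1 && errno)
        perror("ptrace peek");
    else
        printf("word at %p in pid %d: 0x%lx\n", addr, (int)pid, word);

    ptrace(PTRACE_DETACH, pid, NULL, NULL);  /* let the target continue */
    return 0;
}
```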
