LittleFS file system - STM32H753BI cache issue - memory

When I enable the instruction cache, my LFS filesystem gets corrupted and files cannot be opened. If I disable the instruction cache, file operations complete successfully.
SCB_EnableICache();
Note: LFS does not use dynamic memory or DMA access.

If your code doesn't use dynamically generated code in RAM (like thunks) and you don't manipulate the PC in any way, the instruction cache is perfectly transparent, as far as I know. The performance boost probably causes some latent race conditions to surface.
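If a latent race is the culprit, one speculative thing to try (a sketch under assumptions, not a confirmed fix) is to synchronize the caches after every flash program operation in your LittleFS port. flash_write() here is a hypothetical stand-in for your actual HAL programming code:

    #include "lfs.h"          /* LittleFS */
    #include "stm32h7xx.h"    /* CMSIS: __DSB, __ISB, SCB_InvalidateICache */

    /* Hypothetical stand-in for your existing HAL flash-programming code. */
    extern int flash_write(lfs_block_t block, lfs_off_t off,
                           const void *buffer, lfs_size_t size);

    /* LittleFS prog callback that synchronizes caches after each write. */
    int lfs_port_prog(const struct lfs_config *c, lfs_block_t block,
                      lfs_off_t off, const void *buffer, lfs_size_t size)
    {
        (void)c;
        int err = flash_write(block, off, buffer, size);

        __DSB();                /* ensure the flash write has completed */
        __ISB();                /* flush the processor pipeline */
        SCB_InvalidateICache(); /* discard any stale instruction-cache lines */

        return err ? LFS_ERR_IO : 0;
    }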

Related

How does Bazel track files so quickly?

I can't find any information about how Bazel tracks files. The documentation doesn't mention whether it uses something like Facebook's Watchman.
It obviously takes some kind of hash and compares it, but how exactly does it do that? It knows immediately whether things have changed, and it couldn't possibly read all those files in such a short time.
Also, wouldn't watching that many files take up a lot of space in a monorepo like Google's? I know that is one of the problems with scaling Git, because "git status" becomes too slow unless some intelligent caching is used.
Bazel uses OS filesystem monitoring APIs, such as inotify on Linux and FSEvents on macOS.
Check out these classes:
https://github.com/bazelbuild/bazel/blob/c5d0b208f39353ae3696016c2df807e2b50848f4/src/main/java/com/google/devtools/build/lib/skyframe/DiffAwareness.java
https://github.com/bazelbuild/bazel/blob/1d2932ae332ca0c517570f559c6dc0bac430b263/src/main/java/com/google/devtools/build/lib/skyframe/LocalDiffAwareness.java
https://github.com/bazelbuild/bazel/blob/c5d0b208f39353ae3696016c2df807e2b50848f4/src/main/java/com/google/devtools/build/lib/skyframe/MacOSXFsEventsDiffAwareness.java
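To illustrate the kind of OS facility those classes wrap (this is not Bazel's actual code), here is a minimal inotify watcher in C that prints the names of files changed in a directory:

    /* Minimal inotify watcher (Linux only) - the same OS mechanism
     * LocalDiffAwareness builds on, not Bazel's implementation. */
    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <directory>\n", argv[0]);
            return 1;
        }

        int fd = inotify_init();
        if (fd < 0 || inotify_add_watch(fd, argv[1],
                          IN_MODIFY | IN_CREATE | IN_DELETE) < 0) {
            perror("inotify");
            return 1;
        }

        char buf[4096];
        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf); /* blocks until events */
            if (len <= 0)
                break;
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *)p;
                if (ev->len)                         /* a name is present */
                    printf("changed: %s\n", ev->name);
                p += sizeof *ev + ev->len;
            }
        }
        return 0;
    }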

What is the file size limit? (NodeMCU, ESPlorer)

I recently tried to host a little web interface from my ESP8266, but something kept failing until I realized that a bigger file (around 10 KB) was corrupt. Well, not really corrupt, but simply incomplete. No matter how I changed it, the file was always cut off after a certain number of characters.
My compiled NodeMCU firmware is about 649 KB in size, so there should easily be enough space. My board has at least 4 MB of storage (32m), so that should be plenty to store my Lua, HTML, and CSS files!
I used ESPlorer to upload the files, by the way.
So what exactly is the limit here?
Is it a memory issue? A flash storage issue? An issue related to ESPlorer?
Is it somehow possible to get bigger files onto my board?
edit:
I should mention that uploading the init.lua file always worked, even when it was around 10 KB. Maybe the upload mechanism is different for the init.lua file?
Alright, here's the long form of my comment above. My best guess is (was) that this is an issue with ESPlorer. Whenever I look at its source code, I'm actually surprised how well it usually works.
At https://frightanic.com/iot/tools-ides-nodemcu/ I compiled a list of tools and IDEs for NodeMCU. I suggest you pick a different uploader and try again. NodeMCU-Tool, for example, is solid, and it's definitely a lot better maintained than ESPlorer.
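For example, uploading a file with NodeMCU-Tool looks roughly like this (the serial port name is assumed; check nodemcu-tool --help for the exact options of your version):

    nodemcu-tool --port=/dev/ttyUSB0 upload index.html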

ShFileOperation disk-size abort option

I am using ShFileOperation to copy files to an SD card and it is working fine, almost!
I have some large files, 5 GB and greater. When the SD card is empty, this all progresses fine. But when I am updating the files on the SD card, ShFileOperation checks the remaining disk space, and if the file is larger than the free space it shows a "No room" dialog and aborts.
The problem arises when the file will overwrite an existing one and is probably only 3 MB or 4 MB larger than it. ShFileOperation does not check whether the destination file exists before checking for disk space.
I have checked all the available flags on the MSDN site, and the only one I can find is FOF_NOERRORUI, but that is a little too brutal and totalitarian for me: killing off all error messages just to overcome one problem.
Is there any way I can get ShFileOperation to not do that disk-space check, but still declare serious errors if they occur?
Thanks.
Is there any way I can get ShFileOperation to not do that disk-space check, but still declare serious errors if they occur?
You can use FOF_NOERRORUI to suppress the error UI, which is indeed exactly what you want. But then you need to provide the UI for any errors yourself, since you asked the system not to. That flag essentially means, "let me take charge of reporting errors."
In this situation, I would suggest using CopyFileEx() for each file, utilizing its progress callback to update your own progress dialog as needed.
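Here is a minimal sketch of that approach, assuming a Unicode Win32 build; UpdateMyProgressDialog() would be your own UI and is only hinted at in a comment:

    #include <windows.h>
    #include <stdio.h>

    /* Progress callback: called periodically while the copy runs. */
    static DWORD CALLBACK CopyProgress(LARGE_INTEGER total, LARGE_INTEGER copied,
                                       LARGE_INTEGER streamSize,
                                       LARGE_INTEGER streamCopied,
                                       DWORD stream, DWORD reason,
                                       HANDLE hSrc, HANDLE hDst, LPVOID data)
    {
        /* e.g. UpdateMyProgressDialog(copied.QuadPart, total.QuadPart); */
        wprintf(L"copied %lld of %lld bytes\n",
                (long long)copied.QuadPart, (long long)total.QuadPart);
        return PROGRESS_CONTINUE;
    }

    /* Copy one file with no pre-flight free-space check. Overwriting an
     * existing file only needs room for the size difference, and if the
     * disk really is full, CopyFileEx fails with an ordinary error code
     * that you can report however you like. */
    BOOL CopyWithProgress(const wchar_t *src, const wchar_t *dst)
    {
        if (!CopyFileExW(src, dst, CopyProgress, NULL, NULL, 0)) {
            fwprintf(stderr, L"copy failed: error %lu\n", GetLastError());
            return FALSE;
        }
        return TRUE;
    }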

Optimizing command line GIMP

I am running a Script-Fu macro using GIMP from the command line. However, it is quite slow to start up and run - about 20-25 seconds. I think a lot of this time is spent on startup - loading all the plugins and such. What are some ways to optimize GIMP on the command line? Is there any way to keep it always running?
Some promising options from the GIMP docs (some of which you may already be using):
--no-interface: Run without a user interface.
--no-data: Do not load patterns, gradients, palettes, or brushes. Often useful in non-interactive situations where start-up time is to be minimized.
--no-fonts: Do not load any fonts. This is useful to load GIMP faster for scripts that do not use fonts, or to find problems related to malformed fonts that hang GIMP.
--no-splash: Do not show the splash screen while starting.
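Combined with -b for batch commands, an invocation might look like this (my-script-fu-macro is a placeholder for your own script):

    gimp --no-interface --no-data --no-fonts --no-splash \
         -b '(my-script-fu-macro "input.png")' -b '(gimp-quit 0)'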
The GIMP FAQ:
The GIMP takes too long to load - how can I speed it up?
The main things are to make sure you are running at least version 1.0, and make sure you compiled with optimization on, debugging turned off, and the shared memory and X shared memory options turned on.
Or, buy a faster system with more memory. 8^)
This question on SuperUser addresses slow GIMP startup time in general and recommends:
Rebuild the font cache file by deleting C:\Documents and Settings\<username>\.fonts-cache1 and then opening GIMP.
Check for slow-loading plugins by starting up with --verbose and seeing where it hangs. Then remove problematic plugins by renaming them in C:\Program Files\GIMP-2.0\lib\gimp\<version>\plug-ins. Alternatively, remove all plugins by renaming the whole plugins folder.
Not so much a solution as a different possibility for the future, but have you considered not using GIMP?
GIMP is first and foremost a GUI-based app. If you're doing a lot of repetitive image manipulation from the command line, you might be better off with a tool like ImageMagick that's designed expressly for such use. I don't know how complex your script-fu scripts are, or how easily they could be translated to ImageMagick's (admittedly complex) syntax, but you definitely wouldn't have problems with long startup time.
You could use the "Script-Fu Server".
Image window > Main menu > Filters > Script-Fu > Start Server.
A popup will ask for the port to run it on. The same popup also provides help describing the protocol used by the server.
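That keeps one GIMP instance resident, so each job skips the 20-25 second startup. As an illustration only, here is a rough C client for the wire protocol as I understand it from that help text (GIMP 2.x: a request is 'G', a 2-byte big-endian length, then the script; the reply is 'G', an error byte, a 2-byte length, then the output; 10008 should be the default port - treat these details as assumptions and verify them against the popup's help):

    /* Rough Script-Fu Server client sketch (POSIX sockets). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *script = "(gimp-version)"; /* any Script-Fu expression */
        size_t len = strlen(script);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(10008);          /* assumed default port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        if (s < 0 || connect(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect");
            return 1;
        }

        /* Request: 'G', 2-byte big-endian script length, script text. */
        unsigned char hdr[3] = { 'G', (unsigned char)(len >> 8),
                                      (unsigned char)(len & 0xff) };
        write(s, hdr, sizeof hdr);
        write(s, script, len);

        /* Reply: 'G', error byte, 2-byte big-endian length, output text. */
        unsigned char rhdr[4];
        if (read(s, rhdr, sizeof rhdr) == sizeof rhdr) {
            size_t rlen = ((size_t)rhdr[2] << 8) | rhdr[3];
            char reply[4096];
            ssize_t got = read(s, reply,
                               rlen < sizeof reply - 1 ? rlen : sizeof reply - 1);
            if (got > 0) {
                reply[got] = '\0';
                printf("error=%d reply=%s\n", rhdr[1], reply);
            }
        }
        close(s);
        return 0;
    }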

dump pe file from memory to disk

I want to dump a process image to disk and then execute it.
I listed the process modules and used ReadProcessMemory to read the memory range of the EXE, but when I try to execute it, it fails. How can I solve this?
Thanks
You can't.
When you load a PE into memory (I assume you're using MapAndLoad from ImageHlp.pas), it loads the modules and the data into memory, but it doesn't go through and fix up all the pointers the way the standard Windows loader does.
The pointers in the app are all going to be relative addresses that don't actually point to what they're supposed to point to.
If you know enough about how RVAs and mappings work, you can analyze the code, but you can't actually execute it.
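To make that concrete, here is a sketch of the header fix-up step that dedicated dumpers perform (assuming the dump's bitness matches your build): in the file, each section starts at PointerToRawData, but in memory it starts at VirtualAddress, so the section headers of a raw ReadProcessMemory dump have to be rewritten before the file even parses sensibly. Even then, absolute addresses already relocated for the old image base keep it from running as-is:

    #include <windows.h>

    /* Sketch, not a complete unmapper: point each section's raw-file
     * offset at where its data actually sits in the dumped buffer. */
    void unmap_sections(BYTE *dump /* buffer filled by ReadProcessMemory */)
    {
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)dump;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(dump + dos->e_lfanew);
        IMAGE_SECTION_HEADER *sec = IMAGE_FIRST_SECTION(nt);

        for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++) {
            /* In the dump, section data lives at its virtual address. */
            sec[i].PointerToRawData = sec[i].VirtualAddress;
            sec[i].SizeOfRawData    = sec[i].Misc.VirtualSize;
        }
    }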
