How to read and write the Raspberry Pi's main memory

I want to access the Raspberry Pi's main memory. I have learned the basics of the device from the Baking Pi website. So far I have been working with peripheral registers, but now I want to access the device's main memory. The problem is that I do not know the base address of main memory from which I can start reading and writing.
I have searched a lot, but Google always points me towards GPIO and other peripherals.
Could someone please provide the base address, or a link explaining how to continue? Thanks.

Did you read the BCM2835 datasheet?
http://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
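On the BCM2835 there is no special base address for RAM: the ARM sees SDRAM starting at physical address 0x00000000 (the top portion is reserved for the GPU according to the memory split, and the GPU bootloader loads your kernel image at 0x8000). As a minimal bare-metal sketch in C++ (SCRATCH_BASE and ram_demo are illustrative names of mine, and 0x00100000 is just an arbitrary address assumed to sit above your image and below the ARM/GPU split):

```cpp
// Minimal bare-metal sketch: with the MMU off, a pointer to a physical address
// is just a pointer, so main memory is read and written like any other variable.
#include <cstdint>

constexpr std::uintptr_t SCRATCH_BASE = 0x00100000; // 1 MiB: assumed-free RAM

volatile std::uint32_t* const scratch =
    reinterpret_cast<volatile std::uint32_t*>(SCRATCH_BASE);

void ram_demo() {
    scratch[0] = 0xDEADBEEF;              // write a word to main memory
    std::uint32_t readback = scratch[0];  // read it back
    (void)readback;                       // e.g. light the OK LED if it matches
}
```

(If you were running under Linux instead of bare metal, you would mmap /dev/mem rather than dereference physical addresses directly.)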

Related

STM32 Memory Dump and Extracting Secret Key

I am quite new to embedded development and started with an STM32F429 board to improve my skills.
I have just developed a basic Caesar-cipher encryption application for my board. It works well, and I defined the secret key as "3". Now I would like to extract this super secret(!) key from my device.
How can I do it? Should I dump the memory or the firmware of my device, and how?
Can you suggest any software for this process? (Not the ST Utility or other ST software, please, because I would like to apply the experience gained to other devices as well.)
Thanks!
I take it the value you're looking for is hardcoded. In that case it resides in the internal flash. So yes, a memory dump will be necessary.
I will go the long way and assume that you know very little about how it works, so if you know some of this stuff, well, good for you. I will try to give a few pointers.
Specifically about STM32:
You have the option to boot the microcontroller from the so-called system memory, which is read-only memory preprogrammed at the factory with a bootloader. You can talk to that running bootloader via UART (the most common way; it comes with ST-Link, but any cheap USB-UART bridge also works), or via some other protocol. You can ask that bootloader to read its flash out to you, among other things. This is covered in AN2606.pdf, which has some useful links in it, such as:
the names of the documents where you can find the specific bootloader commands for each interface. Of course, you only care about the interfaces that the bootloader of your specific MCU (F429) supports, which are listed in the same AN2606, page 172 (for bootloader version 0.7; there is also 0.9 for those MCUs, and I have no idea how to tell which one you have, so... try? The UART configuration seems identical anyway):
So what exactly needs to be done? Flip the state of the MCU's BOOT0 pin (and keep it there) and reset the MCU (power cycle or reset pin, both are fine). This boots the MCU into the bootloader instead of the program in flash. You can read about it in the STM32F429 reference manual, page 69, which describes the states of the BOOT0 and BOOT1 pins at boot. As for which pin is BOOT0: if it isn't marked on your board, you'll have to consult the F429 datasheet, page 69 (I swear, it's a coincidence). Depending on your specific IC, it will be one pin or another.
The bootloader will activate the MCU's peripherals as per the docs above and wait on its UART and other pins for commands. The commands are listed in the documents referenced above. Let's take a look at AN3155, which describes the USART bootloader protocol:
The commands are all in that document; the table of contents in the PDF really helps to find things quickly. If you need specific details (and you will need specific information about specific commands), it's all in there too: how many bytes are in a command, how many bytes at a time you can read from flash, etc. Basically, you can either write your own program that does this (you could even program another microcontroller to drive the victim's bootloader), or use any other software that knows what commands to send to the bootloader. It can be the ST utility or any other program; they all implement the very same command set, so it doesn't actually matter much. I couldn't find many programs that do this; the only one that stood out was stm32flash, though I've never used it myself. I'm fine with the ST tools, since I know what they do (I think).
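To make the "write your own program" route concrete, here is a rough C++ sketch of the AN3155 USART framing for the Read Memory (0x11) command. uart_write_byte/uart_read_byte are placeholder helpers of mine for whatever serial interface you use (they are not from AN3155), and timeouts, NACK handling and read-protection checks are left out.

```cpp
// Rough sketch of the AN3155 USART bootloader "Read Memory" (0x11) transaction.
// uart_write_byte()/uart_read_byte() are placeholders for your own serial I/O;
// timeouts and NACK handling are omitted for brevity.
#include <cstdint>
#include <cstddef>

extern void         uart_write_byte(std::uint8_t b);  // hypothetical helpers
extern std::uint8_t uart_read_byte();

constexpr std::uint8_t ACK = 0x79;

static bool wait_ack() { return uart_read_byte() == ACK; }

// Read up to 256 bytes starting at 'addr' (e.g. 0x08000000, the start of flash).
bool bl_read_memory(std::uint32_t addr, std::uint8_t* out, std::size_t n) {
    if (n == 0 || n > 256) return false;

    uart_write_byte(0x11); uart_write_byte(0xEE);          // command + complement
    if (!wait_ack()) return false;

    const std::uint8_t a[4] = { std::uint8_t(addr >> 24), std::uint8_t(addr >> 16),
                                std::uint8_t(addr >> 8),  std::uint8_t(addr) };
    for (std::uint8_t b : a) uart_write_byte(b);
    uart_write_byte(a[0] ^ a[1] ^ a[2] ^ a[3]);            // XOR checksum of the address
    if (!wait_ack()) return false;

    const std::uint8_t count = std::uint8_t(n - 1);
    uart_write_byte(count); uart_write_byte(count ^ 0xFF); // (N-1) + complement
    if (!wait_ack()) return false;

    for (std::size_t i = 0; i < n; ++i) out[i] = uart_read_byte();
    return true;
}
```

You would sync once by sending 0x7F (expecting ACK 0x79 back) and then loop bl_read_memory over the flash range, 256 bytes at a time, to build the full dump. Note this only works if read-out protection is not enabled on the chip.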
Oh yeah, back to getting the secret value out; I almost forgot about that. You open the dump in a hex viewer/editor and scroll around looking for interesting combinations of values. Yeah, that's kind of what it looks like. You can also run the dump through a disassembler and scroll through the disassembled code to see if any numeric values stand out. You know, some random number like 0xD35B581 hardcoded in the middle of an otherwise tidy program could mean something, like a serial number or a secret. Unfortunately, I'm reaching the boundaries of my competence here, so I won't go any further into what one can do with the dump.

Xilinx - Vivado Project: VGA IO not working

I'm new to Xilinx Vivado. At the moment we just need to look at how Vivado and the SDK work using the Zybo Zynq-7000 board. I searched the internet and found a project with VGA IO. The mysterious thing is that I actually got it to work when I was at school, but due to the current situation we are not able to get much help, so I am now alone with it at home.
This is the project.
Firstly, I'd like to ask: what does the console output below tell me?
I generated the bitstream, then exported the hardware (including the bitstream), and lastly launched the SDK. In the SDK I programmed the FPGA and then ran the project via "Launch as Hardware (System debugger and GDB)".
That's how I did it:
Image1
And the configurations:
Image2
And the output I am getting through the console is:
Image3
My main problem is that even though I have connected all the required cables to the Zybo board (a USB cable from my laptop to the FPGA and a VGA cable from the FPGA to my monitor), I am not getting any output on the monitor. Do I have to enable something so that the VGA output from the FPGA to the monitor works?
This ultimately boils down to standard debugging. I can only give a couple suggestions.
First, confirm that your design works in simulation; check that your outputs, especially your sync signals, behave as expected (the standard VGA timings are listed below for reference).
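For reference when checking the sync signals, these are the generic VESA 640x480 @ 60 Hz timings that most hobby VGA cores are built around; they are standard numbers, not values taken from your specific project.

```cpp
// Standard 640x480 @ 60 Hz VGA timing (generic VESA numbers, not project-specific).
// Handy as a sanity check against the counters and sync pulses seen in simulation.
#include <cstdio>

int main() {
    constexpr double pixel_clock_hz = 25'175'000.0;  // ~25.175 MHz pixel clock
    constexpr int h_visible = 640, h_front = 16, h_sync = 96, h_back = 48;
    constexpr int v_visible = 480, v_front = 10, v_sync = 2,  v_back = 33;

    constexpr int h_total = h_visible + h_front + h_sync + h_back;  // 800 pixels/line
    constexpr int v_total = v_visible + v_front + v_sync + v_back;  // 525 lines/frame

    const double line_rate  = pixel_clock_hz / h_total;   // ~31.47 kHz
    const double frame_rate = line_rate / v_total;         // ~59.94 Hz

    std::printf("line rate %.2f kHz, frame rate %.2f Hz\n",
                line_rate / 1000.0, frame_rate);
    // Both hsync and vsync are active-low in this mode.
}
```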
Next confirm that your IO constraints are set up correctly and that you are using the right IO pins on the board.
If those all seem correct, ideally you'd have access to a signal analyzer, but that sounds unlikely under the current circumstances. As an alternative, you can look at using an ILA (integrated logic analyzer) such as ChipScope to probe the signals and monitor them in hardware.
Last, and obviously, make sure all of the cables are connected correctly.
Good luck with the design.

ESP32: Best way to store data frequently?

I'm developing a C++ application on the ESP32-DevKitC board in which I read acceleration from an accelerometer. The goal of the application is to store the accelerometer data until storage is full, then send all the data over WiFi and start again. The micro also goes into deep-sleep mode whenever possible.
I'm currently using the ESP32 NVS library, which is very well documented and pretty easy to use. The downside is that the library uses flash memory, so a lot of writes will end up degrading it.
I know that Espressif also offers other storage libraries (FAT, SPIFFS, etc.) but, as far as I know (correct me if I'm wrong), they all use the flash as well.
Is there any other way of doing what I want without using the flash storage?
Clarifications
Using flash memory is not the problem in itself; degrading it is.
The storage has to be non-volatile, or at least must not be erased when the micro goes into deep-sleep mode.
I'm not using any Arduino library.
That's a great question that I wish more people would ask.
ESP32s use NOR flash storage, which is usually rated for between 10,000 and 100,000 write cycles (100,000 seems to be the standard these days). Flash can't write single bytes; instead it writes a "page" of bytes, which I believe is 256 bytes. So each 256-byte page is rated for at least 100,000 cycles. When a device is rated for 100,000 cycles it's likely to be usable for at least 10 times that, but the manufacturer is not going to make any promises beyond the 100,000.
SPIFFS (and LittleFS, now used on the ESP8266 Arduino Core) perform "wear leveling", to minimize the number of times a particular page is written. So if you modify the same section of a file repeatedly, it will automatically be written to different pages of flash. FAT is not designed to work well with flash storage; I would avoid it.
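If you do end up on SPIFFS with ESP-IDF, the usual pattern is to register the filesystem once at startup and then use plain stdio calls; a rough sketch (the "/spiffs" base path and "accel.bin" file name are placeholders of mine, and error handling is minimal):

```cpp
// Rough ESP-IDF sketch: mount SPIFFS once, then append samples with stdio.
// "/spiffs" and "accel.bin" are placeholder names; check return codes in real code.
#include "esp_spiffs.h"
#include <cstdio>

static bool mount_spiffs() {
    esp_vfs_spiffs_conf_t conf = {};
    conf.base_path              = "/spiffs";
    conf.partition_label        = nullptr;   // use the default SPIFFS partition
    conf.max_files              = 4;
    conf.format_if_mount_failed = true;
    return esp_vfs_spiffs_register(&conf) == ESP_OK;
}

static void append_sample(float ax, float ay, float az) {
    FILE* f = std::fopen("/spiffs/accel.bin", "ab");  // append-only writes land on fresh pages
    if (!f) return;
    float v[3] = { ax, ay, az };
    std::fwrite(v, sizeof v, 1, f);
    std::fclose(f);
}
```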
Whether SPIFFS with wear leveling will be adequate for your needs depends on the lifetime you need from the device versus how much data you'll be writing and how frequently; a rough estimate is sketched below.
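As a back-of-the-envelope example (illustrative numbers of mine, not measurements): with ideal wear leveling over a 1 MB SPIFFS partition of 256-byte pages rated at 100,000 cycles, you get roughly 4096 x 100,000, about 4 x 10^8 page writes to spread around.

```cpp
// Back-of-the-envelope flash endurance estimate (illustrative numbers only).
#include <cstdio>

int main() {
    constexpr double partition_bytes = 1.0 * 1024 * 1024;  // assume a 1 MB SPIFFS partition
    constexpr double page_bytes      = 256;
    constexpr double cycles_per_page = 100000;              // rated endurance per page
    constexpr double total_writes    = (partition_bytes / page_bytes) * cycles_per_page;

    const double rates[] = { 1.0, 10.0, 100.0 };             // page writes per second
    for (double writes_per_sec : rates) {
        const double years = total_writes / writes_per_sec / (3600.0 * 24 * 365);
        std::printf("%6.0f page writes/s -> ~%.1f years with ideal wear leveling\n",
                    writes_per_sec, years);
    }
}
```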
NVS may perform some level of wear leveling, though to what extent I'm unsure. Here, in a forum post with two Espressif employees, they both confirm that NVS does do some form of wear leveling. NVS is best used to persist things like configuration information that doesn't change frequently; it's not a great choice for storing information that's updated often.
You mentioned that the data just needs to survive deep sleep. If that's the case, your best option (if it's large enough) is the ESP32's RTC static RAM. This chunk of memory survives restarts and deep-sleep mode, but loses its state if power is interrupted. It's real RAM, so you won't wear it out by writing to it frequently, and it doesn't cost a lot of energy to write to. The catch is that there's only 8 KB of it.
If the 8KB of RTC RAM isn't enough and you're writing too much data too frequently to trust that SPIFFS will be okay, your best bet would be an SD card. The ESP32 can talk to an SD card adapter. SD cards use NAND flash, which has a much greater lifespan than NOR and can be safely overwritten many more times (which is why these kinds of cards are usable for filesystems in devices like Raspberry Pis).
Writing to flash also takes much more energy than writing to regular RAM. If your device is going to be battery powered, the RTC RAM is also a better choice than SPIFFS or an SD card from a power savings perspective.
Finally, if you use the RTC RAM I'd recommend starting to send it over WiFi before it's full, as bringing up WiFi and transmitting the data could easily take long enough that you run out of space for new samples. Using it as a ring buffer and starting the transmit process when you hit a high-water mark, rather than when the buffer is full, would probably be your best bet; a sketch of that idea follows.
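To make the RTC-RAM ring buffer concrete, here is a minimal sketch; Sample, kCapacity, kHighWater and the function names are illustrative choices of mine, sized to fit comfortably inside the ~8 KB of RTC slow memory mentioned above.

```cpp
// Minimal sketch: accelerometer samples kept in RTC slow RAM so they survive deep sleep.
#include "esp_attr.h"   // RTC_DATA_ATTR
#include <cstdint>
#include <cstddef>

struct Sample { std::int16_t x, y, z; };                 // 6 bytes per reading

constexpr std::size_t kCapacity  = 1024;                 // ~6 KB of the ~8 KB RTC RAM
constexpr std::size_t kHighWater = (kCapacity * 3) / 4;  // start sending at 75% full

RTC_DATA_ATTR static Sample      ring[kCapacity];
RTC_DATA_ATTR static std::size_t head  = 0;              // next write index
RTC_DATA_ATTR static std::size_t count = 0;              // valid samples stored

void push_sample(const Sample& s) {
    ring[head] = s;
    head = (head + 1) % kCapacity;
    if (count < kCapacity) ++count;                      // oldest sample overwritten when full
}

bool should_start_wifi_flush() { return count >= kHighWater; }
```

After a successful upload you would reset head and count to zero and go back to sleep.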
I know I'm late with this answer, but you can buy ESP32 modules with external RAM, even 4-8 MB of it. External RAM is really fast (at least much faster than the flash; it uses an SPI interface to communicate) and you can fit a lot of sensor readings in there.
I'm using an ESP32-WROVER-E module with 8 MB of external RAM (4 MB is usable with normal function calls) and 16 MB of flash.
Here is a link to the module I'm using on TME's site.

Detecting bad sectors using Delphi or freepascal

Thanks to help from David Heffernan, I have a program written in Free Pascal (though a Delphi solution to my question would suffice) that reads a physical disk sector by sector. It does so using the Windows API CreateFileW function to obtain the disk handle, then FileRead, FileSeek, etc. to navigate and read. If all the sectors are OK, it works fine. However, if the disk has bad sectors, I need to treat them differently.
My question is: are there any procedures or libraries that can be used, while reading these sectors, to determine whether they are bad? If not, how might I go about it? I gather it is the disk controller that knows which sectors are bad and which are not, so I don't think my program can actually access a bad sector; so how can I detect which ones are bad and act accordingly? Do I need to query SMART, and if so, how?
I have searched this site (I only found this C-related post, which concerns a program, not code) and Googled it, and no obvious solutions came to my attention.
Generally speaking, you can't access bad sectors at all (they have already been remapped, so they are outside the LBA range). What you can access are pending sectors; attempts to read them will always cause a read error. SMART will tell you nothing but the number of bad/pending sectors. So you should probably continue using your chosen API and interpret persistent read errors as a diagnostic for "bad" sectors; just make sure they aren't caused by an access-sharing violation (see the sketch below).
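To illustrate the "persistent read error vs. sharing violation" distinction in Win32 terms (shown in C++ here, but every call maps directly to Free Pascal): open the physical drive unbuffered, read one sector at a time, and inspect GetLastError() when ReadFile fails. The drive number, sector count and the error-code set below are illustrative, not exhaustive.

```cpp
// C++/Win32 sketch of per-sector read checking; the same calls map 1:1 to Free Pascal.
// Requires administrator rights; error-code interpretation is indicative only.
#include <windows.h>
#include <cstdio>

int main() {
    const DWORD sectorSize = 512;  // query IOCTL_DISK_GET_DRIVE_GEOMETRY in real code
    HANDLE h = CreateFileW(L"\\\\.\\PhysicalDrive1", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::printf("open failed: %lu\n", GetLastError()); return 1; }

    // FILE_FLAG_NO_BUFFERING needs a sector-aligned buffer; VirtualAlloc is page-aligned.
    void* buf = VirtualAlloc(nullptr, sectorSize, MEM_COMMIT, PAGE_READWRITE);

    for (LONGLONG sector = 0; sector < 1000; ++sector) {   // first 1000 sectors as a demo
        LARGE_INTEGER pos; pos.QuadPart = sector * sectorSize;
        SetFilePointerEx(h, pos, nullptr, FILE_BEGIN);

        DWORD got = 0;
        if (!ReadFile(h, buf, sectorSize, &got, nullptr) || got != sectorSize) {
            DWORD err = GetLastError();
            if (err == ERROR_SHARING_VIOLATION || err == ERROR_LOCK_VIOLATION)
                std::printf("sector %lld: access conflict, not a media error\n", sector);
            else   // e.g. ERROR_CRC, ERROR_READ_FAULT, ERROR_SECTOR_NOT_FOUND
                std::printf("sector %lld: read error %lu, treat as bad/pending\n", sector, err);
        }
    }
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
}
```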
If you want to obtain the P-list or G-list somehow, that is only possible (for PATA/SATA, not SCSI) in terminal mode, which requires a connection to the HDD's service port via a USB-to-COM adapter and is vendor- and product-specific, if it is possible at all.
Sectors and their hardware status are not things that normal user-level code needs to deal with, so there is no easy copy/paste API available for this purpose.
Also, in general, the sector concept is abstracted away at multiple levels. For one example see Wikipedia: logical disk address translation. Physical sector status is a very low-level concept, and some hardware vendors don't expose it through a public API at all. Bad (or suspicious) sectors are often detected by the hardware itself and automatically remapped to other places. So, in general, the bad disk-sector concept does not exist at the user level.
MSDN Logging Guidelines
...Bad sectors. If a disk driver encounters a bad sector, it may be able to read from or write to the sector after retrying the operation, but the sector will go bad eventually. If the disk driver can proceed, it should log a Warning event; otherwise, it should log an Error event. If a file system driver finds a large number of bad sectors and fixes them, logging Warning events might help an administrator determine that the disk may be about to fail...
If you really need to work with these low-level concepts, then first drop Pascal or Delphi as a requirement.
Learn how to use the Windows API, and once you know it, bind to the API in your language of choice (you can map any Win32 user-level API function to Free Pascal easily).
For understanding how user-level code sees the disk abstraction start reading documentation at MSDN → Dev Center - Desktop → Device Management Reference → Device Management Functions → DeviceIOControl function
For understanding how the kernel-level code sees the hardware and how does it communicate with user-level code start reading documentation at MSDN → Dev Center - Hardware → Develop → Drivers → Concepts for all driver developers
For example of reading S.M.A.R.T. disk information see WinSim Inc. DISKID32 source code function ReadPhysicalDriveInNTUsingSmart() in diskid32.cpp
In my opinion you are going to swim in dark and deep waters without a flashlight or a swim ring, and you should think twice about what you (or your users) really need and want, and perhaps improve the question to get a reasonably sized, on-topic answer.

How to profile a dart app?

I'm trying the demo of start, which is a pretty simple web site built on Dart.
When I run it, the initial memory usage is 10 MB, but when I visit the home page and refresh it again and again, the memory grows quickly until it reaches 78 MB and never goes back down.
I want to find out what is using the memory and whether there is a memory leak, but I don't know how to do it. Is there any tool that can help me profile a Dart app?
It has already been pointed out in the comments that there are ways to get a CPU profile from the VM on Linux (https://code.google.com/p/dart/wiki/Profiling).
As far as I understand, what you are really looking for is a heap or memory profile. While it is possible to print an object histogram when the program terminates (see below), we do not have any convenient way to get the object histogram while your server is running. We do hope to be able to add this capability over the next months.
To print the object histogram when the Dart script exits, pass the flag --print_object_histogram to the Dart VM. This will print the averages of the live objects at the end of each major GC over the life of the program. This can be fine for a quick overview, but it is not ideal for tracking down and identifying real problems.
