Bootloader Erased from MRAM after Running

I am trying to burn the U-Boot bootloader into MRAM on my board so that it can boot on a restart without having to use a serial debugger. I am loading the file using GRMON with the -wprot flag to temporarily enable flash writes to MRAM. The bootloader seems to load into MRAM correctly, and when I run "go 0" the U-Boot prompts show up correctly in the serial output. After loading U-Boot, there is data stored at 0x00000000 (81882fe0), but when I stop execution and recheck the memory, it appears to have been erased. MRAM writes should stick and permanently change the contents of MRAM, so I'm not sure why the bootloader is being erased.
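For reference, the GRMON sequence is roughly the following (the image file name and debug-link option are illustrative, not the exact invocation):

grmon -ftdi -wprot               # start GRMON; -wprot temporarily enables MRAM writes
grmon3> load u-boot.bin 0x0      # load the bootloader image at address 0
grmon3> verify u-boot.bin 0x0    # verify the image against what is actually in MRAM
grmon3> mem 0x0                  # first word reads back correctly (81882fe0)
grmon3> go 0                     # U-Boot banner and prompt appear on the serial port
grmon3> mem 0x0                  # after stopping execution, the same region reads back erased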

Related

What causes a large exe to load slowly (65,536 bytes at a time, according to Procmon)?

We are running Docker on a VMware host running Windows 10.
Based on the first comment, I wonder if the issue isn't specifically the read portion, but rather something happening at the end of the load that prevents this exe from being cached in memory.

Q: Save Open Text Exceed window sizes and positions?

How do you save your last window sizes and positions when using Exceed? I'm using it to run a SAS environment, but every time I boot it up, the windows always go back to their default sizes and positions :(
I found this useful user-written paper on using the Display Manager.
The size and position of each of these windows can be adjusted using
standard mouse/window techniques. Once they have been adjusted the way
you want them, use the WSAVE ALL command in the command box to save
these settings for your next SAS session.
Also watch out for issues with access to your SASUSER library, which can occur when you are running multiple SAS jobs at the same time. If you run with the -RSASUSER option, SASUSER is opened read-only, so concurrent jobs won't contend for it; but then you will not be able to run the WSAVE command. So if you want to change the window locations, do it in a session where you do have write access to SASUSER, as in the sketch below.
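In other words, the trade-off looks like this (illustrative invocations; sas stands for however you launch your SAS session):

sas -rsasuser    # SASUSER opened read-only: safe for concurrent jobs, but WSAVE is disabled
sas              # SASUSER writable: WSAVE ALL will persist your window layout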

Love2d console on PyCharm only writes to console after closing

I'm using PyCharm Community Edition to create a Love2D application. I've created a .bat file that runs it with lovec.exe, which opens a console as the app runs, and then created an external tool to run the .bat file through a keyboard shortcut. When I use it, the console opens within PyCharm but doesn't write anything; only after I close the app does everything that was meant to have been written appear. When I run the .bat file outside of PyCharm, it functions perfectly.
I would like to know if there's an obvious fix for this, or failing that, how to run the console outside of PyCharm through an external tool.
This issue is due to the way Lua delays writing output, known as "buffering." To change it, put the following command at the top of your file:
io.stdout:setvbuf( 'no' ) -- Switches buffering for stdout to be off
Read more in Lua's manual:
file:setvbuf (mode [, size])
Sets the buffering mode for an output file. There are three available modes:
"no": no buffering; the result of any output operation appears immediately.
"full": full buffering; output operation is performed only when the buffer is full or when you explicitly flush the file (see io.flush).
"line": line buffering; output is buffered until a newline is output or there is any input from some special files (such as a terminal device).
For the last two cases, size specifies the size of the buffer, in bytes. The default is an appropriate size.
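As a usage sketch, here is a minimal main.lua (the once-per-second tick is just an illustration) showing where the call goes and what to expect when stdout is captured by PyCharm:

io.stdout:setvbuf('no')  -- must run before any print; turns stdout buffering off

function love.load()
    print('love.load: visible immediately, not only after the app closes')
end

local elapsed = 0
function love.update(dt)
    -- with buffering off, each line flushes to the console right away
    elapsed = elapsed + dt
    if elapsed >= 1 then
        elapsed = elapsed - 1
        print('tick')
    end
end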

Hadoop Namenode Metadata - fsimage and edit logs

I understand that the fsimage is loaded into memory on startup and that any further transactions are added to the edit log rather than to the fsimage, for performance reasons.
The fsimage in memory gets refreshed when the namenode is restarted. For efficiency, the secondary namenode periodically performs a checkpoint to update the fsimage so that namenode recovery is faster. All of this is fine.
But one point I fail to understand is this:
Let's say a file already exists and the info about this file is in the fsimage in memory.
Now I move this file to a different location, which is recorded in the edit log.
When I then try to list the old file path, it complains that it does not exist.
Does this mean the namenode consults the edit log as well, which would contradict the purpose of keeping the fsimage in memory? Or how does it know that the file location has changed?
The answer is: by looking at the information in the edit logs. The same question applies to the use case of writing a new file to HDFS. While your namenode is running, if you remove the fsimage file and then try to read an HDFS file, the read still succeeds.
Removing the fsimage file from a running namenode will not cause issues with read/write operations. Only when the namenode is restarted will there be errors stating that the image file is not found.
Let me try to give some more explanation to help you out.
Hadoop looks at the fsimage file only on startup; if it is not there, the namenode does not come up and logs a message asking for the namenode to be formatted.
The hadoop namenode -format command creates the fsimage file (if edit logs are present). After namenode startup, file metadata is fetched from the edit logs (and if the information is not found in the edit logs, the fsimage file is searched). So the fsimage just works as a checkpoint where information was last saved. This is also one of the reasons the secondary namenode keeps syncing from the edit logs (every hour or every 1 million transactions), so that on startup not much needs to be replayed since the last checkpoint.
If you turn safe mode on (command: hdfs dfsadmin -safemode enter) and then use saveNamespace (command: hdfs dfsadmin -saveNamespace), it will produce the log messages shown below:
2014-07-05 15:03:13,195 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Saving image file /data/hadoop-namenode-data-temp/current/fsimage.ckpt_0000000000000000169 using no compression
2014-07-05 15:03:13,205 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /data/hadoop-namenode-data-temp/current/fsimage.ckpt_0000000000000000169 of size 288 bytes saved in 0 seconds.
2014-07-05 15:03:13,213 INFO org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
2014-07-05 15:03:13,237 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 170
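Put together, the checkpoint sequence is (the first two commands as quoted above, plus the matching step to leave safe mode):

hdfs dfsadmin -safemode enter    # block namespace changes
hdfs dfsadmin -saveNamespace     # dump the in-memory namespace to a new fsimage
hdfs dfsadmin -safemode leave    # resume normal operation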
I'm kind of late to this question, but I think it's worth a clearer response.
If I understood you correctly, you want to know: if metadata changes are stored in the edit log, why does listing the old path of a deleted file complain that it does not exist? And how does the namenode know that the file or directory has been deleted without reading the edit log?
This is covered in Chapter 11 of Hadoop: The Definitive Guide:
When a filesystem client performs a write operation (such as creating
or moving a file), the transaction is first recorded in the edit log.
The namenode also has an in-memory representation of the filesystem
metadata, which it updates after the edit log has been modified.
The in-memory metadata is used to serve read requests.
Having said that, the answer is simple: after updating the edit log, the namenode updates the in-memory representation. So when a read request is received, it knows that the file or directory has been deleted and will complain that it does not exist.
The entire file system namespace, including the "mapping of blocks to files" and file system properties, is stored in a file called the FsImage. Remember that the "mapping of blocks to files" is part of the FsImage, and it is stored both in memory and on disk.
Along with the FsImage, Hadoop also keeps the block-to-datanode mapping in memory, built from block reports received while the namenode is (re)started and periodically afterwards. So when you move a file to a different location, this is tracked in the edit log on disk, and when a block report is sent by a datanode to the namenode, the namenode gets an up-to-date view of where blocks are located on the cluster. That is why you can no longer see the data at the old path: the block report has updated the mapping of blocks to datanodes. But remember, this update has happened only in memory.
After a certain amount of time, either at a checkpoint or when the namenode is restarted, the edit logs on disk, which already contain the updates you have made (in your case, the movement of the file), get merged with the old FsImage on disk to create a new FsImage. This updated FsImage is then loaded into memory, and the same process repeats.

Detecting if a process is started by IE in protected mode

I am writing a program that is used to simplify the download of an application installer. The app is really simple in its operation: it just asks the BITS subsystem to download a ZIP from the net, decompress it on the user's desktop, and run the second-stage installer (the idea is that many of our end users are too dumb to be trusted with a ZIP download link and instructions on how to install the program).
Now, if a user runs IE 7+ on Vista/7 with UAC enabled and selects "execute" instead of "save as", the program fails. In fact, all attempts to write to the file system or the registry fail due to IE's protected mode.
To work around this, I've tagged the executable to trigger a UAC prompt, which works fine. However, the prompt is now triggered even when it is unnecessary: the program is designed to do the download in the background and resume downloads if the user closes their session before it is done, so the UAC prompt now appears every time the executable is launched.
I would like to detect that I'm running inside the sandbox and, in that case, restart the process with a UAC prompt (easy enough to do). I don't know how to detect that situation, however, short of attempting a write to the registry.
Any ideas?
Call the IEIsProtectedModeProcess function.
A good document covering IE's protected mode is Understanding and Working in Protected Mode Internet Explorer.
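For completeness, here is a minimal sketch of that check in C++ (assuming Vista or later; the relaunch step is only outlined in a comment):

#include <windows.h>
#include <iepmapi.h>                 // declares IEIsProtectedModeProcess
#pragma comment(lib, "ieframe.lib")  // the function is exported by ieframe.dll

// Returns true when the current process runs inside IE's Protected Mode
// sandbox; if the call fails (e.g. the API is unavailable), assume it doesn't.
static bool RunningInProtectedMode()
{
    BOOL sandboxed = FALSE;
    HRESULT hr = IEIsProtectedModeProcess(&sandboxed);
    return SUCCEEDED(hr) && sandboxed;
}

int main()
{
    if (RunningInProtectedMode())
    {
        // Relaunch this executable elevated (e.g. ShellExecute with the
        // "runas" verb) and exit; otherwise run without any UAC prompt.
        return 0;
    }
    // ... normal background download / resume logic ...
    return 0;
}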
