I have a BeagleBone Black Industrial (BBBI), which I assume is very similar to the BeagleBone Black.
I wish to make the BBBI boot from the microSD card by default, and I found this guide, which suggested deleting the MLO file from the boot partition. This approach has been mentioned in a few places based on Google, so I assume it is worth trying.
With the intention of renaming the MLO file, I tried to locate the file using locate, after booting without a microSD card, which presumably boots into Debian on the eMMC. This produces the following:
/opt/backup/uboot/MLO
/opt/source/pru-software-support-package/pru_cape/bin/MLO
/opt/source/pru-software-support-package/pru_cape/bin/MLO/beaglebone
/opt/source/pru-software-support-package/pru_cape/bin/MLO/beaglebone_black
/opt/source/pru-software-support-package/pru_cape/bin/MLO/beaglebone_black/MLO
/opt/source/pru-software-support-package/pru_cape/bin/MLO/beaglebone/MLO
I have double-checked that /boot does not contain MLO. Is anybody able to share where the MLO is, or did I miss something very obvious?
Alternatively, is there an easier way to boot from the microSD card without pressing the S2 button?
Newer images put MLO outside the filesystems, directly at several "magic" raw offsets on the eMMC/SD card. These offsets are documented in the AM335x TRM.
If you don't care about the current eMMC contents, something like this will do the brute-force job:
dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=1
(Make sure mmcblk0 is the eMMC, not the SD card; the numbering changes if you boot from SD, and the eMMC can show up as mmcblk1 instead.)
This wipes out the first megabyte of the eMMC with zeroes, so the partition table and other things are gone too. Essentially you get a blank eMMC.
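If you're unsure which device node is the eMMC, a quick sanity check before running dd might look like this sketch (device names are examples and can differ depending on the boot media):
# Compare device sizes; on a BBB the eMMC is typically the ~4 GB device.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# The sysfs type attribute usually reports "MMC" for the eMMC and "SD" for the card.
cat /sys/class/block/mmcblk0/device/type
# Only after confirming the target, wipe the first megabyte of the eMMC:
dd if=/dev/zero of=/dev/mmcblk0 bs=1M count=1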
If you want to be a bit more surgical, look at the flashing script in /opt/scripts. functions.sh is also worth a look.
Note that MLO is also referred to as SPL.
Try editing the uEnv.txt file so that it always loads the image from the SD card.
I was looking at my disk with DaisyDisk and I have a 30GB something called Docker.qcow2. More specifically, DaisyDisk puts it under ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.qcow2. Is it some kind of cache? Can I delete it? I have a lot of images from old projects that I won't ever use and I'm trying to clear up my disk.
The .qcow2 file is exposed to the VM as a block device with a maximum size of 64 GiB by default. As containers create new files in the filesystem, new sectors are written to the block device. These new sectors are appended to the .qcow2 file, causing it to grow in size until it eventually becomes fully allocated; it stops growing when it hits this maximum size.
You can stop Docker and delete this file; however, deleting it will also remove all of your containers and images, and Docker will recreate the file on the next start.
If you stumbled upon this, you're probably not stoked about a 64 GB file. If you open Docker > Preferences, you can tone the maximum size down quite a bit to something more reasonable. Doing this will delete the old qcow2 file, and that will delete your containers, so be careful.
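Before deleting anything, it may help to confirm how much space the backing file actually uses. A small sketch (the path comes from the question above; qemu-img is only available if you have QEMU installed, e.g. via Homebrew):
# Actual on-disk size of the backing file:
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.qcow2
# If QEMU is installed, compare the virtual size with the allocated size:
qemu-img info ~/Library/Containers/com.docker.docker/Data/vms/0/Docker.qcow2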
I've had the same issue. Instead of deleting the file or adjusting the size in the settings, simply use the following commands:
docker images
This will show all of the images on your system and the size of each image (you'd be surprised how quickly this can get out of hand).
docker image rm IMAGEID
This will remove the image with the ID that you can get from the images command above.
I use this method and it frees up a lot of disk space.
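If you have many stale images, Docker's built-in prune commands can batch this cleanup. A short sketch (double-check before running, since -a removes every image not used by at least one container):
# Show how much space images, containers, and volumes consume:
docker system df
# Remove all dangling (untagged) images:
docker image prune
# Remove all images not referenced by any container:
docker image prune -a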
I'm running a VM with Debian 7.0 x64 and need to troubleshoot something with a provider. When I run a grep command, the console outputs a long report, and I need to copy all of that output text and place it in the body of an email, or post it directly on another forum board. I'm sure the solution must be simple, but I can't find it by searching online. I see suggestions for right-clicking with the mouse, but my VM console doesn't respond to mouse clicks; I also see suggestions for copying and modifying files within the console, but as I said above, I just need to take the raw text and paste it elsewhere.
Thanks for the help!!!
The easiest way would be to save the output to a file and attach that to your email. (Personally, I hate emails that inline long error logs without a good cause, such as annotations.)
This would also allow you to compress the file before attaching it, reducing the size considerably (text compresses quite nicely).
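For example, a minimal sketch (the grep pattern and paths are placeholders):
# Capture both stdout and stderr into a file:
grep -r "some-pattern" /var/log > report.txt 2>&1
# Compress before attaching; plain text typically shrinks a lot:
gzip report.txt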
If this is not an option, there is xclip, which reads from stdin and puts it into an X selection.
$ ls | xclip
allows you to paste (with your middle mouse button) the contents of a directory listing.
If you must use Ctrl-V for pasting, you can also do:
$ ls | xclip -selection c
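The same idea works for an existing file; for instance (report.txt is the hypothetical file from above, and -selection c is short for -selection clipboard):
$ xclip -selection clipboard < report.txt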
I know this isn't really code related, but I don't know where else to ask.
While working yesterday I got a message saying that my startup disk was almost full, which I wasn't too surprised by because it's only a 128 GB Air.
But when I fired up DaisyDisk to see what the issue was, it appears that my computer has stored two files in the /private/var/tmp directory, each over 30 GB. Obviously DaisyDisk won't let me erase them because of the directory they are in.
They are called magick-23598T_US4im5XKvQ.pam and magick-23587vell8J7UTKgS.pam
I have no idea where they came from, but I was testing a file upload system for a Rails project when this happened. I was, however, uploading images of no more than 800 KB or so, so this seems a little extreme.
If anyone has any idea what might have happened, or how I can safely free up this space again, I would be massively grateful.
Looks like ImageMagick temp files -- are you processing the images with ImageMagick? There's a similar problem discussed here, although the exact cause may be different.
It is likely a large swap file from ImageMagick that hasn't been cleaned up. You can limit the file sizes by editing your policy.xml config for ImageMagick (/etc/ImageMagick/policy.xml on Ubuntu).
More info here: https://www.imagemagick.org/discourse-server/viewtopic.php?f=3&t=29225&p=130707#p130707
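If editing policy.xml isn't convenient, the limits can also be inspected from the command line and overridden via ImageMagick's environment variables. A sketch (the limit values are examples; the temp-file glob matches the names in the question):
# Show the current resource limits (memory, map, disk, etc.):
identify -list resource
# Cap how much scratch space ImageMagick may spill to disk:
export MAGICK_DISK_LIMIT=4GiB
export MAGICK_MEMORY_LIMIT=1GiB
# Once nothing is using them, stale temp files can be removed by hand:
rm /private/var/tmp/magick-*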
My application takes data from a server and saves it to an SQLite database. This works fine in the 9550 (BlackBerry Storm 2) simulator, but when I run this in any other simulator it gives me this error:
file system not ready
Code snippet:
URI myURI = URI.create("file:///store/MyDataBase.db");
Why is this happening?
Richard is right. You need to check for the existence of the filesystem root "store". There is an extra wrinkle for using SQLite, though. RIM only supports SQLite on eMMC storage. So even if "store" exists, it will only work if the underlying storage is eMMC. Notably the BlackBerry Bold 9650 device, AKA Bold2, has "store", but it is not eMMC, so you can't put an SQLite database there.
I'm not aware of any direct way of finding out whether a filesystem is using eMMC. I asked RIM and was told to check the filesystem size. If it's over 1 GB, then it is eMMC. That wasn't a very satisfying answer for me. I ended up checking for the filesystem "system". It is a read-only filesystem, but it is only present for eMMC storage, and if it exists, you can write a database to the "store" filesystem root.
Via the SQLite developer guide overview:
You can use the SQLite API, provided in the net.rim.device.api.database package, to store application data persistently to eMMC memory or a microSD card.
It may be that store is not a mounted and available filesystem root on the simulators where this fails. You should use javax.microedition.io.file.FileSystemRegistry.listRoots() to get an Enumeration of the currently mounted filesystem roots.
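Putting the two answers together, a minimal Java ME sketch might look like this (the root names "store/" and "system/" assume the convention that listRoots() returns roots with a trailing slash, and the eMMC heuristic is the assumption described in the answer above):
import java.util.Enumeration;
import javax.microedition.io.file.FileSystemRegistry;

public final class FsRoots {
    // Returns true if the given root (e.g. "store/") is currently mounted.
    public static boolean isMounted(String root) {
        Enumeration roots = FileSystemRegistry.listRoots();
        while (roots.hasMoreElements()) {
            if (root.equals(roots.nextElement())) {
                return true;
            }
        }
        return false;
    }

    // Heuristic from the answer above: "system" is only present on eMMC storage,
    // so "store" can hold an SQLite database only if both roots are present.
    public static boolean canHoldSQLiteDb() {
        return isMounted("store/") && isMounted("system/");
    }
}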
Sometimes when I add a new file to my path, I have to restart MATLAB or it won't be detected. There must be another way to do this!
I have experienced similar problems (MATLAB does not notice when I change a file). Unfortunately, I have no idea what causes it or how to solve it. I usually find that CLEAR ALL solves the problem, but be aware that it clears all variables in the workspace. Some form of the REHASH command (e.g., REHASH TOOLBOXRESET) may also be useful.
I'd love to see a better answer; all documentation that I came across seems to indicate that this cannot happen.
Perhaps this is a problem with MATLAB caching certain files at startup to improve performance. This happens with files in certain directories.
From the MATLAB help for the path command:
Note (...) Also note that locations of files in the matlabroot/toolbox directory tree are loaded and cached in memory at the beginning of each MATLAB session to improve performance. If you save files to matlabroot/toolbox directories using an external editor or add or remove files from these directories using file system operations, run rehash toolbox before you use the files in the current session. If you make changes to existing files in matlabroot/toolbox directories using an external editor, run clear functionname before you use the files in the current session. For more information, see the rehash reference page or the Toolbox Path Caching topic in the MATLAB Desktop Tools and Development Environment documentation
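In practice, the fix that follows from this documentation is a short sequence like the sketch below (the function name is a placeholder):
% After creating or deleting files outside MATLAB:
rehash;                 % refresh the function cache for directories on the path
rehash toolboxreset;    % also rebuild the cache for matlabroot/toolbox directories
% After editing an existing file outside MATLAB:
clear myfunction;       % hypothetical function name; forces a reload on next call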
I've often seen this happen with networked file locations. I don't understand the mechanism, but it definitely happens. A solution that often works:
path(path);
or, if that fails to pick it up, try this: (NB, this will clear your workspace)
clear classes;
path(path);
We did this last one so much that we put it in a script on our common code path called:
shazaam;
Yes, my age is showing.
You want the rehash function, or you need to set the path again using path(path) or similar. It also depends on whether you're using a "frozen" path. Look at the help for ADDPATH.
MATLAB will keep a cached copy of the compiled M-file unless it knows that you've changed it. If you've created the file or edited it outside of MATLAB, then it may not know that the file has changed.
This happens to me when the networked drive connection is lost and then restored. rehash does not work, but rehash toolboxreset does.