SDRAM bank configuration in U-Boot

I have built uClinux (U-Boot + kernel + romfs) for my STM32F429 Discovery board.
Now I have a custom board which uses a different SDRAM bank (bank 1 instead of bank 2 on the Discovery board). I tried to configure U-Boot for the new board, but I have an odd problem:
When I use the old U-Boot without any changes, the boot loader can see the kernel image, but it gets a HARD FAULT error when it extracts the kernel image to SDRAM, which is expected.
But when I configure U-Boot for the new board's memory layout, U-Boot itself doesn't even load and I see nothing on the serial console.
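For context, on the STM32F429 the FMC maps SDRAM bank 1 at 0xC0000000 and bank 2 at 0xD0000000, so the kind of change involved is roughly this in the board header under include/configs/ (only a sketch, not my exact diff; the macro names depend on the U-Boot version, and the FMC SDCR/SDTR and chip-select setup in the board code has to target bank 1 as well):
/* Discovery board, SDRAM on FMC bank 2: */
/* #define CONFIG_SYS_SDRAM_BASE 0xD0000000 */
/* custom board, SDRAM on FMC bank 1: */
#define CONFIG_SYS_SDRAM_BASE 0xC0000000
The kernel load/entry addresses in the uImage have to move to the new bank as well, otherwise the old image keeps pointing into the bank 2 window.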
Could you please help me with this? I have been trying to solve it for more than two months!

Related

Save Open Text Exceed window sizes and positions?

How do you save your last window sizes and positions when using Exceed? I'm using it to run a SAS environment, but every time I start it up, the windows go back to their default sizes and positions :(
I found this useful user-written paper on using the Display Manager.
The size and position of each of these windows can be adjusted using
standard mouse/window techniques. Once they have been adjusted the way
you want them, use the WSAVE ALL command in the command box to save
these settings for your next SAS session.
Also watch out for issues with not having access to your SASUSER library, which can occur when you run multiple SAS jobs at the same time. You can prevent the jobs from locking the SASUSER library by running with the -RSASUSER option, but then you will not be able to run the WSAVE command. So if you want to change the window locations, do it when you do have write access to SASUSER.
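For example, a sketch of the command-line invocations (the exact path and defaults depend on your SAS install):
sas -rsasuser      (extra/concurrent jobs: SASUSER is opened read-only, so they don't lock it, but WSAVE cannot write)
sas                (the one interactive session where WSAVE ALL can actually update SASUSER)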

Changing USB0 address of BeagleBone Black?

I am trying to change the static IP address of the USB0 port of a BeagleBone Black.
I know this thread was opened previously (Changing the static IP of Beagle Bone Black USB0), but no answer was found, so I am reopening it now to see if anyone has a solution.
I also found a solution by Eric Wong (http://ewong.me/changing-usb0-ip-address-on-the-beaglebone-black/), but it only applies to older Debian images; the latest ones have different file contents from what is described there, so it does not work.
Steps I did:
Step 1: I changed /etc/network/interfaces so that the default address is 192.168.8.2 instead of 192.168.7.2, like this:
iface usb0 inet static
address 192.168.8.2
netmask 255.255.255.0
network 192.168.8.0
gateway 192.168.8.1
Step 2: Then I changed the contents of /etc/udhcpd.conf, replacing "192.168.7.1" with "192.168.8.1" in the two places it appears.
Step 3: Reboot.
Bingo: I lost connectivity, and now I have to rewrite the Debian image onto the BeagleBone Black again, as I can no longer access it through either 192.168.7.2 or 192.168.8.2.
So if anyone knows how to do this, it would be really helpful if you could share your thoughts.
Coming into this late; my network uses 192.168.6/24 and 192.168.7/24 internally, so the latest BB images didn't work for me at all.
First: there's no real substitute for a real serial connection via the J1 connector; a 3.3v USB serial doodad is cheap, and being able to watch the whole boot (and image flashing!) process from the very start is super helpful. Adafruit sells one that works great with BeagleBone: https://www.adafruit.com/product/954
Anyway, on Debian GNU/Linux 10 (Apr 2020 image), /etc/default/bb-boot contains:
...
USB_CONFIGURATION=enable
#It's assumed usb0 is always enabled, usb1 can be disabled...
USB0_SUBNET=192.168.7
USB0_ADDRESS=192.168.7.2
USB0_NETMASK=255.255.255.0
USB1_ENABLE=enable
USB1_SUBNET=192.168.6
USB1_ADDRESS=192.168.6.2
USB1_NETMASK=255.255.255.0
DNS_NAMESERVER=8.8.8.8
Adjust this to taste and reboot. In my case, I changed USB0 to the 192.168.70 subnet and commented out all the USB1 lines; a sketch of that edit is below.
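Only a sketch of the edited file, assuming the 192.168.70 subnet; substitute whatever fits your network:
USB_CONFIGURATION=enable
USB0_SUBNET=192.168.70
USB0_ADDRESS=192.168.70.2
USB0_NETMASK=255.255.255.0
#USB1_ENABLE=enable
#USB1_SUBNET=192.168.6
#USB1_ADDRESS=192.168.6.2
#USB1_NETMASK=255.255.255.0
DNS_NAMESERVER=8.8.8.8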
Because I have to re-flash a bunch of boards periodically, I burned this into the SD card image itself, which saved me a lot of time later.

Storage Spaces Direct

Some background:
I'm trying to set up Azure Pack in a test environment, and am currently working on setting up the servers that are going to host it all.
To do this I have two virtual Windows Server 2016 TP4 servers hosted on an ESXi host, so I need to set up Storage Spaces Direct.
(An iSCSI target and Storage Spaces (WS 2012) have been ruled out, since the first is a nightmare to set up and the internet told me the second comes with low R/W speeds.)
I've been following this guide: https://technet.microsoft.com/en-us/library/mt126109.aspx
Problem:
When I run the cmdlet Enable-ClusterStorageSpacesDirect, I get this warning: No elegible DAS disk found.
Both servers have 3 disks each. They are initialized and 100% unallocated, and I have tried with them being both offline and online.
If I try running this cmdlet: (Get-Cluster).DasModeEnabled=1
I get the following error: The property 'DasModeEnabled' cannot be found on this object. Verify that the property exists and can be set.
Any and all help is greatly appreciated!
Storage Spaces Direct doesn't support FC & RAID-controlled LUNs.
The key is to force S2D to accept RAID BusType:
(Get-Cluster).S2DBusTypes=256
Here's a good article about it https://www.starwindsoftware.com/blog/resolving-enable-clusters2d-bus-type-support-issue-on-some-storage-controllers.
Another option is to reflash the controller's firmware to IT mode.
There are also other solutions, like StarWind's, which I suggest you test.
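A slightly fuller sketch of that workaround, assuming it is run on one of the cluster nodes after the cluster has been formed:
# 256 (0x100) adds the RAID bus type to what S2D will accept
(Get-Cluster).S2DBusTypes = 256
# check that the disks now show up as poolable, then retry
Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool
Enable-ClusterStorageSpacesDirect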

How is saveenv implemented inside u-boot?

I am trying to figure out which part of code inside the u-boot is responsible for writing to the device from the RAM, when we do a saveenv after setenv. I could find printenv and setenv, but not saveenv. Can someone please shed some light on it?
That depends on what nonvolatile storage the platform is configured for. In any case common/cmd_nvedit.c will be built. But (for example) if the environment lives in SPI flash, then saveenv() in common/env_sf.c will get built and linked. Do "grep saveenv common/*.c" and you'll see the other storage options (eeprom, flash, mmc, nand, etc.).
CONFIG_ options for any platform are in the appropriate .h under include/configs/. Compare the ENV-related options with the storage options; that should lead you to the right part of the code for your platform.
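As a rough illustration of that layout (older U-Boot trees; file names and signatures vary between versions, so treat this as a sketch rather than literal source), the saveenv command handler in common/cmd_nvedit.c just delegates to whichever backend got linked in:
static int do_env_save(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[])
{
        /* the command itself only calls the backend's saveenv() */
        return saveenv() ? 1 : 0;
}
Each backend (common/env_sf.c, common/env_mmc.c, common/env_nand.c, ...) then provides its own saveenv() that erases and rewrites the environment region on its device from the in-RAM copy; that per-backend saveenv() is the code that actually does the RAM-to-device write.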

Displaying images using opencv running on remote host, on local display?

I have a simple piece of code which uses OpenCV, and I want to run it on an ARM development board (Freescale i.MX) which has no display attached to it.
I've been trying for a few days now to run the code on this remote device and display its graphical output on my local machine, with no luck...
My setup:
Ubuntu 10.04 on my local machine
Linux 2.6 running on the board, with all the relevant packages I could think of.
Ethernet connection to my local machine
openCV compiled statically with GTK enabled
GDB/gdbserver for debugging over SSH
Everything is running just fine except for OpenCV's HighGUI display functions, which have no effect.
Some notes:
I can set the DISPLAY env var to point to my local machine, and I get
gtk-demo/xterm/whatever-is-using-X to appear on my local machine,
even when running from the same SSH session I use for running and
debugging my code.
I'm trying to avoid getting into GTK/Qt and patching or preparing my own display wrapper. I get the feeling that I just need a small modification to the GTK/HighGUI config to make this work...
My code (which compiles and runs OK, except for having no graphical output):
#include <opencv2/opencv.hpp>   // umbrella header (OpenCV 2.x)

cv::Mat im = cv::imread("/root/capture.jpg");
//im is valid and not empty at this point
cvNamedWindow( "test" );
cv::imshow( "test", im);
cvWaitKey();
cvDestroyWindow( "test" );
Can anyone assist?
Thanks
Update:
Solved
While reading the message I'd just posted, I found out that I had actually used getchar() instead of cvWaitKey(), which seems to be important...
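That would explain it: cvWaitKey()/cv::waitKey() is what drives HighGUI's GTK event loop, so with getchar() the window never gets a chance to be drawn. A minimal sketch of the working version, assuming the same image path and DISPLAY forwarded as described above:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat im = cv::imread("/root/capture.jpg");
    if (im.empty())
        return 1;              // bail out if the image could not be read
    cv::namedWindow("test");
    cv::imshow("test", im);
    cv::waitKey(0);            // blocks AND services the GUI event loop, unlike getchar()
    cv::destroyWindow("test");
    return 0;
}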
