Memory size exhausted when I try to run drush cr - drush

When I try to run drush cache-rebuild, it shows the error below:
PHP Fatal error: Allowed memory size of 2097152 bytes exhausted (tried to allocate 4096 bytes) in /usr/local/src/drush/vendor/composer/autoload_static.php on line 4129
I have already set the memory limit to 4000 MB, but the problem still exists. Does anyone know how to fix it?
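For anyone hitting the same thing: the message reports a limit of 2097152 bytes (2 MB), which suggests the 4000 MB setting is not being picked up by the PHP binary that runs drush. A minimal sketch of how one might verify and override this, assuming a Composer-based install with drush in vendor/bin:
# Check which php.ini the CLI actually loads; it is often a different
# file from the web server's php.ini, so a memory_limit raised there
# will not apply to drush runs.
php --ini
# Run drush with an explicit one-off limit (-1 means unlimited):
php -d memory_limit=-1 vendor/bin/drush cache:rebuild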

How to start U-Boot from an SD card's FAT partition on BeagleBone Black

I'm currently reading Mastering Embedded Linux Programming and I'm on the chapter where it goes into bootloaders, more specifically U-Boot for the BeagleBone Black.
I have built a cross-compiler and I'm able to build U-Boot; however, I can't make it run the way it is described in the book.
After some experimentation and googling, I can make it work by writing MLO and u-boot.img in raw mode (using these commands).
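For reference, the raw-mode writes typically look like the following on the BeagleBone Black; these use the standard offsets the AM335x boot ROM probes and are not necessarily the exact commands from the original post (/dev/sda is assumed from the listings below):
sudo dd if=MLO of=/dev/sda seek=1 bs=128k
sudo dd if=u-boot.img of=/dev/sda seek=1 bs=384k
The ROM searches for MLO at offsets 0x0, 0x20000, 0x40000 and 0x60000 of the SD card, so seek=1 with bs=128k places MLO at 0x20000, and seek=1 with bs=384k places u-boot.img at 0x60000, where MLO expects to find it.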
However, if I put the files in a FAT32 MBR boot partition, the BeagleBone will not boot; it only shows a string of C's, which indicates that it is trying to get its bootloader from the serial interface because it has decided it cannot boot from the SD card.
I have also studied this answer. According to that answer I should be doing everything correctly. I've tried to experiment with the MMC raw mode options in the U-Boot build configuration, but I've not been able to find a change that works.
I feel like there must be something obvious I'm missing, but I can't figure it out. Are there any things I can try to debug this further?
Update: some more details on the partition tables.
When using the "raw way" of putting MLO and u-boot.img on the SD card, I have not created any partitions at all. This works:
$ sudo sfdisk /dev/sda -l
Disk /dev/sda: 117,75 GiB, 126437294080 bytes, 246947840 sectors
Disk model: MassStorageClass
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
When trying to use a boot partition, which does not work, I have this configuration:
$ sudo sfdisk /dev/sda -l
Disk /dev/sda: 117,75 GiB, 126437294080 bytes, 246947840 sectors
Disk model: MassStorageClass
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x3d985ec3
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 133119 131072 64M c W95 FAT32 (LBA)
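For completeness, a sketch of how such a layout can be created and formatted (assumed commands that would reproduce the table above; note that this wipes the existing partition table):
echo '2048,131072,c,*' | sudo sfdisk /dev/sda
sudo mkfs.vfat -F 32 /dev/sda1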
Update 2: The contents of the boot partition are the exact same two files that I use for the raw writes, so they are confirmed to work:
$ ls -al
total 1000
drwxr-xr-x 2 peter peter 16384 Jan 1 1970 .
drwxr-x---+ 3 root root 4096 Jul 18 08:44 ..
-rw-r--r-- 1 peter peter 108184 Jul 14 13:56 MLO
-rw-r--r-- 1 peter peter 893144 Jul 14 13:56 u-boot.img
Update 3: I have already tried the following U-Boot options to try to get it to work (in the SPL / TPL menu):
"Support FAT filesystems" This is enabled by default. I can't really find a good reference for the U-Boot options, but I am guessing this is what enables booting from a FAT partition (which is what I'm trying to do)
"MCC raw mode: by sector" I have disabled this. As expected, this indeed breaks the booting in raw mode, which is the only thing I got working up till now.
"MCC raw mode: by partition". I have tried to enable this and using partition 1 to load U-Boot from. I'm not sure how to understand this option. I assume raw mode does not require partitions, but this asks for what partition to use...
In general, if anyone can point me to a U-Boot configuration reference, that would already be very helpful. Right now, I'm just randomly turning things on and off that sound like they might help.

Max size of string CMD that can be passed to Docker

In the Docker references, I didn't find any information about how long a string passed to Docker CMD can be.
What are the limitations?
What is the maximum number of characters I can pass to CMD?
I have done a simple test and found that the limit for a Dockerfile line is 65535 bytes on my CentOS 7/x64 machine.
#./build.sh
Sending build context to Docker daemon 363kB
Error response from daemon: failed to parse Dockerfile: dockerfile line greater than max allowed size of 65535
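For anyone who wants to reproduce the test, a minimal sketch (the original build.sh is not shown, so this is an assumed equivalent):
# Generate a Dockerfile whose CMD line exceeds 65535 bytes, then build it.
printf 'FROM alpine\nCMD echo %s\n' "$(head -c 70000 /dev/zero | tr '\0' 'x')" > Dockerfile
docker build -t cmd-limit-test .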
@ZenithS You're right for Windows (8192 characters), but Linux is not that easy.
To make it short: for Linux it's hardcoded to 64 or 128 KiB. You can check with xargs --show-limits, which gives a pretty detailed overview:
Your environment variables take up 5354 bytes
POSIX upper limit on argument length (this system): 2089750 <-- ARG_MAX
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 2084396 <-- ARG_MAX - ENV
Size of command buffer we are actually using: 131072 <-- Hardcoded limit which applies actually (see below)
Maximum parallelism (--max-procs must be no greater): 2147483647
The hardcoded limit comes from MAX_ARG_STRLEN, which is set to PAGE_SIZE * 32:
https://github.com/torvalds/linux/blob/v5.16-rc7/include/uapi/linux/binfmts.h#L15
You can check the page size with getconf PAGE_SIZE (mostly 4096, sometimes 2048, on modern platforms), which results in 128 or 64 KiB respectively.
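A quick way to compute the effective limit on a given machine (output shown for a typical 4 KiB page size):
$ getconf PAGE_SIZE
4096
$ echo $((4096 * 32))
131072
which matches the 131072-byte command buffer in the xargs output above.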

U-Boot fails to execute load cmd from uboot.env

I am working with:
U-Boot v2021.10
BeagleBone Black rev C
I've created an uboot.env image with the mkenvimage tool from this file:
loadfromsd=load mmc 0:1 0x82000000 /zImage; load mmc 0:1 0x88000000 /am335x-boneblack.dtb
set_bootargs=setenv bootargs console=ttyS0,115200n8 root=/dev/mmcblk0p2 rw rootfstype=ext4 rootwait
uenvcmd=setenv auotload no; run set_bootargs; run loadfromsd; printenv bootargs; bootz 0x82000000 - 0x88000000
The problem is that loading the files into memory with the load command in the first line fails.
Full message from start is:
U-Boot SPL 2021.10 (Oct 14 2021 - 20:41:20 -0700)
Trying to boot from MMC1
U-Boot 2021.10 (Oct 14 2021 - 20:41:20 -0700)
CPU : AM335X-GP rev 2.1
Model: TI AM335x BeagleBone Black
DRAM: 512 MiB
ti_sysc target-module@9000: failed to get fck clock
WDT: Started with servicing (60s timeout)
NAND: nand_base: timeout while waiting for chip to become ready
nand_base: No NAND device found
0 MiB
MMC: ti_sysc target-module@7000: failed to get fck clock
OMAP SD/MMC: 0, OMAP SD/MMC: 1
Loading Environment from FAT... OK
<ethaddr> not set. Validating first E-fuse MAC
Net: eth2: ethernet@4a100000, eth3: usb_ether
=> run uenvcmd
4295456 bytes read in 282 ms (14.5 MiB/s)
'ailed to load '/am335x-boneblack.dtb
bootargs=console=ttyS0,115200n8 root=/dev/mmcblk0p2 rw rootfstype=ext4 rootwait
Kernel image @ 0x82000000 [ 0x000000 - 0x418b20 ]
ERROR: Did not find a cmdline Flattened Device Tree
Could not find a valid device tree
The actual error is:
=> run uenvcmd
4295456 bytes read in 282 ms (14.5 MiB/s)
'ailed to load '/am335x-boneblack.dtb
P.S. My U-Boot fails to recognize ${} substitutions properly; using
console=ttyS0,115200n8
bootpartition=mmcblk0p2
set_bootargs=setenv bootargs console=${console} root=/dev/${bootpartition} rw rootfstype=ext4 rootwait
caused an error:
syntax error:
rootfstype=ext4 rootwait0n8
This 0n8 was appended after rootwait and shouldn't be there, so I wrote the "straight" file above without variables.
Thanks to sawdust for the info that the carriage-return character matters and overwrites the first letter of the error message. That gave me the idea that it also matters for the path to the file in the load command, and indeed it does.
If I end each line with space+\r, NOT just \r, everything works fine.
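A more robust fix, and an assumption beyond what the post describes, is to strip the carriage returns from the source file before rebuilding the image, so no stray \r ends up inside the environment values at all (0x4000 is an assumed environment size; use the CONFIG_ENV_SIZE from your U-Boot configuration):
sed -i 's/\r$//' uboot-env.txt
mkenvimage -s 0x4000 -o uboot.env uboot-env.txt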

dask jobqueue worker failure at startup 'Resource temporarily unavailable'

I'm running dask over slurm via jobqueue and I have been getting 3 errors pretty consistently...
Basically my question is: what could be causing these failures? At first glance the problem is that too many workers are writing to disk at once, or that my workers are forking into many other processes, but it's pretty difficult to track that. I can SSH into the node, but I'm not seeing an abnormal number of processes, and each node has a 500 GB SSD, so I shouldn't be writing excessively.
Everything below this is just information about my configurations and such
My setup is as follows:
cluster = SLURMCluster(cores=1, memory=f"{args.gbmem}GB", queue='fast_q',
                       name=args.name, env_extra=["source ~/.zshrc"])
cluster.adapt(minimum=1, maximum=200)
client = await Client(cluster, processes=False, asynchronous=True)
I suppose I'm not even sure if processes=False should be set.
I run this starter script via sbatch with 4 GB of memory, 2 cores (-c) (even though I expect to only need 1), and 1 task (-n). This sets off all of my jobs via the SLURMCluster config from above. I dumped my SLURM submission scripts to files and they look reasonable.
Each job is not complex; it is a subprocess.call() of a command to a compiled executable that takes 1 core and 2-4 GB of memory. I require the client call and further calls to be asynchronous because I have a lot of conditional computations. So each worker, when loaded, should consist of 1 Python process, 1 running executable, and 1 shell.
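A minimal sketch of what such a task looks like (names and paths here are hypothetical placeholders, not my actual code):
import subprocess

def run_tool(arg_list):
    # Each dask task shells out to a compiled executable that uses
    # 1 core and 2-4 GB of memory.
    return subprocess.call(["/path/to/executable"] + arg_list)

# With an asynchronous client, submission looks roughly like:
#   future = client.submit(run_tool, ["--input", "data.bin"])
#   result = await future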
Imposed by the scheduler, we have:
>> ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) unlimited
-s: stack size (kbytes) 8192
-c: core file size (blocks) 0
-m: resident set size (kbytes) unlimited
-u: processes 512
-n: file descriptors 1024
-l: locked-in-memory size (kbytes) 64
-v: address space (kbytes) unlimited
-x: file locks unlimited
-i: pending signals 1031203
-q: bytes in POSIX msg queues 819200
-e: max nice 0
-r: max rt priority 0
-N 15: unlimited
And each node has 64 cores, so I don't really think I'm hitting any limits.
I'm using a jobqueue.yaml file that looks like:
slurm:
  name: dask-worker
  cores: 1                     # Total number of cores per job
  memory: 2                    # Total amount of memory per job
  processes: 1                 # Number of Python processes per job
  local-directory: /scratch    # Location of fast local storage like /scratch or $TMPDIR
  queue: fast_q
  walltime: '24:00:00'
  log-directory: /home/dbun/slurm_logs
I would appreciate any advice at all! Full log is below.
FORK BLOCKING IO ERROR
distributed.nanny - INFO - Start Nanny at: 'tcp://172.16.131.82:13687'
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/dbun/.local/share/pyenv/versions/3.7.0/lib/python3.7/multiprocessing/forkserver.py", line 250, in main
pid = os.fork()
BlockingIOError: [Errno 11] Resource temporarily unavailable
distributed.dask_worker - INFO - End worker
Aborted!
CANT START NEW THREAD ERROR
https://pastebin.com/ibYUNcqD
BLOCKING IO ERROR
https://pastebin.com/FGfxqZEk
EDIT:
Another piece of the puzzle:
It looks like dask_worker is running multiple multiprocessing.forkserver calls? Does that sound reasonable?
https://pastebin.com/r2pTQUS4
This problem was caused by having ulimit -u too low.
As it turns out, each worker has a few processes associated with it, and the Python ones have multiple threads. In the end you end up with approximately 14 threads per worker that count against your ulimit -u. Mine was set to 512, so on a 64-core system I was likely hitting ~896 (64 × 14). It seems the maximum number of threads per process I could have afforded was about 8 (512 / 64).
Solution:
In .zshrc (.bashrc) I added the line:
ulimit -u unlimited
Haven't had any problems since.
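To verify how close a node is to the limit, one can count the calling user's threads, since ulimit -u counts threads, not just processes (a sketch, assuming GNU procps ps):
# Count all threads owned by the current user; compare against `ulimit -u`.
ps -u "$USER" -L --no-headers | wc -l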

What is the correct address for loading spiffsimg file

I have used spiffsimg to create a single file containing multiple lua files:
# ./spiffsimg -f lua.img -c 262144 -r lua.script
f 4227 init.lua
f 413 cfg.lua
f 2233 setupWifi.lua
f 7498 configServer.lua
f 558 cfgForm.htm
f 4255 setupConfig.lua
f 14192 main.lua
#
I then use esptool.py to flash the NodeMCU firmware and the file containing the Lua files to the ESP8266 (NodeMCU dev kit):
c:\esptool-master>c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x00000 nodemcu-dev-9-modules-2016-07-18-12-06-36-integer.bin 0x78000 lua.img
esptool.py v1.0.2-dev
Connecting...
Running Cesanta flasher stub...
Flash params set to 0x0240
Writing 446464 @ 0x0... 446464 (100 %)
Wrote 446464 bytes at 0x0 in 38.9 seconds (91.9 kbit/s)...
Writing 262144 @ 0x78000... 262144 (100 %)
Wrote 262144 bytes at 0x78000 in 22.8 seconds (91.9 kbit/s)...
Leaving...
I then run ESPLorer to check the status and get:
PORT OPEN 115200
Communication with MCU..Got answer! AutoDetect firmware...
Can't autodetect firmware, because proper answer not received.
NodeMCU custom build by frightanic.com
branch: dev
commit: b21b3e08aad633ccfd5fd29066400a06bb699ae2
SSL: true
modules: file,gpio,http,net,node,rtctime,tmr,uart,wifi
build built on: 2016-07-18 12:05
powered by Lua 5.1.4 on SDK 1.5.4(baaeaebb)
lua: cannot open init.lua
>
----------------------------
No files found.
----------------------------
>
Total : 3455015 bytes
Used : 0 bytes
Remain: 3455015 bytes
The NodeMCU firmware flashed correctly, but the Lua files can't be located.
I have tried flashing to other locations (0x84000, 0x7c000), but I am just guessing at these locations based on reading threads on GitHub.
I used the NodeMCU file.fscfg() routine to get the flash address and size. If I only flash the NodeMCU firmware I get the following:
print (file.fscfg())
524288 3653632
524288 is 0x80000, so I tried flashing only the spiffsimg file (lua.img) to 0x80000, then ran the same print statement and got:
print (file.fscfg())
786432 3391488
The flash address incremented by the exact number of bytes in lua.img, which I don't understand: why would the flash address change? Is the first number returned by file.fscfg not the starting flash address, but the ending flash address?
What is the correct address for flashing an image file, containing Lua files, that was created by spiffsimg?
The version of spiffsimg found here will provide the correct address for flashing an image file that contains lua files.
Do not use this version of spiffsimg as it is out of date.
To install the spiffsimg utility, you need to download the entire nodemcu-firmware package and build it in a Linux environment using make. (Note: make on my Debian Linux box generated an error, but I was able to go to the ../tools/spiffsimg subdirectory and run make on the Makefile found in that directory to create the utility.)
The spiffsimg instructions found here are quite clear, with one exception: the file name you specify with the -f parameter needs to include the characters %x. The %x will be replaced with the address that the image file should be flashed to.
For example, the command
spiffsimg -f %x-luaFiles.img -S 4MB -U 465783 -r lua.script
will create a file in the local directory with a name like 80000-luaFiles.img, which means you should flash that image file at address 0x80000 on the ESP8266.
I've never done that myself but I'm reasonably confident the correct answer can be extracted from the docs.
-f specifies the filename for the disk image. '%x' will be replaced by the calculated offset of the file system.
And a bit further down:
The disk image file is placed into the bin directory and it is named 0x<offset>-<size>.bin, where the offset is the location where it should be flashed, and the size is the size of the flash part.
However, there's a slight mismatch between the two statements; we may have a bug in the docs. If "'%x' will be replaced..." then I'd expect the final name not to contain 0x anymore.
Furthermore, it is possible to define a fixed SPIFFS position when you build the firmware:
#define SPIFFS_FIXED_LOCATION 0x100000
This specifies that the SPIFFS filesystem starts at 1 MB from the start of the flash. Unless otherwise specified, it will run to the end of the flash (excluding the 16 KB of space reserved by the SDK).
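Putting it together: with a generated image named 0x80000-luaFiles.img, the flash step would look something like the following (a sketch only; the port and flash-size/mode flags are assumed to match the earlier invocation):
c:\Python27\python esptool.py -p COM7 write_flash -fs 32m -fm dio 0x80000 0x80000-luaFiles.img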
