My electron armv7l build exits with Segmentation Fault - electron

I have an armv7l board that needs Electron. With the assistance of Buildroot, I have been able to get the latest version of Electron via 'npm install', but I want to build Electron myself and deploy it onto my board. I have followed Electron's build instructions for cross-compiling it (link) and I have created a build that sort of executes. The output of the Electron build is a file called dist.zip. I copied this over to my target, unzipped it, and executed electron, but it quickly exited with a segmentation fault. I'm confident that the build is for the correct processor.
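As a sanity check on the cross-compiled binary, the ELF header can be read directly to confirm the target architecture. Below is a minimal Python sketch; the elf_machine helper and the './electron' path are illustrative, not part of the Electron tooling, and it assumes a little-endian ELF (which armv7l is):

```python
import struct

def elf_machine(path):
    """Return the e_machine field of an ELF file, or None if it is not ELF."""
    with open(path, "rb") as f:
        header = f.read(20)
    if len(header) < 20 or header[:4] != b"\x7fELF":
        return None
    # e_machine is a 16-bit little-endian field at byte offset 18
    return struct.unpack_from("<H", header, 18)[0]

EM_ARM = 0x28     # 32-bit ARM (armv7l)
EM_X86_64 = 0x3E  # x86-64, for comparison

# elf_machine("./electron") should yield 0x28 (EM_ARM) on an armv7l build
```

A value other than 0x28 would mean the binary was built for the wrong architecture; if it is 0x28 and the crash persists, the mismatch is more likely in the target's dynamic linker or glibc version.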
Here are the contents that exist inside of dist.zip.
drwxr-xr-x 5 docker-user docker-user 4096 Aug 22 22:31 .
drwxr-xr-x 8 docker-user docker-user 4096 Aug 22 22:29 ..
-rw-r--r-- 1 docker-user docker-user 1060 Aug 10 05:36 LICENSE
-rw-r--r-- 1 docker-user docker-user 3993441 Aug 23 02:02 LICENSES.chromium.html
-rwxr-xr-x 1 docker-user docker-user 4224508 Aug 23 05:17 chrome-sandbox
-rw-r--r-- 1 docker-user docker-user 116794 Aug 23 03:59 chrome_100_percent.pak
-rw-r--r-- 1 docker-user docker-user 175786 Aug 23 03:59 chrome_200_percent.pak
-rwxr-xr-x 1 docker-user docker-user 78589028 Aug 23 05:17 electron
-rw-r--r-- 1 docker-user docker-user 10425616 Aug 10 06:10 icudtl.dat
-rwxr-xr-x 1 docker-user docker-user 161468 Aug 23 05:17 libEGL.so
-rwxr-xr-x 1 docker-user docker-user 5089240 Aug 23 05:17 libGLESv2.so
-rwxr-xr-x 1 docker-user docker-user 1962076 Aug 23 05:17 libffmpeg.so
drwxr-xr-x 2 docker-user docker-user 4096 Aug 22 22:31 locales
-rw-r--r-- 1 docker-user docker-user 82118 Aug 23 02:35 natives_blob.bin
drwxr-xr-x 2 docker-user docker-user 4096 Aug 22 22:31 resources
-rw-r--r-- 1 docker-user docker-user 8438393 Aug 23 03:59 resources.pak
-rw-r--r-- 1 docker-user docker-user 279912 Aug 23 05:14 snapshot_blob.bin
drwxr-xr-x 2 docker-user docker-user 4096 Aug 22 22:31 swiftshader
-rw-r--r-- 1 docker-user docker-user 612184 Aug 23 05:15 v8_context_snapshot.bin
-rw-r--r-- 1 docker-user docker-user 22 Aug 10 05:36 version
I then ran it as seen below and got a fault.
./electron: /usr/lib/libdbus-1.so.3: no version information available (required by ./electron)
./electron: /usr/lib/libasound.so.2: no version information available (required by ./electron)
./electron: /usr/lib/libasound.so.2: no version information available (required by ./electron)
Segmentation fault
I then ran it with strace and this is what I saw.
mprotect(0x75618000, 4096, PROT_READ) = 0
mprotect(0x75be8000, 4096, PROT_READ) = 0
mprotect(0x760ab000, 8192, PROT_READ) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_ACCERR, si_addr=0x76bebfdc} ---
+++ killed by SIGSEGV +++
Segmentation fault
#
I ran this with gdb and did a backtrace and below is what I see.
Program received signal SIGSEGV, Segmentation fault.
0x76fc7048 in relocate_pc24.12735.isra.0 () from /lib/ld-linux-armhf.so.3
(gdb) bt
#0 0x76fc7048 in relocate_pc24.12735.isra.0 () from /lib/ld-linux-armhf.so.3
#1 0x76fc7a76 in _dl_relocate_object () from /lib/ld-linux-armhf.so.3
#2 0x76fc1730 in dl_main () from /lib/ld-linux-armhf.so.3
#3 0x76fcecc4 in _dl_sysdep_start () from /lib/ld-linux-armhf.so.3
#4 0x76fc2906 in _dl_start_final () from /lib/ld-linux-armhf.so.3
#5 0x76fc2ad0 in _dl_start () from /lib/ld-linux-armhf.so.3
#6 0x76fbfa50 in _start () from /lib/ld-linux-armhf.so.3
#7 0x76fbfa50 in _start () from /lib/ld-linux-armhf.so.3
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Any help would be greatly appreciated.

Related

How to know whether Intel SGX DCAP is installed

How can I tell whether Intel SGX DCAP is installed or not? The server is running Ubuntu 20.04.
Is there a way to check this?
The SGX driver is already included in Linux kernels newer than 5.11, while DCAP is supported by 8th-generation and newer Intel processors with Flexible Launch Control. This is how you check:
$ ls -ltr /dev/sgx*
crw------- 1 root root 10, 124 Oct 3 16:30 /dev/sgx_vepc
crw-rw---- 1 root sgx_prv 10, 126 Oct 3 16:30 /dev/sgx_provision
crw-rw---- 1 root sgx 10, 125 Oct 3 16:30 /dev/sgx_enclave
/dev/sgx:
total 0
lrwxrwxrwx 1 root root 16 Oct 3 16:30 provision -> ../sgx_provision
lrwxrwxrwx 1 root root 14 Oct 3 16:30 enclave -> ../sgx_enclave
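The same check can be scripted; this is a sketch (the sgx_device_status helper is made up, and the device paths are taken from the listing above) that simply tests for the device nodes:

```python
import os

# Device nodes created by the in-kernel SGX driver (kernel >= 5.11)
SGX_DEVICES = ["/dev/sgx_enclave", "/dev/sgx_provision", "/dev/sgx_vepc"]

def sgx_device_status(devices=SGX_DEVICES):
    """Map each expected SGX device node to whether it exists on this host."""
    return {dev: os.path.exists(dev) for dev in devices}

# On a host with working SGX support, at least /dev/sgx_enclave should be present.
```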

Docker on RHEL 8 creating files and folder with 027 permission

I am running the nginxinc/nginx-unprivileged:stable-alpine Docker image on a RHEL 8.8 server. When the Docker container starts, it creates directories and files with a umask of 0027.
But my Docker 20.10.17 daemon is running with a umask of 0022. My server's default umask is 0027, which I can't change due to security requirements.
# systemd-analyze dump |egrep -i 'docker|umask'
ReferencedBy: docker.service (destination-file)
UMask: 0022
Here are the file system permissions inside the container on the RHEL 8 server.
# ls -l
total 76
drwxr-x--- 1 root root 4096 Jun 16 21:57 app
drwxr-x--- 1 root root 4096 Jun 16 21:57 bin
drwxr-x--- 5 root root 360 Jun 17 20:18 dev
drwxr-x--- 1 root root 4096 Jun 16 21:57 docker-entrypoint.d
-rwxr-x--- 1 root root 1202 Jun 16 21:57 docker-entrypoint.sh
drwxr-x--- 1 root root 4096 Jun 17 20:18 etc
drwxr-x--- 2 root root 4096 Jun 16 21:57 home
drwxrwxrwt 1 root root 4096 Jun 16 21:57 tmp
drwxr-x--- 1 root root 4096 Jun 16 21:57 usr
drwxr-x--- 1 root root 4096 Jun 16 21:57 var
Here are the file system permissions inside the container on a Windows machine with the same Docker image.
ls -l
drwxr-xr-x 2 root root 4096 May 23 16:51 bin
drwxr-xr-x 5 root root 360 Jun 17 18:39 dev
drwxr-xr-x 1 root root 4096 Jun 16 10:36 docker-entrypoint.d
-rwxr-xr-x 1 root root 1202 Jun 16 10:36 docker-entrypoint.sh
drwxr-xr-x 1 root root 4096 Jun 17 18:39 etc
drwxr-xr-x 2 root root 4096 May 23 16:51 home
drwxr-xr-x 1 root root 4096 May 23 16:51 usr
drwxr-xr-x 1 root root 4096 May 23 16:51 var
How can I make the Docker container file system get created with a umask of 0022?
Thanks
when docker container starts
That means you need to build your own image, based on nginxinc/nginx-unprivileged:stable-alpine, with a new entrypoint like:
#!/bin/sh
# entrypoint.sh
umask 022
# ... other first-time setup ...
exec "$@"
See "Change umask in docker containers" for more details, but the idea remains the same.
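The effect the umask has on freshly created files can be demonstrated outside Docker as well; this short Python sketch (create_with_umask is a hypothetical helper) shows why a umask of 0027 strips the group-write and all "other" bits:

```python
import os

def create_with_umask(umask, path):
    """Create a file under the given umask and return its permission bits."""
    old = os.umask(umask)
    try:
        # Request mode 0666; the kernel clears whatever bits the umask masks out
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
    finally:
        os.umask(old)  # restore the process's previous umask
    return os.stat(path).st_mode & 0o777

# With umask 0027 a file requested as 0666 ends up 0640;
# with umask 0022 it ends up 0644.
```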

Jailkit User Cannot Execute Nextcloud OCC Commands

I have a fresh install of Nextcloud 22.2.0, which I installed according to these instructions.
After the NC installation, I have ZERO errors in my NC log. However, in the Overview section I have some basic warnings that I know are "false positives" following a new installation. Therefore I want to run the NC occ in order to repair things:
./occ integrity:check-core
However, I get these errors:
Your data directory is invalid
Ensure there is a file called ".ocdata" in the root of the data directory.
Cannot create "data" directory
This can usually be fixed by giving the webserver write access to the root directory. See https://docs.nextcloud.com/server/22/go.php?to=admin-dir_permissions
Setting locale to en_US.UTF-8/fr_FR.UTF-8/es_ES.UTF-8/de_DE.UTF-8/ru_RU.UTF-8/pt_BR.UTF-8/it_IT.UTF-8/ja_JP.UTF-8/zh_CN.UTF-8 failed
Please install one of these locales on your system and restart your webserver.
An unhandled exception has been thrown:
Exception: Environment not properly prepared. in /web/lib/private/Console/Application.php:162
Stack trace:
#0 /web/console.php(98): OC\Console\Application->loadCommands(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#1 /web/occ(11): require_once('/web/console.ph...')
#2 {main}
I was able to resolve this error:
Setting locale to en_US.UTF-8/fr_FR.UTF-8/es_ES.UTF-8/de_DE.UTF-8/ru_RU.UTF-8/pt_BR.UTF-8/it_IT.UTF-8/ja_JP.UTF-8/zh_CN.UTF-8 failed
Please install one of these locales on your system and restart your webserver.
By using:
chattr -i /var/www/clients/client1/web19/
jk_cp -j /var/www/clients/client1/web19/ /usr/lib/locale
chattr +i /var/www/clients/client1/web19/
Can anyone tell me how to resolve the two remaining errors so that the NC occ will work correctly?
thanks
Also the user's permissions are correct:
# ls -la /var/www/clients/client1/web19
total 60
drwxr-xr-x 15 root root 4096 Nov 12 15:12 .
drwxr-xr-x 9 root root 4096 Nov 12 14:50 ..
lrwxrwxrwx 1 root root 7 Nov 12 15:09 bin -> usr/bin
drwxr-xr-x 2 web19 client1 4096 Nov 12 14:50 cgi-bin
drwxr-xr-x 2 root root 4096 Nov 12 17:36 dev
drwxr-xr-x 8 root root 4096 Nov 12 15:12 etc
drwxr-xr-x 4 root root 4096 Nov 12 15:12 home
lrwxrwxrwx 1 root root 7 Nov 12 15:09 lib -> usr/lib
lrwxrwxrwx 1 root root 9 Nov 12 15:09 lib64 -> usr/lib64
drwxr-xr-x 2 root root 4096 Nov 12 19:58 log
drwx--x--- 2 web19 client1 4096 Nov 12 20:05 private
drwx------ 2 web19 client1 4096 Nov 12 15:09 .ssh
drwxr-xr-x 2 root root 4096 Nov 12 14:55 ssl
drwxrwx--- 2 web19 client1 4096 Nov 12 20:09 tmp
drwxr-xr-x 8 root root 4096 Nov 12 15:09 usr
drwxr-xr-x 4 root root 4096 Nov 12 15:12 var
drwx--x--x 14 web19 client1 4096 Nov 12 20:09 web
drwx--x--- 2 web19 client1 4096 Nov 12 14:50 webdav
I had the same problem as you and, curiously, I use the same user/client for the same service.
I resolved it in the following way (in addition to your solution about the "locale").
Go into the jail root (/var/www/clients/client1/web19). Here create the directories to contain PHP stuff:
mkdir -p etc/php/7.4/cli/conf.d
Copy the system-wide php.ini for cli into jail:
cp -a /etc/php/7.4/cli/php.ini etc/php/7.4/cli/php.ini
Hard-link every file (*.so, *.ini) present in the system-wide conf into the jail. For example:
ln /etc/php/7.4/mods-available/apcu.ini 20-apcu.ini
If you have the cache problems too (as I did), you can add a define before running occ.
I'm using the following command:
runuser -l web19 -c "cd /web; php --define apc.enable_cli=1 ./occ"
...and everything seems to work fine! :-)
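The hard-linking step can also be scripted; this is a sketch under assumptions (hardlink_tree_files is a made-up helper, and real jailkit setups would typically use jk_cp) that hard-links every regular file from a system-wide conf directory into a jail directory:

```python
import os

def hardlink_tree_files(src_dir, dst_dir):
    """Hard-link every regular file in src_dir into dst_dir (flat, no recursion)."""
    os.makedirs(dst_dir, exist_ok=True)
    linked = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        if os.path.isfile(src):
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                # Hard link: same inode, so it stays visible inside the chroot jail
                os.link(src, dst)
            linked.append(dst)
    return linked
```

Note that hard links only work within a single filesystem, so this assumes the jail lives on the same filesystem as the source directory.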

Failing to get PyCharm to work with remote interpreter on docker

When I add a remote interpreter from one of my docker-compose services, it doesn't seem to succeed and doesn't show any packages in the dialog. When I add an interpreter to the debugger it says:
python packaging tools not found.
Then if I click on "install packaging tools", this error is displayed:
ERROR: for dockeryard_pycharm_helpers_1
Cannot start service pycharm_helpers: network not found
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_redis_1 ...
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_pycharm_helpers_1
Starting dockeryard_redis_1
Starting dockeryard_worker_1 ...
Starting dockeryard_worker_1
Starting dockeryard_pycharm_helpers_1
ERROR: for dockeryard_pycharm_helpers_1 Cannot start service pycharm_helpers: network not found
ERROR: for pycharm_helpers Cannot start service pycharm_helpers: network not found
ERROR:
Note, this interpreter was already in use and I was able to connect remotely with PyCharm, but I have since added and removed a custom network on the container.
As explained in Configuring Remote Python Interpreters - "When a remote Python interpreter is added, at first the PyCharm helpers are copied to the remote host". My guess is that something went wrong since the network was updated in the docker-compose file.
From what I understand from the error message, when PyCharm starts the interpreter, it tries to use/find that network c7b0cc277c94ba5f58f6e72dcbab1ba24794e72422e839a83ea6102d08c40452.
I don't see that network listed anywhere when I run:
$ docker network inspect dockeryard_default
So PyCharm stores it somewhere, and it has not been updated with the change.
I have tried to remove interpreter (using PyCharm dialog) and add it back - same result.
How can I get rid of this network and make PyCharm able to debug again?
Thanks.
I was having a near-identical error and was able to get past it. I did two things, though I'm uncertain as to which was the actual solution:
Made sure the mappings were correct under both (a) Preferences -> Project -> Project Interpreter -> Path mappings and (b) Run -> Edit Configurations -> <Your_Configuration> -> Path mappings
Removed/deleted any containers that looked to be related to PyCharm (I believe this is more than likely what solved things).
Hope this helps. PyCharm docker-compose seems to work for some and be a real PITA for others.
One other note: I downgraded from PyCharm 2018 to 2017.3, as there are known Docker bugs in 2018.
EDIT: And it would seem a docker-compose down from CLI reintroduces the error -_-
TLDR:
The {project_name}_pycharm_helpers_{pycharm_build_number} volume has been removed or is corrupted.
To repopulate it, run:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build_number}
docker run -v {project_name}_pycharm_helpers_{pycharm_build_number}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build_number}
The pycharm_build_number can be found in the About section of your PyCharm (macOS: PyCharm > About).
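When scripting the two commands above, the volume name can be composed from the pattern described; this tiny helper is illustrative, not a PyCharm API:

```python
def helpers_volume_name(project_name, build_number):
    """Compose the helpers volume name PyCharm uses, per the pattern above."""
    return "{}_pycharm_helpers_{}".format(project_name, build_number)

# helpers_volume_name("myproject", "PY-202.7660.27")
# -> "myproject_pycharm_helpers_PY-202.7660.27"
```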
Long story
I struggled a lot with PyCharm suddenly not finding the helpers any more or any related bugs, sometimes because I was clearing my containers or volumes. For instance, running
docker rm -f `docker container ps -aq`
docker volume rm $(docker volume ls -q)
will almost surely get PyCharm into trouble.
As far as I know about how PyCharm works, there is:
a PyCharm base image named pycharm_helpers with tag corresponding to your pycharm build number (for example: PY-202.7660.27)
the first time you create docker related things, PyCharm creates volumes that get data from this image for later use in your containers. For instance, after a first attempt at running a remote docker-compose interpreter, I see the newly created myproject_pycharm_helpers_PY-202.7660.27 volume when doing docker volume ls.
when running the docker interpreter, PyCharm adds this volume into the /opt/.pycharm_helpers directory by adding at some point a -v myproject_pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers to your command. For instance, using docker-compose, you can see the addition of -f /Users/clementwalter/Library/Caches/JetBrains/PyCharm2020.2/tmp/docker-compose.override.1508.yml, and when you actually look into this file you see:
version: "3.8"
services:
  local:
    command:
      - "python"
      - "/opt/.pycharm_helpers/pydev/pydevconsole.py"
      - "--mode=server"
      - "--port=55824"
    entrypoint: ""
    environment:
      PYCHARM_MATPLOTLIB_INTERACTIVE: "true"
      PYTHONPATH: "/opt/project:/opt/.pycharm_helpers/pycharm_matplotlib_backend:/opt/.pycharm_helpers/pycharm_display:/opt/.pycharm_helpers/third_party/thriftpy:/opt/.pycharm_helpers/pydev"
      PYTHONUNBUFFERED: "1"
      PYTHONIOENCODING: "UTF-8"
      PYCHARM_MATPLOTLIB_INDEX: "0"
      PYCHARM_HOSTED: "1"
      PYCHARM_DISPLAY_PORT: "63342"
      IPYTHONENABLE: "True"
    volumes:
      - "/Users/clementwalter/Documents/myproject:/opt/project:rw"
      - "pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers"
    working_dir: "/opt/project"
volumes:
  pycharm_helpers_PY-202.7660.27: {}
You get into troubles when this volume is not correctly populated anymore.
Fortunately the docker volume documentation has a section "Populate a volume using a container" which is exactly what PyCharm does under the hood.
For the record you can check the content of the pycharm_helpers image:
$ docker run -it pycharm_helpers:PY-202.7660.27 sh
/opt/.pycharm_helpers #
You end up in the /opt/.pycharm_helpers directory and find all the helpers there:
/opt/.pycharm_helpers # ls -la
total 5568
drwxr-xr-x 21 root root 4096 Dec 17 16:38 .
drwxr-xr-x 1 root root 4096 Dec 17 11:07 ..
-rw-r--r-- 1 root root 274 Dec 17 11:07 Dockerfile
drwxr-xr-x 5 root root 4096 Dec 17 16:38 MathJax
-rw-r--r-- 1 root root 2526 Sep 16 11:14 check_all_test_suite.py
-rw-r--r-- 1 root root 3194 Sep 16 11:14 conda_packaging_tool.py
drwxr-xr-x 2 root root 4096 Dec 17 16:38 coverage_runner
drwxr-xr-x 3 root root 4096 Dec 17 16:38 coveragepy
-rw-r--r-- 1 root root 11586 Sep 16 11:14 docstring_formatter.py
drwxr-xr-x 4 root root 4096 Dec 17 16:38 epydoc
-rw-r--r-- 1 root root 519 Sep 16 11:14 extra_syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 generator3
-rw-r--r-- 1 root root 8 Sep 16 11:14 icon-robots.txt
-rw-r--r-- 1 root root 3950 Sep 16 11:14 packaging_tool.py
-rw-r--r-- 1 root root 1490666 Sep 16 11:14 pip-20.1.1-py2.py3-none-any.whl
drwxr-xr-x 2 root root 4096 Dec 17 16:38 pockets
drwxr-xr-x 3 root root 4096 Dec 17 16:38 profiler
-rw-r--r-- 1 root root 863 Sep 16 11:14 py2ipnb_converter.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py2only
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py3only
drwxr-xr-x 7 root root 4096 Dec 17 16:38 pycharm
drwxr-xr-x 4 root root 4096 Dec 17 16:38 pycharm_display
drwxr-xr-x 3 root root 4096 Dec 17 16:38 pycharm_matplotlib_backend
-rw-r--r-- 1 root root 103414 Sep 16 11:14 pycodestyle.py
drwxr-xr-x 24 root root 4096 Dec 17 16:38 pydev
drwxr-xr-x 9 root root 4096 Dec 17 16:38 python-skeletons
drwxr-xr-x 2 root root 4096 Dec 17 16:38 rest_runners
-rw-r--r-- 1 root root 583493 Sep 16 11:14 setuptools-44.1.1-py2.py3-none-any.whl
-rw-r--r-- 1 root root 29664 Sep 16 11:14 six.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 sphinxcontrib
-rw-r--r-- 1 root root 128 Sep 16 11:14 syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 third_party
drwxr-xr-x 3 root root 4096 Dec 17 16:38 tools
drwxr-xr-x 5 root root 4096 Dec 17 16:38 typeshed
-rw-r--r-- 1 root root 3354133 Sep 16 11:14 virtualenv-16.7.10-py2.py3-none-any.whl
To make these helpers available again, following the Docker documentation, you have to fix the volume. To do so:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build}
docker run -v {project_name}_pycharm_helpers_{pycharm_build}:"/opt/.pycharm_helpers" pycharm_helpers:{tag}
et voilà
If you're still seeing this in PyCharm 2020.2 then do this:
close PyCharm
try @peterc's suggestion:
docker ps -a | grep -i pycharm | awk '{print $1}' | xargs docker rm
launch PyCharm again
The option Invalidate Caches -> Clear downloaded shared indexes will also repopulate the PyCharm volumes (at least in 2021.1).

Limit GPU usage in nvidia-docker?

I am setting up an internal JupyterHub on a multi-GPU server. Jupyter access is provided through a Docker instance. I'd like to limit access for each user to no more than a single GPU. I'd appreciate any suggestions or comments. Thanks.
You can try it with nvidia-docker-compose:
version: "2"
services:
  process1:
    image: nvidia/cuda
    devices:
      - /dev/nvidia0
The problem can be solved in this way: just add the environment variable NV_GPU before nvidia-docker, as follows:
[root@bogon ~]# NV_GPU='4,5' nvidia-docker run -dit --name tf_07 tensorflow/tensorflow:latest-gpu /bin/bash
e04645c2d7ea658089435d64e72603f69859a3e7b6af64af005fb852473d6b56
[root@bogon ~]# docker attach tf_07
root@e04645c2d7ea:/notebooks#
root@e04645c2d7ea:/notebooks# ll /dev
total 4
drwxr-xr-x 5 root root 460 Dec 29 03:52 ./
drwxr-xr-x 22 root root 4096 Dec 29 03:52 ../
crw--w---- 1 root tty 136, 0 Dec 29 03:53 console
lrwxrwxrwx 1 root root 11 Dec 29 03:52 core -> /proc/kcore
lrwxrwxrwx 1 root root 13 Dec 29 03:52 fd -> /proc/self/fd/
crw-rw-rw- 1 root root 1, 7 Dec 29 03:52 full
drwxrwxrwt 2 root root 40 Dec 29 03:52 mqueue/
crw-rw-rw- 1 root root 1, 3 Dec 29 03:52 null
crw-rw-rw- 1 root root 245, 0 Dec 29 03:52 nvidia-uvm
crw-rw-rw- 1 root root 245, 1 Dec 29 03:52 nvidia-uvm-tools
crw-rw-rw- 1 root root 195, 4 Dec 29 03:52 nvidia4
crw-rw-rw- 1 root root 195, 5 Dec 29 03:52 nvidia5
crw-rw-rw- 1 root root 195, 255 Dec 29 03:52 nvidiactl
lrwxrwxrwx 1 root root 8 Dec 29 03:52 ptmx -> pts/ptmx
drwxr-xr-x 2 root root 0 Dec 29 03:52 pts/
crw-rw-rw- 1 root root 1, 8 Dec 29 03:52 random
drwxrwxrwt 2 root root 40 Dec 29 03:52 shm/
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stderr -> /proc/self/fd/2
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdin -> /proc/self/fd/0
lrwxrwxrwx 1 root root 15 Dec 29 03:52 stdout -> /proc/self/fd/1
crw-rw-rw- 1 root root 5, 0 Dec 29 03:52 tty
crw-rw-rw- 1 root root 1, 9 Dec 29 03:52 urandom
crw-rw-rw- 1 root root 1, 5 Dec 29 03:52 zero
root@e04645c2d7ea:/notebooks#
Or, read the nvidia-docker wiki on GitHub.
There are 3 options.
Docker with NVIDIA RUNTIME (version 2.0.x)
According to the official documentation:
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=2,3
nvidia-docker (version 1.0.x)
Based on a popular post:
nvidia-docker run .... -e CUDA_VISIBLE_DEVICES=0,1,2
(it works with tensorflow)
programmatically
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2"
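One caveat with the programmatic approach: the variable must be set before any CUDA-using library (e.g. TensorFlow) initializes, or it is silently ignored. A tiny wrapper (restrict_gpus is a hypothetical helper, not part of any CUDA API) makes that explicit:

```python
import os

def restrict_gpus(indices):
    """Set CUDA_VISIBLE_DEVICES; must run before any CUDA library initializes."""
    value = ",".join(str(i) for i in indices)
    os.environ["CUDA_VISIBLE_DEVICES"] = value
    return value

# restrict_gpus([0]) limits the process to the first GPU only
```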
