I have a Docker container from which I am trying to run a PyQt app. Everything works well except that a chunk of the GUI fails to render. The Docker logs show:
libGL error: failed to load driver: swrast
X Error: GLXBadContext 169
Extension: 154 (Uknown extension)
Minor opcode: 6 (Unknown request)
Resource id: 0x6400003
X Error: BadValue (integer parameter out of range for operation) 2
Extension: 154 (Uknown extension)
Minor opcode: 3 (Unknown request)
Resource id: 0x0
...
QGLContext::makeCurrent(): Failed.
In my Dockerfile, I tried installing pretty much all the packages I could find that might be related, including mesa-utils.
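Something along these lines (a representative excerpt rather than my exact Dockerfile, with the usual Mesa/GLX packages for a Debian/Ubuntu base):
# Representative excerpt only -- not the exact Dockerfile; package names are the
# typical Mesa/GLX suspects for swrast/libGL errors on Debian/Ubuntu bases.
RUN apt-get update && apt-get install -y --no-install-recommends \
        mesa-utils libgl1-mesa-glx libgl1-mesa-dri libglu1-mesa \
    && rm -rf /var/lib/apt/lists/*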
In terms of the docker-compose file, here's what it looks like:
version: '2'
services:
  gui:
    build: .
    volumes:
      - .:/usr/src
      - /tmp/.X11-unix:/tmp/.X11-unix
    command: /bin/bash -c "python start.py"
    environment:
      - DISPLAY=unix$DISPLAY
      - QT_X11_NO_MITSHM=1
    devices:
      - "/dev/snd:/dev/snd"
      - "/dev/dri:/dev/dri"
    privileged: true
Any ideas what I might be missing?
Figured it out. I had to build the GUI image with hardware-accelerated OpenGL support. There's a repo (https://github.com/gklingler/docker3d) that contains Docker images with support for NVIDIA and other graphics drivers.
The other catch was that it didn't work for me unless the host and the container had exactly the same driver version. To resolve this, you can run the following shell script (on Linux) to fetch the NVIDIA driver installer that matches the host's driver:
#!/bin/bash
# Extract the host's NVIDIA driver version (the last field of glxinfo's
# "OpenGL version string" line), then download the matching installer.
version="$(glxinfo | grep "OpenGL version string" | rev | cut -d" " -f1 | rev)"
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/"$version"/NVIDIA-Linux-x86_64-"$version".run
mv NVIDIA-Linux-x86_64-"$version".run NVIDIA-DRIVER.run
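From there, the downloaded installer can be baked into the image so the container's user-space driver matches the host. Roughly like this (a sketch only, assuming an NVIDIA GPU and leaving the kernel module to the host; verify the installer options against its own documentation before relying on them):
# Hypothetical Dockerfile snippet -- a sketch, not the docker3d repo's exact recipe.
COPY NVIDIA-DRIVER.run /tmp/NVIDIA-DRIVER.run
# Install only the user-space libraries; the kernel module comes from the host.
RUN sh /tmp/NVIDIA-DRIVER.run -s --no-kernel-module && rm /tmp/NVIDIA-DRIVER.run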
After pulling the latest image, this issue occurs. On the following version everything works fine:
memsql/cluster-in-a-box:centos-7.3.12-2d93725f98-3.2.11-1.11.7
$ docker-compose up memsql
Recreating platform-js_memsql_1 ... done
Attaching to platform-js_memsql_1
memsql_1 | 2021-07-26 05:24:41.431505 Starting Cluster
memsql_1 | Latest errors from MemSQL tracelog:
memsql_1 | 13651863 2021-07-26 05:24:55.333 FATAL: Thread 115111: jumpToUpgradeStep: This node is not managed by a supported tool. Please use a toolbox version at least as new as 1.11.3.
memsql_1 | : Failed to connect to MemSQL: process exited: exit status 1
memsql_1 | Traceback (most recent call last):
memsql_1 | File "/startup", line 122, in <module>
memsql_1 | start_cluster()
memsql_1 | File "/startup", line 86, in start_cluster
memsql_1 | ctl("start-node", "--all")
memsql_1 | File "/startup", line 18, in ctl
memsql_1 | subprocess.check_output(["memsqlctl", "-yj"] + list(args)))
memsql_1 | File "/usr/lib64/python2.7/subprocess.py", line 575, in check_output
memsql_1 | raise CalledProcessError(retcode, cmd, output=output)
memsql_1 | subprocess.CalledProcessError: Command '['memsqlctl', '-yj', 'start-node', '--all']' returned non-zero exit status 1
This is my docker-compose setup using memsql/cluster-in-a-box:
memsql:
  image: memsql/cluster-in-a-box
  volumes:
    - "./init.sql"
  ports:
    - "3307:3306"
    - "8080:8080"
  environment:
    START_AFTER_INIT: Y
    ROOT_PASSWORD: 'root'
    LICENSE_KEY: *************************
OS: macOS Big Sur v11.4
Docker: v20.10.7
I also tried, as suggested by Volodymyr Tkachuk, to run it directly with docker, and it doesn't work either:
docker run -i --init --name memsql -e LICENSE_KEY=$LICENSE -e ROOT_PASSWORD=root -p 3306:3306 -p 8080:8080 memsql/cluster-in-a-box:latest
docker start memsql
Regarding 'Please use a toolbox version at least as new as 1.11.3.': isn't the toolbox part of the image?
Is this issue related to third-party dependencies, or is it a container issue?
Unfortunately, the recent release of SingleStore 7.5 broke the upgrade path for this Docker image. We added an upgrade step to the release which requires running sdb-upgrade. We will be fixing this, but in the meantime you have two choices:
If you don't care about the data in this image (or you can recreate the data), run docker-compose up -V to start 7.5 with an empty data directory.
If you do care about the data in this image, modify the entrypoint to run sdb-upgrade, which should upgrade the data volume; then you can run the container as normal (see the sketch after this list). You should test this process before running it on your actual image, since the upgrade is potentially destructive.
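A minimal sketch of such an entrypoint override in docker-compose, assuming the image's stock entrypoint is the /startup script seen in the traceback above (this is an illustration only, not an officially supported recipe; check the sdb-upgrade documentation before relying on it):
memsql:
  image: memsql/cluster-in-a-box
  # Hypothetical override: run the upgrade once, then hand control back to the normal startup script.
  entrypoint: ["/bin/bash", "-c", "sdb-upgrade && exec /startup"]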
Sorry that you ran into this, we will fix the bug soon.
On Mac, /a/b has the permissions below:
$ ls -l /a/b
total 0
drwxrwxrwx 2 root wheel 64 13 Jan 08:50 b
$ whoami
user1
$
Below is the docker-compose file that mounts /a/b into the Docker container:
version: '2'
services:
  someapp:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_VERSION: ${DOCKER_VERSION}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - /a/b:/var/some_mount
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
On running docker-compose up -d someapp, I see the error below:
ERROR: for docker-folder_someapp_1 Cannot start service someapp: b'Mounts denied: \r\nThe path /a/b\r\n is not shared from OS X and is not known to Docker.\r\nYou can configure shared paths from Docker -> Preferences... -> File Sharing.\r\nSee https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.\r\n.'
ERROR: for someapp Cannot start service someapp: b'Mounts denied: \r\nThe path /a/b\r\nis not shared from OS X and is not known to Docker.\r\nYou can configure shared paths from Docker -> Preferences... -> File Sharing.\r\nSee https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.\r\n.'
ERROR: Encountered errors while bringing up the project.
Following the instructions, when I add /a/b to the existing list using the File Sharing option, I get another error popup:
The export path /Users/user1/Documents/:a/:a:b overlaps with the export /Users
Another observation is that installing Docker on macOS uses VMWare to run Docker, unlike Ubuntu:
$ ps -eaf | grep docker
0 11100 1 0 9:02am ?? 0:00.07 /Library/PrivilegedHelperTools/com.docker.vmnetd
1873530912 11108 11038 0 9:02am ?? 0:01.45 /Applications/Docker.app/Contents/MacOS/com.docker.supervisor -watchdog fd:0
I do not see such mount-denied issues when running the Docker daemon on Ubuntu.
1) How do I mount a Docker host path (/a/b) into the Docker container (/var/some_mount) on macOS?
2) Is explicit file sharing needed from the Docker host because, when Docker is installed on macOS, the Docker host runs in a VM while the Docker client runs on macOS?
I'm running a fresh install of Ubuntu 18.04. I downgraded my NVIDIA driver to 390.129 (even though the instructions only mention that when installing Drake with CUDA support, which I'm not doing).
I've installed Docker and built Drake using the instructions here (not with NVIDIA support). docker build --file setup/ubuntu/docker/bionic/Dockerfile --tag drake . worked and docker run --interactive --tty drake bash starts without any issues.
When I try to launch Drake with the GUI using xhost +local:root; docker run --env=DISPLAY --env=QT_X11_NO_MITSHM=1 --interactive --ipc=host --privileged --tty --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw drake; xhost -local:root;, I get this error:
non-network local connections being added to access control list
+ [[ 0 -eq 0 ]]
+ bazel build //tools:drake_visualizer //examples/acrobot:run_passive
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Analyzed 2 targets (100 packages loaded, 20684 targets configured).
INFO: Found 2 targets...
INFO: Elapsed time: 169.872s, Critical Path: 59.75s
INFO: 536 processes: 536 linux-sandbox.
INFO: Build completed successfully, 799 total actions
+ sleep 2
+ ./bazel-bin/tools/drake_visualizer
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Drake Scripts:
Specified: --use_builtin_scripts=all
Available: --use_builtin_scripts=frame,hydroelastic_contact,image,point_pair_contact,time
Hydroelastic Contact Visualizer subscriber added.
DrakeLcmImageViewer: Defer setup until 'DRAKE_RGBD_CAMERA_IMAGES' is received
Contact Visualizer subscriber added.
QOpenGLWindow::beginPaint: Failed to create context
QOpenGLWindow::beginPaint: Failed to make context current
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 741
vtkGenericOpenGLRenderWindow (0x55a0b9a5ac60): GLEW could not be initialized: Missing GL version
QOpenGLFunctions created with non-current context
./setup/ubuntu/docker/entrypoint.sh: line 15: 3175 Segmentation fault (core dumped) ./bazel-bin/tools/drake_visualizer
+ bazel run //examples/acrobot:run_passive
INFO: Analyzed target //examples/acrobot:run_passive (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //examples/acrobot:run_passive up-to-date:
bazel-bin/examples/acrobot/run_passive
INFO: Elapsed time: 0.444s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
non-network local connections being removed from access control list
Any idea what the problem is? Do I have to install my NVIDIA drivers in the container as well? Thanks.
I can confirm the same behavior on my system:
$ docker pull robotlocomotion/drake:latest
latest: Pulling from robotlocomotion/drake
Digest: sha256:17aa147cc215cb91326facae696720de15fdfe439a22a4ecf1bf79ca524a7d63
Status: Image is up to date for robotlocomotion/drake:latest
docker.io/robotlocomotion/drake:latest
$ (
xhost +local:root;
docker run \
--env=DISPLAY \
--env=QT_X11_NO_MITSHM=1 \
--interactive \
--privileged \
--tty \
--volume=/tmp/.X11-unix:/tmp/.X11-unix:rw \
--rm \
robotlocomotion/drake:latest \
/opt/drake/bin/drake-visualizer
xhost -local:root
)
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
QOpenGLWindow::beginPaint: Failed to create context
QOpenGLWindow::beginPaint: Failed to make context current
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 741
vtkGenericOpenGLRenderWindow (0x55b45ac8b730): GLEW could not be initialized: Missing GL version
QOpenGLFunctions created with non-current context
Will post a Drake issue and update this post with it.
Thanks for reporting this!
EDIT: Drake issue: https://github.com/RobotLocomotion/drake/issues/12483
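In the meantime, one workaround sometimes suggested for this class of swrast/GLEW error on NVIDIA hosts is to run the container through the NVIDIA container runtime so the host's GL driver is visible inside. A sketch under that assumption (it requires nvidia-container-toolkit on the host and is not taken from the Drake issue):
# Hypothetical workaround -- assumes an NVIDIA GPU and nvidia-container-toolkit on the host.
xhost +local:root
docker run \
  --gpus all \
  --env=NVIDIA_DRIVER_CAPABILITIES=all \
  --env=DISPLAY \
  --env=QT_X11_NO_MITSHM=1 \
  --volume=/tmp/.X11-unix:/tmp/.X11-unix:rw \
  --rm --interactive --tty \
  robotlocomotion/drake:latest \
  /opt/drake/bin/drake-visualizer
xhost -local:root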
I have the following docker-compose.yml file:
version: "3"
services:
  dbs-poa-loc001d:
    image: percona
    volumes:
      - ./mysql_backup:/var/lib/mysql
      - ./create_databases:/docker-entrypoint-initdb.d
    hostname: "dbs-poa-loc001d"
    container_name: dbs-poa-loc001d
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "3306:3306"
    networks:
      - azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on my macOS computer at my job, but on my home computer (running Ubuntu 16.04) it does. I did notice that the mysql_backup folder created on the host to hold the volume data is owned by user AND group root. Can anybody tell me what is going on, and how do I fix this? Already tried without success:
Running docker-compose commands using sudo
Manually changing the owner of the folder to my actual (low-privileged) user.
My current setup and installed versions are:
Ubuntu 16.04
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad0
docker-compose was installed using sudo pip install docker-compose
Can you try to set the ownership of mysql_backup to 1001:0?
Something like sudo chown -R 1001:0 ./mysql_backup
Or, as an alternative (but only if the folder is empty), sudo chmod 777 ./mysql_backup
According to the Percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
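If you want to double-check which uid the image actually uses before chowning, something like the line below should work (a quick sketch; the exact output depends on the image version, and it assumes the image's entrypoint passes arbitrary commands through):
# Prints the uid/gid of the mysql user baked into the image (expected: uid=1001).
docker run --rm percona id mysql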
I've tried to live-migrate a wildfly container to another host as described here. The example with the np container works well. When I replace the example with a simple jboss/wildfly container, I receive this error when criu tries to restore the container on the other host:
Error response from daemon: Cannot restore container <CONTAINER-ID>: criu failed: type NOTIFY errno 0
Error: failed to restore one or more containers
Because I didn't find a solution to this error, I compiled the Linux kernel as described on the CRIU website and here.
After that sudo criu check prints:
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:711): The current kernel doesn't support packet_diag
Warn (criu/libnetlink.c:54): ERROR -2 reported by netlink
Warn (criu/sockets.c:721): The current kernel doesn't support netlink_diag
Info prctl: PR_SET_MM_MAP_SIZE is not supported
Looks good.
criu --version
Version: 2.11
docker --version
Docker version 1.6.2, build 7c8fca2
Checkpoint/restore of an example shell script worked very well. But when I want to checkpoint a container
docker run -d --name looper busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
with
criu dump -t $PID --images-dir /tmp/looper
I receive this output:
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/sockets.c:132): Diag module missing (-2)
Error (criu/mount.c:701): mnt: 87:./etc/hosts doesn't have a proper root mount
Error (criu/cr-dump.c:1641): Dumping FAILED.
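(For reference, the Docker-integrated checkpoint flow wraps criu and handles the container's bind mounts itself, which may help with errors like the /etc/hosts root-mount one above; it is experimental and needs a much newer Docker than the 1.6.2 shown here, so this is only a sketch of an alternative, not a verified fix:)
# Hypothetical: requires Docker with experimental checkpoint/restore support enabled.
docker checkpoint create looper checkpoint1
docker start --checkpoint checkpoint1 looper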
I can't find any solutions for these errors. Is there a known way to live-migrate a wildfly container?
Thanks in advance