Invalid reference format - docker

I'm currently struggling to run a container and execute a command inside it. I'm trying to use NanoSNP (https://github.com/huangnengCSU/NanoSNP), which is available as an image on Docker Hub. After pulling the image, the following command should be sufficient to run the container:
docker run \
-v "${INPUT_DIR}":"${INPUT_DIR}" \
-v "${OUTPUT_DIR}":"${OUTPUT_DIR}" \
--gpus all \
huangnengcsu/nanosnp:v2.1-gpu \
run_caller.sh \
-b "${INPUT_DIR}/input.bam" \
-f "${INPUT_DIR}/reference.fa" \
-t "${THREADS}" \
-c "${COVERAGE}" \
-o "${OUTPUT_DIR}"
When I try to run this, I get the following error: Invalid reference format.
This didn't happen before, and when I drop the -v options the error goes away. I'm working in WSL, in VS Code, with a Python script that launches the docker command via subprocess.run. The relevant code looks like this (I only included the -v options because that's where the error occurs):
subprocess.run(["sudo", "docker", "run", \
"-v", "${INPUT_DIR}",":","${INPUT_DIR}", \
"-v", "${OUTPUT_DIR}",":","${OUTPUT_DIR}", \
If anyone knows or sees the problem, let me know.
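The problem is in how the argument list is built: each -v value has to be a single "host:container" string, but in the snippet above the path, ":" and the path are three separate list elements, so docker sees a bare ":" where it expects the image reference. Python also does not expand "${INPUT_DIR}" the way a shell does. A corrected sketch (the directory paths and parameter values below are hypothetical placeholders; substitute your own):

```python
import shlex

# Hypothetical example values; substitute your real paths and settings.
input_dir = "/home/user/nanosnp/input"
output_dir = "/home/user/nanosnp/output"
threads = "8"
coverage = "30"

# Each "-v" value must be ONE "host:container" string. Splitting the path
# and the ":" into separate list elements is what triggers
# "invalid reference format".
cmd = [
    "sudo", "docker", "run",
    "-v", f"{input_dir}:{input_dir}",
    "-v", f"{output_dir}:{output_dir}",
    "--gpus", "all",
    "huangnengcsu/nanosnp:v2.1-gpu",
    "run_caller.sh",
    "-b", f"{input_dir}/input.bam",
    "-f", f"{input_dir}/reference.fa",
    "-t", threads,
    "-c", coverage,
    "-o", output_dir,
]
print(shlex.join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run it
```

Building the list first and printing it with shlex.join is a handy way to eyeball exactly what docker will receive before running it.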

Related

"rootless" docker gets permission denied, but account running docker does not - why?

I am running docker "rootless" according to this guide: https://docs.docker.com/engine/security/rootless/
The user which actually runs docker is svc_test.
When I try to start a docker container that has directory mounts which don't exist, the docker daemon (running as the svc_test user) attempts to mkdir these directories, but fails with
docker: Error response from daemon: error while creating mount source path '/dir_path/dir_name': mkdir /dir_path/dir_name: permission denied.
When I (svc_test) then attempt mkdir /dir_path/dir_name myself, it succeeds without any issues.
What is going on here, and why does this happen?
Clearly I am missing something, but I can't trace what that is exactly.
Update 1:
This is the specific docker cmd I use to run the container:
docker run -d --restart unless-stopped \
--name questdb \
-e QDB_METRICS_ENABLED=TRUE \
--network="host" \
-v /my_mounted_volume/questdb:/questdb \
-v /my_mounted_volume/questdb/public:/questdb/public \
-v /my_mounted_volume/questdb/conf:/questdb/conf \
-v /my_mounted_volume/questdb/db:/questdb/db \
-v /my_mounted_volume/questdb/log:/questdb/log \
questdb/questdb:6.5.2 /usr/bin/env QDB_PACKAGE=docker /app/bin/java \
-m io.questdb/io.questdb.ServerMain \
-d /questdb \
-f
For clarity: my final goal is to be able to run the container in question as the same user that runs my docker daemon (the svc_test user), which is how I stumbled on this problem.
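A plausible explanation (this is a hedged reading of how rootless mode works, not a confirmed diagnosis): the rootless daemon runs inside a user namespace, so when it creates a missing mount source it acts as a subordinate UID mapped from svc_test rather than as svc_test itself, and that mapped UID has no write permission on /dir_path. A common workaround is to pre-create every mount source yourself before docker run, so the daemon never has to mkdir anything. A sketch (using /tmp/... as a stand-in for the real /my_mounted_volume/questdb so it is runnable anywhere):

```python
from pathlib import Path

# Pre-create every mount source path as svc_test before starting the
# container. "/tmp/my_mounted_volume" stands in for the real host path.
base = Path("/tmp/my_mounted_volume/questdb")
for sub in ("", "public", "conf", "db", "log"):
    (base / sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in base.iterdir()))
```

With the directories already present, the daemon only has to bind-mount them, which works with the permissions svc_test already has.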

Using sickcodes/Docker-OSX, how to boot directly into an OS X shell with no display (Xvfb) [HEADLESS], using a custom image?

With the following command, I tried to boot directly into an OS X shell with no display (Xvfb) [HEADLESS], using the sickcodes/docker-osx:naked docker image with a custom Mojave image.
# run your own image headless + SSH
docker run -it \
--device /dev/kvm \
-p 50922:10022 \
-v "${PWD}/mac_hdd_ng.img:/image" \
sickcodes/docker-osx:naked
However it ends up with the following error message:
nohup: appending output to 'nohup.out'
nohup: failed to run command 'Xvfb': No such file or directory
Details to reproduce:
The version of the sickcodes/docker-osx:naked docker image is: https://hub.docker.com/layers/docker-osx/sickcodes/docker-osx/naked/images/sha256-ffff65c24b7a1588dd665f07f52a48ef5efb9941c2e2fa07573e66524029ab08
The custom image - mac_hdd_ng.img is generated by running:
docker run -it \
--device /dev/kvm \
-p 50922:10022 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e "DISPLAY=${DISPLAY:-:0.0}" \
sickcodes/docker-osx:mojave
sudo find /var/lib/docker -size +10G | grep mac_hdd_ng.img # https://github.com/sickcodes/Docker-OSX#container-creation-examples
Building the image from https://github.com/sickcodes/Docker-OSX/blob/master/Dockerfile.naked with the following edit solved the issue.
Replace Line 63 in Dockerfile.naked
RUN pacman -Syu xorg-server-xvfb wget xterm xorg-xhost xorg-xrandr sshpass --noconfirm \
With
RUN pacman -Sy xorg-server-xvfb wget xterm xorg-xhost xorg-xrandr sshpass --noconfirm \
References:
https://github.com/sickcodes/Docker-OSX/issues/515
https://github.com/sickcodes/Docker-OSX/issues/498

Docker not seeing a path

My docker script is this:
docker run --interactive --tty --rm \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/backups:/backups \
neo4j/neo4j-admin:4.4.9 \
neo4j-admin load --database=neo4j --from=/backups/neo4j.dump
When I run it, I'm getting:
docker: invalid reference format.
See 'docker run --help'.
zsh: no such file or directory: --volume=/Users/ironside/neo4j/backups:/backups
zsh: no such file or directory: neo4j/neo4j-admin:4.4.9
But if I cd $HOME/neo4j/backups and run pwd, I get /Users/ironside/neo4j/backups, so the directory exists. Same with the .dump file; it's there.
The /data mount works, which is very confusing. I'm trying to follow this part of the docs:
https://neo4j.com/docs/operations-manual/current/docker/maintenance/#docker-neo4j-dump
docker run --interactive --tty --rm \
--volume=$HOME/neo4j/data:/data \
--volume=$HOME/neo4j/backups:/backups \
neo4j/neo4j-admin:4.4.9 \
neo4j-admin load --database=neo4j --from=/backups/<dump-name>.dump
What am I missing here?
I'm using a MacBook Pro (16-inch, 2021) with an Apple M1 Pro chip.
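The zsh messages ("no such file or directory: --volume=..." and "... neo4j/neo4j-admin:4.4.9") suggest the continuation lines ran as separate commands. One common cause, and a guess worth checking here, is a stray space after a trailing backslash: zsh then treats "\ " as an escaped space, the command ends on that line, and docker receives too few arguments, which produces "invalid reference format". A quick way to scan a script for that (the embedded script is a reconstruction of the command above with a deliberately bad second line):

```python
# A trailing "\ " (backslash followed by a space) on line 2 reproduces
# the symptom: everything after it runs as separate commands.
script = (
    "docker run --interactive --tty --rm \\\n"
    "--volume=$HOME/neo4j/data:/data \\ \n"      # stray space after the backslash
    "--volume=$HOME/neo4j/backups:/backups \\\n"
    "neo4j/neo4j-admin:4.4.9 \\\n"
    "neo4j-admin load --database=neo4j --from=/backups/neo4j.dump"
)

bad_lines = [
    i + 1
    for i, line in enumerate(script.splitlines())
    if line != line.rstrip() and line.rstrip().endswith("\\")
]
print(bad_lines)  # line numbers with whitespace after the trailing backslash
```

Retyping the backslashes by hand (or joining the command onto one line) is the usual fix; invisible trailing whitespace often sneaks in when copying multi-line commands from docs.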

gdb debug in docker failed

Environment:
host: CentOS
docker: Ubuntu 16, nvidia-docker
program: C++ websocket service
Description:
When I use gdb inside docker, I can't use breakpoints; it just says: warning: error disabling address space randomization: operation not permitted. All the answers I found for this say to add --cap-add=SYS_PTRACE --security-opt seccomp=unconfined to the docker command, so I did. Here is my launch script:
#!/bin/sh
SCRIPT_DIR=$(cd $(dirname "${BASH_SOURCE[0]}") && pwd)
PROJECT_ROOT="$( cd "${SCRIPT_DIR}/.." && pwd )"
echo "PROJECT_ROOT = ${PROJECT_ROOT}"
run_type=$1
docker_name=$2
sudo docker run \
--name=${docker_name} \
--privileged \
--network host \
-it --rm \
--cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
-v ${PROJECT_ROOT}/..:/home \
-v /ssd3:/ssd3 \
xxxx/xx/xxxx:xxxx \
bash
But when I restart the container and run gdb, the program is always killed, like below:
(gdb) r -c conf/a.json -p 8075
Starting program: /home/Service/bin/Service --args -c conf/a.json -p 8075
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Killed
I don't know what's wrong. Does anyone have any ideas?
Try this
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined
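If ptrace is already allowed and the program still dies with a bare "Killed", the kernel most likely terminated it from outside gdb, and the usual suspect is the OOM killer when the container or host runs out of memory (dmesg on the host typically shows an oom-kill record). A small sketch for reading the container's memory limit from inside; the cgroup file locations are the common defaults, but an assumption about your setup:

```python
from pathlib import Path

def cgroup_mem_limit(root="/sys/fs/cgroup"):
    """Return the memory limit in bytes, or None if unlimited or not found.

    Checks the cgroup v2 file (memory.max) first, then the v1 location
    (memory/memory.limit_in_bytes).
    """
    for rel in ("memory.max", "memory/memory.limit_in_bytes"):
        f = Path(root) / rel
        if f.is_file():
            raw = f.read_text().strip()
            return None if raw == "max" else int(raw)
    return None

print(cgroup_mem_limit())  # None means no limit found at the default paths
```

If the limit is smaller than the program's working set, raising it (e.g. docker run -m) or removing it is worth trying before digging further into gdb itself.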

docker --volume format for Windows

I'm trying to take a shell script we use at work to set up our development environments and re-purpose it to work on my Windows environment via Git Bash.
The way the containers are brought up in the shell script are as follows:
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=$PWD/var/www:/var/www \
--volume=$PWD/var/log/apache2:/var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
When I run that as-is, it returns the following error message:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: invalid bind mount spec "/C/Users/username/var/docker/environments/development/scripts/var/log/apache2;C:\\Program Files\\Git\\var\\log\\apache2": invalid volume specification: '/C/Users/username/var/docker/environments/development/scripts/var/log/apache2;C:\Program Files\Git\var\log\apache2': invalid mount config for type "bind": invalid mount path: '\Program Files\Git\var\log\apache2' mount path must be absolute. See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I did a bunch of googling and documentation reading, but I'm a little overwhelmed by Docker, and I think I got it wrong. I tried setting up the container as follows:
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=/c/users/username/var/www:/var/www \
--volume=/c/users/username/var/log/apache2:/var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
It still errors out with a similar message. If I remove the :/var/www part it comes up, but it doesn't map the directories properly; that is, it doesn't know that C:\users\username\var\www corresponds to /var/www.
I know I'm missing something painfully dumb here, but when I look at the documentation I just glaze over. Any help would be greatly appreciated.
For people using Docker on Windows 10, an extra / has to be included in the path:
docker run -it -v //c/Users/path/on/host:/app/path/in/docker/container command
(notice the extra / before c)
If you are using Git Bash and pwd, add an extra / there as well:
docker run -p 3000:3000 -v /app/node_modules -v /$(pwd):/app 09b10e9fda85
(notice the / before $(pwd))
Well, I answered my own question moments after I posted it.
This is the correct format.
docker run \
--detach \
--name=server_container \
--publish 80:80 \
--volume=//c/users/username/var/www://var/www \
--volume=//c/users/username/var/log/apache2://var/log/apache2 \
--link=mysql_container:mysql_container \
--link=redis_container:redis_container \
web-server
Should have kept googling a few minutes longer.
If you want to make the path relative, you can use pwd and variables. For example:
CURRENT_DIR=$(pwd)
docker run -v /"$CURRENT_DIR"/../../test/:/test alpine ls test
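The doubled leading slash works because Git Bash (MSYS) rewrites arguments that look like POSIX paths into Windows paths, and a // prefix suppresses that rewrite. If you generate the flags from a script, the mapping can be sketched like this (the helper name is made up for illustration):

```python
import ntpath

def to_git_bash_mount(win_path, container_path):
    """Turn a Windows path into a Git Bash volume spec.

    C:\\Users\\me\\www -> //c/Users/me/www:/var/www -- the doubled leading
    slash keeps Git Bash from mangling the host path.
    """
    drive, rest = ntpath.splitdrive(win_path)
    posix = "//" + drive[0].lower() + rest.replace("\\", "/")
    return posix + ":" + container_path

print(to_git_bash_mount(r"C:\Users\username\var\www", "/var/www"))
# //c/Users/username/var/www:/var/www
```

Another commonly cited workaround is exporting MSYS_NO_PATHCONV=1 before running docker, which disables Git Bash's path conversion entirely.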