I am new to Docker and I've been struggling with the following:
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub -v DataVolume5:/src --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I am trying to get data from the ui-tests-runner${buildProperties} container's /src directory into DataVolume5.
I get 0 files when I list the contents of datavolume5.
However, if I do the same thing with chrome-node${buildProperties} and /home, I can see /seluser when I list the contents of datavolume5, which is expected.
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm -v DataVolume5:/seluser --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I tried numerous things that I found online, and I checked permissions, which seem fine. The only difference I can think of is that the ui-tests-runner${buildProperties} container is hosting a repository. I don't know what else to try; I have been struggling with this for a few days now.
This piece of code was taken from the pipeline section of the Jenkinsfile.
You have a race condition between these two commands:
sh "docker run -d ... -v DataVolume5:/src ... ui-tests-runner"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
The first command, with the -d option, does not wait; it runs the container in the background. The second command then runs while your ui-tests-runner container is still starting up, and it shows the folder before your tests have run.
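One way to avoid the race (a sketch, assuming the test run exits on its own once the tests finish) is to wait for the tests container before listing the volume:
sh "docker run -d ... -v DataVolume5:/src --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker wait ui-tests-runner${buildProperties}"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"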
Named volumes are also initialized, on first use, with the image contents at the mount location. So when you use a different path that has content inside your image at that location, you'll get files in the volume.
Once that initialization step is done and the volume is no longer empty, you'll only see files that are written to the volume by a process inside a container. You won't get changes from the image filesystem as images are redeployed, since that path in the container is replaced by the contents of the persistent volume.
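To see that initialization behavior in isolation (a throwaway example using the alpine image, not your setup):
docker volume create demo-vol
docker run --rm -v demo-vol:/etc alpine true        # first use: alpine's /etc is copied into the empty volume
docker run --rm -v demo-vol:/data alpine ls /data   # lists the files that were copied in
docker volume rm demo-vol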
I presume you're creating DataVolume5 as a named volume, using docker volume create. In that case you don't need to specify the absolute path; docker volume inspect DataVolume5 will give you the path.
Try using a specific host directory as the shared volume instead:
docker run -d -v /absolute/path/on/host:/src ui-tests-runner
First, check that DataVolume5 contains something after running the ui-tests-runner command.
In the command docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5,
give the absolute path of DataVolume5, e.g.:
docker run --rm -v /abs-path-to-directory/DataVolume5:/datavolume5 ubuntu ls -l datavolume5
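If DataVolume5 is actually a named volume rather than a host directory, you can look up its path on the host instead of guessing it (the mount point below is the usual default location on Linux):
docker volume inspect --format '{{ .Mountpoint }}' DataVolume5
# typically prints something like /var/lib/docker/volumes/DataVolume5/_data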
Related
I have a Docker image named pfa-image (it contains a fairly basic Express-based website), a running MongoDB container named pfa-mongo, and a Docker volume named image-volume. When I run the following sequence of commands:
host$ docker run -d --name pfa-container -v image-volume:/images \
--link pfa-mongo:mongodb -p 5000:5000 pfa-image
host$ docker exec -it pfa-container /bin/bash
container:/pfa-site# cd images
container:/pfa-site/images# touch test.txt
container:/pfa-site/images# exit
host$ docker rm -f pfa-container
host$ docker run -d --name pfa-container -v image-volume:/images \
--link pfa-mongo:mongodb -p 5000:5000 pfa-image
host$ docker exec -it pfa-container /bin/bash
container:/pfa-site# cd images
container:/pfa-site/images# ls
...test.txt is missing. What am I overlooking here? I am quite new to Docker and somewhat new to Linux.
Thank you!
I have tried using both bind mounts and volumes, with the same result.
Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container, you also need to allocate a pseudo-TTY if you want to see the output.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.
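As an aside, if you specifically want to keep the exec detached and still see the output via docker logs, one workaround (a sketch, not part of the steps above) is to redirect the script's output to the container's main process:
# /proc/1/fd/1 is PID 1's stdout, which is what docker logs captures
docker exec -d docker_verify sh -c "sh /script.sh > /proc/1/fd/1 2>&1"
docker logs -f docker_verify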
I'm reading the Docker documentation, and I've seen this command:
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app,readonly \
nginx:latest
As far as I know, the -d or --detach switch runs the container outside of the current terminal emulator and returns control of the terminal to the user, while --tty (-t) and --interactive (-i) do the complete opposite. Why would anyone want to use them together in one command?
For that specific command, it doesn't make sense, since nginx does not have an interactive component. But in general, it allows you to later attach to the container with docker attach. E.g.
$ docker run --name test-no-input -d busybox /bin/sh
92c0447e0c19de090847b7a36657d3713e3795b72e413576e25ab2ce4074d64b
$ docker attach test-no-input
You cannot attach to a stopped container, start it first
$ docker run --name test-input -dit busybox /bin/sh
57e4adcc14878261f64d10eb7839b35d5fa65c841bbcb3cd81b6bf5b8fe9d184
$ docker attach test-input
/ # echo hello from the container
hello from the container
/ # exit
The first container stopped since it was running a shell, and there was no input on stdin (no -i). A shell exits when it finishes reading input (e.g. the end of a shell script).
I'm following the official Docker guide from here to back up a Docker volume. I'm also aware of this SO question, but I'm still running into errors. Running the following command:
docker run --rm --volumes-from dbstore -v $(pwd):/backup ny_db_1 tar cvf /backup/backup.tar /dbdata
No matter what image name or container name or container id I put, I get the following error:
Unable to find image 'ny_db_1:latest' locally
The volume I want to backup:
$ docker volume ls
DRIVER VOLUME NAME
local ny_postgres_data
My containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39e71e660eda postgres:10.1-alpine "docker-entrypoint.s…" 4 days ago Up 23 minutes 0.0.0.0:5434->5433/tcp ny_db_1
How do I back up my volume?
Update:
I tried the following but ran into a new error:
$ docker run --rm --volumes-from 39e71e660eda -v $(pwd):/backup postgres:10.1-alpine tar:local cvf /backup/backup.tar /dbdata
/usr/local/bin/docker-entrypoint.sh: line 145: exec: tar:local: not found
The docker run syntax is docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...] - ny_db_1 is the name of your container, so docker will attempt to use the IMAGE "ny_db_1", which does not exist, hence the error: "Unable to find image 'ny_db_1:latest' locally" (latest is the default [:TAG] if none is specified).
--volumes-from will mount volumes from the specified container(s) into a new container spawned from IMAGE[:TAG], for example: docker run --rm --volumes-from db -v $(pwd):/backup ubuntu:18.04 tar czvf /backup/backup.tar /dbdata
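Applied to the container shown above (a sketch, assuming the ny_postgres_data volume is mounted at PostgreSQL's default data directory /var/lib/postgresql/data inside ny_db_1):
docker run --rm --volumes-from ny_db_1 -v $(pwd):/backup ubuntu:18.04 \
    tar czvf /backup/backup.tar /var/lib/postgresql/data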
Note: if you're backing up a PostgreSQL database then imho you'd be better off using the appropriate tools to backup and restore the database for example:
Backup using pg_dumpall:
docker run --rm \
--name db-backup \
--entrypoint pg_dumpall \
--volume ${PWD}/backup:/backup \
--volumes-from db \
postgres:9 --host /var/run/postgresql --username postgres --clean --oids --file /backup/db.dump
Restore using psql:
docker run --rm -it \
-v ${PWD}/backup:/restore \
--name restore \
postgres:10.1-alpine
docker exec restore psql \
--host /var/run/postgresql \
--username postgres \
--file /restore/db.dump postgres
docker rename restore NEW_NAME
Try this command (note that it needs an image name, e.g. ubuntu, not the ny_db_1 container name):
docker run -it --rm -v ny_postgres_data:/volume -v /tmp:/backup ubuntu \
tar -cjf /backup/ny_postgres_data.tar.bz2 -C /volume ./
I would like to create a Makefile that runs a Docker container, automatically mounts the current folder, and cds into the shared directory inside the container.
I currently have the following, which runs the Docker image and mounts the directory with no issue, but I am unsure how to get it to change directory.
run:
docker run --rm -it -v $(PWD):/projects dockerImage bash
I've seen some examples where you can append -c "cd /projects" at the end, so that it becomes:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects"
However, bash exits immediately after running that command. I've also seen an example where you append && at the end, like this:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects &&".
Unfortunately, the console just hangs.
You can specify the working directory in your docker run command with the -w option. So you can do something like this:
docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash
You can find this option in the official docs here https://docs.docker.com/engine/reference/run/.
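Putting that into the Makefile from the question (a sketch, assuming the image really is called dockerImage):
run:
	docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash
Running make run then drops you into a shell that is already in /projects.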