npm script for docker

I am trying to write an npm script for my project. I am using Docker, and I want to simplify the application start commands. I don't want to use docker compose at this time.
I am trying to run the below command in npm script.
"scripts": {
  "app-start": "docker system prune && docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app $(docker build -f Dockerfile.dev .)"
}
Is there a way to pass the image ID produced by docker build to the docker run command, something like below:
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app $(docker build -f Dockerfile.dev .)
Any improvements or suggestions on the above commands are also welcome.

Use the -q / --quiet switch:
it suppresses the build output and prints only the image ID on success.
So your command will end with
docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app $(docker build -q -f Dockerfile.dev .)
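With -q, the build's only stdout is the image ID, so the substitution feeds docker run a valid image reference. The full scripts entry could then look like the sketch below; note the -f flag on docker system prune is my addition, to skip the interactive confirmation prompt that would otherwise block the script:

```json
{
  "scripts": {
    "app-start": "docker system prune -f && docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app $(docker build -q -f Dockerfile.dev .)"
  }
}
```

Note that $(…) command substitution requires npm's script shell to be a POSIX shell (or PowerShell); it will not work if npm runs scripts through cmd.exe.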

Related

Unable to run docker build inside docker run

I am trying to build an image and then run it in one command. Using a Stack Overflow question, I have managed to put together this command:
docker run --rm -it -p 8000:8000 -v "%cd%":/docs $(docker build -q .)
However it produces an error:
docker: invalid reference format.
See 'docker run --help'.
The first part of the command (docker run --rm -it -p 8000:8000 -v "%cd%":/docs) works properly on an already built image; the problem lies in $(docker build -q .). The Dockerfile is inside the folder I have opened in cmd.
You need to run it in PowerShell, not cmd: cmd has no $( ) command substitution syntax, so that text is passed to docker verbatim.
Try docker run --rm -it -p 8000:8000 -v ${PWD}:/docs $(docker build -q .).
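To see why the substitution matters, here is a minimal sketch in a POSIX shell, where echo stands in for docker build -q . and the image ID is hypothetical:

```shell
# $( ) is command substitution: the shell runs the inner command first
# and splices its stdout into the outer command line. cmd.exe has no
# such syntax, which is why docker received the literal text and
# reported "invalid reference format".
IMAGE_ID=$(echo "sha256:0a1b2c3d")   # hypothetical stand-in for: $(docker build -q .)
echo "docker run --rm -it -p 8000:8000 $IMAGE_ID"
```

PowerShell likewise evaluates $( ) before running the outer command, which is why the suggested command works there.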

Trying to copy a script into a detached Docker container, and execute it with docker exec

Right now I am setting my Docker instance running with:
sudo docker run --name docker_verify --rm \
-t -d daoplays/rust_v1.63
so that it runs in detached mode in the background. I then copy a script to that instance:
sudo docker cp verify_run_script.sh docker_verify:/.
and I want to be able to execute that script with what I expected to be:
sudo docker exec -d docker_verify bash \
-c "./verify_run_script.sh"
However, this doesn't seem to do anything. If from another terminal I run
sudo docker container logs -f docker_verify
nothing is shown. If I attach myself to the Docker instance then I can run the script myself but that sort of defeats the point of running in detached mode.
I assume I am just not passing the right arguments here, but I am really not clear what I should be doing!
When you run a command in a container, you also need to allocate a pseudo-TTY if you want to see the results.
Your command should be:
sudo docker exec -t docker_verify bash \
-c "./verify_run_script.sh"
(note the -t flag)
Steps to reproduce it:
# create a dummy script
cat > script.sh <<EOF
echo This is running!
EOF
# run a container to work with
docker run --rm --name docker_verify -d alpine:latest sleep 3000
# copy the script
docker cp script.sh docker_verify:/
# run the script
docker exec -t docker_verify sh -c "chmod a+x /script.sh && /script.sh"
# clean up
docker container rm -f docker_verify
You should see This is running! in the output.

Run docker run command as npm script

I have the following command, which works when run by itself (output: Hello):
$ docker run -it --rm --name fetch_html -v ${pwd}:/usr/src/myapp -w /usr/src/myapp php:7.4-cli php
Hello
However, I want to run it as a npm script as it's a little tedious writing the whole thing out every time:
{
...
"scripts": {
"fetch_html": "docker run -it --rm --name fetch_html -v ${pwd}:/usr/src/myapp -w /usr/src/myapp php:7.4-cli php scripts/fetch_html/cli.php"
},
...
Then:
$ npm run fetch_html
But it gives me the following error:
docker: Error response from daemon: create ${pwd}: "${pwd}" includes
invalid characters for a local volume name, only
"[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a
host directory, use absolute path.
I've tried to change it to $(pwd) as I recall Windows and Linux having different syntax here(?). The host machine is Windows 10.
If you are using the Windows 10 CMD interpreter, try this:
{
...
"scripts": {
"fetch_html": "docker run -it --rm --name fetch_html -v %cd%\\:/usr/src/myapp -w /usr/src/myapp php:7.4-cli php scripts/fetch_html/cli.php"
},
...
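The underlying issue is which interpreter npm uses to run the script: cmd.exe on Windows by default, /bin/sh elsewhere, and each has its own spelling for the current directory. A quick check in a POSIX shell (assuming bash or sh):

```shell
# How "current directory" is written per interpreter:
#   $(pwd) and ${PWD}  -> POSIX shells (sh, bash)
#   ${pwd}             -> PowerShell
#   %cd%               -> cmd.exe
# cmd.exe expands none of the POSIX/PowerShell forms, so Docker saw
# the literal string ${pwd} and rejected it as a volume name.
[ "$(pwd)" = "${PWD}" ] && echo "pwd forms match"
```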

Docker does not copy over updated files when building

My Dockerfile:
FROM nginx:1.15.8-alpine
#config
copy ./nginx.conf /etc/nginx/nginx.conf
copy ./html/ /usr/share/nginx/html/
How I run it:
docker rm -vf $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
docker build --no-cache . -t netvis
docker run -it -p 8081:80 netvis
When I update files in the html/ directory on the local machine and then run the build commands the files are not updated in the docker container.
I have been told that the solution to this problem is to use the --no-cache option when building (which didn't work) and to run the two deletion commands before the build command (which also didn't work).
I have also tried restarting Docker and running docker system prune -a, which didn't work either.
Thanks for any help!
So, per my comments, using COPY:
# Latest version is v1.21.1
FROM nginx:1.21.1-alpine
COPY ./html/ /usr/share/nginx/html/
NOTE: I just used html and left the config unchanged.
rm -rf ./html
mkdir ./html
echo '<html><body>Hello Freddie</body></html>' > ./html/index.html
docker build \
--tag=68856201:v1 \
--file=./Dockerfile \
.
docker run \
--interactive --tty --rm \
--publish=8081:80 \
68856201:v1
Then from another shell:
curl localhost:8081/index.html
<html><body>Hello Freddie</body></html>

Cannot share data between volumes on different containers on Jenkins

I am new to Docker and I've been struggling with the following:
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub -v DataVolume5:/src --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I am trying to get data from the ui-tests-runner${buildProperties} container's /src directory into DataVolume5, but I get 0 files when I list the contents of datavolume5.
However, if I try to do the same thing with chrome-node${buildProperties} and /home, I can see /seluser when I list the contents of datavolume5, which is expected.
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm -v DataVolume5:/seluser --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I tried numerous things that I found online, and I checked permissions, which seem fine. The only difference I can think of is that the ui-tests-runner${buildProperties} container is hosting a repository. I don't know what else to try; I have been struggling for a few days now.
This piece of code was taken from the pipeline bit in the Jenkinsfile
You have a race condition between these two commands:
sh "docker run -d ... -v DataVolume5:/src ... ui-tests-runner"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
The first command, with the -d option, does not wait for the container to finish: it runs the container in the background. The second command then runs while your ui-tests-runner container is still starting up, and shows the folder before your tests have run.
Named volumes are also initialized, on first use, with the image contents at that location. So when you mount the volume at a different path that has content inside your image, you'll see files in the volume.
Once that initialization step is done and the volume is no longer empty, you'll only see files that are written to the volume by a process inside a container. You won't get changes from the image filesystem as images are redeployed, since that path in the container is replaced by the contents of the persistent volume.
I presume you're creating DataVolume5 as a named volume, using docker volume create. In that case you don't need to specify the absolute path; docker volume inspect DataVolume5 will give you the host path.
Try using a specific host directory as the shared volume instead, e.g.:
docker run -d -v /absolute/host/path:/src ui-tests-runner
First, check that DataVolume5 contains something after running the ui-tests-runner command.
In the command docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5, give the absolute path of DataVolume5, e.g.:
docker run --rm -v /abs-path-to-directory/DataVolume5:/datavolume5 ubuntu ls -l datavolume5
