What is the right way to run multiple commands in one action?
For example:
I want to run a Python script as an action. Before running this script, I need to install the packages from requirements.txt.
I can think of several options:
Create a Dockerfile with the command RUN pip install -r requirements.txt in it.
Use the python:3 image, and run the pip install -r requirements.txt in the entrypoint.sh file before running the arguments from args in main.workflow.
Use both pip install -r requirements.txt and python myscript.py as args.
Another example:
I want to run a script that exists in my repository, then compare 2 files (its output and a file that already exists).
This is a process that includes two commands, whereas in the first example, the pip install command can be considered a building command rather than a test command.
The question:
Can I create another Docker image for another command, which will contain the output of the previous one?
I'm looking for guidelines on where each command belongs: in the Dockerfile, in the entrypoint, or in args.
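For option 2, I imagine the entrypoint.sh would be something like this (just a sketch; the file names are assumptions):
#!/bin/sh
# install the dependencies first, then run whatever the action passes as args
set -e
pip install -r requirements.txt
exec "$@"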
You can run multiple commands using a multi-line block (|) on the run attribute. Check this out:
name: My Workflow
on: [push]
jobs:
  runMultipleCommands:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - run: |
          echo "An initial message"
          pip install -r requirements.txt
          echo "Another message or command"
          python myscript.py
          bash some-shell-script-file.sh -xe
      - run: echo "One last message"
In my tests, running a shell script as ./myscript.sh did not work, but running it as bash myscript.sh -xe worked as expected.
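A likely cause (my assumption, I did not verify it in that workflow) is that the script simply isn't marked executable in the repository; making it executable lets the ./ form work as well:
      - run: |
          chmod +x ./some-shell-script-file.sh
          ./some-shell-script-file.sh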
If you want to run this inside a Docker container, an option could be to run something like this in your run clause:
docker exec -it pseudoName /bin/bash -c "cd myproject; pip install -r requirements.txt;"
Regarding "create another Docker image for another command, which will contain the output of the previous one", you could use multi-stage builds in your Dockerfile. Something like:
## First stage (named "builder")
## Will run your command (using add git as sample) and store the result on "output" file
FROM alpine:latest as builder
RUN apk add git > ./output.log
## Second stage
## Will copy the "output" file from first stage
FROM alpine:latest
COPY --from=builder ./output.log .
RUN cat output.log
# RUN your checks
CMD []
This way the apk add git result is saved to a file, and that file is copied into the second stage, which can run any checks on the result.
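A quick way to try this sketch once it is saved as a Dockerfile (the image tag is just an example):
docker build -t multistage-sample .
docker run --rm multistage-sample cat output.log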
My .gitlab-ci.yml has the following
run python:
  image: python:3.10
  script:
    - |
      cd "src"
      pip install -r ../requirements.txt
      ls -l
At first I thought it was the entrypoint specified by the python:3.10 image. However, when I ran the image locally I was dropped straight into the Python REPL, so there is no way it runs cd.
So in the script part, which shell is used? sh, bash, or zsh? And is it possible to specify a shell to my liking?
By default, it will use bash.
If you want to change that, you can do so only on your own runner instances.
This is documented here: https://docs.gitlab.com/runner/shells/
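If the image provides the shell you want, you can also invoke it explicitly from the script block; a sketch reusing the job above (bash is available in python:3.10):
run python:
  image: python:3.10
  script:
    - bash -c 'cd src && pip install -r ../requirements.txt && ls -l'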
I just started learning docker. To teach myself, I managed to containerize bandit (a python code scanner) but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm basically just using bandit to scan itself :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume from your host where bandit can write its output.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python -m bandit -r ./code-to-scan -o /tmp/bandit.txt" ]
This way the bandit.txt file will be found in the output folder.
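With that in place (paths and image name taken from the snippets above), the report ends up on the host:
mkdir -p output
docker run -v "$(pwd)/output:/tmp/output" -t your_awesome_container:latest
cat output/bandit.txt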
Better to place the code in your image somewhere other than the root directory.
I did some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]`
This clones git in your WORKDIR.
Note the CMD: it is an array, so just divide the command and its args into elements as in the Dockerfile above.
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate the bandit.txt in the test folder.
After the code is executed, the container exits, as there is nothing else to do.
You can also add --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test
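Another option (a sketch using the test image tag from above) is to skip the volume and copy the report out of the stopped container with docker cp:
docker run --name bandit_scan test
docker cp bandit_scan:/usr/myapp/bandit.txt ./bandit.txt
docker rm bandit_scan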
I am new to the Docker world. I have an existing Dockerfile which looks somewhat like this:
# Base image
FROM <OS_IMAGE>
# Install dependencies
RUN zypper --gpg-auto-import-keys ref -s && \
zypper -n install git net-tools libnuma1
# Create temp user
RUN useradd -ms /bin/bash userapp
# Creating all the required folders that is required for installment.
RUN mkdir -p /home/folder1/
RUN mkdir -p /home/folder2/
RUN sudo pip install --upgrade pip
RUN python3 code_which_takes_time.py
# Many more stuff below this.
So code_which_takes_time.py takes time to run: it downloads a lot of stuff and then executes it.
The problem is that whenever we add more statements below RUN python3 code_which_takes_time.py, building the image unnecessarily executes this Python script every time.
So I would like to split this image into 2 Dockerfiles.
The first one would be built only once. It would contain the time-consuming stuff that only needs to run once while building the image.
The second one would be used to add any further statements, which would be added as more layers on top of the existing image.
Because if I run docker build -t "test" . with the current file, it executes my Python script every time. That is time-consuming and I don't want to run it again and again.
My questions:
How can I split the Dockerfile as mentioned above?
How can I build an image from these 2 Dockerfiles?
How can I run these 2 files?
As of now I do:
Build and run: docker build -t "test" . && docker run -it "test"
Just Build : docker build -t "test" .
Just Run : docker run -it "test"
One thing I can suggest after reading the scenario: you want to split your workflow into two Dockerfiles, and as far as I know you can easily do that.
Keep your first Dockerfile, which builds an image with your Python code code_which_takes_time.py already executed, and commit/tag that image with the name "root_image".
After that, when you want to add other tasks on top of that "root_image", like RUN python3 etc., simply create a new Dockerfile, use FROM root_image in it, and do the stuff you want there. After performing your task, commit your work and name it "child_image"; your child image is then the one that inherits from "root_image".
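A minimal sketch of that approach (the file names and the root_image/test tags are just placeholders):
# Dockerfile.base -- built rarely, holds the time-consuming steps
FROM <OS_IMAGE>
RUN zypper --gpg-auto-import-keys ref -s && \
    zypper -n install git net-tools libnuma1
RUN python3 code_which_takes_time.py

# Dockerfile -- built often, inherits every layer above
FROM root_image:latest
# add any new statements here; the expensive layers above are reused unchanged

Build the base once, then build and run the child as usual:
docker build -f Dockerfile.base -t root_image .
docker build -t test . && docker run -it test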
I am trying to add Glide to my Golang project but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per @craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between: CMD, RUN and entrypoint: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work (I didn't test this part). Note that the exec form of CMD does not expand environment variables such as $GOPATH, and each argument has to be its own array element, so with the golang image's default GOPATH of /go it would be:
CMD ["/go/bin/fresh", "-c", "/go/src/runner.conf", "/go/src/main.go"]
In docker I want to do this:
git clone XYZ
cd XYZ
make XYZ
However, because there is no cd command, I have to pass in the full path every time (make XYZ /fullpath). Any good solutions for this?
To change into another directory use WORKDIR. All the RUN, CMD and ENTRYPOINT commands after WORKDIR will be executed from that directory.
RUN git clone XYZ
WORKDIR "/XYZ"
RUN make
You can run a script, or pass a more complex parameter to RUN. Here is an example from a Dockerfile I've downloaded to look at previously:
RUN cd /opt && unzip treeio.zip && mv treeio-master treeio && \
rm -f treeio.zip && cd treeio && pip install -r requirements.pip
Because of the use of '&&', it will only get to the final 'pip install' command if all the previous commands have succeeded.
In fact, since every RUN creates a new commit and (currently) an AUFS layer, having too many commands in the Dockerfile will use up the layer limit, so merging RUNs (once the file is stable) can be a very useful thing to do.
I was wondering whether using WORKDIR twice would work or not, and it did :)
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y python3.6
WORKDIR /usr/src
COPY ./ ./
WORKDIR /usr/src/src
CMD ["python3", "app.py"]
You can use a single RUN command for all of them:
RUN git clone XYZ && \
cd XYZ && \
make XYZ
In case you want to change the working directory for the container when you run a docker image, you can use the -w (short for --workdir) option:
docker run -it -w /some/valid/directory/inside/docker {image-name}
Ref:
docker run options: https://docs.docker.com/engine/reference/commandline/run/#options
Mind that if you must run in a bash shell, RUN make alone is not enough; you need to invoke the bash shell explicitly, since in Docker you are in the sh shell by default.
Taken from /bin/sh: 1: gvm: not found, which would say more or less:
Your shell is /bin/sh, but source expects /bin/bash, perhaps because it
puts its initialization in ~/.bashrc.
In other words, this problem can occur in any setting where the "sh" shell is used instead of the "bash", causing "/bin/sh: 1: MY_COMMAND: not found".
In the Dockerfile case, use the recommended
RUN /bin/bash -c 'source /opt/ros/melodic/setup.bash'
or with the "[]" (which I would rather not use):
RUN ["/bin/bash", "-c", "source /opt/ros/melodic/setup.bash"]
Every new RUN of a bash is isolated, "starting at 0": anything a previous RUN sourced (environment variables, functions) is gone in the next RUN, so the setup.bash has to be sourced again in every RUN that needs it. (WORKDIR, on the other hand, does apply to later RUN instructions; it is the sourced environment that does not carry over.)
Side-note: do not forget the first "/" before "opt/../...". Else, it will throw the error:
/bin/bash: opt/ros/melodic/setup.bash: No such file or directory
Works:
=> [stage-2 18/21] RUN ["/bin/bash", "-c", "source /opt/ros/melodic/setup.bash"] 0.5s
=> [stage-2 19/21] [...]
See “/bin/sh: 1: MY_COMMAND: not found” at SuperUser for some more details on how this looks with many lines, or how you would fill the ".bashrc" instead. But that goes a bit beyond the actual question here.
PS: You might also put the commands you want to execute in a single bash script and run that bash script in the Dockerfile (though I would rather keep the bash commands in the Dockerfile itself, just my opinion):
#!/bin/bash
set -e
source /opt/ros/melodic/setup.bash
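A sketch of wiring such a script into the Dockerfile (the name setup_env.sh is only an example):
COPY setup_env.sh /setup_env.sh
RUN /bin/bash /setup_env.sh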