how to copy file from docker container to host using shell script? - docker

I have created an image for an automation project. When I run the container, it executes all the tests inside the container and then generates the test report. I want to take this report out before deleting the container.
FROM maven:3.6.0-ibmjava-8-alpine
COPY ./pom.xml .
ADD ./src $HOME/src
COPY ./test-execution.sh /
RUN mvn clean install -Dmaven.test.skip=true -Dassembly.skipAssembly=true
ENTRYPOINT ["/test-execution.sh"]
CMD []
Below is the shell file:
#!/bin/bash
echo "parameters you provided: $@"
mvn test "$@"
cp api-automation:target/*.zip /Users/abcd/Desktop/docker_report

You will want to use the docker cp command. See here for more details.
However, it appears docker cp does not support standard Unix globbing patterns (i.e. the * in your source path).
So instead you will want to run:
docker cp api-automation:target/ /Users/abcd/Desktop/docker_report
However, then you will have to have a final step to remove all the non-zip files from your docker_report directory.
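Since docker cp can't glob, one workable pattern is to copy the whole target/ directory out and then prune everything that isn't a .zip. The container name and destination path below are taken from the question, so treat them as assumptions; the prune step is demonstrated on a throwaway directory:

```shell
# Copy the whole target/ directory out of the container, then prune.
# "api-automation" and the destination path come from the question above.
# docker cp api-automation:target/ /Users/abcd/Desktop/docker_report

prune_non_zip() {
  # Delete every regular file under "$1" that does not end in .zip.
  find "$1" -type f ! -name '*.zip' -delete
}

# Demo on a throwaway directory standing in for docker_report:
dir=$(mktemp -d)
mkdir -p "$dir/target"
touch "$dir/target/report.zip" "$dir/target/build.log"
prune_non_zip "$dir"
ls "$dir/target"   # only report.zip remains
```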

Related

Docker JBoss SVN automation script? RPM v. YUM?

As it stands, my Dockerfile works as written below, but currently I have to run the two commented lines in order to pull, compile, and deploy my application to the server. I tried creating a shell script to run those commands using ADD and ENTRYPOINT, but when I run it (using the docker commands below) the shell script runs and then the container exits.
What do I need to modify (the docker run command, I assume) to fix this?
Is there an easier way to install libraries than listing multiple URLs for rpm? I tried using yum, but I wasn't sure how to set up my repo for installing anything.
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [the URLS of the 40 libraries I need for SVN]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
#RUN svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
#RUN /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/bin/jar -cvf $JBOSS_HOME/standalone/deployments/myapp.war /usr/svn/myapp
Docker commands
docker build . -t myapp:latest
docker run -d -p 8080:8080 -p 9990:9990 --env-file=svnvars.cfg myapp:latest
Found out what I was doing wrong. I was trying to use
/opt/eap/bin/standalone.sh
as the last command in my entrypoint script.
I discovered this was wrong by calling
docker image inspect myapp:latest
where I found
"Cmd": [
"/opt/eap/bin/openshift-launch.sh"
],
I was calling the wrong command. So I fixed this by replacing the command in my shell script and changing my ENTRYPOINT to CMD.
Here are the corrected files:
Dockerfile
FROM registry.access.redhat.com/jboss-eap-7/eap71-openshift
USER root
RUN rpm -i [too many libraries]
ADD subversion_installer_1.14.1.sh /home/svn_installer.sh
ADD svnvars.cfg /var/svn/svnvars.cfg
RUN yes | /home/svn_installer.sh
USER jboss
ARG REPO_USER
ARG REPO_PW
ARG REPO_URL
ENV REPO_USER=$REPO_USER
ENV REPO_PW=$REPO_PW
ENV REPO_URL=$REPO_URL
ADD entrypoint.sh /home/entrypoint.sh
CMD /home/entrypoint.sh
entrypoint.sh
#!/bin/bash
svn export --username="$REPO_USER" --password="$REPO_PW" "$REPO_URL" /usr/svn/myapp
cd /usr/svn/myapp
ant war
/opt/eap/bin/openshift-launch.sh

How can I prevent Docker from removing intermediate containers when executing RUN command?

The issue I'm experiencing: I want to execute a "change directory" command during my build, but every time a RUN instruction in my Dockerfile executes, Docker deletes the actual (intermediate) container it ran in.
DOCKERFILE
This happens when I execute the Dockerfile from above
How can I prevent Docker from doing that?
docker build --rm=false
Remove intermediate containers after a successful build (default true)
Docker tracks the Dockerfile's working directory separately from the shell's current directory inside a RUN command.
Each RUN command starts at the Dockerfile's working directory (e.g. /).
When you do RUN cd /app, the shell's path changes, but not the Dockerfile's; the next RUN command will again start at /.
To change the "Dockerfile path", use WORKDIR (see reference), for example WORKDIR /opt/firefox.
The alternative would be chaining the executed RUN commands, as EvgeniySharapov pointed out: RUN cd opt; ls; cd firefox; ls
on multiple lines:
RUN cd opt; \
ls; \
cd firefox; \
ls
(To clarify: It doesn't matter that Docker removes intermediate containers, that is not the problem in this case.)
Separately, docker build --no-cache disables the layer cache, so every step is rebuilt from scratch; this can noticeably increase build times when you run the build multiple times. Alternatively, you can combine multiple shell commands into a single RUN instruction using \ line continuations.
More tips can be found here.

Dockerfile and Docker run -w / workdir

Let's take the sample python dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests with in that container:
docker run --rm -v "$(pwd)":/project -w /project mydocker:1.0 pytest tests/
We are declaring the WORKDIR in both the Dockerfile and the docker run command.
Am I right in saying:
The WORKDIR in the Dockerfile is the directory in which subsequent commands in the Dockerfile are run, but it has no effect when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, i.e. for pytest to look for the tests directory in /project.
My setup.cfg
[tool:pytest]
addopts =
--junitxml=results/unit-tests.xml
In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
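Concretely, because the image already sets WORKDIR /project and COPYs the code in at build time, these two invocations behave identically (a sketch using the question's image name):

```
docker run --rm mydocker:1.0 pytest tests/
docker run --rm -w /project mydocker:1.0 pytest tests/   # -w is redundant here
```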
More specifically looking at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
-v $PWD/reports:/reports \
mydocker:1.0 \
pytest --cov=myapp --cov-report=html:/reports/coverage.html tests

How can I copy files from the GitLab Runner helper container to the build container?

Set up
I set up GitLab Runner with a KubernetesExecutor and want to create a custom helper image which adds some extra files to the build container.
The current set up is quite basic:
A Dockerfile, which adds some files (start.sh and Dockerfile) into the container.
A start.sh file which is present in the helper image. This should be executed when the helper is run.
Code
start.sh
#!/bin/bash
printenv > /test.out # Check whether the script is run.
cp /var/Dockerfile /builds/Dockerfile # Copy file to shared volume.
exec "$#"
Dockerfile
FROM gitlab/gitlab-runner-helper:x86_64-latest
ADD templates/Dockerfile /var/Dockerfile
ADD start.sh /var/run/start.sh
ENTRYPOINT ["sh", "/var/run/start.sh"]
The shared volume between the containers is: /builds. As such, I'd like to copy /var/Dockerfile to /builds/Dockerfile.
Problem
I can't seem to find a way to (even) run start.sh when the helper image is executed. Using kubectl exec -it pod-id -c build bash and kubectl exec -it pod-id -c helper bash, I verify whether the files are created. When I run start.sh (manually) from the latter command, the files are copied. However, neither /test.out nor /builds/Dockerfile are available when logging in to the helper image initially.
Attempts
I've tried setting up a different CMD (/var/run/start.sh), but it seems like it simply doesn't run the sh file.

How to copy a file from the host into a container while starting?

I am trying to build a Docker image using a Dockerfile. My goal is to copy a file into a specific folder when I run the docker run command.
This is my Dockerfile:
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/myapp
ENTRYPOINT ["cp"]
CMD ["/usr/src/myapp"]
CMD ls /usr/src/myapp
After building my image without any errors (using the docker build command), I tried to run my new image:
docker run myjavaimage MainClass.java
I got this error: cp: missing destination file operand after 'MainClass.java'
How can I resolve this? Thanks.
I think you want this Dockerfile:
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY MainClass.java .
RUN javac MainClass.java
ENV CLASSPATH=/usr/src/myapp
CMD java MainClass
When you docker build this image, it COPYs your Java source file from your local directory into the image, compiles it, and sets some metadata telling the JVM where to find the resulting .class files. Then when you launch the container, it will run the single application you've packaged there.
It's common enough to use a higher-level build tool like Maven or Gradle to compile multiple files into a single .jar file. Make sure to COPY all of the source files you need in before running the build. In Java it seems to be common to build the .jar file outside of Docker and just COPY that in without needing a JDK, and that's a reasonable path too.
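For that prebuilt-jar path, a minimal sketch (the jar name and paths are assumptions, not from the question):

```
# Build the jar outside Docker first, e.g. with: mvn package
FROM openjdk:7
WORKDIR /usr/src/myapp
COPY target/myapp.jar ./myapp.jar
CMD ["java", "-jar", "myapp.jar"]
```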
In the Dockerfile you show, Docker combines ENTRYPOINT and CMD into a single command and runs that command as the single main process of the container. If you provide a command of some sort at the docker run command, that overrides CMD but does not override ENTRYPOINT. You only get one ENTRYPOINT and one CMD, and the last one in the Dockerfile wins. So you're trying to run container processes like
# What's in the Dockerfile
cp /bin/sh -c "ls /usr/src/myapp"
# Via your docker run command
cp MainClass.java
As @QuintenScheppermans suggests in their answer, you can use a docker run -v option to inject the file at run time, but this happens after commands like RUN javac have already run. You don't really want a workflow where the entire application gets rebuilt every time you docker run the container. Build the image during docker build time, or before.
Two things.
You have used CMD twice.
CMD can effectively be used only once; think of it as the purpose of your Docker image. Every time a container is run, it will execute CMD. If you want multiple commands, use RUN for the earlier ones and CMD last.
FROM openjdk:7
MAINTAINER MyPerson
WORKDIR /usr/src/
ENTRYPOINT ["cp"]
RUN /usr/src/myapp
RUN ls /usr/src/myapp
Copying stuff into image
There is a simple command COPY the syntax being COPY <from-here> <to-here>
Seems like you want to run myjavaimage so what you will do is
COPY /path/to/myjavaimage /myjavaimage
CMD myjavaimage MainClass.java
Where you see the arrows, I've just written dummy code. Replace that with the correct code.
Also, your Dockerfile is badly created.
ENTRYPOINT: not sure why you'd use cp here; it should be an actual entry point, e.g. the app that will be run.
I don't understand why you want to do ls /usr/src/myapp, but if you do, use RUN and not CMD.
Lastly,
The best way to debug Docker containers is in interactive mode: get a shell inside your container, have a look around, run code, and see what the problem is.
Run this: docker run -it <image-name> /bin/bash and then have a look inside and it's usually the best way to see what causes issues.
This stackoverflow page perfectly answers your question.
COPY foo.txt /data/foo.txt
# where foo.txt is the relative path on host
# and /data/foo.txt is the absolute path in the image
If you need to mount a file when running the command:
docker run --name=foo -d -v ~/foo.txt:/data/foo.txt -p 80:80 image_name