Copy file from docker to host before docker exits - docker

I have a Docker image set up to build a file for me. The goal is to just run the container, pass it a build command, and then somehow end up with the final binary on my host.
This is my run command:
docker run -it builder /bin/sh -c 'cd /go/project; go build main.go'
This will build my project and I will have a binary.
The issue is that once this completes, the container exits and the binary is gone. I have tried to run:
docker run -it builder /bin/sh -c 'cd /go/project; go build main.go' && docker cp builder:/go/project/main .
The only issue with this is that the container will have already exited at this point.
Is it possible to just redirect this file to the host? Or is there a way to keep the container open long enough to copy the file out and then shut it down?

What you can do is simply create a shared volume between a host directory and the directory where your binary is built, and copy the binary there. That way you will have the built file on the host at the end.
https://docs.docker.com/storage/volumes/
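For example, a minimal sketch of that approach (assuming the project lives at /go/project inside the image and the binary should end up in ./out on the host):
docker run --rm -v "$(pwd)/out":/out builder /bin/sh -c 'cd /go/project && go build main.go && cp main /out/'
When the container exits, ./out/main is still on the host because it was copied into the bind-mounted directory.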

Related

Run commands in Docker during run process

I want to be able to run a docker run... command for my custom Ubuntu image where the container runs two commands as if they had been typed as soon as it starts. I have the container mounted to a local folder containing my custom code, and I want running the container to also run cd Project and ./a.out inside it, but I am not sure how to do that in one long command.
I have tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu cd Project && ./a.out but I get an OCI runtime create failed.
I have also tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu -c 'cd Project && ./a.out' but get the same error.
Ultimately, it would be nice to have the mounted directory, the cd Project and ./a.out commands, and an exit all handled in my Dockerfile, so that the container starts, runs the compiled code in a.out, and then exits with a simple docker run myubuntu command. But I know that mounting (copying) the folder in the Dockerfile would require rebuilding the image every time that local folder changes. So that leaves me with opening the container, running my two commands, and exiting the container with one docker run command line.
I think you want to start a shell that runs your two commands:
docker run --mount ... myubuntu /bin/bash -c 'cd somewhere && do something'
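Applied to the command in the question, something like this should work (the --mount target generally has to be an absolute path inside the container, and cd on its own is not an executable the runtime can start, which is the usual cause of that OCI runtime error):
docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="/Project" myubuntu /bin/bash -c 'cd /Project && ./a.out'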

A script copied through the dockerfile cannot be found after I run docker with -w

I have the following Dockerfile
FROM ros:kinetic
COPY . ~/sourceshell.sh
RUN["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
When I did this (after building it with docker build -t test):
docker run --rm -it test /bin/bash
I had a bash terminal and I could clearly see there was a sourceshell.sh file that I could even execute from the Host
However I modified the docker run like this
docker run --rm -it -w "/root/afolder/" test /bin/bash
and now the file sourceshell.sh is nowhere to be seen.
Where do the files copied in the Dockerfile go when the working directory is reassigned with docker run?
Option "-w" is telling your container execute commands and access on/to "/root/afolder" while your are COPYing "sourceshell.sh" to the context of the build, Im not sure, you can check into documentation but i think also "~" is not valid either. In order to see your file exactly where you access you should use your dockerfile like this bellow otherwise you would have to navigate to your file with "cd":
FROM ros:kinetic
WORKDIR /root/afolder
COPY . ./sourceshell.sh
RUN["/bin/bash","-c","source /opt/ros/kinetic/setup.bash"]
Just in case you don't understand the difference between the build and the run process:
the code above belongs to the build context, meaning you first build an image (the one you called "test"). Then the command:
docker run --rm -it -w "/root/afolder/" test /bin/bash
runs a container using the "test" image and uses WORKDIR "/root/afolder".
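To verify, assuming the corrected Dockerfile above, rebuilding and listing the working directory should now show the copied content:
docker build -t test .
docker run --rm -it -w "/root/afolder/" test ls -l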

How to run a private Docker image

docker run -i -t testing bash
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown.
I created the image on Docker Hub; it is a private image.
FROM scratch
# Set the working directory to /app
WORKDIR Desktop
ADD . /Dockerfile
RUN ./Dockerfile
EXPOSE 8085
ENV NAME testing
This is in my Dockerfile
I tried to run it; when I run docker images I can see the image details.
I think you need to log in from the command prompt, using the command below:
docker login -u username -p password url
Apart from the login, which should not cause this error (you built the image on your local system, so it should already exist locally, and Docker only pulls an image if it does not exist locally), the real reason is that you are building an image from scratch, and there are no binaries in the scratch image, not even bash or sh.
Second mistake:
RUN ./Dockerfile
Your Dockerfile is a text file, not an executable, yet here you are trying to execute it with the RUN directive.
While scratch appears in Docker's repository on the hub, you can't pull it, run it, or tag any image with the name scratch. Instead, you can refer to it in your Dockerfile. For example, to create a minimal container using scratch:
FROM scratch
COPY hello /
CMD ["/hello"]
Here, hello can be an executable file, for example a statically compiled C++ binary.
Docker scratch image
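For instance, assuming a trivial hello.c on the host, a statically linked hello for the scratch example above could be built and run like this (static linking matters because scratch has no libc):
gcc -static -o hello hello.c
docker build -t hello-scratch .
docker run --rm hello-scratch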
But to just say "hello" in Docker, I would suggest using BusyBox or Alpine as the base image; both have a shell, and both are under 5 MB.
FROM busybox
CMD ["echo","hello Docker!"]
Now build and run:
docker build -t hello-docker .
docker run --rm -it hello-docker

Check the File in Exited Container

I have an issue invoking the script to start the container. I think I'd better first find a way to tell if the script is actually located in the right place. But neither docker exec nor docker attach seems to allow me to get into an exited container.
I also tried docker run -it --volumes-from [exited_container_id] ubuntu. I thought I might be able to see the file system in ubuntu, but I cannot find the mount point. Is there any way for me to log in to an exited container and see the files that I ADDed?
You can check whether the script is located in the right place by adding a RUN ls -l / line to your Dockerfile and building the image:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
RUN ls -lah /
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
Then just build the image:
docker build -t myapp .
You should see the result of that ls in the build output.
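Alternatively (this is not part of the answer above, just another option), docker export works on exited containers, so you could dump the stopped container's filesystem and look for the script there; the container id is a placeholder:
docker export <exited_container_id> -o container-fs.tar
tar -tf container-fs.tar | grep startup.sh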

Automatically run command inside docker container after starting up + volume mount

I have created my own simple image from the following Dockerfile:
FROM python:2.7.11
RUN mkdir -p /extra/later/ \
&& mkdir /yyy
Now I'm able to perform the following steps:
docker run -d -v xxx:/yyy myimage:latest
So now my volume is mounted inside the container. I can get into the container and run commands against that mounted volume:
docker exec -it container_id bash
bash# tar -cvpzf /mybackup.tar -C /yyy/ .
Is there a way to automate these steps in the Dockerfile, or to specify the commands as part of the docker run command?
The commands executed in the Dockerfile build the image, and the volume is attached to a running container, so you will not be able to run your commands inside of the Dockerfile itself and affect the volume.
Instead, you should create a startup script that is the command run by your container (via CMD or ENTRYPOINT in your Dockerfile). Place the logic inside of your startup script to detect that it needs to initialize the volume, and it will run when the container is launched. If you run the script with CMD you will be able to override running that script with any command you pass to docker run which may or may not be a good thing depending on your situation.
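A minimal sketch of that approach, with the script name startup.sh and its contents assumed for illustration rather than taken from the question. The script itself:
#!/bin/sh
# archive the mounted volume when the container starts
tar -cvpzf /mybackup.tar -C /yyy/ .
And the corresponding lines in the Dockerfile:
COPY startup.sh /startup.sh
RUN chmod +x /startup.sh
CMD ["/startup.sh"]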
Try using the CMD instruction in the Dockerfile to run the tar command:
CMD tar -cvpzf /mybackup.tar -C /yyy/ .
or
CMD ["tar", "-cvpzf", "/mybackup.tar", "-C", "/yyy/", "."]
