I'm a Docker newbie and I ran the following command, which successfully copied a file into a running container:
sudo docker cp app.py 943395e93d1d:/app/app.py
Note: the container already contained an app.py, so my guess was that because app.py was running inside the container, the copy would not be allowed.
I then tried another copy:
sudo docker cp sample.txt 943395e93d1d:/app/sample.txt
This also went through successfully.
My question: will the sample.txt file continue to exist within the container even after I stop the container, so that if I start it again the updated files are still there?
Because that would mean I don't have to rebuild the image again and again, but can just copy files in whenever some assets of my application change.
Anything you copy in with docker cp lives only in that specific container: it will still be there after a plain docker stop/docker start, but it is gone as soon as the container is removed or recreated. I'd use this command sparingly. It's certainly not a way to update your code; re-run docker build when your application changes.
(In typical use, with an interpreted language where just copying in source code is enough, re-running docker build should be very quick: it will generally know to skip the time-consuming dependency installation and just COPY in the new source code. Even if you only have a single container, using Docker Compose to drive things will give you a single place to list out the options you need to run it.)
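To make that concrete, a Dockerfile laid out roughly like the sketch below keeps the slow dependency layer cached, so only the final COPY re-runs when app.py changes. (This is a minimal sketch; the python:3.9 base image, requirements.txt, and CMD are assumptions — only app.py comes from your example.)
FROM python:3.9
WORKDIR /app
# This layer is cached as long as requirements.txt is unchanged
COPY requirements.txt .
RUN pip install -r requirements.txt
# Only this cheap layer is rebuilt when the application code changes
COPY app.py .
CMD ["python", "app.py"]
With that ordering, docker build after an edit to app.py typically finishes in a second or two.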
I created a Dockerfile for my website development (Jekyll in this case, but I do not think that matters much).
In case this information is helpful, I code locally using Visual Studio Code and the Remote Containers extension. This extension allows me to manage my code locally while keeping it in sync with the container.
To publish my website, I run a GitHub Action that creates a container from my Dockerfile and then runs all the build code from an entrypoint.sh file. Here is the pertinent code from my Dockerfile:
FROM ruby:alpine as jekyll
ENV env_workspace_directory=$workspace_directory
... more irrelevant code ...
RUN echo "#################################################"
RUN echo "Copy the GitHub repo to the Docker container"
RUN echo "COPY . ${env_workspace_directory}"
COPY . ${env_workspace_directory}
RUN echo "#################################################"
RUN echo "Run the entrypoint "
ENTRYPOINT ["/entrypoint.sh"]
Because I am using the Remote Containers VS Code extension, I do not want the Dockerfile to contain the COPY . ${env_workspace_directory} code. Instead, I only want that code to run when used as a GitHub Action.
So I guess I have two questions, with the first being ideal:
Is it possible to write like-type code that will copy the contents of the currently open GitHub branch (or at least the main branch), including all files and subfolders into the Docker container using the entrypoint.sh file instead? If so, what would that entrypoint.sh code look like?
Is it possible to leave the COPY command in the Dockerfile and make it conditional? For example "Only run the COPY command if running from a GitHub Action"?
For #1 above, I reviewed this Stack Overflow article, which says you can use the docker cp command, but I am unsure (a) whether that is correct and (b) how to be sure I am using the $workspace_directory.
I am very new to Dockerfiles, writing shell commands, and GitHub Actions, so apologies if this question is an easy one or if more clarifications are required.
Here is the Development repo if that is useful.
A Docker volume mount happens after the image is built but before the main container process is run. So if you do something like
docker run -v "$PWD/site:/site" your-image
then the entrypoint.sh script will see the host content in the container's /site directory, even if something different had been COPYed in the Dockerfile.
There are no conditionals or flow control in Dockerfiles, beyond shell syntax within individual RUN instructions. You could in principle access a Git repository in your container process, but managing the repository location, ssh credentials, branches, uncommitted files, etc. can get complex.
Depending on how you're using the image, I could suggest two approaches here.
If you have a deploy-time action that uses this image in its entirety to build the site, then just leave your Dockerfile as-is. In development use a bind mount to inject your host content; don't especially worry about skipping the image COPY here.
Another approach is to build the image containing the Jekyll tool, but to treat the site itself as data. In that case you'd always run the image with a docker run -v or Compose volumes: option to inject the data. In the Dockerfile you might create an empty directory to be safe
RUN mkdir "${env_workspace_directory}" # consider a fixed path
and in your entrypoint script you can verify the site exists
if [ ! -f "$env_workspace_directory/_config.yml" ]; then
cat >&2 <<EOF
There does not seem to be a Jekyll site in $env_workspace_directory.
Please re-run this container with the site mounted.
EOF
exit 1
fi
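For completeness, the development-time invocation for that second approach might look like this (a sketch; the jekyll-tool image tag and the fixed /site path are assumptions, following the fixed-path suggestion above rather than anything in your repo):
docker build -t jekyll-tool .
docker run --rm -v "$PWD:/site" jekyll-tool
In that setup every environment, the GitHub Action included, mounts the site as data rather than relying on a COPY baked into the image.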
I'm pretty new to Docker, so I apologize if this is a simple question. I need to create a script of some sort that will start a Docker container (based on ubuntu:16.04), copy files from a host directory into the container, and run some of the code that was just copied in.
From what I understand, the first step would be to start up the container with something like this:
docker run --name test_container my_image
Then, I need to copy over the files. From what I have found, this is conventionally done on the host with a command like so:
docker cp src/. test_container:/code/src
Lastly, let's say I want to run some code from my container that I just put on it. If I started my container with the -it flag, I could probably just do something like the following (assuming there was a makefile and hello_world.c in the src folder that was copied):
cd code/src/
make
./hello_world
But is there some way I can have this automated? For example, I want to put the following lines in my Dockerfile:
WORKDIR code/src/
RUN make
RUN ./hello_world
But the main problem is that if I build from my Dockerfile right at the beginning, the copied files will not be in the container yet by the time these commands at the bottom run.
I was looking to see if there is a way to copy files onto the container by running commands inside the container. For example:
RUN docker cp src/. test_container:/code/src
But that doesn't seem to work, which kind of makes sense. So I was wondering if there is another good way to automate a process like this.
If you want to bake your files into the image, the commands are:
COPY src /code/src/
WORKDIR /code/src
RUN make
CMD ["./hello_world"]
If you want to use files at runtime, you'd do it with something like
docker run -v "$PWD/src:/code/src" myimage make
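For completeness, the bake-everything-in route from the first snippet would then be driven by something like this (a sketch; the my_image name comes from your question, the rest is assumed):
docker build -t my_image .        # COPY, WORKDIR and make all happen here
docker run --rm my_image          # runs ./hello_world via the CMD
The runtime-mount route needs no COPY at all, since the bind mount places src/ into the container fresh on every run.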
So, I'm trying to get into embedded rust, for which I had to use the nightly version of rust, and modify my .cargo/config.toml to change the target device, and stuff. I decided to use Docker, as I didn't want this interfering with my main installation. I don't know much about Docker, but I'm assuming it's quite similar to pipenv, in that what I do with the Docker image doesn't affect anything outside it (unless I run the code).
So, this is how my Dockerfile looks
FROM jdrouet/rust-nightly:buster-slim AS builder
WORKDIR /usr/source/myapp
COPY . .
RUN cargo build --release
CMD cargo run
When I run sudo docker build . -t name, it gives me the error I used to get before modifying my .cargo/config.toml file, which I'm guessing is a good thing, because now I can revert to my original configuration and make the changes in this image's config files instead. But I'm not able to find the configuration files for this Docker image. I don't know what WORKDIR does, but there is no folder called /source in my /usr directory.
So, I'm trying to get into embedded rust, for which I had to use the nightly version of rust, and modify my .cargo/config.toml to change the target device, and stuff
You can put a file at wherever/your/project/is/.cargo/config.toml, and it will only affect the project(s) in that directory.
source: Cargo Book
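If it helps to visualise it, the per-project layout looks roughly like this (a sketch using the path from above; the file names are the usual Cargo ones):
wherever/your/project/is/
├── .cargo/
│   └── config.toml   # settings here only affect this project
├── Cargo.toml
└── src/
Your global configuration under ~/.cargo stays untouched.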
I don't know much about docker, but I'm assuming it's quite similar to pipenv
Docker is actually quite different from Pipenv. Cargo is similar to Pipenv in that it manages your dependencies for you (Cargo.toml vs Pipfile), distinguishes between regular, dev, and build-time dependencies, etc. Docker is a level of isolation beyond this: a Docker container has a completely separate filesystem from your actual computer. The Dockerfile is a recipe that tells Docker how to build an image of your container, which Docker can then run.
Basically, WORKDIR /usr/source/myapp creates the folder /usr/source/myapp in the Docker container's file system and cd's into it for the rest of the Dockerfile. This means that the following line, COPY . ., will copy everything in the same folder as the Dockerfile into the container folder /usr/source/myapp.
I bet if you open a shell into the Docker container like so:
# Build the docker container
docker build . -t my-cool-project:latest
# Run it
docker run -it my-cool-project:latest bash
you should be able to cd /usr/source/myapp and see all your stuff.
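In other words, inside that shell something like this (a sketch; the listing is illustrative) should show your project:
cd /usr/source/myapp
ls    # whatever sat next to the Dockerfile on the host, e.g. Cargo.toml, src/, .cargo/
because the COPY . . ran relative to the WORKDIR.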
I have a Docker container which I use to build software and generate shared libraries in. I would like to use those libraries in another Docker container for actually running applications. To do this, I run the build container with a mounted volume so that those libraries also end up on the host machine.
My Dockerfile for the RUNTIME container looks like this:
FROM openjdk:8
RUN apt update
ENV LD_LIBRARY_PATH /build/dist/lib
RUN ldconfig
WORKDIR /build
and when I run with the following:
docker run -u $(id -u ${USER}):$(id -g ${USER}) -it -v $(realpath .):/build runtime_docker bash
I do not see any of the libraries from /build/dist/lib in the ldconfig -p cache.
What am I doing wrong?
You need to COPY the libraries into the image before you RUN ldconfig; volumes won't help you here.
Remember that first you run a docker build command. That runs all of the commands in the Dockerfile, without any volumes mounted. Then you take that image and docker run a container from it. Volume mounts only happen when the docker run happens, but the RUN ldconfig has already happened.
In your Dockerfile, you should COPY the files into the image. There's no particular reason to not use the "normal" system directories, since the image has an isolated filesystem.
FROM openjdk:8
# Copy shared-library dependencies in
COPY dist/lib/libsomething.so.1 /usr/lib
RUN ldconfig
# Copy the actual binary to run in and set it as the default container command
COPY dist/bin/something /usr/bin
CMD ["something"]
If your shared libraries are only available at container run-time, the conventional solution (as far as I can tell) would be to include the ldconfig command in a startup script, and use the dockerfile ENTRYPOINT directive to make your runtime container execute this script every time the container runs.
This should achieve your desired behaviour, and (I think) should avoid needing to generate a new container image every time you rebuild your code. This is slightly different from the common Docker use case of generating a new image for every build by running docker build at build-time, but I think it's a perfectly valid use case, and quite compatible with the way Docker works. Docker has historically been used as a CI/CD tool to streamline post-build workflows, but it is increasingly being used for other things, such as the build step itself. This naturally means people are coming up with slightly different ways of using Docker to facilitate various new and different types of workflow.
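A sketch of what that could look like (the script name and the exec "$@" hand-off are assumptions, not something from your Dockerfile; the library path comes from your LD_LIBRARY_PATH):
#!/bin/sh
# entrypoint.sh: add the bind-mounted library directory to the linker
# cache at container start, then run whatever command was requested
ldconfig /build/dist/lib
exec "$@"
You would COPY this script into the image, mark it executable, and point ENTRYPOINT at it, so ldconfig runs against the mounted /build directory on every container start instead of at build time.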
New to docker here. I followed the instructions here to make a slim & trim container for my Go project. I do not fully understand what it's doing, though; hopefully someone can enlighten me.
Specifically there are two steps to generating this docker container.
docker run --rm -it -v "$GOPATH":/gopath -v "$(pwd)":/app -e "GOPATH=/gopath" -w /app golang:1.8 sh -c 'CGO_ENABLED=0 go build -a --installsuffix cgo --ldflags="-s" -o hello'
docker build -t myDockerImageName .
The Dockerfile itself just contains:
FROM iron/base
WORKDIR /app
COPY hello /app/
ENTRYPOINT ["./hello"]
I understand (in a broad sense) that the 1st step is compiling the Go program and statically linking the C dependencies (and doing all this inside an unnamed Docker container). The 2nd step just generates the Docker image according to the instructions in the Dockerfile.
What I don't understand is why the first command starts with docker run (why does it need to be run inside a docker container? Why are we not just generating the Go binary outside of it, and then copying it in?)
And if it's being run inside a docker container, how is the binary generated in the docker container being dropped onto my local machine's file system? (e.g. why do I need to copy the binary back into the image, as it seems to be doing on line 3 of the Dockerfile?)
You're actually using 2 different Docker containers, each with a different image. The first container is only around during the compilation; it uses the image golang:1.8. What you're doing is mounting your current working directory into that container and compiling your program with the version of Go contained in the image.
The second command builds your custom image that uses the iron/base image as its base. You then copy your built application into that image and run it.
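Putting those two pieces together, the reason the binary lands on your local file system is the bind mount in the first command (a sketch; the GOPATH mount and build flags from your first command are abbreviated here):
# 1. Compile inside a throwaway golang:1.8 container; because the current
#    directory is mounted at /app, "-o hello" writes the binary to the host too
docker run --rm -v "$(pwd)":/app -w /app golang:1.8 go build -o hello
# 2. The ordinary image build can now pick it up with "COPY hello /app/"
docker build -t myDockerImageName .
Nothing is copied "back" out of a container; the compiler simply wrote into a directory that was shared with the host all along.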
Using a golang container to build the binary is usually done for reproducibility of the build process, i.e.:
it ensures that the same Go version is always used,
compilation takes place in an always clean environment,
and the build host needs no Go installed at all, or can have a different version.
This way, all parts needed to build the "hello" image can be tracked in a version control system.
However, this example mounts the whole local GOPATH, defeating the above purpose. Dependencies must be available to the build container, e.g. by vendoring them. Maybe the author considered vendoring out of scope for his example.