I am wondering how it is possible to reproduce the Docker commands seen in this Docker image. The image copies specific versions of clang and gcc, which is something I wish to do in my own Dockerfile. I cannot use the linked Docker image, as it contains many commands that are unnecessary for the work I want to do.
The very first command is
ADD file:2cddee716e84c40540a69c48051bd2dcf6cd3bd02a3e399334e97f20a77126ff in /
Further down, there are many similar COPY commands. I wish to reproduce the following command in my own dockerfile:
COPY dir:49371ba683da700cabfad7284da39bd2144aa0c46086c3015a74737d7be6b51e in /compilers/clang/3.4.2
The command copies clang-3.4.2 into the given folder. I am unsure how I can do the same, or even what the hash is/means.
I tried looking, but I couldn't find the Dockerfile used to create the image. There is another way though.
It's quite a large image and I'm on a terrible internet connection, so I haven't tested this myself, but one thing you can do is copy the pieces you need from that image into a new one of your own, like this:
FROM cnsun/perses:perses_part_54_name_clang_trunk AS original
FROM ubuntu:latest
COPY --from=original /compilers/clang/3.4.2 /compilers/clang/3.4.2
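You can then build that Dockerfile as usual; the image tag here is just an example:
docker build -t my-clang-image .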
You can also copy the files from the image to your computer. Then you can copy them from there into new images without referencing the cnsun image:
docker run --rm -v $(pwd):/dest --entrypoint /bin/bash cnsun/perses:perses_part_54_name_clang_trunk -c "cp -r /compilers/clang/3.4.2 /dest"
This will copy the /compilers/clang/3.4.2 directory into the current directory on the host. If your host is Windows, replace $(pwd) with %cd%.
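Once the directory is on your host, a minimal Dockerfile can copy it in without referencing the cnsun image at all; this is just a sketch, and the base image is an arbitrary choice:
FROM ubuntu:latest
COPY 3.4.2 /compilers/clang/3.4.2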
I'm pretty new to Docker, so I apologize if this is a simple question. I need to create a script of some sort that will start a container from an ubuntu:16.04-based image, copy files from a host directory into the container, and run some of the code that was just copied in.
From what I understand, the first step would be to start up the container with something like this:
docker run --name test_container my_image
Then, I need to copy over the files. From what I have found, this is conventionally done on the host with a command like so:
docker cp src/. test_container:/code/src
Lastly, let's say I want to run some of the code I just put on the container. If I started the container with the -it flag, I could probably just do something like the following (assuming there was a Makefile and hello_world.c in the src folder that was copied):
cd code/src/
make
./hello_world
But is there some way I can automate this? For example, I want to put the following lines in my Dockerfile:
WORKDIR code/src/
RUN make
RUN ./hello_world
But the main problem is that if the image is built from my Dockerfile right at the beginning, the copied files will not be in the container by the time these commands at the bottom run.
I was looking to see if there is a way to copy files onto the container by running commands inside the container. For example:
RUN docker cp src/. test_container:/code/src
But that doesn't seem to work, which kind of makes sense. So I was wondering if there is another good way to automate a process like this.
If you want to bake your files into the image, the Dockerfile commands are:
COPY src /code/src/
WORKDIR /code/src
RUN make
CMD ["./hello_world"]
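You can then build and run this in the usual way, assuming the src directory with its Makefile sits next to the Dockerfile (the image name is arbitrary):
docker build -t myimage .
docker run --rm myimage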
If you want to use the files at runtime instead, you'd do it with a volume mount, something like:
docker run -v $(pwd)/src:/code/src myimage make
I have a Docker container which I use to build software and generate shared libraries in. I would like to use those libraries in another Docker container for actually running applications. To do this, I am using the build container with a mounted volume so that the libraries end up on the host machine.
My Dockerfile for the runtime container looks like this:
FROM openjdk:8
RUN apt update
ENV LD_LIBRARY_PATH /build/dist/lib
RUN ldconfig
WORKDIR /build
and when I run with the following:
docker run -u $(id -u ${USER}):$(id -g ${USER}) -it -v $(realpath .):/build runtime_docker bash
I do not see any of the libraries from /build/dist/lib in the ldconfig -p cache.
What am I doing wrong?
You need to COPY the libraries into the image before you RUN ldconfig; volumes won't help you here.
Remember that first you run a docker build command. That runs all of the commands in the Dockerfile, without any volumes mounted. Then you take that image and docker run a container from it. Volume mounts only happen when the docker run happens, but the RUN ldconfig has already happened.
In your Dockerfile, you should COPY the files into the image. There's no particular reason to not use the "normal" system directories, since the image has an isolated filesystem.
FROM openjdk:8
# Copy shared-library dependencies in
COPY dist/lib/libsomething.so.1 /usr/lib
RUN ldconfig
# Copy the actual binary to run in and set it as the default container command
COPY dist/bin/something /usr/bin
CMD ["something"]
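With a Dockerfile like that, the workflow is roughly the following, assuming dist/ is present in the build context (the image name is an example):
docker build -t runtime_image .
docker run --rm runtime_image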
If your shared libraries are only available at container run time, the conventional solution (as far as I can tell) would be to run ldconfig in a startup script, and use the Dockerfile ENTRYPOINT directive so that your runtime container executes this script every time it starts.
This should give you the behaviour you want, and (I think) avoids having to build a new image every time you rebuild your code. It is slightly different from the common pattern of generating a new image for every build with docker build, but it is a perfectly valid use case and quite compatible with how Docker works: Docker has historically been used as a CI/CD tool to streamline post-build workflows, but it is increasingly used for other things, such as the build step itself, so people naturally come up with slightly different workflows.
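As a rough sketch of that approach (the file names are illustrative, and the library path is taken from the question): a small entrypoint script rebuilds the linker cache after the volume is mounted, then hands control to whatever command was passed to docker run.
entrypoint.sh:
#!/bin/sh
# rebuild the linker cache now that the mounted libraries are visible
ldconfig
# run the command passed to the container
exec "$@"
Dockerfile:
FROM openjdk:8
# tell the linker where the mounted libraries will live
RUN echo /build/dist/lib > /etc/ld.so.conf.d/local-build.conf
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
WORKDIR /build
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]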
I've been trying to build an application called Crowd for a client but my Docker experience isn't great.
This is the DockerHub page
Now, if I run docker run -d -p 8095:8095 --name crowd blacklabelops/crowd
It runs and I have no problem, but if I copy and paste their Dockerfile and try to build it,
I get an error telling me that splash.xml doesn't exist. My understanding is that the ADD command in a Dockerfile copies files from the build context into the image. But obviously I don't have those files, because all I have is the Dockerfile.
So if the docker run command is running based on that Dockerfile, how would the Dockerfile work as a standalone? Please help me understand. Many thanks.
A Dockerfile contains instructions on how to build an image. Note that in many cases, these instructions include copying (adding) files into the built image.
docker run runs a complete, built image.
If you wish to build from the Dockerfile source, you should probably clone/download the entire repository, and then docker build locally.
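For example, assuming the image's source lives in the blacklabelops/crowd repository on GitHub (the URL is a guess based on the image name), the workflow would be roughly:
git clone https://github.com/blacklabelops/crowd.git
cd crowd
docker build -t my-crowd .
docker run -d -p 8095:8095 --name crowd my-crowd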
New to Docker here. I followed the instructions here to make a slim and trim container for my Go project. I do not fully understand what it's doing, though; hopefully someone can enlighten me.
Specifically, there are two steps to generating this Docker container.
docker run --rm -it -v "$GOPATH":/gopath -v "$(pwd)":/app -e "GOPATH=/gopath" -w /app golang:1.8 sh -c 'CGO_ENABLED=0 go build -a --installsuffix cgo --ldflags="-s" -o hello'
docker build -t myDockerImageName .
The Dockerfile itself just contains:
FROM iron/base
WORKDIR /app
COPY hello /app/
ENTRYPOINT ["./hello"]
I understand (in a broad sense) that the first step compiles the Go program and statically links the C dependencies (doing all of this inside an unnamed Docker container). The second step just generates the Docker image according to the instructions in the Dockerfile.
What I don't understand is why the first command starts with docker run (why does it need to be run inside a docker container? Why are we not just generating the Go binary outside of it, and then copying it in?)
And if it's being run inside a Docker container, how is the binary generated in that container ending up on my local machine's file system? (e.g. why do I need to copy the binary back into the image, as line 3 of the Dockerfile seems to do?)
You're actually using two different Docker containers, each with a different image. The first container is only around during compilation; it uses the golang:1.8 image. What you're doing is mounting your current working directory into that container and compiling your code with the version of Go contained in the image.
The second command builds your custom image that uses the iron/base image as its base. You then copy your built application into that image and run it.
Using a golang container to build the binary is usually done for reproducibility of the build process, i.e.:
it ensures that the same Go version is always used,
compilation takes place in an always clean environment,
and the build host needs no Go installed at all, or can have a different version.
This way, all parts needed to build the "hello" image can be tracked in a version control system.
However, this example mounts the whole local GOPATH, defeating the above purpose. Dependencies must be made available to the build container, e.g. by vendoring them. Maybe the author considered vendoring out of scope for this example.
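As a hedged sketch: on Docker versions that support multi-stage builds, both steps can be expressed in a single Dockerfile, so nothing from the local GOPATH needs to be mounted (paths and tags are illustrative, and it assumes any dependencies are vendored inside the project):
FROM golang:1.8 AS builder
# the golang image sets GOPATH=/go, so place the source where the toolchain expects it
WORKDIR /go/src/app
COPY . .
# same build flags as the original one-liner
RUN CGO_ENABLED=0 go build -a --installsuffix cgo --ldflags="-s" -o hello
FROM iron/base
WORKDIR /app
COPY --from=builder /go/src/app/hello /app/
ENTRYPOINT ["./hello"]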
So here is a very cool Dockerfile.
To run it, I do:
wget https://cran.r-project.org/src/contrib/FastRCS_0.0.7.tar.gz
tar -xvzf FastRCS_0.0.7.tar.gz
docker run --rm -ti -v $(pwd):/mnt rocker/r-devel-ubsan-clang check.r --setwd /mnt -a --install-deps FastRCS_0.0.7.tar.gz
But now suppose I want to save this Dockerfile and run the saved version from the current directory (i.e. not just the one on GitHub).
How can I do this?
The idea is that I need to customize this Dockerfile a bit and run the customized version.
Sounds like you want to download the raw file from https://raw.githubusercontent.com/rocker-org/r-devel-san-clang/master/Dockerfile and save it into a file named Dockerfile. Then you could edit the file to make your changes, and just build your image with docker build . while you are in the Dockerfile's directory.
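A possible sequence, reusing the original run command with your own image tag (the tag is arbitrary):
wget -O Dockerfile https://raw.githubusercontent.com/rocker-org/r-devel-san-clang/master/Dockerfile
# edit the Dockerfile as needed, then:
docker build -t my-r-devel-ubsan-clang .
docker run --rm -ti -v $(pwd):/mnt my-r-devel-ubsan-clang check.r --setwd /mnt -a --install-deps FastRCS_0.0.7.tar.gz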
This is a basic Docker usage question; look into docker commit.
You may want to study one of the many fine Docker tutorials out there.
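For reference, docker commit snapshots a container's filesystem into a new image; a minimal example (the names are illustrative):
docker run -ti --name mycontainer ubuntu:latest bash
# make your changes inside the container, exit, then:
docker commit mycontainer myimage:customized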