How can I use existing images as the FROM parameter in a dockerfile?
I'm trying to dockerize a VueJS application, but I wanted pierrezemb/gostatic to be the base image -- it's a tiny HTTP server that, in principle, is able to serve files and directories. However, when running the completed image and checking the exposed port in the browser, the index.html file loads, but every resource in a subfolder fails with a 404 and this error:
The resource from “http://localhost:8043/js/app.545bfbc1.js” was blocked due to MIME type (“text/plain”) mismatch (X-Content-Type-Options: nosniff). Curling the resource returns just the 404.
This is likely because the gostatic base image is designed to be used standalone, not as the FROM parameter in a Dockerfile. When I build the code myself and use gostatic to host the directory, everything is fine. When I build with a Dockerfile, the build succeeds but I get the aforementioned errors when requesting resources outside the main directory.
Ideal, standalone use case:
docker run -d -p 80:8043 -v path/to/website:/srv/http --name goStatic pierrezemb/gostatic
Current Dockerfile:
FROM pierrezemb/gostatic AS deployment
COPY ./dist/* /srv/http/
EXPOSE 8043
# Note, gostatic calls: ENTRYPOINT ["/goStatic"]
# Therefore CMD need only be goStatic parameters
CMD ["-enable-health", "-enable-logging"]
Note, the dist folder is built and functioning. Also notably, the health endpoint doesn't work, and there is no logging (even though the flags are set for both). It's clear I'm handling the parent image wrong.
I'm building and running with the following commands:
docker build -t tweet-dash .
docker run -d -p 8043:8043 --name dash tweet-dash
The Dockerfile for goStatic is here.
This is actually almost exactly the way you're supposed to use existing images: everything here is being done correctly.
For those coming after: pay attention to the parent base image's Dockerfile -- build your own with it open next to you. Figure out how to use the image by itself, as a standalone, first, and then see if you can add on to it.
The Dockerfile is slightly incorrect in this case: when you copy a directory over with COPY ./dist/* /srv/http/, the glob expands to each entry in dist, and for every matched subdirectory Docker copies only its contents. Each individual file ends up directly in /srv/http/; no folder structure is preserved.
This can be fixed by doing COPY ./dist /srv/http.
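Putting that fix into the Dockerfile from the question, a corrected version would look like this (same flags as above; only the COPY line changes):
FROM pierrezemb/gostatic AS deployment
# Copy the directory itself so js/, css/, etc. keep their structure under /srv/http
COPY ./dist /srv/http
EXPOSE 8043
# gostatic declares ENTRYPOINT ["/goStatic"], so CMD only supplies its arguments
CMD ["-enable-health", "-enable-logging"]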
Related
This is basically a follow-up question to How to include files outside of Docker's build context?: I'm using large files in all of my projects (several GBs), which I keep on an external drive that is only used for development.
I want to COPY or ADD these files to my docker container when building it. The answer linked above allows one to specify a different path to a Dockerfile, potentially extending the build context. I find this impractical, since it would require setting the build context to the system root (?) just to be able to include a single file.
Long story short: Is there any way or workaround to include a file that is far removed from the docker build context?
Three suggestions on things you could try:
include a file that is far removed from the docker build context?
You could construct your own build context by cp-ing (or tar-ing) files on the host into a dedicated directory tree. You don't have to use the actual source tree or your build tree.
rm -rf docker-build
mkdir docker-build
cp -a Dockerfile build/the-binary docker-build
cp -a /mnt/external/support docker-build
docker build ./docker-build
# reads docker-build/Dockerfile, and the files in the
# docker-build directory, but nothing else; only sends
# the docker-build directory to Docker as the build context
large files [...] (several GBs)
Docker doesn't deal well with build contexts this large. In the past I've at least seen docker build take a long time just on the step of sending the build context to itself, and docker push and docker pull have network issues when trying to send the gigabyte+ layer around.
It's a little hacky and breaks the "self-contained image" model a little bit, but you can provide these files as a Docker bind-mount instead of including them in the image. Your application needs to know what to do if the data isn't there. When you go to deploy the application, you also need to separately distribute the files alongside the Docker image and other deployment artifacts.
docker run \
-v /mnt/external/support:/app/support \
...
the-image-without-the-support-files
only used for development
Potentially you can get away with not using Docker at all during this phase of development. Use a local source tree and local development tools; run your unit tests against these large test fixtures as needed. Build a Docker image only when you're about to run pre-commit integration tests; that may be late enough in the development cycle that you don't need these files.
I think the main thing you are worried about is that you do not want to send all the files of a directory to the docker daemon while it builds the image.
When the directory is that big (multiple GBs), it takes a lot of time to build an image.
If the requirement is just to use those files while you build something inside docker, you can mount them into the container; the steps below walk through it, and a consolidated sketch follows them.
A tricky way
Run a container with the base image and mount the directories inside it: docker run -d -v local-path:container-path
Get inside the container: docker exec -it CONTAINER_ID bash
Run the build step: ./build-something.sh
Create an image from the running container: docker commit CONTAINER_ID
Tag the image: docker tag IMAGE_ID tag:v1 (you can get the image ID from the previous command)
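Put together, the sequence might look roughly like this (a sketch; the image name, paths, and build script are placeholders, not from a real project):
# start a container from the base image with the big directory bind-mounted into it,
# kept alive by a long-running command
docker run -d --name builder -v /mnt/external/data:/data base-image sleep infinity
# run the build inside the container
docker exec -it builder /data/build-something.sh
# snapshot the container into an image (docker commit prints the new image ID) and tag it
docker commit builder
docker tag IMAGE_ID tag:v1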
From a long-term perspective this method may seem tedious, but if you only want to build the image once or twice, you can try it.
I tried this for one of my docker images, as I wanted to avoid sending a large number of files to the docker daemon during the image build.
The copy command takes source and destination values; just specify the full absolute path to your hard drive mount point as the src directory:
COPY /absolute_path/to/harddrive /container/path
Is it possible for a Dockerfile to copy over some file from the host filesystem and not from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it with this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ path?
docker build .
The usual way to get "external" files into your docker container is by copying them into your build directory before starting the docker build. It is strongly recommended to create a script for this to ensure that your preparation step is reproducible.
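For example, a small wrapper script along these lines (a sketch; the image tag my-image is just a placeholder) makes the preparation step reproducible:
#!/bin/sh
set -e
# copy the external file into the build context just for the build
cp ~/.super/secrets.yaml ./secrets.yaml
docker build -t my-image .
# remove the copy so it does not linger in the project directory
rm ./secrets.yaml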
No, it is not possible to go up out of the build context. Here is why.
When running docker build ., have you ever considered what that dot at the end stands for? Well, here is part of the docker documentation:
The docker build command builds Docker images from a Dockerfile and a
“context”. A build’s context is the set of files located in the
specified PATH or URL. The build process can refer to any of the files
in the context. For example, your build can use a COPY instruction to
reference a file in the context.
As you can see, this dot references the context path (here it means "this directory"). All files under the context path get sent to the docker daemon, and you can reference only these files in your Dockerfile. Of course, you might think you are clever and reference / (the root path) so that you have access to all files on your machine. (I highly encourage you to try this and see what happens.) What you should see happen is that the docker client appears to freeze. Or does it really? Well, it's not really freezing, it's sending the entire / directory to the docker daemon, and that can take ages or (what's more probable) you may run out of memory.
So now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that this will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need at runtime, it's better to pass it in via an environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
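One such alternative for the build-time case, assuming BuildKit is available, is a build secret mount: the file is exposed only during that RUN instruction and never stored in an image layer. This is only a sketch reusing the secrets.yaml path from the question; the ./gradlew build command stands in for whatever build step actually needs the secret:
# syntax=docker/dockerfile:1
FROM gradle:latest
# the secret is available at /run/secrets/supersecrets only while this RUN step executes
RUN --mount=type=secret,id=supersecrets ./gradlew build
and build with:
DOCKER_BUILDKIT=1 docker build --secret id=supersecrets,src=$HOME/.super/secrets.yaml .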
I am trying to dockerize my first Go Project (Although the question has nothing to do with Go, I guess!).
Short summary (of what the code is doing) - It simply checks whether a .cache folder is present and creates it if it doesn't exist.
After dockerizing the project, my goal is to mount the path within the container where .cache is created to a host path.
Here's my Dockerfile (Multistaged):
FROM golang as builder
ENV GO111MODULE=on
WORKDIR /proj
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
RUN ls
FROM alpine
COPY --from=builder /proj/project /proj/
RUN chmod a+x /proj/project
ENTRYPOINT [ "/proj/project" ]
EDIT: If I run something like this (as @Jan Garaj mentioned in the comments):
docker run --rm -v "`pwd`/data/.cache:/proj/.cache/" project-image:latest
it doesn't throw an error, but it creates an empty data/.cache folder on the host, with none of the actual files and folders from the container's .cache directory. The executable inside the container is, however, able to create the .cache directory and its subsequent files and folders.
I know variations of this problem have been asked a lot of times, but trust me, I've tried out all those solutions. The following are some of the questions:
Error response from daemon: OCI runtime create failed: container_linux.go:296
A GitHub issue which looked familiar - Still doesn't have an answer and is open.
Another GitHub issue - Probably the best link so far, but I still couldn't get it to work.
The fact that removing the volume flag makes the run command to work is confusing me a lot.
Can someone please explain what's going on in this case and point me in the right direction?
P.S. - Also, I'm running Docker on macOS (macOS High Sierra, to be specific) and I had to enable file sharing in Docker -> Preferences -> File Sharing for the host mount path (just an extra piece of information!).
Needless to say, I have also tried overriding ENTRYPOINT by firing something like /bin/sh /proj/project, which also didn't work (it couldn't find the executable project even when given the full path from the root). I read somewhere that the alpine image only has sh and doesn't have bash. I am also changing the permissions of my executable project to a+x while building the image, which doesn't help either.
Please do let me know if any part of the question is unclear. I've also checked in my code here in GitHub if anyone wants to reproduce the error.
When you mount your working directory's data subdirectory onto the /proj directory inside the container, that entire folder, including the binary you've compiled and copied in there, will no longer be available. Instead, the contents of your data directory will be available inside the container at /proj. Essentially, you are 'hiding' the container image's version of the directory and replacing it with a directory from outside the container.
This is because the -v flag, with the argument you've given it, creates a bind mount and uses the second parameter (/proj) as the mount target.
To solve the problem, either copy the binary to a different directory (and change the ENTRYPOINT instruction correspondingly), or choose a different target for the bind mount.
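For instance, a sketch of the first option (the /usr/local/bin path is just an illustrative choice, not something from the original project): move the binary out of /proj in the final stage, so a bind mount under /proj can no longer hide it.
FROM alpine
# keep the binary outside the directory tree that gets bind-mounted at run time
COPY --from=builder /proj/project /usr/local/bin/project
RUN chmod a+x /usr/local/bin/project
ENTRYPOINT [ "/usr/local/bin/project" ]
docker run --rm -v "`pwd`/data/.cache:/proj/.cache/" project-image:latest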
I'm new to using docker for development, but I wanted to try it in my latest project and have run into a couple of questions.
I have a scenario where I want to link the current project directory as a volume to a running docker container in development mode, so that file changes can be made locally without restarting the container each time. To do this, I have the following command:
docker run --name app_instance -p 3100:80 -v $(pwd):/app appimage
In contrast, in production I want to copy files from the current project directory.
E.g., in the Dockerfile I have ADD . /app (with a .dockerignore file to ignore certain folders). Also, I would like to mount a volume for persistent storage. For this scenario, I have the following command:
docker run --name app_instance -p 80:80 -v ./filestore:/app/filestore appimage
My problem is that, with only one Dockerfile, the development command will mount a volume at /app while the Dockerfile also copies files there with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
My question is, what is the best practice to handle such a situation?
Solutions I have thought of:
Mount the project folder to a path other than /app during development and ignore the /app directory the Dockerfile creates in the container
Have two Dockerfiles: one that copies the current project and one that does not.
My problem is that, with only one Dockerfile, the development command will mount a volume at /app while the Dockerfile also copies files there with ADD . /app. I haven't tested what happens in this scenario, but I am assuming it is incorrect to have both for the same destination.
For this scenario, it will do as follows:
a) When you docker build, the ADD instruction copies your code from the host into the /app folder in the image.
b) When you docker run with the bind mount, your local directory is mounted over that folder in the container, so the container always sees your latest development code.
The mount overrides the contents you added in the Dockerfile, so this should meet your requirements. You should try it; there is no need for a more complex solution.
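Concretely, with a single Dockerfile that does ADD . /app, the two scenarios from the question would be run roughly like this (a sketch reusing the question's image name and ports):
# development: the bind mount shadows whatever ADD . /app put into the image
docker run --name app_instance -p 3100:80 -v $(pwd):/app appimage
# production: nothing is mounted over /app, so the files baked in at build time are used;
# only the filestore subdirectory gets a persistent mount
docker run --name app_instance -p 80:80 -v $(pwd)/filestore:/app/filestore appimage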
I have got a Linux VM Docker image up and running, but I have encountered one difficulty.
All the assets that were in my wwwroot folder cannot be found:
Failed to load resource: the server responded with a status of 404 (Not Found)
I have included
"webroot": "wwwroot"
in the project.json file, but that doesn't fix the problem. One more thing: when running from VS 2015 (on IIS Express) everything works - is there something that I should include in the Dockerfile as well?
EDIT:
I added VOLUME to the Dockerfile but that did not help:
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["kpm", "restore"]
VOLUME ["/wwwroot"]
EXPOSE 5004
ENTRYPOINT ["k", "kestrel"]
Are you working through the example here: asp? I don't know much about ASP, but I think you are pretty close. First, I don't think you need to modify the Dockerfile. You can always mount a volume; the VOLUME keyword just declares it as necessary. But you do need to modify your project.json file like you have shown, with one difference:
"webroot": "/webroot"
I am assuming that the name is "webroot" and the directory to look in (for the project) is "/webroot". Then, build it, like the example shows:
docker build -t myapp .
So, when you run this do:
docker run -t -v $(pwd)/webroot:/webroot -d -p 80:5004 myapp
What this docker run command does is take your webroot directory from the current directory ($(pwd)) and mount it in the container, calling that mount /webroot. In other words, your container must reference /webroot (not webroot, which would be relative to WORKDIR, I think).
I think the bottom line is that there are two things going on here. The first is 'building' the image; the second is running it. When you run it, you provide the volume that you want mounted. As long as your application respects the project.json file's "webroot" value as the place to look for the web pages, this will work.