I thought I had seen examples of this before, but cannot find anything in my Docker-tagged bookmarks or in the Docker in Action book.
The system I am currently working on has a Docker container providing an nginx webserver, and a second container providing a Java application server. The Java app-server also handles the static content of the site (HTML, JS, CSS). Because of this, the building and deployment of changes to the non-Java part is tightly coupled to the app-server. I am trying to separate it from that.
My goal is to be able to produce a third container that provides only the (static) web-application files, something that can be easily updated and swapped out as needed. I have been reading up on Docker volumes, but I'm not 100% certain that is the solution I need here. I simply want a container with a number of files, that provides these files at some given mount-point to the nginx container, in a read-only fashion.
The rough steps would be:
Start with a node.js image
Copy the content from the local instance of the git repo
Run yarn install and yarn build on the content, creating a build/ directory in the process
Copy this to somewhere stand-alone
Result in a container that contains only the results of the "build" step
Is this something that can be done? Or is there a better way to architect this in terms of the Docker ecosystem?
A Docker container fundamentally runs a program; it's not just a collection of files. Probably the thing you want to do with these files is serve them over HTTP, and so you can combine a multi-stage build with a basic Nginx server to do this. A Dockerfile like this would be fairly routine:
# first stage: install dependencies and run the asset build
FROM node AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY ./ ./
RUN yarn build

# second stage: copy only the build output into a stock Nginx image
# (adjust "dist" to "build" if that's what your tooling produces)
FROM nginx
COPY --from=build /app/dist/ /usr/share/nginx/html/
This could be its own container, or it could be the same Nginx you have as a reverse proxy already.
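As a rough usage sketch (the image name, container name, and published port here are just placeholders):

docker build -t my-static-assets .
docker run -d --name static-assets -p 8080:80 my-static-assets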
Other legitimate options include combining this multi-stage build with your Java application container, so you keep a single server but move the asset build pipeline into Docker. You could also run the build sequence completely outside of Docker and upload the output to an external service that can host your static files (Amazon S3 can work for this if you're otherwise in an AWS environment).
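If you go the external-hosting route, a minimal sketch might look like this (the bucket name is purely illustrative, and it assumes the AWS CLI is already configured):

yarn install
yarn build
aws s3 sync dist/ s3://example-assets-bucket/ --delete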
Docker volumes are not a good match for something you would build and deploy. Uploading content into a named volume can be tricky (more so if you eventually move this to Kubernetes) and it's more straightforward to version an image than a collection of files in a volume.
Related
I'd like to share some files via a Docker container, but I'm not sure how. I have a project that has several scripts in it. I also have several VMs that need access to those scripts, and in particular to the latest versions. I'd like to build a Docker container that has those scripts inside of it, and then have my VMs be able to mount the container and access the scripts. I tried https://hub.docker.com/r/erichough/nfs-server/ and "baking" the files in, but I don't think that does what I want it to do. Here is the Dockerfile:
FROM erichough/nfs-server:latest
COPY ./Scripts /etc/exports/
EXPOSE 2049
It fails, saying that I need to define /etc/exports. Looking at the entrypoint.sh, it expects /etc/exports to be a file, presumably with the exported paths inside. So I tried creating an exports.txt file that has the path to my files:
exports.txt:
./Scripts
Dockerfile:
FROM erichough/nfs-server:latest
ADD ./exports.txt /etc/exports
EXPOSE 2049
No bueno. Is there a way to accomplish this? My end goal is a Docker image in my registry that I can run in my stack. Whenever I update the scripts I push a new image.
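(For reference, a standard /etc/exports entry names an absolute path inside the container followed by client options, not a relative path, so with the scripts copied to /Scripts rather than into /etc/exports/ the file would look more like the line below; the options shown are only an illustration.)

/Scripts *(ro,no_subtree_check)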
Is it possible for a Dockerfile to copy a file from the host filesystem rather than from the context it's being built from?
# Inside Dockerfile
FROM gradle:latest
COPY ~/.super/secrets.yaml /opt
# I think you can work around it like this, but it doesn't look nice
COPY ../../../../../../.super/secrets.yaml /opt
when I run the command from the /home/user/some/path/to/project/ directory?
docker build .
The usual way to get "external" files into your Docker image is to copy them into your build directory before starting the docker build. It is strongly recommended to create a script for this so that your preparation step is reproducible.
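A minimal sketch of such a preparation script, reusing the paths from the question (the image name is just an example):

#!/bin/sh
set -e
# copy the external file into the build context so COPY can see it
cp ~/.super/secrets.yaml ./secrets.yaml
# build the image, then remove the temporary copy
docker build -t myproject .
rm ./secrets.yaml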
No, it is not possible to go up above the build context. Here is why.
When running docker build ., have you ever considered what the dot at the end stands for? Here is part of the Docker documentation:
The docker build command builds Docker images from a Dockerfile and a "context". A build's context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.
As you can see, this dot refers to the context path (here it means "this directory"). All files under the context path get sent to the Docker daemon, and you can reference only those files in your Dockerfile. Of course, you might think you can be clever and use / (the root path) as the context so you have access to every file on your machine. (I highly encourage you to try this and see what happens.) What you will see is that the Docker client appears to freeze. Does it really freeze, though? Not quite: it is sending the entire / directory to the Docker daemon, which can take ages or (more likely) make you run out of memory.
Now that you understand this limitation, you can see that the only way to make it work is to copy the file you are interested in into the context path and then run the docker build command.
When you do docker build ., that last argument is the build context directory: you can only access files from it.
You could do docker build ../../../, but then every single file under that directory will get packaged up and sent to the Docker daemon, which will be slow.
So instead, do something like:
cp ../../../secret.yaml .
docker build .
rm secret.yaml
However, keep in mind that will result in the secret being embedded in the image forever, which might be a security risk. If it's a secret you need for runtime, better to pass it in via environment variable at runtime. If you only need the secret for building the image, there are other alternatives, e.g. https://pythonspeed.com/articles/docker-build-secrets/.
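As a sketch of one such build-time alternative, BuildKit's build secrets mount the file only for a single RUN step, so it never ends up in an image layer (the secret id and the command using it are just examples):

# syntax=docker/dockerfile:1
FROM gradle:latest
# the secret is available at /run/secrets/<id> only during this RUN step
RUN --mount=type=secret,id=app_secrets \
    wc -c /run/secrets/app_secrets

and the build command would be something like:

DOCKER_BUILDKIT=1 docker build --secret id=app_secrets,src=$HOME/.super/secrets.yaml .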
I am trying to set up a Windows nanoserver container as a sidecar container holding the certs that I use for SSL. Because the SSL cert I need changes in each environment, I need to be able to swap the sidecar container (i.e. a dev-cert container, a prod-cert container, etc.) at startup time. I have worked out the configuration problems, but am having trouble using the same pattern that I use for Linux containers.
On Linux containers, I simply copy my files into the image and use the VOLUME instruction to export the volume. Then, on my main application container, I can use volumes_from to import the volume from the sidecar.
I have tried to follow that same pattern with nanoserver and cannot get it working. Here is my Dockerfile:
# Building stage
FROM microsoft/nanoserver
RUN mkdir c:\\certs
COPY . .
VOLUME c:/certs
The container builds just fine, but I get an error when I try to run it. The Dockerfile documentation says the following:
Volumes on Windows-based containers: When using Windows-based containers, the destination of a volume inside the container must be one of:
a non-existing or empty directory
a drive other than C:
so I thought, easy, I will just switch to the D drive (because I don't want to export an empty directory like #1 requires). I made the following changes:
# Building stage
FROM microsoft/windowsservercore AS build
VOLUME ["d:"]
WORKDIR c:/certs
COPY . .
RUN copy c:\certs d:
and this container actually started properly. However, I missed the part of the docs where it says:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
so, when I checked, I didn't have any files in the d:\certs directory.
So how can you mount a drive for external use in a Windows container if, per #1, the directory must be empty to declare a VOLUME on the C drive, and you must otherwise use VOLUME to create a D drive, which is pointless because anything put there will not be in the final container?
Unfortunately you cannot use Windows container volumes in this way. This limitation is also the reason why using database containers (like microsoft/mssql-server-windows-developer) is a real pain: you cannot create a volume on a non-empty database folder, and as a result you cannot restore databases after the container is re-created.
As for your use case, I would suggest using a reverse proxy (Nginx, for example).
You create another container with an Nginx server and the certificates inside. It handles all incoming HTTPS requests, terminates SSL/TLS, and passes requests to the inner application container over plain HTTP.
With such a deployment you don't have to copy and install HTTPS certificates into every application container. Certificates live in a single place, and you can switch dev/test/etc. certificates just by using different Nginx image versions (or by bind-mounting the certificate folder as a volume).
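A minimal sketch of such an Nginx sidecar image (using the stock Linux nginx image purely for illustration; the paths and nginx.conf are assumptions, with the config expected to terminate TLS and proxy plain HTTP to the app container):

FROM nginx
# each environment gets its own image tag; only the copied certs differ
COPY certs/dev/ /etc/nginx/certs/
# this config listens on 443, terminates TLS, and proxies to the app container
COPY nginx.conf /etc/nginx/conf.d/default.conf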
UPDATE:
Also, if you still want to use a sidecar container, you can try one small hack: basically, you move this operation
COPY . .
from build time to run time (after the container starts).
Something like this:
FROM microsoft/nanoserver
# bake the certificates into a non-volume directory at build time
COPY . c:/certs_in/
RUN mkdir c:\\certs_out
# export the (still empty) directory as a volume
VOLUME c:/certs_out
# at run time, copy the baked-in certs into the exported volume
CMD copy C:\certs_in\*.* C:\certs_out
I want to know how to automate my npm project better with Docker.
I'm using webpack with a Vue.js project. When I run npm run build I get an output folder ./dist; this is fine. If I then build a Docker image via docker build -t projectname . and run the container, everything works perfectly.
This is my Dockerfile (found here)
FROM httpd:2.4
COPY ./dist /usr/local/apache2/htdocs/
But it would be nice if I could just build the docker image and not have to build the project manually via npm run build. Do you understand my problem?
What could be possible solutions?
If you're doing all of your work (npm build and so on) outside of the container and your changes are infrequent, you could use a simple shell script to wrap the two commands, for example:
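A sketch of that wrapper, reusing the image name from your example:

#!/bin/sh
set -e
# build the front-end assets, then bake ./dist into the image
npm run build
docker build -t projectname .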
If you're doing more frequent iterative development you might consider using a task runner (grunt maybe?) as a container service (or running it locally).
If you want to do the task running/building inside of Docker, you might look at docker-compose. The exact details of how to set this up would require more detail about your workflow, but docker-compose makes it relatively easy to define and link multiple services in a single file, and to start and stop them with a simple set of commands.
We have a war which needs a configuration file to work.
We want to dockerize it. At the moment we're doing the following:
FROM tomcat:8.0
COPY apps/test.war /usr/local/tomcat/webapps/
COPY conf/ /config/
Our container is losing the advantages of Docker because it depends on the config file. So when we want to run the .war for other purposes we have to recreate the image, which isn't a good approach.
Is it possible to give a config-file as a parameter without mounting it as a volume? Because we don't want the config on our local machine. What could be a solution?
You can pass it as an environment variable (ENV), and I don't see how you are losing the advantages of Docker. Docker is essentially all about disposable, temporary containers. Ideally you want to build a new image for every new app version.
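As a rough sketch of the environment-variable approach (the variable name, file name, and image name are placeholders, and the application would need to read its configuration from the environment rather than from /config/):

docker run -d -p 8080:8080 \
  -e APP_CONFIG="$(cat conf/app-config.yml)" \
  my-tomcat-war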