I'm fairly new to Docker, but I'm trying to see if I can use it to build the frontend app for a project and then hand the built app off to another tool.
So ideally, I'd like to:
1) Set up the environment using a Dockerfile.
2) Run npm run build
What I'm not sure about is how I can access the build folder in the container from my host.
My Dockerfile is:
FROM kkarczmarczyk/node-yarn:latest
WORKDIR /app
ADD . /app
RUN yarn --ignore-engines
RUN yarn run build
Then I do:
docker build -t build-app .
From the output it looks like it builds properly, but I don't know how to get the built app out of the container. It's building to a /dist folder in the container.
How can I access it from the host?
You need to mount a volume from your host machine, which lets you share that particular directory bidirectionally with the container.
You could do this with something like:
docker run -v <host-path>:<container-path> <image-id>
Refer to this answer: docker mounting volumes on host
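Concretely, for the setup in the question (assuming the app builds into /app/dist, given the WORKDIR of /app), you could mount a host directory and copy the built artifacts into it at run time:

```shell
docker build -t build-app .
# mount ./out from the host and copy the built files into it
docker run --rm -v "$(pwd)/out:/out" build-app cp -r /app/dist /out
ls out/dist   # the built app, now on the host
```

The final `cp` overrides the image's default command, so the container exits as soon as the copy is done.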
When I run my Docker image, I get an error saying the image can't find a file which should be there.
Error
C:\Users\..\web>docker run mmy-app-1.0-snapshot:latest
Oops, cannot start the server.
ch.qos.logback.core.joran.spi.JoranException: Could not open URL [file:/deploy/my-app-1.0/logback_prod.xml].
My local machine is a Windows 10 machine on which I want to test my Docker image. Eventually, I'll install it on a virtual machine on Google Cloud. The image is of my Play Framework application, which needs only a Java runtime, as per this document - https://www.playframework.com/documentation/2.5.x/Deploying
The distribution of the application has two files in the zip: one for Linux, and a .bat file for Windows. I want to be able to run the Linux version on my Windows machine by using docker run. For this I plan to create a Docker image of my application.
Question 1) Can I use Docker to create such an image which I can run both on my windows machine and on a linux machine in the cloud?
Question 2) Will I need to create two separate images for linux and windows?
I have installed Docker Desktop for Windows from https://hub.docker.com/?overlay=onboarding and have created the following Dockerfile.
FROM openjdk:8
ENV APP_NAME my-app
ENV APP_VERSION 1.0-SNAPSHOT
#make a directory deploy in the container
RUN mkdir deploy
#cd to the deploy directory in the container
WORKDIR deploy
#copy from host (path relative to location of Dockerfile on host) to deploy directory. The deploy directory will have my-app-1.0.zip, logback_prod.xml and application_prod.xml
COPY target/universal/my-app-1.0.zip .
COPY conf/logback_prod.xml .
COPY conf/application_prod.conf .
#unzip deploy/my-app-1.0.zip in container
RUN unzip my-app-1.0.zip
#chmod my-app script in deploy/my-app-1.0/bin/my-app
RUN chmod +x my-app-1.0/bin/my-app
#entrypoint is deploy/....
ENTRYPOINT my-app-1.0/bin/codingjediweb -Dplay.http.secret.key=changemeplease -Dlogger.file=logback_prod.xml -Dconfig.file=application_prod.conf
The Dockerfile is at the path web on my Windows laptop, at the same level as the conf and target directories. I understand that in a Dockerfile the WORKDIR is a path in the image, and that when I use the COPY command, the first path (source) is a path on my local machine and the second path (target) is a path in the image.
Question 3) When I run docker run my-app-1.0-snapshot:latest, I get the error stated at the beginning of the question. Why is the file not found? I notice the URL is /deploy/..... As I had set the WORKDIR to ., shouldn't it be ./deploy?
The issue was that in the ENTRYPOINT I needed to specify the correct paths - ENTRYPOINT my-app-1.0/bin/codingjediweb -Dplay.http.secret.key=changemeplease -Dlogger.file=deploy/logback_prod.xml -Dconfig.file=deploy/application_prod.conf
It seems the WORKDIR is not applied to the -Dlogger.file and -Dconfig.file parameters.
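One way to sidestep this kind of ambiguity entirely (a sketch only; the binary name and flags are taken from the example above) is to use absolute paths in the ENTRYPOINT, so nothing depends on how relative paths are resolved:

```dockerfile
WORKDIR /deploy
ENTRYPOINT ["my-app-1.0/bin/codingjediweb", \
            "-Dplay.http.secret.key=changemeplease", \
            "-Dlogger.file=/deploy/logback_prod.xml", \
            "-Dconfig.file=/deploy/application_prod.conf"]
```

This uses the exec form of ENTRYPOINT, which also avoids an extra shell process wrapping the application.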
What are the best practices regarding this? Say, for example, I am running a Java or Python app, so I would run a Java- or Python-based container. But where should my apps live? Should they be baked into the image, or should they live in a folder mounted from the host?
You should copy the jar into the container, whether for a dev, stage, production, or test environment.
Developers sometimes mount binaries instead, but that is a rare case.
Here is a standard Dockerfile sample; "init" is a script that starts the application:
FROM openjdk:8-jdk-slim
ENV MYSQL_URL="mysql-master:3306"
EXPOSE 8080
COPY build/libs/app-*SNAPSHOT.jar /opt/app.jar
COPY init /opt/
WORKDIR /opt
VOLUME /logs
ENTRYPOINT ["/opt/init"]
One of the advantages is that you can test, deploy to stage, and deploy to production with the same container, and developers can pull it to run locally and reproduce an issue.
You can also include the version in the container tag. Then you will know from docker image ls which version of the app, with which required environment, is up.
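For example (the image name and version numbers here are illustrative):

```shell
docker build -t my-app:1.4.2 .
docker image ls my-app        # shows which versions exist locally
docker run -d my-app:1.4.2    # run a known, fixed version
```

Because the tag pins an exact build, rolling back is just running the previous tag.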
What is the best practice for using a Docker container for dev/prod?
Let's say I want my changes to be applied automatically during development, without rebuilding and restarting images. As far as I understand, I can mount a volume for this when running the container.
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
where `pwd`/src is the source code directory. It's working fine so far.
But how do I deliver code to production? I don't think it's a good idea to keep the source code alongside the binaries in the Docker container. Do I need to create another, similar Dockerfile which uses COPY instead? Or is it better to deploy the source code separately, as in dev mode, and mount it into the container?
The best practice is to build a new Docker image for every version of your code. That has many advantages in production environments: faster deployments, independence from other systems, easier rollbacks, exportability, etc.
It is possible to do it within the same Dockerfile, using multi-stage builds.
The following is a simple example for a Node.js app:
FROM node:10 as dev
WORKDIR /src
CMD ["node", "myapp.js"]

FROM dev
COPY package.json .
RUN npm install
COPY . .
Note that this Dockerfile is only for demo purposes, it can be improved in many ways.
When working in the dev environment, use the following commands to build the base image and run your code from a mounted folder:
docker build --target dev -t username/node-web-app0 .
docker run -v `pwd`/src:/src --rm -it username/node-web-app0
And when you're ready for production, just run docker build without the --target argument to build the full image, which contains the code:
docker build -t username/node-web-app0:v0.1 .
docker push username/node-web-app0:v0.1
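Rollbacks (one of the advantages mentioned above) then amount to pulling and running an earlier tag; the version tag here is illustrative:

```shell
# roll back to the previously released version
docker pull username/node-web-app0:v0.0
docker run -d username/node-web-app0:v0.0
```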
I want to copy my compiled war file to the Tomcat deployment folder in a Docker container. As COPY and ADD deal with copying files from the host into the container, I tried
RUN mv /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
as a modification of the answer to this question. But I am getting the error
mv: cannot stat '/tmp/projects/myproject/target/myproject.war': No such file or directory
How can I copy from one folder to another in the same container?
You can create a multi-stage build:
https://docs.docker.com/develop/develop-images/multistage-build/
Build the .war file in the first stage and name the stage, e.g. build, like this:
FROM my-fancy-sdk as build
RUN my-fancy-build #result is your myproject.war
Then in the second stage:
FROM my-fancy-sdk as build2
COPY --from=build /tmp/projects/myproject/target/myproject.war /usr/local/tomcat/webapps/
A better solution would be to use volumes to bind individual war files into the Docker container, as done here.
Why your command fails
The command you are running tries to access files which are outside the build context of the Dockerfile. When you build the image using docker build ., the daemon is sent the context, and only the files in it are accessible during the build. In docker build . the context is ., the current directory, so the build will not be able to access /tmp/projects/myproject/target/myproject.war.
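In other words, everything COPY and ADD can see must live under the build context, which is the final argument to docker build. Two ways to make the war reachable (paths taken from the question):

```shell
# either build from a directory that already contains the war...
cd /tmp/projects/myproject && docker build -f /path/to/Dockerfile .

# ...or copy the war into the current context before building
cp /tmp/projects/myproject/target/myproject.war . && docker build .
```

In the second variant the Dockerfile can then use COPY myproject.war instead of RUN mv.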
Copying from inside the container
Another option would be to copy while you are inside the container: first use volumes to mount the local folder inside the container, then go inside the container using docker exec -it <container_name> bash and copy the required files.
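A sketch of that manual approach (the container name and mount point here are illustrative):

```shell
docker run -d --name myapp \
  -v /tmp/projects/myproject/target:/mnt/target <image_name>
docker exec -it myapp bash
# then, inside the container:
#   cp /mnt/target/myproject.war /usr/local/tomcat/webapps/
```

Note this change lives only in the running container; it is lost when the container is removed, which is why the volume-based recommendation below is preferable.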
Recommendation
But still, I highly recommend using
docker run -v "/tmp/projects/myproject/target/myproject.war:/usr/local/tomcat/webapps/myproject.war" <image_name>
Suppose I check out my code from GitHub onto ~/repos/shinycode.
$> cd ~/repos/shinycode
$> ls
Dockerfile
www/index.html
$> cat Dockerfile
FROM nginx
ADD www /usr/share/nginx/html
For deployment, everything works fine: check out from GitHub and run docker build.
In the dev environment, however, I want to run the container using the same Dockerfile, but also live-edit files in the www directory, as I would if I supplied a -v $(pwd)/www:/usr/share/nginx/html option to docker run.
What is the best practice in this case? Should I have a separate Dockerfile for dev without the final ADD command? Am I going about this in the entirely wrong way?
Thanks
You can use the same Dockerfile and mount any external volume over the image's /usr/share/nginx/html folder. The volume mount takes precedence in the filesystem, and you won't see anything from the image at that location.
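For example, with the Dockerfile above (the image name and host port are assumptions):

```shell
docker build -t shinycode .
docker run --rm -p 8080:80 \
  -v "$(pwd)/www:/usr/share/nginx/html" shinycode
# edits to ./www/index.html are now visible immediately on port 8080
```

Omitting the -v flag gives you the deployment behaviour, so a single Dockerfile serves both cases.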