I am working on an Express server written in TypeScript. The project has an npm build script that takes the files in the src folder and compiles them into the dist folder; both folders live under the root directory. The project works, but while moving everything to Docker, although I am mounting the volume and the built files (the dist directory) are there in the container, the changes are not reflected on the host. (I am using Windows + VirtualBox for Docker.)
I have referred to this and this question. Their scenario is the same as mine, but their solutions don't work for me. I made sure I am using the techniques mentioned in their answers, but they don't work either.
Directory Structure of the Project:
├── Backend
│ ├── src/
│ │ ├── controllers/
│ │ ├── models/
│ │ ├── routes/
│ │ ├── services/
│ │ ├── index.ts
│ │ └── server.ts
│ ├── dist/ (Created upon compilation)
│ │ ├── controllers/
│ │ ├── data/ (Created upon starting the server)
│ │ ├── models/
│ │ ├── routes/
│ │ ├── services/
│ │ ├── index.js
│ │ └── server.js
│ ├── Dockerfile
│ ├── package.json
│ ├── tsconfig.json
│ ├── tslint.json
│ └── .env
├── Frontend/ (This part is an independent application)
├── docker-compose.yml
└── README.md
When the server starts, it creates a directory named data in %proj_root%/Backend/dist, which is used to feed input to the application via txt files. Compilation works fine, as evident from the ls commands I have put in the Dockerfile, but the changes made inside the container (creation of the dist directory's contents) aren't reflected on the host. On the host, the dist directory is empty, causing the server to crash because there is no server.js file.
Here is my docker-compose.yml:
version: "3"
services:
  backend:
    build:
      context: ./Backend/
    volumes:
      - ./Backend/dist:/app/dist
      - /app/node_modules
      - ./Backend:/app
  frontend:
    build:
      context: ./Frontend
    ports:
      - "3001:8080"
    volumes:
      - /app/node_modules
      - ./Frontend:/app
Here's the Dockerfile for Backend service:
FROM node:8
WORKDIR /app
COPY ./package.json .
RUN npm install
# Copying everything to enable standalone usage
COPY . .
RUN ls # Logging before tsc build
RUN npm run build
RUN ls /app/dist # Logging after tsc build. All the built files are visible.
CMD ["npm", "run", "start"]
Upon running docker-compose up, a dist folder should be created in the container (/app/dist) and should be reflected on the host as %proj_root%/Backend/dist.
I understand I could create a script which compiles TS and then runs docker-compose, but that looks like a hacky approach to me. Is there a better solution?
The docker-compose.yml setup you show does two independent things. First, it builds a Docker image, using the Dockerfile you give it in isolation. Second, it takes that image, mounts volumes and applies other settings, and runs a container based on those settings. The Docker image build sequence ignores all of these other settings; nothing you do in the Dockerfile can ever change files on the host system.
When you run a container, whatever content is in the volumes: settings you pass in completely replaces what came out of that image. This is always a one-way "push into the container": the contents of your host's Backend directory replace /app in the container, the contents of an anonymous volume replace its node_modules, and the contents of the host's dist directory replace /app/dist.
There is one special-case exception to this. When you start a container, if the volume is empty, the content from the image gets copied into the volume. This happens only if there is absolutely nothing in the volume tree at all. If there is already content in the host's dist directory or in the node_modules anonymous volume, that content replaces whatever was in the image, even if the image has changed since (or the volume has; Docker has no way to tell).
As a one-off workaround, if you
rm -rf dist
then the next time you launch the container, Docker will notice that the dist directory is empty and repopulate it from the image.
I'd recommend just deleting these volumes: settings altogether. If you're actively developing the software, do it on the host: Node is very easy to install with typical OS package managers, your IDE won't be confused by your Node interpreter being hidden inside a container, and you won't hit this sort of problem. When you go to deploy, you can just use the Docker image as-is, without separately distributing the code that's also inside the image.
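As a minimal sketch, this is what the docker-compose.yml above would look like with the volumes: settings removed (keeping the existing build contexts and port mapping):

```yaml
version: "3"
services:
  backend:
    build:
      context: ./Backend/
  frontend:
    build:
      context: ./Frontend
    ports:
      - "3001:8080"
```

The containers then run exactly the code baked into their images; after changing the source, rebuild with docker-compose up --build.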
Related
I'm trying to create a Dockerfile that copies all package.json files into the image but keeps the folder structure.
This is what I have now:
FROM node:15.9.0-alpine as base
WORKDIR /app/
COPY ./**/package.json ./
CMD ls -laR /app
Running with: sudo docker run --rm -it $(sudo docker build -q .)
But it copies only one package.json file and puts it in the base directory (/app).
Here is the directory I'm testing on:
├── Dockerfile
├── t1
│ └── package.json
└── t2
└── ttt
├── b.txt
└── package.json
And I would like it to look like this inside the container:
├── Dockerfile
├── t1
│ └── package.json
└── t2
└── ttt
└── package.json
The Dockerfile COPY directive is documented as using the Go filepath.Match function for glob expansion. That only supports the basic glob characters *, ?, [a-z], but not extensions like ** that some shells support.
Since COPY only takes a filename glob as input and it likes to flatten the file structure, I don't think there's a way to do the sort of selective copy you're describing in a single command.
Instead you need to list out the individual files you want to copy. COPY will create directories as needed, but that means you need to repeat paths on both sides of COPY.
COPY t1/package*.json t1/
COPY t2/ttt/package*.json t2/ttt/
I can imagine some hacky approaches using multi-stage builds; have an initial stage that copies in the entire source tree but then deletes all of the files except package*.json, then copies that into the actual build stage. I'd contemplate splitting my repository into smaller modules with separate Dockerfiles per module first.
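That multi-stage idea could be sketched like this (a sketch, not a tested recipe; the find-based pruning step is my assumption): an initial stage copies the whole tree and deletes every file except the package.json files, and the real stage copies the pruned tree, which preserves the directory structure.

```Dockerfile
# Stage 1: copy everything, then prune down to just the package.json files
FROM node:15.9.0-alpine as manifests
WORKDIR /app/
COPY . .
# Delete all regular files except package*.json; directories remain
RUN find . -type f ! -name 'package*.json' -delete

# Stage 2: copy the pruned tree; t1/ and t2/ttt/ keep their structure
FROM node:15.9.0-alpine as base
WORKDIR /app/
COPY --from=manifests /app/ ./
CMD ls -laR /app
```

The COPY --from copies a directory tree rather than a glob, so nothing gets flattened; the cost is an extra stage that briefly holds the full source tree.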
So, I have a dummy project, which file structure looks like this:
docker-magic
├── Dockerfile
├── .dockerignore
├── just_a_file
├── src
│ ├── folder_one
│ │ ├── main.go
│ │ └── what_again.sh
│ ├── folder_two
│ │ └── what.kts
│ └── level_one.go
└── top_level.go
My Dockerfile looks like this:
FROM ubuntu:latest as builder
WORKDIR /workdir
COPY ./ ./
RUN find . -type f
ENTRYPOINT ["echo", "wow"]
I build this image with a docker build . -t docker-magic:test --no-cache command to avoid caching results.
The idea is simple - I copy all of the files from docker-magic folder into my image and then list all of them via find . -type f.
Also I want to ignore some of the files. I do that with a .dockerignore file. According to the official Docker docs:
The placement of ! exception rules influences the behavior: the last line of the .dockerignore that matches a particular file determines whether it is included or excluded.
Let's consider the following contents of .dockerignore:
**/*.go
It should exclude all the .go files. And it does! I get the following contents from find:
./.dockerignore
./src/folder_one/what_again.sh
./src/folder_two/what.kts
./just_a_file
./Dockerfile
Next, let's ignore everything. .dockerignore is now:
**/*
And, as expected, I get empty output from find.
Now it gets difficult. I want to ignore all the files except .go files. According to the docs, it would be the following:
**/*
!**/*.go
But I get the following output from find:
./top_level.go
Which is obviously not what I expected, because the other .go files, as we have seen, also match this pattern. How do I get the result I wanted: copying only the .go files into my image?
EDIT: my Docker version is 20.10.5, build 55c4c88.
I have a Dockerfile to build releases for an Elixir/Phoenix application. The directory tree is as follows; the Dockerfile (which has a dependency on this other Dockerfile) is in the "infra" subfolder and needs access to all the files one level above "infra".
.
├── README.md
├── assets
│ ├── css
│ ├── js
│ ├── node_modules
│ ├── package-lock.json
│ ├── package.json
├── lib
├── infra
│ ├── Dockerfile
│ ├── config.yaml
│ ├── deployment.yaml
The Dockerfile looks like:
# https://github.com/bitwalker/alpine-elixir
FROM bitwalker/alpine-elixir:latest
# Set exposed ports
EXPOSE 4000
ENV PORT=4000
ENV MIX_ENV=prod
ENV APP_HOME /app
ENV APP_VERSION=0.0.1
COPY ./ ${HOME}
WORKDIR ${HOME}
RUN mix deps.get
RUN mix compile
RUN MIX_ENV=${MIX_ENV} mix distillery.release
RUN echo $HOME
COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
RUN tar -xzvf my_app.tar.gz
USER default
CMD ./bin/my_app foreground
The command "mix distillery.release" is what builds the my_app.tar.gz file in the path indicated by the COPY command.
I invoke the docker build as follows in the top-level directory (the parent directory of "infra"):
docker build -t my_app:local -f infra/Dockerfile .
I basically then get an error with COPY:
Step 13/16 : COPY ${HOME}/_build/${MIX_ENV}/rel/my_app/releases/${APP_VERSION}/my_app.tar.gz .
COPY failed: stat /var/lib/docker/tmp/docker-builder246562111/opt/app/_build/prod/rel/my_app/releases/0.0.1/my_app.tar.gz: no such file or directory
I understand that the COPY command depends on the "build context", but I thought that issuing docker build in the parent directory of infra gave me the appropriate context for the COPY; clearly that isn't the case. Is there a way to have a Dockerfile one level below the parent directory that contains all the files needed to build an Elixir/Phoenix "release" (the my_app.tar.gz and associated files created via mix distillery.release)? What bits am I missing?
I have a project including multiple Dockerfiles.
The tree is like,
.
├── app1
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── app2
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── common
│ └── my_lib.py
└── deploy.sh
To build my application, common/ is necessary, and we have to COPY it inside the Dockerfile.
However, a Dockerfile cannot COPY files from outside its build context, such as its parent directory.
To be precise, it is possible if we run docker build with the -f option from the project root.
But I would not like to do this because the build context will be unnecessarily large.
When building app1, I don't like to include app2/huge_modules/ in the build context (the same as when building app2).
So, I prepare a build script in each app directory.
Like this.
cd $(dirname $0)
cp ../common/* ./
docker build -t app1 .
But this solution seems ugly to me.
Is there a good solution for this case?
Build a base image containing your common library, and then build your two app images on top of that. You'll probably end up restructuring things slightly to provide a Dockerfile for your common files:
.
├── app1
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── app2
│ ├── Dockerfile
│ ├── app.py
│ └── huge_modules/
├── base
│ ├── Dockerfile
│ └── common
│ └── my_lib.py
└── deploy.sh
You start by building a base image:
docker build -t mybaseimage base/
And then your Dockerfile for app1 and app2 would start with:
FROM mybaseimage
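As a sketch of the two Dockerfiles (assuming the apps are plain Python scripts and reusing the mybaseimage name from the build command above; adjust the base image and commands to your actual toolchain):

```Dockerfile
# --- base/Dockerfile ---
# Bakes the shared library into a reusable image
FROM python:3.9
WORKDIR /app
COPY common/ ./common/

# --- app1/Dockerfile ---
# Starts from the base image, so common/ is already present;
# the build context is just app1/, so app2/huge_modules/ stays out
FROM mybaseimage
COPY . .
CMD ["python", "app.py"]
```

Each app is then built from its own directory (docker build -t app1 app1/), so its context contains only that app's files.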
One possible solution is to start the build process from the top directory, with the -f flag you mentioned, dynamically generating the .dockerignore file.
That is, let's say you are currently building app1. You would first create, in the top directory, a .dockerignore file with the content app2, then run the build process. After the build finishes, remove the .dockerignore file.
Now you want to build app2? No problem! Similarly generate first dynamically a .dockerignore file with the content app1, build and remove the file. Voila!
I tried to Dockerize a Beego application, but the HTML rendering is not finding the HTML files stored inside the view/templates directory.
FROM golang:1.13
WORKDIR /go/src/fileUpload
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
EXPOSE 8080
# Install server application
CMD ["go", "run", "./main/main.go"]
You could try to set the directory containing the templates inside the Docker image.
beego.BConfig.WebConfig.ViewsPath = "myviewpath"
https://beego.me/docs/mvc/view/view.md#template-directory
Edit: directory structure
It is difficult to answer the question, as the directory layout is not clear. However, I can give an example based on quickstart:
export GOPATH="$HOME/go"
bee new quickstart
In $GOPATH/src/quickstart/Dockerfile:
FROM golang:1.13
WORKDIR /go/src/quickstart
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
EXPOSE 8080
# Install server application
CMD ["go", "run", "main.go"]
Note that I do not have a directory (./main) in front of main.go. This is what the structure of the app looks like:
tim#sky:~/go/src/quickstart$ tree
.
├── conf
│ └── app.conf
├── controllers
│ └── default.go
├── Dockerfile
├── main.go
├── models
├── routers
│ └── router.go
├── static
│ ├── css
│ ├── img
│ └── js
│ └── reload.min.js
├── tests
│ └── default_test.go
└── views
└── index.tpl
If the views directory in your app is in a different place, you need to add the correct path in main.go as described in my initial answer.