I'm using AWS ECR (Elastic Container Registry) for my Docker images.
My docker-compose.yml file looks like:
version: "3"
services:
  my-front-end:
    image: myFrontEndImage:myTag
    links:
      - my-back-end
    ports:
      - "8080:8080"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
  my-back-end:
    image: myBackEndImage:myTag
    ports:
      - "3000:3000"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
And what I need is to be able to pass an environment variable from the docker-compose file into my Docker image.
What I tried was adding the lines for environment (following the example).
version: "3"
services:
  my-front-end:
    image: myFrontEndImage:myTag
    links:
      - my-back-end
    environment:
      - BACKEND_SERVER_PORT=3001
    ports:
      - "8080:8080"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
  my-back-end:
    image: myBackEndImage:myTag
    ports:
      - "3000:3000"
    logging:
      driver: 'json-file'
      options:
        max-size: "50m"
And then in my project (which is a VueJS project) I'm trying to access it via process.env.BACKEND_SERVER_PORT. But I do not see my value, and when I tried console.log(process.env); I saw that it contains only {NODE_ENV: "development"}.
So my question here is: how do I pass the env variable from docker-compose to my Docker image, so that I can use it inside my project?
Everything in the project works fine; I've been working on this project for a long time and the docker-compose file works. It's just that now, when I need to add this environment variable, I can't make it work.
EDIT: adding a few more files, per the request in the comments.
The Dockerfile for my-front-end looks like:
FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm i npm@latest -g && \
    npm install
COPY . .
CMD ["npm", "start"]
As mentioned, this is a VueJS application, and here is the part of package.json you may be interested in:
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
"build": "node build/build.js"
},
While the Dockerfile for my-back-end looks like:
FROM node:8.11.1
WORKDIR /app/server
COPY package*.json ./
RUN npm i npm@latest -g && \
    npm install
COPY . .
CMD ["npm", "start"]
My back-end is an Express.js app that listens on a separate port; the app lives in a server folder under the project root.
Here is the part of package.json which you may be interested in:
"scripts": {
"start": "nodemon src/app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
I think you are doing everything right in terms of configuring docker-compose. It seems there are some nuances to passing an environment variable to a VueJS application.
According to the answers to this question, you need to prefix your variables with VUE_APP_ to be able to read them from the client side.
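For example (a sketch based on the compose file above, with only the variable name changed):

  my-front-end:
    image: myFrontEndImage:myTag
    links:
      - my-back-end
    environment:
      - VUE_APP_BACKEND_SERVER_PORT=3001

On the client side you would then read process.env.VUE_APP_BACKEND_SERVER_PORT instead of process.env.BACKEND_SERVER_PORT.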
I Dockerized a MENN (Next.js) stack app, and now everything works fine. I run into issues when I need to install npm packages. Let me first show you the structure.
src/server/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qyg nodemon@2.0.7
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/client/Dockerfile
FROM node:14-alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install -qy
COPY . .
CMD ["npm", "run", "dev"]
src/docker-compose.yml
version: "3"
services:
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    networks:
      - mern-network
    volumes:
      - ./client/src:/usr/app/src
      - ./client/public:/usr/app/public
    depends_on:
      - server
    environment:
      - REACT_APP_SERVER=http://localhost:5000
      - CHOKIDAR_USEPOLLING=true
    command: npm run dev
    stdin_open: true
    tty: true
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mern-network
    volumes:
      - ./server/src:/usr/app/src
    depends_on:
      - db
    environment:
      - MONGO_URL=mongodb://db:27017
      - CLIENT=http://localhost:3000
    command: /usr/app/node_modules/.bin/nodemon -L src/index.js
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - mern-network
    volumes:
      - mongo-data:/data/db
networks:
  mern-network:
    driver: bridge
volumes:
  mongo-data:
    driver: local
Now if I install any packages on the host machine, package.json is updated as expected, and if I run
docker-compose build
the package.json inside the container is also updated, which is fine. But I feel like this kind of breaks the whole point of having your app Dockerized! If multiple developers need to work on this app and they all need to install Node/npm on their machines, what's the point of using Docker other than for deployments? So what I do right now is:
sudo docker exec -it cebc4bcd9af6 sh   # log into the server container
and run a command, e.g.
npm i express
It installs the package and updates package.json inside the container, but the host package.json is not updated, and if I run the build command again all changes are lost, since the Dockerfile copies the host source code into the container. Is there a way to synchronize the container and the host, so that installing a package inside my container also updates the host files? That way I wouldn't need Node/npm installed locally, which fulfills the purpose of having the app Dockerized!
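One possible way to keep the two in sync (a sketch, not from the original post; paths assume the compose file above) is to bind-mount the package manifests alongside src:

  server:
    volumes:
      - ./server/src:/usr/app/src
      - ./server/package.json:/usr/app/package.json
      - ./server/package-lock.json:/usr/app/package-lock.json

Then a package can be installed through a one-off container, and npm writes through the bind mounts so the host copies are updated too:

docker-compose run --rm server npm i express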
I want to build my Next.js project with Docker, but I ran into this error:
Error: Could not find a production build in the '/var/app/.next' directory. Try building your app with 'next build' before starting the production server. https://nextjs.org/docs/messages/production-start-no-build-id
Dockerfile:
FROM node:16-alpine
RUN mkdir -p /var/app
COPY ["./", "/var/app"]
WORKDIR /var/app
RUN npm i -g next
EXPOSE 3002
RUN npm run build
CMD ["npm", "run", "start"]
docker-compose.yml
version: '3.3'
services:
  next-project:
    container_name: next-project
    build: ./
    working_dir: /var/app
    restart: 'unless-stopped'
    volumes:
      - ./:/var/app
    env_file:
      - .env
    ports:
      - "54000:3002"
I run the commands like this:
docker-compose build && docker-compose up -d
The build succeeds, but the container fails when it runs. Is there some configuration I'm missing?
When you map your current directory to /var/app, all the files that were in that directory in the container become hidden, replaced by the files in the current directory on the host.
Since you don't have a .next directory in the host directory, the container can't find the built files.
To get it to run, you need to remove the mapping of the current directory, so your docker-compose file becomes
version: '3.3'
services:
  next-project:
    container_name: next-project
    build: ./
    working_dir: /var/app
    restart: 'unless-stopped'
    env_file:
      - .env
    ports:
      - "54000:3002"
I want to run my Node application with only one command, npm start. Based on NODE_ENV, it should either run nodemon app.js (if NODE_ENV=dev) or node app.js (if NODE_ENV=production), but in DETACHED MODE.
What I have so far:
package.json
"start": "docker-compose up --build --force-recreate api",
"api:dev": "nodemon app.js",
"api:prod": "node app.js"
Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json .
RUN npm install
ENTRYPOINT ["/bin/sh"]
CMD ["-c", "if [ \"$NODE_ENV\" = \"dev\" ]; then npm run api:dev; else npm run api:prod; fi"]
And finally the docker-compose file
api:
  restart: always
  container_name: my_api
  build:
    context: .
    dockerfile: ./docker/api/Dockerfile
  depends_on:
    - postgres
  ports:
    - "${API_PORT}:${API_PORT}"
  env_file: .env
  networks:
    - back
  volumes:
    - .:/app
    - node_modules:/app/node_modules
  logging:
    options:
      max-file: "10"
      max-size: "10m"
Where/how can I set up the conditional detached mode? I know I should use -d for a detached docker-compose up, but I want it to be conditional on NODE_ENV.
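One possible approach (a sketch, not from the original post): docker-compose itself has no conditional flag, so the branch can live in the npm start script, mirroring the same if/else already used in the Dockerfile's CMD:

"start": "sh -c 'if [ \"$NODE_ENV\" = \"production\" ]; then docker-compose up --build --force-recreate -d api; else docker-compose up --build --force-recreate api; fi'"

This keeps npm start as the single entry point: detached for production, attached (with nodemon inside the container) for dev.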
I have a main docker-compose file like this:
services:
  main:
    container_name: main
    build:
      dockerfile: Dockerfile
      context: './backend/'
    ports:
      - '${APP_PORT}:3000'
    volumes:
      - './backend/src:/backend/src'
    depends_on:
      - mongo
  mongo:
    container_name: mongodb
    image: mongo
    ports:
      - '27017:27017'
    environment:
      ...
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
/backend/Dockerfile (root NestJS folder):
FROM node:14
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY src/ tsconfig*.json ./
CMD ["npm", "run", "start:dev"]
As you can see, there is only the src folder in my container, which is good and intended. However, I would like to create a separate container just for testing my application, so instead of just the src folder I would have to include the tests folder as well (or only the tests folder, reusing the src folder of this container?). Apart from that, I would also like to run the jest command to execute the unit tests contained in folders inside src.
I have no idea how to do this. Can I share folders between multiple containers, or something like that?
One way would be to change your Dockerfile to a multi-stage build with a testing stage.
Here I have added a prod and a testing stage as well as your original (dev) build stage.
FROM node:14 as dev
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY src/ tsconfig*.json ./
CMD ["npm", "run", "start:dev"]
FROM dev as testing
COPY tests ./
ENV CI=true
RUN ["npn", "run", "test"]
FROM dev as prod
LABEL version="1.0" "com.example.image"="My Image"
CMD ["npm", "run", "start"]
Then you can specify the build target you want with a variable in your docker-compose file.
services:
  main:
    container_name: main
    build:
      dockerfile: Dockerfile
      context: './backend/'
      target: ${TARGET:-prod}
    ports:
      ...
TARGET can be set in an .env file or via an environment variable. I've set a default value, so the target becomes prod if not specified.
Other possible solutions would be to have multiple Dockerfiles, or to use a docker-compose.override.yml file with the testing command specified and a different build context.
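For example, the testing stage can be exercised just by building it, since its RUN step executes the tests (commands assume the compose file above):

TARGET=testing docker-compose build main

while a plain build falls back to the prod target:

docker-compose up --build main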
I keep getting errors that my modules don't exist when I'm running nodemon inside Docker and save my Node files. It takes a couple of saves before the error is thrown. I have the volumes mounted the way the answers here suggested, but I'm still getting the error and I'm not too sure what's causing it.
Here is my docker-compose.yml file.
version: "3.7"
services:
  api:
    container_name: api
    build:
      context: ./api
      target: development
    restart: on-failure
    ports:
      - "3000:3000"
      - "9229:9229"
    volumes:
      - "./api:/home/node/app"
      - "node_modules:/home/node/app/node_modules"
    depends_on:
      - db
    networks:
      - backend
  db:
    container_name: db
    command: mongod --noauth --smallfiles
    image: mongo
    restart: on-failure
    volumes:
      - "mongo-data:/data/db"
      - "./scripts:/scripts"
      - "./data:/data/"
    ports:
      - "27017:27017"
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongo-data:
  node_modules:
Here is my Dockerfile:
# Get current Node Alpine Linux image.
FROM node:alpine AS base
# Expose port 3000 for node.
EXPOSE 3000
# Set working directory.
WORKDIR /home/node/app
# Copy package manifests.
COPY package*.json ./
# Development environment.
FROM base AS development
# Set environment of node to development to trigger flag.
ENV NODE_ENV=development
# Express flag.
ENV DEBUG=app
# Run NPM install.
RUN npm install
# Copy source code.
COPY . /home/node/app
# Run the app.
CMD [ "npm", "start" ]
# Production environment.
FROM base AS production
# Set environment of node to production to trigger flag.
ENV NODE_ENV=production
# Run NPM install.
RUN npm install --only=production --no-optional && npm cache clean --force
# Copy source code.
COPY . /home/node/app
# Set user to node for better security.
USER node
# Run the app.
CMD [ "npm", "run", "start:prod" ]
Turns out I didn't put my .dockerignore in the proper folder. You're supposed to put it in the build context folder.
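For the compose file above, where the build context is ./api, that means placing it like this (layout and contents are an illustrative sketch):

api/
├── .dockerignore    <-- at the root of the build context
├── Dockerfile
├── package.json
└── src/

# .dockerignore
node_modules
npm-debug.log

With node_modules ignored, COPY . /home/node/app no longer ships the host's node_modules into the image, a common cause of "module doesn't exist" errors like the one described above.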