I want to run an Nx workspace containing a NestJS project in a Docker container, in development mode. The problem is that I am unable to configure docker-compose + Dockerfile to make the project reload on save. I'm a bit confused about why this is not working, as I configured a small NestJS project (without Nx) in Docker and it had no issues reloading on save.
Surely I am not mapping the ports correctly or something.
version: "3.4"
services:
nx-app:
container_name: nx-app
build: .
ports:
- 3333:3333
- 9229:9229
volumes:
- .:/workspace
FROM node:14.17.3-alpine
WORKDIR /workspace
COPY . .
RUN ["npm", "i", "-g", "#nrwl/cli"]
RUN ["npm", "i"]
EXPOSE 3333
EXPOSE 9229
ENTRYPOINT ["nx","serve","main"]
Also tried adding an Angular application to the workspace and was able to reload it on save in the container without issues...
Managed to solve it by adding "poll": 500 in the project.json of the NestJS app/library:
"targets": {
"build": {
"executor": "#nrwl/node:webpack",
...
"options": {
...
"poll": 500
I already have a Vue application containerized and running, but how do I put it into Docker Compose?
Dockerfile:
FROM node:14
WORKDIR /app
RUN npm install @babel/core @babel/node @babel/preset-env nodemon express axios cors mongodb
COPY . .
EXPOSE 4200
CMD ["npm", "run", "serve"]
So when I try to use port "4200" it says that the same port is already in use, so how do I put that container inside the whole app, which will hold multiple containers?
This is my docker-compose attempt:
version: '3.8'
services:
  posts:
    build: ./posts
    ports:
      - "4200:4200"
So this is the visualisation of something that I want to do:
Change the host port in your compose file. For example:
ports:
  - "4201:4200"
I've noticed this is quite a common issue when working with containerized Cypress.
I've found one topic here, but resetting settings isn't really a solution, although in some cases it may be.
I'm using docker-compose to manage the build of my containers:
...
other services
...
nginx:
  build:
    context: ./services/nginx
    dockerfile: Dockerfile
  restart: always
  ports:
    - 80:80
  depends_on:
    - users
    - client
cypress:
  build:
    context: ./services/cypress
    dockerfile: Dockerfile
  depends_on:
    - nginx
here's my cypress.json:
{
  "baseUrl": "http://172.17.0.1",
  "video": false
}
I know it's recommended to refer to the service directly, like "http://nginx", but that never worked for me, whereas referring to it by IP worked when I used non-containerized Cypress. Now I'm using Cypress in a container to make it consistent with all the other services, but Cypress is giving me a hard time. I'm not including volumes because so far I haven't seen a reason to include them; I don't need to persist any data at this point.
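For reference, the service-name variant I tried looked roughly like this (assuming the nginx service name from the compose file above, with both containers on the default compose network):
{
  "baseUrl": "http://nginx",
  "video": false
}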
Dockerfile:
FROM cypress/base:10.18.0
RUN mkdir /app
WORKDIR /app
ADD cypress_dev /app
RUN npm i --save-dev cypress@4.9.0
RUN $(npm bin)/cypress verify
RUN ["npm", "run", "cypress:e2e"]
package.json:
{
"scripts": {
"cypress:e2e": "cypress run"
}
}
I'd very much appreciate any guidance. Please do ask for more info if I haven't provided enough.
I am trying to make a Dockerfile and docker-compose.yml for a webapp that uses elasticsearch. I have connected elasticsearch to the webapp and exposed it to the host. However, before the webapp runs I need to create the elasticsearch indices and fill them. I have two scripts to do this, data_scripts/createElasticIndex.js and data_scripts/parseGenesToElastic.js. I tried adding these to the Dockerfile with
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD ["npm", "start"]
but after I run docker-compose up, no indices are created. How can I fill elasticsearch before running the webapp?
Dockerfile:
FROM node:11.9.0
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY package*.json ./
# Install any needed packages specified in package.json
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
#
RUN npm build
RUN npm i natives
# Bundle app source
COPY . .
# Make port 80 available to the world outside this container
EXPOSE 80
# Run the servers when the container launches
CMD [ "node", "data_scripts/createElasticIndex.js"]
CMD [ "node", "data_scripts/parseGenesToElastic.js"]
CMD [ "node", "servers/PredictionServer.js"]
CMD [ "node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: webapp
ports:
- "1337:1337"
- "4000:85"
depends_on:
- redis
- elasticsearch
networks:
- redis
- elasticsearch
volumes:
- "/data:/data"
environment:
- "discovery.zen.ping.unicast.hosts=elasticsearch"
- ELASTICSEARCH_URL=http://elasticsearch:9200"
- ELASTICSEARCH_HOST=elasticsearch
redis:
image: redis
networks:
- redis
ports:
- "6379:6379"
expose:
- "6379"
elasticsearch:
image: elasticsearch:2.4
ports:
- 9200:9200
- 9300:9300
expose:
- "9200"
- "9300"
networks:
- elasticsearch
networks:
redis:
driver: bridge
elasticsearch:
driver: bridge
A Docker container only ever runs one command. When your Dockerfile has multiple CMD lines, only the last one has any effect, and the rest are ignored. (ENTRYPOINT here is just a different way to provide the single command; if you specify both ENTRYPOINT and CMD then the entrypoint becomes the main process and the command is passed as arguments to it.)
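For example, in a cut-down sketch of a Dockerfile like yours:
# Only the last CMD takes effect:
CMD ["node", "data_scripts/createElasticIndex.js"]
CMD ["npm", "start"]
# => the container runs "npm start"; the earlier CMD is silently discarded.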
Given the example you show, I'd run this in three steps:
Start only the database
docker-compose up -d elasticsearch
Run the "seed" jobs. For simplicity I'd probably run them locally
ELASTICSEARCH_URL=http://localhost:9200 node data_scripts/createElasticIndex.js
(using your physical host's name from the point of view of a script running directly on the physical host, and the published port from the container) but if you prefer you can also run them via the Docker setup
docker-compose run web node data_scripts/createElasticIndex.js
Once the database is set up, start your whole application
docker-compose up -d
This will leave the running Elasticsearch unaffected, and start the other containers.
An alternate pattern, if you're confident you want to run these "seed" or migration jobs on every single container start, is to write an entrypoint script. The basic pattern here is to start your server via CMD as you have it now, but to write a script that does first-time setup, ending in exec "$@" to run the command, and make that your container's ENTRYPOINT. This could look like
#!/bin/sh
# I am entrypoint.sh
# Stop immediately if any of these scripts fail
set -e
# Run the migration/seed jobs
node data_scripts/createElasticIndex.js
node data_scripts/parseGenesToElastic.js
# Run the CMD / `docker run ...` command
exec "$#"
# I am Dockerfile
FROM node:11.9.0
...
# if not already copied and made executable:
COPY entrypoint.sh ./
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["npm", "start"]
Since the entrypoint script really is just a shell script, you can use arbitrary logic in it, for instance only running the seed jobs when the command is the server (if [ "$1" = npm ]; then ... fi) but not for debugging shells (docker run --rm -it myimage bash).
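A sketch of that conditional variant, under the same assumptions as the script above:
#!/bin/sh
set -e
# Only seed when the container is about to start the real server,
# not when someone opens a debugging shell in the image.
if [ "$1" = npm ]; then
  node data_scripts/createElasticIndex.js
  node data_scripts/parseGenesToElastic.js
fi
# Run the CMD / `docker run ...` command
exec "$@"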
Your Dockerfile also looks like you might be trying to start three different servers (PredictionServer.js, PWAServerInMem.js, and whatever npm start starts); you can run these in three separate containers from the same image and specify the command: in each docker-compose.yml block.
Your docker-compose.yml will be simpler if you remove the networks: (unless it's vital to you that your Elasticsearch and Redis can't talk to each other; it usually isn't) and the expose: declarations (which do nothing, especially in the presence of ports:).
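Putting those last two suggestions together, a trimmed-down compose file might look roughly like this (the prediction and pwa service names are placeholders, and everything shares the default network, so the services can still reach each other by name):
version: "3"
services:
  web:
    image: webapp
    ports:
      - "1337:1337"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - redis
      - elasticsearch
  prediction:
    image: webapp
    command: ["node", "servers/PredictionServer.js"]
    depends_on:
      - elasticsearch
  pwa:
    image: webapp
    command: ["node", "--max-old-space-size=8192", "servers/PWAServerInMem.js"]
    depends_on:
      - elasticsearch
  redis:
    image: redis
    ports:
      - "6379:6379"
  elasticsearch:
    image: elasticsearch:2.4
    ports:
      - "9200:9200"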
I faced the same issue, and I started my journey using the same approach posted here.
I was redesigning some queries, which required frequent changes to the index settings and property mappings, plus changes to the dataset I was using as an example.
I searched for a Docker image that I could easily add to my docker-compose file to allow me to change anything in either the index settings or the example dataset. Then I could simply run docker-compose up and see the changes in my local Kibana.
I found nothing, so I ended up creating one on my own. I'm sharing it here because it could be an answer, plus I really hope to help someone else with the same issue.
You can use it as follows:
elasticsearch-seed:
  container_name: elasticsearch-seed
  image: richardsilveira/elasticsearch-seed
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200
    - INDEX_NAME=my-index
  volumes:
    - ./my-custom-index-settings.json:/seed/index-settings.json
    - ./my-custom-index-bulk-payload.json:/seed/index-bulk-payload.json
You simply point it at your index settings file, which should contain both the index settings and the type mappings as usual, and at your bulk payload file, which should contain your example data.
More instructions are available in the elasticsearch-seed GitHub repository.
We could even use it in E2E and integration test scenarios running in our CI pipelines.
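As an illustration only (the repository documents the exact format it expects), the two mounted files might look something like this, assuming the settings file follows the usual Elasticsearch create-index body and the payload file follows the bulk API's newline-delimited shape, with the index name filled in from INDEX_NAME:
my-custom-index-settings.json:
{
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "properties": {
      "title": { "type": "text" }
    }
  }
}
my-custom-index-bulk-payload.json:
{ "index": {} }
{ "title": "First example document" }
{ "index": {} }
{ "title": "Second example document" }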
I have an Angular app, which is built when the container is run, as shown in the Dockerfile below, which is built from two images.
FROM node:10.15.3-alpine as builder
RUN mkdir -p /usr/src/banax_education_platform
WORKDIR /usr/src/banax_education_platform
RUN apk add git
COPY package*.json /usr/src/banax_education_platform/
RUN npm i
COPY . /usr/src/banax_education_platform
RUN npm i -g @angular/cli && npm run-script build
FROM nginx:1.13.12-alpine
RUN rm -rf /usr/share/nginx/html/*
COPY /nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /usr/src/banax_education_platform/dist/edu-app /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
So first of all it fetches the node image and builds the app with all requirements, and then it fetches another image of nginx and creates a server block to serve the app. Now I want to split the two images, so one container holds the Angular app and a second one does the nginx reverse proxy.
I came up with this docker-compose.yml:
version: "2"
services:
nginx:
build: ./nginx
container_name: angular-nginx
ports:
- "80:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- web
web:
build: ./angular
container_name: angular-app
ports:
- "8000"
But the issue is that I want this task:
COPY --from=builder /usr/src/banax_education_platform/dist/edu-app /usr/share/nginx/html
to be done via docker-compose, because now I have two Dockerfiles, one for Angular and one for nginx, and I don't know how to link the Angular container's directory with the nginx container. Does anyone have an idea?
In a typical Web application, you run some sort of packager (for instance, Webpack) that "compiles" your application down to a set of static files. You then need some Web server to send those files to clients, but nothing runs that code on the server side once it's built.
This matches the multi-stage Docker image that you show in the question pretty well. The first stage only builds artifacts, and once it's done, nothing ever runs the image it builds. (Note, for example, the lack of a CMD.) The second stage just does COPY --from=builder on the static artifacts the first stage built, and that's it.
Since there's no actual application to run in the first stage, it doesn't make sense to run this as a separate Docker Compose service or to otherwise "split this up" in the way you're showing.
If you don't mind having multiple build tools, one thing you can do is to treat the packaged application as "data" from the point of view of Nginx (it doesn't know anything about the content of these files, it just sends data over the network). You can pick some directory on your host and mount that into the Nginx container
version: "3"
services:
nginx:
image: nginx:1.13.12-alpine
ports:
- "80:80"
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
- ./dist:/usr/share/nginx/html
and then npm run-script build on your host system when you need to rebuild and redeploy the application.
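In practice a redeploy then looks something like this (assuming the Angular project lives in ./angular and its build output ends up in the ./dist directory mounted above, or wherever your outputPath points):
cd angular
npm run-script build        # regenerate the static files for the mounted directory
docker-compose up -d nginx  # only needed if nginx isn't already running; the bind mount serves the new files as-is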
I think there is a conceptual misunderstanding. I'm not familiar with Angular, but here's how I would approach this (not tested).
The reverse proxy itself does not need the compiled files that you generate in your Dockerfile. This container (web in your case) is just there to pass HTTP(S) requests to the correct endpoint (which is another webserver, let's call it app). app itself is an Nginx webserver that serves your Angular application.
So, what you would have to do is something like this:
version: "3.7"
services:
web:
image: nginx
volumes:
- ./config/with/proxy/rules:/etc/nginx/conf.d/default.conf
ports:
- 80:80
- 443:443
app:
build: ./path/to/your/dockerfile
expose:
- "8000"
volumes:
- ./path/to/nice/config:/etc/nginx/conf.d/default.conf
depends_on:
- web
The config file of the first volume would have to include all the proxy rules that you need. Basically, something like this should occur somewhere:
location / {
    proxy_pass http://app:8000;
}
That's all your reverse proxy has to know.
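For completeness, the config mounted into app (the second volume above) would then need to listen on that port and serve the built files. A minimal sketch, assuming the build output is copied to /usr/share/nginx/html as in the original Dockerfile:
server {
    listen 8000;
    root /usr/share/nginx/html;
    index index.html;

    # Fall back to index.html so Angular's client-side routing works
    location / {
        try_files $uri $uri/ /index.html;
    }
}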
When I try to build a container using docker-compose like so
nginx:
  build: ./nginx
  ports:
    - "5000:80"
the COPY instruction isn't working when my Dockerfile simply looks like this:
FROM nginx
#Expose port 80
EXPOSE 80
COPY html /usr/share/nginx/test
#Start nginx server
RUN service nginx restart
What could be the problem?
It seems that when using the docker-compose command, it keeps an intermediate image that it doesn't show you and constantly reuses it, never updating it correctly.
Sadly, the documentation regarding this is poor. The way to fix it is to build first with no cache and then bring it up, like so:
docker-compose build --no-cache
docker-compose up -d
I had the same issue, and a one-liner that does it for me is:
docker-compose up --build --remove-orphans --force-recreate
--build does the biggest part of the job and triggers the build.
--remove-orphans is useful if you have changed the name of one of your services. Otherwise, you might have a leftover warning telling you about the old, now wrongly named service dangling around.
--force-recreate is a little drastic but will force the recreation of the containers.
Reference: https://docs.docker.com/compose/reference/up/
Warning: I could do this on my project because I was toying around with really small container images. Recreating everything, every time, could take significant time depending on your situation.
If you need docker-compose to copy files every time on the up command, I suggest declaring a volumes option for your service in the compose.yml file. It will persist your data and will also make the files from that folder available inside the container.
More info here: volume-configuration-reference
server:
  image: server
  container_name: server
  build:
    context: .
    dockerfile: server.Dockerfile
  env_file:
    - .envs/.server
  working_dir: /app
  volumes:
    - ./server_data:/app # <= here it is
  ports:
    - "9999:9999"
  command: ["command", "to", "run", "the", "server", "--some-options"]
Optionally, you can add the following section to the end of the compose.yml file. It will keep that folder persisted: the data in it will not be removed by the docker-compose stop or docker-compose down commands. To remove the folder you will need to run the down command with the additional -v flag:
docker-compose down -v
For example, including volumes:
services:
  server:
    image: server
    container_name: server
    build:
      context: .
      dockerfile: server.Dockerfile
    env_file:
      - .envs/.server
    working_dir: /app
    volumes:
      - ./server_data:/app # <= here it is
    ports:
      - "9999:9999"
    command: ["command", "to", "run", "the", "server", "--some-options"]

volumes: # at the root level, the same as services
  server_data: