I have two containers: one containing a React app, the other a Flask app.
I can build both with the docker-compose file below and their respective Dockerfiles, and I am able to access each via the browser on the ports specified. However, my React app's API calls to Flask never get a response (they work fine without Docker in the picture).
Any suggestions are greatly appreciated!
docker-compose.yml
version: '3.7'
services:
  middleware:
    build: ./middleware
    command: python main.py run -h 0.0.0.0
    volumes:
      - ./middleware/:/usr/src/app/
    ports:
      - 5000:5000
    env_file:
      - ./middleware/.flaskenv
  frontend:
    build:
      context: ./frontend/app
      dockerfile: Dockerfile
    volumes:
      - './frontend/app:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3001:3000'
    environment:
      - NODE_ENV=development
    links:
      - middleware
Dockerfile for the Flask app
# pull official base image
FROM python:3.8.0-alpine
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
# copy project
COPY . /usr/src/app/
Dockerfile for the React app
# base image
FROM node:12.2.0-alpine
# set working directory
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/app/package.json
RUN npm install --silent
RUN npm install react-scripts@0.9.5 -g --silent
# start app
CMD ["npm", "start"]
I also have the following in my React app's package.json, which enables it to make API calls to the Flask app (again, this works fine without Docker):
"proxy": "http://127.0.0.1:5000",
Finally, project structure (in case useful)
website
|
|--middleware (Flask app)
|    - Dockerfile
|    - api
|--frontend (React app)
|    - Dockerfile
|    - app
|
|--docker-compose.yml
As LinPy and leopal pointed out in the comments, 127.0.0.1 in package.json needed to be changed to reference the Flask container by its Compose service name.
"proxy": "http://middleware:5000",
Related
My Setup:
I have three services defined in my docker-compose.yml: frontend, backend, and postgresql. postgresql is pulled from Docker Hub.
frontend and backend are built from their own Dockerfiles. Most of the code in these Dockerfiles is the same; only the EXPOSE, ENTRYPOINT, CMD, and ARG values differ. That is why I wanted to create a 'base Dockerfile' that these two services can "include".
Sadly, I found out I cannot simply "include" one Dockerfile in another; I have to create an image.
So I tried to create a base image for frontend and backend in my docker-compose.yml:
services:
  frontend_base:
    image: frontend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/frontend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/frontend/client
  backend_base:
    image: backend_base_image
    build:
      context: ./
      dockerfile: base.dockerfile
      args:
        - WORKDIR=/app/backend/
        - TOOLSDIR=${PWD}/docker/tools
        - LOCALDIR=${PWD}/app/backend/api
  frontend:
    depends_on:
      - frontend_base
    # Some more stuff for the service
  backend:
    depends_on:
      - backend_base
    # Some more stuff for the service
My 'base-Dockerfile':
FROM node:18
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
The Problem I am facing:
My frontend and backend Dockerfiles try to pull the 'base-image' from docker.io
=> ERROR [docker-backend internal] load metadata for docker.io/library/backend_base_image:latest 0.9s
=> ERROR [docker-frontend internal] load metadata for docker.io/library/frontend_base_image:latest 0.9s
=> CANCELED [frontend_base_image internal] load metadata for docker.io/library/node:18
My Research:
I do not know if my approach is possible. I did not find many resources about this (integrated with docker-compose) online, only resources about building the images via the shell and then using them in a Dockerfile. I tried that as well and ran into other issues, where I could not pass the correct arguments to the base Dockerfile.
So first I wanted to find out whether it is possible with docker-compose at all.
I am sorry if this is super obvious and my question is dumb; I am relatively new to Docker.
We could use the multi-stage feature of a Containerfile to define all three images in a single Containerfile:
FROM node:18 AS base
# Set in docker-compose.yml-file
ARG WORKDIR
ARG TOOLSDIR
ARG LOCALDIR
ENV WORKDIR=${WORKDIR}
# Install dumb-init for the init system
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64
RUN chmod +x /usr/local/bin/dumb-init
WORKDIR ${WORKDIR}
RUN mkdir -p ${WORKDIR}
# Copy package.json to the current workdir (for npm install)
COPY ${LOCALDIR}/package*.json ${WORKDIR}
# Install all packages (referenced in package.json)
RUN npm install
COPY ${TOOLSDIR}/start.sh /usr/local/bin/start.sh
COPY ${LOCALDIR}/ ${WORKDIR}
FROM base AS frontend
...
FROM base AS backend
...
In our docker-compose.yml, we can then build a specific stage for the frontend and backend services:
...
  frontend:
    image: frontend
    build:
      context: ./
      target: frontend
      dockerfile: base.dockerfile
  ...
  backend:
    image: backend
    build:
      context: ./
      target: backend
      dockerfile: base.dockerfile
  ...
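Assuming this wiring, nothing extra is needed to build: Compose builds each service's target stage from the shared file, and the base stage is built once and then reused from the build cache. A rough sketch of the commands, with the service names from the example above:
# build both images; the shared "base" stage is reused from the build cache
docker-compose build frontend backend
# or build and start everything in one go
docker-compose up --build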
If you want a single base image with shared tools, you can do this almost exactly the way you describe; the one caveat is that you can't describe the base image in the docker-compose.yml file. You need to build it separately from Compose:
docker build -t base-image -f base.dockerfile .
I would not try to install any application code in that base Dockerfile. Installing something like an init wrapper that needs to be shared across all of your application images does make sense there. I think it's fine to tie a Dockerfile to a specific source tree and image layout, and I don't typically recommend passing filesystem paths as ARGs.
# base.dockerfile
FROM node:18
RUN wget -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 \
&& chmod +x /usr/local/bin/dumb-init
COPY docker/tools/start.sh /usr/local/bin/
ENTRYPOINT ["dumb-init", "--"]
CMD ["start.sh"]
The per-image Dockerfiles will look pretty similar (and like every other Node Dockerfile), but there's no harm in repeating this, in much the same way that your components probably have similar-looking but self-contained package.json files.
# */Dockerfile
FROM base-image
# WORKDIR also creates the directory
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
RUN npm run build
EXPOSE 3000
# CMD ["npm", "run", "start"]  # only if the start.sh from the base image doesn't fit this service
Of note, this gives you some flexibility to change things if the two image setups aren't identical: if you need an additional build step, want to run a dev server, or want to package the frontend into a lighter-weight Nginx server.
In the Compose file you'd declare these normally, each with its own build: block. Compose isn't aware of the base image, and there's no way to tell it about it.
version: '3.8'
services:
  frontend:
    build: ./app/frontend/client
    ports: ['3000:3000']
  backend:
    build: ./app/backend/api
    ports: ['3001:3000']
One thing I've done here, which at least reduces the number of variable references, is to consistently use . as the current directory name. In the Compose file that's the directory containing the docker-compose.yml; on the left-hand side of COPY it's the build: context directory on the host; on the right-hand side of COPY it's the most recent WORKDIR. Using . where appropriate means you don't have to repeat the directory name, so you have a little flexibility if you ever need to rearrange your source tree or container filesystem.
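Putting the pieces together, the day-to-day workflow under this layout might look roughly like the following (paths and the base-image name are the ones assumed above):
# 1. build the shared base image; Compose will not do this for you
docker build -t base-image -f base.dockerfile .
# 2. build and start the per-service images, whose Dockerfiles start FROM base-image
docker-compose build
docker-compose up -d
# repeat step 1 before step 2 whenever base.dockerfile changes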
I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works, but hot reload does not.
Running the Next.js app from the docker-compose file triggers the error /bin/sh: next: not found, and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multi-stage build; however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
backend:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
command: yarn dev
volumes:
- ./frontend:/app
expose:
- "3001"
ports:
- "3001:3001"
depends_on:
- backend
environment:
FRONTEND_BUILD: ${FRONTEND_BUILD}
PORT: 3001
package.json:
{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "^18.1.0",
    "react-dom": "^18.1.0"
  }
}
When docker-compose.yml calls yarn dev, it actually runs next dev, and that's when the /bin/sh: next: not found error is triggered. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volumes attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I then don't get the hot reload I am looking for. Any idea why the volume is interfering with the next command?
This is happening because your local filesystem is being mounted over what is in the Docker container. Your image does build node_modules in the builder stage, but I'm guessing you don't have node_modules available on your local filesystem.
To see if this is what is happening, run yarn install on your local filesystem, then try running your container via Docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local filesystem's node_modules that will be run in the Docker container.
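As a sketch of that check (assuming the Next.js app lives in ./frontend, as in your compose file):
# on the host, install dependencies into ./frontend/node_modules
cd frontend && yarn install
cd ..
# then rebuild and start the frontend service again
docker-compose up --build frontend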
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
  ...
  volumes:
    - ./frontend:/app
    - /app/node_modules # <-- try adding this!
  ...
That should keep the Docker container's node_modules from being overwritten by the bind mount.
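One related caveat: the anonymous node_modules volume keeps its contents between runs, so after changing dependencies you may want to recreate it explicitly, for example:
# rebuild the image and recreate anonymous volumes (such as the node_modules one) on startup
docker-compose up --build --renew-anon-volumes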
I have created an NX monorepo with Angular and NestJS apps and tried very hard to make hot reload work inside containers, but to no avail. The directories are mounted correctly, and I have verified that changes on the host are written inside the container, but somehow the process is not picking them up.
I have created a standalone NestJS application and successfully made it work in a container.
Github repo: https://github.com/navdbaloch/dockerized-development-with-nx-monorepo-angular-nestjs
Environment: Windows 10 with WSL2, Docker Desktop 4.2.0
Below is the docker-compose.yml file:
version: '3.7'
services:
  frontend:
    container_name: test-frontend
    hostname: poirot_frontend
    image: poirot_frontend
    build:
      context: .
      dockerfile: ./apps/fwa/Dockerfile.angular
      target: development
    ports:
      - 4200:4200
    networks:
      - poirot-network
    depends_on:
      - api
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    command: npm run start:app
  api:
    container_name: test-api
    hostname: poirot_api
    image: poirot_api
    build:
      context: .
      dockerfile: ./apps/fwa-api/Dockerfile.api
      target: development
    volumes:
      - .:/usr/src
      - /usr/src/node_modules
    ports:
      - 3333:3333
      - 9229:9229
    command: npm run start:api
    env_file:
      - .env
    networks:
      - poirot-network
networks:
  poirot-network:
    driver: bridge
Dockerfile.angular
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist && \
npm install --only=development
COPY . .
RUN npm run build:app
#! this is the production image
FROM nginx:latest as production
COPY ./docker/angular.conf /etc/nginx/nginx.conf
COPY --from=development /usr/src/dist/apps/fwa /usr/share/nginx/html
Dockerfile.api
FROM node:14-alpine As development
WORKDIR /usr/src
COPY package*.json ./
RUN npm install minimist &&\
npm install --only=development
COPY . .
RUN npm run build:api
#! this is the production image
FROM node:14-alpine as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY package*.json ./
RUN npm install minimist typescript ts-node lodash reflect-metadata tslib rxjs @nestjs/platform-express @types/bcrypt && \
npm install --only=production
COPY . .
COPY --from=development /usr/src/dist/apps/fwa-api ./dist
EXPOSE 3333
#! Migration runner command: node_modules/ts-node/dist/bin.js migration-runner.ts
CMD ["node", "dist/main"]
Finally, I was able to make it work after a lot of trial and error.
For the Angular application, I changed the serve command from npx nx serve to npx nx serve --host 0.0.0.0 --poll 2000.
For the API, I added the "poll": 2000 option in angular.json at projects.api.architect.build.options.
I have also updated the GitHub repo for reference, for anyone looking for the same solution.
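For reference, the angular.json change is roughly the following (only the relevant keys are shown; your file will contain many more options under build):
{
  "projects": {
    "api": {
      "architect": {
        "build": {
          "options": {
            "poll": 2000
          }
        }
      }
    }
  }
}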
I am new to Docker. I've built an application with Vue.js 2 that interacts with an external API. I would like to run the application in Docker.
Here is my docker-compose.yml file:
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - '8080:8080'
Here is my Dockerfile:
FROM node:14.17.0-alpine as develop-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
RUN yarn install
COPY . .
EXPOSE 8080
CMD ["node"]
Here is the command I run to build my image and container:
docker-compose up -d
The image and container build without errors, but when I run the container it stops immediately, so the container is not running.
Are the Dockerfile and compose files set up correctly?
First of all, you run both npm install and yarn install, which do the same thing, just with different package managers. Secondly, you are using CMD ["node"], which does not start your Vue application, so there is no process to keep the container alive and Docker shuts it down.
For a Vue application you normally want to build the app into static assets and then run a simple HTTP server to serve the static content.
FROM node:lts-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy 'package.json' to install dependencies
COPY package*.json ./
# install dependencies
RUN npm install
# copy files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
Your docker-compose file could be as simple as
version: "3.7"
services:
vue-app:
build:
context: .
dockerfile: Dockerfile
container_name: vue-app
restart: always
ports:
- "8080:8080"
networks:
- vue-network
networks:
vue-network:
driver: bridge
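With that in place, building and running the app is roughly:
# build the image and start the container in the background
docker-compose up --build -d
# the app is then served by http-server on http://localhost:8080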
To run the service from docker-compose, use the command property in your docker-compose.yml:
services:
  vue-app:
    command: >
      sh -c "yarn serve"
I'm not sure about the underlying problem, but using command: tail -f /dev/null in your docker-compose file will keep your container up so you can investigate the error inside it and find the cause. You can do that by running docker exec -it <CONTAINER-NAME> sh (the Alpine-based image does not ship bash) and checking the logs inside the container.
version: '3'
services:
  ew_cp:
    image: vuejs_img
    container_name: ew_cp
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    command: tail -f /dev/null
    ports:
      - '8080:8080'
In your Dockerfile you have to start your application, e.g. with npm run start or whatever script you use to run your application in package.json.
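For example, if package.json defines a serve script, the Dockerfile could end with something like this instead of CMD ["node"] (use whichever script your project actually provides):
# run the dev/serve script from package.json instead of a bare node REPL
CMD ["npm", "run", "serve"]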
I know there are multiple examples (actually only a few) out there, and I've looked into some and tried to apply them to my case, but when I try to lift the container (docker-compose up) I end up with more or less the same error every time.
My folder structure is:
sails-project
--app
----api
----config
----node_modules
----.sailsrc
----app.js
----package.json
--docker-compose.yml
--Dockerfile
The docker-compose.yml file:
sails:
  build: .
  ports:
    - "8001:80"
  links:
    - postgres
  volumes:
    - ./app:/app
  environment:
    - NODE_ENV=development
  command: node app
postgres:
  image: postgres:latest
  ports:
    - "8002:5432"
And the Dockerfile:
FROM node:0.12.3
RUN mkdir /app
WORKDIR /app
# the dependencies are already installed in the local copy of the project, so
# they will be copied to the container
ADD app /app
CMD ["/app/app.js", "--no-daemon"]
RUN cd /app; npm i
I also tried having RUN npm i -g sails instead (in the Dockerfile) together with command: sails lift, but I'm getting the same error.
Naturally, I tried different configurations of the Dockerfile with different commands (node app, sails lift, npm start, etc.), but I constantly end up with the same error. Any ideas?
By using command: node app you are overriding CMD ["/app/app.js", "--no-daemon"], which as a consequence has no effect. WORKDIR /app creates the app folder, so you don't have to RUN mkdir /app. And, most importantly, you have to RUN cd /app; npm i before CMD ["/app/app.js", "--no-daemon"]: the npm dependencies have to be installed before you start your app.
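Putting those points together, a corrected Dockerfile might look roughly like this (a sketch based on the layout above; the node:0.12.3 base and the --no-daemon flag are kept from the question, and the app is started explicitly through node):
FROM node:0.12.3
# WORKDIR creates /app, so no separate mkdir is needed
WORKDIR /app
# copy the project, then install its dependencies at build time
ADD app /app
RUN npm i
# default command, executed when the container starts
CMD ["node", "/app/app.js", "--no-daemon"]
Remember that command: node app in the compose file will still override this CMD at runtime.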