I have 2 different Dockerfiles (one for the production environment, one for the test environment). I want to combine them into a single multi-stage Dockerfile.
The first Dockerfile is below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
The second Dockerfile is below:
FROM wildfly:17.0.0
USER jboss
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jboss:jboss ./install /opt/wildfly/install
COPY --chown=jboss:jboss install.sh /opt/wildfly/bin
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
CMD ["/opt/wildfly/bin/install.sh"]
Question: How do I create a multi-stage Dockerfile from these 2 Dockerfiles?
Thanks in advance.
I'm not sure what you are trying to do requires a multi-stage build. Instead, what you may want to do is use a custom base image, which would be your first code block (the production setup), and build the test image on top of it.
If the first image was tagged as shanmukh/wildfly:prod:
FROM shanmukh/wildfly:prod
USER jboss
RUN rm /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN mv /opt/wildfly/install/wildfly-scripts/SetupforTest.cli /opt/wildfly/install/wildfly-scripts/Setup.cli
RUN rm /opt/wildfly/install/wildfly-scripts/Properties.properties
RUN mv /opt/wildfly/install/wildfly-scripts/Properties-test.properties /opt/wildfly/install/wildfly-scripts/Properties.properties
This could be tagged as shanmukh/wildfly:test.
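For completeness, a minimal sketch of how the two builds could be chained (the Dockerfile file names here are assumptions for illustration, not from the question):
# build the base/production image first (the first code block above)
docker build -f Dockerfile.prod -t shanmukh/wildfly:prod .
# then build the test image, whose Dockerfile starts FROM shanmukh/wildfly:prod
docker build -f Dockerfile.test -t shanmukh/wildfly:test .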
The reason why I don't think you want a multi-stage build is that you mentioned you are trying to handle two environments (test and production).
Even if you did want to, say, use a multi-stage build for production, from what I can see there is no reason to. If your initial build stage installed build dependencies such as a compiler, copied code and compiled it, then it'd be worthwhile to use a multi-stage build, as the final image would not include the initial dependencies (such as the compiler and other potentially dangerous dev tools) or any unneeded build artifacts that were produced.
Hope this helps.
Related
There are a few processes I'm struggling to wrap my brain around when it comes to multi-stage Dockerfiles.
Using this as an example, I have a couple of questions below it:
# Dockerfile
# Uses multi-stage builds requiring Docker 17.05 or higher
# See https://docs.docker.com/develop/develop-images/multistage-build/
# Creating a python base with shared environment variables
FROM python:3.8.1-slim as python-base
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=off \
PIP_DISABLE_PIP_VERSION_CHECK=on \
PIP_DEFAULT_TIMEOUT=100 \
POETRY_HOME="/opt/poetry" \
POETRY_VIRTUALENVS_IN_PROJECT=true \
POETRY_NO_INTERACTION=1 \
PYSETUP_PATH="/opt/pysetup" \
VENV_PATH="/opt/pysetup/.venv"
ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"
# builder-base is used to build dependencies
FROM python-base as builder-base
RUN apt-get update \
&& apt-get install --no-install-recommends -y \
curl \
build-essential
# Install Poetry - respects $POETRY_VERSION & $POETRY_HOME
ENV POETRY_VERSION=1.0.5
RUN curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python
# We copy our Python requirements here to cache them
# and install only runtime deps using poetry
WORKDIR $PYSETUP_PATH
COPY ./poetry.lock ./pyproject.toml ./
RUN poetry install --no-dev # respects
# 'development' stage installs all dev deps and can be used to develop code.
# For example using docker-compose to mount local volume under /app
FROM python-base as development
ENV FASTAPI_ENV=development
# Copying poetry and venv into image
COPY --from=builder-base $POETRY_HOME $POETRY_HOME
COPY --from=builder-base $PYSETUP_PATH $PYSETUP_PATH
# Copying in our entrypoint
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
# venv already has runtime deps installed we get a quicker install
WORKDIR $PYSETUP_PATH
RUN poetry install
WORKDIR /app
COPY . .
EXPOSE 8000
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD ["uvicorn", "--reload", "--host=0.0.0.0", "--port=8000", "main:app"]
# 'lint' stage runs black and isort
# running in check mode means build will fail if any linting errors occur
FROM development AS lint
RUN black --config ./pyproject.toml --check app tests
RUN isort --settings-path ./pyproject.toml --recursive --check-only
CMD ["tail", "-f", "/dev/null"]
# 'test' stage runs our unit tests with pytest and
# coverage. Build will fail if test coverage is under 95%
FROM development AS test
RUN coverage run --rcfile ./pyproject.toml -m pytest ./tests
RUN coverage report --fail-under 95
# 'production' stage uses the clean 'python-base' stage and copies
# in only our runtime deps that were installed in the 'builder-base'
FROM python-base as production
ENV FASTAPI_ENV=production
COPY --from=builder-base $VENV_PATH $VENV_PATH
COPY ./docker/gunicorn_conf.py /gunicorn_conf.py
COPY ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
COPY ./app /app
WORKDIR /app
ENTRYPOINT /docker-entrypoint.sh $0 $@
CMD [ "gunicorn", "--worker-class uvicorn.workers.UvicornWorker", "--config /gunicorn_conf.py", "main:app"]
The questions I have:
Are you docker build ... this entire image and then just docker run ... --target=<stage> to run a specific stage (development, test, lint, production, etc.) or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?
When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to refer to these stages in the Dockerfile (devspace Hooks or something) or to use native test tools in the container (e.g. npm test, ./manage.py test, etc.)?
Thanks for clearing these questions up.
To answer from a less DevSpace-y perspective and a more general Docker-y one (with no disrespect to Lukas!):
Question 1
Breakdown
❌ Are you docker build ... this entire image and then just docker run ... --target=<stage> to run a specific stage
You're close in your understanding and managed to outline the approach in your second part of the query:
✅ or are you only building and running the specific stages you need (e.g. docker build ... -t test --target=test && docker run test ...)?
The --target option is not present in the docker run command, which can be seen when calling docker run --help.
I want to say it isn't the former because you end up with a bloated image with build kits and what not... correct?
Yes, it's impossible to do it the first way: when --target is not specified, only the final stage is incorporated into your image. This is a great benefit, as it cuts down the final size of your image while still allowing you to use multiple FROM directives.
Details and Examples
It is a flag that you can pass in at build time so that you can choose which stage to build specifically. It's a pretty helpful directive that can be used in a few different ways. There's a decent blog post here talking about the new features that came out with multi-stage builds (--target is one of them).
For example, I've had a decent amount of success building projects in CI utilising different stages and targets. The following is pseudo-code, but hopefully the context comes across:
# Dockerfile
FROM python as base
FROM base as dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
FROM dependencies as test
COPY src/ src/
COPY test/ test/
FROM dependencies as publish
COPY src/ src/
CMD ...
A Dockerfile like this would enable you to do something like the following in your CI workflow (once again, pseudo-code-esque):
docker build . -t my-app:unit-test --target test
docker run my-app:unit-test pyunit ...
docker build . -t my-app:latest
docker push ...
In some scenarios, it can be quite advantageous to have this fine-grained control over what gets built when, and it's quite the boon to be able to run images that consist of only a few stages without having to build the entire app.
The key here is that there's no expectation that you need to use --target, but it can be used to solve particular problems.
Question 2
When it comes to local Kubernetes development (minikube, skaffold, devspace, etc.) and running unit tests, are you supposed to refer to these stages in the Dockerfile (devspace Hooks or something) or to use native test tools in the container (e.g. npm test, ./manage.py test, etc.)?
Lukas covers a devspace-specific approach very well, but ultimately you can test however you like. Using devspace to make it easier to run (and remember to run) tests certainly sounds like a good idea. Whatever tool you use to enable an easier workflow will likely still use npm test etc. under the hood.
If you wish to call npm test outside of a container, that's fine; if you wish to call it in a container, that's also fine. The solution to your problem will always change depending on your landscape. CI/CD helps to standardise external factors and provides a uniform means of ensuring testing is performed and deployments are auditable.
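As a rough illustration of the two options (the image tags and test command here are placeholders, not taken from the Dockerfile above):
# option A: bake the test run into an image build, using a dedicated stage
docker build . -t my-app:test --target test
# option B: run the native test tool inside an already-built dev image/container
docker run --rm my-app:dev npm test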
Hope that helps in any way shape or form 👍
Copying my response to this from Reddit to help others who may look for this on StackOverflow:
DevSpace maintainer here. For my workflow (and the default DevSpace behavior if you set it up with devspace init), image building is skipped during development because it tends to be the most annoying and time-consuming part of the workflow. Instead, most teams that use DevSpace have a dev image pushed to a registry and built by CI/CD, which is then used in devspace.yaml via replacePods.replaceImage as shown here: https://devspace.sh/cli/docs/configuration/development/replace-pods
This means that your manifests or helm charts are being deployed referencing the prod images (as they should be) and then devspace will (after deployment) replace the images of your pods with dev-optimized images that ship all your tooling. Inside these pods, you can then use the terminal to build your application, run tests along with other dependencies running in your cluster etc.
However, typically teams also start using DevSpace in CI/CD after a while and then they add profiles (e.g. prod profile or integration-testing profile etc. - more on https://devspace.sh/cli/docs/configuration/profiles/basics) to their devspace.yaml where they add image building again because they want to build the images in their pipelines using kaniko or docker. For this, you would specify the build target in devspace.yaml as well: https://devspace.sh/cli/docs/configuration/images/docker#target
FWIW regarding 1: I never use docker run --target but I also always use Kubernetes directly over manual docker commands to run any workloads.
I am in the process of creating a Dockerfile that can build a haskell program. The Dockerfile uses ubuntu focal as a base image, installs ghcup, and then builds a haskell program. There are multiple reasons why I am doing this; it can support a low-configuration CI environment, and it can help new developers who are trying to build a complicated project.
In order to speed up build times, I am using docker v20 with buildkit. I have a sequence of events like this (it's quite a long file, but this excerpt is the relevant part):
# installs haskell
WORKDIR $HOME
RUN git clone https://github.com/haskell/ghcup-hs.git
WORKDIR ghcup-hs
RUN BOOTSTRAP_HASKELL_NONINTERACTIVE=NO ./bootstrap-haskell
#RUN source ~/.ghcup/env # Uh-oh: can't do this.
# We recreate the contents of ~/.ghcup/env
ENV PATH=$HOME/.cabal/bin:$HOME/.ghcup/bin:$PATH
# builds application
COPY application $HOME/application
WORKDIR $HOME/application
RUN mkdir -p logs
RUN --mount=type=cache,target=$HOME/.cabal \
--mount=type=cache,target=$HOME/.ghcup \
--mount=type=cache,target=$HOME/application/dist-newstyle \
cabal build |& tee logs/configure.log
But when I change some non-code files (README.md for example) in application, and build my docker image ...
DOCKER_BUILDKIT=1 docker build -t application/application:1.0 .
... it takes quite a bit of time and the output from cabal build includes a lot of Downloading [blah] followed by Building/Installing/Completed messages from cabal install.
However when I go into my container and type cabal build, it is much faster (it is already built):
host$ docker run -it application/application:1.0
container$ cabal build # this is fast
I would expect it to be just as fast in the prior case as well, since I have not really changed the code files, the dependencies are all downloaded, and I am using RUN --mount.
Are there files somewhere that my --mount=type=cache entries are not covering? Is there a package registry file somewhere that I need to include in its own --mount=type=cache line? As far as I can tell, my builds ought to be nearly instant instead of taking several minutes to complete.
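For what it's worth, one layer-caching pattern that often helps with this class of problem is to copy only the package description first, so that edits to README.md and other files cannot invalidate the dependency-building layer. This is a sketch under assumptions (a single application.cabal file, no cabal.project, HOME set as an ENV so it expands), not a diagnosis of this particular build:
# copy only what determines the dependency set
COPY application/*.cabal $HOME/application/
WORKDIR $HOME/application
RUN --mount=type=cache,target=$HOME/.cabal \
cabal update && cabal build --only-dependencies
# only now copy the full source; README.md edits invalidate layers from here on, not the expensive one above
COPY application $HOME/application
RUN --mount=type=cache,target=$HOME/.cabal \
--mount=type=cache,target=$HOME/application/dist-newstyle \
cabal build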
I would like to say that this is my first container and actually my first Java app, so I may have basic questions; please be lenient.
I wrote a Spring Boot app and my colleague has written the frontend for it in Angular. What I would like to achieve is to have "one button / one command" in IntelliJ to create a container containing the whole app, backend and frontend.
What I need to do is:
Clone FE from company repository (I am using ssh key now)
Clone BE from GitHub
Build FE
Copy built FE to static folder in java app
Build BE
Create a container running this app
My current solution is to create a "builder" container, build the FE and BE there, and then copy the result into a "production" container, like this:
#BUILDER
FROM alpine AS builder
WORKDIR /src
# add credentials on build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh/ \
&& echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa \
&& echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
&& chmod 600 /root/.ssh/id_rsa
# installing dependencies
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm \
&& npm install -g #angular/cli \
&& apk add openjdk11 \
&& apk add maven \
&& apk add --no-cache openssh \
&& apk add --no-cache git
#cloning repositories
RUN git clone git@code.siemens.com:apcprague/edge/metal-forming-fe.git
RUN git clone git@github.com:bzumik1/metalForming.git
# builds front end
WORKDIR /src/metal-forming-fe
RUN npm install && ng build
# builds whole java app with front end
WORKDIR /src/metalForming
RUN cp -a /src/metal-forming-fe/dist/metal-forming/. /src/metalForming/src/main/resources/static \
&& mvn install -DskipTests=true
#PRODUCTION CONTAINER
FROM adoptopenjdk/openjdk11:debian-slim
LABEL maintainer jakub.znamenacek@siemens.com
RUN mkdir app
RUN ["chmod", "+rwx", "/app"]
WORKDIR /app
COPY --from=builder /src/metalForming/target/metal_forming-0.0.1-SNAPSHOT.jar .
EXPOSE 4200
RUN java -version
CMD java -jar metal_forming-0.0.1-SNAPSHOT.jar
This works, but it takes a very long time, so I guess it is not the correct way to do it. Could anyone point me in the right direction? I was wondering whether there is a way to make Maven do all these steps for me, but maybe that is totally off.
Also, if you find any problems in my Dockerfile, please let me know; as I said, this is my first Dockerfile, so I could have overlooked something.
EDITED:
BTW does anyone know how can I get rid of this part:
echo "github.com,140.82.121.3 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==" >> /root/.ssh/known_hosts \
it adds GitHub to known_hosts (I also need to add the company repository there). Without it, git clone asks whether I trust the host and I have to type yes, which I cannot do when the build runs non-interactively inside docker. I have tried yes | git clone ... but that does not work either.
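One common way to avoid pasting the host key into the Dockerfile (a sketch; it assumes ssh-keyscan from the openssh package is already installed at this point in the builder stage, and note that ssh-keyscan trusts whatever key the host presents at build time):
# fetch the current host keys at build time instead of hard-coding them
RUN mkdir -p /root/.ssh \
&& ssh-keyscan github.com code.siemens.com >> /root/.ssh/known_hosts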
a few things:
1, if this runs "slow" on the machine, then it will run slow inside a container too.
2, remove --no-cache;* you want to cache everything that is static, because on the next build the unchanged commands will not run at all. Once one command changes, that command reruns instead of using the builder cache, and all subsequent commands have to rerun too.
*UPDATE: I mixed up "apk add --no-cache" with "docker build --no-cache". I stated wrongly that "apk add --no-cache" means the command is not cached; it is still cached at the docker builder level. What this parameter does is spare you from deleting the /var/cache/apk/ directory in a later step to make your image smaller, but you would not need to do that here anyway, because you are already using a multi-stage build, so it does not affect your final image size.
One more thing to clarify: every statement in a Dockerfile is checked for changes; if it did not change, the docker builder uses the cached layer for it and does not run that statement again. The exception is the ADD and COPY commands, where the builder also checksums the copied/added files to see whether they changed. If a statement changed, or the ADD-ed/COPY-ed file(s) changed, that statement is re-run and all subsequent statements are re-run too, so you want to put your source-code COPY statement as close to the end as possible.
If you want to disable this cache, run "docker build --no-cache ..."; this way all the steps in the Dockerfile will be re-run.
3, specify WORKDIR once at the top; if you need to switch directories later, use this:
RUN cd /someotherdir && mycommand
Also, a subsequent relative WORKDIR is resolved relative to the previous WORKDIR, so switching around hurts readability, and readability is (probably) the sole purpose of the WORKDIR statement.
4, Enable BuildKit:
Either declare the environment variable
DOCKER_BUILDKIT=1
or add this to /etc/docker/daemon.json
{ "features": { "buildkit": true } }
BuildKit might not help in this case, but if you write more complex Dockerfiles with more stages, BuildKit can run them in parallel, so the overall build will be faster.
5, Do not skip tests with -DskipTests=true :)
6, as stated in a comment, do not clone the repo inside the image build; you do not need to do that at all. Just put the Dockerfile in the / of the repo, and COPY the repo files with a Dockerfile command:
COPY . .
The first dot is the source, i.e. your current directory on your machine; the second dot is the target, the working directory inside the image (/src in your Dockerfile). You build the image and publish it, i.e. push it to a docker registry so others can pull it and start using it. If you want more complex building and publishing with the help of a server, look up CI/CD techniques.
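To make point 6 concrete, a rough sketch of how the builder stage might begin when the sources are copied from the build context instead of being cloned (this assumes the repository, or both repositories checked out side by side, sits in the directory you run docker build from):
# same toolchain as before, but no ssh keys, known_hosts or git needed
FROM alpine AS builder
WORKDIR /src
RUN apk update && apk upgrade && apk add --update nodejs nodejs-npm openjdk11 maven && npm install -g @angular/cli
# copy the checked-out sources from the build context
COPY . .
# ...then run the npm / mvn build steps exactly as in the original Dockerfile...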
I am new to the docker world. I have an existing Dockerfile which looks somewhat like the below:
# Base image
FROM <OS_IMAGE>
# Install dependencies
RUN zypper --gpg-auto-import-keys ref -s && \
zypper -n install git net-tools libnuma1
# Create temp user
RUN useradd -ms /bin/bash userapp
# Creating all the required folders that is required for installment.
RUN mkdir -p /home/folder1/
RUN mkdir -p /home/folder2/
RUN sudo pip install --upgrade pip
RUN python3 code_which_takes_time.py
# Many more stuff below this.
code_which_takes_time.py takes a long time to run; it downloads many things and executes them.
The problem is that whenever we add more statements below RUN python3 code_which_takes_time.py, this time-consuming python script ends up being executed again while building the image.
So I would like to split this image into 2 Dockerfiles.
One file would be built only once; it would contain the time-consuming stuff that only needs to run once while building an image.
The second one would be used to add any further statements, which would be added as more layers on top of the existing image.
Because if I run docker build -t "test" . with the current file, it executes my python script again and again. It's time-consuming and I don't want to run it over and over.
My questions:
How can I split the Dockerfile as described above?
How can I build an image from 2 Dockerfiles?
How can I run these 2 files?
As of now I do:
Build and run: docker build -t "test" . && docker run -it "test"
Just Build : docker build -t "test" .
Just Run : docker run -it "test"
One thing I can suggest after reading the scenario: you want to split your workflow into two Dockerfiles, and as far as I know you can easily do that.
Keep your first Dockerfile, which builds an image with your python script code_which_takes_time.py already executed, and tag that image as "root_image".
After that, when you want to add other tasks on top of "root_image", like RUN python3 etc., simply create a new Dockerfile, use FROM root_image in it, and do the stuff you want to do there. Build that Dockerfile and tag the result, e.g. as "child_image"; your child image is then the one that inherits from "root_image".
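A minimal sketch of how this could look (the file names and tags are illustrative assumptions):
# Dockerfile.base - built rarely; contains everything up to and including the slow script
FROM <OS_IMAGE>
# ...the existing zypper / useradd / mkdir / pip steps...
RUN python3 code_which_takes_time.py
# Dockerfile - built often; only fast layers on top of the base
FROM root_image
# ...any new statements go here...
# build the base once, then iterate on the child image
docker build -f Dockerfile.base -t root_image .
docker build -t test . && docker run -it test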
I have a build process that converts typescript into javascript, minifies and concatenates css files, etc.
I would like to put those files into an nginx docker container, but I don't want the original javascript / css source to be included, nor the tools that I use to build them. Is there a good way to do this, or do I have to run the build outside docker (or in a separately defined container), then COPY the relevant files in?
This page talks about doing something similar in a manual way, but doesn't explain how to automate the process e.g. with docker-compose or something.
Create a docker image with all the tools required to build your code; it can also clone the code and build it. After the build, it has to copy the build output into a docker volume, for example a volume named /opt/webapp.
Launch a build container from the image in step 1:
docker run -d -P --name BuildContainer -v /opt/webapp:/opt/webapp build_image_name
Launch an nginx container that uses the shared volume of the build container, in which your built code resides:
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html nginx_image_name
After building and shipping your built code to Appserver, you can delete BuildContainer because it is no longer required.
Advantages of the above steps:
Your built code lives on the host machine, so if an Appserver container fails or stops, the build output is safe on the host and you can launch a new container using it.
If you create a docker image for building the code, then you do not need to install the required tools each time you launch the build container.
You can also build your code on the host machine, but if you want your code to be built in a fresh environment every time, this approach is good; building/compiling on the same host machine every time can cause problems with stale source code, git clone errors, etc.
EDIT:
You can append :ro (read-only) to the volume so that one container cannot affect the other. You can read more about docker volumes here. Thanks @BMitch for the suggestion.
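For example, the nginx container could mount the shared volume read-only (same command as above, just with :ro appended):
docker run -d -P --name Appserver -v /opt/webapp:/usr/local/nginx/html:ro nginx_image_name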
The latest version of docker supports multi-stage builds, where build products can be copied from one container to another.
https://docs.docker.com/engine/userguide/eng-image/multistage-build/
This is an ideal scenario for a multi-stage build. You perform the compiling in the first stage, copy the output of that compile to the second stage, and only ship that second stage. Each stage is an independent image that begins with a FROM line. And to transfer files between stages, there's now a COPY --from syntax. The result looks roughly like:
# first stage with your full compile environment, e.g. maven/jdk
FROM maven as build
WORKDIR /src
COPY src /src
RUN mvn install
# second stage starts below with just a jre base image
FROM openjdk:jre
# copy the jar from the first stage here
COPY --from=build /src/result.jar /app
CMD java -jar /app/result.jar
Original answer:
Two common options:
As mentioned, you can build outside and copy the compiled result into the container.
You merge your download, build, and cleanup step into a single RUN command. This is a common best practice to minimize the size of each layer.
An example Dockerfile for the second option would look like:
FROM mybase:latest
RUN apt-get update && apt-get install tools \
&& git clone https://github.com/myproj \
&& cd myproj \
&& make \
&& make install
&& cd .. \
&& apt-get rm tools && apt-get clean \
&& rm -rf myproj
The lines would be a little more complicated than that, but that's the gist.
As @dnephin suggested in his comments on the question and on @pl_rock's answer, the standard docker tools are not designed to do this, but you can use a third party tool like one of the following:
dobi (48 GitHub stars)
packer (6210 GitHub stars)
rocker (759 GitHub stars)
conveyor (152 GitHub stars)
(GitHub star counts as of when I wrote the answer)
We went with dobi as it was the first one we heard of (because of this question), but it looks like packer is the most popular.
Create a Dockerfile that runs your build process and then runs your cleanup code.
Example:
FROM node:latest
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /dist && cp -a /tmp/node_modules /dist/
RUN cp /tmp/package.json /dist
ADD . /tmp
RUN cd /tmp && npm run build
RUN mkdir -p /dist && cp -a /tmp/. /dist
#run some clean up code here
RUN npm run cleanup
# Define working directory
WORKDIR /dist
# Expose port
EXPOSE 4000
# Run app
CMD ["npm", "run", "start"]
In your docker compose file
web:
  build: ../project_path
  environment:
    - NODE_ENV=production
  restart: always
  ports:
    - "4000"