Dockerfile copy jar file from another directory - docker

I have this Dockerfile:
FROM openjdk:11
ENV JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The structure of my application is:
Demo:
--deployment:
----Dockerfile
--src/
--docker-compose.yaml
--target:
----app.jar
Code snippet from docker-compose file:
api:
  container_name: backend
  image: backend
  build:
    context: deployment/
  ports:
    - "8080:8080"
When I put the Dockerfile in the same directory as the docker-compose file and change the compose file to:
api:
  container_name: backend
  image: backend
  build: .
  ports:
    - "8080:8080"
it runs as expected. But I want to keep the Dockerfile in the deployment folder, since that is where I have the Helm chart and other docker-compose files which use this Dockerfile.
My question is:
How can I specify the correct path to the target folder in the Dockerfile?

You cannot copy anything that is outside of the build context. If you want to keep the current project structure, a solution would be to set this in your compose file for the api service:
build:
  context: .
  dockerfile: deployment/Dockerfile
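With that change the build context is the project root, so the target/*.jar pattern in the Dockerfile's COPY resolves against the directory that actually contains target/, and the Dockerfile itself needs no changes. A sketch of the full service, based on the snippet in the question:

api:
  container_name: backend
  image: backend
  build:
    context: .
    dockerfile: deployment/Dockerfile
  ports:
    - "8080:8080"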

Related

Force update shared volume in docker compose

My Dockerfile for the ui image is as follows:
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "build"]
and my docker-compose file looks like this:
version: "3"
services:
nginx:
depends_on:
- backend
- ui
restart: always
volumes:
- ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
- static:/usr/share/nginx/html
build:
context: ./nginx/
dockerfile: Dockerfile
ports:
- "80:80"
backend:
build:
context: ./backend/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./backend:/app
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
ui:
tty: true
stdin_open: true
environment:
- CHOKIDAR_USEPOLLING=true
build:
context: ./ui/
dockerfile: Dockerfile
volumes:
- /app/node_modules
- ./ui:/app
- static:/app/build
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes:
static:
I am trying to build the static content and copy it from the ui container to the nginx container using a shared volume. Everything works as expected at first, but when I change the contents of ui and rebuild, the changes are not reflected. I tried the following:
docker-compose down
docker-compose up --build
docker-compose up
None of them replaces the static content with the new build.
Only when I remove the static volume like below
docker volume rm skeleton_static
and then do
docker-compose up --build
is the content actually replaced. How do I automatically replace the static content on every docker-compose up or docker-compose up --build? Thanks.
Named volumes are presumed to hold user data in some format Docker can't understand; Docker never updates their content after they're originally created, and if you mount a volume over image content, the old content in the volume hides updated content in the image. As such, I'd avoid named volumes here.
It looks like, in the setup you show, the ui container doesn't actually do anything: its main container process just builds the application and then exits immediately. A multi-stage build is a more appropriate approach here: it lets you compile the application during the image build phase, without declaring a do-nothing container or adding the complexity of named volumes.
# ui/Dockerfile
# First stage: build the application; note this is
# very similar to the existing Dockerfile
FROM node:alpine as prodnode
WORKDIR /app
COPY ./package.json ./
RUN npm i
COPY . .
RUN ["npm", "run", "build"] # not CMD
# Second stage: nginx server serving that application
FROM nginx:latest
COPY --from=prodnode /app/build /usr/share/nginx/html
# use default CMD from the base image
In your docker-compose.yml file, you no longer need separate "build" and "serve" containers; they are now combined.
version: "3.8"
services:
backend:
build: ./backend
environment:
- PGUSER=postgres
- PGHOST=postgres
- PGDATABASE=postgres
- PGPASSWORD=postgres_password
- PGPORT=5432
depends_on:
- postgres
# no volumes:
ui:
build: ./ui
depends_on:
- backend
ports:
- '80:80'
# no volumes:
postgres:
image: "postgres:latest"
environment:
- POSTGRES_PASSWORD=postgres_password
volumes: # do persist database data
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
A similar problem will apply to the anonymous volume you've used for the backend service's node_modules directory, and it will ignore any changes to the package.json file. Since all of the application's code and library dependencies are already included in the image, I've deleted the volumes: block that would overwrite those.
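For reference, baking the dependencies into the backend image could look like the following Dockerfile (a minimal sketch only: the question doesn't show the backend Dockerfile, so the base image and start command here are assumptions):

# backend/Dockerfile (hypothetical sketch)
FROM node:alpine
WORKDIR /app
# install dependencies inside the image, so no node_modules volume is needed
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD ["npm", "start"]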

Docker container works, but fails when built from docker-compose

I have an application with 3 containers:
client - an Angular application,
gateway - a .NET Core application,
api - a .NET Core application
I am having trouble with the container hosting the angular application.
Here is my Dockerfile:
#stage 1
FROM node:alpine as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build
#stage 2
FROM nginx:alpine
COPY --from=node /app/dist/caliber_client /usr/share/nginx/html
EXPOSE 80
and here is the docker compose file:
# Please refer https://aka.ms/HTTPSinContainer on how to setup an https developer certificate for your ASP .NET Core service.
version: '3.4'
services:
  calibergateway:
    image: calibergateway
    container_name: caliber-gateway
    build:
      context: .
      dockerfile: caliber_gateway/Dockerfile
    ports:
      - 7000:7000
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberapi:
    image: caliberapi
    container_name: caliber-api
    build:
      context: .
      dockerfile: caliber_api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    networks:
      - caliber-local
  caliberclient:
    image: caliber-client-image
    container_name: caliber-client
    build:
      context: .
      dockerfile: caliber_client/Dockerfile
    ports:
      - 7005:7005
    networks:
      - caliber-local
networks:
  caliber-local:
    external: true
When I build and run the Angular container independently, I can connect to the site and it works; however, if I try to build it with docker-compose, I get the following error:
enoent ENOENT: no such file or directory, open '/app/package.json'
I can see that npm cannot find the package.json, but I am copying the whole site to the /app directory in the Dockerfile, so I am not sure where the disconnect is.
Thank you.
In the Dockerfile, the left-hand side of COPY statements is always interpreted relative to the build: { context: } directory in the docker-compose.yml file (or the build: directory if there is no nested argument, or the docker build directory argument; but in any case never anything outside this directory tree).
In a comment, you say
The package.json is one level deeper than the docker-compose.yml file. It is at the same level as the Dockerfile, in the caliber_client folder.
Assuming the client application is self-contained, you can change the build definition to use the client subdirectory as the build context:
build:
  context: caliber_client
  dockerfile: Dockerfile
or, since dockerfile: Dockerfile is the default, the shorter
build: caliber_client
If it's important to you to use the parent directory as the build context (maybe you're including some shared files that you don't show in the question), then you can also change the Dockerfile to refer to the subdirectory:
# when the build: { context: } is the parent directory of this one
COPY caliber_client .
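With the parent directory as the build context, the whole Dockerfile would then look roughly like this (the Dockerfile from the question with only the COPY line adjusted):

#stage 1
FROM node:alpine as node
WORKDIR /app
COPY caliber_client .
RUN npm install
RUN npm run build
#stage 2
FROM nginx:alpine
COPY --from=node /app/dist/caliber_client /usr/share/nginx/html
EXPOSE 80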

How to create the directory in a Dockerfile

I am struggling to create a directory in my Dockerfile below. Entering the container after building the image, I can't find the "models" directory. The "ds" directory in the path "/usr/src/app/ds/models" is an application directory which was copied in. Could you please tell me what is wrong here?
FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
My docker-compose.yaml file contains a volume:
version: '3.8'
services:
  app:
    build: .
    command:
      - /bin/bash
      - -c
      - python manage.py runserver 0.0.0.0:8000
    restart: always
    volumes:
      - .:/usr/src/app
    ports:
      - '8000:8000'
When your docker-compose.yml file says
volumes:
  - .:/usr/src/app
that host directory completely replaces the /usr/src/app directory from your image. This means pretty much nothing in your Dockerfile has an effect; if you try to deploy this setup to another system, you've never run the code in the image.
I'd recommend deleting this block, and also the command: override (make it the default CMD in the Dockerfile instead).
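For example, the Dockerfile from the question could absorb the command: override like this (the same Dockerfile, with the runserver invocation moved into a CMD):

FROM python:3.8
ENV PYTHONUNBUFFERED=1
ENV DISPLAY :0
WORKDIR /usr/src/app
COPY . .
RUN mkdir -p /usr/src/app/ds/models
# default command, replacing the command: override in the compose file
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]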
I need to download models to this directory
Mount only the specific directory you need into your container; don't overwrite the entire application tree. Potentially consider keeping that data directory in a different part of the filesystem.
version: '3.8'
services:
  app:
    build: .
    # no command:
    restart: always
    volumes:
      # only the models subdirectory, not the entire application
      - ./ds/models:/usr/src/app/ds/models
    ports:
      - '8000:8000'

Build image with php and use it in production with docker-compose

I am starting with Docker and I think I am missing something quite obvious. I have a really simple multi-stage Dockerfile which looks like this:
FROM php:7.4-fpm-alpine as test_php
WORKDIR /app
COPY . .
CMD ["php-fpm"]
FROM nginx:1.19-alpine as test_nginx
COPY docker/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf
WORKDIR /app
COPY --from=test_php /app/public public/
And a docker-compose file which looks like this:
services:
  php:
    build:
      context: .
      target: test_php
    volumes:
      - ./:/app
  nginx:
    build:
      context: .
      target: test_nginx
    depends_on:
      - php
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d
      - ./public:/app/public
    ports:
      - "80:80"
This works in development (in the public folder I just have a simple index.php with phpinfo()).
My default.conf for nginx has this:
fastcgi_pass php:9000;
to link to the php service.
The problem is production. I build my image from the Dockerfile and then push it to Docker Hub.
To use my image in production, I wanted to do something like this in a new docker-compose.prod.yml:
services:
  app:
    image: mynickname/myimage
So now I use the image built from the Dockerfile, but I no longer have the php service, so the nginx conf doesn't work anymore.
I was thinking of keeping my original docker-compose file (with the php service), but in that case I don't use my image...
I am clearly missing something, so my questions are:
What is the "best" way to go from dev to prod with a basic configuration like this (php and nginx)?
How can I use my image in production and have php working fine?
Are there other ways?
Thanks a lot for your help!
Your first step (still in the development environment) should be to delete the volumes: blocks that overwrite the code in the image. Injecting a per-deployment nginx configuration is reasonable; overwriting what you COPY in with host content means that you're not actually testing what you're going to deploy.
A given Compose service can have both a build: and an image:. In this case Compose will tag the image it builds with the name you give it, instead of choosing its own name, and then you can docker-compose push the built images to a registry.
Finally, when you go to run this setup somewhere else, you can remove the build: blocks and Compose will pull the image:s it needs. The resulting docker-compose.yml will roughly look like:
version: '3'
services:
  php:
    # build:
    #   context: .
    #   target: test_php
    image: mynickname/php
  nginx:
    # build:
    #   context: .
    #   target: test_nginx
    image: mynickname/nginx
    depends_on:
      - php
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d
    ports:
      - "80:80"
You can also set this up with multiple docker-compose.yml files where the "standard" docker-compose.yml file has the production version (including the image:), and this gets extended with a docker-compose.override.yml that adds the build: declarations. In a production setup you'd only copy the base file to the target system.
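A minimal sketch of that split, reusing the image names from above: the base file keeps only what production needs, and the override file (which Compose reads automatically in development) adds the build: blocks:

# docker-compose.yml (production base)
version: '3'
services:
  php:
    image: mynickname/php
  nginx:
    image: mynickname/nginx
    depends_on:
      - php
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d
    ports:
      - "80:80"

# docker-compose.override.yml (development additions)
services:
  php:
    build:
      context: .
      target: test_php
  nginx:
    build:
      context: .
      target: test_nginx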

Docker-compose build: How to build an image for production and copy app code into the image?

https://docs.docker.com/compose/production/
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside
I'd like to build an image for production with my app code.
I have a file docker-compose-prod.yml:
version: '3'
services:
  ------
  nginx:
    build:
      context: ./docker/nginx
    image: my_nginx:v1
    ports:
      - 80:80
    volumes:
      - ./docker/app:/var/www/html
    depends_on:
      - php
  ------
------
The code of my app is located in ./docker/app.
The Dockerfile is located in ./docker/nginx, and the COPY command can't copy app code from outside the ./docker/nginx folder.
When I run the build command, I get an image without the app content in /var/www/html:
docker-compose -f docker-compose-prod.yml build
How can I build an image with my app code in this case?
You can pass the dockerfile option in the build block: https://docs.docker.com/compose/compose-file/#dockerfile
This way, you can change your build context to ./docker and, in the Dockerfile, copy the app folder to /var/www/html. You then no longer have to specify a volume when starting the app.
The correct config looks like:
version: '3'
services:
  ------
  nginx:
    build:
      context: ./docker
      dockerfile: nginx/Dockerfile-prod
    image: my_nginx:v1
    ports:
      - 80:80
  ------
And the Dockerfile-prod in ./docker/nginx:
...
COPY ./app /var/www/html
...
