I have created a project like https://github.com/senolatac/demo-multi-module-docker and implemented some of the Docker multi-stage build examples from https://docs.docker.com/language/java/run-tests/.
Here is my Dockerfile:
FROM --platform=linux/x86_64 gradle:7.5.1-jdk17-alpine AS base
COPY --chown=gradle:gradle . /app
WORKDIR /app
FROM base as test
CMD ["./gradlew", "test"]
FROM base as build
RUN ./gradlew build -x test
# create-image -> DOCKER_BUILDKIT=0 docker build --tag sb-web-image --target web .
# run -> docker run -it --rm --name sb-web-container -p 8085:8080 sb-web-image
FROM --platform=linux/x86_64 openjdk:17-alpine as web
COPY --from=build /app/web/build/libs/*.jar /web.jar
CMD ["java", "-Dspring-boot.run.profiles=default", "-jar", "/web.jar"]
# create-image -> DOCKER_BUILDKIT=0 docker build --tag sb-worker-image --target worker .
# run -> docker run -it --rm --name sb-worker-container -p 8086:8080 sb-worker-image
FROM --platform=linux/x86_64 openjdk:17-alpine as worker
COPY --from=build /app/worker/build/libs/*.jar /worker.jar
CMD ["java", "-Dspring-boot.run.profiles=default", "-jar", "/worker.jar"]
# create-image -> DOCKER_BUILDKIT=0 docker build --tag sb-web-image --build-arg JAR_FILE=web/build/libs/\*.jar --target generic .
# run -> docker run -it --rm --name sb-web-container -p 8086:8080 sb-web-image
FROM --platform=linux/x86_64 openjdk:17-alpine as generic
ARG JAR_FILE
COPY --from=build /app/${JAR_FILE} /app.jar
CMD ["java", "-Dspring-boot.run.profiles=default", "-jar", "/app.jar"]
My project has two different modules. To run them on Docker, I currently have to create two different images and then run them as two different containers. But my goal is simple: create a single image and run both containers from that image. Do you have any suggestions?
You can straightforwardly override the CMD when you run the application, so you just need to COPY all of the jar files into the single image.
FROM --platform=linux/x86_64 gradle:7.5.1-jdk17-alpine AS build
COPY --chown=gradle:gradle . /app
WORKDIR /app
RUN ./gradlew build -x test
FROM --platform=linux/x86_64 openjdk:17-alpine
WORKDIR /app
COPY --from=build /app/web/build/libs/*.jar ./web.jar
COPY --from=build /app/worker/build/libs/*.jar ./worker.jar
ENV SPRING_PROFILES_ACTIVE=default
EXPOSE 8080
CMD ["java", "-jar", "./web.jar"]
docker build -t sb-image .
docker run -d --name sb-web -p 8085:8080 sb-image
docker run -d --name sb-worker -p 8086:8080 sb-image \
java -jar ./worker.jar
Note that there is only one final stage, but it COPYs both jar files --from=build into it. I pick one of them to be the default CMD, and when I run the other, I provide an additional command after the image name in docker run. (Compose command: and Kubernetes args: can do the same thing; see the sketch below.)
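A minimal docker-compose.yml sketch of that pattern, assuming the image has been built and tagged sb-image as shown above (the service names and host ports are just examples):
version: '3.7'
services:
  web:
    image: sb-image
    ports:
      - "8085:8080"
  worker:
    image: sb-image
    # override the default CMD to run the worker jar instead of web.jar
    command: ["java", "-jar", "./worker.jar"]
    ports:
      - "8086:8080"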
This looks like a Spring Boot application. I've set the profile property as an environment variable rather than a command-line option, which shortens the command line. If your applications share a code base, you can also get a smaller image by unpacking the fat jars, though this requires an additional build stage.
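For the fat-jar unpacking idea, one possible approach (an assumption on my part, not something the original answer spells out) is Spring Boot's layertools jar mode, available since Spring Boot 2.3. A sketch for the web module only, added as extra stages:
FROM --platform=linux/x86_64 openjdk:17-alpine as extract
WORKDIR /extract
COPY --from=build /app/web/build/libs/*.jar ./app.jar
# explode the fat jar into dependencies/, spring-boot-loader/, snapshot-dependencies/, application/
RUN java -Djarmode=layertools -jar app.jar extract
FROM --platform=linux/x86_64 openjdk:17-alpine as web-layered
WORKDIR /app
COPY --from=extract /extract/dependencies/ ./
COPY --from=extract /extract/spring-boot-loader/ ./
COPY --from=extract /extract/snapshot-dependencies/ ./
COPY --from=extract /extract/application/ ./
ENV SPRING_PROFILES_ACTIVE=default
# launcher class name applies to Spring Boot 2.x/3.0/3.1; newer versions moved it
CMD ["java", "org.springframework.boot.loader.JarLauncher"]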
Here is my Dockerfile:
FROM golang:1.17.5 as builder
WORKDIR /go/src/github.com/cnosdb/cnosdb
COPY . /go/src/github.com/cnosdb/cnosdb
RUN go env -w GOPROXY=https://goproxy.cn,direct
RUN go env -w GO111MODULE=on
RUN go install ./...
FROM debian:stretch
COPY --from=builder /go/bin/cnosdb /go/bin/cnosdb-cli /usr/bin/
COPY --from=builder /go/src/github.com/cnosdb/cnosdb/etc/cnosdb.sample.toml /etc/cnosdb/cnosdb.conf
EXPOSE 8086
VOLUME /var/lib/cnosdb
COPY docker/entrypoint.sh /entrypoint.sh
COPY docker/init-cnosdb.sh /init-cnosdb.sh
RUN chmod +x /entrypoint.sh /init-cnosdb.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cnosdb"]
Here is the configuration of my Jenkins job:
But the image Docker built didn't have a name.
Why?
I haven't used Jenkins for this, as I build from the command line. But are you expecting your name arg to show up in the REPOSITORY and TAG columns? If that is the case, docker has a docker tag command, like so:
~$ docker tag <image-id> repo/name:tag
When I build from the command line, I do it like this:
~$ docker build -t repo/name:0.1 .
Then if I check images:
❯ docker image ls
REPOSITORY   TAG   IMAGE ID       CREATED      SIZE
repo/name    0.1   689459c139ef   2 days ago   187MB
Adding to what @rtl9069 mentioned, the docker build command can be run as part of a pipeline. Please take a look at the article https://www.liatrio.com/blog/building-with-docker-using-jenkins-pipelines, as it describes this with examples.
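A pipeline stage typically just wraps the same commands you would run by hand; a rough sketch of the shell steps it might execute (repo/name is a placeholder, and BUILD_NUMBER is Jenkins' build counter):
docker build -t repo/name:${BUILD_NUMBER} .
docker push repo/name:${BUILD_NUMBER}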
I have below dockerfile:
FROM node:16.7.0
ARG JS_FILE
ENV JS_FILE=${JS_FILE:-"./sum.js"}
ARG JS_TEST_FILE
ENV JS_TEST_FILE=${JS_TEST_FILE:-"./sum.test.js"}
WORKDIR /app
# Copy the package.json to /app
COPY ["package.json", "./"]
# Copy source code into the image
COPY ${JS_FILE} .
COPY ${JS_TEST_FILE} .
# Install dependencies (if any) in package.json
RUN npm install
CMD ["sh", "-c", "tail -f /dev/null"]
After building the Docker image, if I run it with the command below, I still cannot see the updated files.
docker run --env JS_FILE="./Scripts/updated_sum.js" --env JS_TEST_FILE="./Test/updated_sum.test.js" -it <image-name>
I would like to see updated_sum.js and updated_sum.test.js in my container, however, I still see sum.js and sum.test.js.
Is it possible to achieve this?
This is my current folder/file structure:
.
├── Dockerfile
├── package.json
├── sum.js
├── sum.test.js
├── Test
│   └── updated_sum.test.js
└── Scripts
    └── updated_sum.js
Using Docker generally involves two phases. First, you compile your application into an image, and then you run a container based on that image. With the plain Docker CLI, these correspond to the docker build and docker run steps. docker build does everything in the Dockerfile, then stops; docker run starts from the fixed result of that and runs the image's CMD.
So if you run
docker build -t sum .
The sum:latest image will have the sum.js and sum.test.js files, because that's what the Dockerfile COPYs in. You can then
docker run --rm sum \
ls
docker run --rm sum \
node ./sum.js
to see and run the contents of the image. (Specifying the latter command as CMD would be a better practice.) You can run the command with different environment variables, but it won't change the files in the image:
docker run --rm -e JS_FILE=missing.js sum ls
# still only has sum.js
docker run --rm -e JS_FILE=missing.js sum node missing.js
# not found
Instead, you need to rebuild the image, using docker build --build-arg options to provide the values:
docker build \
--build-arg JS_FILE=./product.js \
--build-arg JS_TEST_FILE=./product.test.js \
-t product \
.
docker run --rm product node ./product.js
The extremely parametrizable Dockerfile you show here can be a little harder to work with than a single-purpose Dockerfile. I might create a separate Dockerfile per application:
# Dockerfile.sum
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY sum.js sum.test.js ./
CMD node ./sum.js
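To use it, a build-and-run sketch (the -f flag selects the per-application Dockerfile; the tag sum is just an example):
docker build -f Dockerfile.sum -t sum .
docker run --rm sum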
Another option is to COPY the entire source tree into the image (JavaScript files are pretty small compared to a complete Node installation) and use the docker run command to pick which script to run, as sketched below.
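A minimal sketch of that second option, assuming the directory layout shown above; the default CMD and the image tag all-scripts are just examples, and each container picks its script on the docker run command line:
FROM node:16.7.0
WORKDIR /app
COPY package*.json ./
RUN npm install
# copy the whole source tree, including Scripts/ and Test/
COPY . .
CMD ["node", "./sum.js"]
docker build -t all-scripts .
docker run --rm all-scripts node ./Scripts/updated_sum.js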
I want to run a Docker container with a sidecar, following this tutorial.
For example, I have a Java Spring Boot application. Then I made this Dockerfile:
# Dockerfile for GitLab CI/CD
FROM maven:3.5.2-jdk-8-alpine AS MAVEN_BUILD
ARG SPRING_ACTIVE_PROFILE
MAINTAINER SlandShow
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install -Dspring.profiles.active=$SPRING_ACTIVE_PROFILE && mvn package -B -e -Dspring.profiles.active=$SPRING_ACTIVE_PROFILE
FROM openjdk:8-alpine
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/task-0.0.1-SNAPSHOT.jar /app/task-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "task-0.0.1-SNAPSHOT.jar"]
After that I build the Docker image and run it:
$ docker build .
$ docker container run -p 8010:8010 <imageId>
The Docker CLI returns a hash for the started container, for example cc82fa748a62de634893f5594a334ada2854f0be0dff8149efb28ae67c98191c.
Then I try to start the sidecar:
docker run -pid=container:cc82fa748a62de634893f5594a334ada2854f0be0dff8149efb28ae67c98191c -p 8080:8080 brendanburns/topz:db0fa58 /server --addr=0.0.0.0:8080
And get:
docker: invalid publish opts format (should be name=value but got '8080:8080').
What's wrong with it?
My fault, I forgot a second - before -pid: the option should be --pid=container:<id>, not -pid=container:<id>.
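With the missing dash added, the working command looks like this:
docker run --pid=container:cc82fa748a62de634893f5594a334ada2854f0be0dff8149efb28ae67c98191c -p 8080:8080 brendanburns/topz:db0fa58 /server --addr=0.0.0.0:8080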
I'm trying to configure my Docker container so it's possible to ssh into it (the container will be run on Azure). I managed to create an image that lets a user ssh into a container created from it; the Dockerfile looks like this (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
Having the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to ssh into a container created from the ssh image with the following instructions:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see, I'm using the ssh image as the base image in the final stage. Even though I was able to ssh into a container created from the ssh image, I'm unable to ssh into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using to make starting the container easier:
version: '3.7'
services:
  application.webapi:
    image: application.webapi
    container_name: webapi
    ports:
      - "0.0.0.0:5000:80"
      - "0.0.0.0:2222:22"
    build:
      context: .
      dockerfile: Application.WebAPI/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! - but when I do service ssh start and then try to ssh into the container, it works. Unfortunately this approach is not acceptable: the ssh daemon should start itself when the container starts.
I tried using cron and other tools available on Debian, but this is a slim image and systemd is not available there - and I'm not fond of installing lots of extra packages on slim images.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image; it defines the startup command, and it has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that both runs sshd -D and then runs dotnet Application.WebApi.dll.
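A minimal sketch of such a wrapper, assuming it is saved as start.sh next to the Dockerfile (the file name and paths are just examples): it starts sshd in the background and then runs the application in the foreground.
#!/bin/sh
# start.sh: launch the ssh daemon (daemonized into the background), then hand over to the app
/usr/sbin/sshd
exec dotnet Application.WebApi.dll
Then, in the final stage, replace the plain dotnet ENTRYPOINT with the wrapper:
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
ENTRYPOINT ["/app/start.sh"]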
Here is my Dockerfile:
FROM microsoft/aspnetcore-build:1.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:1.0
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 58912
ENTRYPOINT ["dotnet", "SgaApi.dll"]
I used this command to build: docker build -f Dockerfile -t sga .
Run: docker run -e "ASPNETCORE_URLS=http://+:58912" -it --rm sga
The application starts successfully, but I can't access it from the browser. When I run the application using dotnet run, it works fine.
You have exposed the port, but you haven't published it. You can either use -P to publish all exposed ports, or use -p and specify a port mapping. For example:
docker run -p 58912:58912 -e "ASPNETCORE_URLS=http://+:58912" -it --rm sga
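Alternatively, with -P Docker picks a free host port for each EXPOSEd port, and docker port shows the mapping (the container name sga-test is just an example):
docker run -d --name sga-test -P -e "ASPNETCORE_URLS=http://+:58912" sga
docker port sga-test 58912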