Using the docker build command line (with BuildKit enabled, e.g. DOCKER_BUILDKIT=1) I can pass in a build secret as follows:
docker build \
--secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties \
--build-arg project=template-ms \
.
Then use it in a Dockerfile
# syntax = docker/dockerfile:1.0-experimental
FROM gradle:jdk12 AS build
COPY *.gradle .
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle dependencies
COPY src/ src/
RUN --mount=type=secret,target=/home/gradle/gradle.properties,id=gradle.properties gradle build
RUN ls -lR build
FROM alpine AS unpacker
ARG project
COPY --from=build /home/gradle/build/libs/${project}.jar /tmp
RUN mkdir -p /opt/ms && unzip -q /tmp/${project}.jar -d /opt/ms && \
mv /opt/ms/BOOT-INF/lib /opt/lib
FROM openjdk:12
EXPOSE 8080
WORKDIR /opt/ms
USER nobody
CMD ["java", "-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:8000", "-Dnetworkaddress.cache.ttl=5", "org.springframework.boot.loader.JarLauncher"]
HEALTHCHECK --start-period=600s CMD curl --silent --output /dev/null http://localhost:8080/actuator/health
COPY --from=unpacker /opt/lib /opt/ms/BOOT-INF/lib
COPY --from=unpacker /opt/ms/ /opt/ms/
I want to do the build using docker-compose, but I can't find anything in the docker-compose.yml reference about how to pass the secret. That way the developer would only need to type docker-compose up.
You can use environment or args to pass variables to the container in docker-compose:
args:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
environment:
  - secret=id=gradle.properties,src=$HOME/.gradle/gradle.properties
I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build -t ${IMAGE_NAME} .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
...
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
I'd like to be able to reference the APP_PORT variable in my config.sh within the Dockerfile as shown above. However, what I have does not work and it complains: Error: ${APP_PORT} is not a valid port number. So it's not interpreting APP_PORT as a variable. Is there a way to reference the variables within config.sh from within the Dockerfile?
Thanks!
EDIT: New Files based on suggested solutions (still don't work)
I have a config.sh:
IMAGE_NAME="back_end"
APP_PORT=80
PUBLIC_PORT=8080
and a build.sh:
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
and a run.sh:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
and finally, a Dockerfile:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
#ENV PORT $APP_PORT
ENV APP_PORT=${APP_PORT}
#RUN echo "$PORT"
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
RUN ls -a
CMD ["gunicorn", "-b", "0.0.0.0:${APP_PORT}", "main:app"]
run.sh still fails and reports: Error: '${APP_PORT} is not a valid port number.'
Define a variable in the Dockerfile as follows:
FROM python:buster
LABEL maintainer="..."
ARG APP_PORT
ENV APP_PORT=${APP_PORT}
# Install gunicorn & falcon
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt
# Add demo app
COPY ./app /app
COPY ./config.sh /app/config.sh
WORKDIR /app
CMD gunicorn -b 0.0.0.0:$APP_PORT main:app
Note: this is the shell form, without the ["...", "..."] exec-form brackets.
Pass it as build-arg, e.g. in your build.sh:
Note: passing the build argument is only necessary when the value is used while building the image. Since you only use it in CMD, you could omit it during the build and rely on the runtime environment variable instead.
#!/bin/bash
source config.sh
echo "Image name is: ${IMAGE_NAME}"
sudo docker build --build-arg APP_PORT="${APP_PORT}" -t "${IMAGE_NAME}" .
# sudo docker build --build-arg APP_PORT=80 -t back_end .  (you may skip config.sh and define the values directly)
and pass value of $APP_PORT in run.sh as well when starting the container:
#!/bin/bash
source config.sh
# Expose ports and run
sudo docker run -it \
-e APP_PORT=$APP_PORT \
-p $PUBLIC_PORT:$APP_PORT \
--name $IMAGE_NAME $IMAGE_NAME
You need a shell to replace environment variables and when your CMD is in exec form, there's no shell.
If you use the shell form, there is a shell and you can use environment variables.
CMD gunicorn -b 0.0.0.0:${APP_PORT} main:app
Read here for more information on the two forms of the CMD statement: https://docs.docker.com/engine/reference/builder/#cmd
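The difference is easy to see outside Docker. A minimal sketch, using plain sh to stand in for what Docker does with each form:

```shell
# Exec form: the argv is passed verbatim -- no shell is involved,
# so the literal text ${APP_PORT} reaches the program unchanged.
printf '%s\n' 'gunicorn -b 0.0.0.0:${APP_PORT} main:app'

# Shell form: Docker wraps the command in /bin/sh -c, which expands
# environment variables before the program starts.
APP_PORT=80 sh -c 'echo gunicorn -b 0.0.0.0:${APP_PORT} main:app'
```

The first line prints the unexpanded `${APP_PORT}` (which is exactly what gunicorn receives in the exec form, hence the "not a valid port number" error), while the second prints the port substituted in.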
I want to run a Docker container with a sidecar, following this tutorial.
For example, I have a Java Spring Boot application, for which I made the following Dockerfile:
# Dockerfile for GitLab CI/CD
FROM maven:3.5.2-jdk-8-alpine AS MAVEN_BUILD
ARG SPRING_ACTIVE_PROFILE
MAINTAINER SlandShow
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install -Dspring.profiles.active=$SPRING_ACTIVE_PROFILE && mvn package -B -e -Dspring.profiles.active=$SPRING_ACTIVE_PROFILE
FROM openjdk:8-alpine
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/task-0.0.1-SNAPSHOT.jar /app/task-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java", "-jar", "task-0.0.1-SNAPSHOT.jar"]
After that I build the Docker image and run it:
$ docker build .
$ docker container run -p 8010:8010 <imageId>
The Docker CLI returns the hash of the started container, for example cc82fa748a62de634893f5594a334ada2854f0be0dff8149efb28ae67c98191c.
Then I try to start the sidecar:
docker run -pid=container:cc82fa748a62de634893f5594a334ada2854f0be0dff8149efb28ae67c98191c -p 8080:8080 brendanburns/topz:db0fa58 /server --addr=0.0.0.0:8080
And get:
docker: invalid publish opts format (should be name=value but got '8080:8080').
What's wrong with it?
My fault, I forgot the second - before -pid: it should be --pid=container:<id>.
I'm trying to configure my docker container so it's possible to ssh into it (the container will be run on Azure). I managed to create an image that enables user to ssh into a container created from that image, the Dockerfile looks like that (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
Having the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to ssh into a container created from the ssh image with the following commands:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see, I'm using the ssh image as the base image in the final stage. Even though I was able to ssh into the container created from the ssh image, I'm unable to ssh into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using to make starting the container easier:
version: '3.7'
services:
  application.webapi:
    image: application.webapi
    container_name: webapi
    ports:
      - "0.0.0.0:5000:80"
      - "0.0.0.0:2222:22"
    build:
      context: .
      dockerfile: Application.WebAPI/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! - but when I do service ssh start and then try to ssh into the container, it works. Unfortunately this approach is not acceptable: the ssh daemon should launch on startup.
I tried using cron and other stuff available on debian but it's a slim version and systemd is not available there - I'm also not fond of installing hundreds of things on slim versions.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image, it defines the startup command, and has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that both runs sshd -D and then runs dotnet Application.WebApi.dll.
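A minimal sketch of such a script (start.sh is a hypothetical name; the sshd and dotnet paths are assumed from the images above). It backgrounds sshd and then execs the app so the app stays PID 1 and receives container signals:

```shell
# Write a minimal entrypoint script: start sshd in the background,
# then exec the .NET app in the foreground.
cat > start.sh <<'EOF'
#!/bin/sh
/usr/sbin/sshd
exec dotnet Application.WebApi.dll
EOF
chmod +x start.sh
# Syntax-check the script without executing it:
sh -n start.sh && echo "start.sh syntax OK"
```

In the final stage you would then COPY the script in and replace the entrypoint, e.g. ENTRYPOINT ["/app/start.sh"].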
I'm new to kaniko, and I'm trying to build Docker images on an Ubuntu Docker host.
I have a local Dockerfile and a main.go app:
# Dockerfile
FROM golang:1.10.3-alpine AS build
ADD . /src
RUN cd /src && go build -o app
FROM alpine
WORKDIR /app
COPY --from=build /src/app /app/
CMD [ "./app" ]
#main.go
package main
import "fmt"
func main() {
fmt.Println("Hello, World!")
}
And on the command line, I run:
docker run -it -v $(pwd):/usr \
gcr.io/kaniko-project/executor:latest \
--dockerfile=Dockerfile --context=/usr --no-push
Unfortunately, I got an error like the one below:
...
INFO[0006] Skipping paths under /proc, as it is a whitelisted directory
INFO[0006] Using files from context: [/usr]
INFO[0006] ADD . /src
INFO[0006] Taking snapshot of files...
INFO[0006] RUN cd /src && go build -o app
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c cd /src && go build -o app]
/bin/sh: go: not found
error building image: error building stage: waiting for process to exit: exit status 127
What's wrong? (docker version 18.09.0)
You need to use a different path for the context in kaniko. Your command to run this build should look like this:
docker run -it -v $(pwd):/context \
gcr.io/kaniko-project/executor:latest \
--dockerfile=Dockerfile --context=/context --no-push
With /usr as the context in your command, kaniko was overwriting that path during the build. In the golang image, go is located under /usr, which is why it could not be found:
# which go
/usr/local/go/bin/go
I want to create a Docker image using either the git sources or the already built app. I created two Dockerfiles like these (note: this is pseudo code):
Runtime-Image:
FROM <baseimage>
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
COPY myapp.tgz .
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
myapp.tgz is created on a build server, or maybe by compiling manually. It is available locally on the Docker host.
To build directly from source I use:
FROM <devimage> AS buildenv
ARG GIT_USER
ARG GIT_PASSWORD
RUN git clone http://${GIT_USER}:${GIT_PASSWORD}@<my.git.host>
RUN ./makefile && cp /source/build/myapp.tgz /drop/myapp.tgz
FROM <baseimage> AS runenv
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
COPY --from=buildenv /drop/myapp.tgz .
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
The instructions in the second build stage of this are obviously a duplicate of the Runtime-Image Dockerfile.
I'd like to have just ONE Dockerfile, which can build from source, or from context on the docker host, as required. I could put the duplicated commands in a custom baseimage and reuse that to build onto (FROM), but this would obfuscate the Dockerfile.
What is the recommended, most elegant way to do this?
I can't use a bind mount to get myapp.tgz into the current directory on the docker host, can I? For that I would have to start a container to build my app?
There is no IF directive in the Dockerfile for conditions?
If there is no myapp.tgz on the docker host, COPY myapp.tgz . will fail.
If there is no buildenv stage, COPY --from=buildenv /drop/myapp.tgz . will fail.
I could use COPY ./* . and then check with
[ -f /myapp.tgz ] && <prepare-container> || <build-from-git-source>
I guess? Or would you rather just create a separate Dockerfile for building from source and then use something like
docker run --rm -v /SomewhereOnHost/drop:/drop my-compile-image
For the past 2 days I have been trying to figure this out, and now I have a good solution to achieve a conditional build (an if in a Dockerfile):
ARG mode=local
FROM alpine as build_local
ONBUILD COPY myapp.tgz .
FROM alpine as build_remote
ONBUILD RUN git clone GIT_URL
ONBUILD RUN cd repo && ./makefile && cp /source/build/myapp.tgz .
FROM build_${mode} AS runenv
EXPOSE 1234/tcp
EXPOSE 4321/tcp
VOLUME /foobar
RUN tar -xzf myapp.tgz && rm -f myapp.tgz
ENTRYPOINT ["myapp"]
The top-level mode argument allows you to pass the condition with docker build --build-arg mode=remote . (the default is local). ONBUILD is used so the commands are only executed if the corresponding stage is selected.
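The stage selection is plain string substitution on the FROM line, the same mechanism as shell parameter expansion. A trivial illustration (ordinary shell, not Docker itself):

```shell
# Docker substitutes the top-level ARG into the stage name, so
# --build-arg mode=remote turns "FROM build_${mode}" into "FROM build_remote".
mode=local
echo "FROM build_${mode}"
mode=remote
echo "FROM build_${mode}"
```

Only the selected stage's ONBUILD triggers fire, so the unused branch is never executed.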