Setting environment variable during Docker container start (on AWS) - docker

I am trying to pass a variable which fetches an IP address with the following code:
$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
I have tried setting an entrypoint shell script such as:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
java -jar user-service-0.0.1-SNAPSHOT.jar
And my dockerfile looking like:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint and set variables
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
I need this variable to show up when echoing ECS_INSTANCE_IP_ADDRESS so that it can be picked up by a .properties file that references ${ECS_INSTANCE_IP_ADDRESS}. Whenever I get into /bin/sh of the container and echo the variable, it shows up blank. If I run the export in that shell myself, the echo then returns the value.
Any ideas? I've tried a bunch of things and can't get the variable to be available in the container.

The standard way to set environment variables in Docker is the Dockerfile, or the task definition in the case of AWS ECS.
The current docker-entrypoint script sets the variable for that session only, which is why you got an empty value when you opened a new shell; you can verify that it exists within the entrypoint's own session, but this is not a recommended approach in Docker.
You can verify it like this:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
echo "instance IP is ${ECS_INSTANCE_IP_ADDRESS}"
java -jar user-service-0.0.1-SNAPSHOT.jar
You will be able to see this value but it will not be available in another session.
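For instance, opening a second session into the running container shows the variable empty (the container id is a placeholder):
docker exec -it <container_id> sh -c 'echo "$ECS_INSTANCE_IP_ADDRESS"'   # prints an empty line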
Another example:
Dockerfile
FROM node:alpine
WORKDIR /app
COPY run.sh run.sh
RUN chmod +x run.sh
ENTRYPOINT ["./run.sh"]
entrypoint
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"
You will see that the node process started by the entrypoint can read the environment variable, but if you run the same command in a new session inside the container it will print undefined:
docker exec -it <container_id> ash -c 'node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"'
There are two options for dealing with the ENV.
The first option is to pass the IP as an environment variable when the container starts (or set it in the ECS task definition); since the instance always launches before the container, the IP is already known at that point. See the example below.
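A minimal sketch, assuming you start the container yourself with docker run (the image name user-service is a placeholder):
ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
docker run -e ECS_INSTANCE_IP_ADDRESS="$ECS_INSTANCE_IP_ADDRESS" -p 8081:8081 user-service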
The second option is to pass the value to the JVM as a system property on the command line (see "How to pass system properties to a jar file"); if that works for you, your entrypoint becomes:
java -DECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4) -jar myjar.jar

This is how I fixed it. I believe there was also an error in the metadata URL, and that could have been the culprit.
Dockerfile:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
RUN apk add jq
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
export FARGATE_IP=$(wget -q -O - http://169.254.170.2/v2/metadata | jq -r .Containers[0].Networks[0].IPv4Addresses[0])
echo $FARGATE_IP
java -jar user-service-0.0.1-SNAPSHOT.jar
application-aws.properties (not full file):
eureka.instance.ip-address=${FARGATE_IP}
Eureka prints a template error on the Eureka status page, but the microservices do connect and respond to the Spring Boot actuator health endpoint and swagger-ui.
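If FARGATE_IP ever comes up empty again, a quick sanity check from inside the container is to dump the whole metadata document and then the exact jq path used in the entrypoint above (same endpoint and path as in entrypoint.sh):
wget -q -O - http://169.254.170.2/v2/metadata | jq .
wget -q -O - http://169.254.170.2/v2/metadata | jq -r '.Containers[0].Networks[0].IPv4Addresses[0]'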

Related

Docker environment variable not available in RUN command

I am having difficulties with ARG & ENV in Docker after upgrading to Docker version 20.10.7, build f0df350, on Windows 10.
I have made a Dockerfile to show the issue:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
ARG node_build=production
ENV node_build_env=${node_build}
FROM node:12.18.3 AS node-build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
My goal is that an ARG can be set at build time and then exposed as an environment variable inside the container, falling back to a default value when none is given.
In this Dockerfile I am attempting to write the environment variable node_build_env to a text file and then copy it to the final layer. The problem is that the file is empty.
To reproduce, these are the commands I am using:
docker build -t testargs:test .
docker run -it --rm testargs:test /bin/bash
cat test.txt
The file is empty. However, if I run:
docker build -t testargs:test . --target node-build
and then manually run the command:
echo $node_build_env > test.txt
It works and the value production is written into the file.
Why does it work when I do it manually but not as part of the RUN command?
You are using multi-stage builds.
Your ARG & ENV belong to the base stage, and you are not using the base stage in your node-build stage.
That means node_build_env has no value inside node-build, which is why the following line creates an empty test.txt file:
RUN echo $node_build_env > test.txt
However, your final stage does use the base stage, which means it has access to the node_build_env variable. So after building the image with docker build -t testargs:test ., open an interactive session with the container and execute the following command:
echo $node_build_env
You will see production printed in the terminal.
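For example, a quick check could look like this (image tag taken from the commands above):
docker run -it --rm testargs:test /bin/bash -c 'echo $node_build_env'
# expected output: production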
I believe this will help you solve the problem. Cheers 🍻 !!!
Edit: this is the working version:
ARG node_build=production
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
FROM node:12.18.3 AS node-build
ARG node_build
ENV node_build_env=$node_build
WORKDIR /root
RUN echo $node_build_env > test.txt
FROM base AS final
WORKDIR /app
COPY --from=node-build /root/test.txt ./
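With this version the default can also be overridden at build time; a usage sketch (the value staging is made up):
# uses the default value "production"
docker build -t testargs:test .
# overrides the default for this build
docker build --build-arg node_build=staging -t testargs:test .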

sh: curl: not found even after installing curl inside k8s pod

It might be a simple question, but I could not find a proper solution.
I have a Docker image as shown below. All I would like to do is run a curl command inside the Kubernetes pod, but I get the error below. I was also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
mand in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the image built from the second FROM node:12-alpine statement contains only the basic alpine packages, the node tooling, and the /usr/src/app directory copied from the first stage.
If you want curl in your final image, you need to install it in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl in this stage
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test this by running a Docker container directly - no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications implemented in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and compile the sources into a binary. Since you don't need (and probably don't want) the sources and developer tools in a production image, you create a new stage.
In the second stage you copy the compiled binary and run it as CMD or ENTRYPOINT. This way the image contains only executable code, which makes it smaller. A sketch follows below.
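As an illustration only (unrelated to the Node image in the question; the Go toolchain and file names here are made up for the example), a minimal two-stage Dockerfile for a compiled program could look like this:
# stage 1: build the binary with the full toolchain
FROM golang:1.20-alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .
# stage 2: ship only the compiled binary on a small base image
FROM alpine:3.18
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]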
Alternatively, we can add curl with apk inside the running k8s pod:
apk add curl
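For example, from outside the pod this could be done with kubectl exec (the pod name and namespace are placeholders); note that anything installed this way is lost when the container restarts, because it is not part of the image:
kubectl exec -it <pod-name> -n <namespace> -- sh -c 'apk add --no-cache curl'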

Docker - ASP.NET Core 2.2 application and SSH

I'm trying to configure my Docker container so it's possible to ssh into it (the container will be run on Azure). I managed to create an image that enables a user to ssh into a container created from that image. The Dockerfile looks like this (it's not mine, I found it on the internet):
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
EXPOSE 2222
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
COPY sshd_config /etc/ssh
RUN echo 'root:Docker' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
CMD ["/usr/sbin/sshd", "-D"]
I'm using mcr.microsoft.com/dotnet/core/sdk:2.2-stretch because it's what I need later on to run the application.
With the Dockerfile above, I run docker build . -t ssh. I can confirm that it's possible to ssh into a container created from the ssh image with the following instructions:
docker run -d -p 0.0.0.0:2222:22 --name ssh ssh
ssh root@localhost -p 2222
My application's Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["Application.WebAPI/Application.WebAPI.csproj", "Application.WebAPI/"]
COPY ["Processing.Dependency/Processing.Dependency.csproj", "Processing.Dependency/"]
COPY ["Processing.QueryHandling/Processing.QueryHandling.csproj", "Processing.QueryHandling/"]
COPY ["Model.ViewModels/Model.ViewModels.csproj", "Model.ViewModels/"]
COPY ["Core.Infrastructure/Core.Infrastructure.csproj", "Core.Infrastructure/"]
COPY ["Model.Values/Model.Values.csproj", "Model.Values/"]
COPY ["Sql.Business/Sql.Business.csproj", "Sql.Business/"]
COPY ["Model.Events/Model.Events.csproj", "Model.Events/"]
COPY ["Model.Messages/Model.Messages.csproj", "Model.Messages/"]
COPY ["Model.Commands/Model.Commands.csproj", "Model.Commands/"]
COPY ["Sql.Common/Sql.Common.csproj", "Sql.Common/"]
COPY ["Model.Business/Model.Business.csproj", "Model.Business/"]
COPY ["Processing.MessageBus/Processing.MessageBus.csproj", "Processing.MessageBus/"]
COPY [".Processing.CommandHandling/Processing.CommandHandling.csproj", "Processing.CommandHandling/"]
COPY ["Processing.EventHandling/Processing.EventHandling.csproj", "Processing.EventHandling/"]
COPY ["Sql.System/Sql.System.csproj", "Sql.System/"]
COPY ["Application.Common/Application.Common.csproj", "Application.Common/"]
RUN dotnet restore "Application.WebAPI/Application.WebAPI.csproj"
COPY . .
WORKDIR "/src/Application.WebAPI"
RUN dotnet build "Application.WebAPI.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "Application.WebAPI.csproj" -c Release -o /app
FROM ssh AS final
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Application.WebApi.dll"]
As you can see, I'm using the ssh image as the base image in the final stage. Even though I was able to ssh into a container created from the ssh image, I'm unable to ssh into a container created from the latter Dockerfile. Here is the docker-compose.yml I'm using to make starting the container easier:
version: '3.7'
services:
  application.webapi:
    image: application.webapi
    container_name: webapi
    ports:
      - "0.0.0.0:5000:80"
      - "0.0.0.0:2222:22"
    build:
      context: .
      dockerfile: Application.WebAPI/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=docker
When I run docker exec -it webapi bash and execute service ssh status, I get [FAIL] sshd is not running ... failed! - but if I run service ssh start and then try to ssh into the container, it works. Unfortunately this approach is not acceptable; the ssh daemon should start on its own when the container starts.
I tried using cron and other tools available on Debian, but it's a slim image and systemd is not available there - and I'm not fond of installing hundreds of packages on slim images.
Do you have any ideas what could be wrong here?
You have conflicting startup command definitions in your final image. Note that CMD does not simply run a command in your image; it defines the startup command and has a complex interaction with ENTRYPOINT (in short: if both are present, CMD just supplies extra arguments to ENTRYPOINT).
You can see the table of possibilities in the Dockerfile documentation: https://docs.docker.com/engine/reference/builder/. In addition, there's a bonus complication when you mix and match CMD and ENTRYPOINT in different layers:
Note: If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
As far as I know, you can't get what you want just by layering images. You will need to create a startup script in your final image that both runs sshd -D and then runs dotnet Application.WebApi.dll.
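A minimal sketch of such a script, assuming it is named start.sh and copied into /app next to the published output (the name and location are assumptions):
#!/bin/bash
# start.sh - start sshd in the background, then run the app in the foreground
# so the container's lifetime is tied to the .NET process.
/usr/sbin/sshd
exec dotnet Application.WebApi.dll
And in the final stage of the Dockerfile, something like:
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh
ENTRYPOINT ["/app/start.sh"]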

Mounting Volume as part of a multi-stage build

How can I mount a volume to store my .m2 repo so I don't have to download the internet on every build?
My build is a Multi stage build:
FROM maven:3.5-jdk-8 as BUILD
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jdk
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
CMD ["/bin/bash", "-c", "find -type f -name '*.jar' | xargs java -jar"]
You can do that with Docker >18.09 and BuildKit. You need to enable BuildKit:
export DOCKER_BUILDKIT=1
Then you need to enable the experimental Dockerfile frontend features by adding this as the first line of the Dockerfile:
# syntax=docker/dockerfile:experimental
Afterwards you can call the RUN command with a cache mount. Cache mounts stay persistent across builds:
RUN --mount=type=cache,target=/root/.m2 \
mvn --batch-mode -f /usr/src/app/pom.xml clean package
Although the answer from @Marek Obuchowicz is still valid, here is a small update.
First, add this line to the Dockerfile:
# syntax=docker/dockerfile:1
You can set the DOCKER_BUILDKIT inline like this:
DOCKER_BUILDKIT=1 docker build -t mytag .
I would also suggest splitting the dependency resolution and packaging phases, so you can take full advantage of Docker layer caching (if nothing changes in pom.xml, the cached layer with the already downloaded dependencies is reused). The full Dockerfile could look like this:
# syntax=docker/dockerfile:1
FROM maven:3.6.3-openjdk-17 AS MAVEN_BUILD
COPY ./pom.xml ./pom.xml
RUN --mount=type=cache,target=/root/.m2 mvn dependency:go-offline -B
COPY ./src ./src
RUN --mount=type=cache,target=/root/.m2 mvn package
FROM openjdk:17-slim-buster
EXPOSE 8080
COPY --from=MAVEN_BUILD /target/myapp-*.jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar","-Xms512M","-Xmx2G","-Djava.security.egd=file:/dev/./urandom"]

how do I build a docker image without existing from a docker file

New to Docker.
I am running docker build . with the following Dockerfile:
FROM gcr.io/google_appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
RUN virtualenv /env
# Setting these environment variables are the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
#ADD requirements.txt /app/requirements.txt
#RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
CMD gunicorn -b :$PORT main:app
CMD bash1 "while true; do echo hello; sleep 1;done"
CMD ["sh", "while true; do echo hello; sleep 1;done"]
CMD "echo" "Hello docker!"
But afterwards, when I run docker ps, I don't see the image.
To build an image you have to use:
docker build -t username/imagename .
You have to use -t to tag your image and give it a name; from the docs:
-t, --tag value Name and optionally a tag in the
'name:tag' format (default [])
Then you can see the list of your images using:
docker images
You are using docker ps, which lists containers, not images.
More info about images and containers.
Check the documentation on docker build.
Use docker run to create and run a container; docker build only creates an image. See the example below.
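For example, the full cycle might look like this (image name taken from the answer above):
docker build -t username/imagename .   # builds the image
docker images                          # the new image shows up in this list
docker run username/imagename          # creates and runs a container from the image
docker ps -a                           # lists containers, including exited ones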
