I need to execute a shell script from within Docker. I am not as proficient in Docker or in scripting as I would like to be. The script has no effect and does not print anything to the screen. I do not believe that it is being called.
What am I doing wrong here?
The script should be run as the entrypoint command.
The Script
#!/usr/bin/env bash
set -Ex
function apply_path {
echo "Check that we have ENVIRONMENT_VAR vars"
test -n "$ENVIRONMENT_VAR"
find /usr/src/app/.next \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i "s#NEXT_PUBLIC_ENVIRONMENT_VAR#$ENVIRONMENT_VAR#g"
}
apply_path
echo "Starting Nextjs"
exec "$#"
The Dockerfile
ARG NODE_VERSION=14.4.0-alpine
###
# STAGE 1: Base
###
FROM node:$NODE_VERSION as base
ENV NODE_PATH=/src
WORKDIR $NODE_PATH
###
# STAGE 2: Build
###
FROM base as build
COPY package.json package-lock.json .npmrc ./
RUN npm i
COPY . ./
RUN NEXT_PUBLIC_ENVIRONMENT_VAR=BUILD npm run build
# Permissions to execute the script
RUN ["chmod", "+x", "./entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
###
# STAGE 3: Production
###
FROM node:$NODE_VERSION
ENV NODE_PATH=/src
ENV APP_PORT=3000
WORKDIR $NODE_PATH
COPY --from=build $NODE_PATH/.next ./.next
COPY --from=build $NODE_PATH/node_modules ./node_modules
COPY --from=build $NODE_PATH/src ./src
COPY --from=build $NODE_PATH/package.json ./
COPY --from=build $NODE_PATH/.babelrc ./
COPY --from=build $NODE_PATH/LICENSE ./
EXPOSE $APP_PORT
CMD npm start
In a multi-stage Docker build, the final image contains only what is produced from the last FROM line onward. You are setting that script as an ENTRYPOINT, but in an earlier build stage whose filesystem and settings are discarded. Note also that ENTRYPOINT (and CMD) don't run at image build time, only when the built image is eventually run.
The easiest way to address this is to COPY the script directly into the final build stage:
# STAGE 3: Production
FROM node:$NODE_VERSION
...
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD npm start
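One caveat, as an assumption on my part rather than something you have reported: COPY takes the execute bit from the file in the build context, not from the earlier stage where you ran chmod, so if entrypoint.sh is not executable on your host you may also need to repeat the chmod in the final stage, roughly:
# STAGE 3: Production (sketch; only the chmod line is new)
FROM node:$NODE_VERSION
...
COPY entrypoint.sh .
RUN ["chmod", "+x", "./entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
CMD npm start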
Related
I'm trying to get Sonar Scanner to analyze a project as part of a Dockerfile using the mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 image, but it throws the following exception when attempting to run dotnet-sonarscanner begin:
Unhandled exception. System.IO.IOException: Cannot find local application data directory.
I've verified that the folder it's looking for exists in the container with the following PowerShell command:
RUN pwsh -Command Get-ChildItem $env:LOCALAPPDATA -Directory
Not sure what I'm missing here. Here's a more complete picture of the Dockerfile with example data:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1809 AS build
WORKDIR /app
ARG SONAR_PROJECT_KEY=example-project-key
ARG SONAR_ORGANIZATION_KEY=example-org-key
ARG SONAR_HOST_URL=https://sonarcloud.io
ARG SONAR_TOKEN
# install java
COPY ./install-java.ps1 .
RUN pwsh install-java.ps1
RUN cmd ver
# Install Sonar Scanner
RUN dotnet tool install --tool-path msbuildscanner dotnet-sonarscanner
#ENV PATH="$PATH:/root/.dotnet/tools"
RUN pwsh -Command Get-ChildItem $env:LOCALAPPDATA -Directory
# Start Sonar Scanner
RUN msbuildscanner\dotnet-sonarscanner begin /k:"$SONAR_PROJECT_KEY" /o:"$SONAR_ORGANIZATION_KEY" /d:sonar.host.url="$SONAR_HOST_URL" /d:sonar.login="$SONAR_TOKEN"
# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
RUN msbuildscanner\dotnet-sonarscanner end /d:sonar.login="$SONAR_TOKEN"
#Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1.9-nanoserver-1809 AS runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "app.dll"]
I have the following Dockerfile.PROD, which builds my Node.js application and then copies the output over to Nginx.
I am trying to pass credentials to Docker as build arguments and then use them inside a sed command to create a .env file for the Node.js app to use.
My Dockerfile.PROD
# build environment
FROM node:12.5.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY DEFAULT.env ./.env
RUN sed -i "s/access-key/$REACT_APP_ACCESS_KEY_ID/" ./.env
RUN sed -i "s/secret-key/$REACT_APP_SECRET_ACCESS_KEY/" ./.env
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
# new
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My DEFAULT.env file
REACT_APP_ACCESS_KEY_ID="access-key"
REACT_APP_SECRET_ACCESS_KEY="secret-key"
My Build command used
docker build -f Dockerfile.prod --build-arg REACT_APP_ACCESS_KEY_ID=ABCDEFGHZXCV --build-arg REACT_APP_SECRET_ACCESS_KEY=ABCDEFGHZXCVhdhjshdjsdf9889 -t test:prod .
The build command keeps warning:
One or more build-args [REACT_APP_ACCESS_KEY_ID REACT_APP_SECRET_ACCESS_KEY] were not consumed
For some reason the sed commands are not picking them up, so it must be a syntax issue. I am open to other ways of creating the .env file.
You need to declare the arguments inside your Dockerfile before their first use, using ARG:
https://docs.docker.com/engine/reference/builder/#arg
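As a sketch grounded in that answer (only the two ARG lines are new, the rest is an excerpt of your build stage), the top of Dockerfile.prod would become:
# build environment
FROM node:12.5.0-alpine as build
# declare the build args so the values passed via --build-arg reach this stage
ARG REACT_APP_ACCESS_KEY_ID
ARG REACT_APP_SECRET_ACCESS_KEY
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
COPY DEFAULT.env ./.env
RUN sed -i "s/access-key/$REACT_APP_ACCESS_KEY_ID/" ./.env
RUN sed -i "s/secret-key/$REACT_APP_SECRET_ACCESS_KEY/" ./.env
Keep in mind that values passed this way are recorded in the build stage's image history, so this is only reasonable when that stage itself is never pushed.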
I am trying to pass a variable which fetches an IP address with the following code:
$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
I have tried setting an entrypoint shell script such as:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
java -jar user-service-0.0.1-SNAPSHOT.jar
And my dockerfile looking like:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint and set variables
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
I need this variable to show up when I echo ECS_INSTANCE_IP_ADDRESS, so that it can be picked up by a .properties file that references $ECS_INSTANCE_IP_ADDRESS. Whenever I open /bin/sh inside the container and echo the variable, it comes back blank. If I run the export in that shell myself, the echo does return the value.
Any ideas? I've tried a bunch of things and can't get the variable to be available in the container.
The standard way to set environment variables in Docker is the Dockerfile, or the task definition in the case of AWS ECS.
The current entrypoint script sets the variable for that process's session only, which is why you get an empty value when you exec into the container; you can verify it is set within that session, but this is not a recommended approach in Docker.
You can verify it like this:
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
echo "instance IP is ${ECS_INSTANCE_IP_ADDRESS}"
java -jar user-service-0.0.1-SNAPSHOT.jar
You will be able to see this value but it will not be available in another session.
Another example,
Dockerfile
FROM node:alpine
WORKDIR /app
COPY run.sh run.sh
RUN chmod +x run.sh
ENTRYPOINT ["./run.sh"]
entrypoint
#!/bin/sh
export ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"
You will see that the node process started by the entrypoint can read the variable, but if you run a separate command inside the container it will show undefined:
docker exec -it <container_id> ash -c 'node -e "console.log(process.env.ECS_INSTANCE_IP_ADDRESS)"'
There are two options for dealing with the environment variable:
The first is to pass the IP as an environment variable at run time if you already know it, since the instance always launches before the container (a run sketch follows below).
The second is to pass it to the Java process as a system property on the command line (see How to pass system properties to a jar file); your entrypoint then becomes:
java -DECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4) -jar myjar.jar
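A sketch of the first option, assuming you start the container yourself with docker run rather than through an ECS task definition (in ECS you would put the same key/value in the task definition's environment section); the image name my-user-service is hypothetical:
# Resolve the instance IP on the host, then hand it to the container at run time
ECS_INSTANCE_IP_ADDRESS=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
docker run -e ECS_INSTANCE_IP_ADDRESS="$ECS_INSTANCE_IP_ADDRESS" -p 8081:8081 my-user-service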
This is how I fixed it. I believe there was also an error in the metadata URL, and that could have been the culprit.
Dockerfile:
#############
### build ###
#############
# base image
FROM maven:3.6.2-ibmjava-8-alpine as build
# set working directory
WORKDIR /app
# install and cache app dependencies
COPY . /app
RUN mvn clean package
############
### prod ###
############
# base image
FROM openjdk:8-alpine
RUN apk add jq
# set working directory
WORKDIR /app
# copy entrypoint from the 'build environment'
COPY --from=build /app/entrypoint.sh /app/entrypoint.sh
# copy artifact build from the 'build environment'
COPY --from=build /app/target/user-service-0.0.1-SNAPSHOT.jar /app/user-service-0.0.1-SNAPSHOT.jar
# expose port 8081
EXPOSE 8081
# run entrypoint
ENTRYPOINT ["/bin/sh", "./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
export FARGATE_IP=$(wget -q -O - http://169.254.170.2/v2/metadata | jq -r '.Containers[0].Networks[0].IPv4Addresses[0]')
echo $FARGATE_IP
java -jar user-service-0.0.1-SNAPSHOT.jar
application-aws.properties (not full file):
eureka.instance.ip-address=${FARGATE_IP}
Eureka prints a template error on its status page, but the microservices do connect and respond to the Spring Boot actuator health endpoint and swagger-ui.
I'm having some weird issues with my custom Dockerfile, compiling a .NET Core app in Alpine containers.
I've tried numerous different configurations to no avail: the cache is ALWAYS invalidated when I include the final FROM instruction (if I comment it and everything below it out, caching works fine). Here's the file:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7
ARG ASPNETCORE_ENVIRONMENT=development
RUN export
COPY --from=build /app .
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
Any ideas? Hints? Tips? Blazingly obvious mistakes? I've checked each layer and the COPY . ./ instruction ONLY copies the files I expect it to - and none of them change between builds.
It's also worth noting that if I remove the last FROM instruction (and the other relevant lines) the cache works perfectly, but the final image is obviously considerably bigger (1.8 GB) than one based on microsoft/dotnet:2.1-aspnetcore-runtime-alpine3.7 (172 MB). I have tried just commenting out the COPY instruction after the FROM, but that doesn't affect the cache invalidation. The following works as expected:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
ARG ASPNETCORE_ENVIRONMENT=development
ARG ASPNET_CONFIGURATION=Debug
ARG PROJECT_DIR=src/API/
ARG PROJECT_NAME=MyAPI
ARG SOLUTION_NAME=MySolution
RUN export
WORKDIR /source
COPY ./*.sln ./nuget.config ./
# Copy source project files
COPY src/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p src/${file%.*}/ && mv $file src/${file%.*}/; done
# # Copy test project files
COPY test/*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p test/${file%.*}/ && mv $file test/${file%.*}/; done
RUN dotnet restore
COPY . ./
RUN for dir in test/*.Tests/; do (cd "$dir" && dotnet test --filter TestType!=Integration); done
WORKDIR /source/${PROJECT_DIR}
RUN dotnet build ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app
RUN dotnet publish ${PROJECT_NAME}.csproj -c $ASPNET_CONFIGURATION -o /app --no-restore
WORKDIR /app
EXPOSE 80
VOLUME /app/logs
ENTRYPOINT ["dotnet", "MyAssembly.dll"]
.dockerignore below:
base-images/
docker-compose.yml
docker-compose.*.yml
VERSION
**/.*
**/*.ps1
**/*.DotSettings
**/*.csproj.user
**/*.md
**/*.log
**/*.sh
**/Dockerfile
**/bin
**/obj
**/node_modules
**/.vs
**/.vscode
**/dist
**/packages/
**/wwwroot/
Last bit of info: I'm building containers using docker-compose - specifically by running docker-compose build myservicename, but building the image with docker build -f src/MyAssembly/Dockerfile -t MyImageName . yields the same results.
If you're building locally and the cache isn't working – then I don't know what the issue is :)
But if you're building as part of CI – then the issue may be that you need to pull, build, and push the intermediate stage explicitly:
> docker pull MyImageName:build || true
> docker pull MyImageName:latest || true
> docker build --target build --cache-from MyImageName:build --tag MyImageName:build .
> docker build --cache-from MyImageName:build --cache-from MyImageName:latest --tag MyImageName:latest .
> docker push MyImageName:build
> docker push MyImageName:latest
The || true part is there because the images won't be there on the initial CI build. The "magic sauce" of this recipe is docker build --target <intermediate-stage-name> and docker build --cache-from <intermediate-stage-name>.
I can't explain why building and pushing the intermediate stage explicitly is needed to get the cache to work – other than some handwaving about how only the final image gets pushed, and not the intermediate stage and its layers. But it worked for me – I learned this "trick" from here: https://pythonspeed.com/articles/faster-multi-stage-builds/
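An aside that is not part of the original answer, for CI hosts where BuildKit is enabled (an assumption about Docker 19.03 or newer): BuildKit only honours --cache-from images that were built with inline cache metadata, so the same two-tag recipe would need BUILDKIT_INLINE_CACHE=1 on both builds, roughly:
> # sketch: the same recipe with inline cache metadata so BuildKit can reuse the pushed layers
> export DOCKER_BUILDKIT=1
> docker build --target build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from MyImageName:build --tag MyImageName:build .
> docker build --build-arg BUILDKIT_INLINE_CACHE=1 --cache-from MyImageName:build --cache-from MyImageName:latest --tag MyImageName:latest .
> docker push MyImageName:build
> docker push MyImageName:latest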
I'm trying to convert my project to use multi-stage builds. However, the final step always fails with an error:
Step 11/13 : COPY --from=build /bin/grafana-server /bin/grafana-server
COPY failed: stat /var/lib/docker/overlay2/xxxx/merged/bin/grafana-server: no such file or directory
My Dockerfile looks like this:
FROM golang:latest AS build
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
ENV GIT_SSL_NO_VERIFY=1
COPY . $SRC_DIR
WORKDIR $SRC_DIR
# Building of Grafana
RUN \
npm run build && \
go run build.go setup && \
go run build.go build
# Create final stage containing only required artifacts
FROM scratch
COPY --from=build /bin/grafana-server /bin/grafana-server
EXPOSE 3001
CMD ["./bin/grafana-server"]
The build.go build step outputs artifacts to ./bin/. The error is pretty unhelpful, other than telling me the files don't exist where I think they should.
My folder structure on my machine is:
--| ~/Documents/dev/grafana/src/grafana/grafana
--------| bin
------------| <grafana-server builds to here>
--------| deploy
------------| docker
----------------| Dockerfile
From ~/Documents/dev/grafana/src/grafana/grafana is where I issue: docker build -t grafana -f deploy/docker/Dockerfile .
To follow up on my comment: the WORKDIR you set in the build stage is an absolute path, so the binary ends up under that path (${SRC_DIR}/bin/grafana-server), and the COPY --from=build source should be specified the same way.
So this could lead to the following Dockerfile:
FROM golang:latest AS build
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
ENV GIT_SSL_NO_VERIFY=1
COPY . $SRC_DIR
WORKDIR $SRC_DIR
# Building of Grafana
RUN \
npm run build && \
go run build.go setup && \
go run build.go build
# Create final stage containing only required artifacts
FROM scratch
ENV SRC_DIR=/go/src/github.com/grafana/grafana/
WORKDIR $SRC_DIR
COPY --from=build ${SRC_DIR}/bin/grafana-server ${SRC_DIR}/bin/grafana-server
EXPOSE 3001
CMD ["./bin/grafana-server"]
(only partially tested)
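An alternative sketch that is not from the answer above: keep the final stage rooted at / as in the question and spell out the absolute source path instead. This assumes grafana-server is statically linked, since FROM scratch ships no shell or libc, which is also why CMD has to use the exec form with an absolute path:
# final stage: copy the binary from the build stage's absolute output path
FROM scratch
COPY --from=build /go/src/github.com/grafana/grafana/bin/grafana-server /bin/grafana-server
EXPOSE 3001
CMD ["/bin/grafana-server"]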