It might be a simple question, but I could not find a proper solution.
I have a Docker image as shown below. All I would like to do is run the curl command inside a Kubernetes pod, but I get the error below. I was also not able to exec in via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Install curl
RUN apk --no-cache add curl                    # -> did not work
RUN apk update && apk add curl curl-dev bash   # -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is also called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
a second stage, which copies /usr/src/app from the builder stage
In this case the second FROM node:12-alpine stage will contain only the basic Alpine packages, the Node tools, and the /usr/src/app directory that you copied from the first stage.
If you want to have curl in your final image, you need to install curl in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl in this stage; it would not be carried into the final image
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
# Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test this by running the Docker container directly; there is no need to run a pod in a k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications implemented in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and then compile the sources into a binary file. Since you don't need, and probably don't want, the sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary file and run it as the CMD or ENTRYPOINT. This way the image contains only executable code, which makes it smaller.
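For illustration, a minimal sketch of this pattern for a Go application (the /src layout and the hello binary name are made up for the example):
# Stage 1: build the binary with the full Go toolchain
FROM golang:1.15-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/hello .
# Stage 2: copy only the compiled binary into a small runtime image
FROM alpine:3.12
COPY --from=builder /out/hello /usr/local/bin/hello
CMD ["hello"]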
We can add curl using apk inside the k8s pod. Note that this works only for Alpine-based images, and the package is lost when the pod restarts:
apk add curl
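For example, you can run it from outside the pod with kubectl exec (pod name taken from the question above):
kubectl exec -it hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- apk add --no-cache curl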
Related
I've created a Docker image with Artifactory and Terraform to be used by pods in a K8s cluster, but it won't persist: the pod gets deleted immediately after spinning up and isn't able to execute the job it was assigned. Upon checking the pushed image in Artifactory, the WORKDIR and CMD instructions show up as RUN. Is there anything I'm missing?
Here's the Dockerfile:
FROM alpine:3.16.2
LABEL maintainer="Platform Engineering"
# install dependencies
RUN apk add terraform
RUN apk add curl
RUN apk add tree
RUN curl -fL https://install-cli.jfrog.io | sh
RUN curl --location --output /usr/local/bin/release-cli "https://release-cli-downloads.s3.amazonaws.com/latest/release-cli-linux-amd64"
RUN chmod +x /usr/local/bin/release-cli
# check version of installed dependencies
RUN terraform -v
RUN jf -v
RUN release-cli -v
# target ci workspace under /tmp directory
WORKDIR /tmp/ci-workspace
CMD ["/bin/sh"]
Here's how the layers look in Artifactory (screenshot omitted; the WORKDIR and CMD steps are listed as RUN layers).
I tried rebuilding on Windows and on another machine, and installing one binary at a time; nothing worked.
I just started learning Docker. To teach myself, I managed to containerize bandit (a Python code scanner), but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm just using bandit to scan itself, basically :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume on your host where you can share the output of bandit.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python", "-m", "bandit", "-r", "./code-to-scan", "-o", "/tmp/output/bandit.txt" ]
This way the bandit.txt file will be found in the output folder.
It is better to place the code in your image somewhere other than the root directory.
I made some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]`
This clones the repository directly into your WORKDIR.
Note the CMD: it is an array (the exec form), so divide the command and all its arguments as in the Dockerfile above.
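For reference, the exec form passes an argv array straight to the binary, while the shell form wraps the command in /bin/sh -c; both lines below run bandit the same way:
CMD ["bandit", "-r", ".", "-o", "bandit.txt"]
CMD bandit -r . -o bandit.txt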
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate the bandit.txt file in the test folder.
After the code is executed the container exits, as there is nothing else to do.
You can also pass --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test
I want to preface this by saying that I am very new to Docker and have just gotten my feet wet using it. In the Dockerfile that I run to build the container, I install a program that sets some env variables. Here is my Dockerfile for context.
FROM python:3.8-slim-buster
COPY . /app
RUN apt-get update
RUN apt-get install wget -y
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/install_mvGenTL_Acquire.sh
RUN wget http://static.matrix-vision.com/mvIMPACT_Acquire/2.40.0/mvGenTL_Acquire-x86_64_ABI2-2.40.0.tgz
RUN chmod +x ./install_mvGenTL_Acquire.sh
RUN ./install_mvGenTL_Acquire.sh -u
RUN apt-get install -y python3-opencv
RUN pip3 install USSCameraTools
WORKDIR /app
CMD python3 main.py
During docker build, the install_mvGenTL_Acquire.sh script sets env variables inside the container. I need these variables to still be set when executing docker run, but when I check the env variables after running the image, they are not set. I know I can pass them in directly, but I would like to use the ones that are set by the install during the build.
Any help would be greatly appreciated, thanks!
To run a bash script when your container starts:
Create a script.sh file:
#!/bin/bash
your commands here
If you are using an Alpine image, you must use #!/bin/sh instead of #!/bin/bash on the first line of your bash file.
Now in your Dockerfile, copy your bash file into the container and use the ENTRYPOINT instruction to run this file when the container starts:
.
.
.
COPY script.sh /
RUN chmod +x /script.sh
.
.
.
ENTRYPOINT ["/script.sh"]
Note that the ENTRYPOINT instruction must use the path of your bash file inside the image.
Now when you create a container, the script.sh file will be executed. For this particular question, the script can load the environment the installer created and then hand off to the main process; a sketch follows.
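A minimal sketch of such a script for this case, assuming the mvGenTL_Acquire installer writes its environment variables to a profile script during the build (the /etc/profile.d/acquire_vars.sh path is an assumption; check where the installer actually puts them):
#!/bin/bash
# Assumption: the installer dropped its env setup here during docker build.
source /etc/profile.d/acquire_vars.sh
# Replace this shell with whatever CMD was given, so the
# process inherits the variables exported above.
exec "$@"
With ENTRYPOINT ["/script.sh"] and CMD ["python3", "main.py"] in the Dockerfile, the CMD is passed to the script as "$@" and runs with the sourced environment.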
What permissions do I need?
I created an instance with the gcloud compute instances create-with-container command.
Then, the following was displayed in the log.
Error: Failed to start container: Error response from daemon:
{"message": "OCI runtime create failed: container_linux.go:349: starting container process caused \"exec: \\\"./foo\\\": permission denied\": unknown"}
./foo is executable.
The image is built by Google Cloud Build.
Dockerfile
# Use the official golang image to create a binary.
# This is based on Debian and sets the GOPATH to /go.
# https://hub.docker.com/_/golang
FROM golang:1.15-buster as builder
# Create and change to the app directory.
WORKDIR /app
# Retrieve application dependencies.
# This allows the container build to reuse cached dependencies.
# Expecting to copy go.mod and if present go.sum.
COPY go.* ./
RUN go mod download
# Copy local code to the container image.
COPY . ./
# Build the binary.
RUN go build -mod=readonly -v -o foo
# Use the official Debian slim image for a lean production container.
# https://hub.docker.com/_/debian
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM debian:buster-slim
RUN set -x && apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates && \
rm -rf /var/lib/apt/lists/*
# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/foo /app/foo
# Run the task on container startup.
WORKDIR /app
CMD ["./foo"]
This works fine on Cloud Run.
Fix the build command:
go build -o foo
The original build command generated a foo directory with the executable file inside it, so ./foo pointed at a directory rather than a binary, which is why exec failed with permission denied. To double-check what ended up in the image, see the command below.
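If you want to verify what the build actually produced, you can list the app directory of the final image before deploying it (the foo-test tag is just an example name):
docker build -t foo-test . && docker run --rm --entrypoint /bin/ls foo-test -l /app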
I am trying to containerize and start my Golang application using docker-compose.
The image is built successfully according to the logs, but my container does not start for docker-compose up, and it throws the following error to my console.
Cannot start service app: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"go\": executable file not found in $PATH": unknown
Here is what my Dockerfile looks like.
ARG GO_VERSION=1.13
FROM golang:${GO_VERSION}-alpine AS builder
# We create an /app directory within our
# image that will hold our application source
# files
RUN mkdir /raedar
# Create the user and group files that will be used in the running container to
# run the process as an unprivileged user.
RUN mkdir /user && \
echo 'nobody:x:65534:65534:nobody:/:' > /user/passwd && \
echo 'nobody:x:65534:' > /user/group
# Install git.
# Git is required for fetching the dependencies.
# Allow Go to retrieve the dependencies for the build
RUN apk update && apk add --no-cache ca-certificates git
RUN apk add --no-cache libc6-compat
# Force the go compiler to use modules
ENV GO111MODULE=on
ADD . /raedar/
WORKDIR /raedar/
RUN go get -d -v golang.org/x/net/html
COPY go.mod go.sum ./
COPY . .
# Compile the binary, we don't want to run the cgo
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o bin/main cmd/app/main.go
# Final stage: the running container.
FROM scratch AS final
WORKDIR /root/
# Import the user and group files from the first stage.
COPY --from=builder /user/group /user/passwd /etc/
# Import the Certificate-Authority certificates for enabling HTTPS.
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Import the compiled executable from the first stage.
COPY --from=builder /raedar/bin/main .
EXPOSE 8080
# Perform any further action as an unprivileged user.
USER nobody:nobody
# Run the compiled binary.
CMD ["./main"]
The error shows you're trying to run go, not ./main:
exec: \"go\": executable file not found in $PATH
A matching Dockerfile would have the line CMD ["go", "./main"] rather than CMD ["./main"].
So either you're unexpectedly building a different Dockerfile, or you're changing the command when you run a container from that image. In particular, if you're using docker-compose, make sure you're not setting command: go ./main or entrypoint: go, either of which would cause this behavior (see the sketch below).
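For illustration, a docker-compose service that would reproduce the error versus one that lets the image's CMD run (the app service name is an assumption; the port matches the Dockerfile's EXPOSE 8080):
services:
  app:
    build: .
    # command: go run ./cmd/app/main.go   # would fail: there is no `go` binary in the scratch image
    # leave `command` and `entrypoint` unset so the image's CMD ["./main"] runs
    ports:
      - "8080:8080"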