I've created a Docker image with the JFrog (Artifactory) CLI and Terraform to be used by pods in a K8s cluster, but it won't persist: the pod gets deleted immediately after spinning up and never executes the job it was assigned. Upon checking the pushed image in Artifactory, the WORKDIR and CMD instructions show up as RUN layers. Is there anything I'm missing?
Here's the Dockerfile:
FROM alpine:3.16.2
LABEL maintainer="Platform Engineering"
# install dependencies
RUN apk add terraform
RUN apk add curl
RUN apk add tree
RUN curl -fL https://install-cli.jfrog.io | sh
RUN curl --location --output /usr/local/bin/release-cli "https://release-cli-downloads.s3.amazonaws.com/latest/release-cli-linux-amd64"
RUN chmod +x /usr/local/bin/release-cli
# check version of installed dependencies
RUN terraform -v
RUN jf -v
RUN release-cli -v
# target ci workspace under /tmp directory
WORKDIR /tmp/ci-workspace
CMD ["/bin/sh"]
Here's what the layers look like in Artifactory:
I tried rebuilding on Windows and on another machine, and installing one binary at a time; nothing worked.
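For context, a container whose CMD is just /bin/sh exits as soon as the shell does when nothing is attached to it, and the pod is cleaned up along with it. A purely illustrative pod spec that overrides the command so the container runs an actual job (the names and the command below are placeholders, not taken from the real setup):
apiVersion: v1
kind: Pod
metadata:
  name: ci-terraform        # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: ci
      image: my-registry.example.com/ci-terraform:latest   # placeholder image reference
      workingDir: /tmp/ci-workspace
      command: ["/bin/sh", "-c"]
      args: ["terraform version && jf -v"]                  # placeholder for the real CI commands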
I just started learning Docker. To teach myself, I managed to containerize bandit (a Python code scanner), but I'm not able to see the output of the scan before the container destroys itself. How can I copy the output file from inside the container to the host, or otherwise save it?
Right now I'm just using bandit to scan itself, basically :)
Dockerfile
FROM python:3-alpine
WORKDIR /
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git ./code-to-scan
CMD [ "python -m bandit -r ./code-to-scan -o bandit.txt" ]
You can mount a volume from your host where you can share the output of bandit.
For example, you can run your container with:
docker run -v $(pwd)/output:/tmp/output -t your_awesome_container:latest
And in your Dockerfile:
...
CMD [ "python -m bandit -r ./code-to-scan -o /tmp/bandit.txt" ]
This way the bandit.txt file will end up in the output folder on the host.
It's better to place the code in your image somewhere other than the root directory.
I made some adjustments to your Dockerfile.
FROM python:3-alpine
WORKDIR /usr/myapp
RUN pip install bandit
RUN apk update && apk upgrade
RUN apk add git
RUN git clone https://github.com/PyCQA/bandit.git .
CMD [ "bandit","-r",".","-o","bandit.txt" ]`
This clones the git repository into your WORKDIR.
Note the CMD: it is an array (exec form), so split the command and every argument into separate elements, as in the Dockerfile above.
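For comparison, here is roughly how the two CMD styles differ (the single-string array in the original Dockerfile fails because Docker looks for an executable literally named "python -m bandit -r ./code-to-scan -o bandit.txt"):
# shell form: the line is run through /bin/sh -c, which splits the arguments
CMD python -m bandit -r ./code-to-scan -o bandit.txt
# exec form: no shell is involved, so every argument must be its own array element
CMD ["python", "-m", "bandit", "-r", "./code-to-scan", "-o", "bandit.txt"]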
I put the Dockerfile in my D:\test directory (Windows).
docker build -t test .
docker run -v D:/test/:/usr/myapp test
It will generate bandit.txt in the test folder.
After the code executes, the container exits, since there is nothing else for it to do.
You can also pass --rm to remove the container once it finishes.
docker run --rm -v D:/test/:/usr/myapp test
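If the container has already exited (and was started without --rm), another option is to copy the report out afterwards, e.g.:
docker cp <container-id>:/usr/myapp/bandit.txt ./bandit.txt
where <container-id> is the id shown by docker ps -a.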
I am trying to create a Docker image based on ubuntu:20.04 where I want to install ROS2, Ignition Gazebo and the ROS2-ign-bridge with a Dockerfile.
The installation of ROS2 and ign works without any issue, but during the bridge installation I need to use colcon. Here's the relevant part of the Dockerfile:
## install ROS2 ignition gazebo bridge
RUN export IGNITION_VERSION=edifice
RUN mkdir -p ros_ign_bridge_ws/src
RUN git clone https://github.com/osrf/ros_ign.git -b foxy ros_ign_bridge_ws/src
WORKDIR ros_ign_bridge_ws
RUN rosdep install -r --from-paths src -i -y --rosdistro foxy
RUN colcon build
RUN source ros_ign_bridge_ws/install/setup.bash
RUN echo "source ros_ign_bridge_ws/install/setup.bash" >> ~/.bashrc
It fails during the colcon build step when I use
docker build -f Dockerfiles/companion_base.Dockerfile -t companion_base .
, but when I run the image created up to that step
docker run -it c125a17c2f68 /bin/bash
and then execute colcon build inside the container it works without any issue.
So what is the difference between RUN colcon build and running colcon build inside the container?
The issue was that when you source something in a previous Docker build step, it isn't available in the next step, because each RUN step runs in its own shell. So what I needed to do was the sourcing and the building in the same step:
RUN /bin/bash -c "source /opt/ros/foxy/setup.bash; colcon build"
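The same applies to the RUN export IGNITION_VERSION=edifice line earlier in the Dockerfile: the exported variable is gone by the next step. A small sketch of how to make it stick for the later rosdep and colcon steps is to use ENV instead:
# ENV persists across all later build steps (and into the final image), unlike RUN export
ENV IGNITION_VERSION=edifice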
It might be a simple question, but I could not find a proper solution.
I have a Docker image as below. All I would like to do is simply run a curl command inside the Kubernetes pod, but I get the error below. I was also not able to exec via bash.
$ kubectl exec -ti hub-cronjob-dev-597cc575f-6lfdc -n hub-dev sh
Defaulting container name to hub-cronjob.
Use 'kubectl describe pod/hub-cronjob-dev-597cc575f-6lfdc -n hub-dev' to see all of the containers in this pod.
/usr/src/app $ curl
sh: curl: not found
Tried with bash
$ kubectl exec -ti cronjob-dev-597cc575f-6lfdc -n hub-dev bash
error executing command in container: failed to exec in container: failed to start exec "8019bd0d92aef2b09923de78753eeb0c8b60a78619543e4cd27069128a30da92": OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown
Dockerfile
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
#Install curl
RUN apk --no-cache add curl -> did not work
RUN apk update && apk add curl curl-dev bash -> did not work
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
Your Dockerfile consists of multiple stages, which is called a multi-stage build.
Each FROM statement starts a new stage and a new image. In your case you have 2 stages:
builder, where you build your app and install curl
the second stage, which copies /usr/src/app from the builder stage
In this case the image produced by the second FROM node:12-alpine statement contains only the basic Alpine packages, the Node tooling, and the /usr/src/app directory copied from the first stage.
If you want curl in your final image you need to install it in the second stage (after the second FROM node:12-alpine):
FROM node:12-alpine AS builder
# Variables from outside
ARG NODE_ENVIRONMENT=development
ENV NODE_ENV=$NODE_ENVIRONMENT
# Create app directory
WORKDIR /usr/src/app
# Do not install curl in this stage
# Install app dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Build Stage 2
# Take the build from the previous stage
FROM node:12-alpine
#Install curl
RUN apk update && apk add curl
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app /usr/src/app
# run the application
EXPOSE 50005 9183
CMD [ "npm", "run", "start:docker" ]
As mentioned in the comments, you can test it by running the Docker container directly; there is no need to run a pod in the k8s cluster:
docker build -t image . && docker run -it image sh -c 'which curl'
It is common to use multi-stage builds for applications written in compiled programming languages.
In the first stage you install all the necessary dev tools and compilers and compile the sources into a binary. Since you don't need (and probably don't want) the sources and developer tools in a production image, you should create a new stage.
In the second stage you copy the compiled binary and run it via CMD or ENTRYPOINT. This way the image contains only executable code, which keeps it small.
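As a purely illustrative sketch (the module layout and binary name are made up, not taken from this question), a multi-stage build for a compiled Go program could look like:
# stage 1: build the binary with the full Go toolchain
FROM golang:1.19-alpine AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/myapp .
# stage 2: copy only the compiled binary into a small runtime image
FROM alpine:3.16
COPY --from=builder /out/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]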
Alternatively, you can add curl with apk directly inside the running k8s pod:
apk add curl
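For example, against the pod from the question, something like this should work:
kubectl exec -it hub-cronjob-dev-597cc575f-6lfdc -n hub-dev -- sh -c 'apk add --no-cache curl'
Keep in mind that anything installed this way is lost when the pod restarts, so installing curl in the image (as shown above) is the durable fix.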
I've started using CircleCI for CI (I'm a newbie) and I want to build a Docker image and push it to Docker Hub inside a CircleCI job.
The problem is the ADD statement of the Dockerfile; the error says:
ADD failed: stat /var/lib/docker/tmp/docker-builder814373370/app/build: no such file or directory
docker build works fine locally. The problem seems to be the 'remote environment' created by CircleCI to execute Docker commands inside a job (when the job itself runs in a container). I tried multiple things to share my folder with the remote environment, but nothing has worked. I also tried to execute my job on a 'machine' executor to get rid of the 'remote environment', but that gave me even more errors.
I think I can achieve it by storing my project online in another job and then pulling the folder over HTTPS inside the Dockerfile, but I'm pretty sure there is a faster way; I just don't see it.
Here's my Dockerfile:
FROM ubuntu:20.04
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update -yq && apt-get -yq install nodejs npm && npm install serve -g
ADD app/build/ /app
EXPOSE 5000
CMD serve -s /app -l 5000
and my circleci job:
working_directory: ~/project/
docker:
  - image: circleci/buildpack-deps:stretch
steps:
  - checkout
  - setup_remote_docker
  - run:
      name: Build Docker image
      command: sudo docker build . -t $IMAGE_NAME:latest
I achieved it by storing artifacts in another job and then downloading the folder over HTTPS with curl and wget in a RUN statement of the Dockerfile.
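A rough sketch of what that RUN statement can look like, assuming the earlier job exposes the build output at some reachable URL (the URL below is a placeholder, not a real endpoint):
# fetch the pre-built app instead of ADDing it from the local build context
RUN curl -fL -o /tmp/build.tar.gz https://example.com/artifacts/app-build.tar.gz \
    && mkdir -p /app \
    && tar -xzf /tmp/build.tar.gz -C /app \
    && rm /tmp/build.tar.gz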
I am trying to add Glide to my Golang project, but I can't get my container working. I am currently using:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN mkdir -p $$GOPATH/bin
RUN curl https://glide.sh/get | sh
RUN go get github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
RUN glide update && fresh -c ../runner.conf main.go
as per #craigchilds94's post. When I run
docker build -t docker_test .
It all works. However, when I change the last line from RUN glide ... to CMD glide ... and then start the container with:
docker run -it --volume=$(PWD):/go docker_test
It gives me an error: /bin/sh: glide: not found. Ignoring the glide update and directly starting fresh results in the same: /bin/sh: fresh: not found.
The end goal is to be able to mount a volume (for the live-reload) and be able to use it in docker-compose so I want to be able to build it, but I do not understand what is going wrong.
This should probably work for your purposes:
# create image from the official Go image
FROM golang:alpine
RUN apk add --update tzdata bash wget curl git;
# Create binary directory, install glide and fresh
RUN go get -u github.com/Masterminds/glide
RUN go get -u github.com/pilu/fresh
# define work directory
ADD . /go
WORKDIR /go/src
ENTRYPOINT $GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
As far as I know you don't need to run the glide update after you've just installed glide. You can check this Dockerfile I wrote that uses glide:
https://github.com/timogoosen/dockerfiles/blob/master/btcd/Dockerfile
and here is the README: https://github.com/timogoosen/dockerfiles/blob/master/btcd/README.md
This article gives a good overview of the difference between CMD, RUN and ENTRYPOINT: http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/
To quote from the article:
"RUN executes command(s) in a new layer and creates a new image. E.g., it is often used for installing software packages."
In my opinion, installing packages and libraries can happen with RUN.
For starting your binary or commands I would suggest using ENTRYPOINT; see: "ENTRYPOINT configures a container that will run as an executable." You could use CMD too for running:
$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go
Something like this might work, I didn't test this part (note that the exec form of CMD is not run through a shell, so $GOPATH won't be expanded and each argument has to be its own element; wrapping it in sh -c keeps the variable expansion):
CMD ["sh", "-c", "$GOPATH/bin/fresh -c /go/src/runner.conf /go/src/main.go"]