I'm trying to create a Docker image of soundcloud/ipmi_exporter to run with Prometheus on Ubuntu Bionic with Docker 19.03.6, build 369ce74a3c. Docker on my OS X laptop is Docker version 20.10.2, build 2291f61. I am forced to build the (customized) image on my laptop because Bionic has a version of golang that's older than what ipmi_exporter wants, and I'm not allowed to update the Ubuntu server.
Anyway, can someone tell me what I'm doing wrong in my Dockerfile?
# Container image
FROM quay.io/prometheus/golang-builder:1.13-base AS builder
ADD . /go/src/github.com/soundcloud/ipmi_exporter/
RUN cd /go/src/github.com/soundcloud/ipmi_exporter && make
# Container image
FROM ubuntu:18.04
WORKDIR /
RUN apt-get update && apt-get install freeipmi-tools -y --no-install-recommends && rm -rf /var/lib/apt/lists/*
COPY --from=builder /go/src/github.com/soundcloud/ipmi_exporter/ipmi_exporter /bin/ipmi_exporter
EXPOSE 8888
ENTRYPOINT ["ipmi_exporter"]
CMD ["--config.file", "/ipmi_remote.yml"]
CMD ["--web.listen-address=":8889"" "--freeipmi.path=/etc/freeipmi" "--log.level="debug""]
When I run the image all I see is
ipmi_exporter: error: unexpected /bin/sh, try --help
I have ipmi_exporter running on the OS directly and I never configured a config.yml. What config.yml is the Dockerfile author talking about? It's mentioned in the last line of https://github.com/soundcloud/ipmi_exporter/blob/master/Dockerfile
The image lives here: https://github.com/soundcloud/ipmi_exporter. The sample/example Dockerfile refers to a config.yaml, which this software does not use.
I just can't figure out how to make the image pull in the config file I specify.
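For what it's worth, a sketch of what the last lines might need to look like, using only flags already shown above. An exec-form CMD must be a valid JSON array: items are comma-separated, and nested double quotes like "--web.listen-address=":8889"" are invalid (a malformed array silently falls back to shell form, which is where the unexpected /bin/sh comes from). Also, only the last CMD in a Dockerfile takes effect, so all flags have to go in one array, and EXPOSE should match the listen port:
EXPOSE 8889
CMD ["--config.file=/ipmi_remote.yml", "--web.listen-address=:8889", "--freeipmi.path=/etc/freeipmi", "--log.level=debug"]
To make the container pull in the config file you specify, one option is to bind-mount it at run time (the image tag my-ipmi-exporter is just an example):
docker run -p 8889:8889 -v "$(pwd)/ipmi_remote.yml:/ipmi_remote.yml" my-ipmi-exporter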
I am trying to make my application work in a Linux container. It will eventually be deployed to Azure Container Instances. I have absolutely no experience with containers whatsoever, and I am getting lost in the documentation and examples.
I believe the first thing I need to do is create a Docker image for my project. I have installed Docker Desktop.
My project has this structure:
MyProject
MyProject.Core
MyProject.Api
MyProject.sln
Dockerfile
The contents of my Dockerfile are as follows.
#Use Ubuntu Linux as base
FROM ubuntu:22.10
#Install dotnet6
RUN apt-get update && apt-get install -y dotnet6
#Install LibreOffice
RUN apt-get -y install default-jre-headless libreoffice
#Copy the source code
WORKDIR /MyProject
COPY . ./
#Compile the application
RUN dotnet publish -c Release -o /compiled
#ENV PORT 80
#Expose port 80
EXPOSE 80
ENTRYPOINT ["dotnet", "/compiled/MyProject.Api.dll"]
#ToDo: Split build and deployment
Now when I try to build the image using command prompt I am using the following command
docker build - < Dockerfile
This all processes okay up until the dotnet publish command, where it errors saying
Specify a project or solution file
Now I have verified that this command works fine when run outside of the Dockerfile. I suspect something is wrong with the copy. Again, I have tried variations of paths for the WORKDIR, but I just can't figure out what is wrong.
Any advice is greatly appreciated.
Thank you SiHa in the comments for providing a solution.
I made the following change to my docker file.
WORKDIR app
Then I use the following command to build (note that image names must be lowercase):
docker build -t imagename -f FileName .
The image now creates successfully. I am able to run this in a container.
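As for the ToDo about splitting build and deployment, here is a minimal multi-stage sketch, assuming the standard .NET 6 SDK and ASP.NET runtime images and that the MyProject.Api folder contains the API project file (the LibreOffice setup is omitted for brevity):
# Build stage: the SDK image has the compiler toolchain
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyProject.Api -c Release -o /compiled
# Runtime stage: a much smaller image with only the ASP.NET runtime
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /compiled .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyProject.Api.dll"]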
I am looking for some help in writing a Dockerfile for Ubuntu 18.04 which installs Python 3.10.
Currently it is written in such a way that it gets the default version of Python 3 (i.e. 3.6) along with Ubuntu 18.04.
The question is: is there any way I can get Python 3.10 with Ubuntu 18.04? The requirement is to use either the slim or non-slim version of the Python 3.10 Bullseye image from Docker Hub.
You can use the ubuntu 18.04 Docker image, then install Python 3.10 inside it.
FROM ubuntu:18.04
RUN apt-get update -y && apt-get install -y software-properties-common \
    && add-apt-repository -y ppa:deadsnakes/ppa \
    && apt-get update && apt-get install -y python3.10
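To verify, a quick build and version check (the image tag is just an example):
docker build -t ubuntu-py310 .
docker run --rm ubuntu-py310 python3.10 --version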
I was able to build the image with Python 3.10 on Ubuntu 18.04 by using the Bullseye base image.
Step-1: Write a Dockerfile
FROM python:3.10-bullseye
# WORKDIR creates the directory if it doesn't exist; a separate
# RUN mkdir / RUN cd is unnecessary (each RUN starts a fresh shell,
# so RUN cd has no effect on later instructions anyway)
WORKDIR /WORK_REPO
ADD hi.py .
CMD ["python", "-u", "hi.py"]
Step-2: Build the image
docker build -t image_name .
Step-3: Run the docker image
docker run image_name
Step-4: Connect to the container and check the Python version
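For example, one quick way is to override the default command with a version check (image name from Step-2):
docker run --rm image_name python --version
# Python 3.10.x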
I hope this is helpful for someone who is completely new to writing Dockerfiles.
Many Thanks,
Suresh.
I did a basic search in the community and could not find a suitable answer, so I am asking here. Sorry if it was asked earlier.
Basically, I am working on a project where we change code at regular intervals, so we need to build the Docker image every time, and that means installing the dependencies from requirements.txt from scratch, which takes around 10 minutes each time.
How can I make changes directly to a Docker image, and how do I configure the ENTRYPOINT (in the Dockerfile) so the changes are reflected in a pre-built Docker image?
You don't edit an image once it's been built. You always run docker build from the start; it always runs in a clean environment.
The flip side of this is that Docker caches built images. If you had image 01234567, ran RUN pip install -r requirements.txt, and got image 2468ace0 out, then the next time you run docker build it will see the same source image and the same command, skip doing the work, and jump directly to the output image. COPYing or ADDing files that have changed invalidates the cache for all subsequent steps.
So the standard pattern is
# node:10 is an arbitrary choice of language
FROM node:10
WORKDIR /app
# Copy in _only_ the requirements and package lock files
COPY package.json yarn.lock ./
# Install dependencies (once)
RUN yarn install
# Copy in the rest of the application and build it
COPY src/ src/
RUN yarn build
# Standard application metadata
EXPOSE 3000
CMD ["yarn", "start"]
If you only change something in your src tree, docker build will use the cache up through the RUN yarn install step, since package.json and yarn.lock haven't changed, and only re-run from the COPY src/ src/ step onward.
In my case I was facing the same thing: after minor changes, I was building the image again and again.
My old Dockerfile:
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
COPY . .
So what I did was create a base image first, like this (leaving out the last line, so my code is not copied):
FROM python:3.8.0
WORKDIR /app
# Install system libraries
RUN apt-get update && \
apt-get install -y git && \
apt-get install -y gcc
# Install project dependencies
COPY ./requirements.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt --use-deprecated=legacy-resolver
# Don't use terminal buffering, print all to stdout / err right away
ENV PYTHONUNBUFFERED 1
and then built this base image using
docker build -t my_base_img:latest -f base_dockerfile .
Then the final Dockerfile:
FROM my_base_img:latest
WORKDIR /app
COPY . .
Starting from this image, when I was not able to bring the container up because of issues in my copied Python code, I could edit the code inside the container to fix those issues. By this means I avoided the task of building images again and again.
When my code was fixed, I copied the changes from the container back to my code base and then, finally, I created the final image.
There are 4 steps:
Start the image you want to edit (e.g. docker run ...)
Modify the running image by shelling into it with docker exec -it <container-id> bash (you can get the container id with docker ps)
Make any modifications (install new things, make a directory or file)
In a new terminal tab/window run docker commit c7e6409a22bf my-new-image (substituting in the container id of the container you want to save)
An example
# Run an existing image
docker run -dt existing-image
# See that it's running
docker ps
# CONTAINER ID IMAGE COMMAND CREATED STATUS
# c7e6409a22bf existing-image "R" 6 minutes ago Up 6 minutes
# Shell into it
docker exec -it c7e6409a22bf bash
# Make a new directory for demonstration purposes
# (note that this is inside the existing image)
mkdir NEWDIRECTORY
# Open another terminal tab/window, and save the running container you modified
docker commit c7e6409a22bf my-new-image
# Inspect to ensure it saved correctly
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SIZE
# existing-image latest a7dde5d84fe5 7 minutes ago 888MB
# my-new-image latest d57fd15d5a95 2 minutes ago 888MB
I need to run a jar (let's say helloworld.jar) inside a Docker container. The container should use Debian as the OS. Whenever I start the container, the jar should run; that is, it should run java -jar helloworld.jar on start. How can I do that?
Also, how can I make a docker-compose.yml file for it?
Thanks in advance.
You can try a simple Dockerfile:
FROM ubuntu
RUN apt-get update -y && apt-get upgrade -y
# one option: the distro's headless JRE
RUN apt-get install -y default-jre-headless
RUN mkdir /src
WORKDIR /src
ADD . .
CMD java -jar helloworld.jar
Build an image from this via docker build . -t helloworld and run it with docker run helloworld.
Instead of using ubuntu, you could use one of the available OpenJDK images.
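A sketch along those lines, using the (deprecated but still published) openjdk:17-slim image, which is Debian-based and so also satisfies the Debian requirement; the jar name is taken from the question:
FROM openjdk:17-slim
WORKDIR /app
COPY helloworld.jar .
CMD ["java", "-jar", "helloworld.jar"]
And a minimal docker-compose.yml that builds and runs it (the service and image names are illustrative):
services:
  helloworld:
    build: .
    image: helloworld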
I have a Node.js app which I am running as a Docker container. Here is a Dockerfile for that application.
FROM ubuntu
ARG ENVIRONMENT
ARG PORT
RUN apt-get update -qq
RUN apt-get install -y build-essential nodejs npm nodejs-legacy vim
RUN mkdir /consumer_portal
ADD . /consumer_portal
WORKDIR /consumer_portal
RUN npm install -g express
RUN npm install -g path
RUN npm cache clean
RUN npm install
EXPOSE $PORT
ENTRYPOINT [ "node", "server.js" ]
CMD [ $PORT, $ENVIRONMENT ]
Can I modify something in this Dockerfile to reduce the Docker image size?
Using the official node alpine image as a base image, as most here suggested, is a simple way to reduce the overall size of the image, because even the base alpine image is a lot smaller than the base ubuntu image.
A Dockerfile could look like this:
FROM node:alpine
ARG ENVIRONMENT
ARG PORT
RUN mkdir /consumer_portal \
&& npm install -g express path
COPY . /consumer_portal
WORKDIR /consumer_portal
RUN npm cache clean \
&& npm install
EXPOSE $PORT
CMD [ "node", "server.js" ]
It's nearly the same and should work as expected. Most of the commands from your ubuntu image can be applied the same way in the alpine image.
When I add mock data to create a project similar to what you might have, the result is an Ubuntu image with a size of 491 MB, while the Alpine version is only 62.5 MB:
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
alpinefoo    latest   8ca6f338475e   5 minutes ago   62.5MB
ubuntufoo    latest   38620a1bd5a6   6 minutes ago   491MB
Try to pack all RUN instructions together; it will reduce the number of intermediate images. (By itself it won't reduce the size.)
Adding rm -rf /var/lib/apt/lists/* at the end of the same RUN instruction as the apt-get commands will reduce the image size by removing all the useless apt cache.
You may also remove vim from your image in the last RUN instruction.
FROM ubuntu
ARG ENVIRONMENT
ARG PORT
RUN apt-get update \
&& apt-get install -y build-essential nodejs npm nodejs-legacy vim \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir /consumer_portal
ADD . /consumer_portal
WORKDIR /consumer_portal
RUN npm install -g express \
&& npm install -g path \
&& npm cache clean \
&& npm install
EXPOSE $PORT
ENTRYPOINT [ "node", "server.js" ]
CMD [ $PORT, $ENVIRONMENT ]
1) Moving to Alpine is probably the best bet. I just ported an Ubuntu Dockerfile to Alpine and went from 1.5GB to 585MB. I followed these instructions. Note that you'll be using apk instead of apt-get, and the Alpine package names are a bit different.
2) It is also possible to reduce layers by merging run commands (each new run command creates a new layer).
RUN apt-get update -qq && apt-get install -y build-essential nodejs npm nodejs-legacy vim
RUN npm install -g express path && npm cache clean && npm install
3) You may also be interested in multi-stage build wherein you only copy necessary components to the final image.
Consider using --no-install-recommends when apt-get installing packages. This will result in a smaller image size. For more information, see this blog post.
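For example (the package is just illustrative):
RUN apt-get update \
 && apt-get install -y --no-install-recommends build-essential \
 && rm -rf /var/lib/apt/lists/*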
There is a good blog post that walks through a few steps to reduce image size:
Tips to Reduce Docker Image Sizes
https://hackernoon.com/tips-to-reduce-docker-image-sizes-876095da3b34
The image generated in the first stage is given the alias builder.
Copying the build product from the first stage into the current image takes only one image layer, so the final image doesn't carry all the layers of the build stage.
FROM node:10-alpine as builder
WORKDIR /web-console
ADD . /web-console
RUN npm install
RUN npm run build
FROM node:10-alpine
WORKDIR /web-console
COPY --from=builder /web-console .
CMD ["npm", "run", "docker-start"]
Here is an example for Java with Maven, in 2 stages:
FROM maven:3.5.2-jdk-8-alpine as BUILD
WORKDIR /app
COPY pom.xml /app/
COPY src /app/src/
RUN mvn clean package -DskipTests
# plain alpine has no Java runtime; use a JRE image instead
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=BUILD /app/target/*.jar application.jar
ENTRYPOINT ["java", "-jar", "/application.jar"]
If you base your image on Ubuntu, then a smart move is this:
RUN apt-get update && apt-get install -y \
build-essential \
whatever-you-want \
vim \
&& rm -rf /var/lib/apt/lists/*
The last line will clear a lot :)
You should always run apt-get update in the same RUN instruction as apt-get install, because otherwise the update layer will be cached and not re-run on subsequent builds when you add another library to install.
The image size of a container is an issue that should be addressed properly.
Some suggest to use the alpine distribution to save space.
That is in principle a good suggestion, as there is a ready-to-use Node.js image for Alpine.
But you have to be careful here, because you may have to build some binaries: even though node_modules usually contains just JavaScript packages, in some cases it includes native binaries that have to be built.
If your Dockerfile works right now this may not affect you, but as you're moving from Ubuntu to a different kind of image, keep in mind that any native binaries you need in the future have to be compiled in an Alpine image (Alpine uses musl libc rather than glibc, so binaries built for Ubuntu won't necessarily run).
That said, you should consider how you use your image before choosing where to cut size.
Is your application a single application that lives alone in its own container, without any other Node app around?
If the answer is no, you should be aware that the total disk usage is not the sum of the sizes of each image in the local Docker registry.
Instead, you have to split each image into its layers and sum each unique layer, because layers are shared between images.
What I mean here is that the size of a single image is not so important if you have many Node apps running on the same host.
You can save space by sharing node_modules, exporting it as a volume that contains all the needed dependencies.
Or better, you can start from an official Node.js image to create an intermediate image that contains the common root of your apps' dependencies, for example expressjs and path, and then install the dedicated dependencies in each application image.
That way you gain the advantage of sharing the common layers, reducing the total used size of the local Docker registry.
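A sketch of that intermediate-image idea (the image names are hypothetical):
# base_dockerfile: the shared dependency layer, built once with
#   docker build -t my-node-base -f base_dockerfile .
FROM node:10-alpine
RUN npm install -g express path
Each app's Dockerfile then starts from the shared base:
FROM my-node-base
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]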
Minor considerations
You don't need to install express and path globally inside a container image.
Do you really need vim in a container?
Consider that modifying a container is not safe, even in development. You can use a volume to point to resources on your server's file system.
Or copy a file or folder in/out of your container while it is running.
And if you just need to read something, just use commands like less, more or cat.
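For the copy in/out case, docker cp does exactly that (the container name and paths are illustrative):
# copy a file out of a running container, edit it, then copy it back
docker cp my-container:/consumer_portal/server.js ./server.js
docker cp ./server.js my-container:/consumer_portal/server.js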