I am new to Docker and have a few doubts about my understanding. This is what I understand:
Docker solves these problems by creating a lightweight, standalone, executable package of your application that includes everything needed to run it: the code, the runtime, the libraries, tools, and configuration.
Does that mean the same container image will run on different operating systems? For example:
I assume the Dockerfile below would create a container image that runs only on CentOS.
If I want to run my application on a different OS, then I would need a different Dockerfile configuration depending on the OS. In that case, what is the advantage of Docker containers? Can you please correct my understanding?
FROM centos
ENV JAVA_VERSION 8u31
ENV BUILD_VERSION b13
# Upgrading system
RUN yum -y upgrade
RUN yum -y install wget
# Download and configure Java 8
RUN wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/$JAVA_VERSION-$BUILD_VERSION/jdk-$JAVA_VERSION-linux-x64.rpm" -O /tmp/jdk-8-linux-x64.rpm
RUN yum -y install /tmp/jdk-8-linux-x64.rpm
RUN alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000
RUN alternatives --install /usr/bin/javaws javaws /usr/java/latest/bin/javaws 200000
RUN alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
EXPOSE 8080
# Install Spring Boot artifact
VOLUME /tmp
ADD /maven/sfg-thymeleaf-course-0.0.1-SNAPSHOT.jar sfg-thymeleaf-course.jar
RUN sh -c 'touch /sfg-thymeleaf-course.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/sfg-thymeleaf-course.jar"]
The Dockerfile you have provided will create a Docker image that can run on any operating system that shares the Linux kernel, such as Debian, Ubuntu, CentOS, or Fedora, to name a few. This is one of Docker's purposes: to be able to run the same image on any host that runs a Linux kernel.
However, since you specify CentOS (FROM centos) in your Dockerfile, the application running inside the Docker container will be using the CentOS userland as its operating system.
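A quick way to see this split for yourself, on any Linux host that has Docker installed (a minimal sketch):
docker run --rm centos cat /etc/os-release   # reports CentOS: the image's userland
docker run --rm centos uname -r              # reports the host's kernel version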
I am looking for some help with writing a Dockerfile for Ubuntu 18.04 that installs Python 3.10.
Currently it is written in such a way that it gets the default version of Python 3 (i.e. 3.6) that ships with Ubuntu 18.04.
The question is: is there any way to get Python 3.10 with Ubuntu 18.04? The requirement is to use either the slim or non-slim Python 3.10 Bullseye image from Docker Hub.
You can use the Ubuntu 18.04 Docker image and then install Python 3.10 inside it:
FROM ubuntu:18.04
RUN apt-get -y update && apt-get install -y software-properties-common \
    && add-apt-repository -y ppa:deadsnakes/ppa \
    && apt-get install -y python3.10
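To verify the result, you could build and run it (u18-py310 is just a placeholder tag):
docker build -t u18-py310 .
docker run --rm u18-py310 python3.10 --version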
I was able to build an image including Python 3.10 as follows:
Step-1: Write a docker file
FROM python:3.10-bullseye
# WORKDIR creates /WORK_REPO if it does not exist and switches into it,
# so separate mkdir/cd steps are not needed
WORKDIR /WORK_REPO
ADD hi.py .
CMD ["python", "-u", "hi.py"]
Step-2: Build the image
docker build -t image_name .
Step-3: Run the docker image
docker run image_name
Step-4: Connect to the container and check the Python version
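For example, you could override the default CMD with a shell and check from inside (assuming the tag from Step-2):
docker run -it image_name bash
# inside the container:
python --version   # should print Python 3.10.x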
I hope this is helpful for someone who is completely new to writing a Dockerfile.
I'm trying to create a Docker image of soundcloud/ipmi-exporter to run with Prometheus on Ubuntu Bionic with Docker 19.03.6, build 369ce74a3c. Docker on my OS X laptop is version 20.10.2, build 2291f61. I am forced to build the (customized) image on my laptop because Bionic has a version of golang that's older than what ipmi-exporter wants, and I'm not allowed to update the Ubuntu server.
Anyway, can someone tell me what I'm doing wrong in my Dockerfile?
# Build stage
FROM quay.io/prometheus/golang-builder:1.13-base AS builder
ADD . /go/src/github.com/soundcloud/ipmi_exporter/
RUN cd /go/src/github.com/soundcloud/ipmi_exporter && make
# Runtime stage
FROM ubuntu:18.04
WORKDIR /
RUN apt-get update && apt-get install freeipmi-tools -y --no-install-recommends && rm -rf /var/lib/apt/lists/*
COPY --from=builder /go/src/github.com/soundcloud/ipmi_exporter/ipmi_exporter /bin/ipmi_exporter
EXPOSE 8888
ENTRYPOINT ["ipmi_exporter"]
CMD ["--config.file", "/ipmi_remote.yml"]
CMD ["--web.listen-address=":8889"" "--freeipmi.path=/etc/freeipmi" "--log.level="debug""]
When I run the image all I see is
ipmi_exporter: error: unexpected /bin/sh, try --help
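A note on what may be happening here: only the last CMD in a Dockerfile takes effect, and the exec form must be valid JSON. The nested quotes in the final CMD above are not valid JSON, so Docker falls back to the shell form, wraps the whole string in /bin/sh -c, and appends that to the ENTRYPOINT, which is where the unexpected /bin/sh argument comes from. A well-formed exec-form line would look something like this (a sketch; flag names are taken from the Dockerfile above, not verified against the exporter):
CMD ["--web.listen-address=:8889", "--freeipmi.path=/etc/freeipmi", "--log.level=debug"]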
I have ipmi_exporter running on the OS directly and I never configured a config.yml. What config.yml is the Dockerfile author talking about? It's mentioned in the last line of https://github.com/soundcloud/ipmi_exporter/blob/master/Dockerfile
The image lives here: https://github.com/soundcloud/ipmi_exporter. The sample/example Dockerfile refers to a config.yaml, which this software does not use.
I just can't figure out how to make the image pull in the config file I specify.
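A sketch of one way to make the container use a config file from the host, assuming the image is tagged my-ipmi-exporter (a hypothetical name): bind-mount the file onto the path that the --config.file argument points at:
docker run -v "$(pwd)/ipmi_remote.yml:/ipmi_remote.yml" -p 8889:8889 my-ipmi-exporter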
I have a self-hosted GitLab instance for projects that are still private, and a dedicated physical node with an AMD GPU for testing. On this node there is already a GitLab CI runner with the Docker executor.
Is there a way to execute programs with OpenCL and access the AMD GPU within the Docker containers created by the GitLab CI runner?
All I have found so far is Nvidia- and CUDA-related information (for example: How can I get use cuda inside a gitlab-ci docker executor), but nothing useful for the OpenCL and AMD case.
I found the solution myself in the meantime. It was easier than expected.
The Docker image for the GitLab CI pipeline only needs the AMD GPU driver from the AMD website (https://www.amd.com/en/support).
Example Dockerfile to build the Docker image:
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y gcc g++ opencl-headers ocl-icd-opencl-dev curl apt-utils unzip tar xz-utils wget clinfo
RUN cd /tmp &&\
curl --referer https://drivers.amd.com/drivers/linux -O https://drivers.amd.com/drivers/linux/amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz &&\
tar -Jxvf amdgpu-pro-20.30-1109583-ubuntu-18.04.tar.xz &&\
cd amdgpu-pro-20.30-1109583-ubuntu-18.04/ &&\
./amdgpu-install -y --headless --opencl=legacy
Depending on your GPU and Linux version, you may need a different file than the one in this example. It is also possible that the file no longer exists on the website and you have to look up the newest one.
Besides this, only a small modification to the GitLab runner config (/etc/gitlab-runner/config.toml) is necessary.
Add devices = ["/dev/dri"] in the [runners.docker] section:
[[runners]]
...
[runners.docker]
...
devices = ["/dev/dri"]
Then restart the GitLab runner with gitlab-runner restart.
After this it is possible to execute OpenCL code inside the GitLab CI Docker runner.
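For reference, a minimal .gitlab-ci.yml job to check that the device is visible, assuming the image built above was pushed under a hypothetical tag:
opencl-test:
  image: registry.example.com/amd-opencl:latest   # hypothetical tag for the image built above
  script:
    - clinfo   # should list the AMD platform and the GPU device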
A beginner's question: how does Docker handle underlying operating system variations when using the RUN command?
Let's take, for example, a very simple official Docker Hub Dockerfile for JRE 1.8. When it comes to installing the packages for Java, the Dockerfile uses apt-get:
RUN apt-get update && apt-get install -y --no-install-recommends ...
To the untrained eye, this appears to be a platform-specific instruction that will only work on Debian-based operating systems (or at least ones with APT installed).
How exactly would this work on a CentOS installation, for example, where the package manager would be yum? Or god forbid, something like Solaris.
If this pattern of using RUN to fork arbitrary shell commands is prevalent in Docker, how does one avoid inter-platform, or even inter-version, dependencies?
i.e. what if the Dockerfile writer has a newer version of (say) grep than I do, and they've used some new CLI flag that isn't available in earlier versions?
The only two outcomes from this can be: (1) the RUN command exits with a non-zero exit code, or (2) the Dockerfile changes the installed version of grep before running the command.
The common point shared by all Dockerfiles is the FROM statement. It is the first line in the file and indicates the parent Docker image you're building on. A typical base image could be one with Ubuntu (e.g. https://hub.docker.com/_/ubuntu/). The snippet you share in your question would fit well in an Ubuntu image (with apt-get) but not in a CentOS image.
In summary, you're installing docker in your CentOS system, but you're building a Docker image with Ubuntu in it.
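On the follow-up about tool versions: the usual answer is to pin both the base image tag and, where it matters, the package version, so the build does not depend on what the machine running docker build has installed. A minimal sketch (the grep version string is hypothetical):
FROM ubuntu:18.04
# hypothetical pinned package version; apt will fail loudly if the repo no longer carries it
RUN apt-get update && apt-get install -y grep=3.1-2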
As I commented on your question, you can add a FROM statement to specify which underlying OS you want. For example:
FROM docker.io/centos:latest
RUN yum update -y
RUN yum install -y java
...
Now build the image with:
docker build -t <image-name> .
The idea is that you'll use the OS you are familiar with (for example, CentOS) and build an image of it. Now you can take this image and run it on top of Ubuntu/CentOS/RHEL/whatever... with
docker run -it <image-name> bash
(You just need to install Docker on the desired OS.)
I am a PHP developer, so most of the time when I test an application I am working on, I do the following:
Create a VMware VM and install a complete OS: most of the time I like to use CentOS
Set up everything on the VM: Apache and modules, PHP and modules, and MySQL or MariaDB
Every time I start a new VM from scratch, there are a few steps I run:
# Install EPEL and Remi Repos
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm epel-release-latest-6.noarch.rpm
# Install Apache, PHP and its dependencies
yum -y install php php-common php-cli php-fpm php-gd php-intl php-mbstring php-mcrypt php-opcache php-pdo php-pear php-pecl-apcu php-imagick php-pecl-xdebug php-pgsql php-xml php-mysqlnd php-pecl-zip php-process php-soap
# Enable Apache on run levels 2, 3 and 5
chkconfig --levels 235 httpd on
# Setup MariaDB repos
nano /etc/yum.repos.d/MariaDB.repo
# Write this inside the MariaDB.repo file
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
# Install MariaDB
yum -y install MariaDB MariaDB-server
# Start service
service mysql start
# Enable MariaDB on run levels 2, 3 and 5
chkconfig --levels 235 mysql on
# Setup MariaDB (this is interactive)
/usr/bin/mysql_secure_installation
# A few more steps
This is an annoying task that I need to do all the time (whenever I mess up the VM trying new things and changing settings here and there). This is where Docker, I think, comes to the rescue. After reading a bit, I know the basics of Docker, and I have pulled a CentOS image by running docker run -it centos, but that is just a bash shell and a basic CentOS image, so it is my task to install and set up everything.
Here are my doubts about Docker and how to handle these repetitive and common tasks:
Should I create a Dockerfile (this is my first Dockerfile, so perhaps the order is not right or I am completely mistaken) with the content below, and put all the repetitive tasks inside the run-setup.sh file?
FROM centos:latest
MAINTAINER MyName <MyEmail>
RUN yum -y update && yum clean all
ADD run-setup.sh /run-setup.sh
RUN chmod -v +x /run-setup.sh
EXPOSE 80
CMD ["/run-setup.sh"]
Should I run the repetitive tasks by hand, as I did before on the VM?
The /usr/bin/mysql_secure_installation command is completely interactive, since I need to answer a few questions and set a password. How do I deal with this one, or with any other interactive command?
Any better ideas?
I will start answering your questions:
Yes, you could start with a Dockerfile. However, I recommend putting the commands straight into the file so that it's easier to maintain in the future. An example could be the Dockerfile of Apache from GitHub.
Repetitive tasks: no. You can save the container images by pushing them to a public registry like Docker Hub, or you can host a private registry, which can itself be a Docker container.
Interactivity has to be worked around somehow: with command-line options, bash read, passing in a file where possible, etc. I do not think there is a single straight answer to this.
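For mysql_secure_installation specifically, one non-interactive workaround is to run the equivalent SQL statements directly, as a sketch (assumes MariaDB is running and root has no password yet):
mysql -u root <<'SQL'
UPDATE mysql.user SET Password=PASSWORD('changeme') WHERE User='root';  -- 'changeme' is a placeholder
DELETE FROM mysql.user WHERE User='';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL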
Better ideas: the usual pattern is to host the Dockerfile in a public GitHub or Bitbucket repository and then configure automated builds against Docker Hub. They all come for free :)
There are also many live working examples you can get from Docker Hub. Start searching for an image, choose the most popular/official one, and it will have links to the Dockerfile.
Let me know how it goes.