My own customizations of boot2docker are not reflected in the ISO image

Following the section "Making your own customised boot2docker ISO", I wrote the Dockerfile below to install the vim package:
FROM boot2docker/boot2docker
RUN apt-get update && apt-get install -y vim
RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]
I then executed these commands successfully:
docker build -t my-boot2docker-img . && docker run --rm my-boot2docker-img > boot2docker.iso
I created a virtual machine using this ISO image and logged into it. I expected vim to be available in my shell, but it was not. From the build process console logs, I saw that vim was installed successfully. However, it is apparently not included in the ISO.
Can someone please tell me what I've missed here?

You only installed vim in the build container that produces the final boot2docker ISO. To get the desired result, you need to install any packages/data into $ROOTFS in the build container. For some hints on how to accomplish this with apt-get, see this answer.
But first you should ask yourself why you need vim in a VM that is only meant as a transparent proxy for Mac/Windows users.
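If you do want to go the apt-get route mentioned above, the general idea is to download the Debian package in the build container and unpack its contents into $ROOTFS instead of installing it into the build container itself. A very rough, untested sketch, assuming the base image exposes $ROOTFS as an environment variable (it does in the upstream Dockerfile at the time of writing) and ignoring any shared-library dependencies the binary may be missing inside the TinyCore rootfs:
FROM boot2docker/boot2docker
# Download the .deb only (no install), then extract its files into the rootfs
# that will become the ISO. vim-tiny is used here just to keep the example small.
RUN apt-get update && \
    apt-get download vim-tiny && \
    dpkg -x vim-tiny_*.deb $ROOTFS
RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]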
Edit:
As you got valid reasons to build your own boot2docker iso, have a look at the boot2docker repo.
The Dockerfile, broken down:
1. install build dependencies in the build container
2. download and compile a Linux kernel with AUFS support, copy it to $ROOTFS
3. download and extract the TinyCore distribution at $ROOTFS
4. download and extract the TinyCore packages defined in $TCZ_DEPS to $ROOTFS
5. build and install VMware tools and other helpers at $ROOTFS
6. export $ROOTFS as the new ISO
I'd probably look into extending step 4 first, i.e. simply downloading extra packages from the TinyCore repo; a rough sketch follows.
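For illustration, here is a hedged sketch of what such an extension could look like. The $ROOTFS and $TCL_REPO_BASE variables and the use of unsquashfs to unpack .tcz packages mirror how the upstream boot2docker build works at the time of writing; the exact variable names and repository layout may differ in your version, so treat this as a starting point rather than a drop-in recipe:
FROM boot2docker/boot2docker
# Fetch an extra TinyCore package (vim here) and unpack it into the rootfs
# before the ISO is generated. TinyCore .tcz packages are squashfs images,
# so unsquashfs can extract them directly into $ROOTFS.
# (vim may also need its own .tcz dependencies, e.g. ncurses; check the
#  corresponding .dep file in the TinyCore repo.)
RUN curl -L -o /tmp/vim.tcz $TCL_REPO_BASE/tcz/vim.tcz && \
    unsquashfs -f -d $ROOTFS /tmp/vim.tcz && \
    rm /tmp/vim.tcz
RUN /make_iso.sh
CMD ["cat", "boot2docker.iso"]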

Related

Is it possible to install OpenSSL in OpenShift?

I have a script that encrypts the files created by my application. The script is a .bat file; I am converting it to a shell script because in OpenShift we use a WildFly server on CentOS, and the script uses OpenSSL.
My question is:
Is it possible to install OpenSSL in the container or image? If so, are there any issues?
Or do we need to create a custom OpenShift container image which has OpenSSL installed?
I am new to OpenShift, so I am not aware of all this.
The short answer is "Yes": practically, you can add any tool you need to a container image. It is usually regarded as good practice to add only what you need, to keep the image size small.
It is quite possible that there's an image already "out there" that has OpenSSL installed on CentOS. You may still have to build the image yourself for reasons like security, company policies, etc.
First, create a new image from a base image. A sample Dockerfile:
FROM centos:centos7
# Switch to root to be able to install stuff
USER 0
# -y for unattended install; clean the yum cache afterwards to keep the image small
RUN yum install -y openssl && \
    yum clean all
Build the image, then push it to a Docker registry. Next, reference the image in the deployment configuration as the image for a container.
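For example (the registry hostname, project and tag below are placeholders for your own values):
$ docker build -t registry.example.com/myproject/centos-openssl:latest .
$ docker push registry.example.com/myproject/centos-openssl:latest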
With OpenShift you actually have the option of building images on OpenShift, including using Docker builds, and saving them automatically in OpenShift's integrated Docker registry.
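If you prefer that route, a minimal sketch using the oc CLI (the name is a placeholder; the build runs the Dockerfile from the current directory on the cluster and stores the result in the integrated registry):
$ oc new-build --strategy=docker --binary --name=centos-openssl
$ oc start-build centos-openssl --from-dir=. --follow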

How do I install docker on RHEL 7 offline?

New to docker.
Need to install docker on a RHEL 7 (no gui) system.
Does the RHEL 7 installation come with Docker already on it? If not, where do I get it from? (I cannot use the Docker software from docker.com; it has to come from Red Hat - government rules, not mine.)
Once procured, how do I install it on a system that is not connected to the internet?
I hope I've made my request as simple as possible, let the questions begin.
Red Hat's build of docker is available in the Red Hat Enterprise Linux 7 Extras channel, but only for the Server variant of the product. You can download individual packages from the Customer Portal after login, but it is going to be a bit cumbersome because the docker package has multiple dependencies.
Alternatively, you can use the reposync tool to mirror the entire Extras channel on a network-connected machine which has a subscription. Or you can use yum in download-only mode and copy over the RPMs stored in the cache directory (but please copy them to a regular directory on the target, and use yum install to install them).
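A rough sketch of both approaches (the repo id and paths are assumptions; adjust them to the channels your subscription actually provides, and note that reposync comes from the yum-utils package):
# On a subscribed, network-connected RHEL 7 machine:
$ sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
$ sudo reposync --repoid=rhel-7-server-extras-rpms --download_path=/var/mirror/extras
# Or download just docker and its dependencies:
$ sudo yum install docker --downloadonly --downloaddir=/tmp/docker-rpms
# Copy the directory to the offline machine and install from it:
$ sudo yum install /tmp/docker-rpms/*.rpm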
Fire up a CentOS system.
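yumdownloader ships in the yum-utils package, so if it is not already present you may need to install that first:
$ sudo yum install -y yum-utils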
$ sudo yumdownloader docker --resolve
Copy the RPMs over to your RH machine and run:
$ sudo rpm -ivh *rpm
$ sudo systemctl start docker
Generate the RPMs on a CentOS 7 machine:
$ yumdownloader --resolve docker-ce
Then, copy the downloaded RPMs (including the dependencies pulled in by --resolve) to the target and install them:
$ rpm -ivh *.rpm

Is it possible to remove unwanted packages from docker image?

I'm trying to reduce the size of my Docker image, which is based on CentOS 7.2.
The issue is that it's 257 MB, which is too large...
I have followed the best practices for writing Dockerfiles in order to reduce the size...
Is there a way to modify the image after the build and rebuild it so that the size is reduced?
First of all, if you want to reduce the OS size, don't start with a big image like CentOS; you can start with Alpine, which is small.
Now if you are still keen on using CentOS, do the following:
docker run -d --name centos_minimal centos:7.2.1511 tail -f /dev/null
This starts a container that just idles in the background. You can then get into the container using
docker exec -it centos_minimal bash
Now start removing packages that you don't need using yum remove. Once you are done, you can commit the container as a new image:
docker commit centos_minimal centos_minimal:7.2.1511_trial1
Experimental Squash Image
Another option is to use an experimental feature of the build command. With it, you can have a Dockerfile like the one below:
FROM centos:7
RUN yum -y remove package1 package2 package3
Then build this file using
docker build --squash -t centos_minimal:squash .
For this you need to add "experimental": true to your /etc/docker/daemon.json and then restart the Docker daemon.
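For example (a minimal sketch that assumes /etc/docker/daemon.json contains no other settings; if it does, merge the key in by hand instead of overwriting the file):
$ echo '{ "experimental": true }' | sudo tee /etc/docker/daemon.json
$ sudo systemctl restart docker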
It is possible, but not at all elegant. Just like you can add software to the base image, you could also remove:
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install new_software
RUN yum -y remove obsolete_software
Ask yourself: does your OS have to be CentOS? If so, I would recommend you use the default installation and make sure you have enough disk space and memory.
If it does not need to be CentOS, you should instead start with a more minimal image. See the discussion here:
Which Docker base image should be used to install Apps in a container without any additional OS?

Compatibility of Dockerfile RUN Commands Cross-OS (apt-get)

A beginner's question; how does Docker handle underlying operating system variations when using the RUN command?
Let's take, for example, a very simple official Docker Hub Dockerfile for JRE 1.8. When it comes to installing the packages for Java, the Dockerfile uses apt-get:
RUN apt-get update && apt-get install -y --no-install-recommends ...
To the untrained eye, this appears to be a platform-specific instruction that will only work on Debian-based operating systems (or at least ones with APT installed).
How exactly would this work on a CentOS installation, for example, where the package manager would be yum? Or god forbid, something like Solaris.
If this pattern of using RUN to fork arbitrary shell commands is prevalent in docker, how does one avoid inter-platform, or even inter-version, dependencies?
i.e. what if the Dockerfile writer has a newer version of (say) grep than I do, and they've used some new CLI flag that isn't available on earlier versions?
The only two outcomes from this can be: (1) the RUN command exits with a non-zero exit code, or (2) the Dockerfile changes the installed version of grep before running the command.
The common point shared by all Dockerfiles is the FROM statement. It is the first line in the file and indicates the parent Docker image you're building on. A typical base image could be one with Ubuntu (e.g. https://hub.docker.com/_/ubuntu/). The snippet you share in your question would fit well in an Ubuntu image (with apt-get) but not in a CentOS image.
In summary, you're installing docker in your CentOS system, but you're building a Docker image with Ubuntu in it.
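To make the contrast concrete, here is a minimal hedged sketch of the Debian-family side (the base tag and package name are just illustrative):
FROM ubuntu:16.04
# apt-get is available because the parent image is Debian-based;
# the same RUN line would fail on a CentOS-based parent, where yum is used instead.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre && \
    rm -rf /var/lib/apt/lists/*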
As I commented on your question, you can add a FROM statement to specify which underlying OS you want. For example:
FROM docker.io/centos:latest
RUN yum update -y
RUN yum install -y java
...
Now you have to build the image with:
docker build -t <image-name> .
The idea is that you'll use the OS you are familiar with (for example, CentOS) and build an image of it. Now, you can take this image and run it on top of Ubuntu/CentOS/RHEL/whatever... with
docker run -it <image-name> bash
(You just need to install Docker on the host OS.)

Yum install won't work on a boot2docker host?

I'm relatively new to Docker.
I have launched a boot2docker host using docker-machine create -d.
Managed to connect to it, and run few commands. All good.
However, when trying to create a basic HTTP server image based on CentOS, "yum install" simply fails, no matter what the package is.
This is my Docker file:
FROM centos
MAINTAINER Amir
#Install Apache
RUN yum install httpd
When running:
docker build .
It starts building the image and everything looks good, but then it fails with:
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2015-09-18.15-10.q5ss8m.yumtx
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
The command '/bin/sh -c yum install httpd' returned a non-zero code: 1
Any idea what am I doing wrong?
Thanks in advance.
If you look a bit earlier than the last message, you have a good chance of seeing something like this:
Total download size: 24 M
Installed size: 32 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
which means yum prompted for confirmation and, getting no answer in the non-interactive build, took the default ("N") and exited. You have to pass the answer explicitly, e.g.
#Install Apache
RUN yum install -y httpd
