I have a container and I want to make a change to it.
It is hosted on Docker Hub, and I would like to change one of its commands to install an additional app.
I can see the individual steps (layers) on Docker Hub, but not the Dockerfile itself (not sure why).
This is the command I want to change:
/bin/sh -c apk --update add --no-cache openssh bash zip && ....
I want to pull the image and change that line to include an additional package in the apk add command.
I have been reading about the docker commit --change option, but I am unsure how to pinpoint the command I want to change. Do I reference it numerically, saying I want to change line #3?
I will then tag a new version and push, which I know how to do, but I am finding it hard to understand how to make this change without docker run -it [name] bash and committing the result; I already tried that, and it appended a new command after the CMD instruction and broke the container...
What is hosted on Docker Hub is actually the image of the container.
The image is stored in layers; each Dockerfile instruction produces a new layer. You can't change a single layer, but you can create a new Dockerfile based on that image and add new layers:
FROM SOME_BASE_IMAGE
RUN apt-get update && \
apt-get install -y SOME_PACKAGE
(assuming the image uses the apt package manager)
You'll then have to build the new image with docker build -t IMAGE_REPO:IMAGE_TAG ...
If you don't have access to the Dockerfile, you can't change anything already in that image; you can only append more stuff (layers).
This means you can't replace /bin/sh -c apk --update add --no-cache openssh bash zip && .... with something else, but if your intention is to install or remove packages, you can do the following:
FROM the_container_you_want_to_change
RUN ...
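A minimal sketch of such a Dockerfile (the base image name and the extra package, rsync here, are placeholders for your actual image and package):

```dockerfile
# Start from the published image exactly as pulled from Docker Hub
FROM the_container_you_want_to_change

# Append a new layer that installs the extra package;
# the original apk line in the base image stays untouched
RUN apk add --no-cache rsync
```

You can then docker build -t yourname/image:newtag . and push the result, as you already know how to do.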
Related
My Docker image, based on Alpine Linux, cannot reach the network, so the command "apk add xxx" doesn't work. My idea is to download the .apk files and copy them into the container. But how can I install the .apk files?
Let's say you are trying to install glibc in Alpine
Download the packages into your current directory
wget "https://circle-artifacts.com/gh/andyshinn/alpine-pkg-glibc/6/artifacts/0/home/ubuntu/alpine-pkg-glibc/packages/x86_64/glibc-2.21-r2.apk"
wget "https://circle-artifacts.com/gh/andyshinn/alpine-pkg-glibc/6/artifacts/0/home/ubuntu/alpine-pkg-glibc/packages/x86_64/glibc-bin-2.21-r2.apk"
Then, use apk with --allow-untrusted flag
apk add --allow-untrusted glibc-2.21-r2.apk glibc-bin-2.21-r2.apk
And finish the installation (only needed in this example)
/usr/glibc/usr/bin/ldconfig /lib /usr/glibc/usr/lib
The following steps worked for me:
Get an "online" Alpine machine and download the packages. The example uses the "zip" and "rsync" packages:
Update your system: sudo apk update
Download only this packages: apk fetch zip rsync
You will get these files (or possibly newer versions):
zip-3.0-r8.apk
rsync-3.1.3-r3.apk
Upload these files to the "offline" Alpine machine.
Install the apk packages:
sudo apk add --allow-untrusted zip-3.0-r8.apk
sudo apk add --allow-untrusted rsync-3.1.3-r3.apk
More info: https://wiki.alpinelinux.org/wiki/Alpine_Linux_package_management
Please note that the --recursive flag is necessary when you fetch your apk packages, so that all of their dependencies are downloaded too; otherwise you might get errors about missing packages when you go offline.
sudo apk update
sudo apk fetch --recursive packageName
Transfer the files to the offline host:
sudo apk add --allow-untrusted <dependency.apk>
sudo apk add --allow-untrusted <package.apk>
If it's possible to run Docker commands from a system that's connected to the public Internet, you can do this in a Docker-native way by splitting your image into two parts.
The first image only contains the apk commands, but no actual application code.
FROM alpine
RUN apk add ...
Build that image docker build -t me/alpine-base, connected to the network.
You now need to transfer that image into the isolated environment. If it's possible to connect some system to both networks, and run a Docker registry inside the environment, then you can use docker push to send the image to the isolated environment. Otherwise, this is one of the few cases where you need docker save: create a tar file of the image, move that file into the isolated environment (through a bastion host, on a USB key, ...), and docker load it on the target system.
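The save/load round trip might look like this (the image name and file path are placeholders):

```shell
# On the connected system: serialize the image to a tar file
docker save -o alpine-base.tar me/alpine-base

# ...move alpine-base.tar into the isolated environment
# (bastion host, USB key, ...)...

# On the target system: load the image into the local daemon
docker load -i alpine-base.tar
```
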
Now you have that base image on the target system, so you can install the application on top of it without calling apk.
FROM me/alpine-base
WORKDIR /app
COPY . .
CMD ...
This approach will work for any sort of artifact. If you have something like an application's package.json/requirements.txt/Gemfile/go.mod that lists out all of the application's library dependencies, you can run the download-and-install step ahead of time like this, but you'll need to remember to repeat it and manually move the updated base image if these dependencies ever change.
I use Artifactory as a remote repository to build my Docker image. Before I execute the command $ docker build, I have to change the Dockerfile so that each line is updated.
FROM rocker/shiny
RUN apt-get update
RUN apt-get update && apt-get install -y
.
.
.
There are roughly 100 lines in the Dockerfile.
In order for docker build to pull through Artifactory, I have to change every line as follows:
FROM docker-remote-docker-io.artifacts/rocker/shiny
Is there any possibility to configure Docker, or to change ~/.profile, to avoid changing every line in the Dockerfile?
The URL option of docker build is not what I need! ;)
You don't say where you are building, but you can set up a proxy to Docker Hub.
Luckily, there is a feature of the Docker Engine that goes mostly unnoticed: the --registry-mirror daemon option. Engine options are configured somewhat differently on each Linux distro, but on CentOS/RHEL you can do it by editing the /etc/sysconfig/docker file and restarting Docker.
This way you don't have to change your FROM lines.
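On distros that use /etc/docker/daemon.json instead, the equivalent setting is registry-mirrors; the URL below is a placeholder for your Artifactory remote:

```json
{
  "registry-mirrors": ["https://docker-remote-docker-io.artifacts"]
}
```

Restart the daemon after editing the file. Keep in mind that a registry mirror is only consulted for Docker Hub images, which is exactly the FROM rocker/shiny case here.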
I'm trying to reduce the size of my Docker image, which is based on CentOS 7.2.
The issue is that it's 257MB, which is too high...
I have followed the best practices for writing Dockerfiles in order to reduce the size...
Is there a way to modify the image after the build, and rebuild it to see the size reduced?
First of all, if you want to reduce the OS size, don't start with a big one like CentOS; you can start with Alpine, which is small.
Now if you are still keen on using CentOS, do the following:
docker run -d --name centos_minimal centos:7.2.1511 tail -f /dev/null
This will start a command in the background. You can then get into the container using
docker exec -it centos_minimal bash
Now start removing packages that you don't need using yum remove. Once you are done, you can commit the image:
docker commit centos_minimal centos_minimal:7.2.1511_trial1
Experimental Squash Image
Another option is to use an experimental feature of the build command. In this you can have a dockerfile like below
FROM centos:7
RUN yum -y remove package1 package2 package3
Then build this file using
docker build --squash -t centos_minimal:squash .
For this you need to add "experimental": true to your /etc/docker/daemon.json and then restart the Docker daemon.
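The daemon.json change might look like this:

```json
{
  "experimental": true
}
```

Then restart the daemon (e.g. systemctl restart docker) and docker build --squash will be accepted.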
It is possible, but not at all elegant. Just like you can add software to the base image, you can also remove it. Note, however, that each RUN creates a new layer on top of the previous ones, so a later yum remove does not actually shrink the image unless you also squash it:
FROM centos:7
RUN yum -y update && yum clean all
RUN yum -y install new_software
RUN yum -y remove obsolete_software
Ask yourself: does your OS have to be CentOS? If so, I would recommend you use the default installation and make sure you have enough disk space and memory.
If it does not need to be CentOS, you should rather start with a more minimalistic image. See the discussion here:
Which Docker base image should be used to install Apps in a container without any additional OS?
I found an image on Docker Hub (https://hub.docker.com/r/realbazso/horizon) that I like. I am trying to update it so that it runs the most current version of this software.
I tested running the image with the arguments provided, and it works great, but the version of the VMware Horizon client in the image does not have an updated SSL library and cannot connect to the servers I need without throwing an SSL error.
I'm super new to Docker, but I was wondering if anyone could help me with this. I want to base it on the ubuntu:14.04 image, but I'm just not able to wrap my head around it.
I am going to add some more information to user2915097's answer.
The first thing to do when you want to edit/update an existing image is to see if you can find its Dockerfile. Fortunately, this repo has a Dockerfile attached to it, which makes things easier. I commented the file so that you can better understand what is going on:
# Pulls the ubuntu image. This will serve as the base image for the container. You could change this and use ubuntu:16.04 to get the latest LTS.
FROM ubuntu:14.04
# RUN will execute the commands for you when you build the image from this Dockerfile. This is probably where you will want to change the source
RUN echo "deb http://archive.canonical.com/ubuntu/ trusty partner" >> /etc/apt/sources.list && \
dpkg --add-architecture i386 && \
apt-get update && \
apt-get install -y vmware-view-client
# CMD will execute the command (there can only be one!) when you start/run the container
CMD /usr/bin/vmware-view
A good resource to understand those commands is https://docs.docker.com/engine/reference/builder/. Make sure to visit that page to learn more about Dockerfile!
Once you have a Dockerfile ready to build, navigate to the folder where your Dockerfile is and run:
# Make sure to change the argument of -t
docker build -t yourDockerHubUsername/containerName .
You might need to modify your Dockerfile a few times before it works correctly. If you are having issues with Docker using cached data, pass --no-cache to docker build.
As you have the recipe: if you look at
https://hub.docker.com/r/realbazso/horizon/~/dockerfile/
you can take that Dockerfile, put it in a new directory, modify it, and build another image:
docker build -t tucker/myhorizon .
Launch it, test it, and modify the Dockerfile again if needed.
Check the doc R0MANARMY listed.
So currently I'm building an API in PHP as different (micro)services which run on nginx.
I've followed all the Docker fundamentals videos and went through the docs, but I still can't figure out how to implement it.
Do I need a server where I push my code to and deploy on the containers (with CI or so)?
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
I think you are mixing up a bit what's a container and what's an image. An image is something you build that sits on disk; a container is an image running on the computer, serving and doing things.
Do I need a server where I push my code to and deploy on the containers
No, you don't. You build an image from some base image plus a Dockerfile. So make a working directory, put the Dockerfile there, copy your PHP sources there as PHPAPI, and in the Dockerfile have commands to copy the PHP code into the image. Along the lines of:
FROM ubuntu:15.04
MAINTAINER guidsen
RUN apt-get update && \
apt-get install -y nginx && \
apt-get install -y php && \
apt-get autoremove; apt-get clean; apt-get autoclean
RUN mkdir -p /root/PHPAPI
COPY PHPAPI /root/PHPAPI
WORKDIR /root/PHPAPI
CMD php /root/PHPAPI/main.php
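With that Dockerfile and the PHPAPI directory in the current working directory, a typical build-run-push sequence would be a sketch like this (the image name is a placeholder, and the published port assumes the service listens on 80, which is an assumption here):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t guidsen/phpapi .

# Run a container from the image, mapping host port 8080
# to port 80 inside the container
docker run -d --name phpapi -p 8080:80 guidsen/phpapi

# Push the image to Docker Hub so a cloud provider can pull it
docker push guidsen/phpapi
```
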
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
That depends on what you use to run containers from the image. AWS, I think, requires the image to be pulled from Docker Hub, so you have to push it there first. Some other cloud providers or private clouds require you to push the image directly to them. And yes, your code would be in the image and will run in the container.