Apply changes to docker container after 'exec' into it - docker

I have successfully shelled into a RUNNING docker container using
docker exec -i -t 7be21f1544a5 bash
I have made some changes to some json files and want those changes to be reflected in the running container. I am a beginner and have tried restarting and mounting, in vain. What strings do I have to replace when I mount using docker run?
Is there any online sample?
CONTAINER ID: 7be21f1544a5
IMAGE: gater/web
COMMAND: "/bin/sh -c 'nginx'"
CREATED: 4 weeks ago
STATUS: Up 44 minutes
PORTS: 443/tcp, 172.16.0.1:10010->80/tcp
NAMES: web

You can either create a Dockerfile and run:
docker build .
from the same directory where your Dockerfile is located.
or you can run:
docker run -i -t <docker-image> bash
or (if your container is already running)
docker exec -i -t <container-id> bash
Once you are in the shell, make all the changes you please. Then run:
docker commit <container-id> myimage:0.1
You will have a new Docker image myimage:0.1 locally. If you want to push it to a Docker repository (Docker Hub or your private Docker repo) you can run:
docker push myimage:0.1
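Putting it together for the container in the question, a minimal sketch of the whole round trip might look like this (myusername is a placeholder repository prefix; the container id, name, and port mapping come from the docker ps output above):
# edit the json files inside the running container, then exit the shell
docker exec -i -t 7be21f1544a5 bash
# snapshot the modified container as a new image
docker commit 7be21f1544a5 myusername/web:0.2
# push it to a registry (run docker login first)
docker push myusername/web:0.2
# replace the old container with one based on the new image
docker stop web && docker rm web
docker run -d --name web -p 172.16.0.1:10010:80 myusername/web:0.2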

There are two ways to do it:
Dockerfile approach
You need to know what changes you made to the Docker container after you exec'd into it, and you also need the Dockerfile of the image.
Let's say you installed an additional rpm using the yum install command after entering the container (yum install perl-HTML-Format) and updated some file, say /opt/test.json, inside the container (take a backup of this file on the Docker host, in some directory or in the directory where the Dockerfile exists).
The above commands/steps can then be placed in the Dockerfile as:
# -y answers the install prompt so the build is non-interactive
RUN yum install -y perl-HTML-Format
# the source path is relative to the build context (the directory containing the Dockerfile)
COPY docker-host-dir/updated-test.json /opt/test.json
Once you have updated the Dockerfile, build the new image and push it to the Docker repository:
docker build -t test_image .
docker push test_image:latest
You can save the updated Dockerfile for future use.
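As a concrete illustration, a minimal sketch of such a Dockerfile (the FROM line is an assumption; use whatever base image the original Dockerfile declares):
# hypothetical base image; replace with the original image's FROM line
FROM centos:7
# re-apply the package that was installed interactively in the container
RUN yum install -y perl-HTML-Format
# re-apply the file edit from the backup saved next to the Dockerfile
COPY updated-test.json /opt/test.json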
Docker commit command approach
After you have made the changes to the container, use the commands below to create a new image from the container's changes and push it online:
docker commit container-id test_image
docker push test_image
docker commit --help
Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
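For example, you can record an author and a message with the commit and give the image an explicit tag (the author, message, and tag here are illustrative):
docker commit -a "John Doe" -m "install perl-HTML-Format, update /opt/test.json" container-id test_image:1.1
docker push test_image:1.1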

You don't want to do that. After you have figured out what you needed, you throw away the running container (docker rm -f 7be21f1544a5), repeat the changes in the Dockerfile, and build a new image to run.

Related

Using docker pull & run to build dockerfile

I'm learning how to use docker.
I want to deploy a microservice for swagger. I can do
docker pull schickling/swagger-ui
docker run -p 80:8080 -e API_URL=http://myapiurl/api.json schickling/swagger-ui
To deploy it, I need a Dockerfile I can run.
How do I generate the Dockerfile in a way I can run it with docker build?
The original question asks for a Dockerfile, perhaps for some CI/CD workflow, so this answer addresses that requirement:
Create a very simple Dockerfile beginning with
FROM schickling/swagger-ui
Then from that directory run
$ docker build -t mycontainername .
Which can then be run:
$ docker run -p 80:8080 -e API_URL=http://myapiurl/api.json mycontainername
Note that docker pull pulls the image, not the Dockerfile. The Dockerfile for swagger-ui is published on its Docker Hub repo if you want to edit or customize it:
(https://hub.docker.com/r/schickling/swagger-ui/~/dockerfile/)
That one should work with the build command. The build command builds the image, and the run command turns the image into a container. The docker pull command pulls the image in, so you don't need to run docker build for it: you already have the image from the pull and only need docker run.
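If you want to bake your own defaults into the image instead of passing them at run time, a minimal sketch of a customized Dockerfile (the API_URL value is just the placeholder from the question) could be:
FROM schickling/swagger-ui
# bake in a default API_URL; it can still be overridden with -e at run time
ENV API_URL http://myapiurl/api.json
Build and run it as above with docker build -t mycontainername . and docker run -p 80:8080 mycontainername.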

Why can't docker commit a Jenkins container with customized configuration?

I pulled a Jenkins image and launched it. Then I did some configuration on that container. Now I want to save all my configuration into a new image. Below is the command I used:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f214096e4847 jenkins "/bin/tini -- /usr/lo" About an hour ago Up 1 seconds 50000/tcp, 0.0.0.0:8081->8080/tcp ci
From above output, you can see that the jenkins container f214096e4847 is running.
Now I use the command below to commit my changes and create a new image:
$ docker commit f214096e4847 my_ci/1.0
sha256:d83801a700c4060326a5209b87281bfb0e93f46207d960038ba2d87628ddb90c
Then I stop the current container and run a new container from my_ci/1.0 image:
$ docker stop f214096e4847
f214096e4847
$ docker run -d --name myci -p 8081:8080 my_ci/1.0
aba1660be200291d499bf00d851a854c724193c0ee2afb3fd318c36320b7637e
But the new container doesn't include any of the changes I made. It looks like a container created from the original jenkins image. How do I persist my data when using docker commit?
EDIT1
I know that I can add a volume to save the configuration data as below:
-v my_path:/var/jenkins_home
But I really want to save it on the docker image. So users don't need to provide the configuration from their host.
It's important to know that this isn't a good approach; as mentioned in the comments, the recommended way is to mount volumes.
But if you really want your data baked into the image, I can propose another way: create your own image derived from the official image.
Clone the git repo of the original image
git clone https://github.com/jenkinsci/docker.git
This contains the following:
CONTRIBUTING.md Jenkinsfile docker-compose.yml install-plugins.sh jenkins-volume plugins.sh update-official-library.sh
Dockerfile README.md init.groovy jenkins-support jenkins.sh tests weekly.sh
You just need to make one edit in the Dockerfile: replace the VOLUME instruction with a mkdir command.
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
#VOLUME /var/jenkins_home
RUN mkdir -p /var/jenkins_home
Rebuild your own image:
docker build -t my-jenkins:1.0 .
Start your own Jenkins, install some plugins, and create some jobs:
docker run -d -p 8080:8080 -p 50000:50000 my-jenkins:1.0
When you're done creating the desired jobs, you can commit the container as an image:
docker commit 30c5889032a8 my-jenkins-for-developers:1.0
This new Jenkins image will contain your plugins and jobs by default:
docker run -d -p 8080:8080 -p 50000:50000 my-jenkins-for-developers:1.0
This will work in your case, but as I said, it's not recommended: it makes your content dependent on the image, so it's more difficult to perform updates, and your image can become very large.
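For comparison, a minimal sketch of the recommended volume-based setup (my_jenkins_home is a hypothetical named volume):
# create a named volume and mount it over the Jenkins home directory
docker volume create my_jenkins_home
docker run -d -p 8080:8080 -p 50000:50000 -v my_jenkins_home:/var/jenkins_home jenkins
This keeps the configuration out of the image, so the jenkins image can be upgraded without losing jobs.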

Docker image versioning and lifecycle management

I am getting into Docker and am trying to better understand how it works out there in the "real world".
It occurs to me that, in practice:
You need a way to version Docker images
You need a way to tell the Docker engine (running on a VM) to stop/start/restart a particular container
You need a way to tell the Docker engine which version of an image to run
Does Docker ship with built-in commands for handling each of these? If not what tools/strategies are used for accomplishing them? Also, when I build a Docker image (via, say, docker build -t myapp .), what file type is produced and where is it located on the machine?
docker has all you need to build images and run containers. You can create your own image by writing a Dockerfile or by pulling it from the Docker Hub.
In the Dockerfile you specify another image as the basis for your image and run commands to install things. Images can have tags; for example, the ubuntu image can have the latest or 12.04 tag, which can be specified with the ubuntu:latest notation.
Once you have built the image with docker build -t image-name . you can create containers from that image with docker run --name container-name image-name.
docker ps to see running containers
docker rm <container name/id> to remove containers
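A minimal sketch of that lifecycle end to end (the myapp names are hypothetical):
# build the image and version it with a tag
docker build -t myapp:1.2 .
# run a container from that specific version
docker run -d --name myapp-prod myapp:1.2
# stop, start, or restart the container by name
docker stop myapp-prod
docker start myapp-prod
docker restart myapp-prod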
Suppose we have a Dockerfile in a git repository; here are some ways to build it.
-> Build from git without versioning:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments
Here, fecomments is the branch name and comments is the folder name.
-> Build from git with a tag and version:
sudo docker build https://github.com/lordash/mswpw.git#fecomments:comments -t lordash/comments:v1.0
-> Now if you want to build from a directory: first go to the comments directory, then run sudo docker build .
-> If you want to add a tag, you can use the -t or --tag flag:
sudo docker build -t lordash . or sudo docker build -t lordash/comments .
-> Now you can version your image with the help of a tag:
sudo docker build -t lordash/comments:v1.0 .
-> You can also apply multiple tags to an image:
sudo docker build -t lordash/comments:latest -t lordash/comments:v1.0 .

`docker cp` doesn't copy file into container

I have a dockerized project. I build, copy a file from the host system into the docker container, and then shell into the container to find that the file isn't there. How is docker cp supposed to work?
$ docker build -q -t foo .
Sending build context to Docker daemon 64 kB
Step 0 : FROM ubuntu:14.04
---> 2d24f826cb16
Step 1 : MAINTAINER Brandon Istenes <redacted@email.com>
---> Using cache
---> f53a163ef8ce
Step 2 : RUN apt-get update
---> Using cache
---> 32b06b4131d4
Successfully built 32b06b4131d4
$ docker cp ~/.ssh/known_hosts foo:/root/.ssh/known_hosts
$ docker run -it foo bash
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
root@421fc2866b14:/# ls /root/.ssh
root@421fc2866b14:/#
So there was some mix-up with the names of images and containers. Obviously, the cp operation was acting on a different container than the one I brought up with the run command. In any case, the correct procedure is:
# Build the image, call it foo-build
docker build -q -t foo-build .
# Create a container from the image called foo-tmp
docker create --name foo-tmp foo-build
# Run the copy command on the container
docker cp /src/path foo-tmp:/dest/path
# Commit the container as a new image
docker commit foo-tmp foo
# The new image will have the files
docker run foo ls /dest
You need to docker exec to get into your container; your command creates a new container instead.
I have this alias to get into the last created container with the container's default shell:
alias exec_last='docker exec -it $(docker ps -lq) $(docker inspect -f "{{.Path}}" $(docker ps -lq))'
What docker version are you using? As of Docker 1.8, docker cp supports copying from host to container:
• Copy files from host to container: docker cp used to only copy files from a container out to the host, but it now works the other way round: docker cp foo.txt mycontainer:/foo.txt
Please note the difference between images and containers. If you want every container created from that Dockerfile to contain the file (even if you don't copy it in afterward), you can use COPY or ADD in the Dockerfile. If you want to copy the file after the container is created from the image, you can use the docker cp command (supported in version 1.8 and later).
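For the question's case, a minimal sketch of the Dockerfile route (this assumes known_hosts was first copied into the build context, since COPY cannot read files outside of it):
FROM ubuntu:14.04
RUN apt-get update
# bake the file into every container created from this image
COPY known_hosts /root/.ssh/known_hosts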

How to backup/restore docker image for deployment?

I have an image that is updated with the following command before each deployment:
$ docker pull myusername/myproject:latest
This command overwrites the previous image.
How can I back up this image (or change it to a different tag locally without pushing to the remote repository)? If anything goes wrong, I can then restore the backup.
How can I back up this image
Simply use the docker save command:
$ docker save myusername/myproject:latest | gzip -c > myproject_img_bak20141103.tgz
You will later be able to restore it with the docker load command:
$ gunzip -c myproject_img_bak20141103.tgz | docker load
or change it to a different tag locally without pushing to the remote repository?
Use the docker tag command:
$ docker tag myusername/myproject:latest myusername/myproject:bak20141103
For completeness: For Docker on Windows the following syntax applies:
docker save -o container-file-name.tar mcr.microsoft.com/windows/nanoserver:1809
docker load -i "c:\path\to\file.tar"
Example:
# environment: bash shell
# container id: 1d09068efad1
# backup name: backup01
docker commit -p 1d09068efad1 backup01
docker save -o backup01.tar backup01
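To restore that backup later, for example on another machine, a minimal sketch:
# reload the saved image from the tar archive
docker load -i backup01.tar
# start a container from the restored image
docker run -d backup01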
https://www.baculasystems.com/blog/docker-backup-containers/
https://www.geeksforgeeks.org/backing-up-a-docker-container/
