Docker commit doesn't make changes to the container's union file system - jenkins

This has probably been asked at some point, but I can't find it anywhere. I can't seem to figure out how to commit changes to a Docker image without losing file changes in the container that I'm committing. Here's my use case.
I use Boot2Docker on Windows:
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
I pull the newest version of jenkins from Docker Hub.
docker pull jenkins
I run it, rerouting its web interface port:
docker run -dt -p 8010:8080 jenkins
I go to the web interface and install some plugins, then restart Jenkins. I then try to commit my changes.
docker commit $(docker ps -lq) <my_user_name_on_docker_hub>/<name_of_the_new_image>
docker ps -lq returns the id of the last running container. Since I'm running only this container at the moment, I'm sure this returns the correct id (I also verified it by running docker ps and looking up the container).
Then I push the changes.
docker push <my_user_name_on_docker_hub>/<name_of_the_new_image>
The push goes through all of the revisions that already exist on Docker Hub, skips them all until it hits the last one, and uploads four megabytes to the registry. And yet, when I try to run this new image, it's just the base Jenkins image without any of my changes - without the plugins I installed. As far as I understand, the changes to the image's union file system (Jenkins plugins are installed there as binaries) should be committed. I need a new image with my changes on it.
What am I doing wrong?
EDIT: I created a couple of test jobs, ran them, and walked around the file system using docker exec -it <container> bash. Jenkins creates a new directory for each job under /var/jenkins_home/jobs, but when I do docker diff, it shows that only temp files have been created. And after committing, pushing, stopping the container, and running a new one from the image that was just pushed, the job folders disappear together with everything else.
EDIT2: I tried creating files in other folders, and docker diff sees the changes everywhere except the /var/jenkins_home/ directory.
EDIT3: this seems to be related - from the Jenkins Docker Hub page:
How to use this image
docker run -p 8080:8080 jenkins
This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
The volume for the “myjenkins” named container will then be persistent.
You can also bind mount in a volume from the host:
First, ensure that /your/home is accessible by the jenkins user in the container (jenkins user - uid 102 normally - or use -u root), then:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins
I tried running the command with the -v flag, but that didn't make my commit any more persistent.

It was my fault for not looking at the docs for the Jenkins Docker image:
How to use this image
docker run -p 8080:8080 jenkins
This will store the workspace in /var/jenkins_home. All Jenkins data lives in there - including plugins and configuration. You will probably want to make that a persistent volume:
docker run --name myjenkins -p 8080:8080 -v /var/jenkins_home jenkins
The volume for the “myjenkins” named container will then be persistent.
https://registry.hub.docker.com/_/jenkins/
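In hindsight the behavior makes sense: the jenkins image declares /var/jenkins_home as a VOLUME, so writes there go to the volume rather than to the container's union file system, and docker commit never picks them up. If the goal is an image with the plugins baked in, a Dockerfile along these lines should do it (a sketch based on the plugins.sh helper the official image documents; plugins.txt is a file you write yourself):
FROM jenkins
# plugins.txt lists one plugin per line in pluginID:version form, e.g. git:2.3.4
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Built with docker build -t <my_user_name_on_docker_hub>/<name_of_the_new_image> ., the plugins end up in image layers instead of the volume, so they survive a push and a fresh docker run.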

Related

docker stack command is not in the list of docker commands

docker stack is not in the list of docker commands, but it works fine. Is it a bug or what? Here is Docker's command list:
Management Commands:
config Manage Docker configs
container Manage containers
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
Run 'docker COMMAND --help' for more information on a command.
As you can see, there is no stack command.
Here is my docker version:
☁ docker-research [master] ⚡ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:06:22 2018
OS/Arch: darwin/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.05.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.10.1
Git commit: f150324
Built: Wed May 9 22:20:42 2018
OS/Arch: linux/amd64
Experimental: false
Update 1:
I also wondered whether the stack command would show up on a swarm node, so I ran a test using docker-machine ssh myvm1 'docker'. Unfortunately, there is still no stack command.
I agree with @novaline; it seems like a bug in the documentation.
Also, if you try docker stack --help, you will get:
Usage: docker stack COMMAND
Manage Docker stacks
Options:
Commands:
deploy Deploy a new stack or update an existing stack
ls List stacks
ps List the tasks in the stack
rm Remove one or more stacks
services List the services in the stack
Run 'docker stack COMMAND --help' for more information on a command.
The command is there, but...
$ docker stack ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
You have to run the Docker Engine in swarm mode.
From the docs:
When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file.
Note: If you're trying things out on a local development environment, you can put your engine into swarm mode with docker swarm init.
If you've already got a multi-node swarm running, keep in mind that all docker stack and docker service commands must be run from a manager node.
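For example, on a local single-node setup (docker-compose.yml and the stack name mystack are placeholders):
docker swarm init
docker stack deploy -c docker-compose.yml mystack
docker stack ls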

Jenkins with publish over ssh - unable to migrate server configuration

I am using Jenkins (2.32.2) Docker container with the Publish over ssh plugin (1.17) and I have added a new server manually.
The newly added server is another Docker container (both running with docker-compose), and I am using a password to connect to it. Everything works just fine when doing it manually, but the problem arises when I'm rebuilding the image.
I am already using a volume for the Jenkins home directory and it works just fine. The problem is only on the initial installation (e.g. image build, not a container restart).
It seems like the problem is with the secret key, and I found out that I also need to copy some keys when creating my image.
See the credentials section of the Publish over ssh documentation.
I tried to copy the whole "secrets" directory and the following files: secret.key, secret.key.not-so-secret, identity.key.enc - but I still can't connect after a fresh install.
What am I missing?
Edited:
I just tried copying the whole jenkins_home directory in my Dockerfile and it works, so I guess the problem is with the first load or something? Maybe Jenkins changes the key / salt on the first load?
Thanks.
Try pushing the Jenkins config out to the Docker host, or to the OS where the Docker host is installed:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
or (note that -v requires an absolute host path):
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v "$PWD"/local/conf:/var/jenkins_home jenkins
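If the goal is for the credentials to survive an image rebuild, the files Jenkins uses for encryption live under secrets/ in the Jenkins home. A sketch of baking them in (the exact file names, and the /usr/share/jenkins/ref mechanism that seeds jenkins_home on first start, are assumptions based on how Jenkins and the official image normally behave):
FROM jenkins:2.32.2
# Assumed locations: master.key encrypts hudson.util.Secret, which in turn
# protects stored credentials - both are needed for decryption after a rebuild
COPY secrets/master.key /usr/share/jenkins/ref/secrets/master.key
COPY secrets/hudson.util.Secret /usr/share/jenkins/ref/secrets/hudson.util.Secret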

How to configure Jenkins in Docker?

I'm new to both Jenkins and Docker. I am currently working on a project where I allow users to submit jobs to Jenkins, and I was wondering if there is a way to use Docker to dynamically spin up a Jenkins server, redirect all the user jobs coming from my application to this server, and then destroy this Jenkins once the work is done. Is it possible? If yes, how? If no, why? Also, I need to set up Maven for this Jenkins server; do I need another container for that?
You can try the following, but I can't guarantee that it's easy to move your Jenkins content from your dedicated server to your Docker container; I have not tried it before. The main steps are these:
Create a backup of the content of your dedicated Jenkins server using tar.
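For example (assuming the dedicated server keeps its data in the usual /var/lib/jenkins):
$ sudo tar -czvpf jenkins-backup.tar -C /var/lib/jenkins .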
Create a named docker volume:
$ docker volume create --name jenkins-volume
Extract your jenkins-backup.tar inside your Docker volume, which lives at /var/lib/docker/volumes/jenkins-volume/_data:
$ sudo tar -xvpzf jenkins-backup.tar -C /var/lib/docker/volumes/jenkins-volume/_data/
Now you can start your Jenkins container and tell it to use the jenkins-volume. Use a jenkins image of the same version as the dedicated Jenkins.
$ docker run -d -u jenkins --name jenkins -p 50000:50000 -p 443:8443 -v jenkins-volume:/var/jenkins_home --restart=always jenkins:your-version
For me this worked to move the content from our Jenkins container on AWS to a Jenkins container on another cloud provider. I did not try it for a dedicated server, but you can't break anything in your existing Jenkins by trying it.

Add file to jenkins workspace with docker

I have installed Jenkins in Docker successfully. When I create a new job and I would like to execute an sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
It depends a bit on the planned lifecycle of your Jenkins container. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But since the working data of Jenkins - job configs, system configs, and workspaces - lives only inside the container, all of it will be gone once the container is removed. So if you plan to have a longer-running Jenkins environment, you might want to persist the data outside the container so it survives recreating the container, launching new container versions, and so on. This can be done with the option --volume /path/on/host:/path/in/container (or its short form -v) on docker run.
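For example (/srv/jenkins is a placeholder for any host directory writable by the container's jenkins user):
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /srv/jenkins:/var/jenkins_home jenkins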
There is also the option --volumes-from, which you can use to keep the data in one "data container" and mount it into your Jenkins container.
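A sketch of that pattern (container names are placeholders):
# data container: never runs, only holds the volume
docker create -v /var/jenkins_home --name jenkins-data jenkins /bin/true
# jenkins container mounts the data container's volume
docker run --name myjenkins -p 8080:8080 -p 50000:50000 --volumes-from jenkins-data jenkins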
For further information, please have a look at the Docker volumes documentation.

share images between host and child docker

I read this article http://blog.docker.io/2013/09/docker-can-now-run-within-docker/ and I want to share images between my "host" docker and "child" docker. But when I run
sudo docker run -v /var/lib/docker:/var/lib/docker -privileged -t -i jpetazzo/dind
I can't connect to "child" docker from dind container.
root@5a0cbdc2b7df:/# docker version
Client version: 0.8.1
Go version (client): go1.2
Git commit (client): a1598d1
2014/03/13 18:37:49 Can't connect to docker daemon. Is 'docker -d' running on this host?
How can I share my local images between host and child docker?
You shouldn't do that! Docker assumes that it has exclusive access to /var/lib/docker, and if you (or another Docker instance) meddle with this directory, it could have unexpected results.
There are multiple solutions, depending on what you want to achieve.
If you want to be able to run Docker commands from within a container, but don't need a separate daemon, then you can share the Docker control socket with this container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-t -i ubuntu bash
If you really want to run a different Docker daemon (e.g. because you're hacking on Docker and/or want to run a different version) but want access to the same images, you could run a private registry in a container and use that registry to share images between Docker-in-the-Host and Docker-in-the-Container.
Don't hesitate to give more details about your use-case so we can tell you the most appropriate solution!
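For the registry route, a minimal sketch (<registry-host> is a placeholder for an address both daemons can reach):
# run the official registry image
docker run -d -p 5000:5000 --name registry registry
# tag and push an image into it from the host daemon
docker tag jenkins <registry-host>:5000/jenkins
docker push <registry-host>:5000/jenkins
# the child daemon can then pull it
docker pull <registry-host>:5000/jenkins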
