Jenkins with docker - jenkins

My problem is:
docker run -d -p 8080:8080 asd/jenkins # everything's ok
# made changes at jenkins
docker commit container_with_jenkins # committed
docker run -d -p 8080:8080 image_from_container_with_changes
# => Error: create: No command specified
Am I missing something?
How does one work with Docker images and save changes made within a container?

When you commit a container, the resulting image does not inherit the CMD from its parent image. So when you start a container based on the new image, you need to supply a run command.
docker run -d image_from_container_with_changes java -jar /var/lib/jenkins/jenkins.war
where the run command of course depends on your specific installation.
Jenkins stores its configuration in a directory, e.g. /root/.jenkins. What I would recommend is to create a directory on the host and mount it as a volume:
docker run -v {absolute_path_to_jenkins_dir}:/root/.jenkins -d asd/jenkins
If you start a new container in the same way, it will have the same jobs, etc. In case you make changes that don't go into this directory (I don't know offhand where plugins or updates are installed), you might still want to make a new image. In that case, use the -run option when you commit your container to specify the new configuration:
docker commit -run='{"Cmd": ["java", "-jar", "/var/lib/jenkins/jenkins.war"]}' abc1234d
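Note that the -run flag has since been removed from Docker; on current versions the rough equivalent (as far as I know) is the --change flag, which accepts Dockerfile instructions. The target image name below is just an example:
docker commit --change='CMD ["java", "-jar", "/var/lib/jenkins/jenkins.war"]' abc1234d my_jenkins_image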

Related

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
The thing is that the folder isn't created when I'm building the image from Linux distros like Ubuntu and CentOS. The build succeeds, and I run it with docker run -it -d --rm --name nifi nifi-test, but when I enter the container through docker exec there's no flow dir.
The strange thing is that the flow dir is created normally when I'm building the image through Windows and Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but still...
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
By taking a look at the Dockerfile provided, you can see the following volume definition.
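Reconstructed approximately from the linked Dockerfile (so treat the exact list as a sketch rather than a quote), the directive looks roughly like this:
VOLUME ${NIFI_LOG_DIR} \
       ${NIFI_BASE_DIR}/nifi-current/conf \
       ${NIFI_BASE_DIR}/nifi-current/database_repository \
       ${NIFI_BASE_DIR}/nifi-current/flowfile_repository \
       ${NIFI_BASE_DIR}/nifi-current/content_repository \
       ${NIFI_BASE_DIR}/nifi-current/provenance_repository \
       ${NIFI_BASE_DIR}/nifi-current/state
Note that the conf directory is one of the declared volumes.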
Then if you run
docker image inspect apache/nifi:1.12.1
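you should see those same paths listed under "Volumes" in the image configuration. An illustrative, trimmed excerpt of what that section looks like:
"Volumes": {
    "/opt/nifi/nifi-current/conf": {},
    "/opt/nifi/nifi-current/state": {},
    ...
},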
As a result, when you execute the RUN command to create a folder under the conf directory, it succeeds at build time.
BUT when you run the container, the volumes are mounted, and they overwrite everything that is under the mountpoint /opt/nifi/nifi-current/conf, which in your case includes the flow directory.
You can test this by editing your Dockerfile
FROM apache/nifi:1.12.1
# this will be overridden by the volumes
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the container environment
RUN mkdir -p /opt/nifi/nifi-current/flow
To tackle this you could either:
clone the Dockerfile of the image you use as a base (the one in FROM), remove the VOLUME directive manually, then build it and use it as your base image (a rough sketch follows), or
avoid adding directories under the mount points specified in the base image's Dockerfile.
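A minimal sketch of the first option, assuming you have copied the linked Dockerfile into the current directory and commented out its VOLUME line (the image tag below is just an illustrative name):
docker build -t mynifi-novolumes:1.12.1 .
Then base your own image on it:
FROM mynifi-novolumes:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow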

Why can't docker commit a Jenkins container with customized configuration?

I pulled a Jenkins image and launched it. Then I did some configuration on that container. Now I want to save all my configuration into a new image. Below is the command I used:
$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED             STATUS         PORTS                               NAMES
f214096e4847   jenkins   "/bin/tini -- /usr/lo"   About an hour ago   Up 1 seconds   50000/tcp, 0.0.0.0:8081->8080/tcp   ci
From above output, you can see that the jenkins container f214096e4847 is running.
Now I use below command to commit my changes and create a new image:
$ docker commit f214096e4847 my_ci/1.0
sha256:d83801a700c4060326a5209b87281bfb0e93f46207d960038ba2d87628ddb90c
Then I stop the current container and run a new container from my_ci/1.0 image:
$ docker stop f214096e4847
f214096e4847
$ docker run -d --name myci -p 8081:8080 my_ci/1.0
aba1660be200291d499bf00d851a854c724193c0ee2afb3fd318c36320b7637e
But the new container doesn't include any of the changes I made. It looks like a container got created from the original jenkins image. How do I persist my data when using docker commit?
EDIT1
I know that I can add a volume to save the configuration data as below:
-v my_path:/var/jenkins_home
But I really want to save it on the docker image. So users don't need to provide the configuration from their host.
It's important to know that this isn't a good approach; as mentioned in the comments, the recommended way is to mount volumes.
But if you really want your volume in the image I can propose another way. You can create your own image derived from the official image:
Clone the git repo of the original image
git clone https://github.com/jenkinsci/docker.git
This contains the following:
CONTRIBUTING.md Jenkinsfile docker-compose.yml install-plugins.sh jenkins-volume plugins.sh update-official-library.sh
Dockerfile README.md init.groovy jenkins-support jenkins.sh tests weekly.sh
You just need to make one edit in the Dockerfile: replace the VOLUME directive with a mkdir command.
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
#VOLUME /var/jenkins_home
RUN mkdir -p /var/jenkins_home
Rebuild your own image:
docker build -t my-jenkins:1.0 .
Start your own jenkins + install some plugins + create some jobs.
docker run -d -p 8080:8080 -p 50000:50000 my-jenkins:1.0
When you're ready with creating the desired jobs you can commit the container as an image.
docker commit 30c5889032a8 my-jenkins-for-developers:1.0
This newest jenkins image will contain your plugins + jobs by default.
docker run -d -p 8080:8080 -p 50000:50000 my-jenkins-for-developers:1.0
This will work in your case, but as I said, it's not recommended. It makes your content dependent on the image, so it's more difficult when you want to perform updates, and your image can become quite big.

How to start a stopped Docker container with a different command?

I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
The steps above just start a stopped container with the same filesystem state. That is great for a quick investigation, but environment variables, network configuration, attached volumes and other settings are not inherited. You should specify all of these arguments explicitly.
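For example, a fuller invocation might look like this; the environment variable, bind mount, and network names below are purely illustrative placeholders:
docker run -ti --entrypoint=sh \
  -e SOME_VAR=value \
  -v /host/data:/data \
  --network some_network \
  user/test_image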
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it; you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then, at the top of this file, add these lines to entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
    echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
    cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -i container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
I took @Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash service_name
(for convenience, put your env vars and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the arguments
It seems Docker can't change the entrypoint after a container has been created. But you can set a custom entrypoint script and change its code before the next time you restart the container.
For example you run a container like this:
docker run --name c --entrypoint /boot -v "$PWD/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need to restart c with a different command, you just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My Problem:
I started a container with docker run <IMAGE_NAME>
And then added some files to this container
Then I closed the container and tried to start it again with the same command as above.
But when I checked the new files, they were missing.
When I ran docker ps -a I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was being created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the ID of your container
docker container start <CONTAINER_ID> to start existing container
Then you can continue from where you left. e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it
I have found a simple command
docker start -a [container_name]
This will do the trick
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a MariaDB container that was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve my problem was:
copy out the docker-entrypoint.sh from the container to the local file system (docker cp)
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case)
copy the edited file back into the docker container, overwriting the existing entrypoint script.
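A rough sketch of those steps; the container name and entrypoint path below are assumptions for illustration and may differ between image versions:
# copy the entrypoint script out of the stopped container
docker cp mariadb:/usr/local/bin/docker-entrypoint.sh .
# edit docker-entrypoint.sh locally to add --innodb-force-recovery=1 to the mysqld invocation
docker cp docker-entrypoint.sh mariadb:/usr/local/bin/docker-entrypoint.sh
docker start mariadb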
To me Docker always leaves the impression that it was created for a hobby system, it works well for that.
If something fails or doesn't work, don't expect to have a professional solution.
That said: Docker not only does NOT support such basic administrative tasks, it tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You now can see the changed file from your container in a hexcoded folder/diff
cd hexcoded-folder/diff
Create an entrypoint.sh (make sure to back up an existing one if it's there)
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
Ctrl+C
chmod +x entrypoint.sh
docker stop <container_name>
docker start <container_name>
You now have your docker container running an endless loop instead of the original entrypoint; you can exec bash into it or do whatever you need.
When finished, stop the container and remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
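For example, once the fixed file is in your build context, a Dockerfile line along these lines (paths taken from the commands above) would bake it in:
COPY apache2.conf /etc/apache2/apache2.conf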
There is lots of discussion surrounding this, so I thought I would add one more approach which I did not immediately see listed above:
If the full path to the container's entrypoint is known (or discoverable via inspection), it can be copied in and out of the stopped container using 'docker cp'. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When finished editing, another docker cp of the original entrypoint back into the container and a re-restart should do the trick.
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)

How can I remove the Cmd entry from a Docker image configuration?

After modifying a Docker image "from within" by running
docker run -it --user root <image_name> bash
…and committing the changes, the image's config now contains the bash command in Container.Cmd and ContainerConfig.Cmd.
I have seen that docker commit at least used to have a -run option which could let me modify the configuration, but I haven't found documentation for it.
How can I remove Cmd from the configuration to make the entrypoint active again (and what should I have done to avoid the problem)?
(Workaround) You could run your new image with docker run --entrypoint to set a new entrypoint, then commit that new container as a new image. It should keep the entrypoint you started it with.
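A minimal sketch of that workaround; the entrypoint path and image tags here are hypothetical and depend on your original image:
docker run -d --name restore-entrypoint --entrypoint /original/entrypoint.sh <image_name>
docker commit restore-entrypoint <image_name>:fixed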
Alternatively you could manually edit the JSON metadata for the image, but I wouldn't recommend that hack for production; it is always better to go through the APIs.

Post "docker run" commands

I'm looking for a way to specify, in my Dockerfile, some commands to run once I have run "docker run".
My use case is I have 2 containers, Web (Apache, PHP), DB (MySQL)
When I execute "docker run" on the Web container and the link is made to the DB container. I want to execute the migrations script in my Web container.
I can use "docker exec" to get into the box and run the migrations which works. I just want to automate this with Dockerfile if possible or with another provisioner.
Thanks
Simon
Just have a script in either image (seems to make more sense for it to live in your DB image), and execute it before you start your web server. Even better, store your MySQL data in a volume so that on your next run or restart of the db container you don't have to worry about the migration:
# migrate data into your volume
docker run --name mysql-data -v /my/mysql/data mysqlImage migrate.sh
# run mysql
docker run --name mysql -d --volumes-from mysql-data mysqlImage
# run www
docker run --name www -d --link mysql:mysql phpImage
You can also just set your entry point to a custom script, let's call it /my/run.sh:
#!/usr/bin/env bash
mysqlimport ...
# run apache in the foreground (non-daemon mode)
apachectl -D FOREGROUND
Then:
docker run --name www -d --link mysql:mysql --entrypoint /my/run.sh phpImage
Docker is designed for running one process, but can run several. You should look at supervisor, see https://docs.docker.com/articles/using_supervisord/
If you really want to make the db migration part of your container's startup, you can use a more complex script as the command, one that first runs the migration script and then starts the web service.
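A minimal sketch of such a wrapper, assuming a migrate.sh script exists in the image and Apache is the web service:
#!/usr/bin/env bash
# run the migrations first, then start the web server in the foreground
/my/migrate.sh
exec apachectl -D FOREGROUND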

Resources