Where is jenkins.xml in jenkins docker container - jenkins

We are using Jenkins inside Docker (base image: FROM jenkins:1.651.1).
We have problems with the HTML Publisher plugin, which doesn't execute JavaScript by default. It is the same issue as described here.
We want to change our jenkins.xml (just like they said) to change its configuration. The problem is that we can't find it inside our container. Does it have another name inside the Docker container, or where can we find it? Thanks

The Jenkins Docker image does not contain a jenkins.xml; that appears to be a Windows-specific file.
$ docker run --rm jenkins find / -name 'jenkins.xml' 2>/dev/null
# returns nothing
/usr/local/bin/jenkins.sh is run when a jenkins container is started. This contains the line:
eval "exec java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS \"\$#\""
Therefore, set the required setting as a Java system property via JAVA_OPTS (it needs the -D prefix; JENKINS_OPTS is passed as arguments to jenkins.war, not as system properties):
docker run -e JAVA_OPTS="-Dhudson.model.DirectoryBrowserSupport.CSP=" jenkins
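If you would rather bake the setting into your own image, here is a minimal sketch of a derived Dockerfile (the tag matches the question; note that the empty value disables the Content-Security-Policy headers entirely, which is a security trade-off):
FROM jenkins:1.651.1
# jenkins.sh expands $JAVA_OPTS when launching the war, so the property applies on every container start
ENV JAVA_OPTS="-Dhudson.model.DirectoryBrowserSupport.CSP="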

Related

How to run shell script on Host from jenkins docker container?

I know my issue is already discussed in How to run shell script on host from docker container? but I think my issue is a little bit more complicated.
First, let me explain my situation. I'm using Jenkins 2.x in a Docker container on a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace inside the Jenkins Docker container, and also on the host in a mounted directory, like this:
volumes:
- ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it executes the other script. The second script checks out an SVN directory and does much more.
So I want a new mounted volume so that all results of the executed second script are placed on the host in that directory. Connecting to the host over SSH and executing the script seems fine to me, but how can I do that?
I hope I have explained my issue understandably.
I will answer regarding: "I think connecting to the host over SSH and executing the script seems fine, but how can I do that?"
Pass the host machine's IP to your run command.
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command. This will SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without a password, then set up an SSH key in the container, or you can also try
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or, if you want to run a script that is inside the container, you can do that too; just pass the script to ssh:
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host's password taken from the container's environment, and hostmachine is the host entry we set with --add-host in the run command.
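A minimal sketch of the key-based alternative mentioned above, run from inside the container (user_name and hostmachine are the placeholders already used here; this assumes an OpenSSH client is installed in the image):
# generate a key pair without a passphrase inside the container
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key to the host (prompts for the host password once)
ssh-copy-id user_name@hostmachine
# subsequent logins need no password
ssh user_name@hostmachine 'bash /home/user_name/Desktop/test.sh'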
Based on the comments on this answer:
We can simply install an SSH plugin (SSH, or Publish over SSH) and it will work after providing a username/password.
The only thing to watch out for is that hostname resolution may not work, so we will need to provide an IP address.
As pointed out, this is not the best approach, but sometimes when migrating from older systems we need to move one step at a time, and this is the easiest step to take.

Add file to jenkins workspace with docker

I have installed Jenkins in Docker successfully. When I create a new job and want to execute a sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this: docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] LOCALPATH|- CONTAINER:PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
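For the question's setup, a small example copying a script into a job's workspace (the job name myjob is hypothetical):
docker cp ./build.sh myjenkins:/var/jenkins_home/workspace/myjob/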
It depends a bit on the planned lifecycle of your Jenkins container. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But since the working data of Jenkins, like job configs, system configs, and workspaces, lives only inside the container, all of it will be gone once the container is removed. So if you plan to have a longer-running Jenkins environment, you might want to persist the data outside of the container so it survives recreating the container, launching new container versions, and so on. This can be done with the option --volume /path/on/host:/path/in/container, or its short form -v, on docker run, as in the sketch below.
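A minimal sketch, assuming you want the Jenkins home under /srv/jenkins/data on the host (the path is an assumption):
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /srv/jenkins/data:/var/jenkins_home jenkins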
There is also the option --volumes-from, which you can use to keep the data in a dedicated "data container" and mount it into your Jenkins container; a sketch of that pattern follows.
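A minimal sketch of that pattern (the names jenkins-data and myjenkins are assumptions; docker create prepares the volume-holding container without starting it):
docker create -v /var/jenkins_home --name jenkins-data jenkins /bin/true
docker run --name myjenkins --volumes-from jenkins-data -p 8080:8080 -p 50000:50000 jenkins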
For further information on this, please have a look at the Docker volumes documentation.

How to properly start Docker inside Jenkins that is also running in Docker

I'm trying to run Docker inside a Jenkins container that is also running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found today is to build my own Jenkins image based on the official Jenkins image but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is to perhaps use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I'm going to lose the EntryPoint specified in the Jenkins Dockerfile which allows me to pass arguments to the Jenkins container when starting the container, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker-in-Docker doesn't mean you have to run Docker inside Docker with 3 levels: host > first-level container > second-level container.
In fact, you just need to share the Docker socket with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this mount, when you run docker inside your Jenkins container, the Docker client will communicate with the Docker daemon of your host in order to run the new container.
For this to work, you should run your Jenkins container with the privileged flag:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new Jenkins image for that.
Hope this helps.
http://container-solutions.com/running-docker-in-jenkins-in-docker/
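One caveat to the approach above: the official jenkins image does not ship a Docker client, so the docker binary has to be present inside the container before the mounted socket is of any use. A minimal sketch of a derived image that adds one (whether docker.io is available in the base image's package repositories is an assumption; a static binary download is an alternative):
FROM jenkins:1.651.1
USER root
# install a Docker client so jobs can talk to the host daemon through the mounted socket
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins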

Docker: how to restart process inside of container?

I have a set of tests which I would like to run in a docker container.
In the middle of the tests I am changing my test data and I need to restart Jetty.
What is the best way to do that?
I can imagine some options:
With SSH, but SSH is not the best option for Docker.
A Python agent in the container listening on a socket: expose one more port, connect, and restart Jetty.
Maybe there are better ideas for this?
Thanks
Sounds like the process you're trying to restart is the primary process of the docker container (i.e. the one you set in your Dockerfile, if you have one; when you run ps -ef inside the container you would see your process with PID 1). If this is the case, then you cannot restart it from inside the container. You should just restart the container itself:
docker restart <container_id>
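To confirm that the process really is PID 1 before reaching for docker restart (this assumes ps is installed in the image):
docker exec <container_id> ps -ef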
Enter the container and restart Jetty from inside it.
Manual Way:
docker exec -it <containeridorname> /bin/bash
Or Automated Way:
docker exec -it <containeridorname> /restartjettycommand.sh
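The script itself is left undefined in this answer; a hypothetical sketch of what /restartjettycommand.sh could contain, using the same Jetty STOP-port mechanism shown in the next answer (port and key are assumptions):
#!/bin/sh
# stop the running Jetty instance via its STOP port, then start a fresh one in the background
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar --stop
java -jar start.jar &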
You need to use an entrypoint shell script.
You will need to build your Dockerfile so that you can copy the files into the container.
Your entrypoint.sh (or call it runjettytests.sh) in pseudo code will look roughly like this:
#!/bin/sh
# start Jetty in the background, then run the first test suite ($runTests is a placeholder)
java -jar start.jar &
$runTests
# stop Jetty via its STOP port and swap in the second dataset
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar --stop
cp dataset2 data/
# start Jetty again and run the second suite
java -jar start.jar &
$runTests2
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar --stop
exit 0
Clearly your use case may vary, but that's the rough idea.

How to set an environment variable in a running docker container

If I have a docker container that I started a while back, what is the best way to set an environment variable in that running container? I set an environment variable initially when I ran the run command.
$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
but now that it has been running for a while I want to add another VIRTUAL_HOST to the environment variable. I do not want to delete the container and then just re-run it with the environment variable that I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads in it that I don't want to lose.
I would just like to change the value of VIRTUAL_HOST environment variable.
There are generally two options, because Docker doesn't support this feature yet:
Create your own script, which will act as a runner for your command. For example:
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
Run your command the following way:
docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"
Docker doesn't offer this feature.
There is an issue: "How to set an enviroment variable on an existing container? #8838"
Also from "Allow docker start to take environment variables #7561":
Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.
For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:
You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)
This solution updates the environment variables without the need to delete and re-run the container, having to migrate volumes and remembering parameters to run.
However, this requires a restart of the docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.
To set up many environment variables in one step, and to avoid exposing them in the shell history the way the -e option does (think credentials/API tokens!), you can use the --env-file option:
docker run --env-file key_value_file.txt $INSTANCE_ID
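The file contains one KEY=value pair per line; a sketch of what key_value_file.txt might hold (the names are examples):
VIRTUAL_HOST=domain.example
API_TOKEN=changeme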
Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8
Live Restore
First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.
sudo vim /etc/docker/daemon.json
My file looks like this:
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"live-restore": true
}
Taken from here.
Get the Container ID
Save the ID of the container you want to edit for easier access to the files.
export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`
Edit Container Configuration
Edit the configuration file, go to the "Env" section, and add your key.
sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json
My file looks like this:
...,"Env":["TEST=1",...
Stop and Start Docker
I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.
sudo systemctl stop docker
sudo systemctl start docker
Because of live-restore, your containers should stay up.
Verify That It Worked
docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'
Single quotes are important here.
You can also verify that the uptime of your container hasn't changed:
docker ps
You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined, or you specified them on the command line with the -v switch.
You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:
$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.
A well-done container is always stateless. That means its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while.
In that case it is not dangerous to re-create the container with the same parameters (in your case slightly modified ones), assuming you create it from exactly the same image from which the old one was created and you re-use the same volumes with the above-mentioned switch.
After the new container has started successfully and you have verified that everything runs correctly, you can delete the old wordpress container. The old volumes are then referenced by the new container and will not be deleted.
If you are running the container as a service using docker swarm, you can do:
docker service update --env-add <your environment variable> <service_name>
You can also remove a variable with --env-rm.
To make sure it was added as you wanted, just run:
docker exec -it <container id> env
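For the question's variable, a concrete example (the service name my-wordpress is an assumption):
docker service update --env-add VIRTUAL_HOST=domain.example my-wordpress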
1. Enter your running container:
sudo docker exec -it <container_name> /bin/bash
2. Append the environment variables available to your current session (filtering out no_proxy) to /etc/environment, so they persist for any user session that needs to run the commands:
printenv | grep -v "no_proxy" >> /etc/environment
3. Stop and Start the container
sudo docker stop <container_name>
sudo docker start <container_name>
Firstly, you can set an env var inside the container the same way as you do on a normal Linux box.
Secondly, you can do it by modifying the config file of your Docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note that you need to restart the Docker service for it to take effect. This way you can also change other things, like port mappings.
Here is how to update a docker container's config permanently:
Stop the container: docker stop <container name>
Edit the container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
Restart docker
I solved this problem with docker commit: after making some modifications in the base container, we only need to tag the new image and start that one.
docs.docker.com/engine/reference/commandline/commit
docker commit [container-id] [tag]
docker commit b0e71de98cb9 stack-overflow:0.0.1
Then you can pass environment vars or an env file:
docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1
The quick working hack would be:
get into the running container.
docker exec -it <container_name> bash
set the env variable:
install vim if it is not already in the container
apt-get install vim
vi ~/.profile and at the end of the file add export MAPPING_FILENAME=p_07302021
source ~/.profile
check whether it has been set: echo $MAPPING_FILENAME (do this before you come out of the container)
Now you can run, from inside the container, whatever you were running outside of it.
Note: in case you're worried that you might lose your work if your current login session gets logged off, you can always use screen even before starting step 1. That way, if your session inside the running container is logged off by chance, you can log back in.
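A sketch of that screen usage (the session name is arbitrary):
# start a named session before step 1
screen -S env-work
# ...do the steps above; if the connection drops, reattach with:
screen -r env-work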
First, understand that docker runs an image constructed with a Dockerfile, and the only way to change that image is to build another one, stop everything, and run everything again.
So the easy way to "set an environment variable in a running docker container" is to read the Dockerfile [1] (with docker inspect) and understand how the container starts [1].
In the example [1] we can see that the container starts with /usr/local/bin/docker-php-entrypoint; we could edit it with vi and add one line, export myvar=myvalue, since /usr/local/bin/docker-php-entrypoint is a POSIX shell script.
If you can change the Dockerfile, you can add a call to a script [2], for example /usr/local/bin/mystart.sh, and set your environment variable in that file.
Of course, after changing the scripts you need to restart the container [3].
[1]
$ docker inspect 011aa33ba92b
[{
. . .
"ContainerConfig": {
"Cmd": [
"php-fpm"
],
"WorkingDir": "/app",
"Entrypoint": [
"docker-php-entrypoint"
],
. . .
}]
[2]
/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
[3]
docker restart dev-php (container name)
The hack of editing Docker's internal configs and then restarting the Docker daemon was unsuitable for my case.
There is a way to recreate a container with new environment settings and use it for some time.
1. Create a new image from the running container:
docker commit my-service
a1b2c3d4e5f6032165497
Docker creates the new image and answers with its ID. Note that the image doesn't include mounts and networks.
2. Stop and rename original container:
docker stop my-service
docker rename my-service my-service-original
3. Create and start new container with modified environment:
docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla --env OLD_ENV=zzz \
a1b2c3d4e5f6032165497
Here, I did the following:
created a new temporary container from the image built in step 1, which will show its output on the terminal, exit on Ctrl+C, and be deleted after that
configured its mounts and networks
added my custom environment configuration
4. After you have worked with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:
docker rename my-service-original my-service
docker start my-service
How to set an environment variable in a running docker container as a development environment
Basically you can do it like on a normal Linux box, by adding export MY_VAR="value" to the ~/.bashrc file.
Instructions
Using VS Code, attach to your running container
Then with VS Code open the ~/.bashrc file
Export your variable by adding this line at the end of the file
export MY_VAR="value"
Finally, reload .bashrc using the source command
source ~/.bashrc
You can set an environment variable for a session in a running Docker container with
docker exec -it -e "your environment Key"="your new value" <container> /bin/bash
Verify it using the command below:
printenv
This will update your key with the new value provided.
Note: this will revert to the old value if the container is restarted, since the change only applies to that exec session.
Use export VAR=Value
Then type printenv in the terminal to validate it is set correctly.
