I've set up a Jenkins pipeline job as a Groovy script.
I am trying to build a Jenkins job which runs a docker command on a remote server.
Jenkins is expected to connect to the remote server and perform:
docker run -d -p 60:80 <image name>
For that I have used the following Groovy script in the Jenkins pipeline job:
stage ('Deploy on App Server') {
    def dockrun = 'docker run -d -p 60:80 <image name>'
    sshagent(['dev-servr-crdntls']) {
        sh "ssh -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx ${dockrun}"
    }
}
This script runs perfectly fine: Jenkins connects to the remote server, runs the docker command, and the app runs on port 60.
HOWEVER, since this is a Jenkins pipeline for CI/CD, the next time the build runs the job fails because port 60 is already allocated.
I want to free port 60 before running the docker run -d -p ... command. Any suggestions please?
You could use the following command to kill the running container that occupies a given port:
docker kill $(docker ps -qf expose=<port>)
Explanation:
The docker ps command lists containers and has a lot of useful options. One of them is the -f flag for filtering containers based on some property. You can filter for the running container that occupies <port> by using -f expose=<port>. In addition, the -q flag can be used to output only the container ID. This output can then be used as input to the docker kill command.
Edit:
Because the command mentioned above fails with an error if no container is running on the given port, the following command can be used to circumvent this problem:
docker kill $(docker ps -qf expose=<port>) 2> /dev/null || echo 'No container running on port <port>'
Now the command will either kill the container occupying <port>, if such a container exists and is running, or print No container running on port <port> (the echo fallback is optional).
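Folded into the pipeline stage from the question, it might look like the sketch below. This is a minimal sketch, assuming the same credentials ID, host placeholder, and port 60 as in the question; the expose filter is the one from this answer. Note the single quotes around the remote command, so that $() is expanded on the remote server rather than on the Jenkins agent.
stage ('Deploy on App Server') {
    // free port 60 first (no-op with a message if nothing is running there), then deploy
    def dockkill = 'docker kill $(docker ps -qf expose=60) 2> /dev/null || echo "No container running on port 60"'
    def dockrun = 'docker run -d -p 60:80 <image name>'
    sshagent(['dev-servr-crdntls']) {
        sh "ssh -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx '${dockkill}'"
        sh "ssh -o StrictHostKeyChecking=no ubuntu@xx.xxx.xx.xx '${dockrun}'"
    }
}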
Here's the situation:
I have a docker container (Jenkins). I've mounted the Docker socket into my container so that I can perform docker commands inside my Jenkins container.
Manually, everything works in the container. However, when Jenkins executes the job, it doesn't "wait" for the docker exec command to run to completion.
Below is an extract from the Jenkinsfile. The short-lived printenv command runs correctly and prints the environment variables. The next command (python) just gets started, and then Jenkins moves on immediately without waiting for completion. The Jenkins agent (slave) is running on an Ubuntu image. Running all these commands outside Jenkins works as expected.
echo "Running the app docker container in detached tty mode to keep it up"
docker run --detach --tty --name "${CONTAINER_NAME}" "${IMAGE_NAME}"
echo "Listing environment variables"
docker exec --interactive "${CONTAINER_NAME}" bash -c "printenv"
echo "Running test coverage"
docker exec --interactive "${CONTAINER_NAME}" bash -c "python -m coverage run --source . --branch -m pytest -vs"
It seems maybe related to this question.
Please can anyone explain how to get Jenkins to wait for the docker exec command to complete before proceeding to the next step.
Have considered alternatives, like the Docker Pipeline Plugin, but would much prefer to use something close to what I have above where possible.
OK, as another approach, I've tried using the Docker Pipeline plugin here.
You can mount docker.sock as a volume to orchestrate containers on your host machine, like this in your docker-compose.yml:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Depending on your setup you might need to run
chmod 666 /var/run/docker.sock
to get going in the first place.
This works on macOS as well as Linux.
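If you start the container with plain docker run rather than Compose, the equivalent bind mount would be something like the sketch below (the image and container names are illustrative):
# bind-mount the host's Docker socket so the container can talk to the host daemon
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 8080:8080 \
  jenkins/jenkins:lts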
Ugh. This was down to the way that I'd set up docker support on the slave container.
I'd used socat to provide a TCP server proxy. Instead, I switched that out for a plain old docker.sock volume between host and container.
volumes:
- /var/run/docker.sock:/var/run/docker.sock
The very first time, I had to also sort out a permissions issue by doing (inside the container):
rm -Rf ~/.docker
chmod 666 /var/run/docker.sock
After that, everything "just worked". Very painful experience.
I created a fresh Digital Ocean server with Docker on it (using Laradock) and got my Laravel website working well.
Now I want to automate my deployments using Deployer.
I think my only problem is that I can't get Deployer to run docker exec -it $(docker-compose ps -q php-fpm) bash;, which is the command I successfully use manually to enter the appropriate Docker container (after using SSH to connect from my local machine to the Digital Ocean server).
When Deployer tries to run it, I get this error message:
➤ Executing task execphpfpm
[1.5.6.6] > cd /root/laradock && (pwd;)
[1.5.6.6] < /root/laradock
[1.5.6.6] > cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)
[1.5.6.6] < the input device is not a TTY
➤ Executing task deploy:failed
• done on [1.5.6.6]
✔ Ok [3ms]
➤ Executing task deploy:unlock
[1.5.6.6] > rm -f ~/daily/.dep/deploy.lock
• done on [1.5.6.6]
✔ Ok [188ms]
In Client.php line 99:
[Deployer\Exception\RuntimeException (1)]
The command "cd /root/laradock && (docker exec -it $(docker-compose ps -q php-fpm) bash;)" failed.
Exit Code: 1 (General error)
Host Name: 1.5.6.6
================
the input device is not a TTY
Here are the relevant parts of my deploy.php:
host('1.5.6.6')
->user('root')
->identityFile('~/.ssh/id_rsa2018-07-09')
->forwardAgent(true)
->stage('production')
->set('deploy_path', '~/{{application}}');
before('deploy:prepare', 'execphpfpm');
task('execphpfpm', function () {
cd('/root/laradock');
run('pwd;');
run('docker exec -it $(docker-compose ps -q php-fpm) bash;');
run('pwd');
});
I've already spent a day and a half reading countless articles and trying so many different variations. E.g. replacing the -it flag with -i, or setting export COMPOSE_INTERACTIVE_NO_CLI=1 or replacing the whole docker exec command with docker-compose exec php-fpm bash;.
I expect that I'm missing something fairly simple. Docker is widely used, and Deployer seems popular too.
To use Laravel Deployer you should connect via ssh directly to the workspace container.
You can expose the container's ssh port:
https://laradock.io/documentation/#access-workspace-via-ssh
Let's say you've forwarded the container's ssh port 22 to VM port 2222. In that case you need to configure Deployer to use port 2222.
Also remember to set proper secure SSH keys, not the default ones.
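Once the port is exposed, it's worth checking connectivity from your local machine first; a sketch, where the host IP and user name are illustrative:
# verify you can reach the workspace container directly over the forwarded port
ssh -p 2222 laradock@1.5.6.6
Deployer's host definition also accepts a port setting, so the host entry in deploy.php needs to target that same port.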
You should try
docker-compose exec -T php-fpm bash;
The -T option will
Disable pseudo-tty allocation. By default docker-compose exec allocates a TTY.
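In a deployment script that means passing the actual command instead of opening bash; for example, a sketch assuming the php-fpm service ships the php CLI:
# run a one-off, non-interactive command in the php-fpm service without a TTY
docker-compose exec -T php-fpm php -v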
In my particular case I had separate containers for PHP and Composer. That is why I could not connect to the container via SSH while deploying.
So I configured the bin/php and bin/composer parameters like this:
set('bin/php', 'docker exec php php');
set('bin/composer', 'docker run --volume={{release_path}}:/app composer');
Notice that here we use exec for the persistent php container, which is already running at that moment, and run to start a new instance of the composer container, which stops after installing dependencies.
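With those settings, the commands Deployer effectively issues on the server would look roughly like this (the release path and artisan invocation are illustrative):
# {{bin/php}}: reuse the long-running "php" container
docker exec php php artisan --version
# {{bin/composer}}: throwaway composer container mounted on the current release
docker run --volume=/var/www/app/releases/1:/app composer install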
I want to run multiple containers automatically and create something,
but some images, such as swarm, will automatically stop after run or start.
I have already tried the following:
docker run -d swarm
docker run -d swarm /bin/bash tail -f /dev/null
docker run -itd swarm bash -c "while true; do sleep 1; done"
but docker ps shows nothing. I also tried to build an image from this Dockerfile:
FROM swarm
ENTRYPOINT ["echo"]
and the image does not run, with this error message:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"echo\\\": executable file not found in $PATH\"\n".
I can't understand this error... How can I keep the swarm container running?
(Sorry, my English is not good.)
Using -d is recommended because you can run your container with just one command and you don't need to detach the container's terminal by hitting Ctrl + P + Q.
However, there is a problem with the -d option: your container immediately stops unless a command keeps running in the foreground.
Docker requires your command to keep running in the foreground. Otherwise, it thinks that your application has stopped and shuts down the container.
The problem is that some applications do not run in the foreground.
In this situation, you can add tail -f /dev/null to your command.
By doing this, even if your main command runs in the background, your container doesn't stop because tail keeps running in the foreground.
docker run -d swarm tail -f /dev/null
docker ps now shows the container.
You can then attach to your container using docker exec <container_name> <command>, or combine your own command with the tail trick:
docker run -d swarm sh -c '<command>; tail -f /dev/null'
First of all, you don't want to mix the -i and -d switches: you run the container either in interactive or in detached mode. In your case, detached mode:
docker run -d swarm /bin/bash tail -f /dev/null
There is also no need to allocate a tty using the -t flag, since that only needs to be done in interactive mode.
You should have a look at the Docker run reference
A Docker container performs two types of task: one runs a job and exits, the other keeps running in the background.
To run a docker container in the background, there are a few options.
Run using a shell: docker run -it <image> /bin/bash
For a continuously running container: docker run -d -p 8080:8080 <image>, assuming the image exposes port 8080 and is in listening mode.
It's fine to do a tail on /dev/null, but why not make it do something useful?
The following command will reap orphan processes, so no zombie (defunct) processes are left floating around. Also, some init.d / restart scripts don't allow this.
exec sh -c 'while true; do wait; done'
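As a concrete use, it can serve as the container's foreground command. A sketch: the image name is illustrative, and the sleep is my addition to keep the loop from spinning when there are no children to reap.
# PID 1 wakes up once a second and reaps any orphaned children via wait
docker run -d --name reaper ubuntu sh -c 'while true; do sleep 1; wait; done'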
You are right, docker run -itd swarm (without giving the container an argument such as bash -c "while true; do sleep 1; done") works fine. If you pass an argument to docker run, it runs the command and then terminates the container. If you want the container to run permanently, first start it with docker run -itd swarm and check that it is running with docker ps. Once the container runs, you can execute a command in it with docker exec -itd <container_name> <command>. Remember: only use commands which do not stop the container. bash -c "while true; do sleep 1; done" will stop the container (it is a complete command: if you execute it in a normal terminal it runs and terminates, and this type of command also terminates the container).
I hope this helps.
Basically this is the method, but your docker image is swarm, which is different; I don't know the swarm docker image and I am not using it. After researching it, I found that when you run the swarm image, its help output shows that it only accepts five commands: create, list, manage, join, help. If you run the swarm image without a command, like docker run -itd swarm, it takes the command as --help. Sorry, but I don't know the purpose of the swarm image. For more usage, check https://hub.docker.com/_/swarm/ .
The answer I added, docker run -itd <image> tail -f /dev/null, is not for the swarm image; it is for docker images like ubuntu, fedora, centos.
Just read the usage of the swarm image and why it is used.
If you still have an issue, post it at https://github.com/docker/swarm-library-image/issues
Thank you...
Have a container running:
docker run --rm -d --name=tmp ubuntu sleep infinity
Example of requesting a command from the dormant container:
docker exec tmp echo hello from container
Notes:
--rm removes the container if it is stopped
-d runs the container in the background
--name=tmp names the container so you control how to refer to it
ubuntu uses a light image to exec your commands
sleep infinity keeps the container dormant
On a VPS server I use the following command to run Jenkins:
docker run -d -p 8080:8080 jenkins
But sometimes a config error of mine will stop the container, and then all my jobs configured in Jenkins are lost.
I followed this video to run Jenkins in Docker.
Is this the right way to run Jenkins in Docker? How do I save my jobs in my pulled Jenkins image?
You have to attach a volume to the container that points to the Jenkins home directory. Usually I use:
docker run -d -p 80:8080 -v /my-absolute-path/where-is-jenkins_home:/var/jenkins_home jenkins
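If you'd rather not manage a host path, a Docker named volume does the same job; a sketch, where the volume name jenkins_home is arbitrary:
# let Docker manage the storage location; data survives container removal
docker volume create jenkins_home
docker run -d -p 80:8080 -v jenkins_home:/var/jenkins_home jenkins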
I have a test Ubuntu server with docker-machine installed. I have a number of docker containers running on the server, including a Jenkins container. I run Jenkins with the following command:
docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker --restart=always -p 8080:8080 -v ~/jenkinsHome:/var/jenkins_home docker-jenkins
I am working on managing my images through Jenkins. I can start all but one of my containers via a Jenkins shell script. The one container that fails appears to start in the script (I do a docker ps after the docker run in the script). However, the container stops after the script completes. I am using the same docker run command that works at the command prompt, but it fails in the Jenkins script:
sudo docker run -d --net=host -v ~/plex-config:/config -v ~/Media:/media -p 32400:32400 wernight/plex-media-server
I have double-checked the folder permissions and they are correct. Can anyone direct me to possible reasons the run command fails in Jenkins but not at the command prompt?
Using docker ps -a I was able to get an ID for the stopped container. Then by using docker logs I was able to see that the error was a folder permission issue. Digging deeper, it was a user permission mismatch: the user Jenkins runs as inside its container could not access the folder correctly. I have decided to circumvent the problem by using docker stop and start commands instead of docker run.
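For reference, the debugging sequence described above boils down to a few commands; the container ID is a placeholder:
# list every container, including stopped ones
docker ps -a
# inspect the stopped container's output to find the exit reason
docker logs <container-id>
# restart the existing container rather than docker run a new one
docker start <container-id>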