Run Docker commands from command prompt versus Jenkins script

I have a test Ubuntu server with docker-machine installed. I have a number of Docker containers running on the server, including a Jenkins container. I run Jenkins with the following command:
docker run -d --name jenkins -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker --restart=always -p 8080:8080 -v ~/jenkinsHome:/var/jenkins_home docker-jenkins
I am working on managing my images through Jenkins. I can start all but one of my containers via a Jenkins shell script. The one container that fails appears to start in the script (I do a docker ps after the docker run in the script). However, the container stops after the script completes. I am using the same docker run command that works at the command prompt, but it fails in the Jenkins script:
sudo docker run -d --net=host -v ~/plex-config:/config -v ~/Media:/media -p 32400:32400 wernight/plex-media-server
I have double checked folder permissions and they are correct. Can anyone direct me to possible reasons the run command is failing in Jenkins, but not at the command prompt?

Using docker ps -a I was able to get the ID of the stopped container, and with docker logs I could see that the error was a folder permission issue. Digging deeper, it was a user permission mismatch: the user Jenkins runs as inside its container could not access the mounted folders correctly. I have decided to work around the problem by using the docker stop and docker start commands rather than docker run.
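For anyone hitting the same issue, a minimal sketch of the debugging and workaround steps described above (the container name plex is a placeholder; use whatever docker ps -a reports):
docker ps -a                  # list all containers, including the stopped one
docker logs <container-id>    # shows why it exited (here, a folder permission error)
docker start plex             # reuse the already-created container instead of docker run
docker stop plex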

Related

How to develop within a Docker image

I started to experiment with Docker but have some questions regarding how to develop with it and regarding its use cases. If anyone could guide me through these questions, it would be much appreciated.
First,
As far as I understand, Docker is mainly used for developing applications in custom environments, thus avoiding tedious installation processes. This is initially my intention and why I'd like to use Docker.
I've created a Dockerfile which builds successfully, and which has basic C++ development tools based upon library/gcc. I want to be able to develop in this Docker container as you would do in your terminal.
What I did is I created a docker image from a Dockerfile. (I can observe that it is successfully created)
docker build -t mydockerimage .
Then run the docker in detached mode.
docker run -d mydockerimage
At this point, I am notified of the ID of the Docker container. However, the container does not seem to be running when I check the output of:
docker container ls
Here comes the first question: why is my Docker container not running?
To my understanding, the simplest way to interact with the Docker container is as follows:
docker exec -it <container_id_or_name> echo "Hello from container!"
Is this true? Is this a use case of Docker in which I can simply start the container and exec some Linux command on it?
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
Thank you in advance.
Do you provide an ENTRYPOINT or CMD in your Dockerfile? This is what gets executed inside your container and keeps the container running. You can find some details here.
In short: Docker runs shell-form commands through /bin/sh -c, but an image has no default CMD of its own.
Check the Dockerfile of ubuntu. It has bash as its CMD, so running the image without a command starts bash.
$ docker run -it ubuntu bash
root@9855e779cab2:/#
This gives you an interactive shell in which you can execute commands as you would on an Ubuntu machine. If you exit the shell, the container stops running.
To keep a container running in the background you can use the -d option together with -it, so that bash keeps a terminal attached and does not exit immediately. The container then runs as a daemon:
$ docker run -d -it ubuntu bash
2606ad8e095baa0237cc30e599a26a4d727d99d47392d779fb83cd50f1a39614
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2606ad8e095b ubuntu "bash" 18 seconds ago Up 17 seconds cranky_johnson
Now you can use docker exec to "go inside" the container and execute Ubuntu commands.
$ docker exec -it 2606ad8e095b bash
root@2606ad8e095b:/#
When you exit this shell, the container keeps running in the background.
Now we can execute your command too:
$ docker exec -it 2606ad8e095b echo "Hello from container!"
Hello from container!
This executes echo inside the already-running container and prints the string.
I think it's important in your case to define some ENTRYPOINT (which can also be a script) or a CMD. You probably need something very similar to the Ubuntu image if you just want to use bash inside your container.
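For example, a minimal sketch of such a Dockerfile and how to use it (the gcc base image and the container name mydev are assumptions; adapt them to your own file):
cat > Dockerfile <<'EOF'
FROM gcc:latest
CMD ["bash"]
EOF
docker build -t mydockerimage .
docker run -d -it --name mydev mydockerimage   # -d -it keeps bash alive in the background
docker exec -it mydev bash                     # "go inside" and develop interactively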
Moreover, I get a permission denied on /var/lib/docker.sock when I try to execute docker commands without sudo. What am I missing here?
This is normal. The Docker daemon requires root privileges, so you either have to use docker as the root user (or a user with root privileges) and add sudo every time, or you can add your user to the docker group. Every time the daemon starts, it makes the Unix socket readable and writable by the docker group. This means that a user in the docker group can use docker without sudo.
To add your user to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ exit
ssh back in or open a new shell so the new group membership takes effect
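To verify afterwards (just a quick sanity check):
$ groups      # should now include "docker"
$ docker ps   # should work without sudo and without a permission error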

Docker couchbase cbbackup/cbtransfer/cbrestore tools

I've used Docker to install Couchbase on my Ubuntu machine using https://hub.docker.com/r/couchbase/server/. The docker run command is as follows:
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
Everything works perfectly fine. My application connects, and I'm able to insert, update, and query Couchbase. Now I'm looking to debug a situation in which Couchbase is on my co-developer's machine, who has the same installation, i.e., Couchbase on Docker using the above link. To do this, I wanted to run cbbackup on his installation, so I ran the following command, which is a variation of the one from the above link:
bash -c "clear && docker exec -it couch-db sh"
Can anyone please help me with the location of /opt/couchbase/bin in this setup? I believe this is where I can get access to "cbbackup", "cbrestore" and "cbtransfer" which I can then use to backup and restore data from my colleague's machine.
Thanks,
Abhi.
When you run the command
docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 -v /home/dockercontent/couchbase:/opt/couchbase/var couchbase
you're pulling a docker image and spawning a docker container.
Please read more about Docker and containerization.
In order to run cbbackup you need to log into your docker container.
Follow these steps:
Retrieve the container-id:
$ docker ps -a
Look for the CONTAINER ID of the row whose IMAGE is couchbase.
Login to the container using the command:
$ docker exec -it <container-id> bash
Go to the directory /opt/couchbase/bin using:
$ cd /opt/couchbase/bin
You'll find cbbackup binary in this directory.
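From there, a hedged sketch of a backup run (the credentials and backup directory are placeholders; adjust them to your cluster):
$ docker exec -it <container-id> bash
$ cd /opt/couchbase/bin
$ ./cbbackup http://localhost:8091 /opt/couchbase/var/backup -u <admin-user> -p <admin-password>
Writing the backup under /opt/couchbase/var means it lands in the host directory /home/dockercontent/couchbase through the -v mount, so it survives the container.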

Must I provide a command when running a docker container?

I'd like to install mysql server on a centos:6.6 container.
However, when I run docker run --name myDB -e MYSQL_ROOT_PASSWORD=my-secret-pw -d centos:6.6, I get the error docker: Error response from daemon: No command specified.
Checking the document from docker run --help, I found that the COMMAND seems to be an optional argument when executing docker run. This is because [COMMAND] is placed inside a pair of square brackets.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
I also found that the official repository of mysql doesn't specify a command when starting a MySQL container:
Starting a MySQL instance is simple:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Why should I provide a command when running a centos:6.6 container, but not so when running a mysql container?
I'm guessing that maybe centos:6.6 is specially-configured so that the user must provide a command when running it.
If you use centos:6.6, you do need to provide a command when you issue docker run.
The reason the official repository of mysql does not specify a command is that it has a CMD instruction in its Dockerfile: CMD ["mysqld"]. Check its Dockerfile here.
The CMD in a Dockerfile is the default command used when the container is run without a command.
You can read here to better understand what you can use in a Dockerfile.
In your case, you can:
Start your centos 6.6 container.
Take the official mysql Dockerfile as a reference and issue similar commands (change apt-get to yum, or sudo yum if you don't use the default root user).
Once you can successfully start mysql, put all your commands into your own Dockerfile, making sure the first line is FROM centos:6.6 (a rough sketch follows below).
Build your image.
Run a container from your image; then you don't need to provide a command in docker run.
You can share your Dockerfile on Docker Hub so that other people can use it.
Good luck.
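A rough sketch of what that Dockerfile and build could look like (the yum package name and the data-directory initialization are assumptions for illustration, not the official mysql image's recipe):
cat > Dockerfile <<'EOF'
FROM centos:6.6
# package name is an assumption; CentOS 6 ships MySQL 5.1 as mysql-server
RUN yum install -y mysql-server && yum clean all
# initialize the data directory at build time so mysqld_safe can start
RUN mysql_install_db --user=mysql
CMD ["mysqld_safe"]
EOF
docker build -t my-centos-mysql .
docker run --name myDB -d my-centos-mysql   # no command needed now; CMD supplies the default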

Docker: issue commands to an app inside a container?

I am using NodeBB. To start a server you can run ./nodebb start, and to stop it you can run ./nodebb stop. Now that I have dockerized it (http://nodebb-francais.readthedocs.org/projects/nodebb/en/latest/installing/docker/nodebb-redis.html), I am not sure how I can interact with it.
I have followed the steps under "Using docker-machine mac os x":
docker run --name my-forum-redis -d -p 6379:6379 nodebb/docker:ubuntu-redis
Then
docker run --name my-forum-nodebb --link my-forum-redis:redis -p 80:80 -p 443:443 -p 4567:4567 -P -t -i nodebb/docker:ubuntu
Then
docker start my-forum-nodebb
I had an issue with the Redis address being in use, so I want to fix that and restart, but I am not sure how. Also, I would like to issue the command grunt in the project directory; again, not sure how.
My question is how can I interact with an app inside a docker container as if I had direct access to the project folder itself? Am I missing something?
All code in this answer is untested, as I'm currently at a computer without docker.
See whether the containers are still running
docker ps
Stop misconfigured containers
docker stop my-forum-redis
docker stop my-forum-nodebb
Remove misconfigured containers and their volumes
(The docker images they are based on will be retained.)
docker rm --volumes --force my-forum-nodebb
docker rm --volumes --force my-forum-redis
Start again
Then, issue your 3 commands again, now with the correct ports.
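For example (untested like the rest, and assuming the conflict was host port 6379 already being in use), you could remap the Redis host port; the linked NodeBB container still reaches Redis on the container side of the link:
docker run --name my-forum-redis -d -p 6380:6379 nodebb/docker:ubuntu-redis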
Execute arbitrary commands inside container
Also I would like to issue the command grunt in the project directory, again not sure how?
You probably want to do the following after the docker run --name my-forum-nodebb ... command but before docker start my-forum-nodebb.
docker run accepts a command to execute instead of the image's default command. Note that docker run takes an image name, so use the image the container was created from (nodebb/docker:ubuntu), not the container name. Let's first use this to find out where in the container we'd land:
docker run --rm nodebb/docker:ubuntu pwd
If that is the directory where you want to run grunt, just go forward with it:
docker run --rm nodebb/docker:ubuntu grunt
If not, you'll have to stuff several commands into a single one. You can do that by invoking a shell:
docker run --rm nodebb/docker:ubuntu bash -c 'cd /path/to/project/dir; grunt'
where /path/to/project/dir is to be replaced by where you want to run grunt.
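Alternatively (same untested caveat), since my-forum-nodebb is already running after docker start, you can exec into that container instead of starting a new one; the project path is again a placeholder:
docker exec -it my-forum-nodebb bash -c 'cd /path/to/project/dir && grunt'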

How to get Container Id of Docker in Jenkins

I am using the Docker Custom Build Environment Plugin to build my project inside the "jpetazzo/dind" docker image. After building, the console output shows:
Docker container 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc started to host the build
$ docker exec --tty 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc env
[workspace] $ docker exec --tty --user 122:docker 4aea29fff86ba4e50dbcc7387f4f23c55ff3661322fb430a099435e905d6eeef env BUILD_DISPLAY_NAME=#73
Here the Docker container which got started has the container ID 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc.
Further on, I want to execute some commands in the "Execute shell" part of the "Build" section in Jenkins, and there I want to use this container ID. I tried using ${BUILD_CONTAINER_ID} as mentioned on the plugin page, but that doesn't work.
The documentation tells you to use docker run, but you're trying to do docker exec. The exec subcommand only works on a currently running container.
I suppose you could do a docker run -d to start the container in the background, and then make sure to docker stop when you're done. I suspect this will leave you with some orphaned running containers when things go wrong, though.
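A hedged sketch of that approach in an "Execute shell" step (the image is the one from the question; the rest is illustrative):
CONTAINER_ID=$(docker run -d jpetazzo/dind)    # docker run -d prints the new container's ID
docker exec --tty "$CONTAINER_ID" env          # run whatever you need against that ID
docker stop "$CONTAINER_ID"                    # stop it afterwards to avoid orphaned containers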
