I'd like to package Selenium Grid Extras into a Docker image.
When this service runs directly on a machine (without a Docker container), it can reboot the OS it runs on. I wonder if I can set up the container so that the Selenium Grid Extras service running inside it can restart the container.
I am not familiar with Selenium Grid, but as a general idea: you could mount a folder from the host as a data volume and let Selenium write information there, such as a flag file.
On the host, you have a scheduled task / cronjob that checks for this flag in the shared folder, and if it has a certain status, invokes a docker restart from there.
Not sure if there are more elegant solutions for this, but this is what came to my mind ad hoc.
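For example, a minimal host-side sketch; the shared folder, flag file name, and container name are all assumptions:
#!/bin/bash
# Run from cron on the host, e.g.: * * * * * /opt/selenium/check-restart.sh
FLAG=/opt/selenium/shared/restart-requested
if [ -f "$FLAG" ]; then
    rm -f "$FLAG"                  # consume the flag first, to avoid a restart loop
    docker restart selenium-grid   # hypothetical container name
fi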
Update:
I just found this on the Docker forum:
https://forums.docker.com/t/how-can-i-run-docker-command-inside-a-docker-container/337
I'm not sure about CoreOS, but normally you can manage your host containers from within a container by mounting the Docker socket.
Such as
docker run -it -v /var/run/docker.sock:/var/run/docker.sock ubuntu:latest \
  sh -c "apt-get update; apt-get install docker.io -y; bash"
or
https://registry.hub.docker.com/u/abh1nav/dockerui/
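Applied to the Selenium case, that would mean starting the grid container with the socket mounted, so the service inside can restart its own container. A sketch; the image and container names are assumptions, and the image needs a docker client installed:
docker run -d --name selenium-grid \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-grid-extras-image
# From inside the container, this restarts the container itself:
docker restart selenium-grid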
Related
I created a task definition on Amazon ECS and want to run it with Fargate. I set up my task with network mode awsvpc. I created a new container with a Docker image (a simple "Hello world" project) on Amazon ECR. I ran the task and everything worked fine. Now I need to run a Docker container from hub.docker.com as part of the task
Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install ...
ADD script.sh /script.sh
RUN chmod +x /script.sh
ENTRYPOINT ["/script.sh"]
script.sh
#!/bin/bash
...prepare data
docker run --rm some_container_from_docker_hub
...continue process data
Initially, I got a "command not found" error. OK, I installed Docker into my image. Now I get "Cannot connect to the Docker daemon".
My question: is there any way to run a docker container inside of another docker container on Amazon Fargate?
You can't run a container from another container using Fargate.
Running a container from another one, as in your case, would require access to the docker daemon. Access to the docker daemon means root access to the host machine. This breaks the Docker container isolation and is unsafe.
Depending on your usage, I suggest you use an EC2 instance, use CodeBuild, or build an operator that is able to talk with the API to spawn containers.
[Edit]: It seems that there is an open issue on this topic [ECS,Fargate]: Support for building Docker containers #95
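For the last option, a rough sketch of what talking to the API could look like from your script, replacing the inner docker run with a sibling Fargate task launched through the AWS CLI. The cluster name, task definition name, subnet, and security group here are placeholders:
#!/bin/bash
# ...prepare data

# Launch the hub.docker.com image as a separate Fargate task instead of
# starting it through the local docker daemon (which Fargate does not expose):
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition some-image-from-docker-hub \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}'

# ...continue processing data
The Docker Hub image must be registered as its own task definition first, and the calling task's IAM role needs permission for ecs:RunTask.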
I have a master container instance (Node.js) that runs some tasks in a temporary worker docker container.
The base image used is node:8-alpine, and the entrypoint command runs as user node (a non-root user).
I tried running my container with the following command:
docker run \
-v /tmp/box:/tmp/box \
-v /var/run/docker.sock:/var/run/docker.sock \
ifaisalalam/ide-taskmaster
But when the Node.js app tries to run a docker container, a permission denied error is thrown: the app can't read the /var/run/docker.sock file.
Accessing this container through sh and running ls -lha /var/run/docker.sock, I see that the file is owned by root:412. That's why my node user can't run a docker container.
The /var/run/docker.sock file on the host machine is owned by root:docker, so I guess the 412 inside the container is the docker group ID of the host machine.
I'd be glad if someone could provide a workaround to run Docker from a Docker container on Container-Optimized OS on GCE.
The source Git repository link of the image I'm trying to run is - https://github.com/ifaisalalam/ide-taskmaster
Adding the following command to the start-up script of the host machine solves the problem:
sudo chmod 666 /var/run/docker.sock
I am just not sure whether this is a secure workaround for an app running in production.
EDIT:
This answer suggests another approach that might also work - https://stackoverflow.com/a/47272481/11826776
Also, you may read this article - https://denibertovic.com/posts/handling-permissions-with-docker-volumes/
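If chmod 666 feels too permissive, a common alternative is to give the container user the socket's group instead. A hedged sketch for a node:8-alpine based image; the GID 412 comes from your ls output and may differ per host, and docker-host is an arbitrary group name:
FROM node:8-alpine
USER root
# Create a group with the host docker group's GID and add the app user to it,
# so the node user can read/write the mounted socket without chmod 666.
RUN addgroup -g 412 -S docker-host \
 && addgroup node docker-host
USER node
The downside is that the GID is baked in at build time; an entrypoint that reads the socket's GID at startup (as in the linked answer) is more portable.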
I know my issue is already discussed in How to run shell script on host from docker container? but I think my issue is a little bit more complicated.
First, let me explain my situation. I'm using Jenkins 2.x from a Docker container in a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace in the Jenkins Docker container, and also on the host in a mounted directory, like this:
volumes:
- ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it in turn executes the other script. The second script checks out an SVN directory and does much more.
So I want a new mounted volume so that all results of the executed second script end up on the host in that directory. I think connecting to the host over SSH and executing the script there would be fine, but how can I do that?
I hope I have explained my issue understandably.
I will answer regarding "I think connecting to the host over SSH and executing the script there would be fine, but how can I do that?"
Pass the host machine's IP to your run command:
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command. This will SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without a password, then set up an SSH key in the container, or you can also try
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or, if you want to run a script that is inside the container, you can do that too; just pass the script to ssh:
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host password taken from ENV, and hostmachine is the host alias we set in the run command.
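For the key-based variant, a minimal sketch (run once inside the container; user_name and hostmachine as above):
# Generate a passphrase-less key and install it on the host:
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
ssh-copy-id user_name@hostmachine

# After that, no sshpass is needed:
ssh user_name@hostmachine 'bash /home/user_name/Desktop/test.sh'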
Based on comments on this answer:
We can simply install an SSH plugin (SSH) or (Publish over SSH) and it will work after providing a username/password.
The only thing to watch out for is that hostname resolution does not work, so we need to provide an IP address.
As pointed out, this is not the best approach, but sometimes when migrating from older systems we need to move one step at a time, and this is the easiest step to take.
I'm building a Docker container which has Maven and some dependencies, and then executes a script inside the container. It seems one of those dependencies needs an X server to work. Nothing is shown on screen, but it seems to be necessary and can't be avoided.
I got it working by putting ENV DISPLAY=x.x.x.x:0 in the Dockerfile; it connects to the external X server and works. But the point is to make the Docker container self-sufficient.
So I need to add an X server to my container by adding the necessary pieces to the Dockerfile. And I want that X server to be accessible only by the Docker container itself, not externally.
My Dockerfile starts with FROM ubuntu:15.04, and that is unchangeable because my Dockerfile has a lot of things depending on that specific version.
I've read some posts about how to connect from a Docker container to the X server of the Docker host machine, like this one. But as the question's title says, the Docker host is headless and doesn't have an X server.
What would be the minimum set of apt-get packages to install in the container to have an X server?
I guess my Dockerfile will need the display environment variable, something like ENV DISPLAY=:0. Is this correct?
Is anything else needed in the docker run command?
Thank you.
You can install and run x11vnc inside your Docker container. I'll show you how to make it run on a headless host and how to connect to it remotely to run X applications (e.g. xterm).
Dockerfile:
FROM joprovost/docker-x11vnc
RUN mkdir ~/.vnc && touch ~/.vnc/passwd
RUN x11vnc -storepasswd "vncdocker" ~/.vnc/passwd
EXPOSE 5900
CMD ["/usr/bin/x11vnc", "-forever", "-usepw", "-create"]
And build a docker image named vnc:
docker build -t vnc .
Run a container, and remember to map port 5900 to the host for remote connections (I'm using --net=host here, so no explicit mapping is needed):
docker run -d --name=vnc --net=host vnc
Now you have a running container with x11vnc inside. Download a VNC client like RealVNC and connect to <server_ip>:5900 from your local machine; the password is vncdocker, as set in the Dockerfile. You'll land on the remote X screen with an xterm open. If you execute env, you will find the environment variable DISPLAY=:20.
Let's go into the docker container and try to open another xterm:
docker exec -it vnc bash
Then execute the following command inside container:
DISPLAY=:20 xterm
A new xterm window will pop up in your VNC client window. I guess that's the way you are going to run your application.
Note:
The base vnc image is based on Ubuntu 14, but I guess the packages are similar in Ubuntu 16
Don't expose 5900 if you don't want remote connection
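One more idea: if no remote access is needed at all, a plain Xvfb (virtual framebuffer, no VNC server) may be the most minimal answer to the original question. A hedged sketch; the mvn verify command is a placeholder for whatever the script runs:
FROM ubuntu:15.04
RUN apt-get update && apt-get install -y xvfb
# xvfb-run starts a throwaway X server, exports DISPLAY for the wrapped
# command, and shuts the server down when the command exits.
CMD ["xvfb-run", "-a", "mvn", "verify"]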
Hope this can help :-)
I'm trying to run Docker inside a Jenkins container that is itself running in Docker (i.e. Docker in Docker). What I want to know is how to properly start the Docker service when booting Jenkins. The only solution I've found so far is to build my own Jenkins image based on the official Jenkins image, but change the jenkins script loaded by the entry point to also start up Docker:
# I've added this line just before Jenkins is started from the script:
sudo service docker start
# I've also removed "exec" from the original file which used "exec java $JAVA_OPTS ..." but that didn't work
java $JAVA_OPTS -jar /usr/share/jenkins/jenkins.war $JENKINS_OPTS "$@"
This works when I run a new container (using docker run), but the problem is that if I do docker start on a stopped container, the Docker service is not started.
I strongly suspect that this is not the right way to start my Docker service. My plan is perhaps to use supervisord to start Jenkins and Docker separately (I suppose container linking is out of the question, since Docker should be executed as a service on the same container that Jenkins is running on?). My concern with this approach is that I would lose the ENTRYPOINT specified in the Jenkins Dockerfile, which allows me to pass arguments to the Jenkins container when starting it, for example:
docker run -p 8080:8080 -v /your/home:/var/jenkins_home jenkins -- <jenkins_arguments>
Does anyone have any recommendations on a good way to solve this preferably by not forking the official Jenkins image?
I'm pretty sure you cannot do that.
Docker in Docker doesn't mean you have to run docker inside docker with 3 levels: host > first-level container > second-level container.
In fact, you just need to share docker with the host, and it is your host that will run the other containers.
To do that, you have to mount a volume with the -v parameter:
-v /var/run/docker.sock:/var/run/docker.sock
With this command, when you run docker inside your jenkins container, the docker client will communicate with the docker daemon of your host in order to run the new container.
For that, you should run your jenkins container in privileged mode:
--privileged
To sum up, here is the full command line:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock --privileged myimage
And you don't need to create a new jenkins image for that.
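A quick way to verify from the host, assuming the container is named myjenkins and the image ships a docker client (the socket mount provides the daemon, not the CLI):
docker exec -it myjenkins docker ps   # lists the host's containers, including myjenkins itself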
Hope this helps!
http://container-solutions.com/running-docker-in-jenkins-in-docker/