Exec into docker cloud? - docker

When working locally I often use the docker exec command to look around and debug containers.
Is there a way to do this from my PC when the containers are deployed on docker-cloud?
I realize there is a terminal tab on the docker-cloud GUI but I'm finding it a bit limited.

Yes, if you can open an ssh session on your docker cloud service (which is probably possible).
Or, more likely, if you run and access your container through a Docker Cloud Agent, which allows you to use any Linux host (“bring your own host”) as a node which you can then use to deploy containers.
Otherwise, no: the socket used by the Docker Cloud session is not exposed over the internet and is only used locally on the remote cloud server.

You can use the approach described here (https://docs.docker.com/docker-cloud/infrastructure/ssh-into-a-node/#ssh-into-docker-cloud-node); it has worked for me.
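For example, assuming you have authorized your SSH key on the node as the linked guide describes (the IP address and container name below are placeholders), a session might look like this:

# SSH into the Docker Cloud node
ssh root@203.0.113.10

# Once on the node, use the local daemon as usual
docker ps
docker exec -it my-service-1 /bin/sh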

Related

Update Docker Images via dockerized Jenkins Job

I run some docker containers on my Synology NAS. Now I also run Jenkins via Docker on the NAS and want to create a job that does the following steps:
Stop all Docker Containers
Delete all unnecessary stuff (-> docker system prune)
Rebuild all Docker images
Run the new Docker image
But I don't know how to access the host system from the dockerized Jenkins. SSH to the host doesn't seem to be a good idea.
Do you have any tips?
The whole point of your Docker images is to run in an isolated sandbox, so it's by design that your image doesn't have access to the native system. SSH is one approach, but risky, as you point out.
A better approach is to set the DOCKER_HOST environment variable to point to the IP of the NAS (which might need to be the virtual network NAS address). You will probably need to experiment a bit with getting the correct address and making sure the hosted docker command has permissions to drive the host's Docker service.
This post in the Synology Forums may get you on the right track.
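As a rough sketch of the DOCKER_HOST approach (the IP address and port are assumptions and depend on how the NAS daemon is exposed; 2375 is the conventional unencrypted daemon port), a Jenkins shell step could look like:

# Point the docker CLI inside the Jenkins container at the NAS daemon
export DOCKER_HOST=tcp://192.168.1.50:2375

docker stop $(docker ps -q)   # stop all running containers
docker system prune -f        # delete all unnecessary stuff
docker build -t my-image .    # rebuild the image
docker run -d my-image        # run the new image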

Dealing with dockers and containers in production

I am new to the topic of containers and hope this forum is the right place to ask this question.
I am learning Docker and containers and now have some skills using the docker commands and dealing with containers. I understand that Docker has two main parts: the docker client (docker.exe) and the docker server (dockerd.exe). In development both are installed on my local machine (I installed them manually on Windows Server 2016), following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9.
Now when it comes to real production use, how would I configure my docker client to communicate with a remote docker server? I tried to do some research on the internet but honestly could not find a simple answer to this question. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which seems to be a Linux machine; my understanding is that this machine runs the docker server that my docker client interacts with, but I do not understand how this interaction is done.
I would appreciate if I get some guidance or clear answer to my inquiries.
In production environments you essentially never interact with a remote Docker daemon directly. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), through a general-purpose system automation tool (Chef, Ansible, SaltStack), or, if you must, by ssh'ing directly to the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
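For reference, the mutual TLS setup roughly looks like this (certificate generation is omitted and the file names are placeholders):

# On the host: only accept connections that present a certificate signed by your CA
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376

# On the client: present a client certificate and verify the server's
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=daemon.example.com:2376 info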
The other important best practice is that Docker images should be self-contained. Don't try to deploy a Docker image to production and also separately copy your application code onto the host. The same Ansible setup that can deploy a Docker container can also install Node directly on the target system, avoiding a layer; and it's tricky to copy application code into a Kubernetes volume, especially when Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYed in via the Dockerfile, minimizing the use of bind mounts.
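As a minimal sketch of what "self-contained" means, assuming a Node application (file and image names are illustrative):

# Dockerfile: everything the app needs is baked into the image
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]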

Deploying and Securing Docker Containers and Server OS

I am running a CentOS server and will be installing Docker Engine on top of it, where, needless to say, I will be setting up my containers. I'll initially be setting up two containers: one to serve my web pages and one to run my database.
My thought process was that I would install FirewallD on the CentOS. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would be installing at the container level?
Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
As you can tell, this will be my first time developing with containers, so do I need to create the containers first on the server, or do I build them on my development machine and then push them to the server?
I would appreciate it if I could get some guidance here as I'm tasked to do this, but not sure of the correct path.
Thanks again.
I really have not tried much as I'm not sure where to begin. Currently I have just been doing some research on my use case.
Q) Do I need to install some sort of firewall within the containers themselves?
A) No, not really. Containers can only be reached through the ports their configuration explicitly publishes.
Q) Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / Containers?
A) Only if you want to access the daemon remotely via its REST API: the conventional ports are TCP 2376 (TLS) or 2375 (unencrypted). Otherwise, and probably more secure, leave remote access off: SSH into the machine and interact with the daemon locally.
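Separately, if your web container publishes the usual HTTP/HTTPS ports, you may need to allow those through FirewallD as well (this depends on how Docker manages iptables on your system); for example:

# Allow inbound web traffic to the published container ports
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload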
Q) ...do I need to create the containers first on the server, or do I build them on my development machine and push them to the server?
A) Build the images on your development machine and push them to a registry (Docker Hub is one, AWS ECR is another, and you can also host your own). Then access the server and pull the images from the registry onto it.
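As a concrete sketch of that workflow (image and repository names are placeholders):

# On your development machine
docker build -t myapp:1.0 .
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0

# On the CentOS server
docker pull myuser/myapp:1.0
docker run -d -p 80:80 myuser/myapp:1.0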
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a getting-started guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.

Use real server instead of docker-machine for OSX

I have a Linux server in the cloud with the Docker service installed on it. How can I use my virtual server in the cloud instead of docker-machine on my OS X machine? That is, instead of installing VirtualBox and creating a VM on it with docker-machine, I want to use my cloud server as the Docker server.
To access a remote Docker daemon simply pass the -H flag to your docker commands:
docker -H=tcp://192.168.0.100:2375 images
You need to ensure that the remote Docker daemon is listening on the appropriate network interface. Be aware, though, that doing this on an external server is highly insecure: anyone who can reach the port effectively has root access on the server. At the very least, read this article on securing the Docker daemon.
Personally, I would only recommend accessing the remote Docker daemon through a port binding forwarded over an SSH tunnel.
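For example (the host name and user are placeholders), newer Docker versions can tunnel over SSH directly, so no TCP port is ever exposed; a manual tunnel that forwards a local Unix socket to the remote daemon socket works similarly:

# Docker 18.09+ understands ssh:// hosts directly
docker -H ssh://user@your-cloud-server images

# Manual alternative: forward a local Unix socket over SSH, then point docker at it
ssh -nNT -L /tmp/remote-docker.sock:/var/run/docker.sock user@your-cloud-server &
docker -H unix:///tmp/remote-docker.sock images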
You might get a solution from docker-machine's generic driver. Just start the virtual server in the cloud, set up proper SSH keys, and get started :) It should work just the same as with a VM within VirtualBox.
I'm not sure how to get the virtual server auto-started if it is shut down, though. Perhaps via a cloud-vendor-specific command-line program?
Edit: I should have read the docs better; the first cloud example actually shows the usage of the DigitalOcean driver. If the server is already running, just use the generic driver.
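A sketch of the generic-driver setup, assuming SSH key access to the cloud server is already in place (IP, user, and key path are placeholders):

docker-machine create --driver generic --generic-ip-address=203.0.113.20 --generic-ssh-user=root --generic-ssh-key ~/.ssh/id_rsa cloud-server

# Point the local docker CLI at the new machine
eval $(docker-machine env cloud-server)
docker ps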

Can a docker process access programms on the host with ipc

I'm working on a clustered Tomcat system that uses MQSeries.
Today MQSeries is accessed in bindings mode, i.e. via IPC, and Tomcat and MQSeries run on the same host without any virtualization/Docker support.
I'd like to transform that into a solution where MQSeries runs on the host (or possibly in a Docker container) and the Tomcat instances run in Docker containers.
It's possible to access MQSeries in client mode (via a TCP connection) and this seems to be the right solution.
Would it still be possible to access MQSeries from the Docker containers via IPC, i.e. create exceptions to the IPC namespace separation? Is anything like that planned for Docker?
Since Docker 1.5 this is possible with the --ipc=host flag, as in
docker run --ipc=host ubuntu bash
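If MQSeries itself ends up in a container rather than on the host, the Tomcat container can also join that container's IPC namespace instead of the host's (the container and image names below are hypothetical):

# Run MQ in its own container, then let Tomcat share its IPC namespace
docker run -d --name mqseries my-mq-image
docker run -d --name tomcat --ipc=container:mqseries my-tomcat-image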
This answer suggests how IPC can be enabled with a source-code modification to Docker. As far as I (and the other answers there) know, there is no built-in feature.
Specifically, he says he commented out the line that makes Docker create a separate IPC namespace.
Rebuilding Docker is a bit tedious because it brings in dozens of other things during the build, but if you follow the instructions it's straightforward.
