Maybe this is a trivial question, but I am new to Docker and definitely not an expert on web hosting. I couldn't find an exact answer to this on Google, and it might be something that interests a lot of other people.
What are my options for web hosting for Docker containers? If Docker can run anywhere, does that mean I can host my containerized app on LAMP shared hosting? Or do I need a VPS set up somehow as a Docker platform? I have also heard that AWS provides some hosting for Docker, so I am confused about what Docker can be hosted on; if someone could explain, it would be a great help.
Docker runs on Linux distributions such as Ubuntu, and the recent Windows Server 2016 also supports Docker. It can be installed on any such server with bare-minimum requirements. As for cloud hosting: cloud providers offer platforms to manage Docker containers (DC/OS, Kubernetes, etc.) rather than Docker itself as a service. To install Docker yourself you need a VPS or an on-premise server; shared LAMP hosting won't work, since you need root access to install the Docker engine.
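For example, on a Linux VPS where you have root access, Docker's convenience script is the quickest way to get the engine installed (a rough sketch; review the script before piping it into a shell):

curl -fsSL https://get.docker.com | sh
sudo docker run hello-world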
I am new to the topic of containers, and I hope this forum is the right place to ask this question.
I am learning Docker and containers, and I now have some skills using the docker commands and dealing with containers. I understand that Docker has two main parts: the Docker client (docker.exe) and the Docker server (dockerd.exe). In development, both are installed on my local machine (I installed them manually on Windows Server 2016), following Nigel Poulton's tutorial here: https://app.pluralsight.com/course-player?clipId=f1f27565-e2bf-4e58-96f3-bc2c3b160ec9. When it comes to real production use, how would I configure my Docker client to communicate with a remote Docker server? I tried to research this on the internet but honestly could not find a simple answer. I installed Docker Desktop on my Windows 10 machine and noticed that it created a Hyper-V machine, which appears to be a Linux VM; my understanding is that this machine runs the Docker server that my Docker client interacts with, but I do not understand how this interaction works.
I would appreciate some guidance or a clear answer to these questions.
In production environments you never have a remote Docker daemon. Generally you interact with Docker either through a dedicated orchestrator (Kubernetes, Docker Swarm, Nomad, AWS ECS), through a general-purpose system-automation tool (Chef, Ansible, SaltStack), or, if you must, by SSHing directly to the system and running docker commands there.
Remote access to the Docker daemon is something of a security disaster. If you can access the Docker daemon at all, you can edit any file on the host system as root, and pretty trivially take over the whole thing. (Google "Docker cryptojacking" for some real-world examples.) In principle you can secure it with mutual TLS, but this is a tricky setup.
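For reference, if you did need to secure a remote daemon with mutual TLS, the shape of it (per the official docs; the .pem files are certificates you generate yourself, and your-host is a placeholder) is roughly:

dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376

and then from the client:

docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=your-host:2376 version

But again: in most production setups you simply never expose this port.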
The other important best practice is that Docker images should be self-contained. Don't deploy a Docker image to production and also separately copy your application code onto the host. The same Ansible setup that can deploy a Docker container can also install Node directly on the target system, avoiding a layer; and it's tricky to copy application code into a Kubernetes volume, especially when Kubernetes pods can restart outside your direct control. Deploy (and test!) your images with all of the code COPYed in via the Dockerfile, minimizing the use of bind mounts.
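To make "self-contained" concrete, a minimal Node image might be built from a Dockerfile like this (the file names are placeholders for your own project):

# Hypothetical example: everything the app needs is baked into the image
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]

The image you test is then byte-for-byte the image you deploy.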
I am running a CentOS server and will be installing the Docker Engine on top of it, where, needless to say, I will be setting up my containers. Initially I'll be setting up two containers: one to serve my web pages and one to run my database.
My thought process was that I would install FirewallD on the CentOS host. My questions are the following:
Do I need to install some sort of firewall within the containers themselves? If so, can someone tell me at a high level how this is done and what firewall I would install at the container level?
Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
As you can tell, this will be my first time developing with containers. Do I need to create the containers first on the server, or do I build them on my development machine and then push them to the server?
I would appreciate some guidance here, as I've been tasked with this but am not sure of the correct path.
Thanks again.
I really have not tried much yet, as I'm not sure where to begin; so far I have just been researching my use case.
Q) Do I need to install some sort of firewall within the containers themselves?
A) No, not really. Containers can only be reached from outside via the ports you explicitly publish in their configuration.
Q) Do I need to open some ports within FirewallD running on CentOS to access the Docker Engine / containers?
A) TCP port 2376 (the conventional TLS port for the Docker daemon) if you want to access the daemon via its REST API. Otherwise, and probably more secure, leave remote access off: SSH into the machine and interact with the daemon locally.
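If you do decide to open ports with FirewallD, the commands look like this (shown for the daemon's conventional 2376 and a web container published on 80; adjust as needed):

firewall-cmd --permanent --add-port=2376/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload

One caveat: Docker manipulates iptables directly for published ports, so always test which ports are actually reachable from outside rather than trusting the zone configuration alone.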
Q) ...do I need to create the containers first on the server, or build them on my development machine and push them to the server?
A) Build the images on your development machine and push them to a registry (Docker Hub is one, AWS ECR is another; you can also host your own). Then access the server and pull the images from the registry onto it.
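Concretely, the flow is something like this (myuser/mywebapp is just a placeholder image name):

docker build -t myuser/mywebapp:1.0 .    # on the development machine
docker push myuser/mywebapp:1.0
docker pull myuser/mywebapp:1.0          # on the server
docker run -d -p 80:80 --name web myuser/mywebapp:1.0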
As for where to begin: at the beginning :D. But really, https://docs.docker.com/get-started/ has a 'getting started' guide to start you off. Linux Academy, A Cloud Guru, Lynda, Udemy, and other similar learning resources are all solid starting points.
Hope this helps you on your journey.
I'm working on integrating Docker into our TeamCity build process so that I can create a task that runs a "docker build" to create an image from our code. Right now, all our build agents run on either Windows Server 2008 or Windows Server 2012, neither of which can run Docker. There's a chance we can get a license for one Windows Server 2016 build machine, but I'm wondering if there's a way to run Docker Engine on that machine while issuing docker commands from other build agents.
Here's what I've considered so far:
Docker Toolbox: This is a way to run Docker on legacy systems, but it spins up a local VirtualBox VM running Linux, so it can only run Linux containers. I need to be able to build and run Windows containers.
Docker Machine: This is a way to talk to a remote Docker engine. However, according to this open bug, it appears Docker Machine is only capable of talking to remote engines on Linux hosts due to its security implementation. It's an old issue, but I can't find any indication this limitation has been removed.
Docker itself uses a client/server architecture, but I couldn't find any documentation on how to talk to a remote engine without using something like Docker Machine.
Anything else I'm missing, or am I just pretty much out of luck unless we upgrade all our build agents to Windows 10 or Windows Server 2016?
You can start using the remote Windows Server 2016 instance from other build agents.
Docker allows you to expose the Docker Engine (a.k.a. the daemon) via TCP. In that case, and especially when the host is publicly reachable, you should consider configuring authentication using client/server certificates. Details can be found in the official documentation at https://docs.docker.com/engine/security/https/, but you may find the Windows Server specific article at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/ more helpful.
As for using a client to connect to a remote Docker Engine, pass the -H tcp://<host>:<port> argument (together with --tlsverify) as described at https://docs.docker.com/engine/reference/commandline/cli/ (or see the example provided at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/#testtlsconnection).
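On each build agent you can also set this once through environment variables instead of per-command flags (buildhost and C:\certs are placeholders for your Windows Server 2016 machine and certificate directory):

set DOCKER_HOST=tcp://buildhost:2376
set DOCKER_TLS_VERIFY=1
set DOCKER_CERT_PATH=C:\certs
docker version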
We use VMware vSphere for VMs in our company.
To automatically create docker hosts we use one simple command:
docker-machine --driver vmwarevsphere .... vm params(cpu,memory,network,name, etc)
It automatically creates a new VM in our VM cluster, installs Docker, and then we add it to a swarm or create a new one.
Right now I need to create Windows Docker hosts to run Windows containers.
Docker Machine installs boot2docker.iso after creating the VM, but instead I need a VM running Microsoft Server Core or Nano Server.
How do I do it?
Thanks a lot.
Anton
On a Windows machine with Docker for Windows installed, you could run the following commands to pull the official images for Server Core or Nano Server:
docker pull microsoft/nanoserver
or
docker pull microsoft/windowsservercore
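and then verify the image actually runs by starting a throwaway container from it:

docker run -it --rm microsoft/windowsservercore cmd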
I'm not exactly sure how you're automating this - are you using a Dockerfile or Docker Compose?
Are you talking about setting up the Windows host that runs the Docker engine? If so, Docker for Windows CE is meant to be desktop software, so it is not recommended for server-side workloads. Also, Docker EE for Windows Server requires Windows Server 2016 or later. If you would really like to use the Windows Server Core mode, Windows Server 1709 offers that; still, it is quite new, so you should not set high expectations just yet.
For instructions on installing the engine, MS has this:
https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server
Or the equivalent one from Docker here:
https://docs.docker.com/engine/installation/windows/docker-ee/
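If I remember the Microsoft quickstart correctly, on Windows Server 2016 it boils down to roughly these PowerShell commands (run as administrator; check the linked page for the current steps):

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
Restart-Computer -Force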
Are you talking about hosting a Windows container on VMware vSphere? I don't think this is possible right now; maybe in the future. I have no documentation or link to verify my answer, but in our company we have a similar situation: we use vSphere for VMs and Linux containers, and Hyper-V in parallel for VMs and Windows containers.
How do I run Datalab locally when it requires Docker (and Docker Toolbox is not supported as documented here: https://cloud.google.com/datalab/docs/quickstarts/quickstart-local)? The Docker website says Docker requires Windows 10 Professional or Enterprise 64-bit, and most corporate environments don't run Windows 10.
Docker is highly preferred over Docker Toolbox, as it's a simpler, self-contained installation with simpler configuration (since you don't have additional virtualization software to deal with, as you do with Docker Toolbox, namely boot2docker and its underlying functionality). However, if you have a setup that runs Docker on your end, you should theoretically be able to use it to run the Datalab Docker container by adapting the instructions.
You do have the option of running everything on a GCE VM.
I was facing the same problem; what I found most comfortable in the end was to install Ubuntu on VirtualBox. This is free and fairly easy, and from the virtual machine you can use Docker and follow the Google guide to run Datalab locally.
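For reference, the local quickstart's run command is along these lines (quoted from memory of the guide, so double-check the image tag there):

docker run -it -p "127.0.0.1:8081:8080" -v "${HOME}:/content" gcr.io/cloud-datalab/datalab:local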