Microsoft Azure DevOps: override network driver when container job is initializing - docker

I encountered an issue and am not sure whether any of you have come across it before. I tried to start a Linux-based container job on a self-hosted agent running on Windows Server 2019. The host machine has Docker EE installed and is able to run the container.
However, when I triggered the Azure Pipeline to run the job on the self-hosted machine, it showed the following error:
Failed to create network
It appears that the agent failed to create the network using the default driver (bridge) before starting up the container, because the self-hosted server is Windows Server: Windows uses the NAT driver, whereas Docker's default network driver is the bridge driver.
Is it possible to override the driver and use the NAT driver in azure-pipelines? I tried the following method, but it did not seem to override it.
azure-pipelines
Or, is there any alternative way to stop the agent from creating the network before starting the container?
Or, is there any alternative way to run Linux containers on Windows Server?
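For reference, a quick way to confirm which network drivers the Docker EE engine on the Windows host actually offers is to query the engine's plugins; the commands below are only a sketch, and the network name agent-nat is just an illustration:

# List the network drivers the engine supports (on Windows Server this typically
# includes "nat" but not the Linux-only "bridge" driver).
docker info --format "{{ .Plugins.Network }}"

# Show the existing networks and the driver each one uses.
docker network ls

# Creating a NAT network by hand works; the problem described above is that the agent
# creates its own network with the default (bridge) driver before starting the container.
docker network create -d nat agent-nat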

Related

VS Code Remote Development using a docker container hosted in the cloud

With the VS Code extension Visual Studio Code Remote - Containers I can develop inside a container that is spun up on my local computer via Docker Desktop.
Is there any way to develop inside a container hosted on Azure, AWS, Google Cloud, or any other cloud system instead?
I can't use Docker Desktop locally because I'm on a MacBook Pro with Apple Silicon, meaning that Docker does not work the same way as it would on an Intel chip.
UPDATE 2021-12-04:
I solved the issue by using GitHub Codespaces
You can use docker context, which forwards the remote Docker socket over SSH to your local machine:
docker context create NAME_OF_THE_CONTEXT --docker "host=ssh://$SERVER_USER_NAME@$SERVER_IP"
Use the context
docker context use NAME_OF_THE_CONTEXT
Now docker commands run in your local terminal are executed on the remote host, so you can connect to the remote containers from VS Code as if they were running locally.
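A quick way to check that the context is wired up correctly (a sketch; replace NAME_OF_THE_CONTEXT with whatever you called yours):

docker context ls                            # the active context is marked with an asterisk
docker --context NAME_OF_THE_CONTEXT info    # should report the remote engine's details
docker ps                                    # lists the containers running on the remote host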

Docker inside Windows VirtualBox

Here's the thing: I tried to install Docker inside a Windows VM running in VirtualBox and, of course, I failed because it's not possible (I now know this is because Docker requires Hyper-V, which VirtualBox does not use).
Since migrating to VMware isn't an option for me, I dug around a bit and found that there's no problem running Docker inside a Linux distro (which itself runs inside a VirtualBox VM), so here's the question.
Is it possible to run two different virtual machines with VirtualBox at the same time, one with Linux (running Docker inside it) and the other with Windows as my development environment, so that I can develop on Windows and then deploy and run tests on Docker? If this is possible, how? Any links or keywords for me to search for would be appreciated.
Sure! You need to do the following steps:
Set up your VMs' networking so they can easily see each other: https://superuser.com/questions/119732/how-to-do-networking-between-virtual-machines-in-virtualbox
Expose the Docker daemon on a TCP socket on the Linux VM: https://success.docker.com/article/how-do-i-enable-the-remote-api-for-dockerd
On the Windows VM, create an override for the Docker client so it connects to the remote daemon on the Linux machine: https://gist.github.com/kekru/4e6d49b4290a4eebc7b597c07eaf61f2#create-bat-file-for-windows
Please keep in mind that when you expose a service on some port, you won't reach it from the Windows VM on localhost; instead you need to use the Linux VM's address, i.e. <linux-vm-ip>:<port> (see the sketch below).
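A minimal sketch of what steps 2-4 look like in practice, assuming a host-only network where the Linux VM is reachable at 192.168.56.10 and the daemon was exposed unencrypted on port 2375 (acceptable for a local lab only):

# Run from the Windows VM; this should report the Linux VM's engine, not a local one.
docker -H tcp://192.168.56.10:2375 version

# Containers and their published ports live on the Linux VM ...
docker -H tcp://192.168.56.10:2375 run -d -p 8080:80 nginx

# ... so from Windows you browse to http://192.168.56.10:8080, not http://localhost:8080.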

How to run Docker commands on remote Windows engine

I'm working on integrating Docker into our TeamCity build process so that I can create a task that runs a "docker build" to create an image from our code. Right now, all our build agents run on either Windows Server 2008 or Windows Server 2012, neither of which can run Docker. There's a chance we can get a license for one Windows Server 2016 build machine, but I'm wondering if there's a way to run Docker Engine on that machine while issuing docker commands from other build agents.
Here's what I've considered so far:
Docker Toolbox: This is a way to run Docker on legacy systems, but it spins up a local VirtualBox VM running Linux, so it can only run Linux containers. I need to be able to build and run Windows containers.
Docker Machine: This is a way to talk to a remote Docker engine. However, according to this open bug, it appears Docker Machine is only capable of talking to remote engines on Linux hosts due to its security implementation; it's an old issue, but I can't find any indication this limitation has been removed.
Docker itself uses a client/server architecture, but I couldn't find any documentation on how to talk to a remote engine without using something like Docker Machine.
Anything else I'm missing, or am I just pretty much out of luck unless we upgrade all our build agents to Windows 10 or Windows Server 2016?
You can start using the remote Windows Server 2016 instance from other build agents.
Docker allows you to expose the Docker Engine (a.k.a. the daemon) via TCP. In that case, and especially when the host is publicly reachable, you should consider configuring authentication using client/server certificates. Details can be found in the official documentation at https://docs.docker.com/engine/security/https/, but you may find the Windows Server-specific article at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/ more helpful.
To connect a client to the remote Docker Engine, use the -H tcp://<host>:<port> argument together with the TLS flags, as described at https://docs.docker.com/engine/reference/commandline/cli/ (or see the example provided at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/#testtlsconnection).
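For illustration, a client call could look like the sketch below, assuming the client certificates from that TLS setup were copied into a local certs directory and the engine listens on the default TLS port 2376 (the host name docker-build-host is a placeholder):

docker --tlsverify --tlscacert=certs/ca.pem --tlscert=certs/cert.pem --tlskey=certs/key.pem -H tcp://docker-build-host:2376 version
docker --tlsverify --tlscacert=certs/ca.pem --tlscert=certs/cert.pem --tlskey=certs/key.pem -H tcp://docker-build-host:2376 build -t myimage .

# To avoid repeating the flags on every call, set DOCKER_HOST, DOCKER_TLS_VERIFY and
# DOCKER_CERT_PATH once per build agent and then run plain "docker build" / "docker run".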

Docker Windows Container with Service Fabric on Windows Server

I have a Service Fabric cluster installed on 5 virtual machines running Windows Server 2016. I would like to run Docker Windows containers inside my Service Fabric cluster. I'm fairly new to SF and Docker, and I have a couple of questions:
To make it work, do I have to install Docker on each node? (If so, which edition: CE or EE?) I'm asking because when I deploy my SF app with a Windows container service inside, it gives me the following error during application start:
Error event: SourceId='System.Hosting', Property='Download:1.0:1.0:45cc185a-abde-47f4-9a1f-943ad6e29d23'. There was an error during download. Container deployment is not supported on the node.
Can I run Linux containers on Service Fabric installed on Windows Server?
Yes, you need to have the Containers feature enabled on each node (see the sketch after the links below). Or, when running in Azure, you can use a host image with the Containers feature already enabled, e.g. '2016-Datacenter-with-Containers'.
No, you can't do that inside a cluster at this time.
More info:
here
here
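For the first point, a minimal sketch of preparing a Windows Server 2016 node (to be run on every node; Docker EE is the edition supported on Windows Server, and the DockerMsftProvider PowerShell module shown here is one common way to install it):

# Enable the Containers feature and install Docker EE, then reboot the node.
Install-WindowsFeature -Name Containers
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force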

Docker Machine to a remote server

I understand that when I create a VM with Docker Machine using the VirtualBox driver, it creates a local VM running the boot2docker distribution. I can then create my containers on it using, for instance, Docker Compose.
But what exactly happens when you use Docker Machine against a remote server? Does it create a VM on that remote server?
Does it differ if you use a known provider (say, the AWS driver) versus an unknown provider (the generic driver)?
When you use DigitalOcean, AWS, etc., you give Docker Machine an API key, which it uses to create a VM. It then installs the Docker daemon and any dependencies and configures remote access. So you don't give it a remote server; it creates one.
If you use the generic driver, you give Docker Machine SSH access to an existing IP, where I presume it again installs Docker and configures remote access (so it effectively skips the creation step).
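A sketch of both flows (the driver flags are standard docker-machine options; the access token, IP address, SSH user and machine names are placeholders):

# Known provider: docker-machine provisions a brand-new VM via the provider's API.
docker-machine create --driver digitalocean --digitalocean-access-token $DO_TOKEN do-host

# Generic driver: docker-machine installs and configures Docker on an existing server over SSH.
docker-machine create --driver generic --generic-ip-address 203.0.113.10 --generic-ssh-user deploy my-remote

# Either way, point the local client at the machine and run commands against the remote engine.
eval "$(docker-machine env my-remote)"
docker ps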
