How to build docker images on an AWS EC2 Windows Server instance?

We use Team City to build C# applications on a Windows server in AWS EC2.
Now there is a requirement to build Docker containers using the same system. The build steps have been tested locally and are able to produce a docker image.
Docker does not install correctly on the server, which causes the builds to fail.
Docker Edge supports Windows Server but fails on EC2 because Hyper-V cannot run there (standard EC2 instances do not support nested virtualization).
Docker Toolbox also fails because VT-x/AMD-V are not enabled.
Is there any way to build docker images on an AWS EC2 Windows Server instance?
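One approach that may work here (an assumption on my part, not part of the original question): on Windows Server 2016 or later you can install Docker EE via the DockerMsftProvider PowerShell module; Windows containers then run with process isolation, so Hyper-V is not required. The TeamCity build step can call docker build directly afterwards (the image name below is just an example):

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider -Force
Restart-Computer -Force
# after the reboot, verify the engine and try a build
docker version
docker build -t myorg/myapp .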

Related

VS Code Remote Development using a docker container hosted in the cloud

With the VS Code extension Visual Studio Code Remote - Containers I can develop inside a container that is spun up on my local computer via Docker Desktop.
Is there any way to develop inside a container hosted on Azure, AWS, Google Cloud, or any other cloud system instead?
I can't use Docker Desktop locally because I'm on a MacBook Pro with Apple Silicon, meaning that Docker does not work the same way as it would on an Intel chip.
UPDATE 2021-12-04:
I solved the issue by using GitHub Codespaces
You can use docker context
It forwards the remote Docker socket to your local machine over SSH.
docker context create NAME_OF_THE_CONTEXT --docker "host=ssh://$SERVER_USER_NAME@$SERVER_IP"
Use the context
docker context use NAME_OF_THE_CONTEXT
Now you can run docker commands in your local terminal that will be executed on the remote host.
So now you can connect to the remote containers via VS Code as if they were running locally.
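A quick way to verify that the context is active and that commands really hit the remote engine:

docker context ls   # the active context is marked with an asterisk
docker ps           # lists the containers running on the remote host, not on your Mac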

Deploy docker to on-premise using azure CI-CD

I have created a .NET Core application and successfully deployed it to a Docker container on my local PC.
Now I am trying to build it from Azure DevOps and publish it to one of my servers hosted on-premise.
Now I have no idea how to host it. I'm also not sure what the Docker Registry Service Connection and the Container Registry Type are.
My DevOps server is also hosted on-premise with no docker installed on it.
I have a docker account with one private repository.
Please suggest how to continue, as I am getting the below error while building the image:
open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Thanks
If you want to deploy the app to a Docker container on an on-premise machine, you can use a Self-hosted Agent (Build Pipeline and Release Pipeline) or a Deployment Group (Release Pipeline).
Note: the self-hosted agent needs to be set up on a server that has Docker installed.
Then you could try the following pipeline settings.
Here is a blog about ASP.NET Application Deployment in Docker for Windows.
You could use a Command Line task to run the docker commands; in this case, you can move the local build and deploy process to Azure DevOps, as sketched below.
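For illustration, the Command Line task could run something like the following (registry, image name, and variable names are placeholders, not from the original question):

docker build -t myregistry.example.com/myapp:latest .
docker login myregistry.example.com -u $(dockerUser) -p $(dockerPassword)
docker push myregistry.example.com/myapp:latest

And on the on-premise target (self-hosted agent or deployment group machine):

docker pull myregistry.example.com/myapp:latest
docker run -d -p 80:80 myregistry.example.com/myapp:latest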

How to run Docker commands on remote Windows engine

I'm working on integrating Docker into our TeamCity build process so that I can create a task that runs a "docker build" to create an image from our code. Right now, all our build agents run on either Windows Server 2008 or Windows Server 2012, neither of which can run Docker. There's a chance we can get a license for one Windows Server 2016 build machine, but I'm wondering if there's a way to run Docker Engine on that machine while issuing docker commands from other build agents.
Here's what I've considered so far:
Docker Toolbox: This is a way to run Docker on legacy systems, but it spins up a local VirtualBox VM running Linux, so it can only run Linux containers. I need to be able to build and run Windows containers.
Docker Machine: This is a way to talk to a remote Docker engine. However, according to this open bug, it appears Docker Machine is only capable of talking to remote engines on Linux hosts because of how its security is implemented; it's an old issue, but I can't find any indication this limitation has been removed.
Docker itself uses a client/server architecture, but I couldn't find any documentation on how to talk to a remote engine without using something like Docker Machine.
Anything else I'm missing, or am I just pretty much out of luck unless we upgrade all our build agents to Windows 10 or Windows Server 2016?
You can start using the remote Windows Server 2016 instance from other build agents.
Docker allows you to expose the Docker Engine (a.k.a. the daemon) via TCP. In that case, and especially when the host is publicly reachable, you should consider configuring authentication using client/server certificates. Details can be found in the official documentation at https://docs.docker.com/engine/security/https/, but you may find the Windows Server specific article at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/ more helpful.
Regarding using a client to connect to a remote Docker Engine, use the -H tcp://<host>:<port> argument together with --tlsverify, as described at https://docs.docker.com/engine/reference/commandline/cli/ (or see the example provided at https://stefanscherer.github.io/protecting-a-windows-2016-docker-engine-with-tls/#testtlsconnection).
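As a rough sketch of that setup (host name, paths, and port are placeholders; the certificates are assumed to already exist), the daemon configuration on the Windows Server 2016 host goes into C:\ProgramData\docker\config\daemon.json:

{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"],
  "tlsverify": true,
  "tlscacert": "C:\\ProgramData\\docker\\certs\\ca.pem",
  "tlscert": "C:\\ProgramData\\docker\\certs\\server-cert.pem",
  "tlskey": "C:\\ProgramData\\docker\\certs\\server-key.pem"
}

A build agent can then talk to it with:

docker --tlsverify --tlscacert=ca.pem --tlscert=client-cert.pem --tlskey=client-key.pem -H tcp://<host>:2376 version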

Docker Windows Container with Service Fabric on Windows Server

I have a Service Fabric cluster installed on 5 virtual machines which are running Windows Server 2016. I would like to run Docker Windows containers inside my Service Fabric cluster. I'm fairly new to SF and Docker and I have a couple of questions:
To make it work, do I have to install Docker on each node? (If so, which version: CE or EE?) Because when I deploy my SF app with a Windows container service inside, it gives me an error during application start: Error event: SourceId='System.Hosting', Property='Download:1.0:1.0:45cc185a-abde-47f4-9a1f-943ad6e29d23'.
There was an error during download. Container deployment is not supported on the node.
Can I run Linux containers on Service Fabric installed on Windows Server?
Yes, you need to have the Containers feature enabled on each node (see the sketch below). Or, when running in Azure, you can use a host image with the Containers feature already enabled, e.g. '2016-Datacenter-with-Containers'.
No, you can't do that inside a cluster at this time.
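As a minimal sketch (my own addition, not from the original answer), checking and enabling the feature on a node looks like this; Docker EE itself still has to be installed on top of it:

Get-WindowsFeature -Name Containers      # check whether the feature is present
Install-WindowsFeature -Name Containers  # enable it if it is not
Restart-Computer -Force                  # a reboot is required afterwards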

Is the Docker engine installed on the server or client?

Is the Docker engine installed on the server, where it builds the images it receives and then runs the containers built from them, or is the engine installed on the client, where the building of images into containers happens? Or is the Docker engine installed on both the client and the server, performing different actions on each side?
Docker Engine is responsible for building, pulling, and pushing images and then running them as containers. Docker Engine is installed on the server side, and the client side just consists of the CLI used for issuing commands to Docker Engine. The client uses a REST API to issue those commands to the server.
In your case, both Machine A and Machine B will have Docker Engine. You will need Docker Engine on Machine A to build the image and then push it to a registry (like Docker Hub). On Machine B you will need Docker Engine to pull the image and then create containers from it.
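For illustration (the image name and tag are placeholders), that workflow looks like this on the command line:

On Machine A:
docker build -t myuser/myapp:1.0 .
docker push myuser/myapp:1.0

On Machine B:
docker pull myuser/myapp:1.0
docker run -d myuser/myapp:1.0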
