I built a project (react/express) and used Docker and docker-compose to stand up my local development environment. I'm ready to deploy now and I have a Windows Server 2019 VM that is currently hosting PHP applications on IIS.
Is it possible to add Docker to my server and host my containerized application without impacting my existing IIS sites (Essentially run the container and IIS side by side)? If so, how do I bind the container/application to a URL within IIS?
I have really struggled to find Docker information on this topic.
Also, while I'm at it, will I need to pay for Docker Enterprise Edition?
Yes, you can. By default IIS uses ports 80 and 443, so to run them side by side:
When you run your container, don't map it to port 80; for example, docker run -p 8080:80 your_docker_handler. You can then reach IIS at http://server-ip and the container at http://server-ip:8080.
Or,
You can set up a reverse proxy from IIS to your container if you want to reach it without the port number. This takes more effort and may need some adjustments to the app code inside the container as well; a rough example follows.
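For example, with the URL Rewrite and Application Request Routing (ARR) modules installed and ARR's proxy feature enabled, a rule like the following in the site's web.config would forward traffic to the container published on port 8080 (a sketch based on the example above, not a tested config):

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward all requests for this site to the container published on localhost:8080 -->
        <rule name="ReverseProxyToContainer" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:8080/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

You would then add a separate IIS site (or binding) for the host name you want the container to answer on, point it at an empty folder, and let this rule proxy everything through.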
Most of what I'm finding online is about using docker-compose and the like to create a reverse proxy for local development of dockerized applications. This, too, is for my local development environment.
I have a need to create an nginx reverse proxy that can route requests to applications on my local computer that are not running in docker containers (non-dockerized).
Example:
I start up a web app A (not in docker) running on http://localhost:8822
I start up another web app B (not in docker) running on https://localhost:44320
I have an already running publicly available api on https://public-url-for-api-app-a.net
I also have a public A record set up in my DNS for *.mydomain.local.com -> 127.0.0.1
I am trying to figure out how to use an nginx:mainline-alpine container in host mode to allow me to do the following (a rough config sketch follows the list):
I type http://web-app-a.mydomain.local.com -> reverse proxy to http://localhost:8822
I type http://web-app-b.mydomain.local.com -> reverse proxy to https://localhost:44320
I type http://api-app-a.mydomain.local.com -> reverse proxy to https://public-url-for-api-app-a.net
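Roughly, this is the kind of nginx config I picture the container using (just a sketch, not something I have working; it assumes the container can reach apps on my machine via host.docker.internal, and uses the ports from the list above):

server {
    listen 80;
    server_name web-app-a.mydomain.local.com;
    location / {
        proxy_pass http://host.docker.internal:8822;    # web app A on the host
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name web-app-b.mydomain.local.com;
    location / {
        proxy_pass https://host.docker.internal:44320;  # web app B on the host (https, local dev cert)
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name api-app-a.mydomain.local.com;
    location / {
        proxy_pass https://public-url-for-api-app-a.net;         # already-public API
        proxy_set_header Host public-url-for-api-app-a.net;
        proxy_ssl_server_name on;                                # send SNI to the public upstream
    }
}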
Ideally, this "solution" would run on both Windows and Mac but I am currently falling short in my attempts at this on my Windows machine.
Some stuff I've tried:
Following this tutorial, I started my nginx docker container in "host" mode via:
docker run --rm -d --network host --name my_nginx nginx:mainline-alpine
I'm unable to get it to load on http://localhost:80; I just get "The site can't be reached". I'm wondering if I'm hitting some limitation of Docker on Windows?
Custom building my own docker image with nginx configs and exposed ports (before trying host network mode)
Other relevant information:
Docker-Desktop on Windows version: 4.4.4 (73704)
Nginx Container via nginx:mainline-alpine tag.
Web App A = Front End Vue App
Web App B = Front End .NET Framework App
Web App C = Backend .NET Framework App
At this point, I've read so many posts that my brain is mush, so it could very well be something obvious I'm missing. I'm beginning to think it may be better to simply run nginx.exe locally, but that's not ideal because I don't want to have to check binaries into my source in order for this setup to work.
We are using docker compose for microservice end-to-end development and testing. Each compose service maps a well-known development port on the host to the container's standard production port 8080, as sketched below.
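A single service's mapping, roughly (the service name and the 8081 development port are placeholders):

services:
  orders-service:
    image: example/orders-service:latest   # placeholder image
    ports:
      - "8081:8080"   # well-known development port on the host -> standard production port in the container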
[UC1] The UI in development mode invokes microservices on localhost at the known development ports (the docker port mappings). It follows that one can stop any container on the docker network and restart it in the IDE; the UI can still invoke the service, and the service can still invoke other services, as long as the IDE process binds to the development port (it's the default profile). This is how we debug microservices through the UI, with great success.
[UC2] However, this breaks when a microservice running in docker calls back to a microservice running in the IDE. Containers on the docker compose network are isolated from localhost: they find each other by container name but have no notion of the docker host.
How can we enable UC2 with minimal configuration changes and the same flexibility as UC1?
If you are on Windows, you can reach the host using the special DNS name host.docker.internal, see here.
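For example, the containerized service can be pointed back at the microservice running in the IDE like this (the service name, the CALLBACK_URL variable, and port 8081 are placeholders; the extra_hosts entry makes the same name resolve on a Linux engine, Docker 20.10+):

services:
  payment-service:
    image: example/payment-service:latest
    environment:
      - CALLBACK_URL=http://host.docker.internal:8081   # microservice running in the IDE on the host
    extra_hosts:
      - "host.docker.internal:host-gateway"             # only needed on Linux engines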
I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host docker daemon using the REST API by using the "host.docker.internal" DNS name and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of docker). To make matters worse, on Linux I can use network mode of "host" to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the docker daemon of the host is using the docker python sdk and making api calls to docker over tcp without TLS (this is used for development only).
Update w/ Solution Detail
For a little more background, there's a web application (aspnet core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported docker image file. There's also an nginx container in front of all of this to allow for ssl termination and load balancing. The web application pulls out the docker image, then using the docker daemon's http api, loads the image, re-tags the image, then pushes it to a private docker repository (which is running somewhere on the developer's network, external to docker). After that, it posts a message to a message queue where a separate python application uses the python docker library to deploy the docker image to a docker swarm.
For development purposes, the applications all run as containers and thus need to interact with docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint to control the host machine's docker/swarm daemon over HTTP.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
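For reference, the relevant nginx bits end up looking roughly like this (the listen port is arbitrary and the paths are illustrative; this is a sketch rather than my exact config):

user root;   # let the worker processes read/write the mounted docker.sock

events {}

http {
    server {
        listen 2375;

        location / {
            # Reverse proxy HTTP requests to the Docker daemon's unix socket
            proxy_pass http://unix:/var/run/docker.sock:/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
        }
    }
}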
As far as I can tell, the docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), Windows 10 Pro running Docker for Windows (2.2.0) with both WSL2 (Ubuntu and Alpine) and the Windows cmd (CLI) and PowerShell. From memory, it works with OSX too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock your_image
Mounts the socket into an identical path within the container. This means you can access the socket using the standard docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
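As a quick sanity check, something like the following (using the official docker CLI image, assuming the docker:cli tag) should print the host daemon's version from inside a throwaway container:

docker container run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker version   # talks to the host daemon through the mounted socket

If that works, the Python SDK mentioned in the question can use the same socket by pointing its base_url at unix:///var/run/docker.sock instead of a TCP address.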
I have a web application running in a docker container on a production server. Now I need to make API requests to this application. So, I have two possibilities:
1) Link a domain
2) Make requests directly by IP
I'm using a cloud server for that. In my previous experience I linked the domain to a folder. But now I don't know how to link the domain to a running container on ip_addr:port.
I found this link
https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/
but it's for Docker Enterprise, which isn't an option for me at the moment.
To expose a docker application to the public without using compose or other orchestration tools like Kubernetes, you can use the docker run -p hostPort:containerPort option to expose your container port. Make sure your application is listening on 0.0.0.0:[container port] inside your container. To access the service externally, you would use the host's IP, and the port that the container port has been mapped to.
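For example (the image name and ports here are placeholders):

docker run -d --name my-api -p 80:3000 my-api-image   # the app listens on 0.0.0.0:3000 inside the container

The service is then reachable at http://<host-ip>/ because port 80 on the host maps to 3000 in the container.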
See more here
If you want to link to a domain, you can update your DNS records to point your domain to your host IP address.
Hope this helps!
The best way is to use Kubernetes because it eases many operations, but docker-compose can also be used.
If you simply want to deploy using docker, it can be done by mapping hostPort to containerPort.
The Docker EE docs state you can use their built in load balancer to do path based routing:
https://docs.docker.com/ee/ucp/interlock/usage/context/
I would love to use this for our local devs to have a local container cluster to develop against since a lot of our apps are using host paths to route each service.
My original solution was to add another container to the compose service that would just be an nginx proxy doing path based routing, but then I stumbled on that Docker EE functionality.
Is there anything similar to that functionality without using Docker EE or should I stick with just using an nginx reverse proxy container?
EDIT: I should clarify, in our release environments, I use an ALB with AWS. This is for local dev workstations.
The Docker EE functionality is just them wrapping automation around an Interlock container, which itself runs nginx, I think. I recommend you just use nginx locally in your compose file, or better yet, use traefik, which is built for exactly this.
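For example, a local path-based setup with traefik in compose could look roughly like this (the api service, its image, the /api prefix, and port 8080 are placeholders):

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # traefik discovers containers via the docker socket

  api:
    image: example/my-api:latest                       # placeholder app
    labels:
      - traefik.enable=true
      - traefik.http.routers.api.rule=PathPrefix(`/api`)
      - traefik.http.routers.api.entrypoints=web
      - traefik.http.services.api.loadbalancer.server.port=8080

Each service gets its own router rule, so path-based routing across your local containers mirrors what the ALB does in your release environments.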