Add extra_hosts to all containers - docker

I have a project that can be run on multiple OSes (Linux/Windows/Mac) with a single docker-compose file. On Windows and Mac, Docker (since version 18) is smart enough to add host.docker.internal to containers automatically, so they can reach the host. On Ubuntu, I have to specify it every time:
extra_hosts:
- 'host.docker.internal:host-gateway'
Is there a way to always inject this entry into the container's hosts file whenever a container is created, so that it works the way it does on Windows and Mac?
Am I looking at this wrongly?
I just want to avoid tedious editing of the docker-compose file because docker behaves differently on Ubuntu than on Mac and Windows.
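One way to cut down the repetition (a sketch I'm suggesting, not something from the original post; service and image names are placeholders) is a docker-compose extension field with a YAML anchor, so the extra_hosts block is written once and merged into every service:

x-host-access: &host-access
  extra_hosts:
    - 'host.docker.internal:host-gateway'

services:
  web:
    image: example/web:latest
    <<: *host-access
  worker:
    image: example/worker:latest
    <<: *host-access

Note that the host-gateway value requires Docker Engine 20.10 or newer; recent Docker Desktop releases ship an engine new enough to accept it as well, so the same file can be shared across OSes.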

Related

Develop on a Docker container from IntelliJ

How can I develop on my Docker container from IntelliJ? I am developing on macOS, but my development environment is inside a Docker container. In VSCode, I can use the Remote - Containers extension to open the files in my Docker container, go to function definitions, use the version of Go in the container, and access the container shell; it's as if I am accessing a remote machine from VSCode. I didn't have to change my Dockerfile or mount any volumes. Everything just worked.
IntelliJ seems to have added something according to this, but the full functionality is unclear. I can attach to a running container using the Docker plugin, access the shell, and inspect the container's attributes, but I get none of the other functionality described above for VSCode.
Here are some examples of why this is needed:
I am developing on macOS, but my target is Intel Linux. If I do Cmd+B on a symbol, I'm taken to a Darwin-specific file.
Importing github.com/docker/libnetwork fails because the files in this package can only be built for Linux.
The above doesn't happen in VSCode because I can develop directly in the container.
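As an aside (my own sketch, not part of the original question), the Go toolchain can cross-compile and type-check against the Linux target from a macOS host, which at least surfaces the Linux-only build-constraint errors outside the IDE:

GOOS=linux GOARCH=amd64 go build ./...
GOOS=linux GOARCH=amd64 go vet ./...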

Docker desktop networking Windows and Linux nodes

I have a Windows service within a Docker container that needs to access a MySQL database in a Linux container on the same machine (currently a dev machine).
I thought of creating an overlay network across the two "nodes" on the same machine, but this isn't possible: creating the swarm worker fails on Windows after creating the swarm master on Linux.
Is this possible, and if not, what is the easiest way of doing this? The purpose of the Windows container is simply to deploy to a test environment to gather data. Do I need to deploy the Linux container to the cloud or another machine so the Windows container can communicate with it?
You can simply use Docker Compose; it will create the network automatically. Replace the MySQL host with the MySQL service name you defined in the compose YAML file. For detailed information, please refer to the docker-compose documentation.
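A rough sketch of that suggestion (image names and credentials are placeholders, and it assumes both containers can be scheduled by the same Docker daemon): the application reaches MySQL through its service name on the default compose network.

services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credentials
  app:
    image: example/windows-service   # placeholder image
    environment:
      DB_HOST: db                    # connect to MySQL by its compose service name
    depends_on:
      - db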

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a Docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host Docker daemon through the REST API by using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use the "host" network mode to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application to be slightly different based on the host platform (windows|mac|linux) without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that accesses the host's Docker daemon uses the Docker Python SDK and makes API calls to Docker over TCP without TLS (this is for development only).
Update w/ Solution Detail
For a little more background, there's a web application (ASP.NET Core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported Docker image file. There's also an nginx container in front of all of this to allow for SSL termination and load balancing. The web application pulls out the Docker image, then, using the Docker daemon's HTTP API, loads the image, re-tags it, and pushes it to a private Docker repository (which is running somewhere on the developer's network, external to Docker). After that, it posts a message to a message queue, where a separate Python application uses the Python Docker library to deploy the image to a Docker swarm.
For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a stand-alone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the Docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse-proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) on an overlay network so they all have access to each other, which lets me hit an HTTP endpoint to control the host machine's Docker/swarm daemon over HTTP.
The last hurdle was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
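For illustration, a minimal sketch of that reverse proxy (not the poster's actual configuration; the listen port is an assumption) could look like this in nginx.conf:

http {
    upstream docker_sock {
        # the Docker socket bind-mounted from the host into the nginx container
        server unix:/var/run/docker.sock;
    }
    server {
        # other containers on the overlay network talk to this port over plain HTTP
        listen 2375;
        location / {
            proxy_pass http://docker_sock;
        }
    }
}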
As far as I can tell, the Docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this on a recent Linux distro (Ubuntu) and on Windows 10 Pro running Docker for Windows (2.2.0), from both WSL2 (Ubuntu and Alpine) and the Windows cmd and PowerShell CLIs. From memory, it works on OSX too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock <image>
This mounts the socket into an identical path within the container, which means you can access the socket using the standard docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
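The docker-compose equivalent is a bind mount in the service definition (a sketch; the service and image names are placeholders):

services:
  deployer:
    image: example/deployer:latest                    # placeholder image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock     # host daemon socket at the same path inside

With the socket mounted, a client inside the container, such as the Docker Python SDK mentioned in the question, can talk to the host daemon through it (for example docker.DockerClient(base_url='unix://var/run/docker.sock') or simply docker.from_env()), and the compose file stays the same regardless of the host OS.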

Do I still need to install Node.js or Python in the Docker container when the host OS already has Python/Node.js installed?

I am trying to create the Dockerfile (image file) for a web application I am building. The web application is written in Node.js and Vue.js. To create a Docker container for the application, I followed the Vue.js documentation for creating a Dockerfile, and the steps given there work fine. I just want to check my understanding of one part.
link:- https://cli.vuejs.org/guide/deployment.html#docker-nginx
If the necessary package (Node/Python) is installed on the host OS (not in the container), would the container be able to pick up the npm scripts and execute Python scripts as well? If yes, does the container really depend on the host OS's software packages too?
Please help me understand this.
Yes, you need to install Node or Python or whatever software your application needs inside your container. The reason is that the container should be able to run on any host machine that has Docker installed, regardless of how the host machine is set up or what software it has installed.
It might be a bit tedious at first to make sure that your Dockerfile installs all the software that is needed, but it becomes very useful when you want to run your container on another machine. Then all you have to do is type docker run and it should work!
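For instance, the Vue.js guide linked above uses a multi-stage Dockerfile along these lines (a sketch of the pattern rather than the guide's exact file): Node is installed in the build image itself, and only the built static files end up in the nginx image.

# build stage: Node comes from the image, not from the host
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage: serve the built files with nginx
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]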
Like David said above, Docker containers are isolated from your host machine, and a container should be treated as a completely different machine/host. Containers communicate with other containers, and sometimes with the host, through network ports.
One "exception" to the isolation between the container and the host is that the container can sometimes write to files in the host in order to persist data even after the container has been stopped. You can use volumes or mounts to allow containers to write to files on the host.
I would suggest the Docker Overview for more information about Docker.

Debugging a Go process in a container using Delve/Goland from the host

Before I burn hours trying it out, I wanted to ask the community: is this even possible?
Scenario:
Running Goland on host (may be any OS)
Running Go dev env in Alpine based container
Code on host volume mapped to container
Can I attach the GoLand debugger (Delve) to a Go process in the container? I'm assuming I can run Delve headless in the container and run the client on the host, exposing whatever port is required. Will I have binary compatibility issues if the host is not Linux?
I'd rather not duplicate the entire post in this answer, but have a look at this resource on how to use containers to run applications you write: https://blog.jetbrains.com/go/2018/04/30/debugging-containerized-go-applications/
To answer this specifically, as long as you have Go, the application sources, and all dependencies installed on the host machine, you can develop in GoLand and then, using a mapped volume, you can also run it from the container.
However, this sounds more like the workflow you'd normally have with VMs rather than containers, which is why in the article above all the running/debugging is done through the containers themselves rather than by running those commands from a shell inside a container.
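For reference, the headless setup the question describes typically looks something like this (a sketch; the package path, port, and image name are assumptions, not taken from the article):

# inside the container: build without optimizations and start a headless Delve server
dlv debug ./cmd/app --headless --listen=:2345 --api-version=2 --accept-multiclient

# on the host: publish the Delve port when starting the container,
# then point a GoLand "Go Remote" run configuration at localhost:2345
docker run -p 2345:2345 -v "$(pwd)":/go/src/app example/go-dev

Because the GoLand/Delve client talks to the headless server over the network, a host that isn't Linux generally isn't a problem for the debugging itself.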
