Does PhpStorm run its own Docker containers in the background?

I want to grasp better what's going on behind the scenes when developing with Docker in JetBrains' IDEs, in this case PhpStorm.
I see that my project root directory is mapped to /opt/project, but when I run docker-compose up and look inside the PHP container, /opt/project doesn't exist. So I'm guessing that PhpStorm manages its own Docker environment where it deploys my containers.
Is it true that there are actually 3 domains to consider?
The code on the local machine
The Docker containers run by PhpStorm -> I can run tests via PhpStorm
The Docker containers run by docker-compose up -> I can see the website
Can somebody verify that what I stated is correct and provide more context? And if it is correct, it raises another question: why don't I have port conflicts between the PhpStorm Docker containers and my own started by docker-compose up?
These questions started popping up while configuring the tests (PHPUnit) and dependency management (Composer, autoloading, composer dump-autoload) in PhpStorm.

Unfortunately, the internals of the IDE's Docker integration aren't documented.
I see that my project root directory is mapped to /opt/project, but when I run docker-compose up and look inside the PHP container, /opt/project doesn't exist.
Where do you see it? How exactly do you look inside?
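For example, do you check it with something like this? (Service and container names below are just placeholders.)
docker-compose exec php ls -la /opt/project
docker exec -it <container_name> ls -la /opt/project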
Is it true that there are actually 3 domains to consider?
The code on the local machine
I'm unfortunately not sure what exactly you mean here. Volume mappings?
The Docker containers run by PhpStorm -> I can run tests via PhpStorm
PhpStorm is indeed using "helper" containers to run tests.
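If you want to see them, you can usually catch them with a plain listing while a test run is in progress; the exact container and image names depend on your interpreter configuration:
docker ps -a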
The Docker containers run by docker-compose up -> I can see the website
Yes?..
why don't I have port conflicts between the PhpStorm Docker containers and my own started by docker-compose up
Could you please be more specific? How did you set up ports? What ports should be conflicting? How did you test that there are no conflicts?
These questions started popping up while configuring the tests
Are you facing some specific issue?

Related

Gitlab Runner, docker executor, deploy from linux container to CIFS share

I have a GitLab runner that runs all kinds of jobs using Docker executors (the host is Ubuntu 20, the guests are various Linux images). The runner runs containers as unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
GitLab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside a container unless it runs privileged, which I would like to avoid.
I also tried the --cap-add suggestion described at Mount SMB/CIFS share within a Docker container, but those capabilities do not seem to be enough on my host; there are probably other required capabilities and I have no idea how to identify them. Also, this looks only slightly less ugly than running privileged.
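For context, the kind of invocation I mean is roughly this (capabilities as suggested in that answer; image name, credentials, and paths are placeholders):
docker run --rm --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH my-deploy-image \
  sh -c 'mount -t cifs -o username=deploy,password=secret //myserver/myapp /mnt && cp -r /artifacts/. /mnt/'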
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command, for example, I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy over a small executable written in Go (which you can build anywhere else) that listens on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can use curl to send a file to the remote machine on port 8080.
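For example, something along these lines, assuming the receiver accepts a plain HTTP upload (the exact method and path depend on how file-receive.go handles the request; the host and file names are placeholders):
curl --upload-file ./build/artifact.zip http://myserver:8080/artifact.zip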

Attaching IDE to my backend docker container stops that container's website from being accessible from host

Summary
I'm on Mac. I have several Docker containers; I can run all of them using docker-compose up and everything works as expected: for instance, I can reach my backend container by opening http://localhost:8882/ in my browser, since port 8882 is mapped to the same port on the host using:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so as to be able to develop "from inside" that container.
I've tried using VS Code's "Remote - Containers" plugin following this tutorial, and also PyCharm Professional, which supports Docker run configurations out of the box. In both cases I got the same result: I run the IDE configuration to attach to the container and its local website suddenly stops working, showing "this site can't be reached".
When using PyCharm, I noticed that Docker Desktop shows the backend container's port changed to 54762. But I also tried that port, with no luck.
I also used this command:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container IP (172.18.0.4) and tried that with both ports; again, same result.
Any ideas?
PyCharm configuration
Interpreter: this works, in the sense that I can see the code of the libraries installed inside the container.
Run/Debug configuration: this also seems to succeed, in the sense that I can start it and it appears to attach correctly to the backend container... though then the problem described above appears.
So, there were many things at play here, since this is a huge project already full of technical debt.
But the main one is that the docker-compose setup I was using ran the server with uWSGI in production mode, which interfered with many things... among them PyCharm's ability to successfully attach to the running container, debug, and so on.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run Flask in development mode. That fixed everything.
Be mindful that, for some reason, the flask run command inside a Docker container does not let you reach the website until you pass the --host=0.0.0.0 option to it. More in https://stackoverflow.com/a/30329547/5750078
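For reference, a minimal sketch of the kind of override file I mean (the service name, port, and environment variable are assumptions; adjust them to your main compose file):
# docker-compose.dev.yml
version: "3"
services:
  backend:
    command: flask run --host=0.0.0.0 --port=8882
    environment:
      FLASK_ENV: development
It is then started with both files so the dev file overrides the main one:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up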

Do I still need to install Node.js or Python via docker container file when the OS is installed with python/node.js already?

I am trying to create the Dockerfile (image) for the web application I am building. The application is written in Node.js and Vue.js. To create a Docker container for it, I followed the Vue.js documentation for writing a Dockerfile. The steps given there work fine. I just want to confirm my understanding of one part.
link:- https://cli.vuejs.org/guide/deployment.html#docker-nginx
If the necessary package (Node/Python) is installed in the OS (not in the container), would the container still be able to pick up the npm scripts and execute the Python scripts? If yes, does the container really depend on the host's software packages as well?
Please help me with the understanding.
Yes, you need to install Node or Python or whatever software your application needs inside your container. The reason is that the container should be able to run on any host machine that has Docker installed, regardless of how the host machine is set up or what software it has installed.
It might be a bit tedious at first to make sure that your Dockerfile installs all the software that is needed, but it becomes very useful when you want to run your container on another machine. Then all you have to do is type docker run and it should work!
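As an illustration, a Dockerfile along the lines of the multi-stage Node + nginx example from the Vue docs installs everything the build needs inside the image itself (base images and paths may differ from your setup):
# build stage: Node is installed in the image, not taken from the host
FROM node:lts-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage: serve the built files with nginx
FROM nginx:stable-alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]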
Like David said above, Docker containers are isolated from your host machine and should be treated as a completely different machine/host. The way containers can communicate with other containers, or sometimes with the host, is through network ports.
One "exception" to the isolation between the container and the host is that the container can sometimes write to files on the host in order to persist data even after the container has been stopped. You can use volumes or bind mounts to allow containers to write to files on the host.
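For example, a bind mount like this (paths are placeholders) lets the container write into a directory on the host that survives the container being removed:
docker run -v /home/me/app-data:/var/lib/myapp my-image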
I would suggest the Docker Overview for more information about Docker.

Debugging a Go process in a container using Delve/Goland from the host

Before I burn hours trying it out I wanted to ask the community is this even possible?
Scenario:
Running Goland on host (may be any OS)
Running Go dev env in Alpine based container
Code on host volume mapped to container
Can I attach the GoLand debugger (Delve) to a Go process in the container? I'm assuming I can run Delve headless in the container and run the client on the host, punching through whatever port is required? Will I have binary compatibility issues if the host is not Linux?
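Roughly what I have in mind (the package path, port, and exact flags are my guesses):
# inside the Alpine container, with Delve installed
dlv debug ./cmd/server --headless --listen=:2345 --api-version=2 --accept-multiclient
# publish port 2345 from the container, then point a "Go Remote" run configuration in GoLand at localhost:2345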
I'd rather not duplicate the entire post in this answer, but have a look at this resource on how to use containers to run the applications you write: https://blog.jetbrains.com/go/2018/04/30/debugging-containerized-go-applications/
To answer this specifically, as long as you have Go, the application sources, and all dependencies installed on the host machine, you can develop in GoLand and then, using a mapped volume, you can also run it from the container.
However, this workflow sounds more like the workflow you'd normally have with VMs, not containers, which is why in the article above all the running/debugging is done using the actual containers, rather than using bash inside a container to run those commands.

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application whose deployment is based on Docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a Docker image)
Run integration tests (docker compose starts all services on dedicated ports, then the integration tests run)
Stop the production containers and run the new images
I did this for each service, but I ran into an issue.
When I start my Docker containers for the integration tests, this happens within a GitLab CI job. Each job uses a Docker-based runner. I also mount the host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Docker is then installed in it and all images are started using docker compose.
One microservice listens on port 10004. Within the docker compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
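For reference, the relevant part of the compose file is just an ordinary published port (the service name is a placeholder):
my-microservice:
  ports:
    - "11004:10004"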
When I attach to the container that runs docker compose while it executes the integration tests, I am also unable to connect manually by calling
wget ip:port
I just get the message that it connected and is waiting for a response. Neither can my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting the network to host also works...
So did anyone also make such an experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs Debian 8.
My solution for now is to use a shell runner to avoid the Docker-in-Docker issues. It works there as well.
So Docker in Docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting on the subway, but I hope my description of the issue is sufficient to talk about experiences. I don't think we need source code to find bad configurations, because it works without Docker in Docker and on a Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. docker-compose therefore runs on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has another benefit for me.
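Roughly what the integration-test job looks like now (the job name, tag, and commands are placeholders; the important part is the tag that routes the job to the shell runner):
integration-tests:
  stage: test
  tags:
    - shell        # picked up by the new shell runner instead of the Docker executor
  script:
    - docker-compose up -d
    - ./gradlew integrationTest
    - docker-compose down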
The result is a best practice to avoid these pitfalls:
Only use Docker in Docker when there is a real need for it,
for example to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]
