IntelliJ with run target as Docker randomizing ports - docker

I am using IntelliJ with the Run Targets configuration to deploy my build inside a Docker container. The run target is based on a docker-compose file I provided.
My application is running on port 8080. The problem is that on each run, IntelliJ assigns a random host port to the application port 8080.
I don't want IntelliJ to randomize the port on each run. I explicitly specified in my docker-compose file to map 8080:8080, but IntelliJ still overrides it and assigns a random port. I also tried running the application on a different port, but it made no difference.
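For reference, the relevant part of my compose file looks like this (the service name here is just illustrative):

```yaml
services:
  app:                  # illustrative service name
    ports:
      - "8080:8080"     # fixed host:container mapping I expect to be honored
```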
Can anyone help?

Related

GitLab Runner, Docker executor, deploy from Linux container to CIFS share

I have a GitLab runner that runs all kinds of jobs using Docker executors (the host is Ubuntu 20, the guests are various Linux images). The runner runs containers as unprivileged.
I am stumped on an apparently simple requirement - I need to deploy some artifacts on a Windows machine that exposes the target path as an authenticated share (\\myserver\myapp). Nothing more than replacing files on the target with the ones on the source - a simple rsync would be fine.
GitLab Runner does not allow specifying mounts in the CI config (see https://gitlab.com/gitlab-org/gitlab-runner/-/issues/28121), so I tried using mount.cifs, but I discovered that by default Docker does not allow mounting anything inside the container unless running privileged, which I would like to avoid.
I also tried the --cap-add suggestion described at Mount SMB/CIFS share within a Docker container, but it does not seem to be enough on my host; there are probably other required capabilities and I have no idea how to identify them. Also, this looks only slightly less ugly than running privileged.
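For reference, what I tried along those lines looks roughly like this (share path and credentials are placeholders, and it assumes cifs-utils is installed in the image); besides the capabilities, the host's default AppArmor profile can also block mount, which is why the security-opt is there:

```shell
docker run --rm \
  --cap-add SYS_ADMIN --cap-add DAC_READ_SEARCH \
  --security-opt apparmor:unconfined \
  ubuntu:20.04 \
  sh -c 'mount -t cifs -o username=deploy,password=secret //myserver/myapp /mnt \
         && cp -r /artifacts/. /mnt/'
```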
Now, I do not strictly need to mount the remote folder - if there were an SMB-aware rsync command for example I would be more than happy to use that. Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
Do you have any idea how to achieve this?
Unfortunately I cannot install anything on the Windows machine (no SSH, no SCP, no FTP).
You could simply copy over an executable written in Go (which you can build anywhere else) that listens on a port, ready to receive a file.
See this implementation for instance: file-receive.go. It listens on port 8080 (which can be changed) and copies the file content to a local folder.
No installation or setup required: just copy the exe to the target machine and run it.
From your GitLab runner, you can then use curl to send a file to the remote Windows machine on port 8080.

Attaching IDE to my backend docker container stops that container's website from being accessible from host

Summary
I'm on Mac. I have several Docker containers; I can run all of them using docker-compose up and everything works as expected: for instance, I can reach my backend container by opening http://localhost:8882/ in my browser, since port 8882 is mapped to the same port on the host by using:
ports:
- "8882:8882"
Problems start when I try to attach an IDE to the backend container so as to be able to develop "from inside" that container.
I've tried vscode's "Remote - Containers" plugin following this tutorial, and also PyCharm Professional, which supports Docker run configurations out of the box. In both cases I had the same result: I run the IDE configuration to attach to the container, and its local website suddenly stops working, showing "this site can't be reached".
When using PyCharm, I noticed in Docker Desktop that the backend container's port changed to 54762. I tried that port too, with no luck.
I also used this command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
to get the container IP (172.18.0.4) and tried that with both ports; again, same result.
Any ideas?
PyCharm configuration
Interpreter:
This works in the sense that I can browse the code of the libraries installed inside the container:
Run/Debug configuration. This configuration succeeds in the sense that I can start it and it seems to attach correctly to the backend container... though the problem described above appears.
So, there were many things at play here, since this is an already huge project, full of technical debt.
But the main one is that the docker-compose setup I was using ran the server with uWSGI in production mode, which interfered with many things... among them PyCharm's ability to successfully attach to the running container, debug, and so on.
I was eventually able to create a new docker-compose.dev.yml file that overrides the main docker-compose file, changing only the backend server command to run Flask in development mode. That fixed everything.
Be mindful that the flask run command binds to 127.0.0.1 by default, so inside a Docker container the website is not reachable from the host until you pass it the --host=0.0.0.0 option. More at https://stackoverflow.com/a/30329547/5750078
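As a sketch of that override (the service name and port are assumptions about this particular project):

```yaml
# docker-compose.dev.yml -- merged on top of the main file
services:
  backend:
    command: flask run --host=0.0.0.0 --port=8882
```

Started with `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up`; values in later files override those in earlier ones.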

JBoss CLI connect to Docker

I have transferred my files to the host machine, and I have already brought up the JBoss Docker container. How can I deploy the WAR file from my host machine to the container via the CLI? Please advise. I am not planning to use a custom image.
You need to publish the management port from Docker: -p 9990:9990.
Also, I'm not sure which address the management interface is bound to. You may have to pass -Djboss.bind.address.management=0.0.0.0 on the command line.
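Putting both together, a sketch using the public jboss/wildfly image (image name and paths are assumptions; adjust for your setup):

```shell
docker run -d --name wildfly -p 8080:8080 -p 9990:9990 jboss/wildfly \
  /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0

# then, from the host, deploy the WAR with a local jboss-cli:
jboss-cli.sh --connect --controller=localhost:9990 \
  --command="deploy /path/to/app.war"
```

Note that connecting to the management interface from outside the container typically also requires a management user created with add-user.sh.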

Docker cannot access exposed port inside container

I have a container for which I expose a port to access a service running within the container. I am not exposing the port outside the container, i.e. to the host (using the host network on Mac). After getting inside the container using docker exec and running curl for a POST request, I get an error:
curl command: curl http://localhost:19999
Failed connect to localhost:19999; Connection refused.
I have the EXPOSE instruction in my Dockerfile and do not want to publish ports to my host. My service is also up and running inside the container. I also have this property set within the config:
"ExposedPorts": {"19999/tcp": {}}
(obtained through docker inspect <container id/name>). Any idea why this is not working? I am using Docker for Mac.
I'd post my docker-compose file too, but this is being built through Maven. I can confirm that I am exposing my port using 19999:19999. Another weird issue is that after disabling my proxies it would run a very lightweight command for my custom service once, and then wouldn't run it again, returning the same error as above. The issue only occurs on my machine and not on others.
Hints:
The app must be listening on port 19999, which it probably is not.
The EXPOSE that you're using inside the Dockerfile does nothing.
Usually there is no need to change the default port an application listens on; each container has its own IP, so you shouldn't run into a port conflict.
Answer:
Instead of curling 19999, try the default port your app would normally be listening on (it's hard to guess what you are trying to run).
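One quick way to verify what the process actually listens on, assuming `ss` (or `netstat`) exists in the image:

```shell
docker exec -it <container> ss -ltn   # lists TCP ports being listened on inside the container
```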
If you don't publish a port (with the docker run -p option or the Docker Compose ports: option), you cannot directly reach the container on Docker for Mac. See the Known limitations, use cases, and workarounds page in the Docker Desktop for Mac documentation: the "per-container IP addressing is not possible" item is exactly what you're trying to do.
The docker inspect IP address is basically useless, except in one very specific Docker configuration (on a native-Linux host, calling from outside of Docker, on the same host); I wouldn't bother looking it up.
The Dockerfile EXPOSE directive and similar runtime options do very little and mostly serve as documentation. Even if you have that configured you still need to separately publish the port when you start the container to reach it from outside of Docker space.
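Concretely, publishing the port at run time looks like this (the image name is a placeholder):

```shell
docker run -d -p 19999:19999 myimage
# or, in docker-compose.yml:
#   ports:
#     - "19999:19999"
curl http://localhost:19999/   # now reachable from the Mac host
```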

Does PhpStorm run its own Docker containers in the background?

I want to grasp better what's going on behind the scenes when developing with Docker in JetBrains' IDEs, in this case PhpStorm.
I see that my project root directory is mapped to /opt/project, but when I docker-compose up and look inside the PHP container, /opt/project doesn't exist. So I'm guessing that PhpStorm manages its own Docker environment where it deploys my containers.
Is it true that there are actually 3 domains to consider?
The code on the local machine
The Docker containers run by PhpStorm -> I can run tests via PhpStorm
The Docker containers run by docker-compose up -> I can see the website
Can somebody verify that what I stated is correct and provide more context? And if it is correct, it raises another question: why don't I have port conflicts between the PhpStorm Docker containers and my own, run by docker-compose up?
These questions started popping up while configuring the tests (PHPUnit), dependency management (composer, autoload, composer dump-autoload) in PhpStorm.
Unfortunately, the internals of the IDE's Docker integration aren't documented.
I see that my project root directory is mapped to /opt/project but when I docker-compose up and look inside the PHP container /opt/project doesn't exist.
Where do you see it? How do you look inside exactly?
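For instance, the usual way to check from the compose-managed container would be something like this (the service name `php` is a guess):

```shell
docker-compose exec php ls -la /opt/project
```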
Is it true that there are actually 3 domains to consider?
The code on the local machine
I'm unfortunately not sure what you mean here exactly. Volume mappings?
The Docker containers run by PhpStorm -> I can run tests via PhpStorm
PhpStorm is indeed using "helper" containers to run tests.
The Docker containers run by docker-compose up -> I can see the website
Yes?..
why I don't have port conflicts between the PhpStorm Docker containers and my own, run by docker-compose up
Could you please be more specific? How did you set up ports? What ports should be conflicting? How did you test that there are no conflicts?
These questions started popping up while configuring the tests
Are you facing some specific issue?