View the logs of a remote Docker interpreter defined in PhpStorm

I have defined a Docker remote PHP interpreter in PhpStorm. So far, so good, but when the unit tests run, PhpStorm spawns a new container from the provided image and there is no way to access the server logs. Imagine that some of the tests fail and Symfony logs errors in the newly spawned container: how can I view them after the tests have run? Currently they are entirely gone, and the only way to debug is with Xdebug.
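If PhpStorm's run configuration does not delete the container after the run, the stopped container and its logs remain on the Docker host. A sketch of how to get at them; the image name, container ID, and log path below are placeholders for your setup:

```shell
# List stopped containers created from the interpreter's image
# (replace "your-image" with the image configured in PhpStorm):
docker ps -a --filter ancestor=your-image

# Read the captured stdout/stderr of one exited run:
docker logs <container-id>

# Copy file-based logs (e.g. Symfony's var/log) out of the stopped
# container; the path is a placeholder for your project layout:
docker cp <container-id>:/var/www/app/var/log ./test-logs
```

If PhpStorm removes containers automatically, an alternative is binding the log directory to a host path in the interpreter's volume settings so the files outlive the container.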

How to reconnect to docker-compose output log?

Please help; I'm not even sure if I am asking the right question here, as there are many new components to my environment: I am new to developing on a Windows OS, new to the Visual Studio Code IDE, and new to Docker within VS Code. This question could pertain to any of those factors.
Scenario:
I boot up my Windows 10 machine, open VS Code, and go to the command line from within VS Code (I am using a Git Bash shell within VS Code). From this terminal I start my project with the following command: docker-compose up --build
As a result of running this command, I see output in my terminal which indicates that all three of my containers have started up. (Note: this is a Flask application using Postgres with an Angular front end; each one has its own container.)
My application has a test API endpoint which, when called, responds with 'status ok'. Upon hitting that endpoint in Postman, I see a couple of lines of output in my terminal indicating that the application has processed the request for the specific URL. Everything is great.
Now I close all my applications and reboot the machine.
Upon rebooting I see a message from the system informing me that my docker containers are starting. This is good. But now I would like to get back to the state where I can see the same output that I saw when I ran the docker-compose up command; however, that output is no longer in the VS Code terminal.
My question is, how can I get that output again without shutting down the docker containers and re-building them? Sure, I could do that, but this seems like an unnecessary step since the containers auto-restarted on system reboot.
Is there a log I can tail?
Additional info:
In the Dockerfile for the API server, the server is started with the following command:
CMD ["./entrypoint.local.sh"]
In the entrypoint.local.sh file, the actual application is started with this command:
uwsgi --ini /etc/uwsgi.local.ini --chdir /var/www/my-application
Final note: This is not an application I created so I would like to avoid changing it since this will affect others on my team
In your terminal run: docker-compose logs --follow <name-of-your-service>
Or see every log stream for every service with docker-compose logs --follow
You can find the name of your docker-compose service by looking at each key under services: in your docker-compose.yml
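docker-compose logs works here because each container's stdout/stderr is captured by Docker's logging driver (json-file by default), so the output survives closing the terminal and the containers auto-restarting on reboot. A sketch of making that capture explicit, with rotation, in the compose file; the service name is a placeholder:

```yaml
# docker-compose.yml sketch ("api" is a placeholder service name).
# The json-file driver stores each container's stdout/stderr on disk,
# which is what `docker-compose logs` / `docker logs` read back.
services:
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate so the log file doesn't grow unbounded
        max-file: "3"
```

With this in place, `docker-compose logs --follow api` behaves like tailing a log file, with no rebuild needed.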

Logging from multiple processes in a single docker container

I have an application (let's call it Master) which runs on linux and starts several processes (let's call them Workers) using fork/exec. Therefore each Worker has its own PID and writes its own logs.
When running directly on a host machine (without docker), each process uses syslog for logging, and rsyslog puts the output from each Worker into a separate file, using a config like this:
$template workerfile,"/var/log/%programname%.log"
:programname, startswith, "worker" ?workerfile
:programname, isequal, "master" "/var/log/master"
Now I want to run my application inside a docker container. Docker starts the Master process as the main process (in the CMD section of the Dockerfile), which then forks the Workers at runtime (not sure if it's a canonical way to use docker, but that's what I have). Of course, I'm getting only the stdout of the Master process from docker, and the Workers' logs get lost.
So my question is, any way I could get the logs from the forked processes?
To be precise, I want the logs from different processes to appear in individual files on the host machine eventually.
I tried running the rsyslog daemon inside the docker container (just like I do when running without docker), writing logs to a mounted volume, but it doesn't seem to work. I guess it requires a workaround like supervisord to run the Master process and rsyslogd at the same time, which looks like overkill to me.
I couldn't find any simple solution for that, though my problem seems to be trivial.
Any help is appreciated, thanks
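One low-overhead variant of the rsyslog-plus-mounted-volume approach described above: keep the same templates, but point them at a directory that is bind-mounted from the host, so the per-Worker files land on the host with no extra log shipping. A sketch, with placeholder paths:

```
# /etc/rsyslog.d/workers.conf (sketch): the same rules as in the
# question, but writing under /logs, which is bind-mounted from the
# host, e.g.:  docker run -v /host/logs:/logs ... myimage
$template workerfile,"/logs/%programname%.log"
:programname, startswith, "worker" ?workerfile
:programname, isequal, "master" "/logs/master"
```

This still requires rsyslogd to actually be running inside the container (e.g. started by an entrypoint script before the Master), which may be why the mounted-volume attempt produced nothing; the entrypoint can start rsyslogd in the background and then exec the Master, avoiding supervisord.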

Remote debugging of Docker containers using IntelliJ

I have a number of Docker containers (10), each running a Java service that makes up my system. To create these containers I use a couple of docker-compose files. Using the Docker Integration plugin for IntelliJ, I can now spin up these services on my remote server using the Docker-compose option (the images used are built outside of IntelliJ, using Gradle). Here are the steps I have taken to achieve this:
I have added a Docker server using the Docker Machine option to connect to the remote Docker daemon (message says Connection Successful).
I have added a new Docker Compose configuration, using the server, specifying my compose files, and the services I want to start.
Now that I have the system controlled through IntelliJ, I have been trying to figure out how to attach the remote debugger to each of these services so that IntelliJ will hit my breakpoints.
Will I need to add the JVM args (-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005) to each service (container) and add the usual remote debug configuration for each service? Do I need to use a different address for each service? If so, how do I add these args? Surely with the Docker Integration plugin, there is an easier way to do this.
IntelliJ Idea v2018.1.5 (Community Edition)
Docker Integration v181.5087.20
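One common pattern for the question above, sketched with placeholder service names and ports: every container listens on the same internal JDWP port, each service maps it to a distinct host port, and IntelliJ gets one Remote debug configuration per mapped port. JAVA_TOOL_OPTIONS is a standard JVM environment variable, so the agent args can be injected without changing the images:

```yaml
# docker-compose.yml sketch (service names are placeholders).
# Each JVM listens on 5005 inside its own container; each service
# gets a distinct host port, matched by one IntelliJ Remote config.
services:
  service-a:
    environment:
      JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    ports:
      - "5005:5005"
  service-b:
    environment:
      JAVA_TOOL_OPTIONS: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
    ports:
      - "5006:5005"
```

In IntelliJ you would then add a Remote configuration pointing at the remote host and port 5005 for service-a, 5006 for service-b, and so on.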

PhpStorm: running PHPUnit database tests from Docker container

I configured PhpStorm to run PHPUnit tests from a Docker container. Unfortunately, if a test tries to connect to the MySQL server, I get an error:
SQLSTATE[HY000] [2002] Connection refused
The MySQL server runs in the same container. If I try to connect to the MySQL server from the container via a standalone script, it works fine.
Also the app itself works fine too.
Other tests (without database usage) work fine.
Any ideas what is wrong with my PhpStorm configuration? I followed the official step-by-step configuration video tutorial, but it does not cover the database part.
This is the command PhpStorm executes every time I hit the run-tests button:
docker://mycontainer/myapp:v1.0/php /opt/.phpstorm_helpers/phpunit.php --configuration /var/www/myapp/tests/phpunit.xml /var/www/myapp/tests/unit
Maybe the problem is with phpstorm_helpers, since it runs as a separate container? Maybe my app container and the helper container should be linked somehow?
What I need is to run tests in an existing container which I start only once. According to this thread, PhpStorm does not have such functionality yet.
So I switched to remote interpreter instead. Now PhpStorm connects to container via SSH. I know, it's a bit gruesome, but for this moment it's what I need.
Still, if somebody wants to run integration tests with PhpStorm and Docker properly, there is a good thread about it.
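For reference, the "connection refused" described above is typical when the tests run in a freshly spawned container where no MySQL server is running, even though the original container bundles one. A common arrangement that avoids this, sketched with placeholder image and service names: run MySQL as its own compose service and point the tests at the service hostname rather than localhost.

```yaml
# docker-compose.yml sketch (image and service names are placeholders).
# Any container on this compose network, including ones spawned by a
# docker-compose-based PhpStorm interpreter, reaches MySQL at host "db".
services:
  app:
    image: myapp:v1.0
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret
# In the PHPUnit/database config, connect to "db:3306" instead of
# "localhost".
```

The same split also means the database keeps its data and state regardless of how often test containers come and go.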

What is image "containersol/minimesos" in minimesos?

I was able to set up the minimesos cluster on my laptop and could also deploy a small command-line utility. Now the questions:
1. What is the image "containersol/minimesos" used for? It is pulled, but I don't see it running when I do "docker ps"; "docker images" lists it.
2. How come when I run "top" inside the mesos-agent container, I see all the processes running on my host (laptop)? This is a bit strange.
3. I was trying to figure out what's inside the minimesos script. I see that there's just one "docker run ..." command. I would really appreciate knowing what that command does that results in 4 containers (1 master, 1 slave, 1 zk, 1 marathon) running on my laptop.
containersol/minimesos runs the Java code that is the core of minimesos. It runs only for as long as it takes to execute the command from the CLI. When you do minimesos up, the command name and the minimesosFile are passed to this container. The container in turn executes the Java code that creates the other containers that form the Mesos cluster specified in the minimesosFile. That should answer #3 as well. Take a look at the MesosCluster class; that's where the magic happens.
I don't know the answer to #2; I will get back to you when I find out.
Every minimesos command runs as a short lived container, whose image is containersol/minimesos.
When you run 'minimesos up' it launches containersol/minimesos with 'up' as the argument. It then launches a cluster by starting other containers like containersol/mesos-agent and containersol/mesos-master. After the cluster is up, the containersol/minimesos container exits and is removed.
We have separated the CLI and the minimesos core as a refactoring to prepare for the upcoming API module. We are creating an API to support clients in different programming languages. The first client will be a Golang client.
In this new setup, minimesos will launch a long-running API server, and any minimesos CLI commands will call the API. The clients will also launch the API server and call the API.
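The single "docker run ..." in the wrapper script fits the pattern described above: a short-lived containersol/minimesos container that talks to the host's Docker daemon to create the cluster containers. A sketch of what such a wrapper typically does; the exact flags are an assumption, not the script's verbatim contents:

```shell
# Mounting the host's Docker socket lets the minimesos container start
# sibling containers (master, agent, zk, marathon) directly on the host,
# which is why they show up in your laptop's "docker ps". The working
# directory is mounted (assumed path) so the minimesosFile is visible.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":/tmp/minimesos \
  containersol/minimesos up
```

Because the cluster containers are siblings on the host daemon rather than children of the wrapper, they keep running after the wrapper container exits.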
