Docker logs from Go container (log and fmt) stop after init

I'm working on an application which consists of a number of Go containers. I manage them with Docker Compose. Recently I've been having trouble getting logs out of them. When I run "docker logs [container-name]", I only see logs that were created during init for packages in my application, and during main before the service starts listening. Subsequent calls to log.Println or fmt.Println do not appear in the output of "docker logs".
Do you know what could be going on?

You may want to write your logs to /dev/stdout, or simply use
log.SetOutput(os.Stdout)
from the log package.
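A minimal sketch of that suggestion (the log messages are placeholders):

package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    // Send the standard logger to stdout; by default it writes to stderr.
    // Both streams are captured by "docker logs" when this is the container's main process.
    log.SetOutput(os.Stdout)
    log.Println("service starting")
    fmt.Println("fmt.Println already writes to stdout")
}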

How to reconnect to docker-compose output log?

Please help, I'm not even sure if I am asking the right question here, as there are many new components to my environment: I am new to developing on a Windows OS, new to the Visual Studio Code IDE, and new to Docker within VS Code. This question could pertain to any of those factors.
Scenario:
I boot up my Windows 10 machine, open VS Code, and go to the command line from within VS Code (I am using a Git Bash shell within VS Code). From this terminal I start my project with the following command: docker-compose up --build
As a result of running this command, I see output in my terminal indicating that all three of my containers have started up (note: this is a Flask application using Postgres with an Angular front end; each one has its own container).
My application has a test API endpoint which, when called, responds with 'status ok'. Upon hitting that endpoint in Postman, I see a couple of lines of output in my terminal indicating that the application has processed the request for that specific URL. Everything is great.
Now I close all my applications and reboot the machine.
Upon rebooting I see a message from the system informing me that my Docker containers are starting. This is good. But now I would like to get back to the state where I can see the same output that I saw when I ran the docker-compose up command; however, this is no longer in the terminal in VS Code.
My question is, how can I get that output again without shutting down the docker containers and re-building them? Sure, I could do that, but this seems like an unnecessary step since the containers auto-restarted on system reboot.
Is there a log I can tail?
Additional info:
In the Dockerfile for the API server, the server is started with the following command:
CMD ["./entrypoint.local.sh"]
In the entrypoint.local.sh file, the actual application is started with this command:
uwsgi --ini /etc/uwsgi.local.ini --chdir /var/www/my-application
Final note: this is not an application I created, so I would like to avoid changing it since this will affect others on my team.
In your terminal run: docker-compose logs --follow <name-of-your-service>
Or see every log stream for every service with docker-compose logs --follow
You can find the name of your docker-compose service by looking at each key under services: in your docker-compose.yml
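For example, with a docker-compose.yml along these lines (the service names here are purely illustrative):

services:
  api:
    build: ./api
  db:
    image: postgres
  frontend:
    build: ./frontend

the API container's output can be followed with docker-compose logs --follow api (or docker-compose logs -f api for short).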

Logging from multiple processes in a single docker container

I have an application (let's call it Master) which runs on Linux and starts several processes (let's call them Workers) using fork/exec. Therefore each Worker has its own PID and writes its own logs.
When running directly on a host machine (without Docker), each process uses syslog for logging, and rsyslog writes output from each Worker to a separate file, using a config like this:
$template workerfile,"/var/log/%programname%.log"
:programname, startswith, "worker" ?workerfile
:programname, isequal, "master" "/var/log/master"
Now, I want to run my application inside a Docker container. Docker starts the Master process as the main process (in the CMD section of the Dockerfile), and then it forks the Workers at runtime (not sure if this is a canonical way to use Docker, but that's what I have). Of course I'm getting only the stdout of the Master process from Docker, and the Workers' logs get lost.
So my question is, any way I could get the logs from the forked processes?
To be precise, I want the logs from different processes to appear in individual files on the host machine eventually.
I tried to run the rsyslog daemon inside the Docker container (just like I do when running without Docker), writing logs to a mounted volume, but it doesn't seem to work. I guess it requires a workaround like supervisord to run the Master process and rsyslogd at the same time, which looks like overkill to me.
I couldn't find any simple solution for that, though my problem seems to be trivial.
Any help is appreciated, thanks

Docker - Cannot start or stop container groups through Docker Dashboard

I am new to Docker and have been running the Example Voting App (suggested by the getting-started guide).
I have encountered an issue where I can start and stop each individual container within the desktop dashboard, but no start or stop command is passed when I start or stop the containers as a group. I get no logs and no other helpful information as to why this is happening.
I can start and stop the containers as a group from the command line by simply navigating to the repository folder and calling docker-compose start/stop, but I'd like to be able to do it from the desktop dashboard.
Some other things that I have encountered that may relate to the issue:
I encountered issues with the dashboard GUI not properly displaying when containers were deleted.
I had to enable file sharing on my C drive to even be able to 'import' half of the repository files. This was not listed as something you needed to do in the getting-started guide, so I'm assuming it shouldn't actually be a necessary thing to do.
Environment: Windows 10

Docker: automatically save events in a file

docker events doesn't save events in files. But I need to back up all history. In case of a crash, I need to know the status of all containers.
How to automatically save events in files?
Thanks
Docker keeps logs for all containers. More details about how to view logs can be found here. However, there is another way to handle this.
When you start your containers, you can add |& tee docker.log to the command you use to start them.
This stores all the logs being displayed on the terminal in a file named docker.log. This is described in more detail here.
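For example, if the containers are started with docker-compose as in the earlier question, that could look like (the log file name is arbitrary):

docker-compose up --build |& tee docker.log

Note that |& is Bash shorthand for piping both stdout and stderr; the portable equivalent is 2>&1 | tee docker.log.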

Docker container stops without any errors while running sbt/Play application

I'm running into an issue where my Docker container will exit with exit code 137 after about a day of running. The logs for the container contain no information indicating that an error has occurred. Additionally, attempts to restart the container return an error that the PID already exists for the application.
The container is built using the sbt docker plugin, sbt docker:publishLocal and is then run using
docker run --name=the_app --net=the_app_nw -d the_app:1.0-SNAPSHOT.
I'm also running 3 other Docker containers which all together use 90% of the available memory, but it's only ever that particular container which exits.
Looking for any advice on to where to look next.
Exit code 137 (128+9) means that the process was killed (as with kill -9 yourApp) by something. That something can be a lot of things (maybe it was killed by Docker or something else because it was using too many resources, maybe it ran out of memory, etc.).
Regarding the PID problem, you can add this to your build.sbt:
javaOptions in Universal ++= Seq(
  "-Dpidfile.path=/dev/null"
)
Basically this should instruct Play not to create a RUNNING_PID file. If it does not work, you can try to pass that option directly in Docker using the JAVA_OPTS env variable.
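For example, reusing the docker run command from the question (whether JAVA_OPTS is picked up depends on the start script generated by your sbt packaging plugin, so treat this as a sketch):

docker run --name=the_app --net=the_app_nw -e JAVA_OPTS="-Dpidfile.path=/dev/null" -d the_app:1.0-SNAPSHOT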
