Docker: automatically save events to a file

docker events doesn't save events to files, but I need to back up the whole history. In case of a crash, I need to know the status of all containers.
How can I automatically save events to files?
Thanks

Docker keeps logs for all containers; more details about how to view them can be found here. However, there is another way to handle this.
When you run the command whose output you want to keep (for example docker events, or a container started in the foreground), you can append |& tee docker.log to it.
This stores everything being displayed on the terminal in a file named docker.log. This is described in more detail here.
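For example, a long-running command along these lines keeps capturing the event stream to a file (the path and the JSON format are just examples; note that the daemon only keeps a limited window of past events in memory, so the stream has to stay running to collect the full history):
# stream all daemon events as JSON and append them to a file
docker events --format '{{json .}}' |& tee -a /var/log/docker-events.log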

Related

Docker - Cannot start or stop container groups through Docker Dashboard

I am new to Docker and have been running the Example Voting App (suggested by the getting-started guide).
I have encountered an issue where I can start and stop each individual container within the desktop dashboard, but no start or stop command is passed when I start or stop the containers as a group. I get no logs and no other helpful information as to why this is happening.
I can start and stop the containers as a group from the command line by simply navigating to the repository folder and calling docker-compose start/stop, but I'd like to be able to do it from the desktop dashboard.
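Concretely, what I run from the command line is roughly this (the folder name is just my local checkout of the repository):
cd example-voting-app    # repository folder (placeholder path)
docker-compose start     # starts the whole group
docker-compose stop      # stops the whole group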
Some other things I have encountered that may relate to the issue:
I encountered issues with the dashboard GUI not displaying properly when containers were deleted.
I had to enable file sharing on my C drive to even be able to 'import' half of the repository files. This was not listed as something you needed to do in the getting-started guide, so I'm assuming it shouldn't actually be necessary.
OS: Windows 10

Run Jira in docker with initial setup snapshot

In my company, we're using Jira for issue tracking. I need to write an application that integrates with it and synchronizes some data with other services. For testing, I want a Docker image of Jira with some initial data.
I'm using the official atlassian/jira-core image. After the initial setup, I saved the state by running docker commit, but unfortunately the new image seems to be empty, and I need to set it up again from scratch.
What should I do to save the initial setup? I want to run tests that change things inside Jira, so reverting it back is necessary for a reliable test suite. After I spin up a new container, it should already have a few users and a project with some issues; I don't want to create them manually for each new instance. Also, the setup takes a lot of time, which is not acceptable for testing.
To get persistent storage you need to mount /var/atlassian/jira from your host system. That directory holds your configuration and data, so you do not need to docker commit: whenever you spin up a new container with the same /var/atlassian/jira mount, it will have all the configuration you set previously.
docker run --detach -v /your_host_path/jira:/var/atlassian/jira --publish 8080:8080 cptactionhank/atlassian-jira:latest
For logs you can additionally mount
/opt/atlassian/jira/logs
The above is valid if you are running the latest tag; otherwise, explore the relevant Dockerfile.
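Putting both mounts together, a run command could look roughly like this (the host paths are placeholders):
docker run --detach \
  -v /your_host_path/jira:/var/atlassian/jira \
  -v /your_host_path/jira-logs:/opt/atlassian/jira/logs \
  --publish 8080:8080 \
  cptactionhank/atlassian-jira:latest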
Set volume mount points for the installation and home directory. Changes to the home directory need to be persisted, as well as parts of the installation directory (e.g. logs).
VOLUME ["/var/atlassian/jira", "/opt/atlassian/jira/logs"]
atlassian-jira-dockerfile
Look at entrypoint.sh; the comments in there say:
check if the server.xml file has been changed since the creation of
this Docker image. If the file has been changed, the entrypoint script
will not perform modifications to the configuration file.
So I think you need to provide your own server.xml to stop the init process...
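A minimal sketch of that idea, assuming server.xml sits at the standard Tomcat location inside the image (treat the conf path as an assumption and check the Dockerfile for your tag):
# mount a pre-configured server.xml so the entrypoint skips its setup step
docker run --detach \
  -v /your_host_path/server.xml:/opt/atlassian/jira/conf/server.xml \
  -v /your_host_path/jira:/var/atlassian/jira \
  --publish 8080:8080 cptactionhank/atlassian-jira:latest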

How to view docker logs from vscode remote container?

I'm currently using vscode's remote containers extension with a .devcontainer.json file that points to my docker-compose.yml file.
Everything works fine and my docker-compose start command gets run (which launches a web server), but I haven't found a way to quickly see the logs from the web server. Has anyone found a way to view the docker log output automatically once vscode connects to the remote container?
I know as an alternative I could remove my container's start command and, after vscode connects, manually open a terminal and start the web server, but I'm hoping there's an easier way.
Thanks in advance!
I'm not using remote containers, just local ones, so I'm not sure if this applies, but for locally running containers you can go to the "Docker" tab (you need to install the official Microsoft Docker VS Code extension), where you can see your running containers. Just right-click on the container you want to see the logs for and select "View Logs".
You'll see a new task appear in the Terminal pane that shows all your docker logs.
This question is really old and I'm not sure if this option was available at the time, but just open the Command Palette (F1) and select/find "Remote-Containers: Show Log".
You will then see the log of your container in the terminal.
You can open the Command Palette and search for "Remote Explorer: Focus on Containers View". You should see a sidebar of containers; if you right-click your container, you can view its logs.
I use VS Code's built-in terminal to see the live logs of the docker container that is connected to VS Code.
When VS Code is connected to the docker container, you can open the built-in terminal using the View > Terminal menu option. You should see an existing terminal labeled Dev Containers.
Maybe this is too late, but for others, this is how I do it.
Instead of logging to stdout, I redirect all of the output into a single file and then use the tail command to stream it to the terminal.
For example, since I'm using Go here:
// send all logrus output to a file instead of stdout
logFile, err := os.OpenFile(logFileName, os.O_WRONLY|os.O_CREATE, 0755)
if err != nil {
    log.Fatal("Failed to open the log file")
}
logrus.SetOutput(logFile)
Once that's done, I open my terminal and run the following command:
$ tail -f {logFileName}
That's one way to do it, I guess, but I sure hope VS Code comes up with a better solution.
In the Remote Explorer tab you can see all your docker containers. Under "Dev Containers" is the container for the service specified in devcontainer.json; the rest are in "Other Containers." Simply right click on the container you're interested in and click "Show Container Log." You'll see the full output of the command for that service, just like in an interactive terminal - not a docker build log!
Note I am using a local development container and did not test with remote containers but I'm guessing it's the same.

Docker logs from go container (log and fmt) stop after init

I'm working on an application which consists of a number of Go containers, which I manage with Docker Compose. Recently I've been having trouble getting logs out of them. When I run "docker logs [container-name]", I only see logs that were written during init for packages in my application, and during main before the service starts listening. Subsequent calls to log.Println or fmt.Println do not appear in the output of "docker logs".
Do you know what could be going on?
docker logs only shows what the container's main process writes to its stdout and stderr, so you may want to write your logs to /dev/stdout,
or simply use
log.SetOutput(os.Stdout)
from the standard log package.

docker track logs from dynamically created containers

I have an app that dynamically creates docker containers, and I can't intercept how they are created.
I want to see the logs from all the containers that are up, no matter whether they were started via docker-compose or the plain docker command line. I need to see all the logs.
Is that possible?
Right now I have to run docker ps, look at all the created containers, and then run docker logs for each one.
I can't really monitor what is going on inside.
Thanks
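A quick way to automate that docker ps / docker logs routine is a small shell loop; this is only a rough sketch, and it only attaches to containers that are already running when it starts:
# follow the logs of every running container, prefixing each line with the container name
for name in $(docker ps --format '{{.Names}}'); do
  docker logs --follow "$name" 2>&1 | sed "s/^/${name}: /" &
done
wait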
One approach is to use a dedicated logging container that gathers log events from the other containers, aggregates them, and then stores or forwards them to a third-party service; this approach removes the dependency on the host.
Further, a dedicated logging container can automatically collect, monitor, and analyze log events, scale without extra configuration, and retrieve logs through multiple streams of log events, stats, and Docker API data.
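As one concrete example of such a dedicated logging container (the image and the syslog endpoint here are only illustrative), logspout attaches to the Docker socket and forwards every container's stdout and stderr:
# forward all container output to a remote syslog endpoint (endpoint is a placeholder)
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tls://logs.example.com:6514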
You can also check this link for more help:
Docker Logging Best Practices
