I have a dockerized web application that I'm running in an HA setup. I have a cron job that runs dockup every midnight to back up the important information stored in other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is: how do I do that? If I use the VOLUME key in the Dockerfile to expose /logs to the host machine, wouldn't there be a collision, since there would then be two /logs directories on the dockup container?
I have checked dockup. It does not have a /logs directory; it seems to use /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under the mentioned name while the container's own processes also log to that folder. Not good.
Use a logging container like fluentd; like dockup, it can also write to S3 buckets. The tutorial can be found here.
Tweak your container, e.g. use symbolic links for the logs, or relay the logs to a different volume.
Access the logs not through the container but through the native docker CLI, and either copy them to S3 yourself or run dockup on your locally mounted log file:
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can use the folder /customlogs/ as your backup path inside env.txt.
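For completeness, the relevant entries in env.txt would then look roughly like this. This is a hedged sketch: the variable names are what I recall from dockup's README (PATHS_TO_BACKUP and friends), and the bucket and credentials are placeholders, so verify them against the image's documentation before relying on them:
# env.txt -- configuration consumed by borja/dockup (names assumed from its README)
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-east-1
BACKUP_NAME=webapp-logs
PATHS_TO_BACKUP=/customlogs
S3_BUCKET_NAME=your-backup-bucket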
Related
I am trying a basic Docker test on a GCP compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, the container shows up as running with the same ID.
However, the Tomcat admin console does not open. The reason is that the Tomcat image tries to create its config files under /usr/local, but that is a read-only file system, so the config files are never created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.
I want to collect Docker container logs. By default, log files are deleted when a container is removed, which causes some logs to be lost every time I update my service. How can I keep the log files after removing containers?
Or is there another way to collect all logs from containers without losing any?
There are two situations:
If your logs go to stdout or stderr, you can save them before removing the container (a scripted variant of this follows at the end of this answer):
docker logs CONTAINER_ID > container.log
If your logs are stored in files, you can either copy them out or mount a directory for them when running the container:
# Copy the logs out to the host
docker cp CONTAINER_ID:/path/to/your/log_file /host/path/to/store
# Mount a directory for them
docker run -d \
-v /host/path/to/store/logs:/container/path/stored/logs \
your-image
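For the stdout/stderr case, you can fold the save step into your update routine so the logs are snapshotted right before the old container is replaced. A minimal sketch, assuming a service container named web (the name and file pattern are placeholders):
# Save the container's stdout/stderr before removing it
docker logs web > "web-$(date +%Y%m%d-%H%M%S).log" 2>&1
docker rm -f web
# ...then start the updated container as usual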
I followed the standard Odoo container instructions on Docker to start the required postgres and odoo servers, and tried to pass host directories as persistent data storage for both as indicated in those instructions:
sudo mkdir /tmp/postgres /tmp/odoo
sudo docker run -d -v /tmp/postgres:/var/lib/postgresql/data/pgdata -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db postgres:10
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The Odoo container shows messages that it starts up fine, but when I point my web browser at http://localhost:8069 I get no response from the server. By contrast, if I omit the -v argument from the Odoo docker run command, my web browser connects to the Odoo server fine, and everything works great.
I searched and saw other people also struggling with getting the details of persistent data volumes working, e.g. Odoo development on Docker, Encountered errors while bringing up the project
This seems like a significant gap in Docker's standard use case, and users need better information on how to debug it:
How to debug why the host volume mounting doesn't work for the odoo container, whereas it clearly does work for the postgres container? I'm not getting any insight from the log messages.
In particular, how to debug whether the container requires the host data volume to be pre-configured in some specific way, in order to work? For example, the fact that I can get the container to work without the -v option seems like it ought to be helpful, but also rather opaque. How can I use that success to inspect what those requirements actually are?
Docker is supposed to help you get a useful service running without needing to know the guts of its internals, e.g. how to set up its internal data directory. Mounting a persistent data volume from the host is a key part of that, e.g. so that users can snapshot, backup and restore their data using tools they already know.
I figured out some good debugging methods that both solved this problem and seem generally useful for figuring out Docker persistent data volume issues.
Test 1: can the container work with an empty Docker volume?
This is a really easy test: just create a new Docker volume and pass that in your -v argument (instead of a host directory absolute path):
sudo docker volume create hello
sudo docker run -v hello:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
The odoo container immediately worked successfully this way (i.e. my web browser was able to connect to the Odoo server). This showed that it could work fine with an (initially) empty data directory. The obvious question then is why it didn't work with an empty host-directory volume. I had read that Docker containers can be persnickety about UID/GID ownership, so my next question was how to figure out what the container expects.
Test 2: inspect the running container's file system
I used docker exec to get an interactive bash shell in the running container:
sudo docker exec -ti odoo bash
Inside this shell I then looked at the data directory ownership, to get numeric UID and GID values:
ls -dn /var/lib/odoo
This showed me the UID/GID values were 101:101. (You can exit from this shell by just typing Control-D.)
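Alternatively, you can read the numeric owner without an interactive shell. A one-liner sketch, assuming the image ships GNU coreutils' stat (Debian-based images like odoo's do):
sudo docker exec odoo stat -c '%u:%g' /var/lib/odoo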
Test 3: re-run container with matching host-directory UID:GID
I then changed the ownership of my host directory to 101:101 and re-ran the odoo container with my host-directory mount:
sudo chown 101:101 /tmp/odoo
sudo docker stop odoo
sudo docker rm odoo
sudo docker run -v /tmp/odoo:/var/lib/odoo -p 8069:8069 --name odoo --link db:db -t odoo
Success! Finally the odoo container worked properly with a host-directory mount. While it's annoying that the Odoo docker docs don't mention anything about this, it's easy to debug if you know how to use these basic tests.
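As a postscript to Test 1: the successful named-volume run also gives you a reference copy of a correctly initialized data directory, which you can inspect and compare against your host directory. A sketch using standard docker volume inspect templating (the _data suffix is the local driver's default layout):
sudo docker volume inspect --format '{{ .Mountpoint }}' hello
# typically prints /var/lib/docker/volumes/hello/_data; then check its ownership:
sudo ls -dn /var/lib/docker/volumes/hello/_data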
I have the following Dockerfile:
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|<file relative-to="jboss.server.log.dir" path="${jboss.host.name}-server.log"/>|' \
    /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging of the resulting JBoss so that it logs to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
How can I achieve this in the Dockerfiles, and not with the -v parameter when running data/volume?
Thanks a lot in advance
Inside a Dockerfile you can only declare the container-side path of a volume; the actual data ends up under /var/lib/docker/volumes on the host. This is because every host can be different from the others.
Docker uses /var/lib/docker as its "docker area" where it stores all Docker-related data. It's the one directory that's guaranteed to exist on every host, because it gets created on installation.
If you could point a volume at an arbitrary host path in the Dockerfile, let's say /home/mbieren/docker_vol, the image would produce multiple errors when executed on a different host, as that directory does not exist there and the user probably has insufficient permissions to create it.
Docker works around that problem by not allowing custom mount paths to be set in the Dockerfile.
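You can observe this yourself: a VOLUME instruction only records the container-side path, and Docker chooses the host-side location at run time. A sketch against the data_volume container from the question (.Mounts/.Source is standard docker inspect templating):
docker inspect --format '{{ range .Mounts }}{{ .Source }}{{ end }}' data_volume
# prints a generated path under /var/lib/docker/volumes/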
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Note that your syntax docker run -ti launches the container with an interactive shell, whereas -d is the normal mode to spin it up as a daemon running in the background.
In Docker I have installed Jenkins successfully. When I create a new job and I would like to execute a sh file from my workspace, what is the best way to add a file to my workspace with Docker? I started my container with this: docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
You could copy a file from your file system to the container with a simple command from your terminal.
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
https://docs.docker.com/engine/reference/commandline/cp/
example:
docker cp /yourpath/yourfile <containerId>:/var/jenkins_home
It depends a bit on the planned lifecycle of your Jenkins container. If it is just used temporarily and it does no harm if the data is gone, docker cp as NickGnd suggested will do the trick.
But the working data of Jenkins, like job configs, system configs and workspaces, lives only inside the container, so all of it will be gone once the container is removed. If you plan to have a longer-running Jenkins environment, you might want to persist the data outside of the container so it survives recreating the container, launching new container versions and so on. This can be done with the option --volume /path/on/host:/path/in/container, or its short form -v, on docker run.
There is also the option --volumes-from, which you can use to keep the data in a dedicated "data container" and mount it into your Jenkins container.
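A rough sketch of both approaches (the host path /srv/jenkins_home and the jenkins-data container name are placeholders; /var/jenkins_home is the image's home directory, as in your run command):
# Option 1: persist the Jenkins home on the host
# (the host directory may need to be owned by the jenkins user, uid 1000 in the official image)
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v /srv/jenkins_home:/var/jenkins_home jenkins

# Option 2: keep the data in a dedicated data container
docker create --name jenkins-data -v /var/jenkins_home jenkins
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  --volumes-from jenkins-data jenkins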
For further information on this, please have a look at the Docker volumes documentation.