I'm running the Seq docker image on an AWS EC2 instance.
In order to have the logs written to persistent storage, I've attached an EBS volume to the instance, and mounted it from within the instance with the rexray/ebs plugin:
docker plugin install rexray/ebs:latest REXRAY_PREEMPT=true EBS_REGION=eu-central-1a --grant-all-permissions EBS_ACCESSKEY=... EBS_SECRETKEY=...
docker volume create --driver rexray/ebs --name SeqData
Then I instructed Seq to use that volume:
docker run -d --name seq -e ACCEPT_EULA=Y -v SeqData:/data -p 80:80 -p 5341:5341 datalust/seq:latest
Seq runs fine for a while (sometimes for a few hours, sometimes a few days), then I notice that the container is no longer running, and the AWS console shows that the volume has been detached. The AWS logs show that a DetachVolume event was initiated by the instance.
I reattach the volume manually in the AWS console, and restart the container. Seq resumes its normal operation, then after a while the phenomenon repeats itself.
The docker log gives no hint: it just shows Seq logging its normal activity (retention, indexing, etc.) roughly every 5 minutes, up until about 10 minutes before the detach occurred.
I have limited experience with both AWS and Docker, so I'll be grateful if anyone can help me out.
For Seq's memory management to work effectively, both --memory and --memory-swap need to be passed to the docker run command. Normally these should have the same value (i.e. no swap).
docker run --memory=4g --memory-swap=4g <other args> datalust/seq
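Applied to the run command from the question, that looks something like this (a sketch; 4g is just an example, size it to your instance):
docker run -d --name seq -e ACCEPT_EULA=Y \
  --memory=4g --memory-swap=4g \
  -v SeqData:/data -p 80:80 -p 5341:5341 \
  datalust/seq:latest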
So I have a container, and to create it in docker toolbox I used:
docker run --memory=4096m -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=MY_PASSWORD' -p 1433:1433 -d --name CONTAINER_NAME microsoft/mssql-server-linux
I have a different name and password, but I swapped those out. Every time I run that, the container is created but then exits immediately. When I use:
docker ps -a
to check on it, under status it says:
Exited (1) 7 minutes ago
and then when I try to run:
docker logs CONTAINER_NAME
to check what happened, I get an error saying:
sqlservr: This program requires a machine with at least 2000 megabytes of memory.
My computer has plenty of RAM available, and when I created the container I gave it 4 GB, so I don't understand what the issue is here. Also, I cannot use Docker for Windows.
The fix was to remove the "default" vm that gets created automatically, using:
docker-machine rm default
and then re-creating it with the command:
docker-machine -D create -d virtualbox --virtualbox-memory 8096 --virtualbox-disk-size "100000" default
which gives it roughly 8 GB of memory and 100 GB of disk space. Also, naming it default keeps Kitematic working, which is a plus.
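To confirm the new machine actually got those resources, you can run a quick check over SSH (assuming the machine is named default, as above):
docker-machine ssh default free -m
docker-machine ssh default df -h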
I use the following command to run a web server:
docker run --name webapp -p 8080:4000 mypyweb
When it has stopped and I want to restart it, I always use:
sudo docker start webapp && sudo docker exec -it webapp bash
But I can't see the server output the way I did the first time:
Digest: sha256:e61b45be29f72fb119ec9f10ca660c3c54c6748cb0e02a412119fae3c8364ecd
Status: Downloaded newer image for ericgoebelbecker/stackify-tutorial:1.00
* Running on http://0.0.0.0:4000/ (Press CTRL+C to quit)
How can I see that output instead of interacting with the shell?
When you use docker start, the default behavior is to start the container detached: it runs in the background, disconnected from your shell's stdin/out. (A plain docker run, by contrast, stays attached in the foreground unless you pass --detach.)
To run the container in the foreground and connected to stdin/out:
docker run --interactive --tty --publish=8080:4000 mypyweb
To docker start a container, similarly:
docker start --interactive --attach [CONTAINER]
NB: docker start takes --attach rather than --tty.
You may list running containers (add --all to include stopped ones):
docker container ls
E.g. I ran Nginx:
CONTAINER ID   IMAGE   PORTS                  NAMES
7cc4b4e1cfd6   nginx   0.0.0.0:8888->80/tcp   nostalgic_thompson
NB: You may use the NAME or any uniquely identifying prefix of the ID to reference the container.
Then:
docker stop nostalgic_thompson
docker start --interactive --attach 7cc4
You may check the container's logs (when it is running detached, or from another shell) by passing the container's ID or name:
docker logs nostalgic_thompson
docker logs 7cc4
HTH!
Using docker exec causes the shell to attach to the container. If you are comparing the behavior of docker run versus docker start, they behave differently by default, and it is confusing. Try this:
$ sudo docker start -a webapp
The -a flag tells docker to attach stdout/stderr and forward signals.
There are some other switches you can use with the start command (and a huge number for the run command). You can run docker [command] --help to get a summary of the options.
One other command you might want to use is logs, which shows the console output for a running container:
$ docker ps
[find the container ID]
$ docker logs [container ID]
If you think your container's misbehaving, it's often not wrong to just delete it and create a new one.
docker rm webapp
docker run --name webapp -p 8080:4000 mypyweb
Containers occasionally have more involved startup sequences and these can assume they're generally starting from a clean slate. It should also be extremely routine to delete and recreate a container; it's required for some basic tasks like upgrading the image underneath a container to a newer version or changing published ports or environment variables.
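For example, upgrading the container's image is just a delete-and-recreate once the newer image is available locally (a sketch reusing the commands from the question; assume mypyweb has been rebuilt or re-pulled first):
docker rm -f webapp
docker run --name webapp -p 8080:4000 mypyweb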
docker exec probably shouldn't be part of your core workflow, any more than you'd open a shell to interact with your Web browser. I generally don't tend to docker stop containers, except to immediately docker rm them.
When I run docker start, it seems the container might not be fully started at the time the docker start command returns. Is it so?
Is there a way to wait for the container to be fully started before the command returns? Thanks.
A common technique to make sure a container is fully started (i.e. services running, ports open, etc.) is to wait until a specific string is logged. See this example, Waiting until Docker containers are initialized, dealing with PostgreSQL and Rails.
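A minimal shell sketch of that technique (the container name and the ready string are assumptions; use whatever your service prints when it is up):
until docker logs mycontainer 2>&1 | grep -q "ready to accept connections"; do
  sleep 1
done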
Edited:
There could be another solution using Docker's HEALTHCHECK. The idea is to configure the container with a health check command that is used to determine whether or not the main service is fully started and running normally.
The specified command runs inside the container and sets the health status to starting, healthy or unhealthy depending on its exit code (0 = healthy, 1 = unhealthy). The status of the container can then be retrieved on the host by inspecting the running instance (docker inspect).
Health check options can be configured in the Dockerfile or when the container is run. Here is a simple example for PostgreSQL:
docker run --name postgres --detach \
    --health-cmd='pg_isready -U postgres' \
    --health-interval='5s' \
    --health-timeout='5s' \
    --health-start-period='20s' \
    postgres:latest && \
until docker inspect --format "{{json .State.Health.Status}}" postgres | \
    grep -m 1 "healthy"; do sleep 1; done
In this case the health command is pg_isready. A web service will typically use curl; other containers have their own specific commands.
The Docker community provides this kind of configuration for several official images here.
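For completeness, the same options baked into a Dockerfile would look roughly like this (a sketch for the PostgreSQL case above):
FROM postgres:latest
HEALTHCHECK --interval=5s --timeout=5s --start-period=20s \
  CMD pg_isready -U postgres || exit 1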
Now, when we restart the container (docker start), it is already configured and we need only the second part:
docker start postgres && \
until docker inspect --format "{{json .State.Health.Status}}" postgres | \
    grep -m 1 "healthy"; do sleep 1; done
The command will return when the container is marked as healthy.
Hope that helps.
Disclaimer, I'm not an expert in Docker, and will be glad to know by myself whether a better solution exists.
Docker itself doesn't really know that a container "may not be fully started".
So, unfortunately, there is nothing built into docker for this.
Usually, the commands the image's creator puts in the Dockerfile are organized so that the container is usable once the docker start command returns, and that is the best way. However, it's not always the case.
Here is an example:
LocalStack, a set of services for local development against AWS, has a docker image, but once it has started, the S3 port, for example, is not yet ready to accept connections.
From what I understand, a port that is exposed but not yet ready is the typical situation you are referring to.
So, in my experience, the application that talks to the dockerized process should wrap its attempts to connect to the server port in retries, until the port becomes available.
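A minimal shell sketch of such a retry loop (the port is an assumption: 4566 is LocalStack's default edge port in recent versions; older releases used per-service ports such as 4572 for S3):
until curl -s -o /dev/null http://localhost:4566; do
  sleep 1
done
echo "port is accepting connections"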
I am using Docker Toolbox; the image is cloudera/quickstart.
Because of my computer's limits, Docker runs with 4 GB of memory and 2 CPUs.
When I create a new container, Hue and Hive work well, but when I use the container again (after exiting) I get many problems in Hue, and consequently Hive does not work.
For example, one of the errors (attached as a screenshot, not reproduced here).
The command I use to create a new container:
docker run --hostname=quickstart.cloudera --privileged=true -t -i --publish-all=true -p 8888:8888 -p 80:80 --name cloudera-test cloudera/quickstart /usr/bin/docker-quickstart
Is this a problem with the port mappings, the hardware, or something else?
I think I got the answer.
Because my computer has 8 GiB of memory, when I use the command
docker start [container name]
it takes time for all the services to start up.
You can see the status of all the services with the command:
service --status-all
The services themselves live in /etc/init.d/.
So I suggest not trying to get into Hue immediately after the container starts up.
If there are still problems, check which service is not OK.
The HBase services need to be shut down in this order:
1) service hbase-thrift stop
2) service hbase-regionserver stop
3) service hbase-master stop
and started back up from the bottom (3, 2, 1), as spelled out below.
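Spelled out, the start-up sequence is the reverse of the shutdown above:
service hbase-master start
service hbase-regionserver start
service hbase-thrift start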
For more details, you can read here.
Is there a way to access the host's ZFS snapshots from within Docker?
I'm trying to get some custom backup, using ZFS snapshots with send/receive, running on a cluster of docker-based servers. To stick with the current setup, I'd like the backup service to run in a docker container as well. I'm having a hard time figuring out whether there's any way to access the host's file system, at an administrative level, from within a docker container.
I basically need a way to run zfs list, zfs snap and zfs send from within the container. My gut tells me "no", but maybe there's a clever way with some combination of mount options and privilege wizardry.
I use RancherOS 1.3.0 with ZFS on /mnt.
I start the container with:
privileged: true
pid: host
volumes:
  - /mnt:/mnt:shared
With this config I can clone snapshots.
For me it worked with (a sketch combining them follows below):
a docker container in privileged mode
the ZFS device mapped into the container (--device /dev/zfs)
zfsutils-linux installed in the container
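Putting those three together, a sketch (the base image and dataset name are made-up examples; the zfsutils-linux package assumes a Debian/Ubuntu-based image):
docker run -it --privileged --device /dev/zfs ubuntu bash
# then, inside the container:
apt-get update && apt-get install -y zfsutils-linux
zfs list                        # should now show the host's pools
zfs snapshot tank/data@backup   # hypothetical dataset name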
Znapzend (a backup utility using ZFS snapshots) covers this in their github page: https://github.com/oetiker/znapzend#running-in-container. I'm using this to automate backups on my NAS to a separate offsite backup NAS.
Here's the relevant info from the link:
---SNIP---
znapzend is also available as docker container image. It needs to be a privileged container depending on permissions.
docker run -d --name znapzend --device /dev/zfs --privileged \
oetiker/znapzend:master
To configure znapzend, run in interactive mode:
docker exec -it znapzend /bin/sh
$ znapzendzetup create ...
# After exiting, restart znapzend container or send the HUP signal to
# reload config
By default, znapzend in container runs with --logto /dev/stdout. If you wish to add different arguments, overwrite them at the end of the command:
docker run --name znapzend --device /dev/zfs --privileged \
oetiker/znapzend:master znapzend --logto /dev/stdout --runonce --debug
Be sure not to daemonize znapzend in the container, as that exits the container immediately.
---SNIP---
Unfortunately, there is no way to do that. We've had the same problem ourselves, and the way we worked around it was by creating a container-less service which the containers can issue commands to, and the container-less service could then issue ZFS commands on their behalf and return the results. It's not a perfect solution, but (at least for us) it was better than nothing.
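A minimal sketch of that pattern (everything here is hypothetical, including the script path and port; a real service should authenticate callers and strictly whitelist commands):
# on the host, relay incoming requests to a small script:
socat TCP-LISTEN:9000,reuseaddr,fork EXEC:/usr/local/bin/zfs-relay.sh

# /usr/local/bin/zfs-relay.sh - read one request, run a whitelisted zfs command:
#!/bin/sh
read -r cmd arg
case "$cmd" in
  list) zfs list -H ;;
  snap) zfs snapshot "$arg" ;;
  *)    echo "unsupported command" ;;
esac

# from inside a container (172.17.0.1 is the default docker bridge gateway):
echo "snap tank/data@nightly" | socat - TCP:172.17.0.1:9000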