Docker commit not saving changes (anapsix/webdis)

I started a Docker container from the anapsix/webdis image:
sudo docker run -d -p 7379:7379 -e LOCAL_REDIS=true anapsix/webdis
and changed /etc/webdis.json to allow websockets, then committed the container with
sudo docker commit <container-id>
However, when I use the new image to start a container, it does not keep the changes. Is there something I'm doing wrong?
Thanks!

In this case your problem is that the anapsix/webdis image has an entrypoint script (/entrypoint.sh) that generates /etc/webdis.json when the container starts.
Looking at the script, you can set the value of websockets by setting the WEBSOCKETS variable when you start the container:
docker run -d -p 7379:7379 \
  -e LOCAL_REDIS=true \
  -e WEBSOCKETS=true \
  anapsix/webdis
When we run it like this, the generated /etc/webdis.json looks like:
{
  "redis_host": "127.0.0.1",
  "redis_port": 6379,
  "redis_auth": null,
  "http_host": "0.0.0.0",
  "http_port": 7379,
  "threads": 5,
  "pool_size": 10,
  "daemonize": false,
  "websockets": true,
  "database": 0,
  "acl": [
    {
      "disabled": ["DEBUG", "FLUSHDB", "FLUSHALL"]
    },
    {
      "http_basic_auth": "user:password",
      "enabled": ["DEBUG"]
    }
  ],
  "verbosity": 8,
  "logfile": "/dev/stdout"
}
More broadly, using docker commit is almost always the wrong thing to do; you should generate custom images using a Dockerfile (this gives you a much more manageable, reproducible process for creating container images).
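For example, if you want the websocket setting baked into an image of your own, a minimal Dockerfile sketch could simply set the environment variables that the entrypoint reads (my-webdis is a placeholder name):

FROM anapsix/webdis
# Defaults picked up by /entrypoint.sh when it generates /etc/webdis.json
ENV LOCAL_REDIS=true
ENV WEBSOCKETS=true

Build it with docker build -t my-webdis . and start it with docker run -d -p 7379:7379 my-webdis; no -e flags are needed.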


How to run podman commands on host from within container

In the case of Docker, this can be achieved by mounting docker.sock inside the container.
But there is no daemon in Podman, so what's the replacement for docker.sock?
Typically, I want to check the Podman images present on the host and start a new container.
I'm using Podman with --privileged=true and root.
There is a new API (status: experimental) that was announced in a blog post in January 2020.
[root@fedora31 ~]# podman --version
podman version 1.8.0
[root@fedora31 ~]# podman system service --timeout 500000 unix://root/foobar.sock
This function is EXPERIMENTAL.
As the API is still experimental this might change but right now you could make a query like this:
[root@fedora31 ~]# curl -s --unix-socket /root/foobar.sock http://d/v1.24/images/json | python3 -m json.tool
[
    {
        "Containers": 0,
        "Created": 1572319417,
        "Id": "f0858ad3febdf45bb2e5501cb459affffacef081f79eaa436085c3b6d9bd46ca",
        "Labels": {
            "maintainer": "Clement Verna <cverna@fedoraproject.org>"
        },
        "ParentId": "",
        "RepoDigests": [
            "sha256:8fa60b88e2a7eac8460b9c0104b877f1aa0cea7fbc03c701b7e545dacccfb433"
        ],
        "RepoTags": [
            "docker.io/library/fedora:latest"
        ],
        "SharedSize": 0,
        "Size": 201095865,
        "VirtualSize": 201095865,
        "CreatedTime": "0001-01-01T00:00:00Z"
    },
    null
]
[root@fedora31 ~]#
The command python3 -m json.tool was added to pretty-print the JSON output.
I think the UNIX socket can be accessed from inside a container by using the bind-mounting technique (that was mentioned in the question).
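As a rough sketch of that technique (the image name and in-container socket path are placeholders, and it assumes curl is available in the image):

podman run --rm --privileged \
  -v /root/foobar.sock:/run/podman.sock \
  fedora \
  curl -s --unix-socket /run/podman.sock http://d/v1.24/images/json

The host's API socket is bind-mounted into the container, so the containerized client talks to the Podman service running on the host.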
According to the man page, the command podman system service also accepts the flag --varlink.
Using Varlink instead of the new API might be a better solution right now as it is more mature, but it will be deprecated in the future.

Run Filebeat in docker as IoT Edge module

I would like to run Filebeat as a Docker container in Azure IoT Edge, and I would like Filebeat to collect logs from the other running containers.
I'm already able to run Filebeat as a Docker container by following the documentation (https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_volume_mounted_configuration):
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  docker.elastic.co/beats/filebeat:6.8.3 filebeat -e -strict.perms=false
With this command and the correct filebeat.yml file, I'm able to collect logs for every running container on my device.
Now I would like to deploy this configuration as an Azure IoT Edge module.
I created a Docker image with the filebeat.yml file included, using the following Dockerfile:
FROM docker.elastic.co/beats/filebeat:6.8.3
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chmod go-w /usr/share/filebeat/filebeat.yml
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
From the documentation: https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_custom_image_configuration
I tested this Dockerfile locally by running
docker build -t filebeat .
and
docker run -d \
  --name=filebeat \
  --user=root \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  filebeat:latest filebeat -e -strict.perms=false
This works fine; logs from other containers are collected as they should be.
Now my question is:
In Azure IoT Edge, how can I mount volumes to access the other Docker containers running on the device, as is done with
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro"
in order to collect logs?
Following this other SO post (Mount path to Azure IoT Edge module), I tried the following in the Azure IoT Edge portal:
"HostConfig": {
"Mounts": [
{
"Target": "/var/lib/docker/containers",
"Source": "/var/lib/docker/containers",
"Type": "volume",
"ReadOnly: true
},
{
"Target": "/var/run/docker.sock",
"Source": "/var/run/docker.sock",
"Type": "volume",
"ReadOnly: true
}
]
}
}
But when I deploy this module I have the following error:
2019-11-25T10:09:41Z [WARN] - Could not create module FilebeatAgent
2019-11-25T10:09:41Z [WARN] - caused by: create /var/lib/docker/containers: "/var/lib/docker/containers" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
I don't understand this error. How can I specify a path using only [a-zA-Z0-9][a-zA-Z0-9_.-]?
Thanks for your help.
EDIT
In the Azure IoT Edge portal, createOptions JSON:
{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers",
      "/var/run/docker.sock:/var/run/docker.sock"
    ]
  }
}
There is an article that describes how to mount storage from the host here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-access-host-storage-from-module
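As a note on the original error: with "Type": "volume", Docker treats the Source as a named volume, and slashes are not allowed in volume names, which is why the host paths were rejected; Binds interprets the source as a host path instead. If you also want the read-only behavior of the original docker run flags, a Binds entry accepts an :ro suffix, e.g. (a sketch with the same paths as above):

{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers:ro",
      "/var/run/docker.sock:/var/run/docker.sock:ro"
    ]
  }
}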

Docker View Historical Logs

Background:
For development purposes I do a lot of docker-compose up -d and docker-compose stop.
To view the logs of a container, I do either
- docker logs --details --since=1m -t -f container_name
or
- docker inspect --format='{{.LogPath}}' container_name
  cat path-from-previous
The problem is that when I want to view logs from 10 days ago, there are none; the logs only contain today's entries.
When I do a docker inspect container_name I get the following:
"Created": "todays-timestamp"
My logging is the default config:
"LogConfig": {
"Type": "json-file",
"Config": {}
},
The reason behind this is that there is no rotation of your Docker logs.
If you are using a Linux system, go to:
/etc/logrotate.d/
and create the file docker-container there (i.e. /etc/logrotate.d/docker-container).
Write this into the file:
/var/lib/docker/containers/*/*.log {
  rotate 7
  daily
  compress
  missingok
  delaycompress
  copytruncate
}
This picks up the log files of all containers and rotates and compresses them daily.
You can test it with:
logrotate -fv /etc/logrotate.d/docker-container
Then enter your Docker folder /var/lib/docker/containers/[CONTAINER ID]/ and you can see the rotated files.
reference: https://sandro-keil.de/blog/logrotate-for-docker-container/
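As an alternative to logrotate, Docker's json-file logging driver can rotate logs itself. A sketch for /etc/docker/daemon.json (the values are examples; the daemon must be restarted, and the options only apply to containers created afterwards):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "7"
  }
}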

How to get the Rancher agent script code when adding agents to node hosts?

Normally, you get that code from the master host's dashboard:
$ sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
If you want to deploy multiple nodes automatically to other hosts, it's necessary to get this code from the master:
5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
Then every node can just add an agent with this code. Is that right?
But how can I get it from the master via the CLI?
Rancher has an API that enables you to interact with it remotely. What you require is called registrationTokens. Now, how to access them.
First, set up API keys in your Rancher. Go to API -> Keys -> Add Account API Key and create the keys. If you can't find the buttons, the URL is 192.168.0.100:8080/env/1a5/api/keys.
Now you know the keys and from remote host you can do something like this:
curl -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
  -X GET \
  'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens'
The result will be JSON containing the required data:
{
  ...
  "data": [
    {
      "id": "1c3",
      "type": "registrationToken",
      "links": {
        ...
      },
      "actions": {
        ...
      },
      ...
      "command": "sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/AAAAAAAAAAAAAAAAAAAA:0000000000000:ZZZZZZZZZZZZZZZZZZZZZZZZZZ",
      ...
    }
  ],
  ...
}
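If you want to script this on the master, you could extract just the command field with jq (a sketch; it assumes jq is installed and that the first entry in data is the token you want):

curl -s -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
  'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens' \
  | jq -r '.data[0].command'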

Marathon Docker jobs hang in deployment state

Hi, I have been successful so far with simple jobs in Marathon, but it gets stuck when I try to deploy a Docker job to Mesos through the Marathon framework.
I am using the JSON file below to deploy the Docker job:
{
  "id": "pga-docker",
  "cpus": 0.2,
  "mem": 1024.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "pga",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}
My pga Docker image has no problem when run as a container, but through Marathon it's just not working: it stays in the deploying state forever.
I am using the command line below:
curl -X POST http://10.141.141.10:8080/v2/apps -d @basic-3.json -H "Content-type: application/json"
But when I run the same image from the Marathon UI, it works. To run it from Marathon I used "docker run --publish 6060:80 --name test --rm pga" in the cmd field of the UI's new job page.
Does anyone have an idea why it hangs with the command-line approach?
This is what I found during some trial and error with the JSON file.
When we run a Docker image locally, an entrypoint or cmd defined in the image is executed when the container runs. But this is not the same for Mesos/Marathon: my observation is that if I explicitly mention a cmd in the deployment JSON, it works fine.
"cmd": "sh pga-setup.sh"
I would love to know if anyone has faced a similar issue and solved it another way.
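For reference, the app definition with that fix applied would look like this (a sketch combining the original JSON with the cmd field from the observation above):

{
  "id": "pga-docker",
  "cmd": "sh pga-setup.sh",
  "cpus": 0.2,
  "mem": 1024.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "pga",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 6565, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}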
