How to get the Rancher agent registration token when adding agent nodes? - docker

Normally, you get that command from the master host's dashboard:
$ sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
If you want to deploy agents to multiple nodes automatically, you need to get this token from the master:
5D8B3FD489C00C7F361A:2483142400000:WvMClyNFLXQnT9pLuii3D0sYA
Then every node can just run the agent with this token. Is that right?
But how do you get it via the CLI from the master?

Rancher has an API that lets you interact with it remotely. What you need are called registrationTokens. Here is how to access them.
First, set up API keys in Rancher. Go to API -> Keys -> Add Account API Key and create the keys. If you can't find the buttons, the URL would be 192.168.0.100:8080/env/1a5/api/keys.
Once you have the keys, from a remote host you can do something like this:
curl -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
-X GET \
'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens'
The result will be JSON containing the required data:
{
  ...
  "data": [
    {
      "id": "1c3",
      "type": "registrationToken",
      "links": {
        ...
      },
      "actions": {
        ...
      },
      ...
      "command": "sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.2 http://192.168.0.100:8080/v1/scripts/AAAAAAAAAAAAAAAAAAAA:0000000000000:ZZZZZZZZZZZZZZZZZZZZZZZZZZ",
      ...
    }
  ],
  ...
}
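To script node registration end to end, you can extract the command field from that response and run it on each new node. A minimal sketch, assuming jq is installed and the first entry in data is the token you want:

curl -s -u "${RANCHER_ACCESS_KEY}:${RANCHER_SECRET_KEY}" \
  'http://192.168.0.100:8080/v2-beta/projects/1a5/registrationtokens' \
  | jq -r '.data[0].command'

Running the printed command on a node registers it with the master.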

Related

Docker commit not saving changes (anapsix/webdis)

I started a docker image anapsix/webdis:
sudo docker run -d -p 7379:7379 -e LOCAL_REDIS=true anapsix/webdis
then changed /etc/webdis.json to allow websockets and committed the container with
sudo docker commit <container-id>
However, when I use the new image to start a container, it does not keep the changes. Is there something I'm doing wrong?
Thanks!
In this case the problem is that the anapsix/webdis image has an entrypoint script (/entrypoint.sh) that regenerates /etc/webdis.json every time the container starts, overwriting your committed change.
Looking at the script, you can control the websockets setting by setting the WEBSOCKETS variable when you start the container:
docker run -d -p 7379:7379 \
-e LOCAL_REDIS=true \
-e WEBSOCKETS=true \
anapsix/webdis
When run like this, the generated /etc/webdis.json looks like:
{
  "redis_host": "127.0.0.1",
  "redis_port": 6379,
  "redis_auth": null,
  "http_host": "0.0.0.0",
  "http_port": 7379,
  "threads": 5,
  "pool_size": 10,
  "daemonize": false,
  "websockets": true,
  "database": 0,
  "acl": [
    {
      "disabled": ["DEBUG", "FLUSHDB", "FLUSHALL"]
    },
    {
      "http_basic_auth": "user:password",
      "enabled": ["DEBUG"]
    }
  ],
  "verbosity": 8,
  "logfile": "/dev/stdout"
}
More broadly, using docker commit is almost always the wrong thing to do; you should generate custom images using a Dockerfile (this gives you a much more manageable, reproducible process for creating container images).
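For instance, if you want websockets enabled by default without passing -e every time, a minimal Dockerfile sketch (assuming the entrypoint reads WEBSOCKETS at startup, as shown above):

FROM anapsix/webdis
# The entrypoint script reads this variable when it generates /etc/webdis.json on startup.
ENV WEBSOCKETS=true

Build it with docker build -t webdis-ws . and run it without the extra -e flag.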

Run Filebeat in docker as IoT Edge module

I would like to run Filebeat as a Docker container in Azure IoT Edge, and have it collect logs from the other containers running on the device.
I'm already able to run Filebeat as a Docker container, following the documentation (https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_volume_mounted_configuration):
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:6.8.3 filebeat -e -strict.perms=false
With this command and the correct filebeat.yml file, I'm able to collect logs for every container running on my device.
Now I would like to deploy this configuration as an Azure IoT Edge module.
I created a Docker image with the filebeat.yml file included, using the following Dockerfile:
FROM docker.elastic.co/beats/filebeat:6.8.3
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chmod go-w /usr/share/filebeat/filebeat.yml
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
From the documentation: https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_custom_image_configuration
I tested this Dockerfile by building and running it locally:
docker build -t filebeat .
and
docker run -d \
--name=filebeat \
--user=root \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
filebeat:latest filebeat -e -strict.perms=false
This works fine; logs from other containers are collected as they should be.
Now my question is:
In Azure IoT Edge, how can I mount volumes to access the other Docker containers running on the device, as is done with
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro"
in order to collect logs?
Following this other SO post (Mount path to Azure IoT Edge module), I tried the following in the Azure IoT Edge portal:
"HostConfig": {
"Mounts": [
{
"Target": "/var/lib/docker/containers",
"Source": "/var/lib/docker/containers",
"Type": "volume",
"ReadOnly: true
},
{
"Target": "/var/run/docker.sock",
"Source": "/var/run/docker.sock",
"Type": "volume",
"ReadOnly: true
}
]
}
}
But when I deploy this module, I get the following error:
2019-11-25T10:09:41Z [WARN] - Could not create module FilebeatAgent
2019-11-25T10:09:41Z [WARN] - caused by: create /var/lib/docker/containers: "/var/lib/docker/containers" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
I don't understand this error. How can I specify a path using only [a-zA-Z0-9][a-zA-Z0-9_.-]?
Thanks for your help.
EDIT
In the Azure IoT Edge portal, the following createOptions JSON (using Binds instead of Mounts) works:
{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers",
      "/var/run/docker.sock:/var/run/docker.sock"
    ]
  }
}
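To keep the read-only behavior of the original --volume flags, Docker's bind syntax also accepts an :ro suffix, so the following should work as well (a sketch; not verified on IoT Edge):

{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers:ro",
      "/var/run/docker.sock:/var/run/docker.sock:ro"
    ]
  }
}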
There is an article that describes how to mount storage from the host here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-access-host-storage-from-module

Traefik Docker Swarm Basic Authentication

I recently set up Traefik v1.7.14 in a Docker container, on a Docker Swarm enabled cluster. As a test, I created a simple service:
docker service create --name demo-nginx \
--network traefik-net \
--label traefik.app.port=80 \
--label traefik.app.frontend.auth.basic="test:$$apr1$$LG8ly.Y1$$1J9m2sDXimLGaCSlO8.T20" \
--label traefik.app.frontend.rule="Host:t.myurl.com" \
nginx
As the command above shows, I am simply exposing nginx at the subdomain t of my URL.
When this runs, the service is created successfully. Traefik also shows the service in the Traefik API, as well as in the Traefik administrator UI.
In the Traefik API, the frontend is reported as follows:
"frontend-Host-t-myurl-com-0": {
"entryPoints": [
"http",
"https"
],
"backend": "backend-demo-nginx",
"routes": {
"route-frontend-Host-t-myurl-com-0": {
"rule": "Host:t.myurl.com"
}
},
"passHostHeader": true,
"priority": 0,
"basicAuth": null,
"auth": {
"basic": {}
}
When I go to visit t.myurl.com, I get the authentication prompt, as expected.
However, when I type in my username/password (test:test, in this case), the login prompt just prompts me again and doesn't authenticate me.
I have checked to ensure that I am escaping the characters in the docker label by using:
echo $(htpasswd -nb test test) | sed -e s/\\$/\\$\\$/g
to generate the password.
As part of my testing, I also tried turning off the https entryPoint, to see whether this loop was somehow being triggered by SSL. That didn't seem to have any impact (rule: --label traefik.app.frontend.entryPoints=http). Traefik did respond properly over http, but the password authentication still fell into the same prompting loop as before.
When I remove the traefik.app.frontend.auth.basic label, I can access my site at my url (t.myurl.com). So this issue appears to be isolated within the basic authentication functionality.
My DNS provider is Cloudflare.
If anyone has any ideas, I'd appreciate it.
Maybe you can try this:
echo $(htpasswd -nb your-user your-password);
because you don't need the doubled $$ on the command line. Doubling $ is only required where $ would otherwise be interpolated, such as in a docker-compose.yml file; in a double-quoted shell string, $$ is expanded by the shell to its own process ID, which corrupts the hash before Traefik ever sees it.
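A minimal sketch of the same service creation with the hash passed through untouched, using single quotes so the shell performs no expansion (hash taken from the question):

docker service create --name demo-nginx \
  --network traefik-net \
  --label traefik.app.port=80 \
  --label 'traefik.app.frontend.auth.basic=test:$apr1$LG8ly.Y1$1J9m2sDXimLGaCSlO8.T20' \
  --label traefik.app.frontend.rule="Host:t.myurl.com" \
  nginx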

Tagging a Docker image with auto incremented global version in Gitlab in yaml

I'm currently digging into GitLab CI. I would like a way, in my YAML files, to tag my Docker images with a version number composed in the following fashion: MajorVersion.MinorVersion.AutoincrementedGlobalversionNumber.
I would like to auto-increment the globally defined variable "AutoincrementedGlobalversionNumber" each time I deploy.
I have used CI_PIPELINE_IID; however, it increments on every pipeline run. I need something I can keep track of that increments only when I pack and deploy.
variables:
  CI_VERSION: "1.0.${CI_PIPELINE_IID}"

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" -t "$CI_REGISTRY_IMAGE:$CI_VERSION" ./postfix
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
You probably can't do this with the default GitLab CI variables, but there could be a workaround along the lines of (untested):
Get the registry ID with something like:
$ registry_id=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/container_registry.json" | jq '.[].id')
Query said registry to get the name:
curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" | jq
e.g. it returns the following, and you can grep the name fields for your GlobalVersionNumber:
[
  {
    "name": "latest",
    "location": "registry.gitlab.com/mwasilewski/helm:latest",
    "revision": "85a403337a56e9e6409dfb8185bf9aa5c2135f9a437bd75da82d27471c71feb4",
    "short_revision": "85a403337",
    "total_size": 152246865,
    "created_at": "2016-12-11T08:31:30.126+00:00",
    "destroy_path": "/mwasilewski/helm/registry/repository/31074/tags/latest"
  }
]
Continue with your Docker build and push, after incrementing the GlobalVersionNumber you get back.
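Putting the pieces together, a rough sketch (untested, assuming existing tags of the form 1.0.N and jq available in the job image):

# Find the highest existing 1.0.N tag and bump the patch number.
latest=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" \
  "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" \
  | jq -r '.[].name' | grep -E '^1\.0\.[0-9]+$' | sort -t. -k3 -n | tail -n1)
next_version="1.0.$(( ${latest##*.} + 1 ))"
docker build --pull -t "$CI_REGISTRY_IMAGE:$next_version" ./postfix
docker push "$CI_REGISTRY_IMAGE:$next_version"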
NB: this assumes you are using GitLab's Container Registry
Resources:
https://gitlab.com/gitlab-org/gitlab-ce/issues/40826

Vault Docker Image - Can't Get REST Response

I am deploying the Vault Docker image on Ubuntu 16.04. I can successfully initialize it from inside the container itself, but I can't get any REST responses, and even curl does not work.
I am doing the following:
Create the config file local.json:
{
  "listener": [{
    "tcp": {
      "address": "127.0.0.1:8200",
      "tls_disable": 1
    }
  }],
  "storage": {
    "file": {
      "path": "/vault/data"
    }
  },
  "max_lease_ttl": "10h",
  "default_lease_ttl": "10h"
}
under the /vault/config directory.
Run the command to start the image:
docker run -d -p 8200:8200 -v /home/vault:/vault --cap-add=IPC_LOCK vault server
Enter a shell in the container:
docker exec -it containerId /bin/sh
Run the following inside:
export VAULT_ADDR='http://127.0.0.1:8200' and then vault init
This works fine, but when I try to send a REST request to check whether Vault is initialized:
GET request to the following URL: http://Ip-of-the-docker-host:8200/v1/sys/init
I get no response.
Even the curl command fails:
curl http://127.0.0.1:8200/v1/sys/init
curl: (56) Recv failure: Connection reset by peer
I didn't find a proper explanation anywhere online of what the problem is, or whether I am doing something wrong.
Any ideas?
If a server running in a Docker container binds to 127.0.0.1, it's unreachable from anything outside that specific container (and since containers usually only run a single process, that means it's unreachable by anyone). Change the listener address to 0.0.0.0:8200; if you need to restrict access to the Vault server, bind it to a specific host address in the docker run -p option.
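Concretely, the listener block in local.json would become:

"listener": [{
  "tcp": {
    "address": "0.0.0.0:8200",
    "tls_disable": 1
  }
}]

With the container started via -p 8200:8200, http://Ip-of-the-docker-host:8200/v1/sys/init should then be reachable from outside.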
