network.host setting for Docker running Elasticsearch

I am running a Docker container on my Ubuntu host with the following:
docker run -d --rm -p 9200:9200 -p 9300:9300 --name elasticsearch6.6.1 docker.elastic.co/elasticsearch/elasticsearch:6.6.1
Later on, when I query it like so, I get a failure:
curl 'http://localhost:9200/?pretty'
The failure looks like this:
curl 'http://localhost:9200/?pretty'
[command]/bin/bash --noprofile --norc /home/vsts/work/_temp/dcac22e9-6b6f-443b-8497-c093dd6bb804.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (56) Recv failure: Connection reset by peer
So, my questions are:
How do I start Elasticsearch on Docker and publish the ports 9200 and 9300 to the host?
Is the network.host=_site_ setting required?
Thanks,

After reviewing the image that you mentioned, please check the logs of the container, as you might have an error similar to this:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
In my case it was solved by running sysctl -w vm.max_map_count=262144 on the host. After that, the curl command works as expected:
curl 'http://localhost:9200/?pretty'
{
  "name" : "WgMEnP1",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "2USHCgsWTQOWK5g76uOnNQ",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1fd8f69",
    "build_date" : "2019-02-13T17:10:04.160291Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
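Note that sysctl -w only changes the value until the next reboot. A minimal sketch for making it persistent on the host, assuming root access and the conventional /etc/sysctl.d path:
# persist the setting across reboots on the Docker host
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
# reload all sysctl settings without rebooting
sudo sysctl --system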

Related

Running Selenoid on Jenkins Docker image

I have Jenkins installed in a Docker container, following this guideline:
https://www.jenkins.io/doc/book/installing/docker/
I am also trying to install the Selenoid image from Jenkins using the pipeline below:
pipeline {
    agent any
    stages {
        stage('Prepare Selenoid') {
            steps {
                sh 'wget "https://github.com/aerokube/cm/releases/download/1.8.2/cm_linux_amd64"'
                sh 'chmod +x cm_linux_amd64'
                sh './cm_linux_amd64 selenoid start --vnc'
                sh 'docker ps'
                sh 'docker logs selenoid'
                sh 'curl http://localhost:4444/status'
            }
        }
    }
    post {
        always {
            script {
                sh 'docker stop selenoid'
                sh 'docker rm selenoid'
            }
        }
    }
}
When I run this job, I get the following logs:
...
> Starting Selenoid...
> Successfully started Selenoid
+ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99467c57d5c6 aerokube/selenoid:1.10.9 "/usr/bin/selenoid -…" 2 seconds ago Up Less than a second 0.0.0.0:4444->4444/tcp selenoid
+ docker logs selenoid
2023/01/13 17:09:45 [-] [INIT] [Loading configuration files...]
2023/01/13 17:09:45 [-] [INIT] [Loaded configuration from /etc/selenoid/browsers.json]
2023/01/13 17:09:45 [-] [INIT] [Video Dir: /opt/selenoid/video]
2023/01/13 17:09:45 [-] [INIT] [Logs Dir: /opt/selenoid/logs]
2023/01/13 17:09:45 [-] [INIT] [Using Docker API version: 1.41]
2023/01/13 17:09:45 [-] [INIT] [Timezone: UTC]
2023/01/13 17:09:45 [-] [INIT] [Listening on :4444]
+ curl http://localhost:4444/status
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to localhost port 4444: Connection refused
script returned exit code 7+
...
I also tried different options:
0.0.0.0:4444/status
127.0.0.1:4444/status
Your pipeline code is probably also running in another container, so localhost inside that container is not the localhost of the host machine you are expecting to reach. Try using http://selenoid:4444/ instead, but make sure your container is attached to the selenoid network that the CM tool uses by default.
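A minimal sketch of that, assuming the Jenkins container is named jenkins and the CM tool created the default selenoid network:
# attach the Jenkins container to the network Selenoid runs in
docker network connect selenoid jenkins
# then reach Selenoid by container name instead of localhost in the pipeline step
sh 'curl http://selenoid:4444/status'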

Docker refuses to curl on 443

I am experiencing a network issue with Docker that I haven't seen before. Could it be related to my Ubuntu network configuration, or to my Docker setup?
Sending build context to Docker daemon 36.86kB
Step 1/2 : FROM centos:centos7
---> 5e35e350aded
Step 2/2 : RUN curl https://google.com/
---> Running in d65fe6ad9d57
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to 2a00:1450:4007:808::200e: Cannot assign requested address
regards
The problem was with the network I was using, provided by my cell phone (an Android hotspot on a Xiaomi Redmi 7). I still cannot figure out the real cause of this malfunction.
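The build output shows curl failing against Google's IPv6 address, so one way to narrow this down (a diagnostic sketch, not part of the original answer) is to force IPv4 in the test step with curl's -4 flag and see whether the build then succeeds:
# force curl to resolve and connect over IPv4 only, to rule out a broken IPv6 route
RUN curl -4 https://google.com/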

Ansible: register output of command, parse it, give it to container as environment variable: Variable not defined

I have an Ansible playbook as follows (with irrelevant info stripped):
tasks:
  - name: get public_ip4 output
    shell: curl http://169.254.169.254/latest/meta-data/public-ipv4
    register: public_ip4
  - debug: var=public_ipv4.stdout
  - name: Create docker_pull
    template: <SNIP>
  - name: Pull containers
    command: "sh /root/pull_agent.sh"
  - name: (re)-create the agent
    docker_container:
      name: agent
      image: registry.gitlab.com/project_agent
      state: started
      exposed_ports: 8889
      published_ports: 8889:8889
      recreate: yes
      env:
        host_machine: public_ipv4.stdout
The target machine is an AWS EC2 instance. The purpose is to get its public IPv4 address and pass it as an environment variable to a container. The container runs a Python agent that will use os.environ.get('host_machine') to access the IPv4 address of the EC2 instance.
The output from the debug logs is (with irrelevant info removed and the IPv4 address replaced with a placeholder):
PLAY [swarm_manager_main] ******************************************************
TASK [get public_ip4 output] ***************************************************
[WARNING]: Consider using the get_url or uri module rather than running
'curl'. If you need to use command because get_url or uri is insufficient you
can add 'warn: false' to this command task or set 'command_warnings=False' in
ansible.cfg to get rid of this message.
changed: [tm001.stackl.io] => {"changed": true, "cmd": "curl http://169.254.169.254/latest/meta-data/public-ipv4", "delta": "0:00:00.013260", "end": "2019-08-02 08:36:26.649844", "rc": 0, "start": "2019-08-02 08:36:26.636584", "stderr": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 12 100 12 0 0 2600 0 --:--:-- --:--:-- --:--:-- 3000", "stderr_lines": [" % Total % Received % Xferd Average Speed Time Time Time Current", " Dload Upload Total Spent Left Speed", "", " 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0", "100 12 100 12 0 0 2600 0 --:--:-- --:--:-- --:--:-- 3000"], "stdout": "<HIDDEN>", "stdout_lines": ["52.210.80.33"]}
TASK [debug] *******************************************************************
ok: [tm001.stackl.io] => {
"public_ipv4.stdout": "VARIABLE IS NOT DEFINED!: 'public_ipv4' is undefined"
}
TASK [Create docker_pull] ******************************************************
<SNIP>
TASK [Pull containers] *********************************************************
<SNIP>
TASK [(re)-create the agent] ********************************************
changed: <SNIP> ["host_machine=public_ipv4.stdout", <SNIP>
I don't understand why the public_ipv4 variable is not resolved correctly. I've tried multiple things (including using set_fact and setting a new variable), but to no avail.
What am I doing wrong?
There is a typo in your playbook: you register public_ip4 but reference public_ipv4:
register: public_ip4
debug: var=public_ipv4.stdout
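Besides the typo, the task output ("host_machine=public_ipv4.stdout") shows that the env value is passed as a literal string; in Ansible a registered variable has to be interpolated with Jinja2 templating ("{{ ... }}"). A minimal sketch of the corrected tasks, assuming you standardize on the name public_ipv4:
- name: get public_ipv4 output
  shell: curl http://169.254.169.254/latest/meta-data/public-ipv4
  register: public_ipv4                          # same name as referenced below
- debug: var=public_ipv4.stdout
- name: (re)-create the agent
  docker_container:
    name: agent
    image: registry.gitlab.com/project_agent
    state: started
    exposed_ports: 8889
    published_ports: 8889:8889
    recreate: yes
    env:
      host_machine: "{{ public_ipv4.stdout }}"   # templated value, not a literal string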

Unable to start container from Apache-Superset container (tar file) image (Failed to connect to localhost port)

I'm quite new to Docker and Apache Superset and am trying to run a container (using Docker) from the container image. I loaded the .tar file with
docker load --input ./inc_superset.tar
which went as expected. I then tried running the container from this image with
docker run --cidfile ./cid.txt <IMAGE_ID>
This starts my container, but it has an unhealthy status. Upon inspecting the container (with docker inspect) I get a huge JSON; below is a snippet of the health log (I can post the entire log on request).
"Health": {
"Status": "unhealthy",
"FailingStreak": 5,
"Log": [
{
"Start": "2019-01-22T19:59:00.8036984+05:30",
"End": "2019-01-22T19:59:01.5698797+05:30",
"ExitCode": 7,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8088: Connection refused\n"
},
...
...
{
"Start": "2019-01-22T20:01:02.589517677+05:30",
"End": "2019-01-22T20:01:02.794486003+05:30",
"ExitCode": 7,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 8088: Connection refused\n"
}
]
}
Am I making a mistake? Am I missing something? Any troubleshooting help would be appreciated.
Thanks
The problem was that the webserver within the superset container doesn't run by default with the configuration available on apache.org (as of 7 September 2019).
Solved it as follows:
# Go into the container
docker-compose exec superset bash
# Start the webserver on all interfaces so that we can access it from the Docker host
superset run -p 8088 --host 0.0.0.0
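Note that the original docker run command didn't publish any ports, so even with the webserver listening on 0.0.0.0 inside the container it won't be reachable from the host as localhost:8088. A hedged sketch of the run command with the port published, assuming the image exposes 8088:
docker run -p 8088:8088 --cidfile ./cid.txt <IMAGE_ID>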
I was facing the same issue. I ran it via docker-compose using the instructions on https://superset.incubator.apache.org/installation.html but got no response from localhost:8088.
docker inspect shows State.Health.Status = "unhealthy" and the log shows several entries with curl: (7) Failed to connect to localhost port 8088: Connection refused.
docker ps shows that the container is exposed on 0.0.0.0:8088.

Docker healthcheck reporting "healthy" always

I want to be able to report "unhealthy" when a container becomes so (based on various conditions). For now I just return 500 on an even-numbered call and 200 OK on an odd-numbered call.
My docker file looks like so:
FROM golang:alpine
RUN apk update
RUN apk add curl
RUN mkdir /service
COPY healthcheck.go /service
COPY ./counts /service
EXPOSE 9080
WORKDIR /service
HEALTHCHECK --interval=5s --timeout=500ms CMD curl --fail http://localhost:9080/health || exit 1
CMD ["go", "run", "/service/healthcheck.go"]
With docker inspect I am able to see that there are timeouts (induced from the code) and OK statuses. However, "Health.Status" in the inspect output shows
"Status": "healthy"
docker inspect output:
"Health": {
"Status": "healthy",
"FailingStreak": 1,
"Log": [
{
"Start": "2018-03-10T02:44:12.48947433Z",
"End": "2018-03-10T02:44:12.99252883Z",
"ExitCode": -1,
"Output": "Health check exceeded timeout (500ms)"
},
{
"Start": "2018-03-10T02:44:18.004402431Z",
"End": "2018-03-10T02:44:18.069316531Z",
"ExitCode": 0,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\nThis time it has to be healthy 252\n\r100 43 100 43 0 0 43000 0 --:--:-- --:--:-- --:--:-- 43000\nnext253"
},
{
"Start": "2018-03-10T02:44:23.078242333Z",
"End": "2018-03-10T02:44:23.583552633Z",
"ExitCode": -1,
"Output": "Health check exceeded timeout (500ms)"
},
{
"Start": "2018-03-10T02:44:28.593083534Z",
"End": "2018-03-10T02:44:28.665864034Z",
"ExitCode": 0,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 43 100 43 0 0 7166 0 --:--:-- --:--:-- --:--:-- 8600\n\nThis time it has to be healthy 254\nnext255"
},
{
"Start": "2018-03-10T02:44:33.671220836Z",
"End": "2018-03-10T02:44:34.177248436Z",
"ExitCode": -1,
"Output": "Health check exceeded timeout (500ms)"
}
]
}
},
Any pointers on how to report the container as unhealthy?
Time for a bit of magic without curl or any other external stuff:
There is a difference between the Ubuntu 'nc' and the BusyBox 'nc' version used in the Alpine image.
The point is that the regular nc waits for a response, while the BusyBox one seems not to.
Because of that, I use { ... } to encapsulate 'printf' and 'sleep' into a single subshell that is piped to nc.
By doing that, nc has a chance to get the response from the endpoint and pipe it out to grep.
The exit status of grep decides the health status.
HEALTHCHECK --interval=1s --timeout=5s --retries=3 \
CMD { printf "GET /fpm-ping HTTP/1.0\r\n\r\n"; sleep 0.5; } | nc -w 1 127.0.0.1 8080 | grep pong
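The snippet above probes an fpm-ping endpoint and expects "pong"; a hedged sketch adapting the same idea to the /health endpoint on port 9080 from the question, assuming the Go handler returns a normal HTTP status line containing "200 OK":
HEALTHCHECK --interval=5s --timeout=500ms --retries=1 \
  CMD { printf "GET /health HTTP/1.0\r\n\r\n"; sleep 0.5; } | nc -w 1 127.0.0.1 9080 | grep -q "200 OK"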
Yes, you can allow docker to report the container as unhealthy by changing your HEALTHCHECK in Dockerfile to the one below:
HEALTHCHECK --interval=5s --retries=1 --timeout=500ms CMD curl --fail http://localhost:9080/health || exit 1
If a single run of the check takes longer than timeout seconds then
the check is considered to have failed.
It takes retries consecutive failures of the health check for the
container to be considered unhealthy.
(Ref: https://docs.docker.com/engine/reference/builder/#healthcheck)
By default, docker will attempt to retry for 3 times and when it fails for three consecutive times, then the container is considered to be unhealthy. At the moment, you return status 500 on an even numbered request and status 200 on an odd numbered request. When it fails (on the even numbered request), docker will retry again, and this time it will be an odd numbered request, so it reports the container as healthy.
By setting retries to 1, docker will report the container as unhealthy when the first attempt fails, and wait for 5 seconds to attempt the healthcheck again.
It turns out --retries was the solution. The changed Dockerfile is listed here:
FROM golang:alpine
RUN apk update
RUN apk add curl
RUN mkdir /service
COPY healthcheck.go /service
COPY ./counts /service
EXPOSE 9080
WORKDIR /service
HEALTHCHECK --interval=5s --timeout=500ms --retries=1 CMD curl --fail http://localhost:9080/health || exit 1
CMD ["go", "run", "/service/healthcheck.go"]
