I'm having a weird issue with ES 5.6.5 in a Docker container in swarm mode. Here's the code in question:
from elasticsearch import Elasticsearch
from settings import ES_HOSTS

db = Elasticsearch(hosts=ES_HOSTS)
db.indices.exists(index=product_id)  # product_id is an index name, e.g. "85a9b708-e89d-11e7-887a-02420aff0008"
Under the hood, indices.exists() performs an HTTP HEAD request, and that request times out without ever getting a response. I confirmed this by making the same HEAD request with curl (curl -X HEAD http://elasticsearch:9200/85a9b708-e89d-11e7-887a-02420aff0008) and it does indeed time out. Other requests work just fine; for example, a GET request to the same URL returns the expected error saying the index does not exist.
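To rule the client library in or out, here is a minimal reproduction sketch in plain Python, assuming the requests package is installed and the service is reachable as http://elasticsearch:9200 from the calling container:

import requests

url = "http://elasticsearch:9200/85a9b708-e89d-11e7-887a-02420aff0008"

# GET comes back quickly with a 404 and an index_not_found_exception body.
print(requests.get(url, timeout=5).status_code)

# HEAD should also come back 404 immediately; if this raises requests.exceptions.ReadTimeout,
# the problem is in how the HEAD response is consumed, not in Elasticsearch itself.
print(requests.head(url, timeout=5).status_code)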
When I run the same ES image in a standalone Docker container on my machine, configured exactly the same way and with the same code making the calls, it works without a problem.
Here's the relevant swarm configuration section:
elasticsearch:
  image: "docker.elastic.co/elasticsearch/elasticsearch:5.6.5"
  environment:
    - cluster.name=raul_elasticsearch
    - xpack.security.enabled=false
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  volumes:
    - esdata:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  deploy:
    resources:
      limits:
        memory: 6G
      reservations:
        memory: 6G
And this is the command I ran to start the standalone ES Docker container:
docker run --rm -p 9200:9200 -p 9300:9300 -e "bootstrap.memory_lock=true" -e "discovery.type=single-node" -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "xpack.security.enabled=false" -d --name raul_elasticsearch docker.elastic.co/elasticsearch/elasticsearch:5.6.5
Any thoughts on what could be causing the issue?
UPDATE 1: Looking at the debug logs from the ES node running in the Docker swarm, I am getting the following messages:
Dec 24 17:10:33: [2017-12-24T17:10:33,839][DEBUG][r.suppressed ] path: /85a9b708-e89d-11e7-887a-02420aff0008, params: {index=85a9b708-e89d-11e7-887a-02420aff0008}
Dec 24 17:10:33: org.elasticsearch.index.IndexNotFoundException: no such index
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.infe(IndexNameExpressionResolver.java:676) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:630) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:578) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:168) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:144) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:77) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.admin.indices.get.TransportGetIndexAction.checkBlock(TransportGetIndexAction.java:63) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.admin.indices.get.TransportGetIndexAction.checkBlock(TransportGetIndexAction.java:47) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.doStart(TransportMasterNodeAction.java:134) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction.start(TransportMasterNodeAction.java:126) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:104) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.master.TransportMasterNodeAction.doExecute(TransportMasterNodeAction.java:54) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1256) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getIndex(AbstractClient.java:1357) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction.lambda$prepareRequest$0(RestGetIndicesAction.java:97) ~[elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:80) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:262) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:200) [elasticsearch-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.Netty4HttpServerTransport.dispatchRequest(Netty4HttpServerTransport.java:505) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:80) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:68) [transport-netty4-5.6.5.jar:5.6.5]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
Dec 24 17:10:33: at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
Additionally, I get these messages, most of which seem to be related to a periodic task, but one caught my eye: "Can not start an object, expecting field name (context: Object)".
Dec 24 17:10:36: [2017-12-24T17:10:36,919][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:36: [2017-12-24T17:10:36,923][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:46: [2017-12-24T17:10:46,932][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:46: [2017-12-24T17:10:46,935][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:56: [2017-12-24T17:10:56,920][DEBUG][o.e.x.m.a.GetDatafeedsStatsAction$TransportAction] [hPjV7-n] Get stats for datafeed '_all'
Dec 24 17:10:56: [2017-12-24T17:10:56,927][DEBUG][o.e.x.m.e.l.LocalExporter] monitoring index templates and pipelines are installed on master node, service can start
Dec 24 17:10:58: [2017-12-24T17:10:58,707][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] saving watch records [4]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status]
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch_4e9bb936-dc65-4795-8be3-b2b2c1660460-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,711][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,712][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_kibana_version_mismatch_4e9bb936-dc65-4795-8be3-b2b2c1660460-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,713][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch_0a13442f-96dc-4732-b0de-1e87a9dc05ab-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,714][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_version_mismatch_0a13442f-96dc-4732-b0de-1e87a9dc05ab-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,714][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] found [15178] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,715][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] hit [{
Dec 24 17:10:58: "error" : "Can not start an object, expecting field name (context: Object)"
Dec 24 17:10:58: }]
Dec 24 17:10:58: [2017-12-24T17:10:58,716][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] found [1] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,716][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_elasticsearch_cluster_status_b35f5be6-4d4b-4fa2-a50c-6f18d4b6d949-2017-12-24T17:10:58.707Z] hit [{
Dec 24 17:10:58: "error" : "Can not start an object, expecting field name (context: Object)"
Dec 24 17:10:58: }]
Dec 24 17:10:58: [2017-12-24T17:10:58,718][DEBUG][o.e.x.w.e.ExecutionService] [hPjV7-n] executing watch [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch]
Dec 24 17:10:58: [2017-12-24T17:10:58,718][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch_c88db510-0a7e-4520-a085-8381f4278288-2017-12-24T17:10:58.707Z] found [0] hits
Dec 24 17:10:58: [2017-12-24T17:10:58,719][DEBUG][o.e.x.w.i.s.ExecutableSimpleInput] [hPjV7-n] [fxCyOMU8STOiNqoLUtOQhQ_logstash_version_mismatch_c88db510-0a7e-4520-a085-8381f4278288-2017-12-24T17:10:58.707Z] found [0] hits
(I have no idea why it's complaining about Kibana and Logstash; I don't have them installed.)
UPDATE 2: Using curl's --head option instead of -X HEAD makes it work, and I have no idea why. Verbose output looks like this:
$ curl -v --head http://localhost:9200/85a9b708-e89d-11e7-887a-02420aff0008
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> HEAD /85a9b708-e89d-11e7-887a-02420aff0008 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
content-type: application/json; charset=UTF-8
< content-length: 467
content-length: 467
That is the expected response, and the command exits normally.
However, this never exits:
$ curl -v -X HEAD http://localhost:9200/85a9b708-e89d-11e7-887a-02420aff0008
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> HEAD /85a9b708-e89d-11e7-887a-02420aff0008 HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-type: application/json; charset=UTF-8
< content-length: 467
<
What's with that warning from the second command?
The warning itself explains the hang: -X HEAD only overrides the method string, so curl still waits for a response body of content-length bytes, which a HEAD response never sends; -I/--head tells curl to treat the request as a real HEAD and not expect a body. As for the original problem, it looks like it was a bug in the elasticsearch==5.4.0 client library I was using. Updating it to a newer version (5.5.1) fixed the issue.
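If anyone else hits this, a quick way to confirm which client version actually ends up inside the container is a small sketch like the one below, reusing the connection settings from the question (as far as I can tell, __versionstr__ is the version string the elasticsearch package exposes):

import elasticsearch
from elasticsearch import Elasticsearch
from settings import ES_HOSTS

print(elasticsearch.__versionstr__)  # should print "5.5.1" (or newer) after the upgrade

db = Elasticsearch(hosts=ES_HOSTS)
print(db.indices.exists(index="85a9b708-e89d-11e7-887a-02420aff0008"))  # False, instead of hanging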
Related
I deployed Node Exporter to monitor my system, but another app on my server already uses port 9100, so the node_exporter service cannot start.
How do I change Node Exporter to another port?
Thanks for reading; this is my first post.
I'm trying this on RHEL 8. Here is the journal output:
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-0724T14:43:11+07:00" level=info msg="Build context (go=go1.12.5, user=root#b50852a1acba,date=20190604-16:41:18)" source="node_exporter.go:157"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg="Enabled collectors:" source="node_exporter.go:97"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - arp" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - bcache" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - bonding" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - conntrack" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx node_exporter[16516]: time="2019-07-24T14:43:11+07:00" level=info msg=" - cpu" source="node_exporter.go:104"
Jul 24 14:43:11 xxxxxxx systemd[1]: node_exporter.service: main process exited, code=exited, status=1/FAILURE
Jul 24 14:43:11 xxxxxxx systemd[1]: Unit node_exporter.service entered failed state.
Jul 24 14:43:11 xxxxxxx systemd[1]: node_exporter.service failed.
I found the answer.
Just add --web.listen-address=:9500 after ExecStart=/usr/local/bin/node_exporter in the systemd unit file.
It looks like this:
ExecStart=/usr/local/bin/node_exporter --web.listen-address=:[custom port]
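After a systemctl daemon-reload and a service restart, a quick sanity check can confirm the exporter answers on the new port; a minimal sketch, assuming the illustrative port 9500 from above and that Python with the requests package is available on the host:

import requests

# 9500 is only the example value used above; substitute your custom port.
resp = requests.get("http://localhost:9500/metrics", timeout=5)
print(resp.status_code)                       # 200 once node_exporter is listening
print(len(resp.text.splitlines()), "lines")   # a few hundred metric lines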
In order to keep track of the volumes used, I like to use named volumes. Currently I have one named volume:
docker volume ls
DRIVER VOLUME NAME
local mongodb
My docker-compose file is something like this:
version: "3"
services:
db:
image: mongo:4.0.6
container_name: mongo
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: macmongoroot
volumes:
- mongodb:/data/db
volumes:
mongodb:
external:
name: mongodb
networks:
default:
external:
name: macbook
Every time I run docker-compose up -d, Docker Compose adds a new anonymous volume rather than using the named one:
docker volume ls
DRIVER VOLUME NAME
local a4a02fffa9bbbdd11c76359264a5bf24614943c5b1b0070b33a84e51266c58d7
local mongodb
This docker-compose file works fine on my server, but I'm having this issue on Docker Desktop. I'm currently using Docker Desktop version 2.0.0.3 (31259). Any help would be appreciated, thanks.
The anonymous volume belongs to /data/configdb, which is declared in the image's Dockerfile:
VOLUME /data/db /data/configdb
By running docker inspect on the created container, you will notice the following:
"Mounts": [
{
"Type": "volume",
"Name": "mongodb",
"Source": "/var/lib/docker/volumes/mongodb/_data",
"Destination": "/data/db",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "be86274b1f6009eb60b8acb3855f51931c4ccc7df700666555422396688b0dd6",
"Source": "/var/lib/docker/volumes/be86274b1f6009eb60b8acb3855f51931c4ccc7df700666555422396688b0dd6/_data",
"Destination": "/data/configdb",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
]
This means the mongodb volume is indeed being used for the data, as you intended; however, another volume will still be created for /data/configdb. You can also verify that the data exists by checking the source path /var/lib/docker/volumes/mongodb/_data, where the MongoDB data is saved:
$ ls /var/lib/docker/volumes/mongodb/_data
total 328
drwxr-xr-x 4 999 999 4096 Mar 8 11:02 .
drwxr-xr-x 3 root root 4096 Mar 8 10:58 ..
-rw------- 1 999 999 16384 Mar 8 11:00 collection-0--2358474299739251284.wt
-rw------- 1 999 999 36864 Mar 8 11:01 collection-2--2358474299739251284.wt
-rw------- 1 999 999 4096 Mar 8 11:00 collection-4--2358474299739251284.wt
-rw------- 1 999 999 16384 Mar 8 11:00 collection-7--2358474299739251284.wt
drwx------ 2 999 999 4096 Mar 8 11:11 diagnostic.data
-rw------- 1 999 999 16384 Mar 8 11:00 index-1--2358474299739251284.wt
-rw------- 1 999 999 36864 Mar 8 11:01 index-3--2358474299739251284.wt
-rw------- 1 999 999 4096 Mar 8 10:58 index-5--2358474299739251284.wt
-rw------- 1 999 999 4096 Mar 8 11:01 index-6--2358474299739251284.wt
-rw------- 1 999 999 16384 Mar 8 10:58 index-8--2358474299739251284.wt
-rw------- 1 999 999 16384 Mar 8 10:58 index-9--2358474299739251284.wt
drwx------ 2 999 999 4096 Mar 8 11:00 journal
-rw------- 1 999 999 16384 Mar 8 11:00 _mdb_catalog.wt
-rw------- 1 999 999 2 Mar 8 11:00 mongod.lock
-rw------- 1 999 999 36864 Mar 8 11:02 sizeStorer.wt
-rw------- 1 999 999 114 Mar 8 10:58 storage.bson
-rw------- 1 999 999 45 Mar 8 10:58 WiredTiger
-rw------- 1 999 999 4096 Mar 8 11:00 WiredTigerLAS.wt
-rw------- 1 999 999 21 Mar 8 10:58 WiredTiger.lock
-rw------- 1 999 999 1065 Mar 8 11:02 WiredTiger.turtle
-rw------- 1 999 999 69632 Mar 8 11:02 WiredTiger.wt
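The same inspection can be scripted; here is a small sketch using the Docker SDK for Python (assuming the docker package is installed and the container is named mongo, as in the compose file above):

import docker

client = docker.from_env()
mongo = client.containers.get("mongo")

# One line per mount: the named "mongodb" volume on /data/db and the
# anonymous (hash-named) volume that backs /data/configdb.
for mount in mongo.attrs["Mounts"]:
    print(mount.get("Name"), "->", mount["Destination"])

If the extra anonymous volume bothers you, the usual approach is to map a second named volume onto /data/configdb in the compose file as well.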
I have a problem with Elasticsearch 5 in Docker.
Stack compose file:
version: "3.4"
services:
elastic01: &elasticbase
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.7
networks:
- default
restart: always
environment:
- node.name=elastic01
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- xpack.monitoring.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic02,elastic03
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
deploy:
placement:
constraints: [node.hostname == node1]
elastic02:
<<: *elasticbase
depends_on:
- elastic01
environment:
- node.name=elastic02
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- xpack.monitoring.enabled=false
- xpack.watcher.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic01,elastic03
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
deploy:
placement:
constraints: [node.hostname == node2]
elastic03:
<<: *elasticbase
depends_on:
- elastic01
volumes:
- /var/docker/elastic:/usr/share/elasticsearch/data
environment:
- node.name=elastic03
- cluster.name=elastic
- network.host=0.0.0.0
- xpack.security.enabled=false
- bootstrap.memory_lock=false ## Docker swarm does not support that
- discovery.zen.minimum_master_nodes=2
- discovery.zen.ping.unicast.hosts=elastic01,elastic02
deploy:
placement:
constraints: [node.hostname == node3]
networks:
default:
driver: overlay
attachable: true
When I deploy the stack file, it works like a charm. _cluster/health shows that the nodes are up and running and the status is "green", but after a while, periodically, the system goes down with the following exception:
Feb 10 09:39:39 : [2018-02-10T08:39:39,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:39 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5167] timed out after [3750ms]
Feb 10 09:39:39 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:39 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:39 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:39 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:39 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Feb 10 09:39:40 : [2018-02-10T08:39:40,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:40 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5172] timed out after [3750ms]
Feb 10 09:39:40 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:40 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:40 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:40 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:40 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Feb 10 09:39:41 : [2018-02-10T08:39:41,159][WARN ][o.e.d.z.UnicastZenPing ] [elastic01] failed to send ping to [{elastic03}{2WS6GPu8Qka9YLE_PWfVKg}{AD_Nw1m9T-CZHUFhgXQjtQ}{10.0.9.5}{10.0.9.5:9300}{ml.max_open_jobs=10, ml.enabled=true}]
Feb 10 09:39:41 : org.elasticsearch.transport.ReceiveTimeoutTransportException: [elastic03][10.0.9.5:9300][internal:discovery/zen/unicast] request_id [5175] timed out after [3751ms]
Feb 10 09:39:41 : at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:961) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:41 : at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.6.7.jar:5.6.7]
Feb 10 09:39:41 : at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
Feb 10 09:39:41 : at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
Feb 10 09:39:41 : at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
And sometimes:
Feb 10 09:44:10 [2018-02-10T08:44:10,810][WARN ][o.e.t.n.Netty4Transport ] [elastic01] exception caught on transport layer [[id: 0x3675891a, L:/10.0.9.210:53316 - R:10.0.9.5/10.0.9.5:9300]], closing connection
Feb 10 09:44:10 java.io.IOException: No route to host
Feb 10 09:44:10 at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.IOUtil.read(IOUtil.java:197) ~[?:?]
Feb 10 09:44:10 at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) ~[?:?]
Feb 10 09:44:10 at io.netty.buffer.PooledHeapByteBuf.setBytes(PooledHeapByteBuf.java:261) ~[netty-buffer-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:1100) ~[netty-buffer-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:372) ~[netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:123) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
Feb 10 09:44:10 at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
The strange thing is that whenever this happens, I can still ping the container and resolve its name from the container where the error occurs. No packet loss, no timeouts. The only thing misbehaving is the Elasticsearch transport layer. All other services run in the same cluster without issues (MongoDB, Redis, internal microservices).
Does anybody have a clue?
I found the issue.
Elasticsearch must be bound to a single interface, not to 0.0.0.0. Once I bound it to eth0, it started working. It also looks like a named volume cannot be used; it throws another error after a while. The data directory must be mounted from a local path directly.
This works:
services:
  elastic01:
    environment:
      - network.host=_eth0_
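To confirm the fix from the application side, a small sketch with the Python client (assuming it can reach the nodes on port 9200):

from elasticsearch import Elasticsearch

es = Elasticsearch(hosts=["elastic01:9200", "elastic02:9200", "elastic03:9200"])

health = es.cluster.health()
print(health["status"], health["number_of_nodes"])  # expect "green" and 3 once discovery is stable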
I'm currently trying to move our company's squid server to a dockerized version and I'm struggling to get it working with Kubernetes.
I have built a Docker image that works perfectly fine when run with "docker run".
The complete docker run command is:
sudo docker run -d -i -t --privileged --volume=/proc/sys/net/ipv4/ip_nonlocal_bind:/var/proc/sys/net/ipv4/ip_nonlocal_bind --net=host --cap-add=SYS_MODULE --cap-add=NET_ADMIN --cap-add=NET_RAW -v /dev:/dev -v /lib/modules:/lib/modules -p80:80 -p8080:8080 -p53:53/udp -p5353:5353/udp -p5666:5666/udp -p4500:4500/udp -p500:500/udp -p3306:3306 --name=edge crossense/edge:latest /bin/bash
When I try to run the image on Kubernetes with something like:
kubectl run --image=crossense/edge:latest --port=80 --port=8080 --port=53 --port=5353 --port=5666 --port=4500 --port=500 --port=3306 edge
it seems like Kubernetes tries to get the container up and running, but without any success...
$kubectl get po
NAME READY REASON RESTARTS AGE
edge-sz7wp 0/1 Running 10 15m
And the kubectl describe pod edge command gives me lots of these:
Thu, 09 Nov 2017 17:13:05 +0000 Thu, 09 Nov 2017 17:13:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id abcc2ff25a624a998871e02bcb62d42d6f39e9db0a39f601efa4d357dd8334aa
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
Thu, 09 Nov 2017 17:13:15 +0000 Thu, 09 Nov 2017 17:13:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 878778836bd3cc25bdf1e3b9cc2f2f6fa22b75b938a481172f08a6ec50571582
Thu, 09 Nov 2017 17:13:25 +0000 Thu, 09 Nov 2017 17:13:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id aa51e94536216b905ff9ba07951fedbc0007476b55dfdb2e5106418fb6aee05c
Thu, 09 Nov 2017 17:13:25 +0000 Thu, 09 Nov 2017 17:13:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id aa51e94536216b905ff9ba07951fedbc0007476b55dfdb2e5106418fb6aee05c
Thu, 09 Nov 2017 17:13:35 +0000 Thu, 09 Nov 2017 17:13:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id f4661e5ea33471cd1ba30816b40c8ba2d204fa22509b973da4af6eedb64c592e
Thu, 09 Nov 2017 17:13:35 +0000 Thu, 09 Nov 2017 17:13:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id f4661e5ea33471cd1ba30816b40c8ba2d204fa22509b973da4af6eedb64c592e
Thu, 09 Nov 2017 17:13:45 +0000 Thu, 09 Nov 2017 17:13:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 75f83dcb9b4f8af5134d6fd2edcd9342ecf56111e132a45f4e9787e83466e28b
Thu, 09 Nov 2017 17:13:45 +0000 Thu, 09 Nov 2017 17:13:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 75f83dcb9b4f8af5134d6fd2edcd9342ecf56111e132a45f4e9787e83466e28b
Thu, 09 Nov 2017 17:13:55 +0000 Thu, 09 Nov 2017 17:13:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id c9d0535b3962ec9da29c068dbb0a6b64426a5ac3e52f72e79bcbaf03c9f3d403
Thu, 09 Nov 2017 17:13:55 +0000 Thu, 09 Nov 2017 17:13:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id c9d0535b3962ec9da29c068dbb0a6b64426a5ac3e52f72e79bcbaf03c9f3d403
Thu, 09 Nov 2017 17:14:05 +0000 Thu, 09 Nov 2017 17:14:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 579f4428e9804404bd746cceee88bb6c73066a33263202bb5f1eb15f6ff26d7b
Thu, 09 Nov 2017 17:14:05 +0000 Thu, 09 Nov 2017 17:14:05 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 579f4428e9804404bd746cceee88bb6c73066a33263202bb5f1eb15f6ff26d7b
Thu, 09 Nov 2017 17:14:15 +0000 Thu, 09 Nov 2017 17:14:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id d36b2c9ddf0b1a05d86b43d2a92eb3c00ae92d00e155d5a1be1da8e2682f901b
Thu, 09 Nov 2017 17:14:15 +0000 Thu, 09 Nov 2017 17:14:15 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id d36b2c9ddf0b1a05d86b43d2a92eb3c00ae92d00e155d5a1be1da8e2682f901b
Thu, 09 Nov 2017 17:14:25 +0000 Thu, 09 Nov 2017 17:14:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 2d7b24537414f5e6f2981bf5f01596b19ea1abdb0eb4b81508fc7f44e8c34609
Thu, 09 Nov 2017 17:14:25 +0000 Thu, 09 Nov 2017 17:14:25 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 2d7b24537414f5e6f2981bf5f01596b19ea1abdb0eb4b81508fc7f44e8c34609
Thu, 09 Nov 2017 17:14:35 +0000 Thu, 09 Nov 2017 17:14:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id fdae44c599b77d44839e4897b750203c183001a6053c926432ef5a3c7f4deb38
Thu, 09 Nov 2017 17:14:35 +0000 Thu, 09 Nov 2017 17:14:35 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id fdae44c599b77d44839e4897b750203c183001a6053c926432ef5a3c7f4deb38
Thu, 09 Nov 2017 17:14:45 +0000 Thu, 09 Nov 2017 17:14:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 544351dda838d698e3bc125840edb6ad71cd0165a970cce46825df03b826eb38
Thu, 09 Nov 2017 17:14:45 +0000 Thu, 09 Nov 2017 17:14:45 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 544351dda838d698e3bc125840edb6ad71cd0165a970cce46825df03b826eb38
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} created Created with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Thu, 09 Nov 2017 17:14:55 +0000 Thu, 09 Nov 2017 17:14:55 +0000 1 {kubelet 127.0.0.1} spec.containers{edge} started Started with docker id 00fe4c286c1cc9b905c9c0927f82b39d45d41295a9dd0852131bba087bb19610
Any help would be much appreciated!
While I can't say this conclusively without being able to reproduce the issue or see the logs, one easily noticeable difference is the set of privileges you grant in the docker command (for example NET_ADMIN or NET_RAW), which are missing from the Kubernetes run command.
Kubernetes also provides the ability to assign such privileges to a pod with capabilities within the securityContext in a pod declaration.
I am not sure if you can do this with kubectl alone, but if you use a YAML declaration for the pod, the spec looks roughly like this:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myshell
    image: "ubuntu:14.04"
    command:
      - /bin/sleep
      - "300"
    securityContext:
      capabilities:
        add:
          - NET_ADMIN
For more reference, I would suggest a quick look at:
This post on the Weave blog, which lists all capabilities and includes the example I have borrowed above
The official Kubernetes documentation, which provides all the details needed around security contexts
For all the poor souls out there who couldn't find the answer:
the reason the pod keeps restarting is that the command it runs exits with code 0 (i.e. successfully), so Kubernetes keeps restarting the container.
In my case, I was running /bin/bash as the entrypoint command, as specified in my pod configuration .yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  containers:
  - name: edge
    image: "crossense/edge:production"
    command:
      - /bin/bash
    securityContext:
      capabilities:
        add:
          - NET_ADMIN
          - SYS_MODULE
          - NET_RAW
    volumeMounts:
      - name: ip-nonlocal-bind
        mountPath: /host/proc/sys/net/ipv4
      - name: dev
        mountPath: /host/dev
      - name: modules
        mountPath: /host/lib/modules
    ....
The solution was simply adding a non-exiting command to the entrypoint. This can be any process that runs in the foreground, or simply a /bin/sleep.
For the sake of example and future learning, my final pod configuration file looked like this:
apiVersion: v1
kind: Pod
metadata:
  name: edge
spec:
  hostNetwork: true
  containers:
  - name: edge
    image: "crossense/edge:production"
    command: ["/bin/bash", "-c"]
    args: ["service rsyslog restart; service proxysql start; service mongodb start; service pdns-recursor start; service supervisor start; service danted start; touch /var/run/squid.pid; chown proxy /var/run/squid.pid; service squid restart; service ipsec start; /sbin/iptables-restore < /etc/iptables/rules.v4; sleep infinity"]
    securityContext:
      privileged: true
      capabilities:
        add:
          - NET_ADMIN
          - SYS_MODULE
          - NET_RAW
    volumeMounts:
      - mountPath: /dev/shm
        name: dshm
      - name: ip-nonlocal-bind
        mountPath: /host/proc/sys/net/ipv4
      - name: dev
        mountPath: /dev
      - name: modules
        mountPath: /lib/modules
    ports:
      - containerPort: 80
      - containerPort: 8080
      - containerPort: 53
        protocol: UDP
      - containerPort: 5353
        protocol: UDP
      - containerPort: 5666
      - containerPort: 4500
      - containerPort: 500
      - containerPort: 3306
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
    - name: ip-nonlocal-bind
      hostPath:
        path: /proc/sys/net/ipv4
    - name: dev
      hostPath:
        path: /dev
        type: Directory
    - name: modules
      hostPath:
        path: /lib/modules
        type: Directory
For any questions, feel free to comment on this thread, or ask me at max.vlashchuk@gmail.com :)
So I followed the tutorial on this page: https://docs.docker.com/compose/wordpress/. Here are the two files I ended up creating and the directory structure:
~/project
  Dockerfile
  docker-compose.yml
  wordpress
Dockerfile
FROM orchardup/php5
ADD . /wordpress
docker-compose.yml
web:
  build: .
  command: php -S 0.0.0.0:8000 -t /wordpress
  ports:
    - "8000:8000"
  links:
    - db
  volumes:
    - .:/wordpress
db:
  image: orchardup/mysql
  environment:
    MYSQL_DATABASE: wordpress
When I try to load the URL in the browser, I get an error.
Container logs:
[Sat Dec 5 22:32:38 2015] 192.168.99.1:64220 [404]: / - No such file or directory
[Sat Dec 5 22:33:16 2015] 192.168.99.1:64235 [404]: / - No such file or directory
[Sat Dec 5 22:33:45 2015] 192.168.99.1:64243 [404]: / - No such file or directory
[Sat Dec 5 22:33:45 2015] 192.168.99.1:64244 [404]: /favicon.ico - No such file or directory
[Sat Dec 5 22:33:50 2015] 192.168.99.1:64248 [404]: /wp-admin - No such file or directory
[Sat Dec 5 22:35:08 2015] 192.168.99.1:64249 Invalid request (Unexpected EOF)
[Sat Dec 5 22:35:08 2015] 192.168.99.1:64250 Invalid request (Unexpected EOF)
[Sat Dec 5 22:35:08 2015] 192.168.99.1:64251 Invalid request (Unexpected EOF)
[Sat Dec 5 22:44:22 2015] 192.168.99.1:64361 [404]: / - No such file or directory
[Sat Dec 5 22:44:25 2015] 192.168.99.1:64366 [404]: / - No such file or directory
[Sat Dec 5 22:50:00 2015] 192.168.99.1:64442 [404]: / - No such file or directory
[Sat Dec 5 22:50:16 2015] 192.168.99.1:64443 Invalid request (Unexpected EOF)
[Sat Dec 5 22:50:16 2015] 192.168.99.1:64444 Invalid request (Unexpected EOF)
[Sat Dec 5 22:51:30 2015] 192.168.99.1:64477 [404]: / - No such file or directory
You are adding the current directory to /wordpress inside the image, so the WordPress files end up at /wordpress/wordpress. If you are starting PHP with a document root of /wordpress, then you will want to navigate to http://192.168.99.100:8000/wordpress.
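A quick way to verify the path issue from the host, as a minimal sketch (assuming the Docker Machine IP 192.168.99.100 seen in the logs and that the requests package is available):

import requests

base = "http://192.168.99.100:8000"

# The document root is /wordpress, but ADD copied the files into /wordpress/wordpress,
# so the site only answers one level deeper.
print(requests.get(base + "/", timeout=5).status_code)            # 404, as in the logs
print(requests.get(base + "/wordpress/", timeout=5).status_code)  # should no longer be 404

Alternatively, pointing the PHP built-in server's document root at /wordpress/wordpress (the -t argument in the compose command) would serve the site at the root URL instead.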