Using netcat to wait for neo4j to be ready within the same container - docker

Consider the following: we have a Docker container called app, and this app contains an instance of the neo4j database. In the Dockerfile, the CMD references entrypoint.sh. From this script we run the following:
end="$((SECONDS+60))"
while true; do
nc -z localhost 7687 && break
[[ "${SECONDS}" -ge "${end}" ]] && exit 1
sleep 1
done
The question is: why does netcat not see neo4j, even though it is booting? I have confirmed neo4j itself works by commenting out the CMD line in the Dockerfile and checking that I can reach it through a browser.
If there is only one container enclosing both neo4j and the netcat loop running from entrypoint.sh, will the loop even have access to neo4j when it comes up, or would netcat need to be in a separate container altogether?
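To illustrate the pattern: the wait loop can only succeed if neo4j has actually been launched by the time nc starts probing. A hypothetical entrypoint.sh sketch (the startup command here is an assumption, not my actual script):
#!/usr/bin/env bash
set -e

# assumption: neo4j must be started in the background *before* the
# wait loop, otherwise nothing ever listens on 7687 and we time out
neo4j console &

end="$((SECONDS+60))"
while ! nc -z localhost 7687; do
    [[ "${SECONDS}" -ge "${end}" ]] && exit 1
    sleep 1
done

# ... ingestion steps run here once bolt is reachable ...
wait    # keep the container alive on the neo4j process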
My docker-compose.yaml is below...
version: "2.1"
services:
app:
build:
context: .
container_name: neo4j-ingestion
expose:
- "7474"
- "7687"
ports:
- "7687:7687"
- "7474:7474"
environment:
MEMORY_LOCK: "true"
DB_START_DELAY: "10"
PROCESSORS: "2"
DATA_INGEST_FOLDER: /ingest/pending
NEO4J_AUTH: "neo4j/password"
NEO4J_ACCEPT_LICENSE_AGREEMENT: "yes"
NEO4J_dbms_memory_heap_maxSize: "4G"
NEO4J_HOSTNAME: "0.0.0.0"
NEO4J_USERNAME: "neo4j"
NEO4J_PASSWORD: "password"
PROCESS_NAME: "neo4j-ingest"
INGESTION_TYPE: "incremental"
SLEEP_PERIOD: "20"
CREATE_SCHEMA: "false"
NEO4J_dbms_security_procedures_unrestricted: "apoc.*"
IS_DELTA: "1"
volumes:
- neodata:/data
- "./test-integration/incremental:/ingest"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
neodata:
driver: local
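As a side note, the compose 2.1 format also supports healthchecks, so the same readiness probe could arguably live at the compose level instead of in the entrypoint; a minimal sketch reusing the port above:
services:
  app:
    # ... as above ...
    healthcheck:
      # probe the bolt port with the same nc check as the entrypoint
      test: ["CMD-SHELL", "nc -z localhost 7687 || exit 1"]
      interval: 1s
      timeout: 5s
      retries: 60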

Related

Unable to connect from one container to another

I have a Dockerfile and docker-compose.yml file set up, but I am not sure if they are correct, and I am unable to run them without an error.
My Dockerfile is:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN go get
RUN go run server.go
and my compose.yml is:
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
ports:
- 9200:9200
gqlgen:
container_name: "gqlgen"
build: ./
restart: "on-failure"
ports:
- "8080:8080"
depends_on:
- elasticsearch
This is what the root of my folder looks like (screenshot omitted).
I tried to run: docker-compose up from the root directory and this is what I get:
panic: Get "http://127.0.0.1:9200/": dial tcp 127.0.0.1:9200: connect: connection refused
I think I am doing my setup wrong.
UPDATE:
Based on suggestions and more material I read online, I changed my Dockerfile to:
FROM golang:1.14-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o server .
CMD ["./server"]
and compose file:
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- cluster.initial_master_nodes=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
golang:
container_name: "golang"
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: unless-stopped
depends_on:
- elasticsearch
and it builds correctly now.
But same issue with running docker-compose up.
panic: Get "http://elasticsearch:9200/": dial tcp 172.18.0.2:9200: connect: connection refused
You have a problem because you are addressing Elasticsearch incorrectly.
Inside a Docker container, 127.0.0.1 refers to the container itself, so your app is trying to find Elasticsearch where there isn't one.
The correct way to reach one Docker container from another is by its container name. So in your case, that means using the name elasticsearch.
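For illustration, a minimal sketch of pointing the client at the service name, assuming the official go-elasticsearch v7 client:
package main

import (
    "log"

    "github.com/elastic/go-elasticsearch/v7"
)

func main() {
    // address the Elasticsearch container by its compose service name,
    // not 127.0.0.1 (which would be this container itself)
    es, err := elasticsearch.NewClient(elasticsearch.Config{
        Addresses: []string{"http://elasticsearch:9200"},
    })
    if err != nil {
        log.Fatalf("Error creating the client: %s", err)
    }

    res, err := es.Info()
    if err != nil {
        log.Fatalf("Error getting response: %s", err)
    }
    defer res.Body.Close()
    log.Println(res)
}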
Edit:
There is another issue with your configuration.
You are missing some vital elements of the Elasticsearch configuration.
Here is a snippet with a minimal configuration for a single-node Elasticsearch cluster.
services:
  elasticsearch:
    container_name: "elasticsearch"
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - bootstrap.memory_lock=true
      - cluster.initial_master_nodes=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
Everything I wrote before is still valid. After modifying docker-compose, your latest version, which refers to Elasticsearch via http://elasticsearch:9200, should work fine.
Edit:
As @David Maze pointed out, there is a third issue in your example.
Instead of RUN go run server.go you should have CMD go run server.go.
What you are doing is running your app during the build, when you want to run it inside the container.
The more conventional approach is to build the app, copy the binary into the container instead of the source, and run the binary inside the container.
There is some information about that here: https://medium.com/travis-on-docker/multi-stage-docker-builds-for-creating-tiny-go-images-e0e1867efe5a
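A minimal multi-stage sketch along those lines (the final base image and binary name are assumptions):
FROM golang:1.14-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o server .

# the final image carries only the compiled binary, not the toolchain
FROM alpine:3.12
COPY --from=builder /app/server /app/server
CMD ["/app/server"]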
So the advice above to replace localhost with elasticsearch is correct, but it should only apply when you start your app with docker-compose.
Do not attempt to call Elasticsearch from your IDE using elasticsearch instead of localhost.
I suggest making the Elasticsearch host configurable: keep localhost for local runs, and override it in the docker-compose file.
version: "3.5"
services:
elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- bootstrap.memory_lock=true
- cluster.initial_master_nodes=elasticsearch
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
golang:
container_name: "golang"
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
restart: unless-stopped
depends_on:
- elasticsearch
environment:
- ELASTICSEARCH_HOST: elasticsearch
Here ELASTICSEARCH_HOST is a variable that you use in your project.
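For illustration, a hedged sketch of consuming that variable on the Go side, falling back to localhost for local runs (the helper itself is hypothetical; only the variable name comes from the compose file above):
package main

import (
    "fmt"
    "os"
)

// elasticsearchURL builds the Elasticsearch base URL from ELASTICSEARCH_HOST,
// defaulting to localhost when running outside docker-compose.
func elasticsearchURL() string {
    host := os.Getenv("ELASTICSEARCH_HOST")
    if host == "" {
        host = "localhost" // local development default
    }
    return fmt.Sprintf("http://%s:9200", host)
}

func main() {
    fmt.Println(elasticsearchURL())
}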

Multiple docker-compose files with different context paths

I have 2 docker-compose files that I need to run together; the locations of the files are
/home/project1/docker-compose.yml
and
/home/project2/docker-compose.yml
so clearly both services should have different context paths.
But when I run the docker-compose command below
docker-compose -f /home/project1/docker-compose.yml -f /home/project2/docker-compose.yml config
I see that both services get the same context path:
app:
  build:
    context: /home/project1
    dockerfile: Dockerfile
app2:
  build:
    context: /home/project1
    dockerfile: Dockerfile
How can I resolve this issue? I want both my services to have their own project path, i.e.
the app service should have context path /home/project1
and
the app2 service should have context path /home/project2.
The way I found to run multiple services is:
1. First of all, create images for all the services with the docker build command.
For example, in my case it is a Java application with the Maven build tool, so the command I used is:
mvn clean package docker:build -Denv=$ENV -DskipTests
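For a plain Dockerfile project, the equivalent would be something like this (the image names are the hypothetical placeholders used in the compose file below):
docker build -t service1-image /home/project1
docker build -t service2-image /home/project2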
2. After all the images are built, create a common docker-compose file, which will look like this:
version: '3'
services:
  service1:
    image: <service1-image>
    ports:
      - "8100:8080"
    environment:
      - TZ=Asia/Kolkata
      - MYSQL_USER=root
      - MYSQL_PASSWORD=unroot
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
  service2:
    image: <service2-image>
    ports:
      - "8101:8080"
    environment:
      - TZ=Asia/Kolkata
      - MYSQL_USER=root
      - MYSQL_PASSWORD=unroot
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
  service3:
    image: <service3-image>
    environment:
      - TZ=Asia/Kolkata
      - MYSQL_USER=root
      - MYSQL_PASSWORD=unroot
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
networks:
  default:
    external:
      name: dev
Just as I have set the MySQL user and password here, you can add further environment variables.
3. Then run the command below to start all the services:
docker-compose -f docker-compose.yml up
or the command below to start a specific service:
docker-compose -f docker-compose.yml up service1
This works fine for me; it needs a one-time setup, but after that it is very easy and fast.
Option 1:
Use absolute paths for the contexts in both docker-compose files
Option 2:
Create a docker-compose.override.yml with the absolute paths:
version: "3"
services:
service1:
build:
context: /home/project1
service2:
build:
context: /home/project2
and include it in the docker-compose command:
docker-compose -f /home/project1/docker-compose.yml -f /home/project2/docker-compose.yml -f /home/docker-compose.override.yml config
On Linux, to avoid hard-coding the base path in docker-compose.override.yml, you can use the PWD environment variable:
services:
  service1:
    build:
      context: ${PWD}/project1
  service2:
    build:
      context: ${PWD}/project2
It seems that when using multiple docker-compose files, the context is taken from the location of the first file. It is explained in detail in this issue:
https://github.com/docker/compose/issues/3874

ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:5044)) - ELK Filebeat .NET Core 3.1 Docker

I'm having a strange problem I can't work out, because the problems people describe when searching for this error are different: they seem to have experienced it when trying to connect Filebeat to Logstash.
However, I am trying to write logs directly to Elasticsearch, yet I am getting Logstash-related errors even though I am not spinning up a Logstash container in Docker Compose at all.
Main Docker Compose File:
version: '2.2'
services:
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
    networks:
      - esnet
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
Dockerfile for Filebeat:
FROM docker.elastic.co/beats/filebeat:7.5.2
COPY filebeat/filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN chmod 644 /usr/share/filebeat/filebeat.yml
USER filebeat
I am trying to read JSON logs that are already in Elasticsearch format, so after reading the docs I decided to try writing directly to Elasticsearch, which seems to be valid depending on the application.
My Sample.json file:
{"#timestamp":"2020-02-10T09:35:20.7793960+00:00","level":"Information","messageTemplate":"The value of i is {LoopCountValue}","message":"The value of i is 0","fields":{"LoopCountValue":0,"SourceContext":"WebAppLogger.Startup","Environment":"Development","ApplicationName":"ELK Logging Demo"}}
My Filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.json
    json.keys_under_root: true
    json.add_error_key: true
    json.message_key: log

#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
As stated in the title of this post, I get this message in the console:
filebeat | 2020-02-10T09:38:24.438Z ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://logstash:5044)): lookup logstash on 127.0.0.11:53: no such host
Then, when I eventually try to visualize the data in ElasticHQ, inevitably nothing is there.
So far I've tried commands like docker prune, just in case there's something funny going on with Docker.
Is there something I'm missing?
You have misconfigured your filebeat.yml file. Look at this error:
Failed to connect to backoff(async(tcp://logstash:5044))
Filebeat tries to connect to Logstash because that is the default configuration. On the one hand you show a filebeat.yml file, but on the other hand you haven't mounted it at /usr/share/filebeat/filebeat.yml; look at your volumes settings:
filebeat:
  container_name: filebeat
  build:
    context: .
    dockerfile: filebeat.Dockerfile
  volumes:
    - ./logs:/var/log
  networks:
    - esnet
You should mount it. If you instead copy it into the container with a Dockerfile (why reinvent the wheel and add complexity?), you should use the root user:
USER root
and add the root user to your service in docker-compose.yml:
user: root
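For illustration, a sketch of the mount-based approach (the host path for the config is assumed from the question's layout):
filebeat:
  container_name: filebeat
  image: docker.elastic.co/beats/filebeat:7.5.2
  user: root
  volumes:
    # mount the config instead of baking it into a custom image
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - ./logs:/var/log
  networks:
    - esnet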

Cannot connect to Elasticsearch from Go server running in Docker [duplicate]

This question already has answers here:
Connection refused on docker container
(7 answers)
Closed 3 years ago.
I have set up an environment today that runs a golang:1.13-alpine image, along with the latest images for Elasticsearch and Kibana.
Elasticsearch and Kibana run fine when accessed from my local machine, but I cannot connect to Elasticsearch from the Go server. I have put this together from guides I have found and followed.
I am still a bit green with Docker. I suspect I am pointing at the wrong IP address in the container, but I am unsure how to fix it. I hope someone can guide me in the right direction.
docker-compose.yml:
version: "3.7"
services:
web:
image: go-docker-webserver
build: .
ports:
- "8080:8080"
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
environment:
node.name: elasticsearch
cluster.initial_master_nodes: elasticsearch
cluster.name: docker-cluster
bootstrap.memory_lock: "true"
ES_JAVA_OPTS: -Xms256m -Xmx256m
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9200:9200"
kibana:
image: docker.elastic.co/kibana/kibana:7.4.2
ports:
- "5601:5601"
links:
- elasticsearch
Dockerfile:
FROM golang:1.13-alpine as builder

RUN apk add --no-cache --virtual .build-deps \
    bash \
    gcc \
    git \
    musl-dev

RUN mkdir build
COPY . /build
WORKDIR /build
RUN go get
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o webserver .
RUN adduser -S -D -H -h /build webserver
USER webserver

FROM scratch
COPY --from=builder /build/webserver /app/
WORKDIR /app
EXPOSE 8080
EXPOSE 9200
CMD ["./webserver"]
main.go:
func webserver(logger *log.Logger) *http.Server {
    router := http.NewServeMux()
    router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatalf("Error creating the client: %s", err)
        }
        res, err := es.Info()
        if err != nil {
            log.Fatalf("Error getting response: %s", err)
        }
        log.Println(res)
    })
    return &http.Server{
        Addr:         listenAddr,
        Handler:      router,
        ErrorLog:     logger,
        ReadTimeout:  5 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  15 * time.Second,
    }
}
When I boot the server, everything is running fine and I can access Kibana and query the data that I have indexed, but as soon as I hit localhost:8080 in Postman, the server dies and outputs:
web_1 | 2019/11/26 16:40:40 Error getting response: dial tcp 127.0.0.1:9200: connect: connection refused
go-api_web_1 exited with code 1
You can declare the network as bridge in the docker-compose file and verify how it works:
version: "3.7"
services:
web:
image: go-docker-webserver
build: .
ports:
- "8080:8080"
networks:
- elastic
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.4.2
environment:
node.name: elasticsearch
cluster.initial_master_nodes: elasticsearch
cluster.name: docker-cluster
bootstrap.memory_lock: "true"
ES_JAVA_OPTS: -Xms256m -Xmx256m
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9200:9200"
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.4.2
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
networks:
elastic:
driver: bridge
Set the kernel setting vm.max_map_count to 262144:
sysctl -w vm.max_map_count=262144
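Note also that elasticsearch.NewDefaultClient() in the question's main.go defaults to http://localhost:9200. As far as I know, the go-elasticsearch client lets you override that through the ELASTICSEARCH_URL environment variable, so a sketch of pointing it at the service would be:
web:
  image: go-docker-webserver
  build: .
  ports:
    - "8080:8080"
  environment:
    # assumption: NewDefaultClient honours ELASTICSEARCH_URL
    ELASTICSEARCH_URL: http://elasticsearch:9200
  networks:
    - elastic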

docker-compose stop not working after docker-compose -p <name> up

I am using docker-compose version 2. I am starting containers with docker-compose -p some_name up -d and trying to kill them with docker-compose stop. The command exits with code 0, but the containers are still up and running.
Is this the expected behaviour for this version? If yes, any idea how I can work around it?
My docker-compose.yml file looks like this:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.0
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
  kafka-server:
    image: spotify/kafka
    environment:
      - TOPICS=my-topic
    ports:
      - "9092:9092"
  test:
    build: .
    depends_on:
      - elasticsearch
      - kafka-server
Update:
I found that the problem is caused by using the -p parameter to give the containers an explicit prefix. Still looking for the best way to solve it.
docker-compose -p [project_name] stop worked in my case. I had the same problem.
Try forcing the running containers to stop by sending a SIGKILL:
docker-compose -p some_name kill
I just read about and experimented with the compose CLI behaviour when passing -p.
You have to pass the same -p some_name to stop or kill the containers; compose will assume the directory name as the project name if you don't.
Kindly let me know if this helped.
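In other words, the project name has to match on every command; a minimal sketch:
docker-compose -p some_name up -d    # start with an explicit project name
docker-compose -p some_name stop     # repeat the same -p so compose finds the containers
docker-compose -p some_name kill     # or force-stop with SIGKILL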
