Cannot connect to container with docker-compose

I'm using docker 1.12 and docker-compose 1.12, on OSX.
I created a docker-compose.yml file which runs two containers:
the first, named spark, builds and runs a sparkjava application
the second, named behave, runs some functional tests on the API exposed by the first container.
version: "2"
services:
  behave:
    build:
      context: ./src/test
    container_name: "behave"
    links:
      - spark
    depends_on:
      - spark
    entrypoint: ./runtests.sh spark:9000
  spark:
    build:
      context: ./
    container_name: "spark"
    ports:
      - "9000:9000"
As recommended by the Docker Compose documentation, I use a simple shell script to test whether the spark server is ready. This script is named runtests.sh and runs inside the container named "behave". It is launched by docker-compose (see above):
#!/bin/bash
# This script waits for the API server to be ready before running functional tests with Behave
# the parameter should be the hostname for the spark server
set -e
host="$1"
echo "runtests host is $host"
until curl -L "http://$host"; do
  >&2 echo "Spark server is not ready - sleeping"
  sleep 5
done
>&2 echo "Spark server is up - starting tests"
behave
The DNS resolution does not seem to work. curl makes a request to spark.com instead of a request to my container named "spark".
UPDATE:
By setting an alias for my link (links: - spark:myserver), I saw that the DNS resolution is not done by Docker: I received an error message from a corporate network appliance (I'm running this from behind a corporate proxy, with Docker for Mac). Here is an extract of the output:
Recreating spark
Recreating behave
Attaching to spark, behave
behave | runtests host is myserver:9000
behave | % Total % Received % Xferd Average Speed Time Time Time Current
behave | Dload Upload Total Spent Left Speed
100 672 100 672 0 0 348 0 0:00:01 0:00:01 --:--:-- 348
behave | <HTML><HEAD>
behave | <TITLE>Network Error</TITLE>
behave | </HEAD>
behave | <BODY>
behave | ...
behave | <big>Network Error (dns_unresolved_hostname)</big>
behave | Your requested host "myserver" could not be resolved by DNS.
behave | ...
behave | </BODY></HTML>
behave | Spark server is up - starting tests

To solve this, I added a no_proxy environment variable listing the name of the container I wanted to reach.
In the Dockerfile for the behave container, I have:
ENV http_proxy=http://proxy.mycompany.com:8080
ENV https_proxy=http://proxy.mycompany.com:8080
ENV no_proxy=127.0.0.1,localhost,spark
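As a sanity check, the bypass logic can be exercised from inside the behave container. The sketch below (the proxy URL and host names are placeholders taken from the Dockerfile above) mimics how curl and most HTTP clients match the request host against the comma-separated no_proxy list:

```shell
# Placeholders: proxy.mycompany.com and spark stand in for the real values.
export http_proxy=http://proxy.mycompany.com:8080
export no_proxy=127.0.0.1,localhost,spark

# A host that appears in no_proxy is connected to directly;
# anything else goes through http_proxy.
is_proxied() {
  case ",$no_proxy," in
    *",$1,"*) echo "$1: direct (proxy bypassed)" ;;
    *)        echo "$1: via $http_proxy" ;;
  esac
}
is_proxied spark        # direct, so Docker's embedded DNS resolves it
is_proxied other-host   # forwarded to the corporate proxy
```

Without the no_proxy entry, curl hands the hostname "spark" to the corporate proxy, whose DNS cannot resolve it, which matches the dns_unresolved_hostname error above.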

Related

Is testing in GitLab CI supported with OpenSearch docker images as a service?

I have a Java app that is running integration tests with Elasticsearch in Gitlab.
.gitlab-ci.yml:
...
integration:
  stage: integration
  tags:
    - onprem
  services:
    - name: "docker.elastic.co/elasticsearch/elasticsearch:7.10.1"
      alias: "elasticsearch"
      command: [ "bin/elasticsearch", "-Expack.security.enabled=false", "-Ediscovery.type=single-node" ]
  script:
    - curl "http://elasticsearch:9200/_cat/health"
    - mvn -Dgroups="IntegrationTest" -DargLine="-Durl=elasticsearch" test
...
Now I want to use Opensearch 1.1.0 because that is what we use on AWS. I tried working off the docker compose setup that Opensearch suggests for developers ( https://opensearch.org/docs/latest/opensearch/install/docker/#sample-docker-compose-file-for-development ), and came up with this:
...
integration:
  stage: integration
  tags:
    - onprem
  services:
    - name: "opensearchproject/opensearch:1.1.0"
      alias: "elasticsearch"
      command: [
        "./opensearch-docker-entrypoint.sh",
        "-Ecluster.name=opensearch-cluster",
        "-Enode.name=opensearch-node1",
        "-Ebootstrap.memory_lock=true",
        "-Ediscovery.type=single-node",
        "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m",
        "DISABLE_INSTALL_DEMO_CONFIG=true",
        "DISABLE_SECURITY_PLUGIN=true"
      ]
  script:
    - curl "http://elasticsearch:9200/_cat/health"
    - mvn -Dgroups="IntegrationTest" -DargLine="-Durl=elasticsearch" test
...
The curl response:
$ curl "http://elasticsearch:9200/_cat/health"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0curl: (7) Failed to connect to elasticsearch port 9200: No route to host
One big difference seems to be that Elasticsearch disables security with an environment variable, but Opensearch does that with an argument through setup. I tried running Opensearch directly through the "bin/" directory, but that seems to give all sorts of additional errors. The Opensearch image is available on dockerhub ( https://hub.docker.com/layers/opensearchproject/opensearch/1.1.0/images/sha256-94254d215845723e73829f34cf7053ae9810db360cf73c954737a717e9cf031d?context=explore ) , but I have no access to the Dockerfile of the Elasticsearch image to compare.
I have tried numerous other setups that failed, such as moving different combinations of the arguments into stage variables in gitlab-ci.
Am I misunderstanding what to do here, or is what I'm trying even supported at all?
The final layer in opensearchproject/opensearch:1.1.0 is
CMD ["./opensearch-docker-entrypoint.sh"]
which reconfigures OpenSearch based on environment variables, like DISABLE_SECURITY_PLUGIN, and populates OpenSearch startup options like -Eplugins.security.disabled=true.
Even though such variables are accepted by Docker, bash does not allow exporting environment variables with names like discovery.type=single-node, so the gitlab-ci job fails.
The CMD was refactored to an ENTRYPOINT in later releases via this issue.
One way to trick the CMD, is to set the variables before launching the script:
integration:
  stage: integration
  variables:
    OPENSEARCH_JAVA_OPTS: "-Xms512m -Xmx512m"
    DISABLE_INSTALL_DEMO_CONFIG: "true"
    DISABLE_SECURITY_PLUGIN: "true"
  services:
    - name: opensearchproject/opensearch:1.1.0
      alias: opensearch
      command: ["bash", "-c", "env 'discovery.type=single-node' 'cluster.name=opensearch' ./opensearch-docker-entrypoint.sh"]
  script:
    - curl -sS http://opensearch:9200/_cat/health

How do I run Locust in a distributed Docker configuration?

I'm working on running Locust with multiple workers in a Fargate environment, but I wanted to see what it looked like in a simple distributed Docker setup. I took the below docker-compose.yml from the website and modified it so that everything would run on localhost. I can start Locust just fine with docker-compose up --scale worker=4, and four workers and master come up, but when I try to run a test via the Web UI, I get:
Attaching to locust-distributed-docker-master-1, locust-distributed-docker-worker-1, locust-distributed-docker-worker-2, locust-distributed-docker-worker-3, locust-distributed-docker-worker-4
locust-distributed-docker-worker-1 | [2021-11-17 19:01:19,719] be1b465ae5c7/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-master-1 | [2021-11-17 19:01:19,956] 8769b6dcd3ed/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
locust-distributed-docker-master-1 | [2021-11-17 19:01:20,016] 8769b6dcd3ed/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-4 | [2021-11-17 19:01:20,144] bd481d228ef6/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-3 | [2021-11-17 19:01:20,716] 26af3d44e1c9/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-worker-2 | [2021-11-17 19:01:21,122] d536c752bdee/INFO/locust.main: Starting Locust 2.5.0
locust-distributed-docker-master-1 | [2021-11-17 19:01:42,998] 8769b6dcd3ed/WARNING/locust.runners: You are running in distributed mode but have no worker servers connected. Please connect workers prior to swarming.
The whole point of this exercise is to watch the console to see how the workers interact with the master, nothing else.
docker-compose.yml:
version: '3'
services:
  master:
    image: locustio/locust
    ports:
      - "8089:8089"
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --master
  worker:
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: -f /mnt/locust/locustfile.py --worker --master-host 127.0.0.1
To use the example this way, with the workers running on the same machine as the master, it's easiest to follow the example exactly and simply not specify the master's IP address. The trouble is that networking between Docker containers doesn't work the same way as regular networking between hosts, so it's worth researching the differences.
Locust workers will automatically discover the master when they run on the same host, provided you don't specify an IP address for the master. In the example docker-compose.yml, --master-host master refers to the master service by name: Compose creates a bridge network shared by the worker containers and the master container, which lets the workers resolve and reach the master automatically. When you actually deploy the workers on separate hosts, you may need a different setup that does specify an IP address to communicate with.
So just follow the example directly and have your workers command like this:
worker:
  image: locustio/locust
  volumes:
    - ./:/mnt/locust
  command: -f /mnt/locust/locustfile.py --worker --master-host master
That should result in output like this:
% docker compose up --scale worker=4
[+] Running 5/0
⠿ Container docker-compose_master_1 Created 0.0s
⠿ Container docker-compose_worker_2 Created 0.0s
⠿ Container docker-compose_worker_3 Created 0.0s
⠿ Container docker-compose_worker_1 Created 0.0s
⠿ Container docker-compose_worker_4 Created 0.0s
Attaching to master_1, worker_1, worker_2, worker_3, worker_4
worker_3 | [2021-11-18 16:32:49,911] d90df67c6a69/INFO/locust.main: Starting Locust 2.5.0
worker_4 | [2021-11-18 16:32:50,062] 112a60412b1e/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,224] 859d07f8570b/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
worker_2 | [2021-11-18 16:32:50,233] 56ffce9d4448/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,238] 859d07f8570b/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,239] 859d07f8570b/INFO/locust.runners: Client '56ffce9d4448_dfda9f3bcff742909af80b63d7866714' reported as ready. Currently 1 clients ready to swarm.
master_1 | [2021-11-18 16:32:50,249] 859d07f8570b/INFO/locust.runners: Client '112a60412b1e_49c9e2df265d4fd7bc0f6554a76e66c9' reported as ready. Currently 2 clients ready to swarm.
worker_1 | [2021-11-18 16:32:50,256] 988707b23133/INFO/locust.main: Starting Locust 2.5.0
master_1 | [2021-11-18 16:32:50,259] 859d07f8570b/INFO/locust.runners: Client '988707b23133_88ac7446afd843a5ae7a20dceaed9ea4' reported as ready. Currently 3 clients ready to swarm.
master_1 | [2021-11-18 16:32:50,336] 859d07f8570b/INFO/locust.runners: Client 'd90df67c6a69_e432779d02f94947abb992ff1043eb0e' reported as ready. Currently 4 clients ready to swarm.

Apache NiFi (on Docker): "only one of the HTTP and HTTPS connectors can be configured at one time" error

I have a problem adding authentication, due to new requirements, while running Apache NiFi (NiFi) in a container without SSL.
The image version is apache/nifi:1.13.0
SSL is said to be unconditionally required to add authentication, and the recommended way to add SSL is the tls-toolkit bundled in the NiFi image. I worked through the following process:
Removed the environment variable nifi.web.http.port used for HTTP communication, and started the container in standalone mode with nifi.web.https.port=9443:
docker-compose up
Entered the container and ran the tls-toolkit script from the nifi-toolkit:
cd /opt/nifi/nifi-toolkit-1.13.0/bin &&\
sh tls-toolkit.sh standalone \
-n 'localhost' \
-C 'CN=yangeok,OU=nifi' \
-O -o $NIFI_HOME/conf
Attempt 1
Organized the files in directory $NIFI_HOME/conf. Three files, keystore.jks, truststore.jks, and nifi.properties, were created in a folder named localhost, matching the value passed to the -n option of the tls-toolkit script.
cd $NIFI_HOME/conf &&
cp localhost/*.jks .
The file $NIFI_HOME/conf/localhost/nifi.properties was not copied over wholesale; instead, only the following properties were imported into $NIFI_HOME/conf/nifi.properties:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=localhost
nifi.web.https.port=9443
Restarted container
docker-compose restart
The container died with below error log:
Only one of the HTTP and HTTPS connectors can be configured at one time
Attempt 2
After executing the tls-toolkit script, all files were overwritten, including nifi.properties:
cd $NIFI_HOME/conf &&
cp localhost/* .
Restarted container
docker-compose restart
The container died with the same error log.
Hint
The dead container's volume was still accessible, so I copied out nifi.properties and checked it; whenever I ran docker-compose up or restart, it changed as follows.
The part I overwrote or modified:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=localhost
nifi.web.https.port=9443
The changed part after re-executing the container:
nifi.web.http.host=a8e283ab9421
nifi.web.http.port=9443
nifi.web.http.network.interface.default=
#############################################
nifi.web.https.host=a8e283ab9421
nifi.web.https.port=9443
I'd like to know how to run the container with http.host and http.port empty. The docker-compose.yml file is as follows:
version: '3'
services:
  nifi:
    build:
      context: .
      args:
        NIFI_VERSION: ${NIFI_VERSION}
    container_name: nifi
    user: root
    restart: unless-stopped
    network_mode: bridge
    ports:
      - ${NIFI_HTTP_PORT}:8080/tcp
      - ${NIFI_HTTPS_PORT}:9443/tcp
    volumes:
      - ./drivers:/opt/nifi/nifi-current/drivers
      - ./templates:/opt/nifi/nifi-current/templates
      - ./data:/opt/nifi/nifi-current/data
    environment:
      TZ: 'Asia/Seoul'
      ########## JVM ##########
      NIFI_JVM_HEAP_INIT: ${NIFI_HEAP_INIT} # The initial JVM heap size.
      NIFI_JVM_HEAP_MAX: ${NIFI_HEAP_MAX} # The maximum JVM heap size.
      ########## Web ##########
      # NIFI_WEB_HTTP_HOST: ${NIFI_HTTP_HOST} # nifi.web.http.host
      # NIFI_WEB_HTTP_PORT: ${NIFI_HTTP_PORT} # nifi.web.http.port
      NIFI_WEB_HTTPS_HOST: ${NIFI_HTTPS_HOST} # nifi.web.https.host
      NIFI_WEB_HTTP_PORT: ${NIFI_HTTPS_PORT} # nifi.web.https.port
Thank you

How does one connect two services in the local docker-compose network?

I have followed the instructions, I think, and have come up with the following configuration:
version: '3.9'
services:
  flask:
    image: ops:imgA
    ports:
      - 5000:5000
    volumes:
      - /opt/models:/opt/models
    entrypoint: demo flask
  streamlit:
    image: ops:imgB
    ports:
      - 8501:8501
    entrypoint: streamlit run --server.port 8501 demo -- stream --flask-hostname flask
The --flask-hostname flask sets the host name used in an http connect, i.e.: http://flask:5000. I can set it to anything.
The basic problem here is that I can spin up one of these images, install tmux, and run everything within a single container.
But, when I split it across multiple images and use docker-compose up (which seems better than tmux), the containers can't seem to connect to each other.
I have rattled around the documentation on docker's website, but I've moved on to the troubleshooting stage. This seems to be something that should "just work" (since there are few questions along these lines). I have total control of the box I am using, and can open or close whatever ports needed.
Mainly, I am trying to figure out how to allow, with 100% default settings nothing complicated, these two services (flask and streamlit) to speak to each other.
There must be 1 or 2 settings that I need to change, and that is it.
Any ideas?
Update
I can access all of the services externally, so I am going to open up external connections between the services (using the external IP) as a "just work" quick fix, but obviously getting the composition to work internally would be the best option.
I have also confirmed that the docker-compose and docker versions are up to date.
Update-2: changed Flask's bind address from 127.0.0.1 to 0.0.0.0
Flask output:
flask_1 | * Serving Flask app "flask" (lazy loading)
flask_1 | * Environment: production
flask_1 | WARNING: This is a development server. Do not use it in a production deployment.
flask_1 | Use a production WSGI server instead.
flask_1 | * Debug mode: on
flask_1 | INFO:werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | 2020-12-19 02:22:16.449 INFO werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
flask_1 | INFO:werkzeug: * Restarting with inotify reloader
flask_1 | 2020-12-19 02:22:16.465 INFO werkzeug: * Restarting with inotify reloader
flask_1 | WARNING:werkzeug: * Debugger is active!
flask_1 | 2020-12-19 02:22:22.003 WARNING werkzeug: * Debugger is active!
Streamlit:
streamlit_1 |
streamlit_1 | You can now view your Streamlit app in your browser.
streamlit_1 |
streamlit_1 | Network URL: http://172.18.0.3:8501
streamlit_1 | External URL: http://71.199.156.142:8501
streamlit_1 |
streamlit_1 | 2020-12-19 02:22:11.389 Generating new fontManager, this may take some time...
And the streamlit error message:
ConnectionError:
HTTPConnectionPool(host='flask', port=5000):
Max retries exceeded with url: /foo/bar
(Caused by NewConnectionError(
'<urllib3.connection.HTTPConnection object at 0x7fb860501d90>:
Failed to establish a new connection:
[Errno 111] Connection refused'
)
)
Update-3: Hitting refresh fixed it.
The server process must be listening on the special "all interfaces" address 0.0.0.0. Many development-type servers by default listen on "localhost only" 127.0.0.1, but in Docker each container has its own private notion of localhost. If you use tmux or docker exec to run multiple processes inside a container, they have the same localhost and can connect to each other, but if the client and server are running in different containers, the request doesn't arrive on the server's localhost interface, and if the server is listening on "localhost only" it won't receive it.
Your setup is otherwise correct, with only the docker-compose.yml you include in the question. Some other common problems:
You must connect to the port the server process is listening on inside the container. If you remap it externally with ports:, that's ignored, and you'd connect to the second ports: number. Correspondingly, ports: aren't required. (expose: also isn't required and doesn't do anything at all.)
The client may need to wait for the server to start up. If the client depends_on: [flask] the host name will usually resolve (unless the server dies immediately) but if it takes a while to start up you will still get "connection refused" errors. See Docker Compose wait for container X before starting Y.
Neither container may use network_mode: host. This disables Docker's networking features entirely.
If you manually declare networks:, both containers need to be on the same network. You do not need to explicitly create a network for inter-container communication to work: Compose provides a default network for you, which is used if nothing else is declared.
Use the Compose service names as host names. You don't need to explicitly specify container_name: or links:.
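The loopback-vs-all-interfaces point can be demonstrated outside Docker as well. This sketch assumes python3 is available and only shows the two bind addresses a server can choose between; nothing here is specific to Flask or Streamlit:

```shell
# 127.0.0.1 binds the loopback interface only, so requests arriving
# over the Compose bridge network never reach the socket; 0.0.0.0
# binds every interface, which is what a containerized server needs.
python3 - <<'EOF'
import socket
for addr in ("127.0.0.1", "0.0.0.0"):
    s = socket.socket()
    s.bind((addr, 0))          # port 0 = any free port
    print("bound to", s.getsockname()[0])
    s.close()
EOF
```

In a Compose setup, a server bound to 127.0.0.1 will produce exactly the "Connection refused" error shown above when another container tries to reach it by service name.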

Spring Boot tries to connect to Mongo localhost

I have a Spring Boot 2.x project using Mongo. I am running this via Docker (using compose locally) and Kubernetes. I am trying to connect my service to a Mongo server. This is confusing to me, but for development I am using a local instance of Mongo, but deployed in GCP I have named mongo services.
here is my application.properties file:
#mongodb
spring.data.mongodb.uri= mongodb://mongo-serviceone:27017/serviceone
#logging
logging.level.org.springframework.data=trace
logging.level.=trace
And my Docker-compose:
version: '3'
# Define the services/containers to be run
services:
  service: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3009:3009" # specify ports forwarding
    links:
      - mongo-serviceone # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo-serviceone
  mongo-serviceone: # name of the service
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
When I try docker-compose up, I get the following error:
mongo-serviceone_1 | 2018-08-22T13:50:33.454+0000 I NETWORK [initandlisten] waiting for connections on port 27017
service_1          | 2018-08-22 13:50:33.526 INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
service_1          | com.mongodb.MongoSocketOpenException: Exception opening socket
service_1          | at com.mongodb.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.6.3.jar!/:na]
running docker ps shows me:
692ebb72cf30 serviceone_service "java -Djava.securit…" About an hour ago Up 9 minutes 0.0.0.0:3009->3009/tcp, 8080/tcp serviceone_service_1
6cd55ae7bb77 mongo "docker-entrypoint.s…" About an hour ago Up 9 minutes 0.0.0.0:27017->27017/tcp serviceone_mongo-serviceone_1
While I am trying to connect to a local mongo, I thought that by using the name "mongo-serviceone" in the URI, my service would connect to the Mongo container instead of localhost.
Hard to tell what the exact issue is, but maybe this is just an issue because of the space " " after "spring.data.mongodb.uri=" and before "mongodb://mongo-serviceone:27017/serviceone"?
If not, maybe exec into the "service" container and try to ping the mongodb with: ping mongo-serviceone:27017
Let me know the output of this, so I can help you analyze and fix this issue.
Alternatively, you could switch from using docker compose to a Kubernetes native dev tool, as you are planning to run your application on Kubernetes anyways. Here is a list of possible tools:
Allow hot reloading:
DevSpace: https://github.com/covexo/devspace
ksync: https://github.com/vapor-ware/ksync
Pure CI/CD tools for dev:
Skaffold: https://github.com/GoogleContainerTools/skaffold
Draft: https://github.com/Azure/draft
For most of them, you will only need minikube or a dev namespace inside your existing cluster on GCP.
Looks like another application was running on port 27017 on your localhost. Similar reported issue.
A quick way to check on Linux/Mac:
telnet 127.0.0.1 27017
Check the log files:
docker logs serviceone_service
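If telnet isn't installed, a rough alternative is bash's /dev/tcp redirection (a sketch, assuming bash; the port probed below is arbitrary):

```shell
#!/bin/bash
# Check whether something is listening on host:port without telnet,
# using bash's built-in /dev/tcp redirection.
check_port() {
  local host="$1" port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "$host:$port is open"
    exec 3>&- 3<&-
  else
    echo "$host:$port is closed"
  fi
}
check_port 127.0.0.1 1   # port 1 is almost certainly closed
```

Run check_port 127.0.0.1 27017 on the host to see whether something other than the container already owns the Mongo port.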
