Consuming an API on localhost from a node-chrome-debug docker image - docker

I am running a REST API in a container started from a node-chrome-debug image and I am trying to consume it from a Selenium Webdriver script executed on the node. However, when I try to hit the API on http://localhost:5000 via RestSharp, I receive the following message:
Error: 'Connection refused [::ffff:127.0.0.1]:5000 (127.0.0.1:5000)'
The configuration of docker-compose.yml is the following:
version: "3"
services:
selenium-hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 2
GRID_BROWSER_TIMEOUT: 300
GRID)TIMEOUT: 300
chrome:
image: selenium/node-chrome-debug
depends_on:
- selenium-hub
environment:
HUB_PORT_4444_TCP_ADDR: selenium-hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
I suspect that the REST client is hitting the Selenium Grid Hub instead of the container's localhost. Is there a way to connect to the correct machine? Thanks in advance.
P.S. I am running a REST service in a docker container as a way to communicate with the system's kernel and run a couple of commands (System.Diagnostics.Process isn't working for me).

Related

Using docker compose to run selenium hub and node

I have this docker-compose.yml file from here that I am using to start a Selenium hub and nodes on macOS. I changed the host port to 65299, as I got an error that 4444 was already in use. I have Docker Desktop 3.5.1 installed.
version: "3"
services:
selenium-hub:
image: selenium/hub
container_name: selenium-hub
ports:
- "65299:4444"
chrome:
image: selenium/node-chrome
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=65299
firefox:
image: selenium/node-firefox
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=65299
When I look at http://localhost:65299/grid/console, I don't see any nodes registered.
Also, in the terminal I get this:
firefox_1 | 20:27:22.110 INFO [SelfRegisteringRemote$1.run] - Couldn't register this node: The hub is down or not responding: Failed to connect to selenium-hub/172.26.0.2:65299
Also, the logs say:
Nodes should register to http://172.27.0.2:4444/grid/register/
So why is the system even trying 172.26.0.2:65299, or maybe I am missing something here?
The HUB_PORT variable on the nodes is wrong. Port 65299 is only for accessing the hub from outside the Docker network; for example, you use that port to reach the hub from your browser.
You need to set that variable to 4444. That port is available inside the Docker network, so the nodes can connect to the hub.
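For reference, a minimal sketch of the corrected node definitions (image names and the selenium-hub service come from the compose file in the question; the 65299:4444 mapping on the hub can stay as it is):
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      # Inside the Compose network the hub still listens on 4444;
      # 65299 is only the host-side mapping.
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444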

Multiple isolated Elasticsearch clusters with a single docker-compose file

I want to create 2 Elasticsearch clusters in a single docker-compose file, so that I can test a few changes only on the new ES cluster.
My docker-compose file looks like this:
version: "2.2"
services:
elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata1:/usr/share/elasticsearch/data
ports:
- "9200:9200"
mem_limit: '2048M'
new-elasticsearch-master:
image: elasticsearch:6.6.0
volumes:
- esdata2:/usr/share/elasticsearch/data
ports:
- "9400:9200"
mem_limit: '2048M'
search:
image: search:latest
entrypoint: java -Delasticsearch.host=elasticsearch-master -DnewElasticsearch.host=new-elasticsearch-master -DnewElasticsearch.port=9400 -jar app.jar
ports:
- "8083:8083"
depends_on:
- elasticsearch-master
- new-elasticsearch-master
mem_limit: '500M'
volumes:
esdata1:
esdata2:
I have one Java service where I am adding both hosts with different environment variables:
-Delasticsearch.host=elasticsearch-master
-DnewElasticsearch.host=new-elasticsearch-master
But when I run this code from the Java search service,
new RestTemplate().getForEntity("http://elasticsearch-master:9200/_cat/indices?v",String.class)
it gives me the correct response.
But when I try to connect to the other host on 9400,
new RestTemplate().getForEntity("http://new-elasticsearch-master:9400/_cat/indices?v",String.class)
I get a Connection Refused error.
When I try the same host with 9200, it gives me a 200 response.
new RestTemplate().getForEntity("http://new-elasticsearch-master:9200/_cat/indices?v",String.class)
Can someone please tell me how I can make two different connections with different ports, as below?
http://elasticsearch-master:9200
http://new-elasticsearch-master:9400
Thanks
You got the expected behavior. The ports field in docker-compose maps the ports to your localhost, which means that the "old" Elasticsearch will be available via localhost:9200 and the "new" Elasticsearch via localhost:9400.
On the other hand, docker-compose services communicate over an internal network, where the service name is the hostname and the port is the container's original listening port.
Thus, you were able to access (internally) your old one via http://elasticsearch-master:9200 and the new one via http://new-elasticsearch-master:9200.
If you wish to use the new Elasticsearch on port 9400, you need to change its http.port setting. You can do that like this:
new-elasticsearch-master:
  image: elasticsearch:6.6.0
  volumes:
    - esdata2:/usr/share/elasticsearch/data
  environment:
    - http.port=9400
  ports:
    - "9400:9400"
  mem_limit: '2048M'
Note that you have to change the port mapping as well (because it now maps your new port, 9400, to 9400 on localhost).
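Alternatively, here is a sketch based on the compose file in the question (not part of the original answer): leave the new cluster on its default port and point the search service at 9200, since inside the Compose network both clusters listen on 9200 anyway:
  search:
    image: search:latest
    # Inside the Compose network both clusters listen on 9200;
    # only the host-side mappings (9200 and 9400) differ.
    entrypoint: >
      java -Delasticsearch.host=elasticsearch-master
      -DnewElasticsearch.host=new-elasticsearch-master
      -DnewElasticsearch.port=9200 -jar app.jar
    ports:
      - "8083:8083"
    depends_on:
      - elasticsearch-master
      - new-elasticsearch-master
    mem_limit: '500M'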

Unable to telnet to MariaDB container

I'm using docker-compose to run MariaDB and it is working fine. I am pulling the JasperReports server and MariaDB Docker images and running them. When I telnet to the JasperReports container, it responds correctly, but when I telnet to MariaDB, it says:
telnet localhost 3306
Trying ::1...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
What might I be doing wrong?
Here is the output of sudo docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9e759f106006 bitnami/jasperreports:7 "/app-entrypoint.sh …" 21 minutes ago Up 21 minutes 0.0.0.0:9093->8080/tcp, 0.0.0.0:443->8443/tcp ceyedev_jasperreports_1
9242e52f6af8 bitnami/mariadb:10.3 "/opt/bitnami/script…" 21 minutes ago Up 21 minutes 3306/tcp ceyedev_mariadb_1
Here is my docker compose file:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:10.3'
    environment:
      - MARIADB_USER=bn_jasperreports
      - MARIADB_DATABASE=bitnami_jasperreports
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
  jasperreports:
    image: 'bitnami/jasperreports:7'
    environment:
      - MARIADB_HOST=mariadb
      - MARIADB_PORT_NUMBER=3306
      - JASPERREPORTS_DATABASE_USER=bn_jasperreports
      - JASPERREPORTS_DATABASE_NAME=bitnami_jasperreports
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '9093:8080'
      - '443:8443'
    volumes:
      - 'jasperreports_data:/bitnami'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  jasperreports_data:
    driver: local
You have to publish the port in your Docker Compose file (what you posted is a Docker Compose file, by the way, not a Dockerfile, which is the file containing the commands to build a Docker image).
In the mariadb section, make it like this:
services:
  mariadb:
    image: 'bitnami/mariadb:10.3'
    environment:
      - MARIADB_USER=bn_jasperreports
      - MARIADB_DATABASE=bitnami_jasperreports
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami'
    ports:
      - '3306:3306'
This way, MariaDB's port 3306 will be published on your local computer. This means:
- that you can access MariaDB on localhost through port 3306;
- that ANYONE with direct network access to your computer (i.e. its local IP address) will also be able to reach MariaDB on port 3306.
Bear in mind those two things regarding your system's security.
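If you only need to reach MariaDB from the machine itself, here is a sketch (using Compose's optional host-IP prefix in a port mapping, not something from the original answer) that limits the exposure by binding the published port to the loopback interface:
  mariadb:
    image: 'bitnami/mariadb:10.3'
    ports:
      # Publish 3306 only on 127.0.0.1, so "telnet localhost 3306" works
      # but other machines on the network cannot reach the database.
      - '127.0.0.1:3306:3306'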

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with KeyCloak and I follow the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinioClient and a third one used for the KeyCloak server.
As you can see in the following snippet the configuration of the Minio Client container is done correctly, since I can list the buckets available in the Minio Server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
I have an issue arising when I try to configure MinIO as depicted in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
Turns out, all I had to do was change the localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
This is just a temporary solution that works for now, but I will continue searching for something more concrete than just hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually hard-code the IP of the KeyCloak container.
version: '2'
services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw
networks:
  minionw:
    driver: "bridge"
A "connection refused" error occurs when the port is not reachable at the hostname or IP you specified.
Make sure the port you wish to reach is published (with the -p/--publish flag on the docker CLI, or a ports: mapping in Compose; exposing alone is not enough). Once it is published, you can access it on localhost.
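For reference, a minimal sketch (service name and image taken from the compose file above) of the difference between an exposed port and a published port in Compose:
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    # "expose" only advertises the port to other containers on the same
    # network; it does not make the port reachable from the host.
    expose:
      - "8080"
    # "ports" publishes the port, so the host can reach it at localhost:8080.
    ports:
      - "8080:8080"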

How to access an application running on the local machine from a container?

I have an application running locally and I have a docker container running via docker compose:
swagger:
  image: swaggerapi/swagger-ui:v3.23.5
  ports:
    - "7171:8080"
  networks:
    - dockernet
  expose:
    - 8080
  environment:
    - URL=http://192.168.10.20:8080/actions/v3/api-docs
192.168.10.20 is my local machine.
If I access http://192.168.10.20:8080/actions/v3/api-docs via the browser I see the response, but the swagger service can't access it.
How do I fix this?
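One common approach, offered here as an assumption rather than something from the original thread, is to let the container resolve the host machine via host.docker.internal instead of hard-coding a LAN IP. Docker Desktop provides that name automatically; on Linux with Docker Engine 20.10+ you can map it to the host gateway explicitly:
swagger:
  image: swaggerapi/swagger-ui:v3.23.5
  ports:
    - "7171:8080"
  networks:
    - dockernet
  # Makes host.docker.internal resolve to the Docker host (Linux, Docker 20.10+);
  # Docker Desktop provides this name out of the box.
  extra_hosts:
    - "host.docker.internal:host-gateway"
  environment:
    - URL=http://host.docker.internal:8080/actions/v3/api-docs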
