I have a problem using webdriver.Remote (in a Docker container) to connect to another Selenium Grid container. These are my docker-compose file and the Python file that uses the webdriver.
python file:
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

sleep(10)  # give the grid some time to come up

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--headless')

driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
    options=chrome_options,  # pass the options; they were created but never used before
)
docker-compose file:
version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.14.0
    container_name: selenium-hub
    ports:
      - "9090:4444"
  chromenode:
    image: selenium/node-chrome:3.14.0
    depends_on:
      - selenium-hub
    links:
      - selenium-hub:hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  flask-web:   # executes the Python file above
    image: main
    container_name: template_flask
    depends_on:
      - selenium-hub
      - chromenode
    links:
      - selenium-hub
      - chromenode
The error I got:
MaxRetryError: HTTPConnectionPool(host='selenium-hub', port=4444): Max retries exceeded with url:
I have seen many discussions about this error but still can't solve it. Could anyone give me some tips? Thanks!
Sounds like a network issue between the containers. I would suggest creating a network for all the containers so that they can talk to each other using the service names defined in your docker-compose file. For example:
version: "3"
services:
  hub:
    image: selenium/hub:3
    ports:
      - "4444:4444"
    environment:
      GRID_MAX_SESSION: 16
      GRID_BROWSER_TIMEOUT: 180
      GRID_TIMEOUT: 60
    networks:
      - selenium_net
  chrome:
    image: selenium/node-chrome-debug:3
    container_name: chrome_node
    depends_on:
      - hub
    environment:
      HUB_PORT_4444_TCP_ADDR: hub
      HUB_PORT_4444_TCP_PORT: 4444
      NODE_MAX_SESSION: 4
      NODE_MAX_INSTANCES: 4
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "9001:5900"
    links:
      - hub
    networks:
      - selenium_net
  firefox:
    image: selenium/node-firefox-debug:3
    container_name: firefox_node
    depends_on:
      - hub
    environment:
      HUB_PORT_4444_TCP_ADDR: hub
      HUB_PORT_4444_TCP_PORT: 4444
      NODE_MAX_SESSION: 5
      NODE_MAX_INSTANCES: 5
    volumes:
      - /dev/shm:/dev/shm
    ports:
      - "9003:5900"
    links:
      - hub
    networks:
      - selenium_net
networks:
  selenium_net:
    driver: bridge
    name: selenium-net
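With all services attached to the same network, the client container can reach the hub simply by its service name. Here is a minimal sketch of the Python side, assuming the Selenium 3.x bindings and that the script runs in a container attached to selenium_net (the service name and URL follow the compose file above):

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# 'hub' is the compose service name, resolvable on the shared selenium_net network
driver = webdriver.Remote(
    command_executor='http://hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get('https://example.com')
print(driver.title)
driver.quit()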
Related
I am trying to run this docker-compose file, but my microservices cannot connect to the Eureka server through the Docker network bridge. Does anyone know where the problem is? This is the docker-compose file I am running:
version: '3'
services:
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    ports:
      - "9411:9411"
    networks:
      - spring
  rabbitmq:
    image: rabbitmq:3.9.11-management-alpine
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - spring
  eureka-server:
    image: shaslan/eureka-server:latest
    container_name: eureka-server
    ports:
      - "8761:8761"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
  api-gw:
    image: shaslan/apigw:latest
    container_name: apigw
    ports:
      - "8083:8083"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
  customer:
    image: shaslan/customer:latest
    container_name: customer
    ports:
      - "8080:8080"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  fraud:
    image: shaslan/fraud:latest
    container_name: fraud
    ports:
      - "8081:8081"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
  notification:
    image: shaslan/notification:latest
    container_name: notification
    ports:
      - "8082:8082"
    environment:
      - "SPRING_PROFILES_ACTIVE=docker"
    networks:
      - spring
    depends_on:
      - zipkin
      - eureka-server
      - rabbitmq
networks:
  spring:
    driver: bridge
When I open up my Eureka server, it doesn't discover any microservices. Any help would be appreciated.
I'm setting up a local TeamCity and I want to load my Selenium Grid via TeamCity, so I created the docker-compose file below. Everything works and it seems to be running, but I cannot reach it at the correct IP. Does anybody know where I need to look for the correct IP?
version: '2'
services:
  firefox:
    image: selenium/node-firefox-debug
    ports:
      - "6900:6900"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  firefoxTest:
    image: selenium/node-firefox-debug
    ports:
      - "6901:6901"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  firefoxTesting:
    image: selenium/node-firefox-debug
    ports:
      - "6902:6902"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  chrome:
    image: selenium/node-chrome-debug
    ports:
      - "5900:5900"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
I'm trying to set up zipkin, elasticsearch, prometheus, and grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this information:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one:
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
Using MySQL instead works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error between Zipkin and Elasticsearch disappears, but it still occurs between Elasticsearch and zipkin-dependencies.
The problem lies in your ES_HOSTS variable. From the docs:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
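As a quick sanity check, you can verify that the Elasticsearch base URL is reachable by service name from inside the compose network. Here is a minimal sketch using only the Python standard library, assuming it runs in a container attached to the same network as the storage service:

from urllib.request import urlopen

# 'storage' is the compose service name of the Elasticsearch container
with urlopen('http://storage:9200') as resp:
    print(resp.status, resp.read()[:200])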
Finally, I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the use of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies service, the added entrypoint:
entrypoint: crond -f
This is really the key to avoiding the exception when I start docker-compose.
To solve this issue, I looked at this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
I have a service connecting to nsqd to produce and consume messages. I have integration tests connecting to it (broadcast address 127.0.0.1), and this works fine when run locally from the CLI or the IDE.
Then I created this service to bring up with docker-compose, but it cannot connect to nsqd.
The following is my docker-compose file:
version: '3'
services:
  redis:
    image: redis:4.0.9-alpine
    ports:
      - "6379:6379"
  nsqlookupd:
    image: nsqio/nsq:v0.3.8
    command: /nsqlookupd
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq:v0.3.8
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=127.0.0.1
    links:
      - nsqlookupd:nsqlookupd
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq:v0.3.8
    ports:
      - "4171:4171"
    links:
      - nsqlookupd:nsqlookupd
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
  creator:
    build: "creator/"
    depends_on:
      - nsqlookupd
      - nsqd
      - redis
    environment:
      SERVER_ADDR: ":8080"
      NSQ_ADDR: "nsqd:4150"
      NSQ_TOPIC: "driver_locations"
      NSQ_CHANNEL: "ch"
      REDIS: "redis:6379"
    ports:
      - "8080:8080"
Right now I don't care about running the tests locally, just about having all the containers working properly.
I have tried changing the broadcast address and removing it.
As they say in the docs (https://nsq.io/deployment/docker.html#using-docker-compose), this is the last thing I tried (basically the changes are the commands), with no luck:
version: '3'
services:
  redis:
    image: redis:4.0.9-alpine
    ports:
      - "6379:6379"
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160"
      - "4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160
    depends_on:
      - nsqlookupd
    ports:
      - "4150"
      - "4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    ports:
      - "4171"
  creator:
    build: "creator/"
    depends_on:
      - nsqlookupd
      - nsqd
      - redis
    environment:
      SERVER_ADDR: ":8080"
      NSQ_ADDR: "nsqd:4150"
      NSQ_TOPIC: "driver_locations"
      NSQ_CHANNEL: "ch"
      REDIS: "redis:6379"
    ports:
      - "8080:8080"
It was a cache problem, sorry for that :facepalm:
What I said wasn't working (removing the --broadcast-address flag from the nsqd command) actually IS working.
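Without --broadcast-address=127.0.0.1, nsqd advertises its own container hostname, which the other containers can resolve on the compose network. A quick way to confirm connectivity by service name is to hit nsqd's HTTP /ping endpoint from the consuming container; a minimal Python sketch, assuming it runs in a container on the same compose network:

from urllib.request import urlopen

# 'nsqd' resolves to the nsqd container on the compose network; 4151 is its HTTP port
with urlopen('http://nsqd:4151/ping') as resp:
    print(resp.read())  # b'OK' when nsqd is healthy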
I'm trying to run Selenium from a Sidekiq worker with docker-compose.
It works well if I run the job from a Rails task, but it doesn't work when I run it from Sidekiq.
I get this error when I run the job from Sidekiq:
Errno::EADDRNOTAVAIL: Failed to open TCP connection to localhost:4444 (Cannot assign requested address - connect(2) for "localhost" port 4444)
docker-compose.yml
version: '3'
services:
  db:
    image: mysql
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
  redis:
    image: redis:latest
    ports:
      - 6379:6379
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/myapp
    depends_on:
      - db
      - redis
  selenium-hub:
    image: selenium/hub:3.12.0-boron
    container_name: selenium-hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.12.0-boron
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox:3.12.0-boron
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
Please suggest how to fix this problem.
I have it working with a docker-compose.yml like this:
version: '3.3'
services:
  selenium-hub:
    container_name: selenium_hub
    image: selenium/hub:3.12.0-cobalt
    ports:
      - 4444:4444
    networks:
      - selenium_grid
  selenium-chrome:
    container_name: selenium_chrome
    image: selenium/node-chrome:3.12.0-cobalt
    environment:
      - HUB_HOST=selenium_hub
      - HUB_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
    networks:
      - selenium_grid
    depends_on:
      - selenium-hub
  selenium-firefox:
    container_name: selenium_firefox
    image: selenium/node-firefox:3.12.0-cobalt
    environment:
      - HUB_HOST=selenium_hub
      - HUB_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
    networks:
      - selenium_grid
    depends_on:
      - selenium-hub
networks:
  selenium_grid:
    driver: bridge
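The compose file alone is not enough: whatever driver the Sidekiq worker builds must point at the hub by its service or container name rather than localhost, since localhost inside the sidekiq container is the container itself. The asker's app is Rails, so purely as an illustration of the URL to use (and assuming the app containers are attached to the same network as the hub), here is the equivalent connection in Python against the grid defined above:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# 'selenium-hub' is the hub's compose service name; 'localhost:4444' only works from the host
driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
)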