I'm setting up a local TeamCity instance and I want to launch my Selenium grid via TeamCity, so I created the docker-compose file below. Everything works and it seems to be running; I just cannot find it at the correct IP. Does anybody know where I need to look for the correct IP?
version: '2'
services:
  firefox:
    image: selenium/node-firefox-debug
    ports:
      - "6900:6900"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  firefoxTest:
    image: selenium/node-firefox-debug
    ports:
      - "6901:6901"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  firefoxTesting:
    image: selenium/node-firefox-debug
    ports:
      - "6902:6902"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  chrome:
    image: selenium/node-chrome-debug
    ports:
      - "5900:5900"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - hub
    environment:
      HUB_HOST: hub
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
I have Magento running in a Docker container, set up using this tutorial (https://github.com/markshust/docker-magento).
The Docker container is accessed via https://magento.test and this works fine in the browser. We have a script in Magento that tries to connect to https://magento.test from within the container, but this fails with Could not resolve host: magento.test.
Basically, the host can access magento.test and connect to the Docker container, but the Docker container cannot connect to itself.
I have tried adding extra hosts to the docker-compose.yml (see below), but this has not worked. I am guessing the IP 127.0.0.1 is incorrect.
version: "3"
services:
app:
image: markoshust/magento-nginx:1.18-8
ports:
- "80:8000"
- "443:8443"
volumes: &appvolumes
- ~/.composer:/var/www/.composer:cached
- ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
- ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
- appdata:/var/www/html
- sockdata:/sock
- ssldata:/etc/nginx/certs
extra_hosts: &appextrahosts
## Selenium support, replace "magento.test" with URL of your site
- "magento.test:127.0.0.1"
phpfpm:
image: markoshust/magento-php:7.4-fpm-15
volumes: *appvolumes
env_file: env/phpfpm.env
#extra_hosts: *appextrahosts
db:
image: mariadb:10.4
command:
--max_allowed_packet=64M
--optimizer_use_condition_selectivity=1
--optimizer_switch="rowid_filter=off"
ports:
- "3306:3306"
env_file: env/db.env
volumes:
- dbdata:/var/lib/mysql
redis:
image: redis:6.2-alpine
ports:
- "6379:6379"
elasticsearch:
image: markoshust/magento-elasticsearch:7.16-0
ports:
- "9200:9200"
- "9300:9300"
environment:
- "discovery.type=single-node"
## Set custom heap size to avoid memory errors
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
## Avoid test failures due to small disks
## More info at https://github.com/markshust/docker-magento/issues/488
- "cluster.routing.allocation.disk.threshold_enabled=false"
- "index.blocks.read_only_allow_delete"
rabbitmq:
image: markoshust/magento-rabbitmq:3.9-0
ports:
- "15672:15672"
- "5672:5672"
volumes:
- rabbitmqdata:/var/lib/rabbitmq
env_file: env/rabbitmq.env
mailcatcher:
image: sj26/mailcatcher
ports:
- "1080:1080"
## Blackfire support, uncomment to enable
#blackfire:
# image: blackfire/blackfire:2
# ports:
# - "8307"
# env_file: env/blackfire.env
## Selenium support, uncomment to enable
#selenium:
# image: selenium/standalone-chrome-debug:3.8.1
# ports:
# - "5900:5900"
# extra_hosts: *appextrahosts
volumes:
appdata:
dbdata:
rabbitmqdata:
sockdata:
ssldata:
Any help would be greatly appreciated, thanks!
Does host network usage solve your problem?
services:
  app:
    image: markoshust/magento-nginx:1.18-8
    network_mode: "host" # share the host's network stack
    ports: # note: published ports are ignored when network_mode is "host"
      - "80:8000"
      - "443:8443"
    volumes: &appvolumes
      - ~/.composer:/var/www/.composer:cached
      - ~/.ssh/id_rsa:/var/www/.ssh/id_rsa:cached
      - ~/.ssh/known_hosts:/var/www/.ssh/known_hosts:cached
      - appdata:/var/www/html
      - sockdata:/sock
      - ssldata:/etc/nginx/certs
    extra_hosts: &appextrahosts
      ## Selenium support, replace "magento.test" with URL of your site
      - "magento.test:127.0.0.1"
I have a problem using webdriver.Remote (in a Docker container) to connect to a Selenium grid in another container. Below are my docker-compose file and the Python file using the webdriver.
Python file:
from time import sleep

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

sleep(10)  # crude wait for the hub to come up

options = webdriver.ChromeOptions()
options.add_argument('--no-sandbox')
options.add_argument('--headless')
driver = webdriver.Remote(
    command_executor='http://selenium-hub:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
    options=options,  # without this, the Chrome options above are never applied
)
docker-compose file:
version: "3"
services:
selenium-hub:
image: selenium/hub:3.14.0
container_name: selenium-hub
ports:
- "9090:4444"
chromenode:
image: selenium/node-chrome:3.14.0
depends_on:
- selenium-hub
links:
- selenium-hub:hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
flask-web:(execute python file)
image: main
container_name: template_flask
depends_on:
- selenium-hub
- chromenode
links:
- selenium-hub
- chromenode
The error I got:
MaxRetryError: HTTPConnectionPool(host='selenium-hub', port=4444): Max retries exceeded with url:
I have seen many discussions about this error but still can't solve it. Could anyone give me some tips? Thanks!
This sounds like a network issue between the containers. I would suggest creating a network for all of them; that way they can talk to each other using the service names defined in your docker-compose file. For reference:
version: "3"
services:
hub:
image: selenium/hub:3
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 180
GRID_TIMEOUT: 60
networks:
- selenium_net
chrome:
image: selenium/node-chrome-debug:3
container_name: chrome_node
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
networks:
- selenium_net
firefox:
image: selenium/node-firefox-debug:3
container_name: firefox_node
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 5
NODE_MAX_INSTANCES: 5
volumes:
- /dev/shm:/dev/shm
ports:
- "9003:5900"
links:
- hub
networks:
- selenium_net
networks:
selenium_net:
driver: bridge
name: selenium-net
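Two things worth adding, hedged: the flask-web service must also join selenium_net (or otherwise share a network with the hub), and even with correct DNS the hub may not be listening yet when the script runs, which also produces MaxRetryError. Polling the hub is more robust than a fixed sleep(10); a sketch using Grid 3's status endpoint:

```python
# Hedged sketch: poll the hub until it answers instead of sleeping a fixed
# time. Assumes the client container shares a network with selenium-hub.
import time
import urllib.request

def wait_for_hub(url="http://selenium-hub:4444/wd/hub/status", timeout=60):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if urllib.request.urlopen(url, timeout=2).status == 200:
                return
        except OSError:  # connection refused, DNS failure, timeout...
            pass
        time.sleep(1)
    raise RuntimeError("selenium hub did not become ready in %ds" % timeout)

wait_for_hub()  # call this before webdriver.Remote(...)
```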
Please see below my docker-compose.yml and the attached screenshot of the hub logs. I am not seeing the nodes get attached to the hub; any suggestions on how to make it work?
I have added screenshots of the output of docker-compose.yml and of the containers created in Docker. When I try to open localhost:4444, I see the error: The Grid has no registered Nodes yet.
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
links:
- hub```
[![DockerContainerCreatedSuccessfully][1]][1]
[1]: https://i.stack.imgur.com/IiMpK.png
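In case it helps: without a tag, selenium/hub and selenium/node-chrome now pull Selenium Grid 4 (the "Grid has no registered Nodes" message is a Grid 4 error), where nodes register over an event bus rather than Grid 3's HUB_HOST mechanism. So, hedged, the nodes likely need SE_EVENT_BUS_HOST pointing at the hub, with the bus ports 4442/4443 reachable, before they will attach. You can check what the hub actually sees:

```python
# Hedged check: Grid 4's /status endpoint lists registered nodes; an empty
# list means the nodes never managed to register with the hub.
import json
import urllib.request

status = json.load(urllib.request.urlopen("http://localhost:4444/status"))
value = status["value"]
print("ready:", value["ready"])
print("registered nodes:", len(value.get("nodes", [])))
```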
I am trying to set up Zipkin, Elasticsearch, Prometheus and Grafana with docker-compose.yml.
When I run the containers, I see this in the log:
dependencies_zipkin | 19/09/30 14:37:09 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
I'm on macOS with Docker 2.1.0.3.
The content of my docker-compose.yml is this:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - "9200:9200"
    environment:
      - "xpack.security.enabled=false"
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    restart: unless-stopped
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
    restart: unless-stopped
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    depends_on:
      - dependencies
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
    ports:
      - "9411:9411"
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies_zipkin
    depends_on:
      - storage
    environment:
      - "STORAGE_TYPE=elasticsearch"
      - "ES_HOSTS=storage"
When I connect to localhost:9200, I see that Elasticsearch is working fine, and Zipkin is deployed on port 9411, but I get the error:
ERROR: cannot load service names: server error (Service Unavailable) (due to the network error)
In the log, I have this line:
dependencies_zipkin | 19/09/30 14:45:20 ERROR NetworkClient: Node [172.28.0.2:9200] failed (java.net.ConnectException: Connection refused (Connection refused)); no other nodes left - aborting...
and this one:
zipkin | java.lang.IllegalStateException: couldn't connect any of [Endpoint{storage:80, ipAddr=172.28.0.2, weight=1000}]
Any idea?
UPDATE
By using MySQL it works fine, so the problem is at the level of Elasticsearch.
I also tried using
"STORAGE_PORT_9200_TCP_ADDR=127.0.0.1"
but the issue still occurs.
UPDATE
As mentioned in the solution given by Brian, I have to use:
ES_HOSTS=http://storage:9300
The key is the port; I was using port 9200.
The error disappears between zipkin and ES, but still occurs between ES and zipkin-dependencies.
The problem lies in your ES_HOSTS variable, from the docs here:
ES_HOSTS: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
Defaults to "http://localhost:9200".
So you will need: ES_HOSTS=http://storage:9200
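As a quick sanity check (hedged; 9200 is the HTTP port that ES_HOSTS expects, while 9300 is Elasticsearch's native transport port), you can confirm the base URL answers:

```python
# Hedged check: Elasticsearch's HTTP API should answer on 9200. Run from the
# host (the port is published); inside the compose network, replace localhost
# with the service name "storage".
import json
import urllib.request

health = json.load(urllib.request.urlopen("http://localhost:9200/_cluster/health"))
print(health["status"])  # "green", "yellow", or "red"
```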
Finally I have this file:
version: '3.7'
services:
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    ports:
      - 9200:9200
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
    ports:
      - 9411:9411
    depends_on:
      - storage
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    depends_on:
      - storage
    environment:
      - STORAGE_TYPE=elasticsearch
      - "ES_HOSTS=elasticsearch:9300"
      - "ES_NODES_WAN_ONLY=true"
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - $PWD/prometheus:/etc/prometheus/
      - /tmp/prometheus:/prometheus/data:rw
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    container_name: grafana
    depends_on:
      - prometheus
    ports:
      - "3000:3000"
The main differences are the use of
"ES_HOSTS=elasticsearch:9300"
instead of
"ES_HOSTS=storage:9300"
and, in the dependencies service, the added entrypoint:
entrypoint: crond -f
This is really the key to avoiding the exception when I start docker-compose.
To find this solution, I checked this project: https://github.com/openzipkin/docker-zipkin
The remaining question is: why do I need to use entrypoint: crond -f?
I have a service connecting to nsqd to produce and consume messages. I have integration tests connecting to it (broadcast address 127.0.0.1), and these work fine when executed locally from the CLI or the IDE.
Then I created this service to bring up with docker-compose, but it cannot connect to nsqd.
Below is my docker-compose file:
version: '3'
services:
  redis:
    image: redis:4.0.9-alpine
    ports:
      - "6379:6379"
  nsqlookupd:
    image: nsqio/nsq:v0.3.8
    command: /nsqlookupd
    ports:
      - "4160:4160"
      - "4161:4161"
  nsqd:
    image: nsqio/nsq:v0.3.8
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160 --broadcast-address=127.0.0.1
    links:
      - nsqlookupd:nsqlookupd
    ports:
      - "4150:4150"
      - "4151:4151"
  nsqadmin:
    image: nsqio/nsq:v0.3.8
    ports:
      - "4171:4171"
    links:
      - nsqlookupd:nsqlookupd
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
  creator:
    build: "creator/"
    depends_on:
      - nsqlookupd
      - nsqd
      - redis
    environment:
      SERVER_ADDR: ":8080"
      NSQ_ADDR: "nsqd:4150"
      NSQ_TOPIC: "driver_locations"
      NSQ_CHANNEL: "ch"
      REDIS: "redis:6379"
    ports:
      - "8080:8080"
Right now I don't care about running the tests locally; I just want all the containers working properly.
I have tried changing the broadcast address and removing it altogether.
Following the docs (https://nsq.io/deployment/docker.html#using-docker-compose), this is the last thing I tried (basically the only changes are the commands), with no luck:
version: '3'
services:
  redis:
    image: redis:4.0.9-alpine
    ports:
      - "6379:6379"
  nsqlookupd:
    image: nsqio/nsq
    command: /nsqlookupd
    ports:
      - "4160"
      - "4161"
  nsqd:
    image: nsqio/nsq
    command: /nsqd --lookupd-tcp-address=nsqlookupd:4160
    depends_on:
      - nsqlookupd
    ports:
      - "4150"
      - "4151"
  nsqadmin:
    image: nsqio/nsq
    command: /nsqadmin --lookupd-http-address=nsqlookupd:4161
    depends_on:
      - nsqlookupd
    ports:
      - "4171"
  creator:
    build: "creator/"
    depends_on:
      - nsqlookupd
      - nsqd
      - redis
    environment:
      SERVER_ADDR: ":8080"
      NSQ_ADDR: "nsqd:4150"
      NSQ_TOPIC: "driver_locations"
      NSQ_CHANNEL: "ch"
      REDIS: "redis:6379"
    ports:
      - "8080:8080"
It was a cache problem, sorry for that :facepalm:
What I said wasn't working (removing the broadcast address) actually IS working: the second compose file above, without --broadcast-address, connects fine.