Expose application running on host port to Selenoid - docker

I'm running a Selenoid test automation script, and would like to run this script against a local application. However, I can't find out how to expose my local application (running on port 8787) to Selenoid. I found the following thread discussing a similar issue, but it doesn't solve my problem: the linked thread suggests using the host's IP address, whereas I want my tests to be system-independent. The host IP address differs per system and is hard to retrieve in a system-independent way.
I already tried adding the expose field to my docker compose file:
version: '3'
services:
  selenoid:
    network_mode: bridge
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"
However, this doesn't work, because the browser containers that Selenoid creates are not passed the same option.
Is there any way to expose host port 8787 to my Selenoid browser containers in a system/OS-independent way, either via the docker-compose.yml file, a capability passed to the remote driver, or any other means?

Selenoid runs browsers in standard Docker containers, so anything applicable to Docker is applicable to Selenoid browsers. Docker was created for the case where all interacting parts are packed into containers; in that case you have legacy Docker links or modern custom Docker networks at your service. If you still want to run your application on the host machine without packing it into a container, you have to either use the host machine's IP or, on some platforms, a special domain name that Docker provides, e.g. docker.for.mac.localhost on Mac.
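For illustration, a minimal sketch of that approach in the compose file, assuming Docker for Mac (the special name varies by platform and Docker version; newer Docker Desktop releases also provide host.docker.internal, and APP_BASE_URL is a hypothetical variable your test runner would read):

environment:
  # hypothetical: hand the host-reachable URL to the tests, so the
  # browsers open the app via the Docker-provided host name
  - APP_BASE_URL=http://docker.for.mac.localhost:8787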

I finally realized that the application I run is in fact itself running in a Docker container, so linking the two is as easy as putting Selenoid and the application in the same Docker network. The final docker-compose.yml is as follows:
version: '3'
networks:
  my_network_name:
    external:
      name: my_network_name # This assumes the network is already created
services:
  selenoid:
    networks:
      my_network_name: null
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-container-network", "my_network_name", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"

Related

Portainer Stack - docker compose issue with MacVLan network

I am starting to use portainer.io to manage my Docker images, instead of the Synology DSM Docker GUI.
Background information:
I've used macvlan to give my Pi-hole container its own IP address; overall, everything regarding this Pi-hole runs fine with these settings, made via the DSM GUI.
Problem:
I now would like to use portainer.io to manage my Docker installation, including the Stacks option, which uses docker compose.
I am now struggling to get my Pi-hole image up with this compose file:
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    networks:
      - docker
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'Europe/Berlin'
      WEBPASSWORD: 'password'
      ServerIP: "0.0.0.0"
    # Volumes store your data between container upgrades
    volumes:
      - '/pihole/pihole/:/etc/pihole/'
      - '/pihole/dnsmasq/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
Does anyone have an idea why I get "Unable to deploy stack" as the error message?
You are telling the service to use a network called "docker", but the network is not defined in the compose file. Is this the complete docker-compose file?
If yes, then you are missing the networks section:
networks:
  docker:
    external: true
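Note that an external network must already exist before the stack is deployed. Alternatively, if the stack should manage the network itself, a sketch like the following could replace the external definition (the parent interface and subnet are hypothetical and would need to match the macvlan setup created in DSM):

networks:
  docker:
    driver: macvlan
    driver_opts:
      parent: eth0                   # hypothetical host interface
    ipam:
      config:
        - subnet: 192.168.178.0/24   # hypothetical subnet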

Configure Docker containers to communicate through network

How can a Docker container be configured to communicate with another Docker container over the network?
In my case, Redis is running in a Docker container with port 6379 published, and it can be accessed from the host machine. I have started another container that needs to access Redis, but Redis is not reachable from it.
Setting network_mode to host didn't solve the problem.
I usually connect containers using a shared network. In my project, I have two containers, one running a .NET Core application and the other a database, and I connect the two using the following docker-compose file:
services:
  coreapi:
    ports:
      - "8088:80"
    volumes:
      - ~/logs/dockerlogs/coreapi:/app/logs
    networks:
      - test
  mongodb:
    ports:
      - "27017:27017"
      - "27018:27018"
      - "27019:27019"
    networks:
      - test
networks:
  test:
    external: true
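Applied to the Redis case from the question, the same pattern would be (a minimal sketch; the consumer service name and image are hypothetical):

version: '3'
services:
  redis:
    image: redis
    # no host port mapping is needed for container-to-container access
  myapp:                       # hypothetical consumer service
    image: myorg/myapp:latest  # hypothetical image
    depends_on:
      - redis

Compose places both services on a shared default network, so the application reaches Redis at redis:6379 rather than localhost:6379.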

Docker-Compose, How To Connect Java Application With Custom Docker Network On Redis Container

I have a Java application that connects to an external database through a custom Docker network, and I want to connect a Redis container as well.
docker-redis GitHub topic
I tried the following in the application config:
1. localhost:6379
2. app_redis://app_redis:6379
3. redis://app_redis:6379
None of these work in my setup.
docker network setup:
docker network create -d bridge --subnet 192.168.0.0/24 --gateway 192.168.0.1 mynet
I also read "Connect to a Database Running on Your Docker Host".
PS: this might be off-topic, but how can I define the network in docker-compose instead of using an external one?
docker-compose:
services:
  app-kotin:
    build: ./app
    container_name: app_server
    restart: always
    working_dir: /app
    command: java -jar app-server.jar
    ports:
      - 3001:3001
    links:
      - app-redis
    networks:
      - front
  app-redis:
    image: redis:5.0.9-alpine
    container_name: app-redis
    expose:
      - 6379
networks:
  front:
    external:
      name: mynet
With the setup above, how can I connect to the Redis container?
Both containers need to be on the same Docker network to communicate with each other. The app-kotin container is on the front network, but the app-redis container doesn't have a networks: block and so goes onto an automatically-created default network.
The simplest fix from what you have is to also put the app-redis container onto the same network:
app-redis:
  image: redis:5.0.9-alpine
  networks:
    - front
The Compose service name app-redis will then be usable as a host name, from other containers on the same network.
You can simplify this setup considerably. You don't generally need to manually specify IP configuration for Docker-private networks; Compose can create the network for you, and in fact it will create a network named default for you. (Networking in Compose discusses this further.) links: and expose: aren't used in modern Docker networking; Compose can provide a default container_name: for you; and you don't need to repeat the working_dir: or command: from the image. Removing all of that leaves you with:
version: '3'
services:
  app-kotin:
    build: ./app
    restart: always
    ports:
      - '3001:3001'
  app-redis:
    image: redis:5.0.9-alpine
The server container will be able to use the other container's Compose service name app-redis as a host name, even with this minimal configuration.
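On the application side, the connection settings then use the Compose service name as the host. A sketch, assuming a YAML-style application config (the exact property names depend on your client library):

# hypothetical application configuration
redis:
  host: app-redis
  port: 6379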

traefik hostname works for web apps but not for mongodb

I'm running a mongo instance with docker-compose and traefik.
myapp-mongo:
  build: ../images/myapp-mongo
  restart: always
  ports:
    - "27017:27017"
  labels:
    - "traefik.ports=27017,27018"
    - "traefik.backend=myapp-mongo"
    - "traefik.frontend.rule=Host:myapp-mongo.docker.localhost"
  networks:
    - development
  environment:
    - MONGO_USER=${MONGO_USER}
    - MONGO_PASSWD=${MONGO_PASSWD}
    - MONGO_AUTHDB=${MONGO_AUTHDB}
Mongo is running fine, and I can connect using 127.0.0.1 from my Mac.
The problem is that I can't connect using the hostname myapp-mongo.docker.localhost; it only works with the IP 127.0.0.1.
Pinging the IP 127.0.0.1 responds fine, but pinging the hostname doesn't work.
I've already added 127.0.0.1 proxy.docker.localhost to /etc/hosts to get traefik working.
All other web apps have working hostnames, e.g. myapp.docker.localhost. This problem only happens with this MongoDB container.
Probably because Træfik is an HTTP proxy and so only supports HTTP/HTTPS connections.
I believe #bpatel is right (see the comment I left on his answer with a link to the GitHub conversation): Traefik, at the time of writing, only supports HTTP/HTTPS.
Solution using native docker networks
However, you can get around this issue! Since you are using Docker, you can work around it by using the container name in your code (assuming Mongo and the code accessing it are both running in containers on a shared Docker network, which will be the case if the containers are spun up with docker-compose). Run the following to see if your containers are linked up correctly:
run docker ps to get the names of your running containers (under the NAMES column)
run docker network ls to see your network names
run docker network inspect <target_network_name> to verify that the containers from step 1 are on the same network
I run docker-compose from three separate compose files, so the following should cover most cases (apologies for any syntax errors; these are stripped-down code examples):
Entire docker-compose file that starts up traefik (under the directory name 'proxy'):
version: '2'
services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml
networks:
  webgateway:
    driver: bridge
snippet from my docker-compose file that spins up mongo
version: '2'
services:
  database:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - web
networks:
  web:
    external:
      name: proxy_webgateway
snippet from the docker-compose file that runs the mongo-accessing code
version: '2'
services:
  topicOntologyBuilder:
    image: topic-ontology-builder
    labels:
      - "traefik.backend=topicOntologyBuilder"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:topic-ontology.docker.localhost"
    networks:
      - web
    volumes:
      - ./:/home
networks:
  web:
    external:
      name: proxy_webgateway
Connection in Code
Not certain what language you're using; this is what the JS code looked like for me to connect to Mongo (inside that topicOntologyBuilder container), using Traefik as the proxy (again, this works because we're making the most of Docker networks):
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://<MONGO_CONTAINER_NAME>/<DB_NAME>', function(err, db) {
  // insert code here to interact with mongo
});
Why this works
This works because Docker does some clever DNS resolution within containers: each container knows the IPs of the other containers on its networks, looked up by container name.
Extra intel
If your containers are on separate computers/VMs, you'll probably want to look at a service discovery tool (Consul plays well with Traefik) or do something fancy with a Docker overlay network, which is specific to containers in a cluster.
If using raw Docker networks, you can assign container aliases (this doesn't work with Traefik though, or at least it didn't a couple of months back).
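For reference, such aliases can also be declared in a compose file, giving a container extra DNS names on a specific network (a sketch; the alias itself is hypothetical):

services:
  database:
    image: mongo
    networks:
      web:
        aliases:
          - mongo-alias   # hypothetical additional hostname on this network
networks:
  web:
    external:
      name: proxy_webgateway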

Docker for Mac Host Networking

I'm using Docker for Mac. I have two containers.
1st: a PHP application that attempts to connect to MySQL on localhost:3306.
2nd: MySQL.
When running with links, they are able to reach each other.
However, I would like to avoid changing any code in the PHP application (e.g. changing localhost to "mysql") and stay with using localhost.
Host networking seems to do the trick; the problem is, when I enable host networking I can't access the PHP application on port 80 from my Mac.
If I docker exec -it into the PHP container and curl localhost, I see the HTML, so it looks like the port is just not being forwarded to the host machine?
This is an example docker-compose file. It runs MySQL in one container and phpMyAdmin in another; the containers are linked together, and you can access them from your host machine on ports 3316 and 8889.
my_mysql:
  image: mysql/mysql-server:latest
  container_name: my_mysql
  environment:
    - MYSQL_ROOT_PASSWORD=1234
    - MYSQL_DATABASE=test
    - MYSQL_USER=test
    - MYSQL_PASSWORD=test
  ports:
    - 3316:3306
  restart: always
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: my_myadmin
  links:
    - my_mysql:my_mysql
  environment:
    - PMA_ARBITRARY=0
    - PMA_HOST=my_mysql
  ports:
    - 8889:80
  restart: always
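Applied to the original PHP + MySQL setup, the same pattern would look roughly like this (a sketch; the PHP image name and the credentials are hypothetical):

php-app:
  image: my-php-app                # hypothetical application image
  ports:
    - 80:80
  links:
    - my_mysql
my_mysql:
  image: mysql/mysql-server:latest
  environment:
    - MYSQL_ROOT_PASSWORD=secret   # hypothetical password

The PHP application would then connect to the host my_mysql rather than localhost, which is the code change the asker hoped to avoid but the one that Docker networking expects.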
