Setting up Selenium Grid nodes on different EC2 instances with a local hub - docker

I'm trying to set up a distributed testing environment: a Selenium Grid in which all the nodes run in Docker containers on different EC2 instances, while the hub runs on my local machine. Is there a way to connect the nodes to the hub when the nodes are remote?
I tried to set up a hub on my Windows machine with this command
java -jar selenium-server-standalone-3.141.0.jar -role hub
and tried to register a node running on an EC2 instance to it with this command,
docker run -d -p 5555:5555 -e REMOTE_HOST="http://<PASTE-NODE-IP>:5555" -e HUB_PORT_4444_TCP_ADDR="<PASTE-HUB-IP>" -e HUB_PORT_4444_TCP_PORT="4444" --name chrome-node selenium/node-chrome:2.47.1
as mentioned in this thread: Selenium Grid with Docker containers on different hosts.
But the node doesn't connect to the hub properly.
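As a sketch of what likely needs to change (assuming the same placeholder IPs and a 3-series node image; none of this is from the original post): the 2.47.1 node image is far older than the 3.141.0 hub, so the versions should match, and both machines must be reachable over the public network, with port 4444 open in the Windows firewall and port 5555 open in the EC2 security group. Registration from the EC2 instance would then look something like
docker run -d -p 5555:5555 -e HUB_HOST=<PASTE-HUB-IP> -e HUB_PORT=4444 -e REMOTE_HOST="http://<PASTE-NODE-IP>:5555" --name chrome-node selenium/node-chrome:3.141.59
where HUB_HOST/HUB_PORT tell the node where to register, and REMOTE_HOST advertises the node's own public address so the hub can call back to it on port 5555.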

Related

How to run my local application in a container

I have built my application locally and it is accessible locally; something like 'http://localhost:4200/admin/login' works fine.
And I have my automation suite that I want to execute in a Selenium Grid environment, so I created a hub and a node and they are interconnected.
When I run against public sites like google.com, those sites are accessible in the Selenium node container and they work fine.
But when I run my suite pointing at my local application, the tests are not executed in the node container. How can I access my locally deployed application in the node container?
I started my hub using
docker run -d -p 4545:4444 --name selenium-hub selenium/hub
and started my node as
docker run -d -P --link selenium-hub:hub selenium/node-chrome-debug
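A minimal sketch of the usual fix, under the assumption that the app runs directly on the Docker host: inside the node container, localhost refers to the container itself, not the machine serving the app, so the suite has to target an address the container can route to. On Docker Desktop (Windows/Mac) the special name host.docker.internal resolves to the host; on Linux, use the host's LAN IP (192.168.1.20 below is a placeholder) and make sure the app listens on 0.0.0.0 rather than only 127.0.0.1:
# Docker Desktop (Windows/Mac):
curl http://host.docker.internal:4200/admin/login
# Linux, substituting the host's actual LAN IP:
curl http://192.168.1.20:4200/admin/login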

Docker container cannot reach other services for a few seconds

I have a Docker swarm node running a set of Docker services connected by an overlay network. When needed, I dynamically add another Docker node via Terraform: a separate EC2 instance that is set up and connected as a worker node to the existing swarm.
From my manager node I'll run a container, and that running container needs to talk to the existing services on the manager node, e.g. connecting to the Postgres service and running a few queries.
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried pinging manually from a shell inside the container, and the ping only succeeds after some 4 or 5 seconds. I have tried this hundreds of times and always hit the same issue, and it doesn't matter how long ago the node joined the swarm: every time I run the above command, I face the same delay.
Also, I don't have control over what script is run in the container, so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, Postgres will fail while another service exposing REST endpoints can be reached, but it's not consistent.
I was able to reproduce this issue with a bunch of test services.
Steps to reproduce the issue:
1. Create a docker swarm and add another machine as a worker node to the swarm.
2. Create an overlay network on node 1:
docker network create -d overlay --attachable platform
3. Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" dockerbogo/docker-nginx-hello-world
done
4. From node 1, run a task on node 2:
docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?
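One workaround sketch, assuming you control the docker run invocation even though you can't change the script baked into the image: warm up the overlay network from a throwaway container first, and start the real task only once the service name resolves.
docker -H <node ip> run --rm --network platform busybox sh -c 'until nslookup service-1; do sleep 1; done'
docker -H <node ip> run --network platform <some image> <command>
The delay most likely reflects the time the worker needs to plumb the overlay's VXLAN interfaces and receive the service records over the swarm control plane when the first container on that node attaches to the network.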

Is docker run --net=host analogous to the "Host" network mode on AWS ECS?

I was following the discussion in this article. We want to bring up our Dockerized application in ECS; currently the application runs as a standalone Docker container using the command
docker run --net=host -d -p PORT:PORT My-APP
The question is: if we migrate to ECS, does this --net=host setting map to the host networking mode in ECS?
Yes:
If the network mode is host, the task bypasses Docker's built-in virtual network and maps container ports directly to the EC2 instance's network interface. In this mode, you can't run multiple instantiations of the same task on a single container instance when port mappings are used.
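For illustration, a minimal sketch of registering such a task via the AWS CLI (the family, container name, image, and memory values are placeholders, not from the question):
aws ecs register-task-definition --family my-app --network-mode host --container-definitions '[{"name":"my-app","image":"my-app-image","memory":512,"essential":true}]'
Note that host mode has no host-to-container port mapping at all: the container binds PORT on the instance directly, so the -p PORT:PORT part has no ECS equivalent (Docker itself discards published ports when --net=host is used).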

Docker Swarm - Map ports and Scaling

I am currently using Docker Engine 1.11, and I am investigating whether it's possible for me to move to Docker 1.12 and use Swarm. I currently use Docker to run 50+ Bamboo agents, all of which need to have a port mapped to a port on the server. For instance, each container needs to have port 4000 available, so when I run them I do:
docker run -p 10000:4000 myimg
docker run -p 10001:4000 myimg
docker run -p 10002:4000 myimg
docker run -p 10003:4000 myimg
In Docker Swarm, from what I understand, I would run the following command to scale my service to 50 containers:
docker service scale helloworld=50
But, if I did this, then they would all be trying to map to the same port. How can I accomplish this? Is it possible?
No, you can't.
Mapping a single published port to multiple containers is precisely one of the key functions that docker service provides (service discovery via the routing mesh). Another is self-healing: when a container fails, swarm starts a new one.
I know nothing about Bamboo, so I can't tell you whether there's a way to run the Bamboo service with swarm mode.
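As a sketch of the swarm-mode model, reusing the image name from the question: the published port belongs to the service rather than to each container, and the routing mesh load-balances that one port across all replicas:
docker service create --name bamboo-agents -p 10000:4000 --replicas 50 myimg
If each agent genuinely needs its own distinct, stable host port, ingress publishing doesn't model that; the closest equivalent would be one service per agent, each with its own -p mapping.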

How do I run 2 environments of SkyDns/Skydock simultaneously?

Ref: https://github.com/crosbymichael/skydock
https://github.com/crosbymichael/skydns
First I fired up those two containers.
docker run -d -p 8080:8080 -p 172.17.42.1:53:53/udp --name skydns crosbymichael/skydns -nameserver 8.8.8.8:53 -domain docker
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock crosbymichael/skydock -ttl 30 -environment dev -s /docker.sock -domain docker -name skydns
And this setup is working as expected.
Now I want to spawn a second, production environment. This time I only fired up another skydock container, with the environment production, as follows.
docker run -d -v /var/run/docker.sock:/docker.sock --name skydock-prod crosbymichael/skydock -ttl 30 -environment prod -s /docker.sock -domain docker -name skydns
Querying the API doesn't show the production skydock.
curl $(docker-ip):8080/skydns/services/
And now I am wondering how to set up the production version of skydock.
Do I have to run it on a separate Docker host?
If I fire it up on the same Docker host, under which DNS entry will the new containers be available?
Do I have to pass some flags/variables when I fire up new containers so that they are available in the production environment?
I don't know of a way to make two or more skydock instances listen to the same docker.sock (within a single host machine), and I think it is conceptually wrong: Docker containers know nothing about your logical environments (production, staging, ...).
I have a multihost setup with skydns and skydock. I run skydns on a separate host, and each of two other servers runs a single instance of skydock, which registers all Docker container IPs in the centralised SkyDNS, so that all containers are visible by DNS name across the different physical hosts.
All of this works on top of the Flannel network overlay, https://github.com/coreos/flannel (which requires etcd).
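For what it's worth, as I understand skydock's naming scheme (worth double-checking against its README), containers are registered as <container-name>.<image>.<environment>.<domain>, so a redis container named redis1 picked up by the prod skydock would resolve at something like
redis1.redis.prod.docker
which is how the -environment flag would keep dev and prod entries apart if both skydocks registered into the same SkyDNS.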
