I'm trying to test a Symfony 3 web application with Behat/Mink and the Selenium2Driver so that I can test JavaScript functionality too.
The application runs in a Docker container, so I added new containers for the Selenium hub and Chrome, as described here:
# docker-compose.yml
version: '3.5' # Docker Engine release 17.12.0+

networks:
  servicesnet:
    driver: bridge

services:
  apache:
    build:
      context: './apache2'
    container_name: apache-service
    ports:
      - "80:80"
      - "443:443"
    tty: true
    networks:
      - servicesnet
    volumes:
      - ${HOST_APACHE_CONFIG}:/etc/apache2
      - ${HOST_PAGES_PATH}:/var/www/localhost/htdocs

  selenium-hub:
    image: selenium/hub:4.0.0-alpha-6-20200730
    container_name: selenium-hub
    ports:
      - "4444:4444"
    networks:
      - servicesnet

  chrome:
    image: selenium/node-chrome:4.0.0-alpha-6-20200730
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
    networks:
      - servicesnet
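As a sanity check before pointing the tests at the grid, it can help to confirm the hub is up and the Chrome node has registered. A minimal check, assuming the ports and service names from the compose file above:

```shell
# From the host: the grid's status endpoint reports readiness and the
# registered nodes ("ready": true once the node has joined the hub).
curl -s http://localhost:4444/status

# From inside the apache container, "localhost" is the container itself;
# the hub is only reachable by its service name on the shared network:
docker-compose exec apache curl -s http://selenium-hub:4444/status
```

These commands run against the live stack, so they are only meaningful after `docker-compose up`.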
When I run docker-compose up, it outputs the following for the new containers:
chrome | 2020-08-12 07:36:19,917 INFO Included extra file "/etc/supervisor/conf.d/selenium.conf" during parsing
chrome | 2020-08-12 07:36:19,918 INFO supervisord started with pid 7
selenium-hub | 2020-08-12 07:36:19,297 INFO Included extra file "/etc/supervisor/conf.d/selenium-grid-hub.conf" during parsing
selenium-hub | 2020-08-12 07:36:19,298 INFO supervisord started with pid 7
selenium-hub | 2020-08-12 07:36:20,301 INFO spawned: 'selenium-grid-hub' with pid 10
selenium-hub | Starting Selenium Grid Hub...
selenium-hub | 2020-08-12 07:36:20,311 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
selenium-hub | 07:36:20.588 INFO [LoggingOptions.getTracer] - Using OpenTelemetry for tracing
selenium-hub | 07:36:20.589 INFO [LoggingOptions.createTracer] - Using OpenTelemetry for tracing
selenium-hub | 07:36:20.607 INFO [EventBusOptions.createBus] - Creating event bus: org.openqa.selenium.events.zeromq.ZeroMqEventBus
selenium-hub | 07:36:20.638 INFO [BoundZmqEventBus.<init>] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://172.28.0.3:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://172.28.0.3:4443]
selenium-hub | 07:36:20.676 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://172.28.0.3:4442 and tcp://172.28.0.3:4443
selenium-hub | 07:36:20.680 INFO [UnboundZmqEventBus.<init>] - Sockets created
selenium-hub | 07:36:20.681 INFO [UnboundZmqEventBus.lambda$new$2] - Bus started
chrome | 2020-08-12 07:36:21,136 INFO success: xvfb entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,136 INFO success: fluxbox entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,136 INFO success: vnc entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
chrome | 2020-08-12 07:36:21,137 INFO success: selenium-node entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
selenium-hub | 07:36:21.308 INFO [Hub.execute] - Started Selenium hub 4.0.0-alpha-6 (revision 5f43a29cfc): http://172.28.0.3:4444
chrome | 07:36:21.774 INFO [LoggingOptions.getTracer] - Using OpenTelemetry for tracing
chrome | 07:36:21.775 INFO [LoggingOptions.createTracer] - Using OpenTelemetry for tracing
chrome | 07:36:21.791 INFO [EventBusOptions.createBus] - Creating event bus: org.openqa.selenium.events.zeromq.ZeroMqEventBus
chrome | 07:36:21.829 INFO [UnboundZmqEventBus.<init>] - Connecting to tcp://selenium-hub:4442 and tcp://selenium-hub:4443
chrome | 07:36:21.857 INFO [UnboundZmqEventBus.<init>] - Sockets created
chrome | 07:36:21.859 INFO [UnboundZmqEventBus.lambda$new$2] - Bus started
chrome | 07:36:22.121 INFO [NodeServer.execute] - Reporting self as: http://172.28.0.5:5555
chrome | 07:36:22.175 INFO [NodeOptions.report] - Adding Chrome for {"browserName": "chrome"} 8 times
chrome | 07:36:22.298 INFO [NodeServer.execute] - Started Selenium node 4.0.0-alpha-6 (revision 5f43a29cfc): http://172.28.0.5:5555
chrome | 07:36:22.302 INFO [NodeServer.execute] - Starting registration process for node id ff0154a7-ed4b-438a-887c-0a7f3a988cb4
selenium-hub | 07:36:22.355 INFO [LocalDistributor.refresh] - Creating a new remote node for http://172.28.0.5:5555
selenium-hub | 07:36:22.763 INFO [LocalDistributor.add] - Added node ff0154a7-ed4b-438a-887c-0a7f3a988cb4.
selenium-hub | 07:36:22.770 INFO [Host.lambda$new$0] - Changing status of node ff0154a7-ed4b-438a-887c-0a7f3a988cb4 from DOWN to UP. Reason: http://172.28.0.5:5555 is ok
chrome | 07:36:22.774 INFO [NodeServer.lambda$execute$0] - Node has been added
Then I have the following method, used by every test:
<?php

namespace Tests\AppBundle\Controller;

use Behat\Mink\Driver\Selenium2Driver;
use Behat\Mink\Mink;
use Behat\Mink\Session;
use Symfony\Bundle\FrameworkBundle\Client;
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

abstract class BaseControllerTest extends WebTestCase
{
    /**
     * @var Client
     */
    protected $client;

    /**
     * @var Session
     */
    protected $session;

    public function visitUri($uri)
    {
        $this->client = static::createClient();
        $pass = $this->client->getKernel()->getContainer()->getParameter('http_basic_auth_pass');
        $host = 'localhost'; // I've tried several things here (like 172.28.0.5:5555)
        $driver = new Selenium2Driver('chrome');
        $mink = new Mink(array(
            'chrome' => new Session($driver)
        ));
        $driver->setTimeouts(['page load' => 900000]);
        $mink->setDefaultSessionName('chrome');
        $this->session = $mink->getSession();
        $this->session->visit('http://user:' . $pass . '@' . $host . $uri);
    }
}
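One detail worth noting: unless told otherwise, Selenium2Driver talks to a WebDriver host of http://localhost:4444/wd/hub, and inside the apache container localhost is not the hub. A sketch of constructing the driver with an explicit hub URL instead (the service name selenium-hub comes from the compose file; the third constructor argument is the WebDriver host URL — verify the signature against your Behat/Mink version):

```php
<?php
use Behat\Mink\Driver\Selenium2Driver;

// Point the driver at the hub by its compose service name instead of
// relying on the default http://localhost:4444/wd/hub.
$driver = new Selenium2Driver(
    'chrome',                          // browser name
    null,                              // desired capabilities (use defaults)
    'http://selenium-hub:4444/wd/hub'  // WebDriver host on the shared network
);
```

This only applies if the PHPUnit process itself runs inside a container on the servicesnet network; when the tests run on the host, localhost:4444 is correct because the hub's port is published there.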
And I call this method from a specific test:
public function testClickOnSearch()
{
    $this->visitUri('/mi-custom-uri');
    $page = $this->session->getPage();
    $this->session->wait(
        200000,
        "typeof jQuery !== 'undefined'"
    );
    $page->findButton('Button text')->click();
    $this->assertContains('my-custom-uri-2', $this->session->getCurrentUrl());
}
but the session never gets started. If I go to http://localhost:4444/wd/hub/session/url, I see this error message:
"org.openqa.selenium.NoSuchSessionException: Unable to find session with ID: url\nBuild info: version: '4.0.0-alpha-6', revision: '5f43a29cfc'\nSystem info: host: 'fca78c7f81e6', ip: '172.28.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.0-42-generic', java.version: '1.8.0_252'\nDriver info: driver.version: unknown"
And when executing the test, this error is thrown after 200 seconds:
PHP Fatal error: Call to a member function click() on null
I'm sure something is missing, but I don't know what. Any ideas?
This error message...
org.openqa.selenium.NoSuchSessionException: Unable to find session with ID: url\n
Build info: version: '4.0.0-alpha-6', revision: '5f43a29cfc'\n
System info: host: 'fca78c7f81e6', ip: '172.28.0.3', os.name: 'Linux', os.arch: 'amd64', os.version: '5.4.0-42-generic', java.version: '1.8.0_252'\n
Driver info: driver.version: unknown
...implies that ChromeDriver was unable to initiate/spawn a new browsing context, i.e. a Chrome browser session, which is reflected in the logs as:
Driver info: driver.version: unknown
Hence, moving forward, you see the error:
PHP Fatal error: Call to a member function click() on null
and the most probable cause is an incompatibility between the versions of the binaries you are using.
Solution
Ensure that:
ChromeDriver is updated to the current ChromeDriver v84.0 level.
Chrome is updated to the current Chrome version 84.0 level (as per the ChromeDriver v84.0 release notes).
If your base web client version is too old, uninstall it and install a recent GA released version of the web client.
Reboot the system.
Always invoke driver.quit() within the tearDown(){} method to close and destroy the WebDriver and web client instances gracefully.
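In the PHPUnit-based setup from the question, that last point could look like the following sketch (it assumes $this->session was set by visitUri(); Mink's Session::stop() ends the remote browser session):

```php
protected function tearDown()
{
    // Stop the Mink session so the remote Chrome session on the grid
    // is destroyed gracefully instead of lingering until it times out.
    if ($this->session !== null) {
        $this->session->stop();
        $this->session = null;
    }
    parent::tearDown();
}
```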
References
You can find a couple of relevant detailed discussions in:
org.openqa.selenium.NoSuchSessionException: no such session error in Selenium automation tests using ChromeDriver Chrome with Java
Since you are running Selenium inside a Docker container, you can try setting the environment variable SE_NODE_SESSION_TIMEOUT=999999.
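For example, as an addition to the chrome service in the compose file from the question (an illustrative fragment; whether the node image honours SE_NODE_SESSION_TIMEOUT depends on the image version, so check the documentation for the image you use):

```yaml
chrome:
  image: selenium/node-chrome:4.0.0-alpha-6-20200730
  environment:
    - HUB_HOST=selenium-hub
    - SE_NODE_SESSION_TIMEOUT=999999  # seconds before an idle session is reaped
```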
I'm starting out with Rancher and want to install it on an Amazon VPS. I followed the instructions in Rancher's documentation, but when I run this command to launch the Rancher container:
docker run --restart=unless-stopped -p 8080:8080 rancher/server --db-host 172.26.3.141 --db-port 3306 --db-user rancher --db-pass xxxx --db-name rancher
it hangs at this level:
time="2019-06-09T10:14:17Z" level=info msg="Done downloading all drivers" service=gms
It does not display an error message, but it does not advance either. Does anyone have an idea? Thank you!
I'm using Amazon EC2 with Debian 9 and docker-ce 18.6.
These are the logs:
CATTLE_AGENT_PACKAGE_HOST_API_URL=/usr/share/cattle/artifacts/host-api.tar.gz
CATTLE_AGENT_PACKAGE_PER_HOST_SUBNET_URL=/usr/share/cattle/artifacts/rancher-per-host-subnet.
CATTLE_AGENT_PACKAGE_PYTHON_AGENT_URL=/usr/share/cattle/artifacts/go-agent.tar.gz
CATTLE_AGENT_PACKAGE_WINDOWS_AGENT_URL=/usr/share/cattle/artifacts/go-agent.zip
CATTLE_API_UI_URL=//releases.rancher.com/api-ui/1.0.8
CATTLE_CATTLE_VERSION=v0.183.79
CATTLE_DB_CATTLE_DATABASE=mysql
CATTLE_DB_CATTLE_MYSQL_HOST=172.31.5.211
CATTLE_DB_CATTLE_MYSQL_NAME=rancher
CATTLE_DB_CATTLE_MYSQL_PORT=3306
CATTLE_DB_CATTLE_USERNAME=rancher
CATTLE_GRAPHITE_HOST=
CATTLE_GRAPHITE_PORT=
CATTLE_HOME=/var/lib/cattle
CATTLE_HOST_API_PROXY_MODE=embedded
CATTLE_LOGBACK_OUTPUT_GELF_HOST=
CATTLE_LOGBACK_OUTPUT_GELF_PORT=
CATTLE_RANCHER_CLI_VERSION=v0.6.13
CATTLE_RANCHER_COMPOSE_VERSION=v0.12.5
CATTLE_RANCHER_SERVER_IMAGE=rancher/server
CATTLE_RANCHER_SERVER_VERSION=v1.6.28
CATTLE_USE_LOCAL_ARTIFACTS=true
DEFAULT_CATTLE_API_UI_CSS_URL=/api-ui/ui.min.css
DEFAULT_CATTLE_API_UI_INDEX=//releases.rancher.com/ui/1.6.50
DEFAULT_CATTLE_API_UI_JS_URL=/api-ui/ui.min.js
DEFAULT_CATTLE_AUTH_SERVICE_EXECUTE=true
DEFAULT_CATTLE_CATALOG_EXECUTE=true
DEFAULT_CATTLE_CATALOG_URL={"catalogs":{"community":{"url":"https://git.rancher.io/community-er"},"library":{"url":"https://git.rancher.io/rancher-catalog.git","branch":"${RELEASE}"}}}
DEFAULT_CATTLE_COMPOSE_EXECUTOR_EXECUTE=true
DEFAULT_CATTLE_MACHINE_EXECUTE=true
DEFAULT_CATTLE_RANCHER_CLI_DARWIN_URL=https://releases.rancher.com/cli/v0.6.13/rancher-darwin
DEFAULT_CATTLE_RANCHER_CLI_LINUX_URL=https://releases.rancher.com/cli/v0.6.13/rancher-linux-a
DEFAULT_CATTLE_RANCHER_CLI_WINDOWS_URL=https://releases.rancher.com/cli/v0.6.13/rancher-windo
DEFAULT_CATTLE_RANCHER_COMPOSE_DARWIN_URL=https://releases.rancher.com/compose/v0.12.5/ranche2.5.tar.gz
DEFAULT_CATTLE_RANCHER_COMPOSE_LINUX_URL=https://releases.rancher.com/compose/v0.12.5/rancher5.tar.gz
DEFAULT_CATTLE_RANCHER_COMPOSE_WINDOWS_URL=https://releases.rancher.com/compose/v0.12.5/ranch2.5.zip
DEFAULT_CATTLE_SECRETS_API_EXECUTE=true
DEFAULT_CATTLE_WEBHOOK_SERVICE_EXECUTE=true
10:13:23.794 [main] INFO ConsoleStatus - Loading configuration
2019-06-09 10:13:29,181 INFO [main] [ConsoleStatus] Starting DB migration
2019-06-09 10:13:31,523 INFO [main] [ConsoleStatus] DB migration done
2019-06-09 10:13:32,021 INFO [main] [ConsoleStatus] Cluster membership changed [127.0.0.1:
2019-06-09 10:13:32,022 INFO [main] [ConsoleStatus] Checking cluster state on start-up
2019-06-09 10:13:32,023 INFO [main] [ConsoleStatus] Waiting to become master
2019-06-09 10:13:53,813 INFO [main] [ConsoleStatus] Loading processes
2019-06-09 10:13:54,294 INFO [main] [ConsoleStatus] Starting [1/94]: LockDelegatorImpl
2019-06-09 10:13:54,295 INFO [main] [ConsoleStatus] Starting [2/94]: AnnotatedListenerRegi
2019-06-09 10:13:54,302 INFO [main] [ConsoleStatus] Starting [3/94]: EventService
2019-06-09 10:13:54,302 INFO [main] [ConsoleStatus] Starting [4/94]: DefaultObjectMetaData
2019-06-09 10:13:55,281 INFO [main] [ConsoleStatus] Starting [5/94]: JsonDefaultsProvider
2019-06-09 10:13:55,327 INFO [main] [ConsoleStatus] Starting [6/94]: ObjectDefaultsPostIns
2019-06-09 10:13:55,327 INFO [main] [ConsoleStatus] Starting [7/94]: DefaultProcessManager
2019-06-09 10:13:55,328 INFO [main] [ConsoleStatus] Starting [8/94]: SampleDataStartupV3
2019-06-09 10:13:55,334 INFO [main] [ConsoleStatus] Starting [9/94]: TaskManagerImpl
2019-06-09 10:13:55,358 INFO [main] [ConsoleStatus] Starting [10/94]: ServiceAccountCreate
2019-06-09 10:13:55,373 INFO [main] [ConsoleStatus] Starting [11/94]: WebsocketProxyLaunch
2019-06-09 10:13:55,373 INFO [main] [ConsoleStatus] Starting [12/94]: SampleDataStartupV15
2019-06-09 10:13:55,374 INFO [main] [ConsoleStatus] Starting [13/94]: SchemaFactory:v1-adm
2019-06-09 10:13:55,750 INFO [main] [ConsoleStatus] Starting [14/94]: SchemaFactory:v1-age
2019-06-09 10:13:55,769 INFO [main] [ConsoleStatus] Starting [15/94]: SchemaFactory:v1-age
2019-06-09 10:13:55,777 INFO [main] [ConsoleStatus] Starting [16/94]: SchemaFactory:v1-bas
2019-06-09 10:13:56,237 INFO [main] [ConsoleStatus] Starting [17/94]: SchemaFactory:v1-mem
2019-06-09 10:13:56,469 INFO [main] [ConsoleStatus] Starting [18/94]: SchemaFactory:v1-own
2019-06-09 10:13:56,695 INFO [main] [ConsoleStatus] Starting [19/94]: SchemaFactory:v1-pro
2019-06-09 10:13:56,943 INFO [main] [ConsoleStatus] Starting [20/94]: SchemaFactory:v1-pro
2019-06-09 10:13:57,169 INFO [main] [ConsoleStatus] Starting [21/94]: SchemaFactory:v1-rea
time="2019-06-09T10:13:57Z" level=info msg="Downloading key from http://localhost:8081/v1/scr
time="2019-06-09T10:13:57Z" level=fatal msg="Error getting config." error="Invalid key conten
2019-06-09 10:13:57,474 INFO [main] [ConsoleStatus] Starting [22/94]: SchemaFactory:v1-rea
2019-06-09 10:13:57,685 INFO [main] [ConsoleStatus] Starting [23/94]: SchemaFactory:v1-reg
2019-06-09 10:13:57,689 INFO [main] [ConsoleStatus] Starting [24/94]: SchemaFactory:v1-res
2019-06-09 10:13:57,910 INFO [main] [ConsoleStatus] Starting [25/94]: SchemaFactory:v1-ser
2019-06-09 10:13:58,189 INFO [main] [ConsoleStatus] Starting [26/94]: SchemaFactory:v1-tok
2019-06-09 10:13:58,193 INFO [main] [ConsoleStatus] Starting [27/94]: SchemaFactory:v1-use
2019-06-09 10:13:58,419 INFO [main] [ConsoleStatus] Starting [28/94]: AgentBasedProcessHan
2019-06-09 10:13:58,419 INFO [main] [ConsoleStatus] Starting [29/94]: AgentHostStateUpdate
2019-06-09 10:13:58,420 INFO [main] [ConsoleStatus] Starting [30/94]: AgentManager
2019-06-09 10:13:58,420 INFO [main] [ConsoleStatus] Starting [31/94]: AuthServiceLauncher
2019-06-09 10:13:58,421 INFO [main] [ConsoleStatus] Starting [32/94]: BackupCreate
2019-06-09 10:13:58,423 INFO [main] [ConsoleStatus] Starting [33/94]: BackupRemove
2019-06-09 10:13:58,423 INFO [main] [ConsoleStatus] Starting [34/94]: CatalogLauncher
2019-06-09 10:13:58,423 INFO [main] [ConsoleStatus] Starting [35/94]: ComposeExecutorLaunc
2019-06-09 10:13:58,423 INFO [main] [ConsoleStatus] Starting [36/94]: ConfigItemRegistryIm
2019-06-09 10:13:58,568 INFO [main] [ConsoleStatus] Starting [37/94]: ConfigItemServerImpl
2019-06-09 10:13:58,791 INFO [main] [ConsoleStatus] Starting [38/94]: ConfigUpdatePublishe
2019-06-09 10:13:58,792 INFO [main] [ConsoleStatus] Starting [39/94]: DataManager
2019-06-09 10:13:58,792 INFO [main] [ConsoleStatus] Starting [40/94]: DefaultAuthorization
time="2019-06-09T10:13:59Z" level=info msg="Downloading key from http://localhost:8081/v1/scr
time="2019-06-09T10:13:59Z" level=fatal msg="Error getting config." error="Invalid key conten
2019-06-09 10:14:00,070 INFO [main] [ConsoleStatus] Starting [41/94]: DefaultJooqResourceM
2019-06-09 10:14:00,070 INFO [main] [ConsoleStatus] Starting [42/94]: DefaultObjectSeriali
2019-06-09 10:14:00,070 INFO [main] [ConsoleStatus] Starting [43/94]: DockerComposeStackHa
2019-06-09 10:14:00,070 INFO [main] [ConsoleStatus] Starting [44/94]: DynamicExtensionMana
2019-06-09 10:14:00,070 INFO [main] [ConsoleStatus] Starting [45/94]: ExtensionResourceMan
2019-06-09 10:14:00,071 INFO [main] [ConsoleStatus] Starting [46/94]: HostApiRSAKeyProvide
2019-06-09 10:14:00,071 INFO [main] [ConsoleStatus] Starting [47/94]: HostTemplateManager
2019-06-09 10:14:00,071 INFO [main] [ConsoleStatus] Starting [48/94]: ImageStoragePoolMapA
2019-06-09 10:14:00,072 INFO [main] [ConsoleStatus] Starting [49/94]: InstanceHostMapActiv
2019-06-09 10:14:00,073 INFO [main] [ConsoleStatus] Starting [50/94]: InstanceHostMapDeact
2019-06-09 10:14:00,074 INFO [main] [ConsoleStatus] Starting [51/94]: InstanceManager
2019-06-09 10:14:00,074 INFO [main] [ConsoleStatus] Starting [52/94]: IpsecHealthcheckEnab
2019-06-09 10:14:00,075 INFO [main] [ConsoleStatus] Starting [53/94]: LoadBalancerServiceI
2019-06-09 10:14:00,075 INFO [main] [ConsoleStatus] Starting [54/94]: MachineDriverLoader
2019-06-09 10:14:00,077 INFO [main] [ConsoleStatus] Starting [55/94]: MachineLauncher
2019-06-09 10:14:00,077 INFO [main] [ConsoleStatus] Starting [56/94]: MetadataConfigItemFa
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [57/94]: MountRemove
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [58/94]: PostInstancePurge
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [59/94]: PostStartLabelsProvi
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [60/94]: ProjectMemberResourc
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [61/94]: ProjectResourceManag
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [62/94]: PullTaskCreate
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [63/94]: RestoreFromBackup
2019-06-09 10:14:00,080 INFO [main] [ConsoleStatus] Starting [64/94]: RevertToSnapshot
2019-06-09 10:14:00,081 INFO [main] [ConsoleStatus] Starting [65/94]: SampleDataStartupV10
2019-06-09 10:14:00,085 INFO [main] [ConsoleStatus] Starting [66/94]: SampleDataStartupV11
2019-06-09 10:14:00,086 INFO [main] [ConsoleStatus] Starting [67/94]: SampleDataStartupV12
2019-06-09 10:14:00,087 INFO [main] [ConsoleStatus] Starting [68/94]: SampleDataStartupV13
2019-06-09 10:14:00,088 INFO [main] [ConsoleStatus] Starting [69/94]: SampleDataStartupV14
2019-06-09 10:14:00,089 INFO [main] [ConsoleStatus] Starting [70/94]: SampleDataStartupV16
2019-06-09 10:14:00,090 INFO [main] [ConsoleStatus] Starting [71/94]: SampleDataStartupV17
2019-06-09 10:14:00,091 INFO [main] [ConsoleStatus] Starting [72/94]: SampleDataStartupV5
2019-06-09 10:14:00,092 INFO [main] [ConsoleStatus] Starting [73/94]: SampleDataStartupV6
2019-06-09 10:14:00,093 INFO [main] [ConsoleStatus] Starting [74/94]: SampleDataStartupV7
2019-06-09 10:14:00,094 INFO [main] [ConsoleStatus] Starting [75/94]: SampleDataStartupV8
2019-06-09 10:14:00,094 INFO [main] [ConsoleStatus] Starting [76/94]: SampleDataStartupV9
2019-06-09 10:14:00,095 INFO [main] [ConsoleStatus] Starting [77/94]: SecretManager
2019-06-09 10:14:00,096 INFO [main] [ConsoleStatus] Starting [78/94]: SecretsApiLauncher
2019-06-09 10:14:00,096 INFO [main] [ConsoleStatus] Starting [79/94]: ServiceManager
2019-06-09 10:14:00,096 INFO [main] [ConsoleStatus] Starting [80/94]: SettingManager
2019-06-09 10:14:00,096 INFO [main] [ConsoleStatus] Starting [81/94]: SnapshotCreate
2019-06-09 10:14:00,097 INFO [main] [ConsoleStatus] Starting [82/94]: SnapshotRemove
2019-06-09 10:14:00,097 INFO [main] [ConsoleStatus] Starting [83/94]: StackAgentHandler
2019-06-09 10:14:00,097 INFO [main] [ConsoleStatus] Starting [84/94]: StackAgentHandler
2019-06-09 10:14:00,098 INFO [main] [ConsoleStatus] Starting [85/94]: StackAgentHandler
2019-06-09 10:14:00,098 INFO [main] [ConsoleStatus] Starting [86/94]: StackAgentHandler
2019-06-09 10:14:00,098 INFO [main] [ConsoleStatus] Starting [87/94]: StackAgentHandler
2019-06-09 10:14:00,099 INFO [main] [ConsoleStatus] Starting [88/94]: TelemetryLauncher
2019-06-09 10:14:00,099 INFO [main] [ConsoleStatus] Starting [89/94]: VolumeManager
2019-06-09 10:14:00,099 INFO [main] [ConsoleStatus] Starting [90/94]: VolumeRemove
2019-06-09 10:14:00,100 INFO [main] [ConsoleStatus] Starting [91/94]: VolumeStoragePoolMap
2019-06-09 10:14:00,100 INFO [main] [ConsoleStatus] Starting [92/94]: VolumeStoragePoolMap
2019-06-09 10:14:00,100 INFO [main] [ConsoleStatus] Starting [93/94]: WebhookServiceLaunch
2019-06-09 10:14:00,101 INFO [main] [ConsoleStatus] Starting [94/94]: project.template.rel
2019-06-09 10:14:00,198 INFO [main] [ConsoleStatus] [DONE ] [40330ms] Startup Succeeded, L
time="2019-06-09T10:14:00Z" level=info msg="Starting rancher-compose-executor" version=v0.14.
time="2019-06-09T10:14:00Z" level=fatal msg="Unable to create event router" error="Get http:/al tcp 127.0.0.1:8080: getsockopt: connection refused"
time="2019-06-09T10:14:00Z" level=warning msg="Couldn't load install uuid: Get http://localho27.0.0.1:8080: getsockopt: connection refused. Sleep 250ms and retry"
time="2019-06-09T10:14:00Z" level=fatal msg="Failed to configure cattle client: Get http://lotcp 127.0.0.1:8080: connect: connection refused"
time="2019-06-09T10:14:00Z" level=warning msg="Couldn't load install uuid: Get http://localho27.0.0.1:8080: getsockopt: connection refused. Sleep 500ms and retry"
time="2019-06-09T10:14:01Z" level=warning msg="Couldn't load install uuid: Get http://localho27.0.0.1:8080: getsockopt: connection refused. Sleep 1s and retry"
time="2019-06-09T10:14:01Z" level=info msg="Downloading key from http://localhost:8081/v1/scr
time="2019-06-09T10:14:01Z" level=info msg="Starting websocket proxy. Listening on [:8080], Pocalhost:8081], Monitoring parent pid [10]."
time="2019-06-09T10:14:01Z" level=info msg="Configured http API filter"
time="2019-06-09T10:14:01Z" level=info msg="Configured authTokenValidator API filter"
time="2019-06-09T10:14:01Z" level=info msg="Master config file: master.conf"
time="2019-06-09T10:14:01Z" level=info msg="Downloading certificate from http://localhost:808icate"
time="2019-06-09T10:14:02Z" level=info msg="Starting go-machine-service..." gitcommit=v0.39.4
time="2019-06-09T10:14:02Z" level=info msg="Waiting for handler registration (1/2)" service=g
time="2019-06-09T10:14:02Z" level=info msg="Webhook service listening on 8085"
time="2019-06-09T10:14:02Z" level=info msg="Starting rancher-compose-executor" version=v0.14.
time="2019-06-09T10:14:02Z" level=info msg="Fetch uuid 30298836-8939-4754-986a-38c399eaf4f1 s
time="2019-06-09T10:14:02Z" level=info msg="Starting Catalog Service (port 8088, refresh inte
time="2019-06-09T10:14:02Z" level=info msg="Starting Rancher Auth service"
time="2019-06-09T10:14:03Z" level=info msg="No Auth provider configured"
time="2019-06-09T10:14:03Z" level=info msg="Initializing event router" workerCount=5
time="2019-06-09T10:14:04Z" level=info msg="Listening on :8090"
time="2019-06-09T10:14:04Z" level=info msg="Connection established"
time="2019-06-09T10:14:04Z" level=info msg="Starting websocket pings"
time="2019-06-09T10:14:04Z" level=info msg="Waiting for handler registration (2/2)" service=g
time="2019-06-09T10:14:04Z" level=info msg="Initializing event router" workerCount=250
time="2019-06-09T10:14:04Z" level=info msg="Connection established"
time="2019-06-09T10:14:04Z" level=info msg="Starting websocket pings"
time="2019-06-09T10:14:04Z" level=info msg="Installing builtin drivers" service=gms
time="2019-06-09T10:14:04Z" level=info msg="Initializing event router" workerCount=250
time="2019-06-09T10:14:04Z" level=info msg="Connection established"
time="2019-06-09T10:14:04Z" level=info msg="Starting websocket pings"
time="2019-06-09T10:14:07Z" level=info msg="Waiting for machinedriver.activate event" service
time="2019-06-09T10:14:10Z" level=info msg="Waiting for machinedriver.activate event" service
time="2019-06-09T10:14:10Z" level=info msg="Initializing event router" workerCount=250
time="2019-06-09T10:14:10Z" level=info msg="Connection established"
time="2019-06-09T10:14:10Z" level=info msg="Starting websocket pings"
time="2019-06-09T10:14:13Z" level=info msg="Waiting for machinedriver.activate event" service
time="2019-06-09T10:14:16Z" level=info msg="machinedriver.activate event detected" service=gm
time="2019-06-09T10:14:16Z" level=info msg="Downloading all drivers" service=gms
time="2019-06-09T10:14:16Z" level=info msg="Download https://github.com/packethost/docker-mac/download/v0.1.2/docker-machine-driver-packet_linux-amd64.zip" service=gms
time="2019-06-09T10:14:17Z" level=info msg="Found driver docker-machine-driver-packet" servic
time="2019-06-09T10:14:17Z" level=info msg="Copying /var/lib/cattle/machine-drivers/1f70583418e9ca5f57d5a650a049bcd5945e9-docker-machine-driver-packet => /usr/local/bin/docker-machine-drs
time="2019-06-09T10:14:17Z" level=info msg="Done downloading all drivers" service=gms
For an unknown reason,
CATTLE_DB_CATTLE_MYSQL_HOST=172.31.5.211
is different from the --db-host value you passed:
docker run --restart=unless-stopped -p 8080:8080 rancher/server --db-host 172.26.3.141 --db-port 3306 --db-user rancher --db-pass xxxx --db-name rancher
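A quick way to confirm whether the container actually picked up the host you passed is to compare the flag with what the startup log reports (a sketch; `<container-id>` is a placeholder for whatever docker ps shows):

```shell
# Start the server with an explicit database host...
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \
    --db-host 172.26.3.141 --db-port 3306 \
    --db-user rancher --db-pass xxxx --db-name rancher

# ...then check in the startup log that the reported MySQL host
# matches the --db-host value you passed:
docker logs <container-id> 2>&1 | grep CATTLE_DB_CATTLE_MYSQL_HOST
```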
I got "ERR_CONNECTION_TIMED_OUT" when running Docker for Windows (Docker version 18.06.1-ce, build e68fc7a), using Linux containers (default installation wizard settings).
docker inspect containerID gives: "IPAddress": "172.17.0.2",
Network settings:
"NetworkSettings": {
"Bridge": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8080/tcp": null
State:
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 2918,
"ExitCode": 0,
"Error": "",
I connect to the dockerized Tomcat with: http://172.17.0.2:8080
Error message: ERR_CONNECTION_TIMED_OUT
There are no proxies in the Docker settings.
Docker Logs containerID:
30-Sep-2018 19:44:54.540 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/8.5.34
30-Sep-2018 19:44:54.541 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Sep 4 2018 22:28:22 UTC
30-Sep-2018 19:44:54.541 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 8.5.34.0
30-Sep-2018 19:44:54.542 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
30-Sep-2018 19:44:54.542 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 4.9.93-linuxkit-aufs
30-Sep-2018 19:44:54.542 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
30-Sep-2018 19:44:54.542 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-8-openjdk-amd64/jre
30-Sep-2018 19:44:54.542 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_181-8u181-b13-1~deb9u1-b13
30-Sep-2018 19:44:54.543 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
30-Sep-2018 19:44:54.543 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/local/tomcat
30-Sep-2018 19:44:54.543 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/local/tomcat
30-Sep-2018 19:44:54.543 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
30-Sep-2018 19:44:54.543 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
30-Sep-2018 19:44:54.544 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
30-Sep-2018 19:44:54.544 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
30-Sep-2018 19:44:54.544 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
30-Sep-2018 19:44:54.544 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
30-Sep-2018 19:44:54.544 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
30-Sep-2018 19:44:54.545 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
30-Sep-2018 19:44:54.545 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
30-Sep-2018 19:44:54.545 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library [1.2.17] using APR version [1.5.2].
30-Sep-2018 19:44:54.545 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
30-Sep-2018 19:44:54.546 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
30-Sep-2018 19:44:54.548 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.0f 25 May 2017]
30-Sep-2018 19:44:54.609 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
30-Sep-2018 19:44:54.614 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
30-Sep-2018 19:44:54.618 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
30-Sep-2018 19:44:54.619 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
30-Sep-2018 19:44:54.619 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 320 ms
30-Sep-2018 19:44:54.630 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
30-Sep-2018 19:44:54.631 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/8.5.34
30-Sep-2018 19:44:54.652 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
30-Sep-2018 19:44:54.835 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [182] ms
30-Sep-2018 19:44:54.835 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/ROOT]
30-Sep-2018 19:44:54.843 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/ROOT] has finished in [8] ms
30-Sep-2018 19:44:54.844 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
30-Sep-2018 19:44:54.854 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [10] ms
30-Sep-2018 19:44:54.855 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/docs]
30-Sep-2018 19:44:54.862 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [8] ms
30-Sep-2018 19:44:54.862 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
30-Sep-2018 19:44:54.965 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [103] ms
30-Sep-2018 19:44:54.966 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]30-Sep-2018 19:44:54.973 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
30-Sep-2018 19:44:54.974 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 355 ms
It clearly shows that the port mapping was not done properly:
"Ports": {
"8080/tcp": null
}
The null means that port 8080 inside the container has not been mapped to a corresponding port on the host.
When the port is not mapped, there is no way to reach it by URL, and you will get ERR_CONNECTION_TIMED_OUT.
Since it's not clear how you ran your container, I can only suggest a simple and proper way of doing the port mapping:
docker run -it --rm -p 8080:8080 tomcat:latest
Here the host port 8080 (left side) is mapped to Tomcat's port 8080 inside the container (right side), so you can then access it at localhost:8080.
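If you are running the container through docker-compose instead of docker run, the equivalent mapping would look roughly like this (service name and image tag are assumptions):

# docker-compose.yml (sketch)
version: '3.5'
services:
  tomcat:
    image: tomcat:latest
    ports:
      - "8080:8080"   # host:container - publishes Tomcat's 8080 on the host

After docker-compose up, docker inspect should then show "8080/tcp" bound to a host port instead of null.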
I'm trying to use Jetty's server push feature with HAProxy. I've set up Jetty 9.4.7 with PushCacheFilter and HAProxy in two Docker containers.
I think Jetty tries to push something, but no PUSH_PROMISE frames are delivered to the client (I've checked Chrome's net-internals tab).
I'm not sure whether this is an issue with Jetty (maybe with h2c)!
Here's my HAProxy config (taken from Jetty's documentation):
global
tune.ssl.default-dh-param 1024
defaults
timeout connect 10000ms
timeout client 60000ms
timeout server 60000ms
frontend fe_http
mode http
bind *:80
# Redirect to https
redirect scheme https code 301
frontend fe_https
mode tcp
bind *:443 ssl no-sslv3 crt /usr/local/etc/domain.pem ciphers TLSv1.2 alpn h2,http/1.1
default_backend be_http
backend be_http
mode tcp
server domain basexhttp:8984
and here's how jetty starts:
[main] INFO org.eclipse.jetty.util.log - Logging initialized #377ms to org.eclipse.jetty.util.log.Slf4jLog
BaseX 9.0 beta 5cc42ae [HTTP Server]
[main] INFO org.eclipse.jetty.server.Server - jetty-9.4.7.v20170914
[main] INFO org.eclipse.jetty.webapp.StandardDescriptorProcessor - NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
[main] INFO org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
[main] INFO org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
[main] INFO org.eclipse.jetty.server.session - Scavenging every 600000ms
[main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.w.WebAppContext#7dc222ae{/,file:///opt/basex/webapp/,AVAILABLE}{/opt/basex/webapp}
[main] INFO org.eclipse.jetty.server.AbstractConnector - Started ServerConnector#3439f68d{h2c,[h2c, http/1.1]}{0.0.0.0:8984}
[main] INFO org.eclipse.jetty.server.Server - Started #784ms
HTTP Server was started (port: 8984).
HTTP Stop Server was started (port: 8985).
And here's the simple docker-compose.yml
version: '3.3'
services:
basexhttp:
container_name: pushcachefilter-basexhttp
build: pushcachefilter-basexhttp/
image: "pushcachefilter/basexhttp"
volumes:
- "${HOME}/data:/opt/basex/data"
- "${HOME}/base/app-web/webapp:/opt/basex/webapp"
networks:
- web
haproxy:
container_name: haproxy_container
build: ha-proxy/
image: "my_haproxy"
depends_on:
- basexhttp
ports:
- 80:80
- 443:443
networks:
- web
networks:
web:
driver: overlay
Please note that I have to configure Jetty using my own jetty.xml; the way shown in the documentation does not work for me.
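For reference, my PushCacheFilter registration in WEB-INF/web.xml looks roughly like this (a sketch; the "ports" init-param value of 443 is my assumption for running behind HAProxy):

<!-- WEB-INF/web.xml (sketch) -->
<filter>
  <filter-name>PushFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.PushCacheFilter</filter-class>
  <!-- behind a TLS-terminating proxy, the externally visible port
       must be listed so pushed resources get associated (assumption) -->
  <init-param>
    <param-name>ports</param-name>
    <param-value>443</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>PushFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>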
Thanks in advance,
Bodo
I've been trying to set up the Consul server and connect an agent to it for 2 or 3 days already. I'm using docker-compose.
But after performing a join operation, the agent gets the message "Agent not live or unreachable".
Here are the logs:
root#e33a6127103f:/app# consul agent -join 10.1.30.91 -data-dir=/tmp/consul
==> Starting Consul agent...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Version: 'v1.0.1'
Node ID: '0e1adf74-462d-45a4-1927-95ed123f1526'
Node name: 'e33a6127103f'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: e33a6127103f 172.17.0.2
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2017/12/06 10:44:43 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2017/12/06 10:44:43 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2017/12/06 10:44:43 [INFO] agent: (LAN) joining: [10.1.30.91]
2017/12/06 10:44:43 [INFO] serf: EventMemberJoin: consul1 172.19.0.2
2017/12/06 10:44:43 [INFO] consul: adding server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:44:43 [INFO] agent: (LAN) joined: 1 Err: <nil>
2017/12/06 10:44:43 [INFO] agent: started state syncer
2017/12/06 10:44:43 [WARN] manager: No servers available
2017/12/06 10:44:43 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:44:54 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:44:55 [ERR] consul: "Catalog.NodeServices" RPC failed to server 172.19.0.2:8300: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:55 [ERR] agent: failed to sync remote state: rpc error getting client: failed to get conn: dial tcp <nil>->172.19.0.2:8300: i/o timeout
2017/12/06 10:44:58 [INFO] memberlist: Marking consul1 as failed, suspect timeout reached (0 peer confirmations)
2017/12/06 10:44:58 [INFO] serf: EventMemberFailed: consul1 172.19.0.2
2017/12/06 10:44:58 [INFO] consul: removing server consul1 (Addr: tcp/172.19.0.2:8300) (DC: dc1)
2017/12/06 10:45:05 [INFO] memberlist: Suspect consul1 has failed, no acks received
2017/12/06 10:45:06 [WARN] manager: No servers available
2017/12/06 10:45:06 [ERR] agent: Coordinate update error: No known Consul servers
2017/12/06 10:45:12 [WARN] manager: No servers available
2017/12/06 10:45:12 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:13 [INFO] serf: attempting reconnect to consul1 172.19.0.2:8301
2017/12/06 10:45:28 [WARN] manager: No servers available
2017/12/06 10:45:28 [ERR] agent: failed to sync remote state: No known Consul servers
2017/12/06 10:45:32 [WARN] manager: No servers available
My settings are:
docker-compose SERVER:
consul1:
image: "consul.1.0.1"
container_name: "consul1"
hostname: "consul1"
volumes:
- ./consul/config:/config/
ports:
- "8400:8400"
- "8500:8500"
- "8600:53"
- "8300:8300"
- "8301:8301"
command: "agent -config-dir=/config -ui -server -bootstrap-expect 1"
Please help me solve this problem.
I think you are using the wrong IP address of the Consul server:
"consul agent -join 10.1.30.91 -data-dir=/tmp/consul"
10.1.30.91 is not the Docker container's IP; it is probably your host/VirtualBox address.
Get the Consul container's IP and use that in the consul agent join command.
For more info about how Consul and its agents work, follow this link:
https://dzone.com/articles/service-discovery-with-docker-and-consul-part-1
Try to get the right IP address by executing this command:
docker inspect <container id> | grep "IPAddress"
where <container id> is the container ID of the Consul server.
Then use the obtained address instead of "10.1.30.91" in the command:
consul agent -join <IP ADDRESS CONSUL SERVER> -data-dir=/tmp/consul
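Alternatively, if both containers are started with docker-compose, you can put them on the same Compose network and join by service name instead of a hard-coded IP. A sketch (service names and the agent command line are assumptions; -client 0.0.0.0 is added because your log shows the HTTP/DNS endpoints bound to 127.0.0.1 only):

# docker-compose.yml (sketch) - server and agent on one network
version: '3'
services:
  consul1:
    image: "consul.1.0.1"
    command: "agent -config-dir=/config -ui -server -bootstrap-expect 1 -client 0.0.0.0"
  consul-agent:
    image: "consul.1.0.1"
    depends_on:
      - consul1
    # Compose's default network resolves "consul1" to the server container,
    # so both ends see each other on the same subnet
    command: "agent -join consul1 -data-dir=/tmp/consul"

This also avoids the situation visible in your log, where the agent (172.17.0.2) and the server (172.19.0.2) sit on different Docker networks and the RPC dial times out.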