CircleCI - how to connect to localhost

In CircleCI I run an app that I would like to run the tests against:
test:
  pre:
    # run app
    - ./gradlew bootRun -Dgrails.env=dev:
        background: true
    - sleep 40
  override:
    - ./gradlew test
On localhost the app is accessible on http://localhost:8080. I can see the app start up on CircleCI.
I thought I would try remapping the localhost host entry:
machine:
  # Override /etc/hosts
  hosts:
    localhost: 127.0.0.1
My tests pass locally. On CircleCI they always fail to connect when calling new HttpPost("http://localhost:8080/api");, with this error:
org.apache.http.conn.HttpHostConnectException at SendMessageSpec.groovy:44
Caused by: java.net.ConnectException at SendMessageSpec.groovy:44

I had to increase the sleep time to something unreasonably large: - sleep 480.
I think I'll have a look at how to block the tests until the app has started.
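A fixed sleep races against the app's startup. A small polling loop can block until the app actually answers; this is a sketch, where the URL, the timeout, and the function name are assumptions based on the question:

```shell
#!/bin/sh
# wait_for_http URL [TIMEOUT_SECONDS]: poll until the URL answers with a
# 2xx/3xx response or the timeout expires. Returns 0 on success, 1 on timeout.
wait_for_http() {
  url=$1
  timeout=${2:-60}
  i=0
  while [ "$i" -lt "$timeout" ]; do
    if curl -sf -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage in place of "sleep 40" (endpoint taken from the question):
# wait_for_http http://localhost:8080 120 && ./gradlew test
```

This waits only as long as needed and fails fast with a non-zero exit code if the app never comes up, instead of letting the tests run against a dead port.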

Related

Provide port to elasticsearch in gitlab ci services for integration tests

I have a code repo in which I want to support both ES 5 and ES 7. The code detects which ES version is running via environment variables and uses the matching connector. While writing integration tests that run in GitLab CI, I want to run both ES5 and ES7 to cover both connectors. I have defined my ES images like so:
- name: docker.elastic.co/elasticsearch/elasticsearch:5.6.0
  alias: elastic
  command:
    - /bin/env
    - 'discovery.type=single-node'
    - 'xpack.security.enabled=false'
    - /bin/bash
    - bin/es-docker
- name: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
  alias: elastic7
  command:
    - port 9202
    - /bin/env
    - 'discovery.type=single-node'
    - 'xpack.security.enabled=false'
    - 'http.port=9202'
    - /bin/bash
    - bin/es-docker
But my ES7 instance is not able to run on the given port, and I get errors like this in my coverage stage:
*** WARNING: Service runner-ssasmqme-project-5918-concurrent-0-934a00b2fdd833db-docker.elastic.co__elasticsearch__elasticsearch-4 probably didn't start properly.
Health check error:
service "runner-ssasmqme-project-5918-concurrent-0-934a00b2fdd833db-docker.elastic.co__elasticsearch__elasticsearch-4-wait-for-service" timeout
Health check container logs:
Service container logs:
2023-02-07T07:59:39.562478117Z /usr/local/bin/docker-entrypoint.sh: line 37: exec: port 9202: not found
How do I run multiple ES versions on different ports in my GitLab CI?
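For reference, the entrypoint error in the log comes from the stray port 9202 element in the second service's command list: the image's docker-entrypoint.sh ends up trying to exec "port 9202" as a program. A hedged sketch of the second service without that element (the 'http.port=9202' env entry already moves the HTTP port):

```yaml
- name: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
  alias: elastic7
  command:
    - /bin/env
    - 'discovery.type=single-node'
    - 'xpack.security.enabled=false'
    - 'http.port=9202'
    - /bin/bash
    - bin/es-docker
```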

Karate-Chrome docker container in azure dev ops failing to connect

I have seen many similar issues to this but none seem to resolve or describe my exact issue.
I have configured an Azure DevOps pipeline to use a container like below:
container:
  image: ptrthomas/karate-chrome
  options: --cap-add=SYS_ADMIN
I have uploaded the contents of the example from the jobserver demo to a repository and then run the following:
steps:
  - script: mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner
It is my understanding (and I can see from the logs) that the files are loaded into the container and the script command is being executed inside the container. So that script command is the equivalent of docker exec -it -w /src karate mvn clean test -DargLine='-Dkarate.env=docker' -Dtest=WebRunner just without having to exec into the container.
When I run the example locally it executes the tests with no issues, but in Azure DevOps it fails at the point the tests actually start running, throwing this error:
14:16:37.388 [main] ERROR com.intuit.karate - karate.org.apache.http.conn.HttpHostConnectException: Connect to localhost:9222 [localhost/127.0.0.1] failed: Connection refused (Connection refused), http call failed after 2 milliseconds for url: http://localhost:9222/json
14:16:39.388 [main] DEBUG com.intuit.karate.shell.Command - attempt #4 waiting for http to be ready at: http://localhost:9222/json
14:16:39.391 [main] DEBUG com.intuit.karate - request: 5 > GET http://localhost:9222/json
5 > Host: localhost:9222
5 > Connection: Keep-Alive
5 > User-Agent: Apache-HttpClient/4.5.13 (Java/1.8.0_275)
5 > Accept-Encoding: gzip,deflate
Looking at other issues there have been suggestions to specify the driver in the feature files with this line:
* configure driver = { type: 'chrome', executable: 'chrome' }
but a) that hasn't worked for me, and b) shouldn't the karate-chrome docker image render this configuration unnecessary, as it should be no different from the container I run locally?
Any help appreciated!
Thanks
The only thing I can think of is that the Azure config does not call the ENTRYPOINT of the image.
Maybe you should try to create a container from scratch (one that does extensive logging) and see what happens, using the Karate one as a reference.
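One way to test that hypothesis from inside the pipeline is a diagnostic step before the tests that probes Chrome's debug port. This is a sketch: the port and URL come from the error log above, and check_chrome is a made-up helper name:

```shell
#!/bin/sh
# check_chrome URL: probe the DevTools endpoint. Returns 0 if it answers,
# 1 if not. If nothing answers, the image's ENTRYPOINT (which starts Chrome
# under supervisord) most likely never ran.
check_chrome() {
  if curl -sf "$1" >/dev/null; then
    echo "chrome debug endpoint is up"
    return 0
  fi
  echo "nothing answering at $1 - ENTRYPOINT probably not executed" >&2
  return 1
}

# Example (port taken from the error log):
# check_chrome http://localhost:9222/json/version
```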

Running Karate UI tests with “driverTarget” in GitLab CI

Question was:
I would like to run Karate UI tests using the driverTarget options to test my Java Play app which is running locally during the same job with sbt run.
I have a simple assertion to check for a property, but whenever the test runs I keep getting "description":"TypeError: Cannot read property 'getAttribute' of null". This is my karate-config.js:
if (env === 'ci') {
  karate.log('using environment:', env);
  karate.configure('driverTarget', {
    docker: 'justinribeiro/chrome-headless',
    showDriverLog: true
  });
}
This is my test scenario:
Scenario: test 1: some test
  Given driver 'http://localhost:9000'
  And waitUntil("document.readyState == 'complete'")
  And match attribute('some selector', 'some attribute') == 'something'
My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it.
Is there any workaround for this? thanks
A Docker container cannot reach the host through localhost, exactly as the question guesses: "My guess is that because justinribeiro/chrome-headless is running in its own container, localhost:9000 is different in the container compared to what's running outside of it."
To get around this and let the Docker container reach the app running on a host port, use the hostname host.docker.internal.
Change to make:
From: Given driver 'http://localhost:9000'
To: Given driver 'http://host.docker.internal:9000'
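One caveat: host.docker.internal resolves out of the box only on Docker Desktop (macOS/Windows). On a plain Linux runner you typically have to add the mapping yourself when launching the browser container (the host-gateway value needs Docker 20.10+). A sketch, assuming you start the container manually rather than letting driverTarget do it:

```yaml
script:
  # Linux runners only: map host.docker.internal to the host's gateway
  - docker run --add-host=host.docker.internal:host-gateway justinribeiro/chrome-headless
```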
Additionally, I was able to use the ptrthomas/karate-chrome image in CI (GitLab) by inserting the following inside my gitlab-ci.yml file:
stages:
  - uiTest

featureOne:
  stage: uiTest
  image: docker:latest
  cache:
    paths:
      - .m2/repository/
  services:
    - docker:dind
  script:
    - docker run --name karate --rm --cap-add=SYS_ADMIN -v "$PWD":/karate -v "$HOME"/.m2:/root/.m2 ptrthomas/karate-chrome &
    - sleep 45
    - docker exec -w /karate karate mvn test -DargLine='-Dkarate.env=docker' -Dtest=testParallel
  allow_failure: true
  artifacts:
    paths:
      - reports
      - ${CLOUD_APP_NAME}.log
My karate-config.js file looks like:
if (karate.env == 'docker') {
  karate.configure('driver', {
    type: 'chrome',
    showDriverLog: true,
    start: false,
    beforeStart: 'supervisorctl start ffmpeg',
    afterStop: 'supervisorctl stop ffmpeg',
    videoFile: '/tmp/karate.mp4'
  });
}

Gitlab CI Config for Rails System Tests with Selenium and Headless Chrome

I'm trying to set up GitLab continuous integration for a very simple Rails project and, despite all my searching, cannot find any workable solution for getting system tests to work using headless Chrome.
Here's my .gitlab-ci.yml file:
image: 'ruby:2.6.3'

before_script:
  - curl -sL https://deb.nodesource.com/setup_11.x | bash -
  - apt-get install -y nodejs
  - apt-get install -y npm
  - gem install bundler --conservative
  - bundle install
  - npm install -g yarn
  - yarn install

stages:
  - test

test:
  stage: test
  variables:
    MYSQL_HOST: 'mysql'
    MYSQL_DATABASE: 'cwrmb_test'
    MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    SYSTEM_EMAIL: 'test#example.com'
    REDIS_URL: 'redis://redis:6379/'
    SELENIUM_URL: "http://selenium__standalone-chrome:4444/wd/hub"
  services:
    - redis:latest
    - selenium/standalone-chrome:latest
    - name: mysql:latest
      command: ['--default-authentication-plugin=mysql_native_password']
  script:
    - RAILS_ENV=test bin/rails db:setup
    - bin/rails test:system
Here's my application_system_test_case.rb:
require 'test_helper'

def selenium_options
  driver_options = {
    desired_capabilities: {
      chromeOptions: {
        args: %w[headless disable-gpu no-sandbox disable-dev-shm-usage]
      }
    }
  }
  driver_options[:url] = ENV['SELENIUM_URL'] if ENV['SELENIUM_URL']
  driver_options
end

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400], options: selenium_options
end
However, this configuration yields the following error for every system test:
Selenium::WebDriver::Error::UnknownError: java.net.ConnectException: Connection refused (Connection refused)
I don't believe there are any other errors (to do with Redis or MySQL) in this configuration file, because as soon as I omit system tests, everything works perfectly.
By the way, if anyone has any better configuration files for achieving the same goal, I would love to see what others do. Thanks in advance.
In the GitLab docs on how services are linked to the job and on accessing the services, it says that if you start a tutum/wordpress container (via a services stanza):
tutum/wordpress will be started and you will have access to it from your build container under two hostnames to choose from:
tutum-wordpress
tutum__wordpress
Note: Hostnames with underscores are not RFC valid and may cause problems in 3rd party applications
So here's how I'd proceed:
try with http://selenium-standalone-chrome:4444/wd/hub, although this seems like a low-probability solution.
output SELENIUM_URL in your test driver. Is it getting set correctly?
review the logs as in how the health check of services works. Is standalone-chrome coming up?
add ping or nslookup in there somewhere. Is selenium-standalone-chrome (or the alternative) resolving? It seems like it does otherwise we'd get a "hostname unknown" rather than the "connection refused", but you can never be too sure.
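The last two checks can be scripted as a job step. This is a sketch: the hostname and port are the ones guessed above, and check_service is a made-up helper name:

```shell
#!/bin/sh
# check_service HOST URL: verify DNS resolution, then probe the URL.
# Returns 1 if the name does not resolve, 2 if it resolves but nothing
# answers, 0 if the service is up.
check_service() {
  host=$1
  url=$2
  if ! getent hosts "$host" >/dev/null; then
    echo "$host does not resolve" >&2
    return 1
  fi
  if ! curl -sf "$url" >/dev/null; then
    echo "$host resolves but is not answering" >&2
    return 2
  fi
  echo "$host is up"
}

# Example with the hyphenated service hostname:
# check_service selenium-standalone-chrome http://selenium-standalone-chrome:4444/wd/hub/status
```

Distinguishing "does not resolve" from "resolves but refuses connections" tells you whether the problem is the service alias or the container not coming up.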

GitLab CI exits lftp command when ending connection with error

I'm trying to deploy my web app using the FTP protocol and GitLab continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error.
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, and that makes GitLab think your job failed. The best thing would be to fix it.
If you think everything works fine and want to keep the lftp command from failing the job, add an || true to the end of the lftp command. But be aware that your job then won't fail even if a real error happens.
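Applied to the job above, that workaround looks like this. Note that moving the hard-coded credentials into CI variables such as $FTP_USER and $FTP_PASS is my addition, not part of the original:

```yaml
script:
  # || true masks the spurious TLS-shutdown error, but also any real failure
  - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; open -u $FTP_USER,$FTP_PASS ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true
```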
