Travis CI: How to allow failures with a customized environment variable?

Following this answer, I wrote this Travis configuration file:
language: php
php:
  - 5.3
  - 5.4
  - 5.5
  - 5.6
  - 7
  - hhvm
  - nightly
branches:
  only:
    - master
    - /^\d+\.\d+\.\d+$/
matrix:
  fast_finish: true
  include:
    - php: 5.3
      env: deps="low"
    - php: 5.5
      env: SYMFONY_VERSION=2.3.*
    - php: 5.5
      env: SYMFONY_VERSION=2.4.*
    - php: 5.5
      env: SYMFONY_VERSION=2.5.*
    - php: 5.5
      env: SYMFONY_VERSION=2.6.*
    - php: 5.5
      env: SYMFONY_VERSION=2.7.*
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  allow_failures:
    - php: nightly
    - env: TEST_GROUP=canFail
before_script:
  - composer self-update
  - if [ "$SYMFONY_VERSION" != "" ]; then composer require --dev --no-update symfony/symfony=$SYMFONY_VERSION; fi
  - if [ "$deps" = "low" ]; then composer update --prefer-lowest; fi
  - if [ "$deps" != "low" ]; then composer update --prefer-source; fi
script: phpunit
But Travis CI counts only the PHP nightly version as an "allowed to fail" version. Am I using the environment variables in the wrong way?
UPDATE
To be precise, I know that I can write the environment variable directly, like this:
matrix:
  include:
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*#dev
  allow_failures:
    - env: SYMFONY_VERSION=2.8.*#dev
but I still don't get why the other way doesn't work.

What you specify in allow_failures: is your list of allowed failures:
"You can define rows that are allowed to fail in the build matrix. Allowed failures are items in your build matrix that are allowed to fail without causing the entire build to fail. This lets you add in experimental and preparatory builds to test against versions or configurations that you are not ready to officially support."
Unfortunately, I believe the matrix reads your first configuration as allowing only the PHP nightly version to fail, with the environment treated as part of the nightly job.
Because of how Travis matches allowed failures, each entry must be an exact match: you cannot specify just env: on its own as an allowed failure. You have to spell out, for each PHP version, the env: you want to allow to fail. For example:
allow_failures:
  - php: 5.3
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: 5.4
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: 5.5
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: 5.6
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: 7.0
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: hhvm
    env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  - php: nightly # Allow all tests to fail for nightly

According to this issue, the php and env keys must match perfectly. env can be either a single value or an array, but in both cases it must be a perfect match. So if you want your build:
- php: 5.5
  env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
to be allowed to fail, you have to give either the whole env key (SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail) or the whole env key plus the PHP version (if you use the same env key for several PHP versions).
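Concretely, a minimal sketch (the env string in allow_failures: must be copied character-for-character from the include: entry):

```yaml
matrix:
  include:
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
  allow_failures:
    # Repeating only TEST_GROUP=canFail would not match;
    # the whole env value (and the php key, if needed to disambiguate)
    # must be identical to the include: entry above.
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*#dev TEST_GROUP=canFail
```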

Related

Docker unable to run neo4j (exited with code 4)

I am running Neo4j on Docker (Windows 11), using the following docker-compose.yml:
version: '3.3'
services:
  neo4j:
    image: neo4j:latest
    container_name: "cmkg-neo4j-db"
    restart: always
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/import:/var/lib/neo4j/import
      - ./db/neo4j-cyphers:/import
      - $HOME/neo4j/plugins:/plugins
      - $HOME/neo4j/logs:/logs
    ports:
      - 7474:7474
      - 7687:7687
    environment:
      - NEO4J_ACCEPT_LICENCE_AGREEMENT=yes
      - NEO4J_AUTH=neo4j/root
      - NEO4J_dbms_default__listen__address=0.0.0.0
      - NEO4J_dbms_default__advertised__address=localhost
      - NEO4J_dbms_connector_bolt_enabled=true
      - NEO4J_dbms_routing_enabled=true
      - NEO4J_dbms_connector_bolt_listen__address=:7687
      - NEO4J_dbms_connector_bolt_advertised__address=:7687
      - NEO4J_dbms_logs_debug_level=DEBUG
      - NEO4J_apoc_import_file_use__neo4j__config=true
      - NEO4J_apoc_initializer_cypher=CALL apoc.cypher.runSchemaFile('file:///init_db_setup.cypher')
      - NEO4J_apoc_import_file_enabled=true
      - NEO4J_dbms_security_procedures_unrestricted=apoc.\\\*
      - NEO4JLABS_PLUGINS=["apoc", "n10s"]
    networks:
      - cmkg_net
  api:
    container_name: "cmkg-container"
    restart: always
    build:
      context: .
    ports:
      - 5000:5000
    environment:
      - NEO4J_URI=bolt://neo4j:7687
      - NEO4J_USER=neo4j
      - NEO4J_PW=root
    volumes:
      - .:/app
    links:
      - 'neo4j'
    depends_on:
      - 'neo4j'
    networks:
      - cmkg_net
networks:
  cmkg_net:
    driver: bridge
It was working properly for a while, but after pulling a new branch it gave me the following error:
cmkg-neo4j-db exited with code 4
cmkg-neo4j-db | NEO4JLABS_PLUGINS has been renamed to NEO4J_PLUGINS since Neo4j 5.0.0.
cmkg-neo4j-db | The old name will still work, but is likely to be deprecated in future releases.
cmkg-neo4j-db | Installing Plugin 'apoc' from /var/lib/neo4j/labs/apoc-*-core.jar to /plugins/apoc.jar
cmkg-neo4j-db | Applying default values for plugin apoc to neo4j.conf
cmkg-neo4j-db | Skipping dbms.security.procedures.unrestricted for plugin apoc because it is already set.
cmkg-neo4j-db | You may need to add apoc.* to the dbms.security.procedures.unrestricted setting in your configuration file.
cmkg-neo4j-db | Fetching versions.json for Plugin 'n10s' from https://neo4j-labs.github.io/neosemantics/versions.json
cmkg-neo4j-db | Installing Plugin 'n10s' from null to /plugins/n10s.jar
cmkg-neo4j-db exited with code 4
Trying the previous branch gives the same error.
So far I have tried removing and reinstalling Docker and Neo4j, but the error persists.
Any idea what could cause this issue, and how I can make it run again?
Using latest pulls whatever the newest Neo4j release happens to be, and it's really not recommended to use that tag.
The previously working behaviour was probably with Neo4j 4, but now Neo4j 5 is out and APOC has changed as well.
A couple of things about your docker compose:
NEO4J_ACCEPT_LICENCE_AGREEMENT should be renamed to NEO4J_ACCEPT_LICENSE_AGREEMENT
The following settings are not valid anymore:
- NEO4J_apoc_import_file_use__neo4j__config=true
- NEO4J_apoc_initializer_cypher=CALL apoc.cypher.runSchemaFile('file:///init_db_setup.cypher')
- NEO4J_apoc_import_file_enabled=true
Take a look at installing APOC Extended here https://neo4j.com/labs/apoc/5/installation/
In case you want to remain on latest 4.4, you can just use the 4.4-enterprise tag which is automatically updated to the latest 4.4 version.
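For instance, a sketch of the relevant fragment pinned to 4.4 (only the lines that change; this assumes you keep the rest of your service definition as it was):

```yaml
services:
  neo4j:
    # Pin the major version instead of `latest`, so an upstream 5.x release
    # cannot silently break the container on the next pull.
    image: neo4j:4.4-enterprise
    environment:
      # Note the spelling: LICENSE, not LICENCE.
      - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
      - NEO4J_AUTH=neo4j/root
```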

Using environment variables in docker-compose.yml file

I am trying to configure GitLab CI to continuously build and deploy my app with Docker and Docker Compose.
When running the CI pipeline, I get the following error messages:
The Compose file './docker-compose.ci.yml' is invalid because:
services.dashboard.ports contains an invalid type, it should be a number, or an object
services.mosquitto.ports contains an invalid type, it should be a number, or an object
services.mosquitto.ports contains an invalid type, it should be a number, or an object
services.mosquitto.ports contains an invalid type, it should be a number, or an object
services.mosquitto.ports value [':', ':', ':'] has non-unique elements
I would like to use environment variables to keep my configuration hidden.
Following is a snippet of my docker-compose.ci.yml:
version: "3.9"
services:
  dashboard:
    build:
      context: ./dashboard
      dockerfile: Dockerfile.prod
      cache_from:
        - "${BACKEND_IMAGE}"
    image: "${BACKEND_IMAGE}"
    command: gunicorn dashboard.wsgi:application --bind ${DJANGO_HOST}:${DJANGO_PORT}
    volumes:
      - static_volume:/home/app/web/static
    ports:
      - "${DJANGO_PORT}:${DJANGO_PORT}"
    env_file:
      - .env
    depends_on:
      - postgres
    ...
  mosquitto:
    build:
      context: ./mosquitto
      cache_from:
        - "${MOSQUITTO_IMAGE}"
    image: "${MOSQUITTO_IMAGE}"
    volumes:
      - ./mosquitto/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
      - ./mosquitto/data:/mosquitto/data
      - ./mosquitto/log:/mosquitto/log
      - broker_certs:/mosquitto/config/certs
    ports:
      - "${MQTT_DEFAULT_PORT}:${MQTT_DEFAULT_PORT}"
      - "${MQTT_SECURE_PORT}:${MQTT_SECURE_PORT}"
      - "${MQTT_WEBSOCKETS_PORT}:${MQTT_WEBSOCKETS_PORT}"
    env_file:
      - .env
    ...
In my build stage, I set up the environment variables using a bash script:
.gitlab-ci.yml:
image:
  name: docker/compose:1.28.5
  entrypoint: [""]
services:
  - docker:dind
stages:
  - build
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
build:
  stage: build
  before_script:
    ...
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    ...
  ...
setup_env.sh:
...
# mosquitto config
echo MQTT_DEFAULT_PORT=$MQTT_DEFAULT_PORT >> .env
echo MQTT_SECURE_PORT=$MQTT_SECURE_PORT >> .env
echo MQTT_WEBSOCKETS_PORT=$MQTT_WEBSOCKETS_PORT >> .env
echo MOSQUITTO_COMMON_NAME=$MOSQUITTO_COMMON_NAME >> .env
...
All my variables are properly set up on GitLab.
Running this with docker-compose locally on my machine doesn't generate any errors.
What am I doing wrong?
The configuration shown above is correct. The problem was that my branch was not protected, and the environment variables on GitLab were set up for protected branches/tags only. Consequently, they were never picked up during the build. In case anyone has a similar issue, make sure your branch/tag is protected.
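The failure mode is easy to reproduce locally: when a CI variable is not exposed to the job, the shell expands it to an empty string, so setup_env.sh silently writes empty assignments and the compose port mappings collapse to ":". A quick sketch (variable name taken from the question):

```shell
# Simulate a runner where the protected-only variable is not exposed.
unset MQTT_DEFAULT_PORT
echo "MQTT_DEFAULT_PORT=$MQTT_DEFAULT_PORT" > .env
cat .env
# The value is empty, so "${MQTT_DEFAULT_PORT}:${MQTT_DEFAULT_PORT}"
# in docker-compose.ci.yml becomes just ":" — exactly the reported error.
```

Running docker-compose config against such a .env file is a cheap way to catch this before the pipeline does.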

Why is my first test in Postman/Newman hanging in Travis-CI?

Tl;dr I'm using Docker to run my Postman/Newman tests and my API tests hang when ran in Travis-CI but not when ran locally. Why am I encountering tests that run infinitely?
Howdy guys! I've recently started to learn Docker, Travis-CI and Newman for a full stack application. I started with developing the API and I'm taking a TDD approach. As such, I'm testing my API first. I set up my .travis.yml file to download a specific version of Docker-Compose and then use Docker-Compose to run my tests in a container I name api-test. The container has an image, dannydainton/htmlextra, which is built from the official postman/newman:alpine image like so:
language: node_js
node_js:
  - "14.3.0"
env:
  global:
    - DOCKER_COMPOSE_VERSION: 1.26.2
    - PGHOST: db
    - PGDATABASE: battle_academia
    - secure: "xDZHJ9ZVe3WPXr6WetERMjFnTlMowyEoeckzLcRvWyEIe2qbnWrJRo7cIRxA0FsyJ7ao4QLVv4XhOIeqJupwW3nfnljo35WGcuRBLh76CW6JSuTIbpV1dndOpATW+aY3r6GSwpojnN4/yUVS53pvIeIn03PzQWmnbiJ0xfjStrJzYNpSVIHLn0arujDUMyze8+4ptS1qfekOy2KRifG5+viFarUbWUXaUiJfZCn14S4Wy5N/T+ycltNjX/qPAVZYV3fxY1ZyNX7wzJA+oV71MyApp5PgNW2SBlePkeZTnkbI7FW100MUnE4bvy00Jr/aCoWZYTySz86KT+8HSGzy6d+THO8zjOXKJV5Vn93+XWmxtp/yjBsg+dtFlZUWkN99EBkEjjwJc1Oy5zrOQNjsptNGpl1kid5+bAT4XcP4xn7X5pc7QB8ZE3igbfKTM11LABYN1adcIwgGIjUz1eQnFuibtkVM4oqE92JShUF/6gbwGJsWjQGBNBCOBBueYNB86sk0TiAfS08z2VW9L3pcljA2IwdXclw3f1ON6YelBTJmc88EmxI4TS0hRC5KgMCkegW1ndcTZwqIQGFm+NFbe1hKMmqTfgOg5M8OQZBtUkF60Lox09ECg59IrYj+BIa9J303+bo+IMgZ1JVYlL7FA2qc0bE8J/9A1C2wCRjDLLE="
    - secure: "F/Ru7QZvA+zWjQ7K7vhA3M2ZrYKXVIlkIF1H7v2dPv/lsc18eWGpOQep4uAjX4IMyLY/6n7uYRLnSlbvOWulVUW8U52zWiQkYFF9OwosuTdIlVTAQGp3B0CAA+RCxMtDQay6fN9H6e2bL3KwjT//VUHd1E6BPu+O1/RyX+0+0KvTmExmMSuioSpDPcI20Mym2vRCgNPb1gfajr5QfWKPJlrPjfyNhDxWMhM94nwTuLYIVZwZPTZ0Ro5D6hhXFVZOFIlHr5VDbbFa+Xo0TIdP/ZudxZ7p3Mn7ncA8seLx2Q5/zH6tJ4DSUpEm67l5IqUrvd9qp0CNCjlTcl3kOJK4qIB1WtLm6oW2rBqDyvthhuprPpqEcs7C9z2604VLybdOmJ0+Y/7uIo6po388avGN4ZwZbWQ1xiiW+Ja8kkHZYEKo4m0AbKdX9pn8otcNO+1xlDtUU7CZey2QA8WrFlfHWqRapIgNfT5tTSTAul3yWAFCRw09PHYELuO7oQCqFZi7zu3HKWknbkzjf+Cz3TfIFTX/3saiqyquhieOPbnGC5xgTmTrA2ShfNxQ6nkDJPU0/qmaCNJt9CwpNS2ArqcK3xYijiNi+SHaKwEsYh0VqiUqSCWn05eYKNAe3MUQDsyKFEkykJW60yEkN7JsvO1WpI53VKmOnZlRHLzJyc5WkZw="
    - PGPORT: 5432
services:
  - docker
before_install:
  - npm rebuild
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
jobs:
  include:
    - stage: api tests
      script:
        - docker --version
        - docker-compose --version
        - >
          docker-compose run api-test
          newman run battle-academia_placement-exam_api-test.postman-collection.json
          -e battle-academia_placement-exam_docker.postman-environment.json
          -r htmlextra,cli
And, my docker-compose.yml file has 4 containers:
client is the React front end,
api is the NodeJs/Express back end,
db is the database that the API pulls data from in the test environment,
api-test is the container with Newman/Postman and some reporters which I believe is built from NodeJs.
I hardcode in the environment variables when running locally, but the file is as follows:
version: '3.8'
services:
  client:
    build: ./client
    ports:
      - "80:80"
    depends_on:
      - api
  api:
    build: ./server
    environment:
      - PGHOST=${PGHOST}
      - PGDATABASE=${PGDATABASE}
      - PGUSER=${PGUSER}
      - PGPASSWORD=${PGPASSWORD}
      - PGPORT=${PGPORT}
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:12.3-alpine
    restart: always
    environment:
      - POSTGRES_DB=${PGDATABASE}
      - POSTGRES_USER=${PGUSER}
      - POSTGRES_PASSWORD=${PGPASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - ./server/db/scripts:/docker-entrypoint-initdb.d
  api-test:
    image: dannydainton/htmlextra
    entrypoint: [""]
    command: newman run -v
    volumes:
      - ./server/api/postman-collections:/etc/newman
    depends_on:
      - api
Now that the setup is out of the way, my issue is that this config works locally when I cut out .travis.yml and run the commands myself; putting Travis-CI in the mix, however, stirs up an issue where my first test just... runs forever.
I appreciate any advice or insight towards this issue that anyone provides. Thanks in advance!
The issue did not come from where I had expected. After debugging, I thought that the issue originally came from permission errors since I discovered that the /docker-entrypoint-initdb.d directory got ignored during container startup. After looking at the Postgres Dockerfile, I learned that the files are given permission for a user called postgres. The actual issue stemmed from me foolishly adding the database initialization scripts to my .gitignore.
Edit
Also the Newman tests were hanging because they were trying to access database tables that did not exist.
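If you suspect the same problem, git itself can tell you whether the init scripts are being ignored. A sketch under assumed paths (server/db/scripts is the question's layout; the file names are illustrative):

```shell
# Reproduce the mistake in a throwaway repo: an over-broad .gitignore entry
# swallows the database init scripts, so they never reach CI.
cd "$(mktemp -d)"
git init -q .
mkdir -p server/db/scripts
echo 'CREATE TABLE exams (id INT);' > server/db/scripts/init.sql
echo 'server/db/scripts/' > .gitignore
# check-ignore prints the path and exits 0 when a file is ignored.
git check-ignore server/db/scripts/init.sql && echo "init scripts are gitignored"
```

In the real repo, `git check-ignore -v <path>` also shows which .gitignore line is responsible.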

CircleCI config: Missing property "docker" in VSCode

I have a CircleCI workflow; it has a defined executor and a number of jobs using that executor:
version: 2.1
executors:
  circleci-aws-build-agent:
    docker:
      - image: kagarlickij/circleci-aws-build-agent:latest
    working_directory: ~/project
jobs:
  checkout:
    executor: circleci-aws-build-agent
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/
          paths:
            - project
  set_aws_config:
    executor: circleci-aws-build-agent
    steps:
      - attach_workspace:
          at: ~/
      - run:
          name: Set AWS credentials
          command: bash aws-configure.sh
It works as expected but in VSCode I see errors:
Any ideas how it could be fixed?
There's nothing wrong with your yml, the issue is with Schemastore, which VSCode uses.
This is because you are missing the docker block which defines the default container image for the job. A valid block would be:
jobs:
  build:
    docker:
      - image: node:10
    steps:
      - checkout
If you have several jobs that use the same image, you can define a variable:
var_1: &job_defaults
  docker:
    - image: node:10
jobs:
  build:
    <<: *job_defaults
    steps:
      - checkout
  deploy:
    <<: *job_defaults
    steps:
      - checkout
Documentation: https://circleci.com/docs/2.0/configuration-reference/#docker--machine--macosexecutor

Run CircleCI 2.0 build using several JDKs

I would like to run my Circle CI 2.0 build using Open JDK 8 & 9. Are there any YAML examples available explaining how to build a Java project using multiple JDK versions?
Currently I am trying to add a new job java-8 to my build, but I do not want to repeat all the steps of my default Java 9 build job. Is there a DRY approach for this?
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:9-jdk
    working_directory: ~/repo
    environment:
      # Customize the JVM maximum heap limit
      JVM_OPTS: -Xmx1g
      TERM: dumb
    steps:
      - checkout
      # Run all tests
      - run: gradle check
  java-8:
    - image: circleci/openjdk:8-jdk
You can use YAML anchors to achieve a reasonable DRY approach. For instance, it might look like:
version: 2
shared: &shared
  working_directory: ~/repo
  environment:
    # Customize the JVM maximum heap limit
    JVM_OPTS: -Xmx1g
    TERM: dumb
  steps:
    - checkout
    # Run all tests
    - run: gradle check
jobs:
  java-9:
    docker:
      - image: circleci/openjdk:9-jdk
    <<: *shared
  java-8:
    docker:
      - image: circleci/openjdk:8-jdk
    <<: *shared
I'm sharing my own solution to this problem.
The basic routing uses workflows:
version: 2
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    steps:
      - ...
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    steps:
      - ...
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
Now we can combine this with the approach explained in the accepted answer:
version: 2
shared: &shared
  steps:
    - checkout
    - restore_cache:
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn dependency:go-offline
    - save_cache:
        paths:
          - ~/.m2
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn package
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    <<: *shared
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    <<: *shared
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
