I would like to run my CircleCI 2.0 build using OpenJDK 8 and 9. Are there any YAML examples available explaining how to build a Java project using multiple JDK versions?
Currently I am trying to add a new job java-8 to my build, but I do not want to repeat all the steps of my default Java 9 build job. Is there a DRY approach for this?
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:9-jdk
    working_directory: ~/repo
    environment:
      # Customize the JVM maximum heap limit
      JVM_OPTS: -Xmx1g
      TERM: dumb
    steps:
      - checkout
      # Run all tests
      - run: gradle check
  java-8:
    docker:
      - image: circleci/openjdk:8-jdk
You can use YAML anchors to achieve a reasonable DRY approach. For instance, it might look like:
version: 2
shared: &shared
  working_directory: ~/repo
  environment:
    # Customize the JVM maximum heap limit
    JVM_OPTS: -Xmx1g
    TERM: dumb
  steps:
    - checkout
    # Run all tests
    - run: gradle check

jobs:
  java-9:
    docker:
      - image: circleci/openjdk:9-jdk
    <<: *shared
  java-8:
    docker:
      - image: circleci/openjdk:8-jdk
    <<: *shared
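Note that with two named jobs instead of the single default build job, CircleCI 2.0 only runs them both if a workflows section schedules them. A minimal sketch, assuming the job names above (the workflow name build-all is arbitrary):

workflows:
  version: 2
  build-all:
    jobs:
      - java-9
      - java-8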
I'm sharing my own solution to this problem.
The basic approach is to use workflows:
version: 2
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    steps:
      - ...
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    steps:
      - ...
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
Now we can combine this with the approach explained in the accepted answer:
version: 2
shared: &shared
  steps:
    - checkout
    - restore_cache:
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn dependency:go-offline
    - save_cache:
        paths:
          - ~/.m2
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn package
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    <<: *shared
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    <<: *shared
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
Related
I have a Docker repository (Nexus), and after every build a new image is pushed into Nexus with a tag, as in the code below:
trigger:
- master

resources:
- repo: self

variables:
  tag: $(Build.BuildId)

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: default
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'nexus'
        repository: 'My.api'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: '$(tag)'
On the other hand, at the release step I have a docker-compose file; all I want is to pass the build variable Build.BuildId (or anything else) so that I can point to the related Docker image version (by tag) in Nexus.
My compose file is:
version: '3.8'
services:
  my.api:
    container_name: my.api
    image: "${REPO_URL}/my.api:${Build_BuildId}"
    restart: always
    ports:
      - '5100:80'
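One possible approach (a sketch, not tested): add a release step that writes the pipeline variables into a .env file next to the compose file, since docker-compose substitutes variables from it. The Bash@3 task is a standard Azure Pipelines task; the REPO_URL pipeline variable is an assumption:

steps:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      # docker-compose reads substitutions from a .env file in the
      # working directory; generate one from the pipeline variables.
      # REPO_URL is assumed to be defined as a pipeline variable.
      echo "REPO_URL=$(REPO_URL)" > .env
      echo "Build_BuildId=$(Build.BuildId)" >> .env
      docker-compose up -d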
I'm creating a CI/CD process using Docker, Heroku and GitHub Actions, but I've got an issue with envs.
Right now when I run heroku logs I see MongoServerError: bad auth : Authentication failed., and I think the problem is in passing envs to my container from GitHub Actions, because in the code I simply read process.env.MONGODB_PASS.
In docker-compose.yml I use envs from an .env file, but GitHub Actions can't use this file because I put it into .gitignore...
Here is my config:
.github/workflows/main.yml
name: Deploy
on:
  push:
    branches:
      - develop
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "---"
          heroku_email: "---"
          usedocker: true
docker-compose.yml
version: '3.7'
networks:
  proxy:
    external: true
services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "---"]
    networks:
      - proxy
  worker:
    container_name: ---
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
Can someone tell me how to resolve this issue? Thanks for any help!
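A possible fix (a sketch, untested): containers on Heroku never see your local .env file, so the values have to be set as Heroku config vars, for example in an extra step that copies them from GitHub secrets before the deploy. The MONGODB_PASS secret, the app name placeholder, and the Heroku CLI being available on the runner are assumptions:

      # Extra step before the deploy action: copy the secret into a
      # Heroku config var so process.env.MONGODB_PASS resolves at runtime.
      # "your-app-name" is a placeholder for the real Heroku app name.
      - name: Set Heroku config vars
        env:
          HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
        run: heroku config:set MONGODB_PASS="${{ secrets.MONGODB_PASS }}" --app your-app-name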
I have a CircleCI workflow; it has a defined executor and a number of jobs using that executor:
version: 2.1
executors:
  circleci-aws-build-agent:
    docker:
      - image: kagarlickij/circleci-aws-build-agent:latest
    working_directory: ~/project
jobs:
  checkout:
    executor: circleci-aws-build-agent
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/
          paths:
            - project
  set_aws_config:
    executor: circleci-aws-build-agent
    steps:
      - attach_workspace:
          at: ~/
      - run:
          name: Set AWS credentials
          command: bash aws-configure.sh
It works as expected, but in VSCode I see errors. Any ideas how this could be fixed?
There's nothing wrong with your yml; the issue is with Schemastore, which VSCode uses. Its CircleCI schema doesn't know about reusable executors, so it reports that you are missing the docker block which defines the default container image for the job. A block the schema accepts would be:
jobs:
  build:
    docker:
      - image: node:10
    steps:
      - checkout
If you have several jobs that use the same image, you can define a variable:
var_1: &job_defaults
  docker:
    - image: node:10
jobs:
  build:
    <<: *job_defaults
    steps:
      - checkout
  deploy:
    <<: *job_defaults
    steps:
      - checkout
Documentation: https://circleci.com/docs/2.0/configuration-reference/#docker--machine--macosexecutor
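If the warnings are annoying, one workaround in line with this answer (a sketch, untested) is to inline the image into the job instead of referencing the executor, which the Schemastore schema understands:

jobs:
  checkout:
    docker:
      # same image the executor referenced, inlined so the schema validates
      - image: kagarlickij/circleci-aws-build-agent:latest
    working_directory: ~/project
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/
          paths:
            - project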
I have a GitLab CI/CD setup that deploys a Spring Boot application to a DigitalOcean droplet using Rancher.
The task fails with a "wrong Rancher API URL and key" error message when, in fact, those API details are correct, judging from the fact that I have run the deployment manually using the "rancher up" command from the Rancher CLI.
.gitlab-ci.yml source
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer

digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --no-ssl-verify --environment Default
docker-compose.yml
version: '2'
services:
  web:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
  mta-hosting-optimizer-lb:
    image: rancher/lb-service-haproxy:v0.9.1
    ports:
      - 80:80/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin,agent
      io.rancher.container.agent_service.drain_provider: 'true'
      io.rancher.container.create_agent: 'true'
  web2:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
rancher-compose.yml
version: '2'
services:
  web:
    scale: 1
    start_on_create: true
  mta-hosting-optimizer-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
        - path: ''
          priority: 1
          protocol: http
          service: web
          source_port: 80
          target_port: 8080
        - priority: 2
          protocol: http
          service: web2
          source_port: 80
          target_port: 8080
    health_check:
      response_timeout: 2000
      healthy_threshold: 2
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
  web2:
    scale: 1
    start_on_create: true
I eventually found the cause of the problem by doing a bit more research online. I discovered that the required RANCHER_URL was the base URL rather than the full URL provided in the Rancher UI. For example, I was initially using the full URL generated by the Rancher UI that looked like this: http://XXX.XXX.XXX.XX:8080/v2-beta/projects/1a5.
The correct URL is http://XXX.XXX.XXX.XX:8080/.
I set RANCHER_URL as a secret environment variable in GitLab SaaS (Cloud/Online).
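For reference, a sketch of how it would look if set in .gitlab-ci.yml instead of as a secret variable (the address is a placeholder):

variables:
  # base URL only, without the /v2-beta/projects/... suffix from the UI
  RANCHER_URL: "http://XXX.XXX.XXX.XX:8080/"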
I appreciate everyone that tried to help.
Thank you very much.
Following this answer, I wrote this Travis configuration file:
language: php

php:
  - 5.3
  - 5.4
  - 5.5
  - 5.6
  - 7
  - hhvm
  - nightly

branches:
  only:
    - master
    - /^\d+\.\d+\.\d+$/

matrix:
  fast_finish: true
  include:
    - php: 5.3
      env: deps="low"
    - php: 5.5
      env: SYMFONY_VERSION=2.3.*
    - php: 5.5
      env: SYMFONY_VERSION=2.4.*
    - php: 5.5
      env: SYMFONY_VERSION=2.5.*
    - php: 5.5
      env: SYMFONY_VERSION=2.6.*
    - php: 5.5
      env: SYMFONY_VERSION=2.7.*
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  allow_failures:
    - php: nightly
    - env: TEST_GROUP=canFail

before_script:
  - composer self-update
  - if [ "$SYMFONY_VERSION" != "" ]; then composer require --dev --no-update symfony/symfony=$SYMFONY_VERSION; fi
  - if [ "$deps" = "low" ]; then composer update --prefer-lowest; fi
  - if [ "$deps" != "low" ]; then composer update --prefer-source; fi

script: phpunit
But Travis CI counts only the php nightly version as an "allowed to fail" version. Am I using the environment variables in the wrong way?
UPDATE
Just to be precise, I know that I can write the environment directly like this:
matrix:
  include:
    - php: 5.5
      env: SYMFONY_VERSION=2.8.*@dev
  allow_failures:
    - env: SYMFONY_VERSION=2.8.*@dev
but still I don't get why the other way doesn't work.
What you specify in allow_failures: is your allowed failures:
"You can define rows that are allowed to fail in the build matrix. Allowed failures are items in your build matrix that are allowed to fail without causing the entire build to fail. This lets you add in experimental and preparatory builds to test against versions or configurations that you are not ready to officially support."
Unfortunately, I believe the matrix reads your first configuration as marking only the php nightly version as "allowed to fail", with the environment treated as part of the nightly entry.
Because Travis matches allowed failures exactly, you cannot just specify env: as an allowed failure; you have to specify the env: you want to allow to fail for each PHP version, for example:
allow_failures:
  - php: 5.3
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: 5.4
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: 5.5
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: 5.6
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: 7.0
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: hhvm
    env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
  - php: nightly # Allow all tests to fail for nightly
According to this issue, the php and env keys must match perfectly. env can be either a single value or an array, but in both cases it must be a perfect match. So if you want your build:
- php: 5.5
  env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail
to be allowed to fail, you have to give either the whole env key (SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail) or the whole env key and the PHP version (if you had the same env key for different PHP versions).
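Concretely, for the build above a matching entry repeats the env value verbatim (a sketch; add the php key as well if the same env is used for several PHP versions):

allow_failures:
  # must match the include entry's env character for character
  - env: SYMFONY_VERSION=2.8.*@dev TEST_GROUP=canFail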