CircleCI config: Missing property "docker" in VSCode

I have a CircleCI workflow that defines an executor and a number of jobs using that executor:
version: 2.1
executors:
  circleci-aws-build-agent:
    docker:
      - image: kagarlickij/circleci-aws-build-agent:latest
    working_directory: ~/project
jobs:
  checkout:
    executor: circleci-aws-build-agent
    steps:
      - checkout
      - persist_to_workspace:
          root: ~/
          paths:
            - project
  set_aws_config:
    executor: circleci-aws-build-agent
    steps:
      - attach_workspace:
          at: ~/
      - run:
          name: Set AWS credentials
          command: bash aws-configure.sh
It works as expected, but in VSCode I see errors such as Missing property "docker".
Any ideas how this could be fixed?

There's nothing wrong with your yml; the issue is with the Schemastore schema that VSCode uses.
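If the warning comes from the Red Hat YAML extension (which pulls its schemas from Schemastore), one hedged workaround is to pin the schema explicitly with a yaml-language-server modeline at the top of the config file; the schema URL below is the one Schemastore publishes for CircleCI and is an assumption on my part, not something from the original question:
# Pinning the schema in .circleci/config.yml so the editor validates against it directly
# yaml-language-server: $schema=https://json.schemastore.org/circleciconfig.json
version: 2.1
executors:
  circleci-aws-build-agent:
    docker:
      - image: kagarlickij/circleci-aws-build-agent:latest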

This is because you are missing the docker block which defines the default container image for the job. A valid block would be:
jobs:
  build:
    docker:
      - image: node:10
    steps:
      - checkout
If you have several jobs that use the same image, you can define a YAML anchor with the shared defaults:
var_1: &job_defaults
  docker:
    - image: node:10
jobs:
  build:
    <<: *job_defaults
    steps:
      - checkout
  deploy:
    <<: *job_defaults
    steps:
      - checkout
Documentation: https://circleci.com/docs/2.0/configuration-reference/#docker--machine--macosexecutor

Related

Azure devops pass BuildId to release pipeline

I have a Docker repository (Nexus), and after every build a new image is pushed to Nexus with a tag, as in the code below:
trigger:
  - master
resources:
  - repo: self
variables:
  tag: $(Build.BuildId)
stages:
  - stage: Build
    displayName: Build image
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: default
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: 'nexus'
              repository: 'My.api'
              command: 'buildAndPush'
              Dockerfile: '**/Dockerfile'
              tags: '$(tag)'
On the other hand, at the release step I have a docker-compose file; all I want is to pass the build variable Build.BuildId (or anything else) so I can point at the right Docker image version (by tag) in Nexus. A possible approach is sketched after the compose file below.
My compose file is:
version: '3.8'
services:
  my.api:
    container_name: my.api
    image: "${REPO_URL}/my.api:${Build_BuildId}"
    restart: always
    ports:
      - '5100:80'
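No accepted answer is quoted here, but one common pattern is to run docker-compose from a later stage of the same multi-stage pipeline and export the values in the shell that performs the substitution. The sketch below is only an illustration; the stage layout, the Bash@3 task and the REPO_URL value are assumptions, not taken from the original pipeline:
- stage: Release
  displayName: Deploy with docker-compose
  dependsOn: Build
  jobs:
    - job: Deploy
      pool:
        name: default
      steps:
        - task: Bash@3
          displayName: Run compose with the image tag from this run
          inputs:
            targetType: 'inline'
            script: |
              # docker-compose substitutes ${REPO_URL} and ${Build_BuildId} from the
              # environment; dots are not valid in substitution names, hence the
              # underscore form used in the compose file above.
              export REPO_URL="nexus.example.com:8083"   # assumption: your Nexus registry host
              export Build_BuildId="$(Build.BuildId)"    # expanded by the pipeline before the script runs
              docker-compose -f docker-compose.yml up -d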

How do I pass values to docker compose from GitHub Action workflow

How does one pass a value to the docker-compose file from an action workflow? In my GitHub workflow, I have a build step consisting of ...
- name: Build Compose Images
  env:
    IMAGE_TAG: ${{ steps.preamble.outputs.releasetag }}
  run: IMAGE_TAG=${{ env.IMAGE_TAG }} docker compose -f compose.yaml build
with docker-compose file ...
version: "3"
services:
db:
build: MySQL
environment:
IMAGE_TAG: ${IMAGE_TAG}
image: "repo/image:${IMAGE_TAG}"
ports:
- '3306:3306'
In each case nothing seems to work unless I hard code a value in an environment block, which is not ideal. Thanks.
Hmm, that actually works if I remove the environment key in the docker-compose file (sketched below).
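For reference, a minimal sketch of the combination that comment describes (step and file names follow the question): the env: block on the workflow step is enough, because docker compose reads IMAGE_TAG from the process environment when substituting ${IMAGE_TAG}, and the compose service no longer carries an environment: key.
- name: Build Compose Images
  env:
    IMAGE_TAG: ${{ steps.preamble.outputs.releasetag }}
  run: docker compose -f compose.yaml build
with compose.yaml reduced to:
version: "3"
services:
  db:
    build: MySQL
    image: "repo/image:${IMAGE_TAG}"   # substituted from the step's environment
    ports:
      - '3306:3306'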

How to pass env to container building with Github Actions?

I'm creating a CI/CD process using Docker, Heroku and GitHub Actions, but I've got an issue with envs.
Right now when I run heroku logs I see MongoServerError: bad auth : Authentication failed., and I think the problem is in how envs are passed to my container from GitHub Actions, because in code I simply use process.env.MONGODB_PASS.
In docker-compose.yml I'm using envs from an .env file, but GitHub Actions can't use this file because I put it into .gitignore...
Here is my config:
.github/workflows/main.yml
name: Deploy
on:
  push:
    branches:
      - develop
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "---"
          heroku_email: "---"
          usedocker: true
docker-compose.yml
version: '3.7'
networks:
  proxy:
    external: true
services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "---"]
    networks:
      - proxy
  worker:
    container_name: ---
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
Can someone tell me how to resolve this issue? Thanks for any help!
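One common pattern here (a hedged sketch; the secret name MONGODB_PASS and the step title are assumptions) is to recreate the gitignored .env file from repository secrets inside the workflow before the deploy step, so anything that expects the file can still read it:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Recreate .env from repository secrets
        run: |
          # .env is gitignored, so it is rebuilt on the runner from GitHub secrets
          echo "MONGODB_PASS=${{ secrets.MONGODB_PASS }}" >> .env
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "---"
          heroku_email: "---"
          usedocker: true
Note that variables the app needs at run time on Heroku itself are usually set as Heroku config vars rather than read from a .env file.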

Gitlab-CI Deployment stage and task fails with wrong Rancher API Url and Key

I have a GitLab CI/CD setup that deploys a Spring Boot application to a DigitalOcean droplet using Rancher.
The task fails with a wrong Rancher API URL and key error message when, in fact, those API details are correct, judging from the fact that I have run the deployment manually using the "rancher up" command from the Rancher CLI.
.gitlab-ci.yml source
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
stages:
  - build
  - package
  - deploy
maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
docker-build:
  stage: package
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/username/mta-hosting-optimizer .
    - docker push registry.gitlab.com/username/mta-hosting-optimizer
digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --no-ssl-verify --environment Default
docker-compose.yml
version: '2'
services:
  web:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
  mta-hosting-optimizer-lb:
    image: rancher/lb-service-haproxy:v0.9.1
    ports:
      - 80:80/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin,agent
      io.rancher.container.agent_service.drain_provider: 'true'
      io.rancher.container.create_agent: 'true'
  web2:
    image: registry.gitlab.com/username/mta-hosting-optimizer:latest
    ports:
      - 8082:8080/tcp
rancher-compose.yml
version: '2'
services:
  web:
    scale: 1
    start_on_create: true
  mta-hosting-optimizer-lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
        - path: ''
          priority: 1
          protocol: http
          service: web
          source_port: 80
          target_port: 8080
        - priority: 2
          protocol: http
          service: web2
          source_port: 80
          target_port: 8080
    health_check:
      response_timeout: 2000
      healthy_threshold: 2
      port: 42
      unhealthy_threshold: 3
      initializing_timeout: 60000
      interval: 2000
      reinitializing_timeout: 60000
  web2:
    scale: 1
    start_on_create: true
I eventually found the cause of the problem by doing a bit more research online. I discovered that the RANCHER_URL that was required was the base URL rather than the full URL provided in the Rancher UI. For example, I was initially using the full URL generated by the Rancher UI, which looked like this: http://XXX.XXX.XXX.XX:8080/v2-beta/projects/1a5.
The correct URL is http://XXX.XXX.XXX.XX:8080/.
I set RANCHER_URL as a secret environment variable in GitLab SaaS (Cloud/Online).
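For reference, a sketch of how those variables line up with the deploy job above (the variable names are the ones the cdrx/rancher-gitlab-deploy image documents; the values are illustrative):
# Set under Settings > CI/CD > Variables in the GitLab project:
#   RANCHER_URL        = http://XXX.XXX.XXX.XX:8080/   # base URL only, no /v2-beta/projects/... suffix
#   RANCHER_ACCESS_KEY = <API access key>
#   RANCHER_SECRET_KEY = <API secret key>
digitalocean-deploy:
  image: cdrx/rancher-gitlab-deploy
  stage: deploy
  script:
    - upgrade --no-ssl-verify --environment Default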
I appreciate everyone that tried to help.
Thank you very much.

Run CircleCI 2.0 build using several JDKs

I would like to run my CircleCI 2.0 build using OpenJDK 8 & 9. Are there any YAML examples available explaining how to build a Java project using multiple JDK versions?
Currently I am trying to add a new job java-8 to my build, but I do not want to repeat all the steps of my default Java 9 build job. Is there a DRY approach for this?
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      - image: circleci/openjdk:9-jdk
    working_directory: ~/repo
    environment:
      # Customize the JVM maximum heap limit
      JVM_OPTS: -Xmx1g
      TERM: dumb
    steps:
      - checkout
      # Run all tests
      - run: gradle check
  java-8:
    - image: circleci/openjdk:8-jdk
You can use YAML anchors to achieve a reasonable DRY approach. For instance, it might look like:
version: 2
shared: &shared
  working_directory: ~/repo
  environment:
    # Customize the JVM maximum heap limit
    JVM_OPTS: -Xmx1g
    TERM: dumb
  steps:
    - checkout
    # Run all tests
    - run: gradle check
jobs:
  java-9:
    docker:
      - image: circleci/openjdk:9-jdk
    <<: *shared
  java-8:
    docker:
      - image: circleci/openjdk:8-jdk
    <<: *shared
I'm sharing my own solution to this problem.
The basic routing uses workflows:
version: 2
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    steps:
      - ...
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    steps:
      - ...
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
Now we can combine this with the approach explained in the accepted answer.
version: 2
shared: &shared
  steps:
    - checkout
    - restore_cache:
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn dependency:go-offline
    - save_cache:
        paths:
          - ~/.m2
        key: proguard-with-maven-example-{{ checksum "pom.xml" }}
    - run: mvn package
jobs:
  jdk8:
    docker:
      - image: circleci/openjdk:8-jdk-stretch
    <<: *shared
  jdk11:
    docker:
      - image: circleci/openjdk:11-jdk-stretch
    <<: *shared
workflows:
  version: 2
  work:
    jobs:
      - jdk8
      - jdk11
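If the project can move to a version 2.1 config, a parameterized job expanded by a workflow matrix is another DRY option. This is only a sketch; the cimg/openjdk image tags are assumptions and should be adjusted to the JDK versions you actually need:
version: 2.1
jobs:
  test:
    parameters:
      jdk:
        type: string
    docker:
      # the JDK version is injected through the job parameter
      - image: cimg/openjdk:<< parameters.jdk >>
    steps:
      - checkout
      - run: gradle check
workflows:
  test-all-jdks:
    jobs:
      - test:
          matrix:
            parameters:
              jdk: ["8.0", "11.0"]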
