Elastic Beanstalk deploy Docker platform Dockerrun.aws.json error - docker

I really need help deploying to the new platform "Docker running on 64bit Amazon Linux 2/3.5.1".
I tried Dockerrun.aws.json versions 1 and 3, but I can't deploy either way.
I use GitHub Actions to build and push the images, but I couldn't deploy my code.
With Dockerrun.aws.json version 1, I got an error that the Dockerfile can't be found.
Then I used version 3, but I just got:
Instance deployment: 'Dockerrun.aws.json' in your source bundle specifies an unsupported version. Elastic Beanstalk only supports version 1 for non compose app and version 3 for compose app. The deployment failed.
I really don't know how to deploy. Please help!
GitHub Actions workflow YAML:
name: Deploy mern app to Elastic Beanstalk
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
jobs:
  build_docker_images:
    name: Build docker images
    # this job will only run if the PR has been merged
    # if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - name: checkout
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.GHCR_TOKEN }}
          submodules: true
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ghcr.io/${{ github.repository }}
          tags: latest
      - name: Build frontend image
        uses: docker/build-push-action@v2
        with:
          context: ./client
          builder: ${{ steps.buildx.outputs.name }}
          load: true
          tags: user/app/client:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
      - name: Build backend image
        uses: docker/build-push-action@v2
        with:
          context: ./server
          builder: ${{ steps.buildx.outputs.name }}
          load: true
          tags: user/app/server:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
      - name: Build nginx image
        uses: docker/build-push-action@v2
        with:
          context: ./nginx
          file: ./nginx/Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          load: true
          tags: user/app/nginx:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
      - name: Get timestamp
        id: timestamp
        run: echo "::set-output name=timestamp::$(date +'%s')"
      - name: Zip docker-compose file for sending to Beanstalk
        run: zip compose.zip docker-compose.yml
      # - name: Generate Deployment Package
      #   run: zip -r deploy.zip * "**node_modules**"
      - name: Deploy to EB
        uses: einaregilsson/beanstalk-deploy@v14
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: WyzrsTask
          environment_name: Wyzrstask-env
          version_label: ${{ steps.timestamp.outputs.timestamp }}
          region: ap-northeast-2
          deployment_package: docker-compose.yml
          wait_for_environment_recovery: 200
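(Aside: the ::set-output workflow command used in the "Get timestamp" step has since been deprecated by GitHub Actions; on current runners that step would append to $GITHUB_OUTPUT instead, roughly as follows.)
- name: Get timestamp
  id: timestamp
  # write the step output via the GITHUB_OUTPUT file instead of ::set-output
  run: echo "timestamp=$(date +'%s')" >> "$GITHUB_OUTPUT"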
docker-compose.yml:
version: "2"
services:
proxy:
restart: always
build:
dockerfile: Dockerfile
context: ./nginx
ports:
- "3050:80"
depends_on:
- react-app
- gql-server
db:
image: postgres
restart: always
environment:
POSTGRES_USER: "${POSTGRES_USER:-wyzrs}"
POSTGRES_PASSWORD: "${POSTGRES_PASSWORD:-wyzrs}"
POSTGRES_DB: "${POSTGRES_DB:-wyzrs}"
ports:
- 5432:5432
volumes:
- data-postgres:/var/lib/postgresql/data
- ./postgres/init-db:/docker-entrypoint-initdb.d
react-app:
stdin_open: true
build:
context: ./client
dockerfile: Dockerfile
restart: always
depends_on:
- gql-server
ports:
- 3000:3000
expose:
- 3000
volumes:
- /app/node_modules
- ./client:/app
environment:
- CHOKIDAR_USEPOLLING=true
- REACT_APP_GRAPHQL_SERVER_HOST=${REACT_APP_GRAPHQL_SERVER_HOST}
- REACT_APP_GRAPHQL_SUBSCRIPTIONS_HOST=${REACT_APP_GRAPHQL_SUBSCRIPTIONS_HOST}
- REACT_APP_FIREBASE_KEY=${REACT_APP_FIREBASE_KEY}
- REACT_APP_FIREBASE_DOMAIN=${REACT_APP_FIREBASE_DOMAIN}
- REACT_APP_FIREBASE_DATABASE=${REACT_APP_FIREBASE_DATABASE}
- REACT_APP_FIREBASE_PROJECT_ID=${REACT_APP_FIREBASE_PROJECT_ID}
- REACT_APP_FIREBASE_STORAGE_BUCKET=${REACT_APP_FIREBASE_STORAGE_BUCKET}
- REACT_APP_FIREBASE_SENDER_ID=${REACT_APP_FIREBASE_SENDER_ID}
- REACT_APP_FIREBASE_APP_ID=${REACT_APP_FIREBASE_APP_ID}
- REACT_APP_FIREBASE_MEASUREMENT_ID=${REACT_APP_FIREBASE_MEASUREMENT_ID}
gql-server:
build:
context: ./server
dockerfile: Dockerfile
restart: always
depends_on:
- db
ports:
- 4000:4000
expose:
- 4000
volumes:
- /app/node_modules
- ./server:/app
- ${GCP_KEY_PATH}:/tmp/keys/keyfile.json:ro
environment:
GRAPHQL_PORT: 4000
TYPEORM_CONNECTION: "postgres"
TYPEORM_HOST: "db"
TYPEORM_USERNAME: "${TYPEORM_USERNAME}"
TYPEORM_PASSWORD: "${TYPEORM_PASSWORD}"
TYPEORM_DATABASE: "${TYPEORM_DATABASE}"
TYPEORM_PORT: 5432
TYPEORM_SEEDING_FACTORIES: "${TYPEORM_SEEDING_FACTORIES}"
TYPEORM_SEEDING_SEEDS: "${TYPEORM_SEEDING_SEEDS}"
TYPEORM_ENTITIES: "${TYPEORM_ENTITIES}"
TYPEORM_MIGRATIONS: "${TYPEORM_MIGRATIONS}"
GOOGLE_APPLICATION_CREDENTIALS: /tmp/keys/keyfile.json
adminer:
image: adminer
ports:
- ${ADMINER_PORT:-4001}:8080
volumes:
data-postgres:
external: true
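(Note: data-postgres is declared with external: true, so Compose will not create it; the volume must exist before the stack starts, e.g.:)
docker volume create data-postgres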
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 3,
  "containerDefinitions": [
    {
      "name": "react-app",
      "image": "wyzrs/wyzrs-client",
      "hostname": "client",
      "essential": false,
      "memory": 256
    },
    {
      "name": "gql-server",
      "image": "wyzrs/wyzrs-server",
      "hostname": "api",
      "essential": false,
      "memory": 7168
    },
    {
      "name": "adminer",
      "image": "adminer",
      "hostname": "db",
      "essential": false,
      "memory": 256
    },
    {
      "name": "nginx",
      "image": "wyzrs/wyzrs-nginx",
      "hostname": "nginx",
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": ["react-app", "gql-server", "adminer"],
      "memory": 256
    }
  ]
}
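(Aside: per the quoted error, the non-compose Docker platform expects version 1, which describes a single container rather than containerDefinitions. A minimal version 1 Dockerrun.aws.json is sketched below for comparison; the image name and port are placeholders borrowed from the nginx container above, not a verified fix for this environment.)
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "wyzrs/wyzrs-nginx:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80
    }
  ]
}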

Related

GitHub Actions - share services across jobs

I would like to know how I can share service containers between jobs in GitHub Actions. With this workflow, the containers currently get destroyed after the build step.
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
env:
  RAILS_ENV: test
  RACK_ENV: test
  RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
  POSTGRES_PASSWORD: postgres15
  POSTGRES_USERNAME: postgres
  POSTGRES_HOST: localhost
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Initialize Ruby
        uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Setup Rails
        run: bin/setup
    services:
      postgres:
        image: postgres:15.1-alpine
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
          POSTGRES_USER: ${{ env.POSTGRES_USERNAME }}
        # needed because the postgres container does not provide a healthcheck
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
  test:
    runs-on: ubuntu-latest
    needs: [ build ]
    steps:
      - name: Lint Ruby
        run: bundle exec rubocop
      - name: Run tests
        run: bin/rails test:all
  coverage:
    runs-on: ubuntu-latest
    needs: [ build, test ]
    steps:
      - uses: joshmfrankel/simplecov-check-action@main
        with:
          minimum_suite_coverage: 98
          minimum_file_coverage: 90
          github_token: ${{ secrets.GITHUB_TOKEN }}
          check_job_name: coverage
As per the documentation, you cannot share service containers across jobs.
https://docs.github.com/en/actions/using-containerized-services/about-service-containers
You can configure service containers for each job in a workflow. GitHub creates a fresh Docker container for each service configured in the workflow, and destroys the service container when the job completes. Steps in a job can communicate with all service containers that are part of the same job. However, you cannot create and use service containers inside a composite action.
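The practical workaround is therefore to declare the same services block in every job that needs the containers (or merge the jobs into one). A sketch of the test job from the workflow above with its own postgres service; the checkout/setup steps are assumed to match the build job:
  test:
    runs-on: ubuntu-latest
    needs: [ build ]
    services:
      postgres:
        image: postgres:15.1-alpine
        ports:
          - 5432:5432
        env:
          POSTGRES_PASSWORD: ${{ env.POSTGRES_PASSWORD }}
          POSTGRES_USER: ${{ env.POSTGRES_USERNAME }}
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Run tests
        run: bin/rails test:all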

GitLab Runner gets stuck on docker login

I installed GitLab Runner via Helm chart on my Kubernetes cluster.
While installing via Helm, I used the values.yaml config below.
But my runner gets stuck every time at the docker login command;
without docker login everything works fine.
I have no idea what is wrong :(
Any help appreciated!
Error: write tcp 10.244.0.44:50882->188.72.88.34:443: use of closed network connection
.gitlab-ci.yml file:
build docker image:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://localhost:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - mkdir -p $HOME/.docker
    - echo passwd | docker login -u user https://registry.labs.com --password-stdin
  script:
    - docker images
    - docker ps
    - docker pull registry.labs.com/jappweek:a_zh
    - docker build -t "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8 .
    - docker push "$CI_REGISTRY"/"$CI_REGISTRY_IMAGE":1.8
  tags:
    - k8s
values.yaml file
image:
  registry: registry.gitlab.com
  #image: gitlab/gitlab-runner:v13.0.0
  image: gitlab-org/gitlab-runner
  # tag: alpine-v11.6.0
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.somebars.com
runnerRegistrationToken: "GR1348941a7jJ4WF7999yxsya9Arsd929g"
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
sessionServer:
  enabled: false
## For RBAC support:
rbac:
  create: true
  rules:
    - resources: ["configmaps", "pods", "pods/attach", "secrets", "services"]
      verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
    - apiGroups: [""]
      resources: ["pods/exec"]
      verbs: ["create", "patch", "delete"]
  clusterWideAccess: false
  podSecurityPolicy:
    enabled: false
    resourceNames:
      - gitlab-runner
metrics:
  enabled: false
  portName: metrics
  port: 9252
  serviceMonitor:
    enabled: false
service:
  enabled: false
  type: ClusterIP
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:16.04"
  privileged: true
  cache: {}
  builds: {}
  services: {}
  helpers: {}
securityContext:
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  privileged: false
  capabilities:
    drop: ["ALL"]
podSecurityContext:
  runAsUser: 100
  # runAsGroup: 65533
  fsGroup: 65533
resources: {}
affinity: {}
nodeSelector: {}
tolerations: []
hostAliases: []
podAnnotations: {}
podLabels: {}
priorityClassName: ""
secrets: []
configMaps: {}
volumeMounts: []
volumes: []
I bypassed docker login by importing the $HOME/.docker/config.json file, which stores the auth token, from my host machine into GitLab CI:
before_script:
  - mkdir -p $HOME/.docker
  - echo $DOCKER_AUTH_CONFIG > $HOME/.docker/config.json
$DOCKER_AUTH_CONFIG is a CI/CD variable holding the contents of $HOME/.docker/config.json.
That's all; no docker login required.
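(For context, a Docker config.json of this kind typically has the shape below; the registry host is taken from the job above, and the auth value is a placeholder base64 encoding of user:passwd.)
{
  "auths": {
    "registry.labs.com": {
      "auth": "dXNlcjpwYXNzd2Q="
    }
  }
}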

Cache created in `ubuntu-latest` cannot be restored in Docker container

Here's my workflow file:
name: Build Pipeline
on: push
env:
  NODE_VERSION: 11
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ env.NODE_VERSION }}
      - id: cache-node-modules
        uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/node_modules
          key: node_modules-${{ hashFiles('package-lock.json') }}
          restore-keys: node_modules
      - uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/build
          key: build-${{ github.sha }}
          restore-keys: build
      - if: steps.cache-node-modules.outputs.cache-hit != 'true'
        run: npm install
      - run: npm run build -- --incremental
  npm-scripts:
    needs: [build]
    runs-on: ubuntu-latest
    strategy:
      matrix:
        script: ['lint:pipeline', 'lint:exports', 'i18n:pipeline', 'schema:validate']
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ env.NODE_VERSION }}
      - id: cache-node-modules
        uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/node_modules
          key: node_modules-${{ hashFiles('package-lock.json') }}
      - if: steps.cache-node-modules.outputs.cache-hit != 'true'
        run: |
          echo 'Expected to have a cache hit for "node_modules", since this job runs after the "build" job, which caches the latest version of "node_modules". Not having a cache hit means probably there is a bug with the workflow file.'
          exit 1
      - id: cache-build-output
        uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/build
          key: build-${{ github.sha }}
      - if: steps.cache-build-output.outputs.cache-hit != 'true'
        run: |
          echo 'Expected to have a cache hit for the build output folder, since this job runs after the "build" job, which caches the latest version of the "build" folder. Not having a cache hit means probably there is a bug with the workflow file.'
          exit 1
      - run: npm run ${{ matrix.script }}
  jest-tests:
    needs: [build]
    runs-on: ubuntu-latest
    container: node:11
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_DB: localhost
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
      redis:
        image: redis
    steps:
      - uses: actions/checkout@v2
      - id: cache-node-modules
        uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/node_modules
          key: node_modules-${{ hashFiles('package-lock.json') }}
      - if: steps.cache-node-modules.outputs.cache-hit != 'true'
        run: |
          echo 'Expected to have a cache hit for "node_modules", since this job runs after the "build" job, which caches the latest version of "node_modules". Not having a cache hit means probably there is a bug with the workflow file.'
          exit 1
      - id: cache-build-output
        uses: actions/cache@v2
        with:
          path: ${{ github.workspace }}/build
          key: build-${{ github.sha }}
      - if: steps.cache-build-output.outputs.cache-hit != 'true'
        run: |
          echo 'Expected to have a cache hit for the build output folder, since this job runs after the "build" job, which caches the latest version of the "build" folder. Not having a cache hit means probably there is a bug with the workflow file.'
          exit 1
      - run: echo
The node_modules and build folders are cached in the build job. These caches can be restored without a problem in the npm-scripts job. However, they cannot be restored in the jest-tests job, which gets a Cache not found for input keys error.
I don't know how this is possible, since the exact same cache keys can be restored without a problem in all of the npm-scripts jobs.
When I remove the:
container: node:11
services:
  postgres:
    image: postgres
    env:
      POSTGRES_DB: localhost
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
  redis:
    image: redis
part (and hence let the job run on ubuntu-latest instead of in a Docker container), the cache can be restored properly again. So I'm not sure what's going on here.
It seems that the actions/cache action silently fails if there is no zstd binary available in the PATH of the container you are running in. This may be the case for your Node container.
I found this out by setting ACTIONS_STEP_DEBUG to true in the repository secrets. The debug log shows that the action tries to run zstd and fails, but this is reported as a cache miss instead. Once I figured that out, I found that there is an open bug report for it: https://github.com/actions/cache/issues/580
It is a weird bug. The workaround that I found is not running the jest-tests job in a container; that is, running the jest-tests job on a regular ubuntu-latest machine and mapping the service container ports like so:
jest-tests:
  needs: [build]
  runs-on: ubuntu-latest
  services:
    postgres:
      image: postgres
      ports:
        - 5432:5432
      env:
        POSTGRES_DB: localhost
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: password
    redis:
      image: redis
      ports:
        - 6379:6379
I'm using a custom image, and just adding the zstd package to the image made actions/cache work.
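If you control the image, the fix can be a one-line addition to the Dockerfile. A minimal sketch, assuming a Debian-based base image such as node:11:
FROM node:11
# zstd is needed by actions/cache to compress and decompress its archives
RUN apt-get update && apt-get install -y zstd && rm -rf /var/lib/apt/lists/*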

How to use parallel_tests in github actions

I'm trying to use parallel_tests in my GitHub Actions workflow to run my test suite, but I was not able to find a proper solution.
The official docs have an example, but it is for GitLab:
https://github.com/grosser/parallel_tests/wiki/Distributed-Parallel-Tests-on-CI-systems
Any help would be appreciated, thanks!
Here's a sample workflow you can drop into .github/workflows/tests.yml:
name: Rails Tests
on: push
env:
  PGHOST: localhost
  PGUSER: postgres
  RAILS_ENV: test
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: true
      matrix:
        # Set N number of parallel jobs you want to run
        # Remember to update ci_node_index below to 0..N-1
        ci_node_total: [6]
        # set N-1 indexes for parallel jobs
        # When you run 2 parallel jobs then first job will have index 0, the second job will have index 1 etc
        ci_node_index: [0, 1, 2, 3, 4, 5]
    services:
      postgres:
        image: postgres:11.5
        ports: ["5432:5432"]
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis:5
        ports: ["6379:6379"]
    steps:
      - uses: actions/checkout@v1
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true
      - name: Set node version (from .tool-versions)
        run: echo "NODE_VERSION=$(cat .tool-versions | grep nodejs | sed 's/^nodejs //')" >> $GITHUB_ENV
      - uses: actions/setup-node@v2
        with:
          node-version: ${{ env.NODE_VERSION }}
      - uses: bahmutov/npm-install@v1
      - name: Install PostgreSQL client
        run: |
          sudo apt-get -yqq install libpq-dev postgresql-client
      - name: Test Prep
        env:
          CI_NODE_INDEX: ${{ matrix.ci_node_index }}
        run: |
          bundle exec rake parallel:create["1"] parallel:load_schema["1"]
      - name: Run tests
        env:
          RAILS_MASTER_KEY: ${{ secrets.RAILS_MASTER_KEY }}
          CI_NODE_TOTAL: ${{ matrix.ci_node_total }}
          CI_NODE_INDEX: ${{ matrix.ci_node_index }}
        run: |
          bundle exec parallel_test spec/ -n $CI_NODE_TOTAL --only-group $CI_NODE_INDEX --type rspec
This is what I used to solve it. Note: some other parts are omitted, such as the setup for Ruby, PostgreSQL, SQLite, etc.
name: "Lint and Test"
jobs:
test:
runs-on: ubuntu-latest
timeout-minutes: 30
services:
redis:
image: redis
ports: ["6379:6379"]
postgres:
ports: ["5432:5432"]
steps:
//omitted setups
- name: Setup Parallel Database
env:
RAILS_ENV: test
PGHOST: localhost
PGUSER: postgres
run: |
cp config/database.yml.example config/database.yml
bundle exec rake parallel:create
bundle exec rake parallel:rake[db:schema:load] || true
- name: Build and test with rspec
env:
RAILS_ENV: test
PGHOST: localhost
PGUSER: postgres
APP_REDIS_URL: redis://localhost:6379
MINIMUM_COVERAGE: 80
run: |
bundle exec parallel_rspec --verbose spec
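With either workflow, parallel_tests conventionally gives each worker its own database by appending TEST_ENV_NUMBER to the database name in config/database.yml. A minimal sketch; the adapter settings and database name are placeholders:
test:
  adapter: postgresql
  host: localhost
  username: postgres
  # worker 1 uses "myapp_test", worker 2 uses "myapp_test2", and so on
  database: myapp_test<%= ENV['TEST_ENV_NUMBER'] %>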

Jenkins installation automation

Old Question
Is it possible to automate a Jenkins installation (Jenkins binaries, plugins, credentials) using a configuration management tool like Ansible?
Edited
After asking this question, I learned and found many ways to automate a Jenkins installation. I found docker-compose an interesting way to achieve one-click Jenkins installation automation. So my question is: is there a better way to automate a Jenkins installation than the way I am doing it, and is there any risk in how I am handling this automation?
I have taken advantage of the Jenkins Docker image and done the automation with docker-compose.
Dockerfile
FROM jenkinsci/blueocean
RUN jenkins-plugin-cli --plugins kubernetes workflow-aggregator git configuration-as-code blueocean matrix-auth
docker-compose.yaml
version: '3.7'
services:
  dind:
    image: docker:dind
    privileged: true
    networks:
      jenkins:
        aliases:
          - docker
    expose:
      - "2376"
    environment:
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
  jcac:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./jcac.yml
        target: /usr/share/nginx/html/jcac.yml
    networks:
      - jenkins
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    environment:
      - DOCKER_HOST=tcp://docker:2376
      - DOCKER_CERT_PATH=/certs/client
      - DOCKER_TLS_VERIFY=1
      - JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
      - CASC_JENKINS_CONFIG=http://jcac/jcac.yml
      - GITHUB_ACCESS_TOKEN=${GITHUB_ACCESS_TOKEN:-fake}
      - GITHUB_USERNAME=${GITHUB_USERNAME:-fake}
    volumes:
      - type: volume
        source: jenkins-home
        target: /var/jenkins_home
      - type: volume
        source: jenkins-docker-certs
        target: /certs/client
        read_only: true
    networks:
      - jenkins
volumes:
  jenkins-home:
  jenkins-docker-certs:
networks:
  jenkins:
jcac.yml
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              id: "github"
              password: ${GITHUB_PASSWORD:-fake}
              scope: GLOBAL
              username: ${GITHUB_USERNAME:-fake}
          - usernamePassword:
              id: "slave"
              password: ${SSH_PASSWORD:-fake}
              username: ${SSH_USERNAME:-fake}
jenkins:
  globalNodeProperties:
    - envVars:
        env:
          - key: "BRANCH"
            value: "hello"
  systemMessage: "Welcome to (one click) Jenkins Automation!"
  agentProtocols:
    - "JNLP4-connect"
    - "Ping"
  crumbIssuer:
    standard:
      excludeClientIPFromCrumb: true
  disableRememberMe: false
  markupFormatter: "plainText"
  mode: NORMAL
  myViewsTabBar: "standard"
  numExecutors: 4
  # nodes:
  #   - permanent:
  #       labelString: "slave01"
  #       launcher:
  #         ssh:
  #           credentialsId: "slave"
  #           host: "worker"
  #           port: 22
  #           sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
  #       name: "slave01"
  #       nodeDescription: "SSH Slave 01"
  #       numExecutors: 3
  #       remoteFS: "/home/jenkins/workspace"
  #       retentionStrategy: "always"
  securityRealm:
    local:
      allowsSignup: false
      enableCaptcha: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD:-admin123}"
        - id: "user"
          password: "${DEFAULTUSER_PASSWORD:-user123}"
  authorizationStrategy:
    globalMatrix:
      permissions:
        - "Agent/Build:user"
        - "Job/Build:user"
        - "Job/Cancel:user"
        - "Job/Read:user"
        - "Overall/Read:user"
        - "View/Read:user"
        - "Overall/Read:anonymous"
        - "Overall/Administer:admin"
        - "Overall/Administer:root"
unclassified:
  globalLibraries:
    libraries:
      - defaultVersion: "master"
        implicit: false
        name: "jenkins-shared-library"
        retriever:
          modernSCM:
            scm:
              git:
                remote: "https://github.com/samitkumarpatel/jenkins-shared-libs.git"
                traits:
                  - "gitBranchDiscovery"
The commands to start and stop Jenkins are:
# start Jenkins
docker-compose up -d
# stop Jenkins
docker-compose down
Sure it is :) For Ansible, you can always check Ansible Galaxy whenever you want to automate installing something. Here is the most popular role for installing Jenkins, and here is its GitHub repo.
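Assuming the role referred to is geerlingguy.jenkins (the most widely used Jenkins role on Galaxy), a minimal playbook might look like the sketch below; the host group, port, and plugin list are illustrative:
# First install the role: ansible-galaxy install geerlingguy.jenkins
- hosts: jenkins
  become: true
  vars:
    jenkins_http_port: 8080
    jenkins_plugins:
      - git
      - configuration-as-code
  roles:
    - geerlingguy.jenkins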
