I'm trying to figure out how to set up continuous integration and deployment using Bitbucket, drone.io, hub.docker.com, and a Swarm cluster (AWS EC2):
1. I submit code to Bitbucket.
2. Bitbucket's webhook triggers drone.io, which builds the code and runs tests.
3. On every "green" commit, a Docker image is pushed to hub.docker.com and deployed to the integration environment (the Swarm cluster) using the "latest" tag.
I can't figure out how to set up step 3 ...
As an example, add to your .drone.yml:
publish:
  docker:
    username: octocat
    password: password
    email: octocat@github.com
    repo: octocat/hello-world
    tag: latest
    when:
      success: true

deploy:
  webhook:
    urls:
      - https://your.webhook/...
    header:
      Authorization: pa55word
      X-Docker-Image: name_of_your_image:latest
    when:
      success: true
This would perform the publish step using the docker plugin, followed by hitting a URL endpoint to deploy the published image to your integration environment using the webhook plugin.
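Note that the webhook plugin only notifies your environment; something on the Swarm side still has to pull the new image and redeploy the service. As a minimal sketch, the webhook receiver on the Swarm manager could run something like the following (the service name hello-world is an assumption, not part of the drone config above):

#!/bin/sh
# Hypothetical handler executed by the webhook receiver on the Swarm manager.
# "hello-world" is an assumed service name; the image matches the publish
# step in the .drone.yml above.
docker pull octocat/hello-world:latest
docker service update --image octocat/hello-world:latest hello-world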
- task: Docker@2
  displayName: Build an image
  inputs:
    command: build
    repository: weather-update-project
    dockerfile: '**/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'
    tags: 'latest'
- task: ECRPushImage@1
  inputs:
    awsCredentials: 'weather'
    regionName: us-west-2
    imageSource: 'imagename'
    sourceImageName: 'weather-update-project'
    sourceImageTag: 'latest'
    pushTag: 'latest'
    repositoryName: 'weather-update-project'
I'm building an image and then trying to push that image to ECR. When it gets to the ECR push image task, it tries to push a few times and then gives me the error "The process '/usr/bin/docker' failed with exit code 1", and that's it. There's no other information in my logs about the error, like there normally is. What could be happening? My ECR is public and all of my credentials are correct. The YAML above shows my Docker build and ECRPushImage tasks in Azure DevOps.
The repository that contains my Dockerfile is named 'weather-update-project', and my ECR repository has the same name.
Can you validate which agent this is running on and whether Docker is present there?
Is the image being created properly?
When the ECRPushImage task starts executing, it should show at least a configuration log like the one below; if it doesn't, the issue is related to Docker on that agent.
Configuring credentials for task
...configuring AWS credentials from service endpoint 'xxxxxxxxxxxx'
...endpoint defines standard access/secret key credentials
Configuring region for task
...configured to use region us-east-1, defined in tas
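To narrow it down further, you could reproduce the push by hand on the agent; if a manual push also fails, the problem is Docker or credentials rather than the task itself. A sketch, assuming AWS CLI v2 is available on the agent and with <registry-alias> as a placeholder for your public registry alias. Note that authentication for public ECR always goes through us-east-1, regardless of the region configured in the task:

# Log in to the public ECR registry (public ECR auth is always us-east-1).
aws ecr-public get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin public.ecr.aws

# Tag and push the locally built image; <registry-alias> is a placeholder.
docker tag weather-update-project:latest \
  public.ecr.aws/<registry-alias>/weather-update-project:latest
docker push public.ecr.aws/<registry-alias>/weather-update-project:latest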
I use CircleCI, and the pipeline is as follows:
1. build
2. test
3. build app & nginx Docker images and push them to a GitLab registry
4. deploy the Docker stack to the development server (currently the Swarm manager)
I just pushed my develop branch to my repository and, after a success message from CircleCI, was met with a "Symfony4 new Controller page" on the development server.
I logged via SSH in it and executed (with output for the application service):
docker stack ps my-development-stack --format "{{.Name}} {{.Image}} {{.CurrentState}}"
my-stack_app.1 gitlab-image:latest-develop Running 33 minutes ago
On my GitLab repository's registry, the application image was "Last Updated" 41 minutes ago. The service has apparently been refreshed with an image pulled before the latest version was pushed.
Is this a common issue/error?
How could (or should) I fix this timing issue?
Can CircleCI help with this?
Perhaps it is best (though not ideal) to introduce a delay between build and deploy; you can refer to this example: CircleCI Delay Between Jobs.
I found a workaround using a CircleCI scheduled workflow triggered by a cron expression. I scheduled a nightly build workflow which runs every day at midnight.
A sample of my config.yml file:
# Beginning of the config.yml
# ...
workflows:
  version: 2
  # Push workflow
  # ...
  # Nightly build workflow
  nightly-dev-deploy:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy-dev:
          requires:
            - test
Read more about scheduled workflows, with a nightly build example, in the official CircleCI documentation.
This looks more like a workaround to me, though. I'd be glad to hear how you avoid this issue; that could lead to a better answer to the question.
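One way to avoid the race entirely, rather than working around it with a schedule, is to deploy by digest instead of by the mutable latest-develop tag, so the service runs exactly the image the pipeline just pushed. A sketch of such a deploy step, reusing the (sanitized) image and service names from the question's output:

# Pull the tag that was just pushed, then resolve its immutable digest.
docker pull gitlab-image:latest-develop
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' gitlab-image:latest-develop)

# Point the running service at the pinned digest; --with-registry-auth
# forwards registry credentials to the Swarm nodes performing the pull.
docker service update --with-registry-auth --image "$DIGEST" my-stack_app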
I'm using a Docker registry on Artifactory. I'm able to pull/push images using docker commands. Now I'm trying to push an image using a Jenkins pipeline.
The image is called registry-url/docker/image:latest.
I have a docker repository on Artifactory which is called docker. (I'm able to pull and push to this repo using docker commands).
The following stage shows my Artifactory configuration:
...
stage('Deploy Docker image') {
    steps {
        script {
            def server = Artifactory.server 'xxx'
            def rtDocker = Artifactory.docker server: server
            def buildInfo = rtDocker.push('registry-url/image:latest', 'docker')
            // also tried:
            // def buildInfo = rtDocker.push('registry-url/docker/image:latest', 'docker')
            // the above results in registry/docker/docker/image..
            server.publishBuildInfo buildInfo
        }
    }
}
...
When I use different paths, I face the manifest.json error, which is probably normal.
I'm able to download the manifest.json manually on: https://registry-url/artifactory/docker/image/latest/manifest.json.
I'm using a pretty new version of Docker on Jenkins:
Docker version 18.01.0-ce, build 03596f51b1
So far so good. But when I run the pipeline, I receive the following error in Jenkins (it takes 50 seconds):
Pushing image: registry-url/image:latest
...
com.github.dockerjava.api.exception.DockerClientException: Could not push image: unknown: Not Found
at com.github.dockerjava.core.command.PushImageResultCallback.awaitSuccess(PushImageResultCallback.java:49)
at org.jfrog.hudson.pipeline.docker.utils.DockerUtils.pushImage(DockerUtils.java:60)
at org.jfrog.hudson.pipeline.docker.utils.DockerAgentUtils$3.call(DockerAgentUtils.java:213)
at org.jfrog.hudson.pipeline.docker.utils.DockerAgentUtils$3.call(DockerAgentUtils.java:205)
at hudson.remoting.UserRequest.perform(UserRequest.java:153)
at hudson.remoting.UserRequest.perform(UserRequest.java:50)
...
In Artifactory logs I see:
2018-04-25 14:24:26,663 [http-nio-8081-exec-xx] [ERROR] (o.a.a.d.r.DockerResource:153) - Unsupported docker v2 repository request for 'image'
2018-04-25 14:24:46,684 [http-nio-8081-exec-xx] [ERROR] (o.a.a.d.r.DockerResource:153) - Unsupported docker v2 repository request for 'image'
2018-04-25 14:24:46,689 [http-nio-8081-exec-xx] [ERROR] (o.a.a.d.r.DockerResource:153) - Unsupported docker v2 repository request for 'image'
2018-04-25 14:24:46,702 [http-nio-8081-exec-xx] [ERROR] (o.a.a.d.r.DockerResource:153) - Unsupported docker v2 repository request for 'image'
What am I missing or doing wrong?
EDIT:
Based on this issue I went back to my initial idea:
def buildInfo = rtDocker.push('registry-url/docker/image:latest', 'docker')
I tried the build again. Error:
Could not find manifest.json in Artifactory in the following path: https://registry-url/artifactory/docker/docker/image/latest/manifest.json
There is 'docker' twice in the path, and it seems not to work. BUT when I check in Artifactory, the image is there... I can also pull the image. Everything seems fine, yet the Jenkins build still fails.
Artifactory Plugin: 2.15.1
Artifactory Version: 5.10.3
Is this really a bug which will be fixed soon?
Artifactory can be configured as a Docker registry either with or without a reverse proxy.
It looks like your Artifactory is not configured using a reverse-proxy (proxy-less configuration). You can read more about the configuration options here.
Version 2.16.1 of the Jenkins Artifactory Plugin added support for proxy-less configuration. Upgrading your Artifactory Plugin should resolve your issue.
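Before upgrading, you can verify the proxy-less endpoint directly; Artifactory exposes the Docker v2 API under /artifactory/api/docker/<repo-key>/..., so a tag listing that returns 200 confirms the registry path itself works. A sketch, with placeholder credentials and host:

# Query the Docker v2 tag list for image "image" in the "docker" repository.
curl -u <user>:<password> \
  "https://registry-url/artifactory/api/docker/docker/v2/image/tags/list"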
Try following this example, where a Jenkins pipeline pulls/pushes a Docker image from/to Artifactory: https://github.com/jfrogtraining/kubernetes_example/blob/master/docker-app/Jenkinsfile#L43
I installed Blue Ocean from the Docker image (docker pull jenkinsci/blueocean). I wanted to include a Cloud Foundry deployment step (sh 'cf push') in my pipeline and got stuck with the error:
script.sh: line 1: cf: not found
I knew what was happening: since there is no compatible CF CLI plugin, the cf command in the script step does not work. I tried different things:
In my Jenkinsfile, I tried using the Cloud Foundry plugin (CloudFoundryPushPublisher), which is supported in non-pipeline builds. That didn't help.
step([$class: 'com.hpe.cloudfoundryjenkins.CloudFoundryPushPublisher',
      target: 'https://api.ng.bluemix.net',
      organization: 'xxxx',
      cloudSpace: 'xxxxx',
      credentialsId: 'xxxxxx',
      selfSigned: true,
      resetIfExists: true]);
That failed with an Invalid Argument exception.
My question is: I heard CloudBees has a commercial version that supports the CF CLI, but that ability is missing from Blue Ocean. How can I push deployments to Cloud Foundry using a Pipeline job?
I'm not sure whether you've already fixed the issue, but I installed the cf CLI manually on the Jenkins machine and used cf push as a shell step, like:
sh 'cf login -u xxx -p xxx -s space -o org'
sh 'cf push appname ...'
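For reference, getting the binary into the jenkinsci/blueocean container can be done in a couple of lines; a sketch assuming the v6 Linux tarball from Cloud Foundry's stable release feed, whose top-level entry is the cf binary:

# Download the cf CLI and unpack the "cf" binary straight onto the PATH,
# then verify it is callable from shell steps.
curl -L "https://packages.cloudfoundry.org/stable?release=linux64-binary&source=github" \
  | tar -zx -C /usr/local/bin cf
cf --version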
I am migrating from Jenkins 1.x to Jenkins 2. I want to build and deploy an application using a Jenkinsfile.
I am able to build a Gradle application, but I am confused about how to deploy the application via AWS CodeDeploy using a Jenkinsfile.
Here is my Jenkinsfile:
node {
    // Mark the code checkout 'stage'....
    stage 'Checkout'

    // Get some code from a GitHub repository
    git branch: 'master',
        credentialsId: 'xxxxxxxx-xxxxx-xxxxx-xxxxx-xxxxxxxx',
        url: 'https://github.com/somerepo/someapplication.git'

    // Mark the code build 'stage'....
    stage 'Build'

    // Run the gradle build
    sh '/usr/share/gradle/bin/gradle build -x test -q buildZip -Pmule_env=aws-dev -Pmule_config=server'

    stage 'Deploy via Codedeploy'
    // Run using codedeploy agent
}
I have searched many tutorials, but they use the AWS CodeDeploy plugin instead.
Could you help me deploy the application via AWS CodeDeploy using a Jenkinsfile?
Thank you.
Alternatively, you can use AWS CLI commands to do the code deployment. This involves two steps.
Step 1 - Push the deployment bundle to S3 bucket. See the following command:
aws --profile {profile_name} deploy push --application-name {code_deploy_application_name} --s3-location s3://<s3_file_path>.zip
Where:
profile_name = name of AWS profile (if using multiple accounts)
code_deploy_application_name = name of AWS code deployment application.
s3_file_path = S3 file path for deployment bundle zip file.
Step 2 - Initiate code deployment
The second command is used to trigger the code deployment. See the following command:
aws --profile {profile} deploy create-deployment --application-name {code_deploy_application_name} --deployment-group-name {code_deploy_group_name} --s3-location bucket={s3_bucket_name},bundleType=zip,key={s3_bucket_zip_file_path}
Where:
profile = name of your AWS profile (if using multiple accounts)
code_deploy_application_name = same as step 1.
code_deploy_group_name = name of code deployment group. This is associated with your code deploy application.
s3_bucket_name = name of S3 bucket which will store your deployment artefacts. (Make sure that your role that performs code deploy has permissions to s3 bucket.)
s3_bucket_zip_file_path = same as step 1.
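Putting both steps together, here is a sketch of a script that could back the empty 'Deploy via Codedeploy' stage in the Jenkinsfile above. Every name is a placeholder, and omitting --profile assumes the Jenkins host's instance role provides credentials:

#!/bin/bash
set -euo pipefail

APP_NAME="someapplication"        # CodeDeploy application name (placeholder)
GROUP_NAME="someapplication-dev"  # CodeDeploy deployment group (placeholder)
BUCKET="my-deploy-artifacts"      # S3 bucket for deployment bundles (placeholder)
KEY="someapplication/build-${BUILD_NUMBER:-manual}.zip"  # BUILD_NUMBER is set by Jenkins

# Step 1: bundle the current directory and push it to S3.
aws deploy push \
  --application-name "$APP_NAME" \
  --s3-location "s3://$BUCKET/$KEY"

# Step 2: trigger a deployment of that bundle.
aws deploy create-deployment \
  --application-name "$APP_NAME" \
  --deployment-group-name "$GROUP_NAME" \
  --s3-location "bucket=$BUCKET,bundleType=zip,key=$KEY"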