How to deploy an image to EC2 using Docker and CircleCI (CI/CD - docker)

I am trying to implement CI/CD. So far what I have is CI using CircleCI and the
circleci/aws-ecr@6.15.0
orb, so when I push to master the image is created and pushed to ECR. That works perfectly, but what I want to implement now is the deployment.
I think I am not searching well on Google, but I cannot find any tutorial or explanation of how to do it.
I read that the orb
circleci/aws-ecs@1.4.0
seems to do the job, but I really don't know how to proceed.
This is my YAML file:
version: 2.1
orbs:
  node: circleci/node@4.1.0
  aws-ecr: circleci/aws-ecr@6.15.0
  aws-ecs: circleci/aws-ecs@1.4.0
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          repo: node.js-api
          tag: "latest,v0.1.${CIRCLE_BUILD_NUM}"
          dockerfile: "Dockerfile"
          path: .
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,tag=${CIRCLE_SHA1}"
Error:
An error occurred (ClientException) when calling the
DescribeTaskDefinition operation: Unable to describe task definition.
What I want to achieve is an automated deploy. I chose CircleCI because I thought it would also be a good option with my Bitbucket repo, but if there is another way to achieve this, I am totally open to suggestions.
Thanks!!

Related

With CircleCI, is it possible to share an executor between two jobs?

I am rewriting my CircleCI config. Everything was in a single job and working well, but for good reasons I want more structure.
Now I have two jobs, build and test, and I want the second job to reuse the machine exactly where the build job stopped.
I will later have a third and a fourth job.
Ideally there would be a built-in CircleCI option: a single line saying I want to reuse the previous machine/executor.
Other options are workspaces, which save data on CircleCI's machines, or building and deploying my own Docker image that represents the machine's state after the build job.
What is the easiest way to achieve what I want to do?
Currently, I have basically in my yaml:
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - node/install:
          install-yarn: true
          node-version: '16.13'
      - other-long-commands
  test:
    # NOT GOOD: need an executor
    steps:
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE
workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires:
            - build
This can't be done. Workspaces are the solution instead.
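A minimal sketch of the workspace approach (the persisted paths here are an assumption; adjust them to whatever the test job actually needs):

```yaml
jobs:
  build:
    docker:
      - image: cypress/base:14.16.0
    steps:
      - checkout
      - run: yarn install
      # Save the checked-out code and installed dependencies for downstream jobs.
      - persist_to_workspace:
          root: .
          paths:
            - .
  test:
    docker:
      - image: cypress/base:14.16.0  # each job still needs its own executor
    steps:
      # Restore everything the build job persisted.
      - attach_workspace:
          at: .
      - run:
          name: 'test'
          command: 'npx cypress run'
          environment:
            TEST_SUITE: SMOKE
```

Note that workspaces copy files forward, not the full machine state: anything outside the persisted paths (globally installed packages, running services) has to be recreated in the downstream job.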
My follow-up would be: why do you need two jobs? Depending on your use case, pulling steps out into reusable commands might help, or even an orb.

Retrieve image digest value from Docker task inside azure pipelines

I am using Azure Pipelines (with the YAML format) to build a Dockerfile and push the image to Azure Container Registry.
Here is part of the YAML definition:
- task: Docker@2
  displayName: Build Dockerfile
  inputs:
    command: 'build'
    containerRegistry: 'containerRegistry'
    repository: '$(imageRepository)'
    Dockerfile: 'src/Api/Dockerfile'
    buildContext: '.'
    tags: '$(imageTag)'
- task: Docker@2
  displayName: Push image
  inputs:
    command: push
    containerRegistry: 'containerRegistry'
    repository: '$(imageRepository)'
    tags: '$(imageTag)'
So my question is: is there a way to retrieve the digest value from the docker push task, so I can use it in the next tasks?
It seems that in older versions of the Docker task this was possible via a task parameter, imageDigestFile; I am referring to Docker@0.
Unfortunately that now looks deprecated and I can't find a way to do it using the latest version.
Thanks!
Best regards,
Nikolay
This is a known issue with the latest version of the Docker push task:
How to use output of DockerV2 task
That's because the product team tried to limit the number of inputs to simplify the task in DockerV2, so support for an image digest file was not provided. The image digest is, however, written to an output called DockerOutput.
The source code here.
The product team will work with the people involved in the design of this task to see how to support this.
To work around this, we can either use an older version of the Docker task or parse the DockerOutput output and extract the image digest.
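A sketch of the parsing workaround (this assumes DockerOutput resolves to the path of a file containing the push log; the task name pushImage and the variable imageDigest are illustrative):

```yaml
- task: Docker@2
  name: pushImage          # a name is required to reference the task's outputs
  displayName: Push image
  inputs:
    command: push
    containerRegistry: 'containerRegistry'
    repository: '$(imageRepository)'
    tags: '$(imageTag)'
- script: |
    # The push log contains a line like "latest: digest: sha256:<hash> size: ...";
    # grab the first sha256 digest and expose it as a pipeline variable.
    digest=$(grep -o 'sha256:[0-9a-f]*' "$(pushImage.DockerOutput)" | head -n 1)
    echo "##vso[task.setvariable variable=imageDigest]$digest"
  displayName: Extract image digest
```

Later tasks in the same job can then read the digest as $(imageDigest).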

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct and through testing with invalid Dockerfile paths to the --dockerfile argument, it is clear to me this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others experienced a similar issue, which was resolved by reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0, but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times before, forgetting that the variable was set to protected and thus would only be exported to protected branches.
Hey, I got it working, but it was quite a hassle to find out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would need to replace everything with variables, but I was too lazy to do it until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
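For reference, the same job with the hard-coded values moved into variables might look like the sketch below. GIT_USERNAME and GIT_PASSWORD are hypothetical custom CI/CD variables holding the Git credentials this answer refers to; CI_REGISTRY and CI_REGISTRY_IMAGE are predefined by GitLab:

```yaml
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    # GIT_USERNAME / GIT_PASSWORD: custom, unprotected CI/CD variables (see the
    # protected-variable caveat above). CI_REGISTRY is the registry host and
    # CI_REGISTRY_IMAGE the project's registry path.
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${GIT_USERNAME}\",\"password\":\"${GIT_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}
```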

Swarm stack is deployed before the new images are pushed

I use CircleCI and the pipeline is as follows:
1. build
2. test
3. build app & nginx Docker images and push them to a GitLab registry
4. deploy the Docker stack to the development server (currently the Swarm manager)
I just pushed my develop branch to my repository and, after a success message from CircleCI, faced a "Symfony 4 new Controller page" on the development server.
I logged in via SSH and executed (output shown for the application service):
docker stack ps my-development-stack --format "{{.Name}} {{.Image}} {{.CurrentState}}"
my-stack_app.1 gitlab-image:latest-develop Running 33 minutes ago
On my GitLab repository's registry, the application image was "Last Updated" 41 minutes ago. The service apparently refreshed its image before the latest version was pushed.
Is it a common issue/error ?
How could (or should) I fix this timing issue ?
Can CircleCI help about this ?
Perhaps it is best (though not ideal) to introduce a delay between build and deploy; you can refer to this example: CircleCI Delay Between Jobs
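A minimal sketch of such a delay (the job names, image, and 60-second wait are illustrative; a fixed sleep only papers over the race, so having the deploy job require the push job directly is preferable where possible):

```yaml
jobs:
  delay:
    docker:
      - image: cimg/base:stable
    steps:
      - run:
          name: Give the registry time to receive the new images
          command: sleep 60

workflows:
  build-and-deploy:
    jobs:
      - build-and-push
      - delay:
          requires:
            - build-and-push
      - deploy:
          requires:
            - delay
```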
I found a workaround using a CircleCI scheduled workflow triggered by a CRON expression: a nightly build workflow that runs every day at midnight.
A sample of my config.yml file:
# Beginning of the config.yml
# ...
workflows:
  version: 2
  # Push workflow
  # ...
  # Nightly build workflow
  nightly-dev-deploy:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - develop
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy-dev:
          requires:
            - test
Read more about scheduled workflows, with a nightly build example, in the official CircleCI documentation.
This looks more like a workaround to me. I'd be glad to hear how you avoid this issue, which could lead to a better answer to the question.

Can I run a job under another job's steps in CircleCI 2.0?

Is it possible to have one job run in the context of another? I have some jobs with steps in common, and I don't want to repeat these steps in the different jobs.
push-production-image:
  docker:
    - image: google/cloud-sdk:latest
  working_directory: ~/app
  steps:
    - setup-gcp-docker
    - run: docker push [image]
No, you cannot. However, YAML itself has a way to solve this problem with what are called YAML anchors and aliases.
Here's a blog post I wrote on how to do specifically this: https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
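A short sketch of the anchors-and-aliases approach (the second job and the staging image name are illustrative):

```yaml
# Shared executor settings defined once as a YAML anchor.
defaults: &defaults
  docker:
    - image: google/cloud-sdk:latest
  working_directory: ~/app

jobs:
  push-staging-image:
    <<: *defaults            # merge the shared keys into this job
    steps:
      - setup-gcp-docker
      - run: docker push [staging-image]
  push-production-image:
    <<: *defaults
    steps:
      - setup-gcp-docker
      - run: docker push [image]
```

On 2.1 configs, reusable commands and executors are the native alternative to this pattern.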
