AWS Elastic Container Service CI Template issue - docker

I'm on gitlab.com and tried deploying to an AWS ECS cluster with the Fargate launch type, using the instructions for including the Deploy-ECS.gitlab-ci.yml template found here.
It is failing with the following error:
Authenticating with credentials from job payload (GitLab Registry)
$ ecs update-task-definition
An error occurred (InvalidParameterException) when calling the UpdateService operation: Task definition does not support launch_type FARGATE.
Running after_script 00:01
Uploading artifacts for failed job 00:02
ERROR: Job failed: exit code 1
I believe I may have found a solution here, where Ryangr advises that the --requires-compatibilities "FARGATE" flag needs to be added to the aws ecs register-task-definition command. This is supported by the AWS documentation:
In the AWS Management Console, for the Requires Compatibilities field, specify FARGATE.
In the AWS CLI, specify the --requires-compatibilities option.
In the Amazon ECS API, specify the requiresCompatibilities flag.
I'd like to know if there is a way to override the Deploy-ECS.gitlab-ci.yml template and add that or if I just need to submit an issue ticket with GitLab.
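For reference, the register-task-definition call the template would presumably need to make looks roughly like this; the family, sizes, role ARN and container-definitions file below are placeholders rather than values from the actual template:
# Hypothetical sketch only -- Fargate requires awsvpc networking, task-level
# CPU/memory and an execution role in addition to the compatibility flag.
aws ecs register-task-definition \
  --family my-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 \
  --memory 512 \
  --execution-role-arn arn:aws:iam::<account-id>:role/ecsTaskExecutionRole \
  --container-definitions file://container-definitions.json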

Check again with GitLab 13.2 (July 2020):
Bring Fargate support for ECS to Auto DevOps and ECS template
We want to make AWS deployments easier.
In order to do so, we recently delivered a CI/CD template that deploys to AWS ECS:EC2 targets and even connected it to Auto DevOps.
Scaling container instances in EC2 is a challenge, so many users choose to use AWS Fargate instead of EC2 instances.
In this release we added Fargate support to the template, which continues to work with Auto DevOps as well, so more users can benefit from it.
This is linked to issue 218841 which includes the proposal:
Use the gitlab-ci.yml template for deploying to AWS Fargate.
We will derive the --requires-compatibilities argument from the launch type: it will only be passed when Fargate is selected.
If EC2 is selected as the launch type, the argument is omitted.
As noted by David Specht in the comments, this was closed with issue 218798 and cloud-deploy MR (Merge Request) 16, in commit 2c3d198.

Related

Automatically deploy new container to Google Cloud Compute Engine from Google Container Registry

I have a Docker container which I push to GCR with gcloud builds submit --tag gcr.io/<project-id>/<name>, and when I deploy it on a GCE instance, every deployment creates a new instance and I have to remove the old one manually. The question is: is there a way to deploy containers and force the GCE instances to fetch the new containers? I specifically need GCE, not Google Cloud Run or anything else, because it is not an HTTP service.
I deploy the container from Google Console using the Deploy to Cloud Run button
I'm posting this as a Community Wiki answer for better visibility. There were already a few good solutions in the comment section; however, in the end the OP chose to use Cloud Run.
At first I'd like to clarify a few things.
I have a docker container which I push to GCR like gcloud builds submit
gcloud builds submit is a command to build using Google Cloud Build.
Cloud Build is a service that executes your builds on Google Cloud Platform infrastructure. Cloud Build can import source code from Cloud Storage, Cloud Source Repositories, GitHub, or Bitbucket, execute a build to your specifications, and produce artifacts such as Docker containers or Java archives.
In this question, the OP is referring to Container Registry; however, GCP recommends using Artifact Registry, which will soon replace Container Registry.
Pushing and pulling images from Artifact Registry is explained in the Pushing and pulling images documentation. It can be done with the docker push and docker pull commands, after you have first tagged the image and created an Artifact Registry repository.
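As a rough illustration of that flow (the region, project, repository and image names are placeholders):
# Let Docker authenticate against the Artifact Registry host, then tag and push.
gcloud auth configure-docker us-central1-docker.pkg.dev
docker tag my-image us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-image:v1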
Deploying on different GCP products
Regarding deploying on GCE, GKE and Cloud Run: those are GCP products which are quite different from each other.
GCE is IaaS: you specify the amount of resources, and you maintain the installation of all software yourself (you would need to install Docker, Kubernetes, programming libraries, etc.).
GKE is a hybrid: you still specify the amount of resources you need, but it is purpose-built for running containers. After creation you already have Docker, Kubernetes and the other software needed to run containers.
Cloud Run is a serverless GCP product: you don't need to calculate the amount of required resources or install software/libraries; it's a fully managed serverless platform.
When you deploy a container app from Artifact Registry / Container Registry, you are creating another VM (GCE and GKE) or a new service (Cloud Run).
If you would like to deploy a new app on the same VM:
On GCE, you would need to pull the image and deploy it on that VM using Docker or Kubernetes (kubeadm); a minimal sketch of the Docker route follows after this list.
On GKE, you would need to create a new Deployment using a command like
kubectl create deployment test --image=<location>-docker.pkg.dev/<projectname>/<artifactRegistryName>/<imageName>
and delete the old one.
In Cloud Run you can deploy an app without worrying about resources or hardware; the steps are described here. You can create revisions for specific changes in the image. Cloud Run also allows CI/CD using GitHub, Bitbucket or Cloud Source Repositories; this process is well described in the GCP documentation - Continuous deployment.
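For the GCE case above, a minimal sketch of the Docker route, assuming SSH access to the VM and placeholder image and container names:
# On the VM itself: pull the new image, then replace the running container.
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-image:v2
docker stop my-app && docker rm my-app
docker run -d --name my-app us-central1-docker.pkg.dev/my-project/my-repo/my-image:v2
Alternatively, if the instance was created as a container VM, gcloud compute instances update-container <instance> --container-image <image> swaps the image in place without creating a new instance.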
Possible solutions:
Write a cloudbuild.yaml file that does that for you on each CI/CD pipeline run
Write a small application on GCE that subscribes to Pub/Sub notifications created by Cloud Build. You can then either pull the new container or launch a new instance.
Use Cloud Run with CI/CD.
Based on one of the OP's comments, the chosen solution was to use Cloud Run with CI/CD.

AWS-CDK: How to deploy a new image to an existing Fargate-Cluster?

I've got an existing FargateCluster with a running service and a running task definition created by the great aws-cdk.
I wonder what the best way is to deploy a new Docker image to this existing Fargate service from a separate AWS CDK routine/script/class. The Docker image gets a new version (not latest), and I'd like to keep all the parameters configured in the existing task definition and just deploy the new Docker image. In detail, I'd like to get the existing task definition, change only the image name, and let Fargate deploy it.
Is there any working example for this?
Any help will be appreciated.....
Regards
Christian
I would suggest exploring CodePipeline to deploy your app in this case.
There's a specific CodePipeline action for deploying ECS Fargate images.
If you want to start writing your own pipeline, check the standard CodePipeline package or try the CDK-specific Pipelines package.
Another option would be to rerun your existing deployment and let CloudFormation deal with the changes.
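A rough sketch of that last option, assuming the existing CDK app is written to read the image tag from context (the stack name and context key are made up for illustration):
# Redeploy the existing stack with a new tag; CloudFormation computes the diff
# (a changed image in the task definition) and rolls the Fargate service.
cdk deploy MyFargateStack --context imageTag=1.2.3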

CI/CD for new ECS task definitions

I have a Jenkins pipeline which builds a Docker image of a Spring Boot application and pushes that image to AWS ECR. We have created an ECS cluster which takes this image from the ECR repository and runs containers using ECS tasks and services.
We created the ECS cluster manually. But now I want that whenever a new image is pushed by my CI/CD to the ECR repository, a new task definition is created from the new image and run automatically. What are the ways to achieve this?
But now I want that whenever a new image is pushed by my CI/CD to the ECR repository, it should take the new image, create a new task definition, and run automatically. What are the ways to achieve this?
As far as this step is concerned, it is easier to do with CodePipeline, as there is no out-of-the-box feature in Jenkins which can detect changes to an ECR image.
The completed pipeline detects changes to your image, which is stored in the Amazon ECR image repository, and uses CodeDeploy to route and deploy traffic to an Amazon ECS cluster and load balancer. CodeDeploy uses a listener to reroute traffic to the port of the updated container specified in the AppSpec file. The pipeline is also configured to use a CodeCommit source location where your Amazon ECS task definition is stored. In this tutorial, you configure each of these AWS resources and then create your pipeline with stages that contain actions for each resource.
tutorials-ecs-ecr-codedeploy
build-a-continuous-delivery-pipeline-for-your-container-images-with-amazon-ecr-as-source
If you want to do this in Jenkins, then you have to manage these steps yourself.
Here are the steps:
Push the image to ECR
Re-use the image name and create a task definition in your Jenkins job using aws-cli or ecs-cli with the same image name (see the sketch after this list)
Create or update the service with the new task definition
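A hedged sketch of what those Jenkins steps could look like with the aws CLI; the cluster, service, family and container names as well as the account and region are placeholders:
# Tag pushed by the earlier ECR step (account, region and repo are placeholders).
IMAGE="<account-id>.dkr.ecr.<region>.amazonaws.com/my-app:${BUILD_NUMBER}"
# Register a new task definition revision that points at the fresh image.
aws ecs register-task-definition \
  --family my-app \
  --container-definitions "[{\"name\":\"my-app\",\"image\":\"${IMAGE}\",\"memory\":512,\"essential\":true}]"
# Point the existing service at the newest active revision of that family.
aws ecs update-service --cluster my-cluster --service my-app-service --task-definition my-app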
You can look for details here
set-up-a-build-pipeline-with-jenkins-and-amazon-ecs
We came to the same conclusion, as there was no exact tooling matching this scenario. So we developed a little "gluing" tool from a few other open-source ones, and recently open-sourced it as well:
https://github.com/GuccioGucci/yoke
Please have a look, since we're sharing templates for Jenkins, as it's our pipeline orchestrator as well.

Upgrading from Jenkins Amazon ECS plugin 1.16 to 1.20 results in need to handle iam:passRole and trust relationship?

Have any other Fargate ECS Jenkins clusters needed to handle the passRole/trust relationship? We normally do not pass special roles to our ECS instances and instead configure a general role, as all the instances have identical requirements.
After upgrading Jenkins and then upgrading all of its plugins, I began to see errors. The Amazon Elastic Container Service plugin was upgraded from 1.16 to 1.20. First, I saw errors in the Jenkins log about iam:PassRole missing from my ECS controlling policy. After adding it, I now reach this error:
com.amazonaws.services.ecs.model.ClientException: ECS was unable to assume the role 'arn:aws:iam::...:role/ecsTaskExecutionRole' that was provided for this task. Please verify that the role being passed has the proper trust relationship and permissions and that your IAM user has permissions to pass this role.
I will configure the passRole/trust relationship, but the fact that I need to is troubling, as we aren't passing a role, or at the very least do not intend to. It sure looks like we do not have a choice in the matter.
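For context, the trust relationship the error refers to is the one that lets ECS tasks assume the execution role. A hedged sketch of setting it with the CLI (the file name is a placeholder; the role name is the one from the error message):
# Write a trust policy that allows ecs-tasks.amazonaws.com to assume the role.
cat > ecs-tasks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
# Attach it as the role's assume-role (trust) policy.
aws iam update-assume-role-policy --role-name ecsTaskExecutionRole --policy-document file://ecs-tasks-trust.json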
I found a solution. It seems that the upgrade from 1.16 to 1.20 resulted in taskRole and executionRole values being set for each template in Jenkins' config.xml. I stopped Jenkins, manually deleted those keys from each template, and started it again. Now Jenkins can attempt to launch the container tasks.
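If anyone needs to do the same cleanup, a quick way to locate those entries before editing (the Jenkins home path below is an assumption for a typical Linux install):
# Find the role entries the plugin wrote into the ECS agent templates.
grep -n -E 'taskRole|executionRole' /var/lib/jenkins/config.xml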

Deployment with Ansible in Jenkins pipeline

I have an Ansible playbook to deploy a Java application (jar) on AWS EC2. I would like to use it inside a Jenkins pipeline as the 'Deploy' step. To deploy on EC2, I need the private SSH key that was downloaded when the instance was created.
I have 2 choices:
Install Ansible on the machine hosting Jenkins, store the private SSH key in Jenkins, and use the ansible-playbook plugin to deploy my app
Take a base Docker image with Ansible installed, extend it by inserting my private SSH key, and use this Docker image to deploy my app
From a security point of view, what is best?
For option 1, it's recommended to create a new user account, e.g. jenkins, on the EC2 instance without sudo privileges, or at least with password-protected sudo.
It's also a good idea to use Ansible to manage those user accounts; it limits usage of the super key created by AWS.
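As a rough sketch of the option 1 deploy step itself, assuming the private key is stored as a Jenkins file credential exposed to the job as SSH_KEY_FILE (the inventory, playbook, user and variable names are made up):
# Run the playbook with the key that Jenkins injects as a file credential.
ansible-playbook -i inventory/ec2.ini deploy.yml \
  --user ec2-user \
  --private-key "$SSH_KEY_FILE"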
For option 2, Docker is a good fit for immutable deployment, which means the configuration should be determined before the image is built, so Ansible is not that useful in this scenario.
Different configurations mean different images have to be created.
You might still use Ansible to manage those Dockerfiles, rather than running Ansible against the application itself.
The two options differ from each other more in how you design your system than in security concerns.
Do let me know if you need more clarification.
