I'm very new to Amazon ECS, and I've written a task definition with 3 containers: one for my PHP application (main-server), a second for a Node application (pubsub-server), and a redis container.
main-server and pubsub-server link to the redis container. (Is this the best way to arrange the containers?)
The cluster runs well. However, I have an update to make to my main-server. I am able to push the updated image to Amazon ECR, but my changes aren't reflected on the cluster. Is there an additional step needed to run the updated container after a push?
I have tried deregistering the task definitions and registering them again, but that doesn't seem to work.
Please let me know if I need to provide any more details.
You need to force a new deployment.
From the AWS console, update the service and check the "Force new deployment" checkbox on the first page, then skip to the confirmation page.
From the CLI:
aws ecs update-service --cluster [cluster arn] --service [service arn] --force-new-deployment
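If your task definition references a mutable tag such as :latest, the whole push-and-redeploy step looks roughly like this (a sketch only; the account ID, region, repository, cluster, and service names are placeholders):

# Log in to ECR, then build and push the updated main-server image
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/main-server:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/main-server:latest

# Force ECS to replace the running tasks so they pull the freshly pushed image
aws ecs update-service \
  --cluster my-cluster \
  --service main-service \
  --force-new-deployment

If the task definition pins a specific tag or digest instead, register a new task definition revision pointing at the new image and update the service to that revision rather than forcing a deployment.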
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed docker image in the image repository for our account. Or, at least, I am trying to figure out how to check whether there is one and how to correlate its digest to the deployed image located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it myself, both automatically provisioned in my own cloud account and by running the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to container images and Docker, I figured I could use Cloud Shell to check, but it seems that setting up the App Engine instance with the provided shell script (located here) doesn't really "load" a docker image the way it would if you'd deployed it yourself. At least I don't think so, because I can't find any image information using docker commands in the Cloud Shell of the GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check what docker images your App Engine Flex instance uses, SSH into the instance. You can do this from the Instances tab by choosing the correct service and version and clicking the SSH button, or by running this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your docker images.
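As a quick sketch of the checks you might run on the instance (assuming the deployed image is named gcr.io/cloud-tagging-10302018/gtm-cloud-image; adjust to whatever docker images actually shows):

# List the images present on the instance
docker images

# Print the repo digest of the GTM image so it can be compared against the
# tags published at gcr.io/cloud-tagging-10302018/gtm-cloud-image
docker inspect --format '{{index .RepoDigests 0}}' gcr.io/cloud-tagging-10302018/gtm-cloud-image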
We have an existing swarm cluster with about 20 services spanning across 10-15 VMs.
During service setup we deploy them with the --with-registry-auth option. Our docker registry is on AWS ECR.
Everything works fine until we add a new node. When a new node is added with an existing label, containers do not come up on that node. On checking the service status, I can see the message "No image found".
Example: I have an nginx image running on 2 VMs with the label proxy. I add 1 more VM and apply the label proxy.
Docker swarm tries to deploy one more copy of nginx on this new 3rd VM, but it fails as the image is not present.
Twist/pointer: if the new VM is added within 4 hours of service creation, nginx comes up on the new VM. But if the VM is added more than 4 hours after service deployment (4 hours being the credential expiry time of AWS ECR), the image fails to come up on the new VM. (The image is still present on the manager VM.)
To overcome the issue, we added a cron job on each VM that logs in to AWS ECR every 2 hours so that credentials are always refreshed, but this didn't fix the issue either.
If I update nginx from the manager node with --with-registry-auth, nginx comes up on the 3rd VM even if it was added after 4 hours. But having to do this manually is not expected.
Has anyone else faced this issue?
Any guidance would be helpful.
I faced it too...
This happens only if your registry is private.
We solved this by adding a docker login step to the process for adding a node.
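As a sketch of what that looks like with ECR (the region, account ID, and service name are placeholders): log in on the node being added, and/or re-send fresh credentials from a manager, which is the behaviour the question already observed with --with-registry-auth.

# On the node being added: refresh ECR credentials locally
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# On a manager: push fresh registry credentials into the service spec so
# tasks scheduled on new nodes can pull the image
docker service update --with-registry-auth nginx_proxy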
I can't run docker (Linux) containers on my PC - it is too slow for that. Are there other ways of running/developing/testing docker containers, similar to doing it on my PC? Maybe some browser app? Or is the only option to simply host a VM somewhere like DigitalOcean or AWS?
Use AWS ECS. AWS gives you a 12-month free tier that includes some free usage such as server hours. After you create a new AWS account, go to the ECS service, then go to the repositories section, create a new repository, and push your Docker image to it. Now go to the task definitions section and create a task definition for your docker image, specifying memory, port, and so on. Then create a new service, assign it the task definition you just created, and run it.
This process is straightforward and will take you around an hour to finish.
Follow the steps in this video: https://www.youtube.com/watch?v=1wLMLwjCqN4
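If you prefer the CLI over the console, a rough equivalent of those steps looks like this (a sketch only; all names, the region, and the account ID are placeholders, and task-def.json is a task definition file you would write yourself):

# Create an ECR repository and push your image to it
aws ecr create-repository --repository-name my-app
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# Register a task definition (memory, port mappings, etc.) and run it as a service
aws ecs register-task-definition --cli-input-json file://task-def.json
aws ecs create-service --cluster default --service-name my-app \
  --task-definition my-app --desired-count 1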
I'm setting up a CI/CD workflow for my organization but I'm missing the final piece of the puzzle. Surely this is a solved problem, or do I have to write my own?
The full picture.
I'm running a few EC2 instances on AWS, each running docker in its native swarm mode. A few services are running here which I've started manually via docker service create ....
When a developer commits source code, a trigger is sent to Jenkins to pull the new code and build a new docker image, which is then pushed to my private registry.
All is well and good up to here, but how do I get the new image onto my docker hosts and the running container automatically updated to the new version?
Docker documentation states (here) that the registry can send events to configurable endpoints when a new image gets pushed to it. This is what I want to react to automatically, by having my docker hosts pull the new image and stop, destroy, and restart the service using that new version (with the same env flags, labels, etc.), but I'm not finding any solution that fits my use case.
I've found v2tec/watchtower but it's not swarm-aware nor can it pull from a private registry at the time of writing this question.
Preferably I want a docker image I can deploy on my docker manager which listens to registry events (after pointing the registry config at it) and does the magic I need.
Cost is an issue, but time is less so, so I'm more inclined to write my own solution than to adopt a fee-based service for this.
One option is to SSH to the swarm manager from Jenkins using the SSH plugin, and pull the new image and update the service whenever a new image is pushed to the registry.
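A minimal sketch of what that SSH step could execute on the manager, assuming the service already exists; the registry URL, image name, and service name are placeholders, and BUILD_NUMBER is the usual Jenkins build variable:

# Roll the service to the image Jenkins just pushed, re-sending registry
# credentials so worker nodes can pull from the private registry
docker service update \
  --with-registry-auth \
  --image registry.example.com/myapp:${BUILD_NUMBER} \
  myapp_service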
I run Jenkins and my app is dockerized, i.e. when I run the container it exposes port 3000 and I can point my browser there. On every GitHub PR I would like to deploy that git commit to a running container somewhere and have Jenkins post back to the PR the link where it can be accessed. On any PR update it gets automatically re-deployed, and on PR close/resolve it gets torn down.
I have looked at Kubernetes and a little at Rancher, but what's the easiest way to get this going, assuming I can only deploy to one box?
There is a Jenkins plugin, github-pullrequest, that can solve your problem.
Prerequisites:
You have a Jenkins server that can be accessed from the internet if you want to trigger your build via a webhook.
You have a GitHub API token with access/admin rights on your git repository; you can generate it yourself in your settings.
Please follow the configuration guide to set up your Jenkins integration with GitHub.
After configuration:
you can trigger your build on PR events: opened / commit changed / closed, or on a comment matching a specific pattern.
you can get the PR status via the environment variable ${GITHUB_PR_STATE}, so you can start or stop a container based on its value.
you can publish a comment to the PR giving the address of your web service after you have started the docker container.
Regarding exposing the container port with multiple PRs, you can just run the container with -p 3000; Docker will automatically publish it on a port from a range on the docker host, and docker port <container> will show the specific port number. So, for example:
container1 with address <host>:32667 for PR1
container2 with address <host>:35989 for PR2
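Putting those pieces together, a per-PR deploy step might look roughly like this (a sketch only: GITHUB_PR_NUMBER is assumed to be exposed by the plugin alongside GITHUB_PR_STATE, the "CLOSE" state string may differ per the plugin docs, and the image name is a placeholder):

# Tear down any previous container for this PR, then start a fresh one on a
# random host port if the PR is still open
docker rm -f "pr-${GITHUB_PR_NUMBER}" 2>/dev/null || true
if [ "${GITHUB_PR_STATE}" != "CLOSE" ]; then
  docker run -d --name "pr-${GITHUB_PR_NUMBER}" --label pr="${GITHUB_PR_NUMBER}" \
    -p 3000 myapp:"pr-${GITHUB_PR_NUMBER}"
  # Prints e.g. 0.0.0.0:32767 -- post this address back to the PR as a comment
  docker port "pr-${GITHUB_PR_NUMBER}" 3000
fi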
I think the simplest solution would be to create two different Jenkins jobs, one which deploys the container and one which nukes it. The triggers for these can be two webhooks in GitHub, one for PR create and one for PR resolve.
As Sylvain GIROD pointed out:
With only one box to run the application you need to change the port that is exposed. When a GitHub PR happens you deploy your application (docker run -p newport:containerport). If you are deploying services you change the target port.
Then you send the link with this port back to the user (email?).
Additionally, you need some key-value store to remember which container was created for which PR, so that on a new PR update you can decide whether to destroy the old containers.
I would also suggest giving the services a time to live and regularly cleaning up stale containers/services.
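A small sketch of that cleanup, assuming each PR container was started with a label such as pr=<number> and should live for at most two days (GNU date is assumed for parsing the timestamps):

# Remove PR containers that have been running for more than 48 hours
for id in $(docker ps -q --filter "label=pr"); do
  started=$(docker inspect -f '{{.State.StartedAt}}' "$id")
  age=$(( $(date +%s) - $(date -d "$started" +%s) ))
  if [ "$age" -gt 172800 ]; then
    docker rm -f "$id"
  fi
done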