How to run a Docker image from the ECS repo?

I have managed to push a Docker image to the ECS repo (I also pushed it to Docker Hub's repo).
I have created a cluster and an EC2 instance with a public IP.
What now? How do you run the server? Do you have to pull it from the repo somewhere? Will it just run automatically now? Do I have to set up a script somewhere?

You specify the repo to pull your container image from as part of the container setup step inside the task definition.
In that step you're prompted for the image URI the container should be pulled from.
Once that's all complete, you need to make sure your EC2 instance is part of your cluster (which you also need to create). As part of your cluster configuration, you can launch your task on a host that belongs to that cluster, or set up a service to manage the launching for you. When a task is launched on a host, all of the containers specified in that task get pulled and started via whatever entrypoint you've defined in your Dockerfile (or, alternatively, in your task definition).
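To make that concrete, here is a rough sketch of the same flow from the AWS CLI; the family, cluster and service names and the image URI are placeholders, and the task definition is deliberately minimal rather than a production-ready config:

# Register a task definition whose container points at your repository
# (placeholder names and image URI).
aws ecs register-task-definition \
  --family my-web-app \
  --container-definitions '[{
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-web-app:latest",
      "memory": 256,
      "essential": true,
      "portMappings": [{"containerPort": 80, "hostPort": 80}]
  }]'

# Launch it once on an instance registered to your cluster...
aws ecs run-task --cluster my-cluster --task-definition my-web-app

# ...or let a service keep the desired number of copies running for you.
aws ecs create-service --cluster my-cluster --service-name my-web-service \
  --task-definition my-web-app --desired-count 1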

Related

How to deploy to a Docker Swarm from a local dev machine?

I've set up a Docker Swarm consisting of two VMs on my local network (1 manager, 1 worker). On the manager node I created a private registry service, and I want to deploy a number of locally built images from my local dev machine (which is not in the swarm) to that registry. The Swarm docs and the dozens of examples I've read on the Internet don't seem to go beyond the basics: running commands inside the manager node, building, tagging and pushing images from the manager's local cache to the registry on that same node. I have that uneasy feeling that I'm missing something right in front of my face.
I see that my machine could simply join the swarm as a manager, owning the registry. The other nodes would automagically receive the updates and my problem would go away. But does this make sense for a production swarm setting: a cluster of nodes serving production code that depends on my dev home machine, even as a non-worker, manager-only node?
Things I've tried:
Retagging my local image to <my.node.manager.ip>/my_app:1.0.0, followed by docker-compose push. I can see this does push the image to the manager's registry, but the service fails to start with the message "No such image: <my.node.manager.ip>/my_app:1.0.0"
Creating a context and, from my machine, running docker-compose --context my_context up --no-start. This (re)creates the image in the manager node's local cache, which I can then push to the registry, but it feels very unwieldy as a deploy process.
Should I run a remote script in the manager node to git pull my code and then do the build/push/docker stack deploy?
TL;DR What are the expected steps to deploy an image/app to a Docker Swarm from a local dev machine outside the swarm? Is this possible? Is it supported by Docker Swarm?
After reading a bit more on private registries and tags, I could finally wrap my head around the tagging necessary for my use case to work. My first approach was halfway right, but I had to change my deploy script so as to (sketched after this list):
Extract the image field from my docker-compose.yml (in the form localhost:5000/my_app:${MY_APP_VERSION-latest}, to circumvent the "No such image" error)
Create a second tag for pushing to the remote registry, replacing "localhost" with my manager node's address (where the registry is)
Tag my locally built image with that tag and docker-compose push it
Deploy the app with docker --context <staging|production> stack deploy my_app
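Roughly, those steps in shell form (a sketch, shown with plain docker tag/push for brevity; the registry port, manager address, context name and stack name are illustrative placeholders):

# The compose file's image field uses localhost:5000 so the swarm nodes can
# resolve it against the registry published on the swarm.
export MY_APP_VERSION=1.0.0
docker build -t localhost:5000/my_app:${MY_APP_VERSION} .

# Second tag: replace "localhost" with the manager's address so the push from
# the dev machine (outside the swarm) reaches the registry.
docker tag localhost:5000/my_app:${MY_APP_VERSION} my.node.manager.ip:5000/my_app:${MY_APP_VERSION}
docker push my.node.manager.ip:5000/my_app:${MY_APP_VERSION}

# Deploy using a Docker context that points at the manager node.
docker --context staging stack deploy --compose-file docker-compose.yml my_app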
I'm answering myself since I did solve my original problem, but would love to see other DevOps implementations for similar scenarios.

How does gitlab-ci work internally with gitlab runner?

I have some specific questions regarding gitlab-ci and runner:
If my specific runner is configured in a Kubernetes cluster, how does code mirroring from the GitLab code repository into the runner happen?
How does the build happen in the runner when it is configured within a Kubernetes cluster?
When using a docker image in my .gitlab-ci.yml, how are those images pulled by the runner, and how are the commands listed under the "script" tag executed in those docker containers? Does the runner create pods within the Kubernetes cluster (where the runner is configured) using the image mentioned in .gitlab-ci.yml, and execute the commands within those containers?
Any additional explanation or references to learning material on how the GitLab Runner works internally is highly appreciated.
I'm assuming when you say your GitLab Runner is configured in Kubernetes you mean you're using the Kubernetes executor. I marked the sections relevant to your questions.
(1) GitLab CI pulls the code from the repository (if it's public that's not an issue, but a private repository works too). Basically a helper image is used to clone the repository and download any artifacts into a container.
The Kubernetes executor lets you use an existing Kubernetes cluster to execute your pipeline/build step by calling the Kubernetes cluster API and creating a new Pod, with both build and services containers for each job. (3)
A more detailed view of the steps a Runner takes:
Prepare: Create the Pod against the Kubernetes Cluster. This creates the containers required for the build and services to run.
Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special container as part of the Pod. (2)
Build: User build.
Post-build: Create cache, upload artifacts to GitLab. This also uses the special container as part of the Pod.
The GitLab repository for the runners might also be interesting for you.
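If it helps to see where that configuration comes from, here is a hedged sketch of registering a runner with the Kubernetes executor; the URL, token, namespace and default image are placeholders, and the --kubernetes-* flags just pre-fill the [runners.kubernetes] section of the runner's config.toml:

# Sketch: register a GitLab Runner that uses the Kubernetes executor
# (placeholder URL, token, namespace and default job image).
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REDACTED" \
  --executor "kubernetes" \
  --description "k8s-runner" \
  --kubernetes-namespace "gitlab-runner" \
  --kubernetes-image "alpine:latest"

# Each CI job then becomes a Pod in that namespace (3): a helper container clones
# the repository and handles cache/artifacts (the part marked (1) and (2) above),
# the build container runs the "script" commands inside the image named in
# .gitlab-ci.yml, and any services: entries get their own containers in the Pod.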

Run docker container from Jenkins pipeline

I currently run a Jenkins instance inside a Docker container. I've been doing some experimenting with Jenkins and their pipelines. I managed to make my Maven app build successful using a Jenkinsfile.
Right now I am trying to automatically deploy the app I built to a Docker container that's a sibling to the Jenkins container. I already mounted /var/run/docker.sock so I have access to the parent Docker daemon. But right now I can't seem to find any good information to guide me through the next part, which is modifying the Jenkinsfile to deploy my app in a new container.
How would I go about running my Maven app inside a sibling Docker container?
It might be more appropriate to spin up a slave node (a physical or virtual box) and then run something in there with docker.
If you check your instance URL https://jenkins.address/job/myjob/pipeline-syntax/ you will find more information about the commands you need for this.
Anyway, the best way to do this is to create a Dockerfile, copy the artifact into the image as a build step, push the image to a registry, and then find somewhere to deploy the newly built image.
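For the sibling-container route specifically, a minimal sketch of the shell steps you could wrap in an sh step of your Jenkinsfile, assuming the Jenkins container was started with -v /var/run/docker.sock:/var/run/docker.sock and has a docker client installed; the image and container names and the port mapping are placeholders:

# Build the app image from the Jenkins workspace
# (${BUILD_NUMBER} is provided by Jenkins; names are placeholders).
docker build -t my-maven-app:${BUILD_NUMBER} .

# Because /var/run/docker.sock is mounted, these commands talk to the host's
# Docker daemon, so the container below runs as a sibling of the Jenkins
# container, not inside it.
docker rm -f my-maven-app || true
docker run -d --name my-maven-app -p 8081:8080 my-maven-app:${BUILD_NUMBER}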

Docker commit doesn't save the changed state of my container

I am a newbie with Docker, but I have looked at many guides. I am configuring a container that runs a Jenkins base image with the Blue Ocean plugin. I ran it with the docker run command, configured my proxy information, and added another plugin, the k8s plugin, through the Jenkins Manage Plugins UI. Then I stopped the container and committed it, to save the state that has the k8s plugin and the proxy information I already set. But when I run the new Docker image that I made with the docker commit command, I can't see any proxy information or the k8s plugin. It is the same image that I started with. Is there something I missed?
JENKINS_HOME is set to be a volume in the default Jenkins Docker image (which I'm assuming you're using). Volumes live outside of the Docker container layered filesystem. This means that any changes in those folders will not be persisted in subsequent image commits.
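As an illustration of the usual way around this (a sketch with placeholder names): keep JENKINS_HOME in a named volume and reuse it, instead of trying to capture it with docker commit:

# Put Jenkins configuration (plugins, proxy settings) in a named volume.
docker volume create jenkins_home
docker run -d --name jenkins -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

# Recreating the container with the same -v flag brings back the plugins and
# proxy settings, because they live in the volume rather than in image layers
# (which is all docker commit captures).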

docker remote api set env

How can I overwrite environment variables inside a container after creation, using the remote API? I see no such option in the container update method description. But Docker itself does this when linking containers (source) to provide port and host variables:
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
...
I need to provide the same kind of variables for other infrastructure elements which are not managed by Docker, and each time I run a container these variables could be different.
I think it should look like this:
Initialize the container's dependencies.
Create the container itself.
Run the container's dependencies.
Get the dependencies' parameters (IP, ports, etc.).
Configure the container environment (as I thought, with container update).
Run the container.
Steps 3 to 6 could be repeated many times for one instance.
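For reference, a hedged sketch of how the create/start calls look against the remote (Engine) API with curl over the Unix socket; the API version, image name and variables are placeholders, and in this API the Env list is part of the create payload:

# Create the container, supplying the environment in the create request
# (placeholder API version, image and variables).
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "my_app:latest", "Env": ["DB_PORT_5432_TCP_ADDR=172.17.0.5", "DB_PORT_5432_TCP_PORT=5432"]}' \
  -X POST "http://localhost/v1.24/containers/create?name=my_app"

# Start it once its dependencies are up and their parameters are known.
curl --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/v1.24/containers/my_app/start"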
