Docker run github branch/pull request - docker

I forked whilp/ssh-agent and created a feature enhancement and submitted a pull request.
I want to reference/use my branch on my CI agents until it is accepted, and I don't want to go to each one locally to build a local image.
github.com/rosskevin/ssh-agent branch: feature-known-hosts is what I'd like to use with the run command; is this possible? I can't find references to using GitHub (not to mention a branch) with run, only build.
i.e.
docker run -d --name=ssh-agent whilp/ssh-agent \
github.com/rosskevin/ssh-agent -b feature-known-hosts
Any other advice on docker project patches/workflow/best practices? This is really easy with Bundler, looking for an analog here.

You can't run a docker image directly from GitHub, because GitHub is made to store only the code itself.
When you run the following command:
docker run -d --name=ssh-agent whilp/ssh-agent
Docker is looking for whilp/ssh-agent on Docker Hub, and not on GitHub.
Docker Hub is the equivalent of GitHub for Docker images.
To use your pull request the same way you are using whilp/ssh-agent, you need to create an account on Docker Hub, and create an automated build based on your ssh-agent fork (tutorial here).
Finally, you will be able to use your version with:
docker run -d --name=ssh-agent <username>/ssh-agent
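While docker run can only pull from a registry, docker build does accept a remote Git URL with a #branch suffix, so as a stopgap you could build the branch yourself (a sketch; the tag name is illustrative, and this still means building locally on each agent, which the question wants to avoid):
docker build -t rosskevin/ssh-agent:feature-known-hosts \
  https://github.com/rosskevin/ssh-agent.git#feature-known-hosts
docker run -d --name=ssh-agent rosskevin/ssh-agent:feature-known-hosts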

Related

How to install docker on GitHub Actions

What is the official / recommended way to install Docker on GitHub Actions?
In particular I need to run a custom script on GitHub Actions that, among other tasks, calls docker build and docker push.
I don't want pre-made actions that build/push, I want to have access to the docker command.
What should I do?
The only official action that I can find uses Docker Buildx and not the normal docker: https://github.com/marketplace/actions/build-and-push-docker-images
Besides that, I can find this action (https://github.com/marketplace/actions/setup-docker), but I don't know whether I can trust that source, since it's not official.
Any recommendations? How do you install the docker command on GitHub Actions?
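For what it's worth, Docker Engine is already preinstalled on GitHub-hosted Ubuntu runners, so a plain workflow can call docker directly; a minimal sketch (the image name and secret names are illustrative):
name: build-and-push
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the registry
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      - name: Build and push with plain docker
        run: |
          docker build -t myuser/myapp:${{ github.sha }} .
          docker push myuser/myapp:${{ github.sha }}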

How to pull new docker images and restart docker containers after building docker images on gitlab?

There is an asp.net core api project, with sources in gitlab.
I created a GitLab CI/CD pipeline to build a Docker image and put the image into the GitLab Docker registry
(thanks to https://medium.com/faun/building-a-docker-image-with-gitlab-ci-and-net-core-8f59681a86c4).
How do I update the Docker containers on my production system after pushing the image to the GitLab Docker registry?
*by update I mean:
docker-compose down && docker pull && docker-compose up
The best way to do this is to use an image puller; there are a lot of open-source options available, or you can write your own in shell. There is one here. We use Swarm, and we use this hook concept, triggered from our CI/CD pipeline: once our build stage is done, we hit the hook URL over HTTP, and the host pulls the updated image. One disadvantage is that you need a daemon to watch your hook task so it doesn't crash or go down, so my suggestion is to run the hook task as a Docker container with a restart policy of always.
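A minimal sketch of the pull-and-restart task such a hook would trigger (the path and project layout are illustrative):
#!/bin/sh
# deploy-hook.sh - run on the production host when the CI/CD pipeline calls the hook URL
cd /srv/myapp || exit 1
docker-compose pull     # fetch the image the pipeline just pushed to the registry
docker-compose up -d    # recreate only the containers whose image changed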

Is git pull, docker-compose build and docker-compose up -d a good way to deploy complete solution on an empty machine

We recently finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in a private repository; this example just gives a quick glance at how our project structure looks.)
We plan to purchase one empty Linux machine and deploy on it. We might purchase more machines in the future, but with current traffic, one machine will be sufficient.
My plan for deployment on the single empty machine is
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in near future, I was wondering, is this a common practice to deploy docker application into a fresh empty machine?
Is there anything we can utilize from https://hub.docker.com/ , without requiring us to perform git pull during deployment stage?
You don't want to perform git pull on each machine - your intuition is correct.
Instead you want to use a remote Docker registry (Docker Hub, for example).
So the right flow, each time your source code (git repo) is changed:
1. git pull from all relevant repos.
2. docker-compose build to build all relevant images.
3. docker-compose push to push all images (diff) to the remote registry.
4. docker-compose pull on your production machines, to get the latest updated images.
5. docker-compose up to start all containers.
The first three steps should run on your CI machine (for example, as a Jenkins job), and steps 4-5 on your production machines.
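Spelled out as a sketch (docker-compose push and pull assume each service has an image: entry pointing at your registry):
# CI machine (e.g. a Jenkins job)
git pull
docker-compose build
docker-compose push

# production machines
docker-compose pull
docker-compose up -d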
EDIT: one thing to consider: I think building via docker-compose is a bad idea. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and, in docker-compose, just specifying the image name.
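For instance, a docker-compose.yml under that approach just references the registry image (a sketch; names and ports are illustrative):
version: "3"
services:
  web:
    image: repo/image:tag   # built elsewhere with docker build and pushed to the registry
    ports:
      - "80:80"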
My opinion is you should not BUILD images on production machines, because the image might be different than you would expect, and you should limit what you do on production machines. With that being said, I would recommend:
updating the code on your local computer (development)
when you push code to git, using some software to build your images from that push, for example GitLab CI (a continuous integration tool)
gitlab-ci will build the image, then it can run some tests on that image, and then deploy it (this built image) to production
on your production machine, just running docker-compose pull && docker-compose up -d and that is it.
I strongly recommend building images on a machine other than the production machines, and using a CI tool to test your images before deploying. For example https://docs.gitlab.com/ce/ci/README.html
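A minimal .gitlab-ci.yml sketch of such a build stage, assuming the GitLab container registry and the docker-in-docker service (the CI_REGISTRY* variables are predefined by GitLab CI):
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"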
Deploying it to a fresh machine or to an existing one would be fine either way.
The best way to go around is to make a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build -t repo/image .
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login on the server
docker pull repo/image
docker-compose up -d
Though I would recommend looking at container scheduling with Kubernetes and setting up your CI/CD stack with Jenkins to automate this process; in case something bad happens, it can be a lifesaver.

Gitlab Continuous Integration on Docker

I have a GitLab server running in a Docker container: gitlab docker
On GitLab there is a project with a simple Makefile that runs pdflatex to build a PDF file.
On the Docker container I installed texlive and make, I also installed docker runner, command:
curl -sSL https://get.docker.com/ | sh
The .gitlab-ci.yml looks as follows:
.build:
  script: &build_script
    - make

build:
  stage: test
  tags:
    - Documentation Build
  script: *build_script
The job is stuck running and a message is shown:
This build is stuck, because the project doesn't have any runners online assigned to it
any idea?
The top comment on your link is spot on:
"Gitlab is good, but this container is absolutely bonkers."
Secondly, looking at GitLab's own advice, you should not be using this container on Windows, ever.
If you want to use GitLab CI from a GitLab server, you should actually be installing a proper GitLab server instance on a properly supported Linux VM, with Omnibus, and should not attempt to use this container for a purpose it is manifestly unfit for: running GitLab in real production.
Gitlab-omnibus contains:
a persistent (not stateless!) data tier powered by postgres.
a chat server whose entire point in existing is to be a persistent log of your team chat.
not one, but a series of server processes that work together to give you GitLab server functionality and a web admin/management frontend, in a design that does not seem ideal to me to run in production inside Docker.
an integrated CI build manager that is itself a Docker container manager. Your docker instance is going to contain a cache of other docker instances.
The fact that this container was built by GitLab itself is no indication that you should use it for anything other than a test/toy, or for what GitLab themselves actually use it for, which is probably to let people spin up GitLab nightly builds, probably via Kubernetes.
I think you're slightly confused here. Judging by this comment:
On the Docker container I installed texlive and make, I also installed
docker runner, command:
curl -sSL https://get.docker.com/ | sh
It seems you've installed docker inside docker and not actually installed any runners? This won't work if that's the case. The steps to get this running are:
Deploy a new GitLab runner. The quickest way to do this is to deploy another Docker container with the gitlab-runner Docker image; you can't run a runner inside the Docker container you've deployed GitLab in. You'll need to make sure you select an executor (I suggest the shell executor to get you started) and then register the runner. There is more information about how to do this here. What isn't detailed there is that if you're using Docker for GitLab and Docker for gitlab-runner, you'll need to link the containers or set up a Docker network so they can communicate with each other (a sketch follows after these steps).
Once you've deployed and registered the runner with GitLab, you will see it appear in http(s)://your-gitlab-server/admin/runners - from there you'll need to assign it to a project. You can also mark it as a "Shared" runner, which will execute jobs from all projects.
Finally, add the .gitlab-ci.yml as you already have, and the build will work as expected.
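A sketch of that runner deployment and registration (container names, paths, and the token are illustrative; with the shell executor, the runner container also needs make and texlive installed):
# create a network shared by the GitLab and runner containers (assumes your GitLab container is named "gitlab")
docker network create gitlab-net
docker network connect gitlab-net gitlab

# deploy the runner as its own container on that network
docker run -d --name gitlab-runner --restart always \
  --network gitlab-net \
  -v /srv/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:latest

# register it against the GitLab container, with the tag the job expects
docker exec -it gitlab-runner gitlab-runner register \
  --url http://gitlab/ \
  --registration-token <your-registration-token> \
  --executor shell \
  --tag-list "Documentation Build"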
Maybe you've set the wrong tags, like me. Make sure the tag name matches one of your available runners.
tags:
  - Documentation Build # tags is used to select specific runners from the list of all runners that are allowed to run this project.
see: https://docs.gitlab.com/ee/ci/yaml/#tags

Continuous Deployment Using Travis CI and Docker

What is the best way to automate the deployment of a Docker image in a CI environment?
After building a simple web project using Travis CI and using a Dockerfile to build the corresponding Docker image, is there a way to automatically cause that image to be deployed to a cloud provider?
Right now, the Dockerfile pulls down the base image to the Travis build machine and builds the image based on the instructions in the Dockerfile. At this point, if the build is successful, I can push it to Docker Hub, though I have no need to save this image to Docker Hub; what I envision is deploying the successfully built Docker image to a cloud provider (e.g. DigitalOcean, Linode, or AWS) and starting/running it.
While pushing directly to a host might seem ideal, I think it ignores the fact that hosts can fail, or may need to be replicated.
If you push directly to a prod host, and that host goes down, you don't have any way to start another one without re-running the entire CI pipeline.
If you push to an intermediary (the hub or a docker registry), you can create as many hosts as you want without having to re-run the build. You can also recover on a new host very easily (the initialize script can just pull the image and start).
If you wanted to, you could run your own registry on the cloud provider (instead of using the hub).
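As a sketch, a .travis.yml could push the image to such an intermediary after a successful build (the image name and secret variable names are illustrative):
services:
  - docker
script:
  - docker build -t myuser/myapp:$TRAVIS_COMMIT .
after_success:
  - echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
  - docker push myuser/myapp:$TRAVIS_COMMIT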
For a static website, you might want to look at Surge.
Otherwise, you might want to look at the AWS Elastic Beanstalk Command Line Interface (AWS EB CLI) in combination with using docker with AWS EB.
For using docker with AWS EB, read this AWS Blog post
For AWS EB CLI, here is an excerpt from the AWS EB dashboard sidebar
If you want to use a command line to create, manage, and scale your Elastic Beanstalk applications, please use the Elastic Beanstalk Command Line Interface (EB CLI).
$ mkdir HelloWorld
$ cd HelloWorld
$ eb init -p PHP
$ echo "Hello World" > index.html
$ eb create dev-env
$ eb open
To deploy updates to your applications, use ‘eb deploy’.
Further reading
Installing the AWS EB CLI
EB CLI Command Reference
I had this very same question.
There's a very cool docker image called Watchtower that checks the running version of a container with the same image tag on Docker hub. If there is an update on the hub, Watchtower pulls the newer image and restarts the running container (retaining all the env vars etc). Works really well for single containers that need updating.
NB: it really is as simple as running:
docker run -d \
--name watchtower \
-e REPO_USER="username" -e REPO_PASS="pass" -e REPO_EMAIL="email" \
-v /var/run/docker.sock:/var/run/docker.sock \
drud/watchtower container_to_watch --debug
I'm looking for the same thing but for the containers running as part of a docker swarm...
