How does GitLab CI clone the repo into Docker?

I'm familiar with Bamboo but new to GitLab CI. I have tried GitLab several times and found that a key advantage is the automatic cloning of the git repository.
The tricky part is that GitLab CI can even clone the repository into the Docker container automatically.
my git repo:
.git
.gitlab-ci.yml
foobar.sh
this job:
job1:
  stage: run
  image:
    name: my_image
  script:
    - ./foobar.sh
    - some other scripts within the docker
can successfully run.
The log shows that after pulling my_image there is a git clone action, as another SO answer described, but the log isn't detailed enough to tell me where this command is triggered (I'm not the owner of the GitLab CI runner, so I cannot control the log verbosity, if that matters).
So my questions:
Is this git clone command run inside or outside the Docker container?
If inside, what triggers it, and what is the complete docker run ... command?
If outside, when and where is the directory mounted into the container?
I have read the docs, but didn't find anywhere that explains the mechanism above.

See, the GitLab runner pulls the image and spins up a container. Then, from inside the container, a git clone of that GitLab repo is performed (by the GitLab runner). It's not done from outside, and nothing is mounted. This works only for the repo that the pipeline belongs to.
If you wish to clone another repo, you have to do it manually, either by baking it into your image upfront or by telling the GitLab runner to perform another git clone:
script:
  - git clone https://github.com/bluebrown/dotfiles
I assume that if git is not installed in the container, this will cause problems.
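A minimal sketch that makes both points visible, assuming my_image has git installed. GIT_STRATEGY is a predefined GitLab CI variable that controls the runner's automatic checkout; the extra clone URL is just the example from above:

job1:
  stage: run
  image:
    name: my_image
  variables:
    # GIT_STRATEGY controls the runner's automatic checkout inside the
    # container: clone (fresh clone), fetch (reuse working copy), none (skip)
    GIT_STRATEGY: clone
  script:
    # at this point the pipeline's own repo is already checked out by the
    # runner; any other repo must be cloned manually
    - git clone https://github.com/bluebrown/dotfiles
    - ./foobar.sh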

Related

Docker build and push git code inside Azure devops

I have a Dockerfile that copies some code from git. When I use this Dockerfile as part of an Azure DevOps pipeline, I am unable to get the code inside the container. Is git clone the only option, or is there another way around this?
The container does not have internet access to connect to the git repository. You can prebuild your image on your local system and use the image from a Docker image registry.
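A minimal sketch of that workaround, with myregistry/myapp as a hypothetical image name:

# on a machine that can reach the git remote
docker build -t myregistry/myapp:latest .
docker push myregistry/myapp:latest
# the pipeline then pulls the prebuilt image instead of cloning during the build
docker pull myregistry/myapp:latest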

How to pull docker image from github and build image in ec2?

My actual requirement is to pull a Docker image from GitHub, build a Docker image on an EC2 instance, and push that image to ECR. So I'm just trying to clear my first step by asking for help pulling the image from git; I'm very new to all this.
Let's walk through each step you're asking about in your requirements:
Pull from GitHub - You won't pull a docker image from here; however, you may pull a Dockerfile from here, which can then be used to build an image. The command to do this is just like cloning any other repository: git clone <repository url>
Build the image on ec2 - First you will need to have Docker installed on the EC2 instance. Assuming you're running Ubuntu on your instance, follow the good instructions on Docker's page (https://docs.docker.com/install/linux/docker-ce/ubuntu/). Once Docker is installed, navigate to the directory that has your Dockerfile in it (cloned from git) and type docker build . --tag mytag
Push the image to ecr - To do this, you need the AWS CLI installed on your box, and you need an ACCESS_KEY_ID and SECRET_ACCESS_KEY from AWS IAM. Once you have these, configure your connection by storing them as environment variables or by typing aws configure and entering them. Once your credentials are configured, log into ECR by typing aws ecr get-login --no-include-email and then copy/pasting the command it gives you (you can also put backticks around it to execute it directly and skip the copying step). This will allow you to push to ECR using docker push.
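Put together, that last step looks roughly like this; the account ID, region, and repository name are placeholders, not values from the question:

aws configure                          # enter your ACCESS_KEY_ID / SECRET_ACCESS_KEY
`aws ecr get-login --no-include-email` # backticks execute the printed docker login command
docker tag mytag 123456789012.dkr.ecr.us-east-1.amazonaws.com/myrepo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myrepo:latest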
To clarify some of the points:
GitHub: it is a web-based hosting service for version control using git, so you cannot pull a Docker image from GitHub.
To build a Docker image you need a Dockerfile, so you can fork the GitHub project that contains the Dockerfile.
Then, to build it on EC2, you can check out the project containing the Dockerfile on the EC2 server and build it using:
https://docs.docker.com/engine/reference/commandline/build/
and then you can push it to any registry using:
https://docs.docker.com/engine/reference/commandline/push/

Is git pull, docker-compose build and docker-compose up -d a good way to deploy a complete solution on an empty machine?

Recently, we finished a web application solution using Docker.
https://github.com/yccheok/celery-hello-world/tree/nginx (The actual solution is hosted in a private repository. This example is just a quick glance at how our project structure looks.)
We plan to purchase one empty Linux machine and deploy on it. We might purchase more machines in the future, but with the current traffic, one machine will be sufficient.
My plan for deployment on the single empty machine is
git pull <from private code repository>
docker-compose build
docker-compose up -d
Since we are going to deploy to multiple machines in the near future, I was wondering: is it common practice to deploy a Docker application onto a fresh, empty machine this way?
Is there anything we can utilize from https://hub.docker.com/ so that we don't have to perform git pull during the deployment stage?
You don't want to perform git pull on each machine; your intuition is correct.
Instead, you want to use a remote Docker registry (Docker Hub, for example).
So the right flow, each time your source code (git repo) changes, is:
git pull from all relevant repos.
docker-compose build to build all relevant images.
docker-compose push to push all images (diff) to the remote registry.
docker-compose pull on your production machines, to get the latest updated images.
docker-compose up to start all containers.
The first 3 steps should be done on your CI machine (for example, as a Jenkins job), and steps 4-5 on your production machines.
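For docker-compose push and docker-compose pull to work, each service in the compose file needs an image name pointing at the remote registry. A minimal sketch (repo/web is a hypothetical image name):

version: "3"
services:
  web:
    build: .                # docker-compose build uses this context...
    image: repo/web:latest  # ...and tags the result, so push/pull know the registry target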
EDIT: one thing to consider. I think building via docker-compose is bad. Consider building directly with docker build -f Dockerfile -t repo/image:tag . and, in docker-compose, just specifying the image name.
My opinion is that you should not BUILD images on production machines, because the image might be different from what you expect, and you should limit what you do on production machines. With that being said, I would recommend:
update the code on your local computer (development)
when you push code to git, use some software to build your images from that push, for example GitLab CI (a continuous integration tool)
gitlab-ci will build the image, then it can run some tests on that image, and then deploy it (this built image) to production
on your production machine, just do docker-compose pull && docker-compose up -d and that is it.
I strongly recommend building images on a machine other than the production machines, and using a CI tool to test your images before deploying. For example: https://docs.gitlab.com/ce/ci/README.html
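A rough sketch of such a pipeline, assuming the runner is allowed to run docker commands; repo/image is a placeholder, $CI_COMMIT_SHORT_SHA is a predefined GitLab CI variable, and npm test assumes a node app like the one in the question:

stages:
  - build
  - test

build:
  stage: build
  script:
    # tag each build with the commit so production can pin exact versions
    - docker build -t repo/image:$CI_COMMIT_SHORT_SHA .
    - docker push repo/image:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  script:
    # a failing test exits non-zero and fails the pipeline before any deploy
    - docker run --rm repo/image:$CI_COMMIT_SHORT_SHA npm test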
Deploying on a fresh machine this way would be fine.
The best way to go about it is to make a private repo on https://hub.docker.com/ and push your images there.
Building and shipping the image
git pull
docker build
docker login
docker push repo/image
Pulling the shipped image and deploying
docker login on the server
docker pull repo/image
docker-compose up -d
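Spelled out as commands, assuming repo/image is your private Docker Hub repository:

# build machine: build and ship the image
git pull
docker build -t repo/image:latest .
docker login
docker push repo/image:latest

# server: pull the shipped image and deploy
docker login
docker pull repo/image:latest
docker-compose up -d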
Though I would recommend that you look at container scheduling using Kubernetes and set up your CI/CD stack with Jenkins to automate this process; in case something bad happens, it can be a life saver.

Test Docker cluster in Jenkins

I am having some difficulty configuring Jenkins to run tests on a dockerized application.
First, here is my setup: the project is on Bitbucket, and I have a docker-compose file that runs my application, which is composed of three containers for now (one for mongo, one for redis, one for my node app).
The webhook on Bitbucket works well, and Jenkins is triggered when I push.
However, what I would like a build to do is:
get the repo where my docker-compose file is, run docker-compose so my cluster is running, then run npm test inside the repo (my tests use mocha), and finally have Jenkins notified whether the tests passed or not.
If someone could help me get this chain of operations applied by Jenkins, it would be awesome.
The simplest way is to use the Jenkins pipeline plugin or a shell script.
To build the Docker image and run the compose stack, you can use the docker-compose command. The important thing is that you need to rebuild the Docker image at the compose level (if you only bring the stack up, Jenkins can reuse a previously built image), so run docker-compose build first.
Your Dockerfile should copy all the files of your application.
Then, when your service is ready, you can run a command inside the container using: docker exec {CONTAINER_ID} {COMMAND_TO_RUN_TESTS}.
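As a sketch of the Jenkins shell build step, assuming the node service in the compose file is named app (a hypothetical name); -T is needed because CI shells are non-interactive:

docker-compose build                  # rebuild images so the latest code is used
docker-compose up -d                  # start the mongo, redis, and node containers
docker-compose exec -T app npm test   # run the mocha tests inside the app container
status=$?                             # remember the test result
docker-compose down                   # tear the cluster down either way
exit $status                          # a non-zero exit marks the Jenkins build as failed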

Docker run github branch/pull request

I forked whilp/ssh-agent and created a feature enhancement and submitted a pull request.
I want to reference/use my branch on my CI agents until it is accepted, and I don't want to log into each agent to build a local image.
github.com/rosskevin/ssh-agent, branch feature-known-hosts, is what I'd like to use with the run command; is this possible? I can't find any reference to using GitHub (let alone a branch) with run, only with build.
i.e.
docker run -d --name=ssh-agent whilp/ssh-agent \
github.com/rosskevin/ssh-agent -b feature-known-hosts
Any other advice on Docker project patches/workflow/best practices? This is really easy with Bundler; I'm looking for an analog here.
You can't run a docker image directly from GitHub, because GitHub is made to store only the code itself.
When you run the following command:
docker run -d --name=ssh-agent whilp/ssh-agent
Docker looks for whilp/ssh-agent on Docker Hub, not on GitHub.
Docker Hub is the equivalent of GitHub for Docker images.
To use your pull request the same way you are using whilp/ssh-agent, you need to create an account on Docker Hub and create an automated build based on your ssh-agent fork (tutorial here).
Finally, you will be able to use your version with:
docker run -d --name=ssh-agent <username>/ssh-agent
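For completeness, the build-from-GitHub path that the question mentions does exist: docker build accepts a remote git context, and a #ref suffix selects the branch. It still produces a local image on each machine, which is why the automated Docker Hub build above is the better fit here. A sketch, with an arbitrarily chosen tag name:

# build directly from the remote branch; #feature-known-hosts picks the branch
docker build -t rosskevin/ssh-agent:feature-known-hosts \
  https://github.com/rosskevin/ssh-agent.git#feature-known-hosts
docker run -d --name=ssh-agent rosskevin/ssh-agent:feature-known-hosts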
