Docker GitLab image that updates itself - docker

Does anyone know of a Docker GitLab image that updates itself when a new release comes out?
I pin the version for now because I haven't tried automatic updates using the :latest tagged image.
I have tested the sameersbn/gitlab and gitlab/gitlab-ce images.
Does anyone have a recommendation for updating safely?

You can do that.
Upgrading to the latest GitLab
By using Docker, the upgrade process for GitLab becomes very simple. We update the image in our docker-compose.yml to use gitlab/gitlab-ce:latest.
Now, whenever we pull the images, we always get the latest version.
upgrade_gitlab.sh
#!/bin/bash
# Pull the latest image; grep exits 0 only if the image was already up to date
docker-compose pull | grep -q "Image is up to date"
IMAGE_UPDATED=$?
if [ $IMAGE_UPDATED -ne 0 ]; then
    echo "New image found for Gitlab"
    # Take a backup before upgrading
    echo "Backing up old Gitlab"
    docker-compose exec gitlab gitlab-rake gitlab:backup:create
    # Recreate the container from the new image
    docker-compose up -d
    # Check the logs for any issues
    docker-compose logs -f
else
    echo "No update found for Gitlab"
fi
Now you can schedule this script, for example via cron as sketched below, or trigger it from Jenkins, whichever node you like.
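A minimal cron sketch; the script path and log location are assumptions, and the script would need a cd into the directory containing docker-compose.yml before the docker-compose calls:
# run the upgrade check every night at 03:00 (add via `crontab -e`)
0 3 * * * /opt/gitlab/upgrade_gitlab.sh >> /var/log/gitlab_upgrade.log 2>&1
Note that docker-compose logs -f follows the logs indefinitely, so for unattended runs you might drop the -f flag.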
PS: Taken from my article http://tarunlalwani.com/post/migrating-gitlab-6-mysql-to-latest-gitlab/

Related

Check if Docker image exists in cloud repo

Say I have the image tag "node:9.2", as in FROM node:9.2...
Is there an API I can hit to see whether an image with the tag "node:9.2" exists and can be retrieved, before I actually try docker build ...?
This script will build only if the image does not exist.
Update for v2:
function docker_tag_exists() {
    curl --silent -f -lSL https://hub.docker.com/v2/repositories/$1/tags/$2 > /dev/null
}
Use the function above for the v2 API; the original v1 version follows.
#!/bin/bash
function docker_tag_exists() {
    curl --silent -f -lSL https://index.docker.io/v1/repositories/$1/tags/$2 > /dev/null
}

if docker_tag_exists library/node 9.11.2-jessie; then
    echo "Docker image exists..."
    echo "Pulling existing docker image..."
    # The image exists remotely, so pull it
    docker pull node:9.11.2-jessie
else
    echo "Docker image does not exist remotely..."
    echo "Building docker image..."
    # Build the docker image here, with an absolute or relative path
    docker build -t nodejs .
fi
Adapted with little modification from the link below.
If the registry is private, check this link for a variant that authenticates with a username and password.
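For a self-hosted registry that speaks the Registry HTTP API v2 and accepts basic auth, a hedged sketch might look like this (registry.example.com and the credential variables are placeholders):
function private_tag_exists() {
    # GET on the manifests endpoint returns 200 if the tag exists;
    # -f makes curl exit non-zero otherwise
    curl --silent -f -u "$REGISTRY_USER:$REGISTRY_PASS" \
        -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        "https://registry.example.com/v2/$1/manifests/$2" > /dev/null
}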
If you have experimental features enabled for the Docker CLI ("experimental": "enabled" in ~/.docker/config.json), you can use this command:
docker manifest inspect node:9.2
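Since docker manifest inspect exits non-zero when the tag cannot be found, you can use it directly in a conditional; a minimal sketch:
# no pull needed: only the manifest is fetched
if docker manifest inspect node:9.2 > /dev/null 2>&1; then
    echo "node:9.2 exists in the registry"
else
    echo "node:9.2 was not found"
fi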
Yes.
docker image pull node:9.2
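Unlike the manifest check above, this actually downloads the image, but the exit status still tells you whether the tag could be retrieved; a minimal sketch:
# pull succeeds (exit 0) only if the tag exists and is reachable
if docker image pull node:9.2 > /dev/null 2>&1; then
    echo "node:9.2 exists and is now available locally"
else
    echo "node:9.2 could not be retrieved"
fi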

GitLab CI invalid argument on job for Docker build

So I'm trying to set up my GitLab CI to trigger a job on git push that builds and deploys my Docker image. This is the .gitlab-ci.yml file I'm using, based on an example from the GitLab docs (the Elixir yml):
stages:
  - build

build:
  before_script:
    - docker build -f Dockerfile.build -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF .
    - docker create
      -v /build/deps
      -v /build/_build
      -v /build/rel
      -v /root/.cache/aceapp/
      --name build_data_$CI_PROJECT_ID_$CI_BUILD_REF busybox /bin/true
  tags:
    - docker
  stage: build
  script:
    - docker run --volumes-from build_data_$CI_PROJECT_ID_$CI_BUILD_REF --rm -t ci-project-build-$CI_PROJECT_ID:$CI_BUILD_REF
The output when pushing to the GitLab instance is this:
Running with gitlab-runner 10.7.2 (b5e03c94)
on my.host.rhel.runner 8f724ea7
Using Shell executor...
Running on my.host.local...
Fetching changes...
HEAD is now at 14351c4 Merge branch 'Development' into 'master'
From https://my.host.example/zalmosc/ace-app
14351c4..9fa2d43 master -> origin/master
Checking out 9fa2d435 as master...
Skipping Git submodules setup
$ # Auto DevOps variables and functions # collapsed multi-line command
$ setup_docker
$ build
Logging to GitLab Container Registry with CI credentials...
Login Succeeded
Building Dockerfile-based application...
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for t: Error parsing reference: "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" is not a valid repository/tag: invalid reference format
See 'docker build --help'.
ERROR: Job failed: exit status 1
I understand that the Docker tag is not valid (is the before_script: really triggered based on the name?), and I'm looking for help regarding a) a solution, and b) how I can learn more about the requirements for a pipeline that builds Docker images with the default settings. Do I need to tag my Docker image locally and then somehow add that to my git commit?
The thing is, -t is for tagging your Docker image. See the docs here.
The tag should be formatted like name:version, and you are giving it /master:9fa2d4358e6c426b882e2251aa5a49880013614b, which is not a valid tag. You could try deleting the / before master.
Your tag cannot begin with '/':
$ docker build -f Dockerfile.build -t /master:9fa2d4358e6c426b882e2251aa5a49880013614b .
invalid argument "/master:9fa2d4358e6c426b882e2251aa5a49880013614b" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
# remove '/'
$ docker build -f Dockerfile.build -t master:9fa2d4358e6c426b882e2251aa5a49880013614b .
Sending build context to Docker daemon 3.584kB
Step 1/3 : FROM ubuntu:16.04
---> 14f60031763d
...
If you are not using the built-in registry, you might have to set the CI_REGISTRY_IMAGE variable to something. It seems that if you don't set it, the image reference becomes /master and causes this error. You can set the variable on the CI settings page, or when creating a new pipeline, e.g. CI_REGISTRY_IMAGE=gitlab.com/user/project.
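To see why the error starts with a slash, note how the reference is composed; a small illustration (the exact template is inferred from the error output above):
# with CI_REGISTRY_IMAGE unset, the composed reference begins with "/"
# and docker rejects it as an invalid repository/tag
echo "${CI_REGISTRY_IMAGE}/master:${CI_BUILD_REF}"
# -> /master:9fa2d4358e6c426b882e2251aa5a49880013614b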

Docker: permission denied while trying to connect to Docker Daemon with local CircleCI build

I have a very simple config.yml:
version: 2
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
    steps:
      - checkout
      - run: node -e "console.log('Hello from NodeJS ' + process.version + '\!')"
      - run: yarn
      - setup_remote_docker
      - run: docker build .
All it does is boot a Node image, check that Node is running, do a yarn install, and run a docker build.
My Dockerfile is nothing special; it has a COPY and an ENTRYPOINT.
When I run circleci build on my MacBook Air using Docker Native, I get the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix://[...]
If I change the docker build . command to sudo docker build ., everything works as planned locally with circleci build.
However, pushing this change to CircleCI will result in an error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
So, to summarize: using sudo works, locally, but not on CircleCI itself. Not using sudo works on CircleCI, but not locally.
Is this something the CircleCI staff has to fix, or is there something I can do?
For reference, I have posted this question on the CircleCI forums as well.
I've created a workaround for myself.
In the very first step of the config.yml, I run this command:
if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
    echo "This is a local build. Enabling sudo for docker"
    echo sudo > ~/sudo
else
    echo "This is not a local build. Disabling sudo for docker"
    touch ~/sudo
fi
Afterwards, you can do this:
eval `cat ~/sudo` docker build .
Explanation:
The first snippet checks if the CircleCI-provided environment variable CIRCLE_SHELL_ENV contains localbuild. This is only true when running circleci build on your local machine.
If true, it creates a file called sudo with contents sudo in the home directory.
If false, it creates a file called sudo with NO contents in the home directory.
The second snippet reads the ~/sudo file and prepends its contents to the command you give afterwards. If the ~/sudo file contains sudo, the command in this example becomes sudo docker build .; if it contains nothing, it becomes docker build . (with a leading space, which is ignored).
This way, both the local (circleci build) builds and remote builds will work.
To iterate on Jeff Huijsmans's answer, an alternative version is to use a Bash variable for docker:
- run:
    name: Set up docker
    command: |
      if [[ $CIRCLE_SHELL_ENV == *"localbuild"* ]]; then
        echo "export docker='sudo docker'" >> $BASH_ENV
      else
        echo "export docker='docker'" >> $BASH_ENV
      fi
Then you can use it in your config:
- run:
    name: Verify docker
    command: $docker --version
You can see this in action in the test for my Dotfiles repository.
Documentation about environment variables in CircleCI.
You might also solve your issue by running the Docker image as root. Specify user: root under the image parameter:
...
jobs:
  build:
    working_directory: ~/app
    docker:
      - image: circleci/node:8.4.0
        user: root
    steps:
      - checkout
      ...
...

Detect if Docker image would change on running build

I am building a Docker image using a command line like the following:
docker build -t myimage .
Once this command has succeeded, rerunning it is a no-op because the image specified by the Dockerfile has not changed. Is there a way to detect whether the Dockerfile (or one of the build-context files) has subsequently changed, without rerunning this command?
Looking at docker inspect $image_name from one build to another, several pieces of information don't change if the image hasn't changed. One of them is the image Id. So I used the Id to check whether an image has changed, as follows.
First, you can get the image Id like this:
docker inspect --format {{.Id}} $docker_image_name
Then, to check if there is a change after a build, you can follow these steps:
Get the image id before the build
Build the image
Get the image id after the build
Compare the two ids: if they match, there was no change; if they don't, there was.
Code: here is a working bash script implementing the idea above:
#!/bin/bash
# Save the id of the current image, then read it back
docker inspect --format {{.Id}} $docker_image_name > deploy/last_image_build_id.log
last_docker_id=$(cat deploy/last_image_build_id.log)
# Rebuild and fetch the resulting image id
docker build -t $docker_image_name .
docker_id_after_build=$(docker inspect --format {{.Id}} $docker_image_name)
if [ "$docker_id_after_build" != "$last_docker_id" ]; then
    echo "image changed"
else
    echo "image didn't change"
fi
There isn't a dry-run option, if that's what you are looking for. You can build with a different tag to avoid affecting existing images and look for ---> Using cache in the output (then delete the tag if you don't want it).
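A minimal sketch of that approach, assuming the classic builder (which prints ---> Using cache for each step it can reuse); the probe tag name is arbitrary:
# build under a throwaway tag so existing images are untouched
docker build -t myimage-dryrun . | tee /tmp/build.log
# steps satisfied from the cache print "---> Using cache"; a build where
# every step prints it produced no new layers
grep -- '---> Using cache' /tmp/build.log
# clean up the probe tag afterwards
docker rmi myimage-dryrun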

How to test the container or image after docker build?

I have the following Dockerfile
############################################################
# Purpose   : Dockerize Django App to be used in AWS EC2
# Django    : 1.8.1
# OS        : Ubuntu 14.04
# WebServer : nginx
# Database  : Postgres inside RDS
# Python    : 2.7
# VERSION   : 0.1
############################################################
FROM ubuntu:14.04
MAINTAINER Kim Stacks, kimcity@gmail.com
# make sure package repository is up to date
RUN echo "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe" > /etc/apt/sources.list
RUN apt-get update
# install python
# install nginx
Inside my VM, I did the following:
docker build -t ubuntu1404/djangoapp .
It builds successfully.
What do I do to run the Docker image?
Where is the image or container?
I have already tried running
docker run ubuntu1404/djangoapp
Nothing happens.
What I see when I run docker images:
root@vagrant-ubuntu-trusty-64:/var/virtual/Apps/DockerFiles/Django27InUbuntu# docker images
REPOSITORY             TAG      IMAGE ID       CREATED          VIRTUAL SIZE
ubuntu1404/djangoapp   latest   cfb161605c8e   10 minutes ago   198.3 MB
ubuntu                 14.04    07f8e8c5e660   10 days ago      188.3 MB
hello-world            latest   91c95931e552   3 weeks ago      910 B
When I run docker ps, nothing shows up.
You have to give your container a command to process.
Example: sh
You could try:
docker run -ti yourimage sh
(-ti keeps an interactive terminal open)
If you want to launch a daemon (like a server), you would enter something like:
docker run -d yourimage daemontolaunch
Use docker help run for more options.
You can also set a default behaviour with the CMD instruction in your Dockerfile, so you won't have to give the command to your container each time you want to run it; a sketch follows below.
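As an illustration, you could append a default command to the Dockerfile from the question; the CMD line here is hypothetical, since the question doesn't say how the Django app is started:
# append a default command (the runserver invocation is a placeholder;
# use whatever actually starts your Django app)
cat >> Dockerfile <<'EOF'
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
EOF
docker build -t ubuntu1404/djangoapp .
# now a plain `docker run ubuntu1404/djangoapp` has something to execute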
EDIT - about container removal:
Containers and images are different.
A container is an instance of an image.
You can run several containers from the same image.
The container automatically stops when the process it runs terminates.
But the container isn't deleted (just stopped, so you can restart it).
But if you want to remove it (removing a container doesn't remove the image), there are two ways:
automatically, by adding the --rm option to docker run so the container is removed at the end of the process;
manually, by using the docker rm command and giving it the container ID or its name (a container has to be stopped before being removed; use docker stop for this).
A useful command:
Use docker ps to list containers: -q displays only the container IDs, -a includes stopped containers. See the example below.
More here.
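Combining those flags, a common clean-up one-liner:
# docker ps -aq expands to the IDs of every container, stopped included;
# docker rm removes each one (running containers are skipped with an error)
docker rm $(docker ps -aq)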
EDIT 2:
This could also help you discover Docker, if you haven't tried it yet.
How to test the container or image after docker build?
In order to test, you can write a bash script that does the job: https://blog.brazdeikis.io/posts/docker-image-tests
By the way, from the post I see that it doesn't match the question in the title, so I added the link for the souls who arrived here based on the title...
Download the latest shaded dist from https://github.com/dgroup/docker-unittests/releases:
wget https://github.com/dgroup/docker-unittests/releases/download/s1.1.1/docker-unittests-app-1.1.1.jar
Define an *.yml file with the tests:
version: 1.1
setup:
  - apt-get update
  - apt-get install -y tree
tests:
  - assume: java version is 1.9, Debian build
    cmd: java -version
    output:
      contains:
        - openjdk version "9.0.1"
        - build 9.0.1+11-Debian

  - assume: curl version is 7.xxx
    cmd: curl --version
    output:
      startsWith: curl 7.
      matches:
        - "^curl\\s7.*\\n.*\\nProtocols.+ftps.+https.+telnet.*\\n.*\\n$"
      contains:
        - AsynchDNS IDN IPv6 Largefile GSS-API

  - assume: Setup section installed `tree`
    cmd: tree --version
    output:
      contains: ["Steve Baker", "Florian Sesser"]
Run the tests for an image:
java -jar docker-unittests.jar -f image-tests.yml -i openjdk:9.0.1-11
(screenshot of the test run: https://i.stack.imgur.com/DSv72.png)
