When running the following command:
helm upgrade --cleanup-on-fail \
  --install $releaseName $dockerHubName/$dockerHubRepo:$tag \
  --namespace $namespace \
  --create-namespace \
  --values config.yaml
I get the following error:
Error: Failed to download "$dockerHubName/$dockerHubRepo"
I've also tried different tags, with semantic versioning (tag="1.0.0"), and there is an image with the tag "latest" on the Docker Hub repo (which is public).
This also works with the base JupyterHub image jupyterhub/jupyterhub.
Based on information from the JupyterHub for Kubernetes site, to use a different image from jupyter/docker-stacks, the following steps are required:
Modify your config.yaml file to specify the image. For example:
singleuser:
  image:
    # You should replace the "latest" tag with a fixed version from:
    # https://hub.docker.com/r/jupyter/datascience-notebook/tags/
    # Inspect the Dockerfile at:
    # https://github.com/jupyter/docker-stacks/tree/HEAD/datascience-notebook/Dockerfile
    name: jupyter/datascience-notebook
    tag: latest
Apply the changes by following the directions listed in the "apply the changes" section of the documentation.
If you have configured prePuller.hook.enabled, all the nodes in your
cluster will pull the image before the hub is upgraded to let users
use the image. The image pulling may take several minutes to complete,
depending on the size of the image.
Restart your server from JupyterHub control panel if you are already logged in.
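Note that in the failing command at the top, the chart argument is a Docker image reference ($dockerHubName/$dockerHubRepo:$tag), which helm then tries to download as a chart. The chart should stay the JupyterHub chart, and the custom image only goes into config.yaml. A minimal sketch, assuming the chart repository was added beforehand with helm repo add jupyterhub https://hub.jupyter.org/helm-chart/:

# Sketch: the chart argument is the JupyterHub chart, not the Docker image
helm upgrade --cleanup-on-fail \
  --install $releaseName jupyterhub/jupyterhub \
  --namespace $namespace \
  --create-namespace \
  --values config.yaml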
We want to use Paketo.io / Cloud Native Buildpacks (CNB) in GitLab CI in the simplest way possible. Our GitLab setup uses an AWS EKS cluster with unprivileged GitLab CI Runners leveraging the Kubernetes executor. We also don't want to introduce security risks by using Docker in our builds, so we neither have our host's /var/run/docker.sock exposed nor want to use docker:dind.
We found some guides on how to use Paketo with GitLab CI, like this one: https://tanzu.vmware.com/developer/guides/gitlab-ci-cd-cnb/ . But as described beneath the headline "Use Cloud Native Buildpacks with GitLab in GitLab Build Job WITHOUT Using the GitLab Build Template", the approach relies on Docker and the pack CLI. We tried to replicate this in our .gitlab-ci.yml, which looks like this:
image: docker:20.10.9

stages:
  - build

before_script:
  - |
    echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)"
    apk add --no-cache curl
    (curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.21.1/pack-v0.21.1-linux.tgz" | tar -C /usr/local/bin/ --no-same-owner -xzv pack)

build-image:
  stage: build
  script:
    - pack --version
    - >
      pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest
      --builder paketobuildpacks/builder:base
      --path .
But as outlined, our setup does not support Docker, and we end up with the following error in our logs:
...
$ echo "install pack CLI (see https://buildpacks.io/docs/tools/pack/)" # collapsed multi-line command
install pack CLI (see https://buildpacks.io/docs/tools/pack/)
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
(1/4) Installing brotli-libs (1.0.9-r5)
(2/4) Installing nghttp2-libs (1.43.0-r0)
(3/4) Installing libcurl (7.79.1-r0)
(4/4) Installing curl (7.79.1-r0)
Executing busybox-1.33.1-r3.trigger
OK: 12 MiB in 26 packages
pack
$ pack --version
0.21.1+git-e09e397.build-2823
$ pack build $REGISTRY_GROUP_PROJECT/$CI_PROJECT_NAME:latest --builder paketobuildpacks/builder:base --path .
ERROR: failed to build: failed to fetch builder image 'index.docker.io/paketobuildpacks/builder:base': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Any idea how to use Paketo Buildpacks with GitLab CI without having Docker present inside our GitLab Kubernetes runners (which seems to be kind of a best practice)? We also don't want our setup to become too complex - e.g. by adding kpack.
TLDR;
Use the Buildpack's lifecycle directly inside your .gitlab-ci.yml (here's a fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
The details: "using the lifecycle directly"
There are ongoing discussions about this topic. Especially have a look into https://github.com/buildpacks/pack/issues/564 and https://github.com/buildpacks/pack/issues/413#issuecomment-565165832. As stated there:
If you're looking to build images in CI (not locally), I'd encourage
you to use the lifecycle directly for that, so that you don't need
Docker. Here's an example:
The link to the example is broken, but it refers to the Tekton implementation of how to use buildpacks in a Kubernetes environment. Here we get a first clue about what Stephen Levine referred to as "using the lifecycle directly". The crucial point inside it is the usage of command: ["/cnb/lifecycle/creator"]. So this is the lifecycle everyone is talking about! And there's good documentation about this command, which can be found in this CNB RFC.
Choosing a good image: paketobuildpacks/builder:base
So how do we develop a working .gitlab-ci.yml? Let's start simple. Digging into the Tekton implementation, you'll see that the lifecycle command is executed inside an environment defined by BUILDER_IMAGE, which itself is documented as "The image on which builds will run (must include lifecycle and compatible buildpacks)". That sounds familiar! Can't we simply pick the builder image paketobuildpacks/builder:base from our pack CLI command? Let's try this locally on our workstation before committing too much noise into our GitLab. Choose a project you want to build (I created an example Spring Boot app at gitlab.com/jonashackt/microservice-api-spring-boot you can clone, if you'd like) and run:
docker run --rm -it -v "$PWD":/usr/src/app -w /usr/src/app paketobuildpacks/builder bash
Now, inside the container powered by the paketobuildpacks/builder image, try to run the Paketo lifecycle directly with:
/cnb/lifecycle/creator -app=. microservice-api-spring-boot:latest
I only used the -app parameter out of the many possible parameters for the creator command, since most of them have quite good defaults. But as my app directory is the current directory rather than the default /workspace, I configured it explicitly. We also need to define an <image-name> at the end, which will simply be used as the resulting container image name.
The first .gitlab-ci.yml
Both commands worked on my local workstation, so let's finally create a .gitlab-ci.yml using this approach (here's a fully working example .gitlab-ci.yml):
image: paketobuildpacks/builder

stages:
  - build

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
docker login without docker
As we don't have Docker available inside our Kubernetes runners, we can't log in to the GitLab Container Registry as described in the docs. So the following error occurred to me using this first approach:
===> ANALYZING
ERROR: failed to get previous image: connect to repo store "gitlab.yourcompanyhere.cloud:4567/yourgroup/microservice-api-spring-boot:latest": GET https://gitlab.yourcompanyhere.cloud/jwt/auth?scope=repository%3Ayourgroup%2Fmicroservice-api-spring-boot%3Apull&service=container_registry: DENIED: access forbidden
Cleaning up project directory and file based variables 00:01
ERROR: Job failed: command terminated with exit code 1
Using the approach described in this SO answer fixed the problem. We need to create a ~/.docker/config.json containing the GitLab Container Registry login information - the Paketo build will then pick it up, as stated in the docs:
If CNB_REGISTRY_AUTH is unset and a docker config.json file is
present, the lifecycle SHOULD use the contents of this file to
authenticate with any matching registry.
Inside our .gitlab-ci.yml this could look like:
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
Our final .gitlab-ci.yml
As we're using image: paketobuildpacks/builder at the top of our .gitlab-ci.yml, we can now leverage the lifecycle directly, which is what we wanted to do in the first place. Just remember to use the correct GitLab CI variables to describe your <image-name>, like this:
/cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Otherwise the buildpack lifecycle's analyze step will break, and the image finally won't get pushed to the GitLab Container Registry. So our final .gitlab-ci.yml looks like this (here's the fully working example):
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Our builds should now run successfully using Paketo/Buildpacks without pack CLI and Docker:
See the full log of the example project here.
I'm using GitLab CI for my simple project.
Everything is OK: my runner is working on my local machine (Ubuntu 18.04) and I tested it with a simple .gitlab-ci.yml.
Now I'm trying to use the following yml:
image: ubuntu:18.04
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - sudo apt-get update
but I get the following error:
/bin/bash: line 110: sudo: command not found
How can I use sudo?
You shouldn't have to worry about updating the Ubuntu image used in a GitLab CI pipeline job, because the Docker container is destroyed when the job is finished. Furthermore, the Docker images are frequently updated. If you look at ubuntu:18.04's Docker Hub page, it was updated just 2 days ago: https://hub.docker.com/_/ubuntu?tab=tags&page=1&ordering=last_updated
Since you're doing an update here, I'm going to assume that next you might want to install some packages. It's possible to do so, but not advised, since every pipeline you run will have to install those packages, which can really slow them down. Instead, you can create a custom Docker image based on a parent image and customize it that way. Then you can either upload that Docker image to Docker Hub or GitLab's registry (if using self-hosted GitLab, it has to be enabled by an admin), or build it on all of your gitlab-runners.
Here's a dumb example:
# .../custom_ubuntu:18.04/Dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y git
Next you can build the image: docker build /path/to/directory/that/has/dockerfile, tag it so you can reference it in your pipeline config file: docker tag aaaaafffff59 my_org/custom_ubuntu:18.04, and then, if needed, push the tagged image: docker push my_org/custom_ubuntu:18.04.
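Put together, that could look roughly like this (the paths and the image ID aaaaafffff59 are just the placeholders from above):

# Build the image from the directory containing the Dockerfile
docker build /path/to/directory/that/has/dockerfile

# Tag the resulting image ID so the pipeline config can reference it by name
docker tag aaaaafffff59 my_org/custom_ubuntu:18.04

# Optionally push it to Docker Hub or your GitLab registry
docker push my_org/custom_ubuntu:18.04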
In your .gitlab-ci.yml file, reference this custom Ubuntu image:
image: my_org/custom_ubuntu:18.04
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"
    - git --version # ensures the package you need is available
You can read more about using custom images in Gitlab CI here: https://docs.gitlab.com/charts/advanced/custom-images/
I'm using Kubernetes for my production environment (I'm new to these kinds of configurations). This is an example of one of my deployment files (with changes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myProd
  labels:
    app: thisIsMyProd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thisIsMyProd
  template:
    metadata:
      labels:
        app: thisIsMyProd
    spec:
      containers:
        - name: myProd
          image: DockerUserName/MyProdProject # <==== Latest
          ports:
            - containerPort: 80
Now I wanted to make it work with Travis CI, so I made something similar to this:
sudo: required
services:
  - docker

env:
  global:
    - LAST_COMMIT_SHA=$(git rev-parse HEAD)
    - SERVICE_NAME=myProd
    - DOCKER_FILE_PATH=.
    - DOCKER_CONTEXT=.

addons:
  apt:
    packages:
      - sshpass

before_script:
  - docker build -t $SERVICE_NAME:latest -f $DOCKER_FILE_PATH $DOCKER_CONTEXT

script:
  # Mocking run test cases

deploy:
  - provider: script
    script: bash ./deployment/deploy-production.sh
    on:
      branch: master
And finally here is the deploy-production.sh script:
#!/usr/bin/env bash
# Log in to the docker CLI
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
# Build images
docker build -t $DOCKER_USERNAME/$SERVICE_NAME:latest -t $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
# Take those images and push them to docker hub
docker push $DOCKER_USERNAME/$SERVICE_NAME:latest
docker push $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA
# Run deployment script in deployment machine
export SSHPASS=$DEPLOYMENT_HOST_PASSWORD
ssh-keyscan -H $DEPLOYMENT_HOST >> ~/.ssh/known_hosts
# Run Kubectl commands
kubectl apply -f someFolder
kubectl set image ... # instead of the `...`, the rest of the command that sets the image with the SHA on the deployments
Now here are my questions:
When Travis finishes its work, the deploy-production.sh script will run when merging to the master branch. Now I have a concern about the kubectl step: for the first deployment, when we apply the deployment it will pull the image from Docker Hub and try to run it, and after that the set image command will run, changing the image of these deployments. Will this make the deployment happen twice?
When I tried to deploy for the second time, I noticed the deployment used an old version of the latest image because it found it locally. After searching I found imagePullPolicy and set it to Always. But imagine that I didn't use that imagePullPolicy attribute: what would really happen in this case? I know the containers running old-version code come from the first apply command, but wouldn't running set image fix that? To clarify my question: does Kubernetes use some random way to select the pods that are going to go down? Or does it mark the pods with the order in which the commands ran, so it can detect that the set image pods should remain and the apply pods are the ones that need to be terminated?
Isn't pulling every time harmful? Should I always make the deployment image use something other than latest, to avoid that hassle?
Thanks
If the image tag is the same in both apply and set image, then only the apply action re-deploys the Deployment (in which case you do not need the set image command). If they refer to different image tags, then yes, the deployment will run twice.
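As an illustration only (the exact set image command in the deploy script above is elided; the deployment and container names are taken from the manifest at the top), the two steps in question look roughly like this:

# Apply the manifests (pulls whatever tag the manifest references)
kubectl apply -f someFolder

# Then point the container at the commit-specific tag
kubectl set image deployment/myProd myProd=$DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA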
If you use the latest tag, applying a manifest that uses the latest tag with no modification WILL NOT re-deploy the Deployment. You need to introduce a modification to the manifest file in order to force Kubernetes to re-deploy. In my case, I use the date command to generate a TIMESTAMP variable that is passed into the env spec of the pod container; my container does not use it in any way, it is there just to force a re-deploy of the Deployment. Or you can also use kubectl rollout restart deployment/name if you are using Kubernetes 1.15 or later.
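A minimal sketch of that trick (the variable name, the template file name and the envsubst step are my own, hypothetical choices; the rollout restart line is the built-in alternative for Kubernetes 1.15+):

# In the Deployment template, an env var the app never reads, only there to change the pod spec:
#   env:
#     - name: FORCE_REDEPLOY_AT
#       value: "${TIMESTAMP}"
# In the deploy script, generate a fresh value and substitute it before applying:
export TIMESTAMP=$(date +%s)
envsubst < deployment.template.yaml | kubectl apply -f -

# Or, on Kubernetes 1.15 or later, simply:
kubectl rollout restart deployment/myProd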
Other than wasted bandwidth, or if you are being charged by how many times you pull a Docker image (poor you), there is no harm in an additional image pull just to be sure you are using the latest image version. Even if you use a specific image tag with version numbers like 1.10.112-rc5, there will be cases where you or your fellow developers forget to update the version number when pushing a modified image version. IMHO, imagePullPolicy=Always should be the default rather than explicitly required.
All of us know that helm charts are amazing and make our lives easier.
However, I have a use case where I would like to use helm charts - WITHOUT INTERNET ACCESS.
And there are two steps:
Downloading the chart from Git
Pulling Docker images from Docker Hub (specified in the values.yaml files)
How can I do this?
Using a helm chart offline involves pulling the chart from the internet and then installing it:
$ helm pull <chart name>
$ ls #The chart will be pulled as a tar to the local directory
$ helm install <whatever release name you want> <chart name>.tgz
For this method to work you'll need all the docker images the chart uses locally as you mentioned.
I know the answer to the first part of my question.
You can actually git clone https://github.com/kubernetes/charts.git to get all the official charts from GitHub, and then specify the path to a chart (folder) on your filesystem that you want to install.
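For plain helm (without helmfile), that could look roughly like this, with Helm 3 syntax, an arbitrary clone destination and the values file referenced below (the release name is adapted to use hyphens, which Helm 3 expects):

# Clone the charts once on a machine that has internet access
git clone https://github.com/kubernetes/charts.git /opt/helm/charts

# Install straight from the local chart directory, no chart repository needed
# (add --create-namespace on Helm >= 3.2 if the namespace does not exist yet)
helm install my-prometheus /opt/helm/charts/stable/prometheus \
  --namespace monitoring \
  --values values/values_prometheus_ns_monitoring.yaml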
In my case, this takes the form of a helmfile. Execute the command like this:
helmfile -f deployment.yaml sync
cat deployment.yaml
...
repositories:
  - name: roboll
    url: http://roboll.io/charts

context: example.int.com   # kube-context (--kube-context)

releases:
  # Prometheus deployment
  - name: my_prometheus                           # name of this release
    namespace: monitoring                         # target namespace
    chart: /opt/heml/charts/stable/prometheus     # the chart being installed to create this release, referenced by `repository/chart` syntax
    values: ["values/values_prometheus_ns_monitoring.yaml"]
    set:                                          # values (--set)
      - name: rbac.create
        value: true

  # Grafana deployment
  - name: my_grafana                              # name of this release
    namespace: monitoring                         # target namespace
    chart: /opt/heml/charts/stable/grafana
    values: ["values/values_grafana_ns_monitoring.yaml"]
So as you can see I have specified some custom values_<software>_ns_monitoring.yaml files.
The second part of my original question is still unanswered.
I want to be able to tell docker to use a local docker image in this section
cat values_grafana_ns_monitoring.yaml
replicas: 1
image:
  repository: grafana/grafana
  tag: 5.0.4
  pullPolicy: IfNotPresent
I have managed to manually copy over and load the Docker image, so it is visible on my computer, but I can't figure out how to convince docker + helmfile to use my image. The goal is a totally offline installation. ANY IDEAS???
sudo docker images
[sudo] password for jantoth:
REPOSITORY TAG IMAGE ID CREATED SIZE
my_made_up_string/custom_grafana/custom_grafana 5.1.2 917f46a60761 6 days ago 238 MB
Pulling docker images from Dockerhub WITHOUT INTERNET ACCESS
Obviously, that is not possible. However, the problem can be addressed by splitting it into stages:
Pull the Docker images from Docker Hub on a system that has internet access.
Save the Docker images with docker save, copy them to the destination environment where you want to do the offline installation, and load them back there with docker load (see the sketch after this list).
Set up a Docker registry in the destination environment and tag / push the Docker images into this registry. (See: Local docker registry)
In the helm charts / Kubernetes yaml files, update the image references to point to the local Docker registry. (See: Kubernetes and private docker registry)
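A minimal sketch of those stages, using the grafana image from the values file above (my-local-registry:5000 is a placeholder for your in-environment registry):

# On a machine WITH internet access: pull and export the image
docker pull grafana/grafana:5.0.4
docker save grafana/grafana:5.0.4 -o grafana_5.0.4.tar

# On the offline machine: load, retag for the local registry and push
docker load -i grafana_5.0.4.tar
docker tag grafana/grafana:5.0.4 my-local-registry:5000/grafana/grafana:5.0.4
docker push my-local-registry:5000/grafana/grafana:5.0.4

The values file would then set image.repository to my-local-registry:5000/grafana/grafana so that the cluster pulls from the local registry instead of Docker Hub.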
Alternatively, you can look at offline packaging / deployment tools like Gravity
I totally understand that Concourse is meant to be stateless, but nevertheless is there any way to re-use already pulled docker images?
In my case, I build ~10 docker images which have the same base image, but each time build is triggered Concourse pulls base image 10 times.
Is it possible to pull that image once and re-use it later (at least in scope of the same build) using standard docker resource?
Yes, it should be possible to do that using a custom image and scripting it in a shell script, but I'm not fond of reinventing the wheel.
If standard docker resource does not allow that, is it possible to extend it somehow to enable such behaviour?
--cache-from is not helpful, as CI spends most of its time pulling the image, not building new layers.
Theory
First, some Concourse theory (at least as of v3.3.1):
People often talk about Concourse having a "cache", but misinterpret what that means. Every concourse worker has a set of volumes on disk which are left around, forming a volume cache. This volume cache contains volumes that have been populated by resource get and put and task outputs.
People also often misunderstand how the docker-image-resource uses Docker. There is no global docker server running with your Concourse installation, in fact Concourse containers are not Docker containers, they are runC containers. Every docker-image-resource process (check, get, put) is run inside of its own runC container, inside of which there is a local docker server running. This means that there's no global docker server that is pulling docker images and caching the layers for further use.
What this implies is that when we talk about caching with the docker-image-resource, it means loading or pre-pulling images into the local docker server.
Practice
Now to the options for optimizing build times:
load_base
Background
The load_base param in your docker-image-resource put tells the resource to first docker load an image (retrieved via a get) into its local docker server, before building the image specified via your put params.
This is useful when you need to pre-populate an image into your "docker cache." In your case, you would want to preload the image used in the FROM directive. This is more efficient because it uses Concourse's own volume caching to only pull the "base" once, making it available to the docker server during the execution of the FROM command.
Usage
You can use load_base as follows:
Suppose you want to build a custom python image, and you have a git repository with a file ci/Dockerfile as follows:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y python python-pip
If you wanted to automate building/pushing of this image while taking advantage of Concourse volume caching as well as Docker image layer caching:
resources:
  - name: ubuntu
    type: docker-image
    source:
      repository: ubuntu

  - name: python-image
    type: docker-image
    source:
      repository: mydocker/python

  - name: repo
    type: git
    source:
      uri: ...

jobs:
  - name: build-image-from-base
    plan:
      - get: repo
      - get: ubuntu
        params: {save: true}
      - put: python-image
        params:
          load_base: ubuntu
          dockerfile: repo/ci/Dockerfile
cache & cache_tag
Background
The cache and cache_tag params in your docker-image-resource put tell the resource to first pull a particular image+tag from your remote source, before building the image specified via your put params.
This is useful when it's easier to pull down the image than it is to build it from scratch, e.g. when you have a very long build process, such as expensive compilations.
This DOES NOT utilize Concourse's volume caching, and utilizes Docker's --cache-from feature (which runs the risk of needing to first perform a docker pull) during every put.
Usage
You can use cache and cache_tag as follows:
Suppose you want to build a custom ruby image, where you compile ruby from source, and you have a git repository with a file ci/Dockerfile as follows:
FROM ubuntu
# Install Ruby
RUN mkdir /tmp/ruby;\
cd /tmp/ruby;\
curl ftp://ftp.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p247.tar.gz | tar xz;\
cd ruby-2.0.0-p247;\
chmod +x configure;\
./configure --disable-install-rdoc;\
make;\
make install;\
gem install bundler --no-ri --no-rdoc
RUN gem install nokogiri
If you wanted to automate building/pushing of this image while taking advantage of only Docker image layer caching:
resources:
  - name: compiled-ruby-image
    type: docker-image
    source:
      repository: mydocker/ruby
      tag: 2.0.0-compiled

  - name: repo
    type: git
    source:
      uri: ...

jobs:
  - name: build-image-from-cache
    plan:
      - get: repo
      - put: compiled-ruby-image
        params:
          dockerfile: repo/ci/Dockerfile
          cache: mydocker/ruby
          cache_tag: 2.0.0-compiled
Recommendation
If you want to increase efficiency of building docker images, my personal belief is that load_base should be used in most cases. Because it uses a resource get, it takes advantage of Concourse volume caching, and avoids needing to do extra docker pulls.