I have an OpenShift cluster running where I am trying to build a simple Node.js Docker image using the Docker build strategy of a BuildConfig. Unfortunately, the build fails when starting the first init container (git-clone), because it expects a ca.crt:
Error setting up cluster CA cert: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
It is a Docker build strategy, so I am not sure why it uses the git-clone init step to begin with. I assume that the step expects a certificate for the Git repository or something like that.
There were actually two problems:

1. The cluster policy dictated that automountServiceAccountToken: false must be set for all service accounts. Since I cannot edit the build pods, I had no way to set this value back to true. For now I've disabled this check for my test namespace.

2. The BuildConfig was created with triggers for a (non-existent) Git repository. This caused the git-clone init container to fail constantly and break the build. I removed the triggers by hand, and now it proceeds into the build process as expected.
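For anyone hitting the second issue: the triggers can also be removed with the CLI instead of editing the BuildConfig by hand (the BuildConfig name below is a placeholder):

# list the triggers currently defined on the BuildConfig
oc set triggers bc/my-nodejs-build
# remove all triggers so only manually started builds (oc start-build) run
oc set triggers bc/my-nodejs-build --remove-all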
Related
We run gitlab-ee-12.10.12.0 under docker and use kubernetes to manage the gitlab-runner
All of a sudden a couple of days ago, all my pipelines, in all my projects, stopped working. NOTHING CHANGED except I pushed some code. Yet ALL projects (even those with no repo changes) are failing. I've looked at every certificate I can find anywhere in the system and they're all good so it wasn't a cert expiry. Disk space is at 45% so it's not that. Nobody logged into the server. Nobody touched any admin screens. One code push triggered the pipeline successfully, next one didn't. I've looked at everything. I've updated the docker images for gitlab and gitlab-runner. I've deleted every kubernetes pod I can find in the namespace and let them get relaunched (my go-to for solving k8s problems :-) ).
Every pipeline run in every project now says this:
Running with gitlab-runner 14.3.2 (e0218c92)
on Kubernetes Runner vXpkH225
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image lxnsok01.wg.dir.telstra.com:9000/broadworks-build:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:00
ERROR: Error cleaning up configmap: resource name may not be empty
ERROR: Job failed (system failure): prepare environment: setting up build pod: error setting ownerReferences: configmaps "runner-vxpkh225-project-47-concurrent-0-scripts9ds4c" is forbidden: User "system:serviceaccount:gitlab:gitlab" cannot update resource "configmaps" in API group "" in the namespace "gitlab". Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
That URL talks about bash logout scripts containing bad things. But nothing changed. At least we didn't change anything.
I believe the second error, implying that the user doesn't have permissions, is not the real problem; it seems to just be saying that the user couldn't do it. The primary error is the previous one about the configmap clean-up. Again, no service accounts, roles, rolebindings, etc. have changed in any way.
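For reference, this is roughly what the service account would need if it really were an RBAC problem (which I don't believe it is); the role name here is just illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner-configmaps
  namespace: gitlab
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner-configmaps
  namespace: gitlab
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: gitlab
roleRef:
  kind: Role
  name: gitlab-runner-configmaps
  apiGroup: rbac.authorization.k8s.io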
So I'm trying to work out what may CAUSE that error. What does it MEAN? What resource name is empty? Where can I find out?
I've checked the output from "docker container logs " and it says exactly what's in the error above. No more, no less.
The only thing I can think of is that perhaps gitlab-runner 14.3.2 doesn't like my k8s or the config. Going back and checking, it seems the runner version has changed: previous working pipelines ran on 14.1.
So two questions then: 1) Any ideas how to fix the problem (e.g. update some config, clear some crud, whatever)? 2) How do I get GitLab to use a runner image other than :latest?
Turns out something DID change: Kubernetes pulled a new gitlab/gitlab-runner:latest between runs. It seems gitlab-runner 14.3 has a problem with my Kubernetes setup. I went back through my pipelines and the last successful one was using 14.1.
So, after a day of working through it, I edited the relevant k8s deployment to pin the image tag used for gitlab-runner to :v14.1.0, which is the last one that worked for me.
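In case it helps, that pinning can be done directly with kubectl (the deployment and container names below are whatever your runner manifest uses):

# pin the runner image to the last known-good version instead of :latest
kubectl -n gitlab set image deployment/gitlab-runner gitlab-runner=gitlab/gitlab-runner:v14.1.0
# watch the rollout of the re-pinned deployment
kubectl -n gitlab rollout status deployment/gitlab-runner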
Maybe I'll wait a few weeks and try a later one (now that I know how to easily change that tag) and see if the issue gets fixed. And perhaps go raise an issue on gitlab-runner.
I want to build some docker images in a certain step of my Google Cloud Build, then push them in another step. I'm thinking the CI used doesn't really matter here.
This is because some of the push commands are dependent on some other conditions and I don't want to re-build the images.
I can docker save to some tar in the mounted workspace, then docker load it later. However, that's fairly slow. Is there any better strategy? I thought of trying to copy to/from /var/lib/docker, but that seems ill-advised.
The key here is doing the docker push from the same host on which you have done the docker build.
The docker build, however, doesn’t need to take place on the CICD build machine itself, because you can point its local docker client to a remote docker host.
To point your docker client to a remote docker host you need to set three environment variables.
On a Linux environment:
DOCKER_HOST=tcp://<IP Address Of Remote Server>:2376
DOCKER_CERT_PATH=/some/path/to/docker/client/certs
DOCKER_TLS_VERIFY=1
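A sketch of how that might look in a CI step, with placeholder values (the IP, cert path, and registry name are assumptions):

export DOCKER_HOST=tcp://10.0.0.5:2376
export DOCKER_CERT_PATH=/secrets/docker-client-certs
export DOCKER_TLS_VERIFY=1

# both steps now talk to the same remote daemon, so the image
# built in the first step is still present for the later push
docker build -t registry.example.com/myteam/myapp:1.0 .
docker push registry.example.com/myteam/myapp:1.0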
This is a very powerful concept that has many uses. One can for example, point to a dev|tst|prod docker swarm manager node. Or, point from Linux to a remote Windows machine and initiate the build of a Windows container. This latter use case might be useful if you have common CICD tooling that implements some proprietary image labeling that you want to re-use also for Windows containers.
The authentication here is mutual TLS, so both client and server key pairs need to be generated with a common CA. This can be a little tricky at first, so you may want to see how it works using docker-machine and its environment-setting shortcuts first:
https://docs.docker.com/machine/reference/env/
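For example, if you already have a machine created with docker-machine, it will print the exact variables for you (the machine name here is a placeholder):

docker-machine env remote-builder
# apply the variables it prints to the current shell
eval $(docker-machine env remote-builder)
# docker now reports the remote daemon, not the local one
docker info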
Once you’ve mastered this concept, you’ll need to script the setting of these environment variables in your CI/CD scripts, making the client certs available in a secure way.
I am trying to deploy an app in a Kubernetes cluster following these instructions:
https://cloud.ibm.com/docs/containers?topic=containers-cs_apps_tutorial#cs_apps_tutorial
Then I make a build following the instructions with ibmcloud cr build -t registry.<region>.bluemix.net/<namespace>/hello-world:1 .
The output looks good except for a security warning:
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
But as this was just a test I did not worry.
At the next stage, running this command following the instructions:
kubectl run hello-world-deployment --image=registry.<region>.bluemix.net/<namespace>/hello-world:1
I get the following error:
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1?timeout=32s: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
As you can see in the message, it looks like it is trying to connect to my local PC rather than IBM Cloud. What have I missed?
As @N Fritze mentioned in the comment, in order to access the Kubernetes cluster you need to set the KUBECONFIG environment variable, which holds a list of kubeconfig files providing the API server address and authentication details.
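On IBM Cloud that typically means downloading the cluster's kubeconfig with the CLI and exporting the variable; the exact command and file path depend on your CLI version and cluster name (placeholders below):

# download the kubeconfig for the cluster (on older CLI versions: ibmcloud ks cluster-config my-cluster)
ibmcloud ks cluster config --cluster my-cluster
# older CLI versions print an export line; copy it, e.g.:
export KUBECONFIG=/path/printed/by/the/command/above/kube-config.yml
# verify kubectl now talks to the IBM Cloud cluster instead of localhost
kubectl config current-context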
You can find more information about managing the Kubernetes service in the official IBM Cloud documentation. As the issue has already been solved, this answer is composed for any further contributors' research.
I have a software suite (node web server, database, other tools) that I'm developing inside a corporate firewall, building into docker images, and deploying with docker-compose. In order to actually install all the software into the images, I need to set up the environment to use a network proxy, and also to disable strict SSL checking (because the firewall includes ssl inspection), not only in terms of environment variables but also for npm, apt and so on.
I've got all this working so that I can build within the firewall and deploy within the firewall, and have set up my Dockerfiles and build scripts so that enabling all the proxy/ssl config stuff is dependent on a docker --build-arg which sets an environment variable via ENV enable_proxies=$my_build_arg, so I can also just as easily skip all that configuration for building and deploying outside the firewall.
However, I need to be able to build everything inside the firewall, and deploy outside it. Which means that all the proxy stuff has to be enabled at build time (so the software packages can all be installed) if the relevant --build-arg is specified, and then also separately either enabled or disabled at runtime using --env enable_proxies=true or something similar.
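A trimmed-down sketch of what I mean (the real Dockerfile configures far more than this, and the proxy host and npm settings here are just placeholders):

ARG my_build_arg=false
ENV enable_proxies=$my_build_arg
# placeholder for the real proxy/SSL setup steps
RUN if [ "$enable_proxies" = "true" ]; then \
      npm config set proxy http://proxy.internal:8080 && \
      npm config set strict-ssl false ; \
    fi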
I'm still relatively new to some aspects of Docker, but my understanding is that the only thing executed when the image is run is the contents of the CMD entry in the Dockerfile, and that CMD can only execute a single command.
Does anyone have any idea how I can/should go about separating the proxy/ssl settings during build and runtime like this?
You should be able to build and ship a single image; “build inside the firewall, deploy outside” is pretty normal.
One approach that can work for this is to use Docker’s multi-stage build functionality to have two stages. The first might have special proxy settings and fetch the dependencies; the second is the actual runtime image.
# build stage: proxy-related ARG/ENV and dependency downloads live here
FROM ... AS build
ARG my_build_arg
ENV enable_proxies=$my_build_arg
WORKDIR /artifacts
RUN curl http://internal.source.example.com/...

# runtime stage: no proxy-related ENV, only the built artifacts
FROM ...
COPY --from=build /artifacts/ /artifacts/
...
CMD ["the_app"]
Since the second stage doesn’t have an ENV directive, it never will have $enable_proxies set, which is what you want for the actual runtime image.
Another, similar approach is to write a script that runs on the host, downloads the dependencies into a local build tree, and then runs docker build. (This might be required if you need to support particularly old versions of Docker.) Then you could use whatever the host has set for $http_proxy and not worry about handling the proxy vs. non-proxy case specially.
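A minimal sketch of that host-side variant, assuming a Node.js project whose Dockerfile only COPYs the already-downloaded dependencies in (the image name is a placeholder):

# runs on a build host inside the firewall, using that host's own proxy settings
npm ci                                  # dependencies are fetched here, not inside docker build
docker build -t my-suite/web:latest .   # the Dockerfile only COPYs package.json and node_modules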
I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in moving the hard-coded variables from the codebase into the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches the ContainerCreated status, i.e. the image is done being pulled and the container is fully initialized. You can then use the hook to run arbitrary commands inside the container.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
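A sketch of what that hook might look like in the container spec (the script name is hypothetical; it would re-run the build now that the environment variables are present):

lifecycle:
  postStart:
    exec:
      # hypothetical script that re-runs the browserify build with the pod's env vars
      command: ["/bin/sh", "-c", "./scripts/rebuild-with-paths.sh"]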
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
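For completeness, the relevant part of the pod spec might look something like this (the image name is from the example above; the service addresses and container name are hypothetical):

containers:
  - name: my-foo-app
    image: residentmario/my_foo_app:latest
    env:
      - name: FOO
        value: "foo-service:9000"
      - name: BAR
        value: "bar-service:3000"
    command: ["/bin/sh"]
    args: ["./scripts/build.sh"]   # rebuilds with $FOO/$BAR, then starts the server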