How to include code from another repository in the GitLab registry? - docker

I have a shared registry (a docker image with tools installed for running my code) which is used by multiple repositories. I have created a new repository called LuaServer, which uses code from another repository called LuaDB. In LuaServer I have created a test which requires the code from LuaDB, this test is run in a pipeline on GitLab CI/CD in said shared registry. I get an error during the execution of this test, stating the following:
spec/serializer_spec.lua:36: module 'luadb.manager.AST' not found:No LuaRocks module found for luadb.manager.AST
I tried cloning the repository directly and setting it up in the registry image (the docker image, which then contains LuaDB), but that did not seem to work; the error stayed the same. Then I tried including LuaDB as a submodule of LuaServer, but this still did not solve the problem. Is there a way to work this out?
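As an aside on the submodule attempt: GitLab CI does not fetch submodules unless the job asks for them, which is a common reason that route appears not to work. A minimal sketch of the relevant .gitlab-ci.yml setting, assuming the submodule URL in .gitmodules is reachable from the runner (e.g. a relative or HTTPS URL):

variables:
  GIT_SUBMODULE_STRATEGY: recursive   # tell the runner to initialise and update submodules on checkout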

Try using curl to fetch the files from the GitLab repository (see the GitLab API).
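A minimal sketch of that approach using the GitLab repository files API; the project ID, file path, branch, and token are placeholders:

# fetch a single file from the LuaDB project via the GitLab API
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  "https://gitlab.com/api/v4/projects/<luadb_project_id>/repository/files/path%2Fto%2FAST.lua/raw?ref=master" \
  --output AST.lua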
GitLab CI/CD pipelines, whether they run on GitLab's shared runners or your own custom runners, check the project out into a default path that is exposed in the $CI_PROJECT_DIR environment variable, so you can clone your LuaDB code under $CI_PROJECT_DIR alongside your existing LuaServer code.
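A minimal sketch of a .gitlab-ci.yml job doing this; the image and clone URL are placeholders, and the luarocks step assumes LuaDB ships a rockspec (otherwise adjust LUA_PATH instead):

test:
  image: registry.gitlab.com/<group>/<shared-tools-image>   # the shared image with your tooling
  before_script:
    # clone LuaDB next to the LuaServer checkout
    - git clone https://gitlab.com/<group>/LuaDB.git "$CI_PROJECT_DIR/luadb"
    # make the luadb.* modules visible to Lua
    - cd "$CI_PROJECT_DIR/luadb" && luarocks make && cd "$CI_PROJECT_DIR"
  script:
    - busted spec/serializer_spec.lua   # or however you already run the tests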

Related

With GitLab CI/CD, how to have code cloned in a container by user:group 'java:java', instead of 'root'?

In a GitLab repo, I have a Dockerfile with the following lines,
FROM python:alpine
RUN addgroup -S java
RUN adduser -s /bin/bash -S -G java java
USER java
WORKDIR /home/java
so that when the image is instantiated (a container is running), it runs as user ‘java’.
When GitLab CI/CD clones the project code, however, it is owned by root in the directory /home/java.
This is unexpected behavior; I would expect it to be owned by user ‘java’.
How do I get the code to be cloned by user ‘java’ and owned (user:group) by ‘java:java’?
GitLab CI clones the code outside your job container using the gitlab/gitlab-runner-helper docker image (see the runner helper repository). If you're running your own executor, you can override which helper image is used for cloning the repository with one that clones as a java user, though you'd have to make sure the user/group IDs match between the two containers to prevent issues. This also means you'd be maintaining your own extended runner helper, and you couldn't use the shared runners hosted by GitLab.
There is an alternative approach, though I wouldn't recommend it: set your Git strategy so the repository isn't cloned automatically, then clone it in a before_script: step inside your job container, which makes it clone as your java user. However, this only clones the repository for that one job, so you'd have to repeat it in every job, which would violate DRY all over the place.
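A minimal sketch of that pattern in .gitlab-ci.yml (the job name and build step are placeholders):

build:
  variables:
    GIT_STRATEGY: none              # stop the runner helper from cloning as root
  before_script:
    # clone inside the job container instead, so the files belong to the container's user
    - rm -rf "$CI_PROJECT_DIR"/* "$CI_PROJECT_DIR"/.git
    - git clone "$CI_REPOSITORY_URL" "$CI_PROJECT_DIR"
    - git -C "$CI_PROJECT_DIR" checkout "$CI_COMMIT_SHA"
  script:
    - ./build.sh                    # hypothetical build step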
Generally though, I'd agree with David that having the code owned by root is fine in this case unless you have a specific reason to change it.
Projects in GitLab are cloned by the GitLab runner helper image, which runs as root. It also uses umask 0000 to avoid permission issues when data is cached.
See this GitLab issue for more details.
To fix your issue, add an environment variable:
FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR=true
This disables the umask handling, and the runner instead takes the UID and GID from the image of the build container.
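For example, set it in .gitlab-ci.yml (a sketch; the feature flag can also be set on the runner itself):

variables:
  FF_DISABLE_UMASK_FOR_DOCKER_EXECUTOR: "true"   # use the UID/GID of the build image instead of umask 0000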

CI/CD integration problem when using google-cloud-build with github push as trigger for Cloud Run

I am trying to set up a CI/CD pipeline using one of my public GitHub repositories as the source for a Cloud Run (fully managed) service using Cloud Build. I am using a Dockerfile placed in the root folder of the repository, with the source configuration parameter set to /Dockerfile when setting up the Cloud Build trigger (to continuously deploy new revisions from the source repository).
When I initialize the Cloud Run instance, I get the following error:
Moreover, when I try to run my Cloud Build trigger manually, it shows the following error:
I also tried editing the continuous deployment settings to automatically detect the Dockerfile/cloudbuild.yaml. After that the build succeeds, but the revisions are not updated. I've also tried deploying a new revision and then running the Cloud Build trigger, but it still isn't able to pick up the latest build from Container Registry.
I am positive that my Dockerfile and application code work properly, since I've previously submitted the build to Container Registry using Google Cloud Shell and tested it manually after deploying it to Cloud Run.
Need help to fix the issue.
UPPERCASE letters in the image path aren't allowed. Change Toxicity-Detector to toxicity-detector.
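For example, when building and deploying by hand, keep the image path lowercase (the project ID and region are placeholders):

# Container Registry image paths must be lowercase
gcloud builds submit --tag gcr.io/<project-id>/toxicity-detector
gcloud run deploy toxicity-detector --image gcr.io/<project-id>/toxicity-detector --region <region>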

Jenkins X: trying to execute 'jx boot' from a non requirements repo

I'm trying to install Jenkins X on an existing Kubernetes cluster (GKE), using jx boot, but it always gives me the error trying to execute 'jx boot' from a non requirements repo
In fact, I have tried jx install and it works, but that command is already marked as deprecated, even though it still appears to be the method shown on Jenkins X's GitHub page.
One more detail: I'm creating the cluster with Terraform because I don't like the idea of Jenkins X creating the cluster for me. I'd like to use Terraform to install Jenkins X as well, but that would be another question. :)
So how do I install using jx boot, and what is a non-requirements repo?
Thanks
Are you trying to execute jx boot from within an existing git repository? Try changing into an empty, non-git directory and running jx boot from there.
jx wants to clone the jenkins-x-boot-config and create your dev repository. It cannot do so from within an existing repository.
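In other words, something like this (the directory name is arbitrary):

mkdir jx-boot && cd jx-boot   # any empty, non-git directory works
jx boot                       # jx clones jenkins-x-boot-config here and carries on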
One thing I've noticed is that when running jx boot in an existing repo without a jx-requirements.yml, it asks whether you want to check out the Jenkins X boot config.
Creating boot config with defaults, as not in an existing boot directory with a git repository.
No Jenkins X pipeline file jenkins-x.yml or no jx boot requirements file jx-requirements.yml found. You are not running this command from inside a Jenkins X Boot git clone
To continue we will clone https://github.com/jenkins-x/jenkins-x-boot-config.git # master to jenkins-x-boot-config
? Do you want to clone the Jenkins X Boot Git repository? [? for help] (Y/n)
I let it do this checkout, and then either let it crash or cancel it.
I can now go into the new repo, make changes to the jx-requirements.yml and run it as I want it to.

Jenkins failing to clone Bitbucket links during Rspec Puppet unit tests

I'm trying to set up a Jenkins build to clone a Bitbucket link and run unit tests I've written against some Puppet modules. I've got Jenkins set up with an SSH keypair and have verified that it can clone the Bitbucket repository initially, but when the unit tests run and clone separate modules as part of the test, I get an error that the public key does not work.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've verified that the build is running under the Jenkins user and that the keys are in the .ssh directory. What else can I try to fix it?
I was able to fix it by logging into the Jenkins Docker container and using an SSH keypair from inside it, rather than the one on the Jenkins server hosting the container. Everything worked after I did this.
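A sketch of what that looks like, assuming Jenkins runs in a container named jenkins (the container name, user, and key type are placeholders):

# open a shell inside the Jenkins container as the jenkins user
docker exec -it -u jenkins jenkins bash
# generate a keypair inside the container and print the public key to register in Bitbucket
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
cat ~/.ssh/id_ed25519.pub
# confirm the container can authenticate to Bitbucket over SSH
ssh -T git@bitbucket.org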

Git to docker export

I'd like some opinions on this workflow. The intention is to semi-automate and revision control the creation/export of docker containers.
I have some docker directories, each with a Dockerfile etc. inside (enough to build a docker image from). At the moment, I've set up a process where each directory becomes a local git repo, and I set up a bare repo on a remote server. Then I add an 'update' hook to the remote repo that takes the name of the repo and calls a script that clones the repo, builds the docker image, starts a container, exports the container, and deletes the repo. I then end up with a .tar of my docker container every time I push an update to that repo.
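A minimal sketch of such a hook, written here as a post-receive hook (which fires after the push has updated the refs); the export path is a placeholder:

#!/bin/sh
# post-receive hook in the bare repo: build and export an image on every push
unset GIT_DIR                                   # avoid confusing the nested git commands
REPO_NAME=$(basename "$PWD" .git)               # e.g. /srv/git/mytool.git -> mytool
WORKDIR=$(mktemp -d)
git clone "$PWD" "$WORKDIR"                     # clone the freshly pushed state
docker build -t "$REPO_NAME" "$WORKDIR"         # build the image from its Dockerfile
CID=$(docker create "$REPO_NAME")               # create a container to export
docker export "$CID" -o "/srv/exports/$REPO_NAME.tar"
docker rm "$CID"                                # clean up the container and working copy
rm -rf "$WORKDIR"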
The only issue is that I have to manually copy the hook to each remote repo I set up (considering .git/hooks doesn't get pushed from local).
So I'm looking for some feedback on whether this whole process has any intelligence to it or if I am going about it the completely wrong way.
What you are looking for is called "Continuous Integration".
There are multiple ways to achieve it, but here's how I do it:
Set up a Jenkins server
Put all docker files into one git repo, as modules if necessary
Have Jenkins check for changes in the repo every few minutes
Have Jenkins build the docker images after pulling in the changes
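The last step can be an ordinary shell build step in the Jenkins job, run in the checked-out workspace; a minimal sketch (the image name and directory layout are placeholders):

docker build -t my-image:"$BUILD_NUMBER" docker/my-image   # one subdirectory per Dockerfile
docker save my-image:"$BUILD_NUMBER" -o my-image.tar       # or docker create/export, as in the question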
