I used the CircleCI CLI to initialise a private orb following these instructions. The CLI didn't work as expected: after the remote Git repository setup step ("Enter the remote git repository"), no further initialisation steps completed. The orb has been created, but the build's publish step fails with multiple problems, and the publishing context was not created in the CircleCI account.
I want to start again. I can delete the repo but it is not possible to delete an orb. What's the best way to start again using the same orb name? I have tried to run the CLI command again:
circleci orb init path-to-orb --private
But I get the error Error: Unable to create orb: Cannot create an Orb named 'orb-name': an Orb with that name already exists.
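Not stated in the thread, but one possible way forward, assuming the half-initialised orb source is otherwise usable: since the name is already registered in your namespace, you can skip circleci orb init and publish a dev version of the existing source directly (the file path and mynamespace/orb-name below are placeholders):

# check the orb source is valid
circleci orb validate orb.yml
# publish a throwaway dev version under the already-registered name
circleci orb publish orb.yml mynamespace/orb-name@dev:first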
Related
I've successfully connected to a Bitbucket repository within my Jenkins job. My issue is that I haven't been able to find any information about how to access/use the files that the repository contains. I added an "Execute Shell" step after connecting the repo, but don't know where the files the repo contains are. I tried cd'ing into /NameOfRepo/NameOfSubfolder but it says the file/directory does not exist in the console output when I run the job. Where does Jenkins store files it has gained access to that live in a remote repository? Do I need to use shell commands to clone my repo to a specific location?
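The thread doesn't answer this, but for context: Jenkins checks the repository out into the job's workspace, and an "Execute Shell" step starts in that workspace directory, so a quick way to orient yourself is (paths here are illustrative):

echo "$WORKSPACE"      # absolute path of this job's workspace
ls -la                 # the repository contents are checked out here, not under /NameOfRepo
cd NameOfSubfolder     # use a path relative to the workspace root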
I have a shared registry (a docker image with tools installed for running my code) which is used by multiple repositories. I have created a new repository called LuaServer, which uses code from another repository called LuaDB. In LuaServer I have created a test which requires the code from LuaDB, this test is run in a pipeline on GitLab CI/CD in said shared registry. I get an error during the execution of this test, stating the following:
spec/serializer_spec.lua:36: module 'luadb.manager.AST' not found:No LuaRocks module found for luadb.manager.AST
Next I tried cloning LuaDB directly and setting it up in the registry image (so the Docker image now contains LuaDB), but that did not seem to work: the error stays the same. Then I tried to include LuaDB as a Git submodule of LuaServer, but this still did not solve my problem. Is there a way to work this out?
Try using curl to fetch the files from the GitLab repo (see the GitLab repository files API).
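For example, a minimal sketch using the GitLab API (the host, project ID, and token are placeholders):

# download an archive of the LuaDB repository via the GitLab API
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/<luadb-project-id>/repository/archive.tar.gz" \
  --output luadb.tar.gz
tar -xzf luadb.tar.gz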
When a GitLab CI/CD pipeline runs on a runner (a GitLab shared runner or a custom runner), the job's working directory is the checkout path stored in the $CI_PROJECT_DIR environment variable, so you can clone your code (LuaDB) under $CI_PROJECT_DIR alongside your existing LuaServer code, as in the sketch below.
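A minimal sketch of that approach (the clone URL and the LUA_PATH adjustment are assumptions about your layout):

# e.g. in the job's before_script
git clone https://gitlab.example.com/your-group/LuaDB.git "$CI_PROJECT_DIR/luadb"
# make the cloned sources visible to Lua's module loader
# (the trailing ";;" keeps Lua's default search path)
export LUA_PATH="$CI_PROJECT_DIR/luadb/?.lua;$CI_PROJECT_DIR/luadb/?/init.lua;;"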
I am trying to set up a CI/CD pipeline using one of my public GitHub repositories as the source for a Cloud Run (fully managed) service, using Cloud Build. I am using a Dockerfile in the root folder of the repository, with the source configuration parameter set to /Dockerfile when setting up the Cloud Build trigger (to continuously deploy new revisions from the source repository).
When I initialize the Cloud Run instance, I face the following error:
Moreover, when I try to run my Cloud Build trigger manually, it shows the following error:
I also tried editing the continuous deployment settings, setting them to automatically detect a Dockerfile/cloudbuild.yaml. After that, the build process becomes successful, but the revisions are not getting updated. I've also tried deploying a new revision and then triggering the Cloud Build trigger, but it still isn't able to pick up the latest build from Container Registry.
I am positive that my Dockerfile and application code are working properly, since I've previously submitted the build to Container Registry using Google Cloud Shell and tested it manually after deploying it to Cloud Run.
Need help to fix the issue.
Uppercase letters in the image path aren't allowed. Change Toxicity-Detector to toxicity-detector.
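For illustration, a lowercase image path that Container Registry will accept ($PROJECT_ID is a placeholder for your GCP project ID):

# image paths must be all lowercase
gcloud builds submit --tag "gcr.io/$PROJECT_ID/toxicity-detector" .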
I set up a fresh GitLab instance in Docker, then set up a runner container with the Docker executor, based on microsoft/dotnet:latest.
Then I added a simple project to GitLab, just a .NET Core hello world.
Then I created a CI file as below:
image: microsoft/dotnet:latest

stages:
  - build

variables:
  project: "ConsoleApp"

before_script:
  - "dotnet restore"

build:
  stage: build
  variables:
    build_path: "$project"   # expands to "ConsoleApp"
  script:
    - "cd $build_path"
    - "dotnet build"
Then in the pipeline I get this output:
Preparing environment
Running on runner-vtysysr-project-2-concurrent-0 via e189cc9d1c60...
Getting source from Git repository
00:07
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in /builds/root/gitlabcitest/.git/
fatal: couldn't find remote ref refs/pipelines/18
Uploading artifacts for failed job
00:06
ERROR: Job failed: exit code 1
I searched for the error, but all the answers are about projects which have branches; I don't have any branches, just a simple hello world project.
The OP ali-kamrani adds in the comments:
My issue was in the SSH config in the runner's Docker container: after adding the SSH key to the container, the issue was solved.
Other avenues (for other users), if this is similar to gitlab-org/gitlab issue 36123:
We were using git push --mirror to mirror some projects from a different repository regularly.
As it turns out, it also deletes unknown branches, i.e. pipelines/XXXX and merge/XXXX.
We are now pushing & deleting every branch explicitly and ignoring all pipelines/XXXX and merge/XXXX ones.
Afterwards the error didn't occur again.
I understand you don't have many branches, but the issue here is not with your local branches.
It is with a push operation, initiated locally, pruning remote branches which do not exist locally.
Basically, pipelines depend on a pipeline-specific ref, refs/pipelines/*, which has to exist while the pipeline is running.
So if git push --mirror deletes these refs, you might encounter the job failure.
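A minimal sketch of the workaround, assuming the mirror remote is named mirror: push and prune only the branch and tag namespaces instead of using git push --mirror, so GitLab's internal refs/pipelines/* refs are left alone:

# mirror branches and tags, pruning stale ones, without touching refs/pipelines/*
git push --prune mirror '+refs/heads/*:refs/heads/*' '+refs/tags/*:refs/tags/*'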
The same issue illustrates a similar scenario:
In our setup, we are using a system hook to mirror our GitLab repositories into another location closer to where our GitLab Runner instances are.
This has worked fine for a long time.
However, now that GitLab is dependent on the refs/pipelines/<pipeline ID> ref existing, all of our runners fail.
The problem is that the refs/pipelines/<pipeline ID> ref gets created behind the scenes and there is no system hook that gets invoked (so we don't know about the new ref that needs to be mirrored).
The built-in Repository Mirroring feature isn't very suitable for us because it must be configured for each repository; with System Hooks, we can automatically mirror all of our repositories.
I'm trying to install Jenkins X on an existing Kubernetes cluster (GKE), using jx boot, but it always gives me the error trying to execute 'jx boot' from a non requirements repo
In fact, I have tried jx install, and it works, but that command is already marked as deprecated, although I see it's still the method shown on Jenkins X's GitHub page.
Then another detail ... I'm in fact creating the cluster using Terraform because I don't like the idea that Jenkins X creates the cluster for me. And I want to use Terraform to install Jenkins X as well but that would be another question. :)
So how do I install using jx boot, and what is a non requirements repo?
Thanks
Are you trying to execute jx boot from within an existing Git repository? Try changing into an empty, non-Git directory and running jx boot from there.
jx wants to clone the jenkins-x-boot-config and create your dev repository. It cannot do so from within an existing repository.
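A minimal sketch of that, assuming jx is already installed (the directory name is arbitrary):

# run the boot process from a fresh, non-Git directory
mkdir jx-boot-work && cd jx-boot-work
jx boot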
One thing I've noticed is that running jx boot in an existing repo without a jx-requirements.yml asks if you want to check out the Jenkins X boot config.
Creating boot config with defaults, as not in an existing boot directory with a git repository.
No Jenkins X pipeline file jenkins-x.yml or no jx boot requirements file jx-requirements.yml found. You are not running this command from inside a Jenkins X Boot git clone
To continue we will clone https://github.com/jenkins-x/jenkins-x-boot-config.git # master to jenkins-x-boot-config
? Do you want to clone the Jenkins X Boot Git repository? [? for help] (Y/n)
I let it do this checkout, and then either let it crash or cancel it.
I can now go into the new repo, make changes to the jx-requirements.yml and run it as I want it to.
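In other words, a sketch of that flow (the editor invocation is illustrative):

cd jenkins-x-boot-config
# adjust cluster name, provider, environment settings, etc. as needed
$EDITOR jx-requirements.yml
# re-run boot from inside the boot config clone
jx boot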