Deploying Jenkins Server on Google Cloud Run
The Cloud Run service URL keeps loading and the Jenkins server never becomes ready to configure. The page shows "Please wait while Jenkins is getting ready" for a long time, but Jenkins never starts.
The Jenkins server runs successfully on my machine in a Kubernetes pod.
I want to run the Jenkins server on a serverless solution, and Cloud Run is one of the best serverless options.
I have deployed the jenkins/jenkins:lts container on Cloud Run and made the service publicly accessible.
The deployment itself works fine.
But when I open the public URL to configure Jenkins, it shows "Please wait while Jenkins is getting ready".
It takes a very long time and never becomes ready. I have deleted the service, verified the image locally (where it works fine), and redeployed to Cloud Run many times, but it never gets ready; now it is stuck. Is there any way to run Jenkins on Google Cloud Run?
Docker image: jenkins/jenkins:lts
Tagged image: gcr.io/projectName/jenkinsci:latest
It deploys successfully to Google Cloud Run, but the Jenkins server never gets ready.
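For reference, the tag-and-deploy flow described above can be sketched as shell commands. projectName and the service name jenkins are placeholders from the question, and the --port/--memory flags are assumptions (Jenkins listens on 8080 and usually needs more than Cloud Run's default memory to finish booting), not a verified working configuration:

```shell
# Pull, tag, and push the image, then deploy it to Cloud Run.
# projectName, the service name, and the resource flags are assumptions.
docker pull jenkins/jenkins:lts
docker tag jenkins/jenkins:lts gcr.io/projectName/jenkinsci:latest
docker push gcr.io/projectName/jenkinsci:latest

gcloud run deploy jenkins \
  --image gcr.io/projectName/jenkinsci:latest \
  --platform managed \
  --port 8080 \
  --memory 2Gi \
  --allow-unauthenticated
```

These commands require an authenticated gcloud/docker environment, so they are shown as a deployment sketch only.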

Related

GitLab runner gets stuck while pulling a Docker image

I was trying to run my gitlab-ci pipeline on my self-hosted GitLab server. I picked Docker as the gitlab-runner executor, but the pipeline gets stuck and does not work.
What should I do to fix this?
It seems to be the same issue: the machine on which Docker is running sits behind a proxy server, which is why it gets stuck while trying to pull the image.
If you are able to log in to the machine, check its internet access.
Check whether you are using some kind of proxy.
Your own ID may have SSO access to the proxy, which is why things work for you; if the gitlab-runner service runs under a different account, that account may not have internet access.
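One common fix, assuming a systemd-based host, is to give the Docker daemon itself the proxy settings, since the daemon does not read the login shell's environment. The proxy address below is a placeholder:

```shell
# Point the Docker daemon at the proxy via a systemd drop-in file.
# http://proxy.example.com:3128 is a placeholder address.
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
```

After this, image pulls by any account (including the gitlab-runner service account) go through the proxy. Shown as a configuration fragment; it needs root on the runner host.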

How to build and deploy a Kubernetes cluster to Google Cloud using Cloud Build and Skaffold?

I am new to microservices technologies and am having trouble with Google Cloud Build.
I am using Docker, Kubernetes, ingress-nginx, and Skaffold, and my deployment works fine on my local machine.
Now I want to develop locally but build and run remotely on Google Cloud Platform, so here is what I have done:
In Google Cloud, I have set up a Kubernetes cluster
Set my local kubectl context to the cloud cluster
Set up an ingress-nginx load balancer
Enabled the Cloud Build API (no trigger set up)
Here is what my deployment and skaffold YAML files look like:
When I run skaffold dev, it logs Some taggers failed. Rerun with -vdebug for errors., then it takes some time and consumes my network bandwidth.
The image does get pushed to Cloud Container Registry, and I can access the app using the load balancer's IP address, but the Cloud Build history is still empty. What am I missing?
Note: right now I am not pushing my code to any online repository such as GitHub.
Sorry if the information I provided is insufficient; I am new to these technologies.
Cloud Build started working:
First, in the Cloud Build settings, I enabled Kubernetes Engine, Compute Engine, and Service Accounts.
Then I executed these two commands:
gcloud auth application-default login: as Google describes it, "This will acquire new user credentials to use for Application Default Credentials".
As mentioned in the ingress-nginx deploy documentation for GCE-GKE, this will initialize your user as a cluster-admin:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
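On the empty Cloud Build history specifically: by default Skaffold builds images locally and pushes them, which never touches Cloud Build at all. A minimal sketch of a skaffold.yaml whose build stanza routes builds through Cloud Build instead, so they show up in the history (PROJECT_ID, the image name, and the manifest path are all placeholders, not taken from the question):

```shell
# Write a minimal skaffold.yaml that uses the googleCloudBuild builder,
# so `skaffold dev` builds appear in Cloud Build history.
# PROJECT_ID, myapp, and k8s/*.yaml are placeholders.
cat > skaffold.yaml <<'EOF'
apiVersion: skaffold/v2beta29
kind: Config
build:
  googleCloudBuild:
    projectId: PROJECT_ID
  artifacts:
    - image: gcr.io/PROJECT_ID/myapp
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
EOF
```

With this in place, builds run remotely in Cloud Build rather than on the local machine, which also saves the local network bandwidth mentioned above.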

Deploy Docker to on-premises using Azure CI/CD

I have created a .NET Core application and successfully deployed it to a local PC Docker container.
Now I am trying to build it from Azure DevOps and publish it to one of my servers hosted on-premises.
I have no idea how to host it. I am also not sure what a Docker Registry service connection and a Container Registry type are.
My DevOps server is also hosted on-premises, with no Docker installed on it.
I have a Docker account with one private repository.
Please suggest how to continue, as I am getting the error below while building the image:
open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
Thanks
If you want to deploy the app to a local PC Docker container, you can use a self-hosted agent (build pipeline and release pipeline) or a deployment group (release pipeline).
Note: the self-hosted agent must be set up on a server that has Docker installed.
Then you could try the following pipeline settings.
Here is a blog about ASP.NET application deployment in Docker for Windows.
You could use a Command Line task to run the docker commands. This way, you can move the local build and deploy process into Azure DevOps.
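A Command Line task on such a self-hosted agent would essentially run the usual Docker CLI sequence against the private repository. The image name and the credential variables below are placeholders, not a verified pipeline:

```shell
# Commands a Command Line task could run on a self-hosted agent with
# Docker installed; mydockerid/myapp and the credential variables are
# placeholders (in Azure DevOps they would come from secret variables).
docker build -t mydockerid/myapp:latest .
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
docker push mydockerid/myapp:latest
```

The "//./pipe/docker_engine" error in the question is what the docker client reports on Windows when no local daemon is reachable, which is consistent with running the build on a hosted agent or a server without Docker; running these commands on the agent that actually has Docker avoids it.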

Not able to retrieve AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside the container from a Fargate task

I'm running a Docker container in a Fargate ECS task.
In my Docker container I have enabled an SSH server so that I can log in to the container directly if I have to debug something. This works fine: I can SSH to my task IP and investigate my issues.
But now I notice an issue while accessing any AWS service via SSH inside the container: when I log in to the container via SSH, configuration files such as ~/.aws/credentials and ~/.aws/config are missing, and I can't issue any CLI commands, e.g. checking the caller identity, which is supposed to be my task ARN.
The strange thing is that if I run this same task on an ECS instance, I don't have any such issues: I can see my task ARN and all the rest of the services. So the ECS task agent is working fine.
Coming back to the SSH connectivity: I notice I'm getting 404 page not found from curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. How can I make ECS-instance access and SSH access have the same capabilities? If I could access AWS_CONTAINER_CREDENTIALS_RELATIVE_URI in my SSH session, I think everything would work.
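A likely explanation, offered as an assumption rather than a confirmed fix: sshd starts each session with a fresh environment, so the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable that ECS injects into the container's main process is simply unset in the SSH shell, and curl hits 169.254.170.2 with an empty path, hence the 404. The variable can usually be recovered from PID 1's environment:

```shell
# Recover the credentials URI from the main process's environment
# (SSH sessions do not inherit it), then query the credentials endpoint.
export "$(tr '\0' '\n' < /proc/1/environ | grep '^AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=')"
curl "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"
```

If that returns the task-role credentials JSON, the AWS CLI in the SSH session should then pick them up via the same variable, with no ~/.aws files needed. Shown as a debugging sketch; it only works inside the running Fargate task.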

Gitlab webhook to Docker Container running Jenkins hosted on home network

I have an Ubuntu server running two Docker containers: one runs Jenkins and the other runs SonarQube. My school has a private GitLab server, and from this server I want to create a webhook to my Jenkins to trigger a build, but the following error comes up, and I don't know what is going on.
(screenshot of the error omitted)
Both are port-forwarded via the router using the following ports:
Jenkins: xx.xx.xx.xxx:8080
SonarQube: xx.xx.xx.xxx:8090
I get the following error when testing the webhook with the trigger URL provided by Jenkins: http://xx.xx.xx.xxx:8080/project/some_project
(screenshot of the error omitted)
Could it have something to do with Docker?
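One way to narrow it down, independent of GitLab, is to check from the GitLab server (or any outside host) whether the forwarded port reaches Jenkins at all. The address below is the placeholder from the question:

```shell
# Probe the forwarded Jenkins port and the webhook trigger URL directly;
# xx.xx.xx.xxx is the placeholder public address from the question.
curl -v http://xx.xx.xx.xxx:8080/login
curl -v -X POST http://xx.xx.xx.xxx:8080/project/some_project
```

If the first request already times out or is refused, the problem is the port forwarding or a firewall, not Docker or the webhook configuration; if it reaches Jenkins but the second returns an error, the issue is on the Jenkins trigger side.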
