I've been using CloudBees DEV@cloud to build and test my code. Now I want to automate the deployment of my applications to Amazon AWS.
To deploy the applications I scp files to Amazon and use ssh sessions to start the applications. This works just fine, but I'm forced to allow ssh connections to my Amazon AWS instances.
Is it possible to run the Cloudbees builds from a fixed IP address, so I don't have to allow ssh access to the AWS instances from every IP address?
No, DEV@cloud slaves are drawn from an elastic pool of machines (currently in EC2), so you cannot fix a particular IP address. If you need to restrict access to a resource which builds must use, you can configure the VPN service instead.
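If you do keep SSH open on the instances, the usual complement to a VPN is a security group that admits SSH only from the VPN's fixed egress address. A minimal sketch as a CloudFormation snippet, where the resource name and IP are placeholders:

```yaml
Resources:
  BuildSshIngress:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH only from the VPN's fixed egress address
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 203.0.113.10/32  # placeholder: the VPN endpoint's public IP
```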
I'm quite new to Docker and VPNs, so I don't know the best way to achieve this.
Context:
I use Airflow in Google Cloud to schedule some tasks. These tasks are dockerized, so each task is the execution of a Docker container running a script (using KubernetesPodOperator).
For this use case I need the connection to be established through the VPN before the script runs.
To connect to the VPN (locally) I use a username, password, and CA certificate.
I've seen some ways to do it, but all of them use another Docker image as the VPN, or a bridge to the host's VPN.
What's the best way to develop a solution for this?
I think what you saw is good advice.
There are a number of projects that show how it could be done - one example here: https://gitlab.com/dealako/k8s-sidecar-vpn
Using a sidecar for the VPN connection is usually a good idea. It has a number of advantages (a rough sketch follows the list):
It allows you to use existing VPN images, so you do not have to add the VPN software to your own images.
It allows you to use exactly the same VPN image and configuration for multiple pods/services.
It allows you to keep your secrets (username/password) available only to the VPN container; the VPN exposes a plain TCP/HTTP connection available only to your service, so your service/task never accesses the secrets. This makes it a very secure way of storing the secrets and handling authentication.
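As a minimal sketch of the pattern (the image names and the Secret name below are placeholders, not taken from the linked project, which shows a complete setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-with-vpn
spec:
  containers:
  - name: vpn-sidecar
    image: example/openvpn-client:latest    # placeholder: any existing OpenVPN client image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]                  # typically required to create the tunnel device
    volumeMounts:
    - name: vpn-credentials
      mountPath: /etc/openvpn/credentials   # username/password/CA stay in this container only
      readOnly: true
  - name: task
    image: example/task-script:latest       # placeholder: your task image, unchanged
    # containers in a pod share the network namespace,
    # so the task's traffic can be routed through the sidecar's tunnel
  volumes:
  - name: vpn-credentials
    secret:
      secretName: vpn-credentials           # placeholder Secret holding the VPN credentials
```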
After some investigation, I think there is no good way to connect a Docker container to a VPN using Airflow's KubernetesPodOperator.
It is a serverless-style service, so the correct way to do this should be to connect your private VPN to a Google Cloud VPN in the VPC where you deploy Airflow and the Kubernetes cluster that runs it.
I'm setting up a local development environment for a cloud-native app, where the idea is that once it's in production in Google Cloud, I'll be using Cloud SQL (a managed cloud service) for data persistence. While I'm developing the application locally, I'm using a local cluster with KinD, and I would like my containers there to be able to reach a couple of external services outside the cluster (in this case PostgreSQL). I'm doing it this way to keep dev/prod parity.
I have Postgres running locally via Docker Compose alongside my cluster, and while I can already reach it from within my pod containers using the host's (my computer's) IP plus the exposed port, this is not very portable and would require every team member to configure their host IP to get their local environment working. I would like to avoid this.
Is there a better solution? Thanks.
I might have just written a blog post which could help...
https://medium.com/google-cloud/connecting-cloud-sql-kubernetes-sidecar-46e016e07bb4
It runs the Cloud SQL Proxy as a sidecar to the application. This way, only the deployment YAML needs to change: the --instances parameter of the Cloud SQL Proxy switches from your local Postgres instance to the connection string of the Cloud SQL instance. You'll also need to sort out the service account file in the deployment (covered in the blog post) so that your k8s deployment in GKE has the right permissions to access the Cloud SQL instance.
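For orientation, the sidecar portion of the pod template might look roughly like this (project, region, instance, and file names are placeholders; the blog post has the complete manifest and the service account setup):

```yaml
# Extra container in the application Deployment's pod template
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.17   # Cloud SQL Proxy image; version is illustrative
  command: ["/cloud_sql_proxy",
            "--instances=my-project:my-region:my-instance=tcp:5432",  # placeholder connection string
            "--credential_file=/secrets/service_account.json"]        # service account key (see post)
  volumeMounts:
  - name: cloudsql-credentials    # Secret containing the service account file
    mountPath: /secrets
    readOnly: true
```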
Right now I have found two possible solutions for creating Jenkins slaves (workers):
Using the SSH Slaves plugin
Using JNLP
My question: which is the better / more stable solution, and why?
I have found some pros and cons of both solutions myself, but I don't want to bias the discussion.
Java Web Start, using the underlying Java Network Launch Protocol (JNLP), is the recommended mode of connection for Windows agents.
wiki.jenkins.io
SSH is recommended for Linux agents.
The pros and cons don't matter if each is recommended for a different platform.
We are only using SSH, mainly because in a corporate setting:
asking for the JNLP port to be opened on the agent server is an extra hurdle (agent being the new name for slave in Jenkins), and
any query to another server should use an authenticated flow (like SSH with a key, or an account username/password).
Plus, in the Launch method configuration we can add various JVM options to monitor the GC and other memory parameters; an example follows.
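For instance (values here are illustrative, not from our actual setup), the JVM Options field of the SSH launch method can carry standard HotSpot flags such as:

```
-Xmx1g -verbose:gc -Xloggc:/var/log/jenkins/agent-gc.log
```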
I have Jenkins installed as a service on a Google Cloud Platform Compute Engine VM.
It was working fine until a few days ago.
Recently I am having issues connecting to the Compute Engine VM itself: when I try to connect to the VM over RDP, the session will not open, and when I trigger Jenkins over HTTPS, the requests time out.
I have to restart the VM itself to get it working again. It then works for some minutes and starts rejecting requests again.
The weird thing is that I have Jenkins slaves installed on several other GCP Compute VMs (besides the Jenkins master), and while I am unable to access the VM via RDP, I can trigger Jenkins requests from those other GCP slaves.
I have configured SSL for Jenkins.
I have a static IP assigned to the Compute VM that hosts the Jenkins server.
Any direction on what might be causing the GCP Compute Engine VM to reject connection requests would be helpful.
I am trying to set up multiple Jenkins instances on the same Windows server with the same port number. I didn't have much luck doing this.
How can I achieve this?
I can configure Jenkins on a different port on the same server; however, I am unable to run that instance as a service. Is there a document that helps with setting up multiple instances of Jenkins on the same Windows server?
Alternatively, I don't mind hosting Jenkins in a different directory if it is going to run on a different port number.