Installing Jenkins-X on GKE - jenkins

This may sound like a stupid question, but I am installing Jenkins-X on a Kubernetes cluster on GKE. When I install through Cloud Shell, the /usr/local/bin folder I move the jx binary to is reset every time the shell is restarted.
My question is two-fold:
Am I correct in installing Jenkins-X through Cloud Shell (and not on a particular node)?
How can I get it so the /jx folder is available when the Cloud Shell is restarted (or at least have the /jx folder on the path at all times)?

I run jx from Cloud Shell.
In Cloud Shell you are already logged in and you already have a project configured. To prevent jx from re-logging in to Google Cloud and re-selecting the project, use the following arguments:
jx create cluster gke --skip-login=true --project-id projectId
Download jx to ~/bin and update $PATH to include both ~/bin and ~/.jx/bin. Put the following in ~/.profile:
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
PATH="$HOME/.jx/bin:$PATH"
~/.jx/bin is where jx downloads helm if it needs it.
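As a concrete sketch, assuming you have already downloaded the jx binary into your current working directory (the file name and location are up to you):
# keep the binary in your persistent home directory instead of /usr/local/bin
mkdir -p ~/bin
mv ./jx ~/bin/jx
chmod +x ~/bin/jx
# make the PATH change permanent for future Cloud Shell sessions
cat >> ~/.profile <<'EOF'
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi
PATH="$HOME/.jx/bin:$PATH"
EOF
# pick it up in the current session and verify
source ~/.profile
jx version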

Google Cloud Shell VMs are ephemeral and they are discarded shortly after the end of a session. However, your home directory persists, so anything installed in the home directory will remain from session to session.
I am not familiar with Jenkins-X. If it requires a daemon process running in the background, Cloud Shell is not a good option and you should probably set up a GCE instance. If you just need to run some command-line utilities to control a GKE cluster, make sure that whatever you install goes into your home directory where it will persist across Cloud Shell sessions.

Related

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed docker image in the image repository for our account. Or at least, I am trying to figure out how to check whether there is one, and how to correlate its digest with the image we've deployed, which comes from gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to container images and Docker, I figured I could use Cloud Shell to check it that way, but it seems that when the App Engine instance is set up with the shell script provided (located here), it doesn't really "load" a docker image the way it would if you had deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the App Engine flexible environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which docker images your App Engine flexible instance uses, SSH into the instance. You can do this by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by running this gcloud command from your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your docker images.
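For example, a rough way to correlate what is running with the published GTM image by digest (IMAGE_ID below is a placeholder; adjust to whatever docker images shows on the instance):
# on the App Engine instance: list local images together with their digests
docker images --digests
# or inspect one image for its repo digest(s)
docker inspect --format '{{range .RepoDigests}}{{.}}{{"\n"}}{{end}}' IMAGE_ID
# from your own terminal: list the digests/tags published for the official GTM image and compare
gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image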

jupyterhub - How to install packages persistently?

I have installed the JupyterHub docker image on my server, which automatically creates and launches jupyter notebook containers for each user who logs in: https://github.com/jupyterhub/jupyterhub
Inside this personal container, I can use pip/conda to install extra packages. However, whenever the host machine reboots, the container has to be recreated and the installed packages are lost.
Is there a good solution for making this persistent? I suppose the installed packages could be mounted as some kind of persistent volume (like the user data already is), but with little Docker experience I wouldn't know how to set that up.
Check whether the official Jupyter documentation on user environments helps. I've copied the relevant text from that link below:
Allow users to create their own conda environments
Sometimes you want users to be able to create their own conda environments. By default, any environments created in a JupyterHub session will not persist across sessions. To resolve this, take the following steps:
1. Ensure the nb_conda_kernels package is installed in the root environment (e.g., see Build a custom Docker image with repo2docker).
2. Configure Anaconda to install user environments to a folder within $HOME. Create a file called .condarc in the home folder for all users, and make sure that the following lines are inside:
envs_dirs:
  - /home/jovyan/my-conda-envs/
The configuration above will cause Anaconda to install new environments to this folder, which will persist across sessions.
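For example, once that .condarc is in place, a user can create an environment from a notebook terminal roughly like this (the environment name and Python version are arbitrary):
conda create -y -n my-env python=3.9 ipykernel
conda env list   # the new env should be created under /home/jovyan/my-conda-envs/
With nb_conda_kernels installed in the root environment, the new environment should then appear as a selectable kernel, and it survives container restarts because it lives in the persistent home volume.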

Initial setup for ssh on docker-compose

I am using docker for MacOS / Win.
I connect to external servers via ssh from a shell in a docker container.
For now, I generate an ssh key in the docker shell and manually send the public key to the servers.
However, with this method, every time I rebuild the container the ssh key is deleted.
So I want to set up an initial ssh key when I build the image.
I have 2 ideas:
1. Mount the .ssh folder from my macOS host into the docker container so it persists.
(Permission control might be difficult and complex....)
2. Write a script that generates the ssh key and sends it to the servers, in docker-compose.yml or the Dockerfile.
(Every time I build, a new key is sent...??)
Which is the best practice? Or do you have any idea for setting the ssh key automatically?
Best practice is typically to not make outbound ssh connections from containers. If what you’re trying to add to your container is a binary or application code, manage your source control setup outside Docker and COPY the data into an image. If it’s data your application needs to run, again fetch it externally and use docker run -v to inject it into the container.
As you say, managing this key material securely, and obeying ssh’s Unix permission requirements, is incredibly tricky. If I really didn’t have a choice but to do this I’d write an ENTRYPOINT script that copied the private key from a bind-mounted volume to my container user’s .ssh directory. But my first choice would be to redesign my application flow to not need this at all.
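If you do go that route, a minimal sketch of such an entrypoint might look like the following (the /run/secrets/id_rsa mount path and file names are placeholders, not anything Docker mandates):
#!/bin/sh
# entrypoint.sh -- copy a bind-mounted private key into the user's .ssh directory,
# apply the permissions ssh insists on, then hand off to the container's real command
set -e
mkdir -p "$HOME/.ssh"
cp /run/secrets/id_rsa "$HOME/.ssh/id_rsa"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/id_rsa"
exec "$@"
You would then bind-mount the key read-only at run time, e.g. docker run -v "$PWD/id_rsa:/run/secrets/id_rsa:ro" ..., so the key is never baked into the image.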
After reading the "I'm a windows user .." comment, I'm thinking you are solving the wrong problem. You are looking for easy (sane) shell access to your servers. There are two simpler solutions.
1. Windows Subsystem for Linux -- https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux (not my choice)
2. Cygwin -- http://www.cygwin.com -- for that comfy Linux feel to your cmd :-)
Here is how I install it:
Download and install it. Be careful to pick only the features beyond the base install that you actually need (there is a LOT and most of it you will not need -- like the compilers and X). Make sure that SSH is selected. Don't worry, you can rerun the setup as many times as you want (I do that occasionally to update what I use).
Start the bash shell (there will be a link after the installation)
a. Run 'cygpath -wp $PATH'
b. Look at the results -- there will be a couple of folders at the beginning of the path that look like "C:\cygwin\bin;C:\cygwin\usr\local\bin;..." -- in short, all the paths that start with "C:\cygwin", provided you installed Cygwin into the "C:\cygwin" directory.
c. Add these paths to your system path.
d. Start a new instance of CMD and run 'ls'; it should now work directly under the Windows shell.
Extra credit.
a. Move all the ".xxx" files that were created during the first launch of the shell in your C:\cygwin\home\<username> directory to your Windows home directory (C:\Users\<username>).
b. Exit any bash shells you have running.
c. Delete the C:\cygwin\home directory.
d. Use the Windows mklink utility (from an Administrator shell) to create a link named home under Cygwin pointing to C:\Users: 'mklink /J C:\Cygwin\home C:\Users'
This will make your windows home directory the same as your cygwin home.
After that you follow the normal setup for ssh under Cygwin bash and you will be able to generate the keys and distribute them normally to servers.
NOTE: you will have to sever the propagation of credentials from Windows to your <home>/.ssh folder (in the folder's security settings), leaving just your user id, and then set permissions on the folder and the various key files underneath appropriately for SSH using 'chmod'.
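For reference, the usual key generation and permission tightening from the Cygwin bash shell looks roughly like this (the ed25519 key type is just an example and user@your.server is a placeholder; if ssh-copy-id is not installed, append the .pub file to the server's ~/.ssh/authorized_keys by hand):
ssh-keygen -t ed25519                 # creates the key pair under ~/.ssh
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub
ssh-copy-id user@your.server          # distribute the public key to a server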
Enjoy -- some days I have to squint to remember I'm on a windows box ...

mongooseim cluster setup eacces error on ubuntu 14.04

We are trying to create a master-master cluster of two mongooseim instances on AWS in the same virtual network.
All necessary ports are opened in AWS security group.
I suspect some issue with the mongooseim setup on Ubuntu 14.04 LTS.
After running the join_cluster command on one of the nodes, we get the following error (see screenshot):
Error: {error,{badmatch,{error,eacces}}}
Attached screenshot with details.
The server configuration was not changed except for the vm args shown in the attached screenshot.
Is this an issue with your binary, or some other glitch?
I ran into this issue myself. Mongoose uses Erlang's built-in Mnesia storage system for a lot of information, including the cluster topology. The default path for Mnesia's storage is /var/lib/mongooseim. When you do a mongooseimctl join_cluster ... it needs to wipe its Mnesia store and basically pulls a copy from the cluster it's joining. The issue arises because it also tries to delete /var/lib/mongooseim itself, which it won't have permission to do: the running user, mongooseim, doesn't have write permission on the parent directory, /var/lib. Nor should it.
The way I fixed this was by creating a subdirectory which it could safely delete and recreate, and configuring it to use that as its Mnesia directory:
sudo mkdir /var/lib/mongooseim/mnesia
sudo chown mongooseim:mongooseim /var/lib/mongooseim/mnesia
Configuration for the mnesia directory exists by default in /etc/mongooseim/app.config. In mine it was the third line. Originally it looked like this:
{mnesia, [{dir, "/var/lib/mongooseim"}]},
I changed the path to the new directory I created
{mnesia, [{dir, "/var/lib/mongooseim/mnesia"}]},
After that, I stopped and started mongoose and was successfully able to join the cluster
mongooseimctl stop
mongooseimctl start && mongooseimctl started
mongooseimctl join_cluster mongooseim#other.node.name

EC2 User Data runs script but does not boot up application

I have a custom AMI that has my app directory and a docker image. I'm setting up Auto Scale Group with Launch Configuration to create a new instance. I have a User Data script to boot up the application. This is the code:
#!/bin/bash
docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
The script runs, but the app doesn't. I can SSH in and run the app manually, which works. Looking at the cloud-init-output.log file, I'm getting the following:
/var/lib/cloud/instance/scripts/part-001: line 4: docker-compose: command not found
Docker-compose is available when I SSH in, as I installed it before creating my custom AMI.
Anything I'm missing?
It doesn't matter, regarding your best practice question; either way would suffice.
HakRou is right, however.
The bootstrap is operating under a different security context / shell environment, so you need to cater for that.
You could just use the full path to the binary, such as:
/usr/local/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app
and see how that goes.
docker-compose might have been available to the user you used to SSH into your instance (like ec2-user, ubuntu or admin), but it might not be available to root, and root is the one used with user data when Amazon spins up a new instance.
So you might want to add a soft link to docker-compose in one of the folders on root's $PATH, /usr/bin for example.
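Putting both suggestions together, a sketch of the user data script might look like this (the /usr/local/bin location is an assumption -- check the real path on your AMI with 'which docker-compose'):
#!/bin/bash
# make docker-compose reachable from root's PATH, then start the app with its full path
ln -sf /usr/local/bin/docker-compose /usr/bin/docker-compose
/usr/bin/docker-compose -f /home/ec2-user/app/docker-compose.yaml up -d app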
