Using the OpenShift API, is there a way to get deployments in a project? (openshift-3)

I am doing some recon work on projects in our OpenShift cluster, and I am looking for an easy way to get all the Projects in a certain group.
I know there is an OpenShift API that exposes certain OpenShift artifacts.
For example, I could make an API call to the OpenShift cluster like this:
/oapi/v1/projects/{name}
To get a project of a specific name.
Is there a way to then get all the deployments for that project? Something like this:
/oapi/v1/projects/{name}/deployments
That way I could know what deployments are in a certain environment in our OpenShift cluster.
Any thoughts on this would be great.

OCP"Projects" being a superset or "encapsulation" of k8s"Namespaces", you can list the deployments of a specific"Project/Namespace" with this API:
GET /apis/apps/v1/namespaces/{namespace}/deployments
Of course you need the correct autorizations to be able to list such objects as explained in the following docs
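For example, here is a minimal sketch with curl (hedged: the API host $API and the project name myproject are placeholders; the token comes from oc whoami -t):

TOKEN=$(oc whoami -t)

# Kubernetes Deployments in the project:
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$API/apis/apps/v1/namespaces/myproject/deployments"

# On OpenShift 3.x you may be after DeploymentConfigs instead, which live
# under the legacy /oapi path:
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$API/oapi/v1/namespaces/myproject/deploymentconfigs"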
Reference: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#http-request-2
OCP API Reference: https://docs.okd.io/latest/rest_api/index.html
K8S API Reference: https://kubernetes.io/docs/reference/kubernetes-api/

Can an Akka.net node hosted within a container participate in a cluster outside of the container host?

I'm fairly new to Akka.net and I'm a total noob when it comes to containers so please forgive me if this is too simple (but I kind of hope it is).
I'm trying to build a web app cluster using Azure app services. I want the lighthouse to be hosted in an Azure container instance. I've been successful putting the cluster together on my local box (without docker). I've tried standing up a local docker container with port forwarding but I haven't been able to get it to work.
Thanks in advance for your help.
You can definitely do this, but since you're using Azure App Services I'd recommend taking a look at Akka.Management and Akka.Discovery.Azure instead.
This eliminates the need to use Lighthouse at all: your nodes can form a cluster on Azure App Service by querying a shared Azure Table Storage table instead.
There's a complete Azure App Services demo that shows how to do this here: https://github.com/petabridge/azure-app-service-akkadotnet
And the relevant code is here: https://github.com/petabridge/azure-app-service-akkadotnet/blob/dev/src/Akka.ShoppingCart/Startup.cs
NOTE: this uses the Akka.Hosting methods, which eliminate 99% of HOCON configuration and tie into Microsoft.Extensions for configuration, hosting, and DI. Akka.Hosting is a relatively new package that just hit stable at the end of 2022. You should definitely use it; all of the documentation and examples will be reworked to incorporate it once Akka.NET v1.5 ships at the end of February, 2023.

How to restrict Jenkins access to a specific GitHub organization?

We have Jenkins set up in our organization with two organizational folders, which basically do builds for repos from two different GitHub organizations.
We authenticate to Jenkins through Keycloak using OpenID Connect (not sure if that's relevant or not).
I would like to know if it is possible to restrict a certain group of users to only being able to view builds from one of the GitHub organizations. For example, if we have two GitHub organizations, mrrobot_org and evilcorp_org, I would like to be able to make an evilcorp_org_devs group and add users to it, so that those developers can only access builds from the evilcorp_org GitHub organization.
Someone told me this might be possible to do from Keycloak, but it does not seem likely.
I've tried quite a few things already, but from what I've read the best option seems to be this plugin:
https://plugins.jenkins.io/role-strategy/
and matching the organization folder using a regex: "Folders can be matched using expressions like ^foo/bar.*".
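For this case, a hypothetical folder pattern for a role restricted to the evilcorp_org organization folder could look like:

^evilcorp_org(/.*)?$

which matches the folder itself and every job under it.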
Any other suggestions on how I could do this?
Thanks so much.
For anyone reading this: I ended up using the Folder Auth plugin for Jenkins.
I stuck with Keycloak for authentication, but used the Folder Auth plugin for authorization.
This allows me to restrict access per Jenkins folder, with each folder containing the builds of a given GitHub organization.
The plugin is pretty easy to use. You can check it out here:
https://github.com/jenkinsci/folder-auth-plugin
The docs are here:
https://github.com/jenkinsci/folder-auth-plugin/blob/master/docs/usage.md

Pulling a Google Container Registry container into Google Kubernetes Engine from another GCP project

I am looking to pull a container from Google Container Registry that exists in one Google Cloud Platform project into a Google Kubernetes Engine cluster that exists in a separate GCP project.
There's a good resource on this here: https://medium.com/hackernoon/today-i-learned-pull-docker-image-from-gcr-google-container-registry-in-any-non-gcp-kubernetes-5f8298f28969 but it includes the complexity of a non-GCP project. My guess is that there's an easier approach since everything here resides in Google Cloud Platform.
Thanks,
https://medium.com/google-cloud/using-single-docker-repository-with-multiple-gke-projects-1672689f780c
This Medium post from way back seems to describe what you are trying to do. In short: you need to grant the "Storage Object Viewer" IAM role to the service account of the cluster that wants to pull images from the other project's registry. The name of the role isn't exactly intuitive, but it sort of makes sense when you consider that the images are stored in Cloud Storage.
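As a rough sketch of that grant (hedged: the project ID and service-account address below are placeholders; GCR keeps images in a bucket named artifacts.<project-id>.appspot.com):

REGISTRY_PROJECT=my-registry-project
NODE_SA=my-gke-nodes@my-cluster-project.iam.gserviceaccount.com

# Give the cluster's node service account read access to the image bucket.
gsutil iam ch \
  "serviceAccount:${NODE_SA}:objectViewer" \
  "gs://artifacts.${REGISTRY_PROJECT}.appspot.com"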

How to collect all IPs of pods matching a specific name filter

I have a legacy application that is deployed in a clustered environment. When one of the application nodes receives a call, it reads from a configuration file a static list of all the nodes where the application is deployed.
Once all the IPs are collected, it communicates with each app node over JMX.
The current aim is to migrate to k8s, where the list of application pods is dynamic and cannot simply be stored as-is, so I need to implement something like service discovery.
My current thought is to implement a simple REST service running in a separate pod, whose main job is to return the list of IPs (entry points) of the application pods matching some predicate.
So I have a few questions:
Is this a correct way to work? Any other options (without changing the legacy code)?
Is there any ready-made solution for this? If not, how can I get information about the needed pods inside my REST service?
Define a Service with a label selector that includes all of your special pods; then you can list all the endpoint IPs by asking the API server.
You can check that it's working with the command:
kubectl get endpoints
After that, what remains is how to run this query from inside your pod. That's another story; this link explains that part:
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod
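For reference, a minimal sketch of that in-pod query, following the pattern in the doc above (hedged: the Service name legacy-app is a placeholder, and the pod's service account needs RBAC permission to read endpoints):

# Run inside the pod; these credentials are mounted by Kubernetes automatically.
SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA/token")
NAMESPACE=$(cat "$SA/namespace")

# List the endpoint IPs of the "legacy-app" Service in this namespace.
curl -s --cacert "$SA/ca.crt" \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/endpoints/legacy-app"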
It looks like you're running a clustered application, so you probably need a headless Service combined with a StatefulSet.
With this, you will be able to reach your replicas using stable DNS names like <pod-name>.<service-name>.<namespace>.svc.cluster.local (for example legacy-app-0.legacy-app.default.svc.cluster.local), without needing to extract IP addresses from an endpoints query.
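A minimal sketch of such a headless Service (hedged: the name, label, and JMX port are placeholders; the StatefulSet must reference it via its serviceName field to get the stable per-pod DNS entries):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
spec:
  clusterIP: None        # headless: DNS returns one record per pod
  selector:
    app: legacy-app      # must match the StatefulSet's pod labels
  ports:
    - name: jmx
      port: 9010
EOF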

How to create a microservice that replicates itself as the data load increases?

I am working on a big-data project where I am trying to get tweets from Twitter, analyse them, and make predictions from them.
I have followed this tutorial for gathering the tweets: http://blog.cloudera.com/blog/2012/10/analyzing-twitter-data-with-hadoop-part-2-gathering-data-with-flume/
Now I am planning to build a microservice which can replicate itself as I increase the number of topics for which I want tweets. Using the code I have written to gather the tweets, I want to make a microservice that can take a keyword, create an instance of that code for that keyword, and gather tweets; for each keyword an instance should be created.
It would also be helpful if you could tell me what tools to use for such an application.
Thank you.
I want to make a microservice that can take a keyword, create an instance of that code for that keyword, and gather tweets; for each keyword an instance should be created.
You could use Kubernetes as the underlying cluster/deployment infrastructure. It has an API that allows you to deploy new services programmatically. So what you would have to do is:
Set up a basic service container for your Twitter service and make it available in a container repository.
Then deploy a first service based on your container. The service configuration will contain the keyword that the service uses, as well as information about the Kubernetes cluster (how to access the cluster API and where to find the container in the repository).
Now your first service has all the information it needs to automatically create additional service descriptions for Kubernetes (with other keywords) and deploy those additional services by calling the Kubernetes cluster API, as sketched below.
Since the additional services are passed all the necessary information as well, they themselves can then start even more services, and so on.
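As a hedged sketch of that step using kubectl (the image name and environment variable are hypothetical; a real service would POST an equivalent JSON body to the cluster API instead of shelling out):

KEYWORD=bigdata

# Create one collector Deployment per keyword...
kubectl create deployment "tweets-${KEYWORD}" \
  --image=registry.example.com/twitter-collector:latest

# ...and pass the keyword to the container as an environment variable.
kubectl set env "deployment/tweets-${KEYWORD}" KEYWORD="${KEYWORD}"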
You will probably need to put some effort into figuring out cluster provisioning, but that can also be done automatically with auto-scaling (available on Google or AWS clouds, for example).
A different approach would be to run a horizontally scaled cluster of your basic Twitter services that use a self-organization algorithm to divide up the keywords placed in a shared database or event queue.
