Equivalent of the Docker.DotNet AuthConfig class in KubernetesClient for a .NET application - docker

I have a Docker Swarm application (.NET) that uses the AuthConfig class to store information (username, password, server address, tokens, etc.) for authenticating with registries. I am now trying to write the same application for Kubernetes using KubernetesClient.
Can someone please let me know whether there is an equivalent of the AuthConfig class in the KubernetesClient (k8s.Models) library as well?

The closest class for creating a connection to the Kubernetes API server endpoint is the following:
KubernetesClientConfiguration (assuming you have a proper KUBECONFIG environment variable set, or at least a kubeconfig file on disk); a short usage sketch follows below.
More specific classes can be found in the folder:
csharp/src/KubernetesClient/KubeConfigModels/
Usage examples can be found here:
csharp/examples/
I would also recommend reading the following documentation pages:
Access Clusters Using the Kubernetes API
Configure Access to Multiple Clusters
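For illustration, a minimal sketch of wiring this up with the C# client (assuming a kubeconfig on disk; exact method names can differ between client versions):

using k8s;

// Build the client configuration from the kubeconfig on disk (or KUBECONFIG),
// which is roughly the role AuthConfig plays for a registry in Docker.DotNet.
var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();

// When running inside a pod, the mounted service-account token can be used instead:
// var config = KubernetesClientConfiguration.InClusterConfig();

var client = new Kubernetes(config);

// Example call; recent client versions group operations by API group,
// while older versions expose client.ListNamespacedPod(...) directly.
var pods = client.CoreV1.ListNamespacedPod("default");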

Related

How can I authenticate to the Google Cloud Video Intelligence API from a Go Docker container running on a Google Cloud VM using a service account?

I'm trying to make a request in Go, client.AnnotateVideo(ctx, &annotateVideoRequest), to the Google Cloud Video Intelligence API using the package cloud.google.com/go/videointelligence/apiv1.
I noticed that if I'm on a Google VM, I don't need any credentials or environment variables, because the documentation says:
For API packages whose import path is starting with "cloud.google.com/go",
such as cloud.google.com/go/storage in this case, if there are no credentials
provided, the client library will look for credentials in the environment.
But I guess I can't authenticate because I'm running a Docker container inside the Google VM, and I don't know whether I really need a credentials file in that container: I don't know whether the library automatically creates a credentials file, or whether it just checks for $GOOGLE_APPLICATION_CREDENTIALS and uses that (which makes no sense to me; I'm on a Google VM, and I'm supposed to have that permission already).
The error is:
PermissionDenied: The caller does not have permissions
Some links that might be helpful:
https://pkg.go.dev/cloud.google.com/go/storage
https://cloud.google.com/docs/authentication#environment-service-accounts
https://cloud.google.com/docs/authentication/production#auth-cloud-implicit-go
https://cloud.google.com/video-intelligence/docs/common/auth#adc
Thanks in advance!

Keycloak Docker: import LDAP bind credentials without exposing them

I have a Keycloak Docker image and I import my realm configuration from a JSON file. It works, so far so good.
But my configuration contains an LDAP provider that doesn't have the right credentials (Bind DN and Bind Credentials). They are left out of the JSON for security reasons, so I have to enter the credentials manually in the Admin Console after startup.
I am now trying to find a secure way to automate this without exposing the credentials in clear text, so that we don't have to enter them manually after each startup.
I thought about inserting them into the JSON file inside the container with a shell script or similar and then importing the resulting file when starting Keycloak. The problem is that the credentials would then be exposed in clear text in the JSON file inside the container, so anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials into that JSON file from environment variables (these are securely stored in the GitLab runner and masked in the logs), starting Keycloak, and then removing the JSON file on the fly after Keycloak successfully starts, without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose file, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
This way, your configuration is persistent, so you only enter your LDAP credentials the first time and don't need complex pipeline operations.
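For illustration, a rough docker-compose sketch of that setup (image tags, the choice of database, and the credentials are placeholders; the migration flag is the one shown above):

version: "3"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change-me            # placeholder
    volumes:
      - keycloak-db:/var/lib/postgresql/data  # persistent volume for the Keycloak DB
  keycloak:
    image: jboss/keycloak
    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change-me                  # placeholder
    depends_on:
      - postgres
volumes:
  keycloak-db: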

Spring Cloud Data Flow for Kubernetes - Could not configure multiple Kafka brokers

I'm trying to migrate my SCDF local server deployments to the Kubernetes-based solution, but I've run into problems configuring the Kafka broker list for the apps on the server.
I followed the instructions here: https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.7.2.RELEASE/reference/htmlsingle
and downloaded the sample configuration from: https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes at branch v1.7.2.RELEASE.
Because we've already deployed a Kafka cluster, I'd like to configure the broker and ZooKeeper nodes in the server-config-kafka.yaml file so that we can use the same Kafka cluster.
I configured my environmentVariables like this:
deployer:
  kubernetes:
    environmentVariables: >
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS='172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092',
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES='172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181'
but got an error when trying to deploy my SCDF stream:
Invalid environment variable declared: 172.16.3.193:9092
How should I configure it to make it work?
Thanks in advance.
Remove the > in your YAML.
That's creating a block string, not a map of environment variables. See: In YAML, how do I break a string over multiple lines?
Also, if you are using CoreDNS in Kubernetes, you should probably be using something like kafka.default.svc.cluster.local for the value rather than IP addresses, and similarly for ZooKeeper.
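Putting both suggestions together, the block might look something like this (a sketch assuming the Kubernetes deployer accepts a YAML map of environment variables here; verify against your SCDF version):

deployer:
  kubernetes:
    environmentVariables:
      SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: "172.16.3.192:9092,172.16.3.193:9092,172.16.3.194:9092"
      SPRING_CLOUD_STREAM_KAFKA_BINDER_ZK_NODES: "172.16.3.192:2181,172.16.3.193:2181,172.16.3.194:2181"
      # or, per the CoreDNS note above, in-cluster Kafka/ZooKeeper service names instead of raw IPs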

How to configure a Dataflow pipeline to use a Shared VPC?

I know there are configuration arguments where you can specify the network and subnetwork. I tried doing that, but with a Shared VPC network it gives me this error.
Using subnetworks in Cloud Dataflow requires specifying the subnetwork parameter when running the pipeline; however, for subnetworks located in a Shared VPC network, you must use the complete URL in the following format:
https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT>/regions/<REGION>/subnetworks/<SUBNETWORK>
Additionally, verify that you have added the project's Dataflow service account to the Shared VPC host project's IAM policy and granted it the "Compute Network User" role, to ensure the service has the required access.
You can take a look at Google's official documentation on the subnetwork parameter, which contains detailed information about this.
Be sure to include the Project ID in the --subnetwork option:
/projects/<PROJECT_ID>/regions/<REGION>/subnetworks/<SUBNETWORK>
and grant the Dataflow service account the Network User role in the host project, which is what I suspect is missing, based on the error message.
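For example, when launching an Apache Beam pipeline with the Python SDK, the full URL is passed via --subnetwork (the script name, project, region, bucket, and subnetwork below are placeholders):

python my_pipeline.py \
    --runner=DataflowRunner \
    --project=<SERVICE_PROJECT_ID> \
    --region=<REGION> \
    --temp_location=gs://<BUCKET>/tmp \
    --subnetwork=https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT>/regions/<REGION>/subnetworks/<SUBNETWORK>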

What should the WSO2 APIM <localMemberHost> parameter contain in the config, and what is it used for by APIM?

I'm trying to deploy APIM in a distributed setup with Docker. What should the value of localMemberHost be in the axis2.xml file, and what is it used for?
This is used for clustering. If you use the WKA membership scheme, you need to specify the IP address of the container that is accessible from outside. For Docker, this is the container's IP.
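As a rough illustration, the relevant part of axis2.xml looks something like this (the IP and the well-known member host name are just examples; check the element names against your APIM version):

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <!-- Address this node advertises to other members; for Docker, the container IP they can reach -->
    <parameter name="localMemberHost">172.17.0.2</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>apim-gateway</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>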
