What is the username and password to log in to the default example Kubeflow deployment?

What is the kubeflow ui default username and password for the default example deployment of kubeflow?
The Kubeflow manifests version is 1.5.0, which has been deployed by following the instructions on that page:
while ! kustomize build example | kubectl apply -f -; do echo "Retrying to apply resources"; sleep 10; done
The URL for the UI:
http://localhost:8001/api/v1/namespaces/istio-system/services/http:istio-ingressgateway:80/proxy/

Kubeflow Manifests
Dex is an OpenID Connect (OIDC) identity provider with multiple authentication backends. In this default installation, it includes a static user with the email user@example.com. By default, the user's password is 12341234. For any production Kubeflow deployment, you should change the default password by following the relevant section.
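If you want to rotate that password rather than redeploy, here is a rough sketch of the usual approach for the 1.5 manifests (the file path and namespace below are what the default example uses, but they may differ in your checkout, so treat them as assumptions):
# Generate a bcrypt hash for the new password (requires the passlib package):
python3 -c 'from passlib.hash import bcrypt; import getpass; print(bcrypt.using(rounds=12, ident="2y").hash(getpass.getpass()))'
# Paste the hash into the staticPasswords entry in common/dex/base/config-map.yaml,
# then rebuild/apply the manifests and restart Dex:
kustomize build example | kubectl apply -f -
kubectl rollout restart deployment dex -n auth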

Related

Permission issues while docker push

I'm trying to push my docker image to google container image registry but get an error which says I do not have the needed permission to perform this operation.
I have already tried gcloud auth configure-docker but it doesn't work for me.
I first build the image using:
docker build -t gcr.io/trynew/hello-world-image:v1 .
Then I'm trying to attach a tag and push it:
docker push gcr.io/trynew/hello-world-image:v1
This is my output:
The push refers to repository [gcr.io/trynew/hello-world-image]
e62774cdb1c2: Preparing
0f6265b750f3: Preparing
f82351274ce3: Preparing
31a16430afc8: Preparing
67298499a3ed: Preparing
62d5f39c8fe4: Waiting
9f8566ee5135: Waiting
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials.
To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Google Cloud has specific documentation on how to grant permissions for docker push; this is the first thing you should have a look at, I think: https://cloud.google.com/container-registry/docs/access-control
After checking that you have sufficient permissions you should proceed with authentication with something like:
gcloud auth configure-docker
See more here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
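For example, a minimal sketch of granting push access at the project level (the project ID and service-account email below are placeholders; the access-control doc above also describes granting narrower roles on the underlying storage bucket):
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-pusher@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"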
If you are running docker as root (i.e. with sudo docker), then make sure to configure the authentication as root. You can run for example:
sudo -s
gcloud auth login
gcloud auth configure-docker
...that will create (or update) a file under /root/.docker/config.json.
(Are there any security implications of gcloud auth login as root? Let me know in the comments.)
In order to be able to push images to the private registry you need two things: API Access Scopes and Authenticate your VM with the registry.
For the API Access Scopes (https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform) we can read in the official documentation:
For GKE:
By default, new Google Kubernetes Engine clusters are created with
read-only permissions for Storage buckets. To set the read-write
storage scope when creating a Google Kubernetes Engine cluster, use
the --scopes option.
For GCE:
By default, a Compute Engine VM has the read-only access scope
configured for storage buckets. To push private Docker images, your
instance must have read-write storage access scope configured as
described in Access scopes.
So first, verify if your GKE cluster or GCE instance actually has the proper scopes set.
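A quick sketch of how you might check and set those scopes (instance, cluster, and zone names are placeholders):
# Check which scopes a GCE instance has:
gcloud compute instances describe my-instance --zone us-central1-a --format="yaml(serviceAccounts)"
# Create a GKE cluster with read-write storage scope:
gcloud container clusters create my-cluster --zone us-central1-a --scopes=storage-rw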
The next is to authenticate to the registry:
a) If you are using a Linux based image, you need to use "gcloud auth configure-docker" (https://cloud.google.com/container-registry/docs/advanced-authentication).
b) For Container-Optimized OS (COS), the command is "docker-credential-gcr configure-docker" (https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_google_container_registry)
Windows / Powershell
I got this error on Windows when I was trying to run docker push from a normal powershell window after authenticating in the google cloud shell that had opened when I installed the SDK.
The solution was simple:
Start a new powershell window to run docker push after running the gcloud auth configure-docker command.
Make sure you've activated the registry too:
gcloud services enable containerregistry.googleapis.com
Also Google has a tendency to jump to a default account (maybe your personal gmail) which may or may not be the one you want (your business email). Make sure if you're opening any links in a browser that you're in the correct Google account.
I'm not exactly sure what's going on yet because I'm brand new to docker, but something got refreshed when starting a new Powershell instance.
As noted in https://stackoverflow.com/a/59799035/26283371, there appears to be a bug in the Linux version of the Cloud SDK where authentication fails using the standard method (gcloud auth configure-docker). Instead, create a JSON keyfile per this, and that tends to work.
I still can't get the gcloud auth configure-docker helper to work. What did work was authenticating with an access token, like so:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io. (Be sure to include https://, otherwise it won't work).
You can view options for print-access-token here.
First, make sure you have covered all the points listed in the following official documentation:
https://cloud.google.com/container-registry/docs/advanced-authentication
This error mostly occurs due to a Docker config update, which you can check with cat ~/.docker/config.json.
Now update it for GCR with the following command:
gcloud auth configure-docker
Just in case anyone else is banging their head against a wall: my PIA VPN caused this behavior.
"unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Turn my VPN off and it works fine. Turn it back on and it breaks again.
This is the only way that worked for me. I found it in a kubernetes/kompose Github issue.
Remove the credsStore key in ~/.docker/config.json
This will force Docker to write the auth into the JSON file when you use docker login. You can't untick "Securely store Docker logins in macOS keychain" in Docker Desktop any more -- and the current credsStore is no longer the macOS keychain, it's "desktop".
Run gcloud auth login to auth with gcloud (just to be explicit), then:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
You should see this:
WARNING! Your password will be stored unencrypted in /Users/andrew/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Source: https://github.com/kubernetes/kompose/issues/1043#issuecomment-609019141
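A small sketch of the credsStore removal step, assuming jq is available (you can also just edit the file by hand; back it up first):
cp ~/.docker/config.json ~/.docker/config.json.bak
jq 'del(.credsStore)' ~/.docker/config.json.bak > ~/.docker/config.json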
The fix is as follows: run gcloud auth login (the browser will open and allow you to authenticate), then run gcloud auth configure-docker and select Y, then redo the push. It should work like a charm.
I also had the same issue in a Linux environment. I just set Docker up to run as a non-root user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), and it works.
In my case, the DOCKER_CONFIG env variable was defined with an invalid value (not pointing to a valid Docker config JSON).
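A quick way to check for that (DOCKER_CONFIG should be unset or point to a directory containing a valid config.json):
echo "$DOCKER_CONFIG"
unset DOCKER_CONFIG   # fall back to the default ~/.docker/config.json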
I had the same issue, but for me the problem was with different users on my Linux system. I had authenticated gcloud with my personal Linux user, but when pushing I was doing it as root. So I had to authenticate my root user with gcloud as well:
sudo gcloud init
This issue happened to me when I switched between service accounts pointing to different GCP projects. Even though the service account had permission to push, it said it did not have the permission. I resolved this by deleting the config.json file present in ~/.docker.
Once this is done, run the commands below and you should be able to push the image.
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, asia.gcr.io, etc.
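Spelled out as commands, using gcr.io as an example hostname:
rm ~/.docker/config.json
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io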

Helm Sentry installation failed on deployment: initdb: could not change permissions of directory

I have a local Openshift instance where I'm trying to install Sentry using helm as:
helm install --name sentry --wait stable/sentry.
All pods are deployed fine other than the PostgreSQL pod also deployed as a dependency for Sentry.
This pod's initialization fails with a CrashLoopBackOff, and the logs show the following:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
Not sure where to start to fix this issue so I can get Sentry deployed successfully with all its dependencies.
The issue was resolved by adding permissions to the service account that was being used to run commands on the pod.
In my case the default service account on OpenShift was being used.
I added the appropriate permissions to this service account using the cli:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
Also see: https://blog.openshift.com/understanding-service-accounts-sccs/

Authenticate Jenkins to Google Cloud Platform Kubernetes

I have the latest 1.6.4 Kubernetes installed on my GCP cluster but cannot figure out how to give Jenkins authorization.
I just tried adding two new commands in the Jenkinsfile:
sh("gcloud config set compute/zone us-central1-b")
sh("gcloud container clusters get-credentials te-cluster")
The first was successful; the second one failed:
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
First, create credentials as follows:
Navigate to Google Cloud API Manager > Credentials > Create Credentials and create a JSON key for the Compute Engine default service account.
Then use gcloud auth activate-service-account --key-file key.json
You can save the content of the JSON file to a variable, for example:
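A minimal sketch of the shell steps inside the Jenkins job, assuming the key content is exposed as a (hypothetical) GCP_SA_KEY secret/environment variable:
echo "$GCP_SA_KEY" > key.json
gcloud auth activate-service-account --key-file key.json
gcloud config set compute/zone us-central1-b
gcloud container clusters get-credentials te-cluster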

Authenticate Google Cloud service account on docker image

I'm finding different behavior from within and outside of a docker image for authenticating a google service account.
Outside. Succeeds.
C:\Users\Ben\AppData\Local\Google\Cloud SDK>gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
Activated service account credentials for: [773889352370-compute@developer.gserviceaccount.com]
Run docker container, pass the .json key to tmp directory.
C:\Users\Ben\AppData\Local\Google\Cloud SDK>docker run -it -v C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json:/tmp/MeerkatReader-d77c0d6aa04f.json --rm -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227
From within docker, confirm the file is there
root@4a4a9314f15c:/tmp# ls
MeerkatReader-d77c0d6aa04f.json npm-24-b7aa1bcf npm-45-fd13ef7c npm-7-22ec336e
Run the same command as before. Fails.
root@4a4a9314f15c:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
What might cause this error? More broadly, what is the suggested strategy for passing credentials? I've tried this and it fails as well. I'm using the Cloud ML API and Cloud Vision, and I'd like to avoid a manual gcloud init at the beginning of every run.
EDIT: To show gcloud info
root@7ff49b26484f:/# gcloud info --run-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic (1/1 checks) passed.
Confirmed the same behavior:
root@7ff49b26484f:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
This is probably due to a clock skew of the docker VM. I debugged the activate-service-account function of the google SDK and got the following error message:
There was a problem refreshing your current auth tokens: invalid_grant:
Invalid JWT: Token must be a short-lived token and in a reasonable timeframe
Please run:
$ gcloud auth login
to obtain new credentials, or if you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
After rebooting the VM, it worked like a charm.
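If you would rather not reboot, a commonly suggested alternative for a boot2docker/docker-machine VM is to resync its clock over NTP; this is only a sketch, and the machine name "default" and the presence of ntpclient in the VM image are assumptions:
docker-machine ssh default "sudo ntpclient -s -h pool.ntp.org"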
Have you attempted to put the credential in the image from the beginning? Is that a similar outcome?
On the other hand, have you tried using --key-file /tmp/MeerkatReader-d77c0d6aa04f.json? Since it appears you're putting the json file in /tmp.
You might also consider checking the network configuration inside the container and with docker from the outside.
In my case, I was using a workload identity provider, and I made a little mistake: I set the workload provider to just the full name of the pool.
How it should be: /projects/${project-number}/locations/global/workloadIdentityPools/my-pool/providers/${id-provider}
I also added the following command before my docker push, because it was required:
gcloud config set account ${{GCP_SERVICE_ACCOUNT}}
In addition, according to the docs https://github.com/google-github-actions/auth#usage, my service account was missing the required roles:
roles/iam.serviceAccountTokenCreator
roles/iam.workloadIdentityUser
Edit: You may also need to grant your service account access to your Workload Identity Pool; you can do it by command or through the interface:
gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
--role=roles/iam.workloadIdentityUser \
--member="MEMBER_ID"
Docs: https://cloud.google.com/iam/docs/using-workload-identity-federation#gcloud

Deploy to IBM Containers without cf/ice CLI

I currently have a workflow that goes like this: Bitbucket -> Wercker.
Wercker correctly builds my app, but when it comes to deploying I am lost. I am attempting to deploy to my IBM Containers registry on Bluemix (recently out of beta).
Running docker login registry.ng.bluemix.net with my IBM account credentials returns a 401: bad credentials on my local machine (boot2docker on OSX). It does the same on Wercker in my deploy step.
Here is my deploy step:
deploy:
  box:
    id: node
    tag: 0.12.6-slim
  steps:
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        tag: main
        entrypoint: node bundle/main.js
        repository: <my namespace>/<my container name> (removed for this post)
        registry: registry.ng.bluemix.net
As you can see: I have the username and password passed in as environment variables as per the Wercker Docs (and I have tested that they are passed in correctly).
Basically: how do you push containers to an IBM registry WITHOUT using the ice/cf CLI? I have a feeling that I'm missing something obvious. I just can't find it.
You need to use either the Containers plugin for cf or the ICE tool to log in.
Documentation
Cloud Foundry plug-in:
cf ic login
ICE:
ice login
Can you create a custom script that can log in first? If the environment already has cf with the containers extension:
- script:
    name: Custom login for Bluemix Containers
    code: cf login -u <username> -p <password> -o <org> -s <space>
Excuse my wercker newb.
The problem is that the authentication with the registry uses a token rather than your userID and password. ice login and cf ic login take care of that but unfortunately a straight up docker login won't work.
Some scripts for initializing, building and cleaning up images are also available here: https://github.com/Osthanes/docker_builder. These are used in the DevOps Services delivery pipeline which is likely a similar flow to what you are building.
Turns out: it's very possible.
Basically:
Install CF cli
cf login -a https://api.ng.bluemix.net
Extract the token from ~/.cf/config.json (the text after "bearer" in AccessToken, plus "|", plus OrganizationFields.Guid).
It depends what you want to do with it. I have a very detailed write-up here on Github.
You can use the token as the password, passing 'bearer' as the username.
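A rough sketch of those steps in shell, assuming jq is installed and you have already run cf login (the exact key names in ~/.cf/config.json may vary by cf CLI version):
TOKEN=$(jq -r '.AccessToken' ~/.cf/config.json | sed 's/^bearer //')   # text after "bearer"
GUID=$(jq -r '.OrganizationFields.Guid' ~/.cf/config.json)
docker login -u bearer -p "${TOKEN}|${GUID}" registry.ng.bluemix.net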
You can now generate tokens to access the IBM Bluemix Container Registry using the container-registry plugin for the bx command.
These tokens can be read-only or read-write and either non-expiring (unless revoked) or expire after 24 hours.
The tokens can be used directly with docker login.
Read the docs here
