Can I run oc commands in openshift pod terminals? - docker

Is there any way that I can run oc commands in pod terminals? What I am trying to do is let the user log in using
oc login
Then run the command to get the token.
oc whoami -t
And then use that token to call the REST APIs of OpenShift. This approach works in my local environment, but on OpenShift there are permission issues, I guess because OpenShift doesn't give root permissions to the user; it says permission denied.
EDIT
So basically I want to be able to get that BEARER token I can send in the HEADERS of the REST APIs to create pods, services, routes etc. And I want that token before any pod is made, because I am going to use that token to create pods. It might sound silly, I know, but that's what I want to know: the way we do it on the command line using oc commands, is it possible on OpenShift?
The other possible way could be to call an API that gives me a token and then use that token in other API calls.
@gshipley It does sound like a chicken-and-egg problem to me. But if I were to explain what I do on my local machine, all I would want is to replicate that on OpenShift, if it is possible. I run the oc commands from Node.js; the oc.exe file is in my repository. I run oc login and oc whoami -t, read the token I get and store it. Then I send that token as BEARER in the API headers. That's what works on my local machine. I just want to replicate this scenario on OpenShift. Is it possible?
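For clarity, this is roughly what that local flow looks like as a shell sketch (the server URL, user, password and namespace below are placeholders, not values from my setup):
oc login https://api.example.com:6443 -u developer -p secret
TOKEN=$(oc whoami -t)
# send the token as a Bearer header to the OpenShift REST API, e.g. to list pods
curl -k -H "Authorization: Bearer $TOKEN" \
  https://api.example.com:6443/api/v1/namespaces/myproject/pods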

As a cluster admin, create a new ClusterRole, e.g. as role.yml:
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  name: mysudoer
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["<your user name>"]
and run
oc create -f role.yml
or, instead of creating a raw role.yml file, use:
oc create clusterrole mysudoer --verb impersonate --resource users --resource-name "<your user name>"
then give your ServiceAccount the new role
oc adm policy add-cluster-role-to-user mysudoer system:serviceaccount:<project>:default
Download the oc tool into your container. Now, whenever you execute a command, you need to add --as=<user name>; to hide that, create a shell alias inside your container:
alias oc="oc --as=<user name>"
oc should now behave exactly as on your machine, with the exact same privileges, because the ServiceAccount only functions as an entry point to the API; the real tasks are done as your user.
In case you want something simpler, just add the proper permissions to your ServiceAccount, e.g.
oc policy add-role-to-user admin -z default --namespace=<your project>
If you run that command, any container in your project that has oc will automagically be able to do tasks inside the project. However, this way the permissions are not inherited from the user as in the first approach, so you always have to add them to the ServiceAccount manually as needed.
Explanation: there is always a ServiceAccount in your project called default. It has no privileges and thus cannot do anything, but the credentials for authenticating as this ServiceAccount are mounted by default in every single container. The cool thing is that if you run oc inside a container in OpenShift without providing any credentials, it will automatically try to log in using this account. The steps above simply show how to give the account the proper permissions, so that oc can use it to do something meaningful.
In case you simply want to access the REST API, use the token provided in
/var/run/secrets/kubernetes.io/serviceaccount/token
and set up the permissions for the ServiceAccount as described above. With that, you will not even need the oc command line tool.
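For example, a minimal sketch of calling the REST API from inside a pod with that token (the in-cluster API address kubernetes.default.svc is the standard service DNS name; the namespace myproject is an assumption):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# list pods in the project using the ServiceAccount token as the Bearer token
curl -k -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/myproject/pods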

Related

Cloud Scheduler has Permission Denied when attempting to run a Cloud Run job

I have created a simple Cloud Run job. I am able to trigger this code via a curl command:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://sync-<magic>.a.run.app
(Obviously <magic> is actually something else)
Cloud Run is configured for Ingress to Allow All Traffic and with Authentication to be required.
I followed this documentation: https://cloud.google.com/run/docs/triggering/using-scheduler
And created a service account, granted it the Cloud Run Invoker Role and then setup an HTTP scheduled job to GET the same URL I tested with CURL. I have Add OIDC Token selected, and I provide the service account created above and the Audience which is the same URL I used with curl.
When I attempt to trigger this job (or when it triggers based of the native cron) it fails with:
{ "status": "PERMISSION_DENIED", "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished", "targetType": "HTTP", "jobName": "projects/<project>/locations/<region>/jobs/sync", "url": "https://sync-<magic>.a.run.app/" }
Again <project>, <region> and <magic> have real values.
I tried using service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com (with YOUR_PROJECT_NUMBER updated appropriately) as the service account that runs the scheduled job. It gives the same error.
Any advice on how to debug this would be greatly appreciated!
Here is what I did, which solved the issue altogether; now I get the success flag when running a secure Cloud Run service via a Cloud Scheduler job:
Create your service on Cloud Run (let's call it "hello") and secure it by removing the "allUsers" principal from the list of permissions. You should then get an error when going to the endpoint, such as: Error: Forbidden
Your client does not have permission to get URL / from this server.
Create an IAM service account for Cloud Scheduler (let's call it "cloud-scheduler"); you will get this: cloud-scheduler@project-ID.iam.gserviceaccount.com. Now comes the important part:
Give your SA the ability to run scheduler jobs by adding the Cloud Run Invoker and Cloud Scheduler Job Runner roles.
Create your Cloud Scheduler job and add the new SA to it according to the Google procedure:
Auth header: Add OIDC token
Service account: cloud-scheduler@project-id.iam.gserviceaccount.com
Audience: https://Service.url.from.cloud.run.service/
Add to your Cloud Run service an additional principal that grants your SA the Cloud Run Invoker role.
Run your scheduler job and voila, all green!
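For reference, a sketch of creating the same job from the CLI; the job name, schedule, URL and service-account email below are placeholders for whatever you used above:
gcloud scheduler jobs create http sync-job \
  --schedule="0 * * * *" \
  --http-method=GET \
  --uri="https://sync-<magic>.a.run.app/" \
  --oidc-service-account-email="cloud-scheduler@project-id.iam.gserviceaccount.com" \
  --oidc-token-audience="https://sync-<magic>.a.run.app/"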
Enjoy
I have tried creating a new service account, giving it the Cloud Run Invoker role, and disabling the Cloud Scheduler API and re-enabling it.
The only thing that worked for me was changing the Auth header from Add OIDC token to None.
For some reason Cloud Scheduler then changes None back to Add OIDC token and triggers Cloud Run normally.

How to authorize Google API inside of Docker

I am running an application inside of Docker that requires me to leverage google-bigquery. When I run it outside of Docker, I just have to go to the link below (redacted) and authorize. However, the link doesn't work when I copy-paste it from the Docker terminal. I have tried port mapping as well, with no luck.
Code:
from google.oauth2 import service_account
from google.cloud import bigquery

# key_path points to the service account JSON key file
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
# Make clients.
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
Response:
requests_oauthlib.oauth2_session - DEBUG - Generated new state
Please visit this URL to authorize this application:
Please see the available solutions on this page; it's constantly updated:
gcloud credential helper
Standalone Docker credential helper
Access token
Service account key
In short, you need to use a service account key file. Make sure you either use Secret Manager, or issue a service account key file just for the purpose of the Docker image.
You need to get the service account key file into the Docker container either at build time or at runtime.
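For example, a minimal sketch of the runtime approach, assuming the key lives at /path/on/host/key.json and the image is called my-bigquery-app (both placeholders). The Google client libraries can pick the key up via GOOGLE_APPLICATION_CREDENTIALS, or you can point key_path in the code above at the mounted file:
docker run \
  -v /path/on/host/key.json:/secrets/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  my-bigquery-app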

Permission issues while docker push

I'm trying to push my docker image to Google Container Registry but get an error which says I do not have the needed permissions to perform this operation.
I have already tried gcloud auth configure-docker but it doesn't work for me.
I first build the image using:
docker build -t gcr.io/trynew/hello-world-image:v1 .
Then I'm trying to attach a tag and push it:
docker push gcr.io/trynew/hello-world-image:v1
This is my output:
The push refers to repository [gcr.io/trynew/hello-world-image]
e62774cdb1c2: Preparing
0f6265b750f3: Preparing
f82351274ce3: Preparing
31a16430afc8: Preparing
67298499a3ed: Preparing
62d5f39c8fe4: Waiting
9f8566ee5135: Waiting
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials.
To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Google Cloud has specific documentation on how to grant permissions for docker push; this is the first thing you should look at, I think: https://cloud.google.com/container-registry/docs/access-control
After checking that you have sufficient permissions you should proceed with authentication with something like:
gcloud auth configure-docker
See more here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
If you are running docker as root (i.e. with sudo docker), then make sure to configure the authentication as root. You can run for example:
sudo -s
gcloud auth login
gcloud auth configure-docker
...that will create (or update) a file under /root/.docker/config.json.
(Are there any security implications of gcloud auth login as root? Let me know in the comments.)
In order to be able to push images to the private registry you need two things: the right API access scopes, and to authenticate your VM with the registry.
For the API Access Scopes (https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform) we can read in the official documentation:
For GKE:
By default, new Google Kubernetes Engine clusters are created with
read-only permissions for Storage buckets. To set the read-write
storage scope when creating a Google Kubernetes Engine cluster, use
the --scopes option.
For GCE:
By default, a Compute Engine VM has the read-only access scope
configured for storage buckets. To push private Docker images, your
instance must have read-write storage access scope configured as
described in Access scopes.
So first, verify if your GKE cluster or GCE instance actually has the proper scopes set.
The next step is to authenticate to the registry:
a) If you are using a Linux based image, you need to use "gcloud auth configure-docker" (https://cloud.google.com/container-registry/docs/advanced-authentication).
b) For Container-Optimized OS (COS), the command is "docker-credential-gcr configure-docker" (https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_google_container_registry)
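If you want to inspect or set the scopes from the CLI before changing anything, a sketch (the instance, zone and cluster names here are placeholders):
# Show the service account and scopes attached to a GCE instance
gcloud compute instances describe my-vm --zone=us-central1-a --format="yaml(serviceAccounts)"
# Create a GKE cluster with read-write storage scope, as the docs describe
gcloud container clusters create my-cluster --scopes=storage-rw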
Windows / Powershell
I got this error on Windows when I was trying to run docker push from a normal PowerShell window after authenticating in the Google Cloud SDK shell that opened when I installed the SDK.
The solution was simple:
Start a new powershell window to run docker push after running the gcloud auth configure-docker command.
Make sure you've activated the registry too:
gcloud services enable containerregistry.googleapis.com
Also, Google has a tendency to jump to a default account (maybe your personal Gmail), which may or may not be the one you want (e.g. your business email). If you're opening any links in a browser, make sure you're in the correct Google account.
I'm not exactly sure what's going on yet because I'm brand new to Docker, but something got refreshed when starting a new PowerShell instance.
As noted in https://stackoverflow.com/a/59799035/26283371, there appears to be a bug in the Linux version of the Cloud SDK where authentication fails using the standard authentication method (gcloud auth configure-docker). Instead, create a JSON keyfile per this, and that tends to work.
I still can't get the gcloud auth configure-docker helper to work. What did work was authenticating with an access token, like so:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io. (Be sure to include https://, otherwise it won't work).
You can view options for print-access-token here.
First things first: make sure you have covered all the points listed in the following official documentation:
https://cloud.google.com/container-registry/docs/advanced-authentication
This error mostly occurs due to a Docker config update, which you can check with cat ~/.docker/config.json.
Now update the gcr credential configuration with the following command:
gcloud auth configure-docker
Just in case anyone else is banging their head against a wall: my PIA VPN caused this behavior.
"unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Turn my VPN off and it works fine. Turn it back on and it breaks again.
This is the only way that worked for me. I found it in a kubernetes/kompose GitHub issue.
Remove the credsStore key in ~/.docker/config.json
This will force Docker to write the auth into the JSON when you use docker login. You can't untick "Securely store Docker logins in macOS keychain" in Docker Desktop any more, and the current credsStore is no longer the macOS keychain, it's desktop.
Auth with gcloud (just to be explicit):
gcloud auth login
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
You should see this:
WARNING! Your password will be stored unencrypted in /Users/andrew/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Source: https://github.com/kubernetes/kompose/issues/1043#issuecomment-609019141
The fix is as follows: run gcloud auth login (the browser will open and allow you to authenticate), then run gcloud auth configure-docker and select Y, then redo the push. It should work like a charm.
I also had the same issue in a Linux environment. I just set Docker to run as a non-root user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), and it works.
In my case the DOCKER_CONFIG env variable was defined with an invalid value (not pointing to a Docker config JSON).
I had the same issue, but for me the problem was with different users on my Linux system. I had authenticated my personal Linux user with gcloud, but I was pushing as root. So I had to authenticate my root user with gcloud as well:
sudo gcloud init
This issue happens to me when I switch between service accounts pointing to different GCP projects. Even though the service account has permission to push, it says it does not have the permission. To resolve this, delete the config.json file in ~/.docker.
Once this is done, run the commands below and you should be able to push the image.
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, asia.gcr.io, etc.

Which roles should I add to my service account utilised by CircleCi?

I'm running tests and pushing my docker images from CircleCi to Google Container Registry. At least I'm trying to.
Which roles does my service account require to be able to pull and push images to GCR?
Even as an account with the role "Project Owner", I get this error:
gcloud --quiet container clusters get-credentials $PROJECT_ID
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
ResponseError: code=403,
message=Required "container.clusters.get" permission(s)
for "projects/$PROJECT_ID/locations/europe-west1/clusters/$CLUSTER".
According to this doc, you will need the storage.admin role to Push (Read & Write), and storage.objectViewer to Pull (Read Only) from Google Container Registry.
On the topic of not being able to get credentials as owner, you are likely using the service account of the machine instead of your owner account. Check which account you are using with the command:
gcloud auth list
You can change the service account the machine is using through the UI by first stopping the instance, then editing the service account. You can also use your Google credentials using the command:
gcloud auth login
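Alternatively, if you want to swap the VM's service account from the CLI instead of the UI, a sketch (the instance, zone and service-account email below are placeholders; the instance must be stopped first):
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
  --service-account=ci-builder@$PROJECT_ID.iam.gserviceaccount.com \
  --scopes=storage-rw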
Hope this helps
When you get a Required "___ANYTHING____" permission message:
go to Console -> IAM -> Roles -> Create new custom role [ROLE_NAME]
add container.clusters.get and/or whatever other permissions you need in order to get the whole thing going (I needed some rights for kubectl for example)
assign that role (Console -> IAM -> Add+) to your service account
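The same steps from the CLI might look roughly like this (the role ID and service-account email are placeholders):
gcloud iam roles create circleci_deployer --project=$PROJECT_ID \
  --permissions=container.clusters.get
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:circleci@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="projects/$PROJECT_ID/roles/circleci_deployer"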

Google Cloud Jenkins gcloud push access denied

I'm trying via Jenkins to push an image to the container repository. It was working at first, but now I get "access denied":
docker -- push gcr.io/xxxxxxx-yyyyy-138623/myApp:master.1
The push refers to a repository [gcr.io/xxxxxxx-yyyyy-138623/myApp]
bdc3ba7fdb96: Preparing
5632c278a6dc: Waiting
denied: Access denied.
The Jenkinsfile looks like:
sh("gcloud docker --authorize-only")
sh("docker -- push gcr.io/xxxxxxx-yyyyy-138623/hotelpro4u:master.1")
Remarks:
Jenkins is running in Google Cloud
If I try in Google Shell or from my computer, it's working
I followed this tutorial : https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes
I've been stuck for 12 hours... I need help.
That error means that the GKE node is not authorized to push to the GCS bucket that is backing your repository.
This could be because:
The cluster does not have the correct scopes to authenticate to GCS. Did you create the cluster w/ --scopes storage-rw?
The service account that the cluster is running as does not have permissions on the bucket. Check the IAM & Admin section on your project to make sure that the service account has the necessary role.
Building on @cj-cullen's answer above, you have two options:
Destroy the node pool and then, from the CLI, recreate it with the missing https://www.googleapis.com/auth/projecthosting,storage-rw scopes (see the sketch after this list). The GKE console does not let you change the default scopes when creating a node pool.
Stop each instance in your cluster. In the console, click the edit button for the instance. You should now be able to add the appropriate https://www.googleapis.com/auth/projecthosting,storage-rw scope.
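A sketch of the first option, assuming a cluster named my-cluster in zone us-central1-a whose node pool is called default-pool (all placeholders):
gcloud container node-pools delete default-pool --cluster=my-cluster --zone=us-central1-a
gcloud container node-pools create default-pool --cluster=my-cluster --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/projecthosting,storage-rw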
