I am trying to run the artifact passing example on Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error is appearing in the first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML shows the following line highlighted.
Nothing appears when clicking LOGS.
I need to understand the correct sequence of steps for running the YAML file so that this error does not appear and artifacts are passed. I could not find many resources on this issue other than this page on the Argo repository where it is discussed.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace doesn't seem to be given any roles by default.
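If you prefer not to grant extra permissions to the default service account (as described below), you can instead run the workflow under a dedicated service account at submit time. A minimal sketch, assuming a service account named my-argo-sa already exists in the namespace and that the example file is artifact-passing.yaml (both names are assumptions):
argo submit --serviceaccount my-argo-sa artifact-passing.yaml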
Try granting a role to the “default” service account in a namespace:
kubectl create rolebinding argo-default-binding \
--clusterrole=cluster-admin \
--serviceaccount=platform:default \
--namespace=platform
Since the default service account now has full access via the 'cluster-admin' role, the example should work. Keep in mind that cluster-admin is far more access than Argo needs; a narrower alternative is sketched below.
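A minimal sketch of that narrower alternative, assuming the workflow executor only needs access to pods and pod logs (the exact verbs and resources depend on your Argo version, so check the Argo documentation):
kubectl create role argo-executor \
--verb=get,list,watch,patch \
--resource=pods,pods/log \
--namespace=platform
kubectl create rolebinding argo-executor-binding \
--role=argo-executor \
--serviceaccount=platform:default \
--namespace=platform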
Below is a screenshot of the reference, but I am not able to work out exactly what is needed to get the temporary password from the mentioned path.
These are the guidelines given:
Next steps
Prerequisites
You'll need the following tools in your environment:
gcloud: if gcloud has not been configured yet, then configure gcloud by following the gcloud Quickstart.
kubectl: set kubectl to a specific cluster by following the steps at container get-credentials (see the example after this list).
sed
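For example, pointing kubectl at the cluster looks like this (the cluster name and zone are placeholders for your own values):
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>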
Accessing your Jenkins instance
NOTE: For HTTPS, you must accept a temporary TLS certificate.
Read a temporary password:
kubectl exec \
$(kubectl -ndefault get pod -oname | sed -n /\\/jenkins-job-jenkins/s.pods\\?/..p) \
cat /var/jenkins_home/secrets/initialAdminPassword
Identify the HTTPS endpoint:
echo https://$(kubectl -ndefault get ingress -l "app.kubernetes.io/name=jenkins-job" -ojsonpath="{.items[0].status.loadBalancer.ingress[0].ip}")/
Navigate to the endpoint.
Configuring your Jenkins instance
Follow the on-screen instructions to fully configure your Jenkins instance:
Install plugins
Create a first admin user
Set your Jenkins URL (you can choose to start with the default URL and change it later)
Start using your fresh Jenkins installation!
For further information, refer to the Jenkins website or this project's GitHub page.
I put together a step-by-step instruction as follows:
Under Kubernetes Engine, go to the Workloads tab, then on the right side click on your Jenkins StatefulSet.
You will be routed to the StatefulSet details page.
Under Managed pods, click on your pod name.
On the Pod details page you can find KUBECTL at the top right. Click on KUBECTL > Exec > jenkins-master.
A Cloud Shell terminal should open, and two rows of commands will be pasted into it.
The very end of the command should be jenkins-master -- ls.
Replace ls with cat /var/jenkins_home/secrets/initialAdminPassword, then press Enter.
The output will be your administrator password, which you can copy and paste into the "Unlock Jenkins" page!
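An equivalent one-liner from Cloud Shell, assuming the pod is named jenkins-job-0 in the default namespace (check the actual pod name under Managed pods):
kubectl -n default exec jenkins-job-0 -c jenkins-master -- cat /var/jenkins_home/secrets/initialAdminPassword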
Good luck!
OpenShift/OKD version: 3.11
I'm using the jenkins-ephemeral app from the OpenShift catalog and a BuildConfig to create a pipeline. Reference: https://docs.okd.io/3.11/dev_guide/dev_tutorials/openshift_pipeline.html
When I start the pipeline, one of the Jenkins stages needs to create a persistent volume, and at that point I'm getting the following error:
Error from server (Forbidden): persistentvolumes is forbidden: User "system:serviceaccount:pipelineproject:jenkins" cannot create persistentvolumes at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "create" not found
I have tried granting a create cluster role to the jenkins service account with the following command, but I'm still getting the same error.
oc adm policy add-cluster-role-to-user create system:serviceaccount:pipelineproject:jenkins
Creating a PersistentVolume is typically something you should not be doing manually. You should ideally rely on PersistentVolumeClaims. PersistentVolumeClaims are namespaced resources that your service account should be able to create with the edit Role.
$ oc project pipelineproject
$ oc policy add-role-to-user edit -z jenkins
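Once the role is in place, a minimal sketch of creating a PVC from the pipeline (the claim name and size here are hypothetical placeholders):
$ oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF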
However, if it's required that you interact with PersistentVolume objects directly, there is a storage-admin ClusterRole that should be able to give your ServiceAccount the necessary permissions.
$ oc project pipelineproject
$ oc adm policy add-cluster-role-to-user storage-admin -z jenkins
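Either way, you can check whether the service account has the expected permissions before re-running the pipeline:
$ oc auth can-i create persistentvolumeclaims --as=system:serviceaccount:pipelineproject:jenkins -n pipelineproject
$ oc auth can-i create persistentvolumes --as=system:serviceaccount:pipelineproject:jenkins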
I'm trying to delete Cloud Composer environments. One did not create successfully (no associated Airflow or bucket) and one did. When I attempt to delete, I get an error message (after a really long time) of RPC Skipped due to required preoperation not finished yet. The logs don't provide any valuable information, and I wasn't able to find anything wrong in the cluster. The only solution I have found so far is to delete the entire project, but I would prefer not to. Any suggestions would be greatly appreciated!
Follow the steps below to delete the environment's resources manually:
Delete the GKE cluster that corresponds to the environment
Delete the Google Storage bucket used by the environment (example commands for these two steps are sketched below)
Delete the related deployment with:
gcloud deployment-manager deployments delete <DEPLOYMENT_NAME> --delete-policy=ABANDON
Then try again to delete the Composer environment with:
gcloud composer environments delete <ENVIRONMENT_NAME> --location <LOCATION>
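The first two steps can be done with commands along these lines (the cluster, zone, and bucket names are placeholders, which you can read from the environment's details page):
gcloud container clusters delete <CLUSTER_NAME> --zone <ZONE>
gsutil -m rm -r gs://<BUCKET_NAME>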
I would like to share what worked for me in case someone else runs into this problem, as I followed all the steps above and still could not delete the Composer environment.
My 'gcloud composer environments list' command was returning '0', but I could see my environment was still in the console view and when I tried to delete it, I would get the same error message as honlicious. Additionally, I ran 'gcloud projects add-iam-policy-binding' to try to give my Compute Engine ServiceAccount the composer.serviceAgent role, but this still did not resolve my issue. What eventually worked was disabling the Cloud Composer API and then re-enabling it. This removed my old environment I was unable to previously delete.
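The same toggle can be done from the command line; a sketch, noting that --force also disables any services that depend on the API, so use it with care:
gcloud services disable composer.googleapis.com --force
gcloud services enable composer.googleapis.com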
I got this issue when I tried to create and delete Cloud Composer by Terraform.
I created the Service Account separately from the Composer environment, and this led to the account being deleted first during a terraform destroy operation.
So the correct order is:
Delete Composer environment
Delete Composer’s Service Account
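One way to enforce that order manually is terraform destroy with -target; a sketch assuming hypothetical resource names example and composer in your configuration:
terraform destroy -target=google_composer_environment.example
terraform destroy -target=google_service_account.composer
Alternatively, making the environment resource depend on the service account (for example via depends_on) lets a plain terraform destroy work out the right order by itself.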
I previously had Serverless installed on a server, and then when I tried to edit the function and package it back up by editing the zip file, I broke it, so I have to start all over. So to begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I sudo npm run deploy, I get the ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to connect to a role and not an IAM user. So I check the role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region of my Account A in AWS, not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets of the account with that key/ID/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it has just one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file so now both files are the same. Works great.
I then SSH into the server where I've installed Serverless, run npm run deploy, and it gives me the same message as above. I think maybe somehow it is not using the correct account for whatever reason, so I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there is already a profile in the aws creds file, so I then add --o to the end to overwrite. I run sudo npm run deploy and still get the same error.
I then run this command to manually set a profile in the creds for serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of the IAM user I've been trying to deploy with. I run this, it tells me there is already an existing profile in the aws creds file, so I run it with --o and it tells me the aws file is now updated. In bash I open the file in Vim and I only see the single "[default]" settings, as if nothing has changed. I run sudo npm run deploy and it gives me the same error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same Error.
I even removed the AWS CLI and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config, it tells me there is already a profile set up in my aws file, prompting me to use the overwrite command - how is this possible when the file is literally not on my computer?
So I then think that serverless itself has a cache or something, calling the wrong file or whatever for creds, so I uninstall serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps and more all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file, so I'm not sure if that is causing any problems. But then again I have no deep knowledge of this subject, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile:XXX set in the serverless.yml file, because I've read that if you do not, it just defaults to the [default] profile you have set in the aws creds file on your computer. Just to check, I go into the serverless.yml file and set profile: default, and the error I now get when I run npm run deploy is
Profile default does not exist
How is that possible when I have the "default" profile set in my creds file? Then I remember that previously I ran the serverless config credentials command and added the profile name serverless-agent to it (yet it didn't save in the aws creds file, as I mentioned above), so I add that profile name to the serverless.yml file just to see if this works, and I get the same error of "Profile default does not exist".
So back to the error message. The role belongs to an account not even related to the IAM user in my aws creds. Without knowing a lot about this, it's as if the config in serverless via SSH isn't correct or something. Is it using old creds I had set up in Apex.run? Why is the aws creds file not updated with the profile when I manually set it with the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I correctly deployed and my Lambda and API were set up for me on AWS. Boy, do I miss those times, and I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role, then you have to use that IAM role through an assume-role call (for example, via PowerShell or the AWS CLI).
I was also facing the same issue earlier, when we moved from a user to a role.
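A rough sketch of assuming the role from the error message with the AWS CLI (the account ID and session name are placeholders), then exporting the temporary credentials it returns before deploying:
# prints temporary credentials in the Credentials block of the JSON output
aws sts assume-role \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/EC2CodeDeploy \
  --role-session-name serverless-deploy
# export the values returned under Credentials before running the deploy
export AWS_ACCESS_KEY_ID=<AccessKeyId>
export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
export AWS_SESSION_TOKEN=<SessionToken>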
After creating a new app using oc new-app location/nameofapp, many things are created: a DeploymentConfig, an ImageStream, a Service, etc. I know you can run oc delete with a label selector. I would like to know how to delete all of these given the label.
When using oc new-app, it will normally add a label named app to each resource created, with its value being the name given to the application. That name would be based on the name of the git repository, or could have been supplied using the --name option. Knowing that, to delete everything you can then run:
oc delete all --selector app=appname
Before you delete anything, you should check what would match by running:
oc get all --selector app=appname
Note that if creating from a template, rather than a repository, how things are labelled can depend on what the template itself sets up, so the instructions above may not apply.
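One caveat worth knowing: all does not cover every resource type. Secrets, config maps, and persistent volume claims created alongside the app have to be listed explicitly, along these lines:
oc delete all,secret,configmap,pvc --selector app=appname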