I want to create projects on a particular node in OpenShift using the REST API.
I call the OpenShift API /oapi/v1/projectrequests to create projects.
And in the body I am sending:
body:
{
  apiVersion: 'v1',
  kind: 'ProjectRequest',
  metadata: {
    name: projectName,
    annotations: {
      "openshift.io/node-selector": "servicelevel=botifyproject",
    }
  }
}
This API runs successfully and the project gets created, but when I run the command
oc describe project <project-name> it doesn't show me anything that says it is inside the botifyproject node selector.
Is this the wrong way of doing this?
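For reference, when a project does carry a node selector, the annotation lives on the Namespace object itself and shows up under "Node Selector" in oc describe project. A sketch of what that would look like (the project name here is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: my-project   # placeholder for projectName
  annotations:
    openshift.io/node-selector: "servicelevel=botifyproject"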
I have been using ArgoCD for a couple of weeks now and I don't really understand how to define spec.source.path in the application.
I have set the following path:
...
spec:
  source:
    repoURL: https://github.com/theautomation/home-assistant.git
    targetRevision: main
    path: deploy/k8s
...
but Argo CD still syncs when a commit is made outside this path in the repo. Argo CD should ONLY watch this path for changes, right? Or does it not work that way?
Full application yaml:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: home-assistant
  namespace: devops
  annotations:
    notifications.argoproj.io/subscribe.slack: cicd
    argocd.argoproj.io/manifest-generate-paths: /deploy
spec:
  project: default
  source:
    repoURL: https://github.com/theautomation/home-assistant.git
    targetRevision: main
    path: deploy/k8s
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: home-automation
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
      - Validate=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    automated:
      selfHeal: true
      prune: true
    retry:
      limit: 2
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
By default, Argo CD will sync when the commit changes regardless of what files were modified. This can be helpful when something in an App's directory references (via symlink or some other mechanism) something outside its directory.
If you are using webhooks to trigger the syncs, and you know an App isn't influenced by outside files, you can use an annotation to specify which directory (or directories) to watch for changes.
If you are not using webhooks and are relying on the usual reconciliation loop, the sync will still happen regardless of the annotation.
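As a sketch of that annotation (reusing the Application from the question; as I understand the Argo CD docs, a value starting with "/" is relative to the repository root, a relative value such as "." is relative to spec.source.path, and multiple paths can be separated with ";"):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: home-assistant
  namespace: devops
  annotations:
    # "." means: only webhook events touching files under spec.source.path (deploy/k8s) trigger a refresh
    argocd.argoproj.io/manifest-generate-paths: .
spec:
  source:
    repoURL: https://github.com/theautomation/home-assistant.git
    targetRevision: main
    path: deploy/k8s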
Argo CD will ONLY look for changes that have been applied to your targetRevision (i.e. main) and changes within your defined path. Assuming you have deploy/k8s with a bunch of manifests in that space, it will automatically look for changes to files within that path. Also, because you have syncPolicy.automated configured, it will automatically apply changes when they are found (normally about 3-5 minutes because of git polling, unless you have webhooks configured).
I'd like to have a debug endpoint available only for stage. One way to do it is to do the authorization in server code, but it would be even better not to create the API Gateway endpoint altogether. Is it possible to achieve this using the Serverless framework?
In Serverless Framework, you can use a switch based on your environment to toggle configuration options, within the context of your YAML file.
For example, if you wanted to setup a normal HTTP endpoint via API Gateway you'd add something like the following to your functions section:
functions:
  ping:
    handler: "src/functions/ping/handler.main"
    events:
      - http:
          method: "any"
          path: "/ping"
If you want this http event to be available for only a certain stage, you can define a block elsewhere and reference it based on the current stage name. It's commonplace to use the custom block in your serverless.yml file for this. To isolate between the stages prod and dev, with dev being your staging area you'd do this:
custom:
  pingEvents:
    # only the dev stage gets the HTTP event; other stages get an empty events list
    prod: []
    dev:
      - http:
          method: "any"
          path: "/ping"

functions:
  ping:
    handler: "src/functions/ping/handler.main"
    events: "${self:custom.pingEvents.${opt:stage, self:provider.stage}}"
The above would only expose the endpoint via HTTP when you run sls deploy --stage dev to release your application; any other stage (e.g. prod) will have it disabled. Do note that in this example, if you want to support stages outside of dev and prod you'll need to add a new block under custom.pingEvents, as sketched below.
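For instance, a hypothetical third stage named "staging" (not part of the original example) that should also have the endpoint disabled would just get another empty key:

custom:
  pingEvents:
    prod: []
    staging: []   # hypothetical extra stage, endpoint disabled
    dev:
      - http:
          method: "any"
          path: "/ping"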
I am building a Docker image for my Vue.js app, and I want some environment variables to be read by my Vue.js application.
For example:
I want to change my 'baseUrl' after creating and building the image, in the runtime environment.
What I've tried & searched:
It says you cannot read values from outside the application after the Vue.js build has been created.
Check the docs for the React example. You can inject any variable into the HTML at runtime (from the backend, server side). Here is the simple schema used in React.
html:
<script>
window.SERVER_DATA = __SERVER_DATA__;
</script>
client:
const constants = {
  debug: false,
  port: 8443,
  host: window.SERVER_DATA,
  graphql: {
    path: '/graphql'
  },
}
On the server side, use functions to pipe the HTML output to the client and replace your variable along the way.
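As a rough sketch of how the runtime value could reach the container that serves that HTML (the service and variable names here are made up, not from the question), an environment variable can be set at container start and then substituted into __SERVER_DATA__ by the server-side piping step:

# docker-compose.yml (hypothetical names): BASE_URL is read at container start,
# so it can change without rebuilding the Vue image.
services:
  vue-app:
    image: my-vue-app:latest
    ports:
      - "8080:80"
    environment:
      BASE_URL: "https://api.example.com"   # value the server injects as __SERVER_DATA__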
I'm running Jenkins 2.204.5
With plugins:
- job-dsl v1.77
- credentials 1.7
I'm trying to create a secret of type 'Secret Text' or 'StringCredentialsImpl' bound to a folder, using the JobDSL plugin code.
https://github.com/jenkinsci/plain-credentials-plugin/blob/master/src/main/java/org/jenkinsci/plugins/plaincredentials/impl/StringCredentialsImpl.java
But despite the fact that it is mentioned as supported in
https://github.com/jenkinsci/job-dsl-plugin/blob/master/job-dsl-core/src/main/groovy/javaposse/jobdsl/dsl/helpers/parameter/CredentialsParameterContext.groovy#L23
I can't see it in my dynamic viewer at JENKINS_URL/plugin/job-dsl/api-viewer/index.html
All I see is:
credentials {
    basicSSHUserPrivateKey {}
    certificateCredentialsImpl {}
    fileSystemServiceAccountCredential {}
    // OpenShift do use a dedicated authorization layer on top of Kubernetes and does not allow to access Kubernetes API using plain username/password credentials.
    openShiftBearerTokenCredentialImpl {}
    usernamePasswordCredentialsImpl {}
}
Apparently the plain-credentials-plugin is after all not compatible with the job-dsl-plugin.
Jenkins bug:
https://issues.jenkins-ci.org/browse/JENKINS-59971
I am trying to implement a "helm.sh/hook": crd-install hook on a CustomResourceDefinition object, of course, like this:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  annotations:
    "helm.sh/hook": crd-install
    "helm.sh/hook-delete-policy": "before-hook-creation"
  name: certificates.certmanager.k8s.io
and when I update or deploy a new instance of my Helm chart with a different name in the same cluster, I get this error:
Error: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists
I suppose that the "before-hook-creation" delete policy specifies that Tiller should delete the previous hook before the new hook is launched.
But I can't get it to delete and re-create my different CustomResourceDefinition objects.
My chart is part of a CI/CD process, and the idea is that when I execute a helm upgrade operation, or even a new helm install with a new chart name in the same cluster, I shouldn't have to delete the CRD objects manually for it to work.
How can I achieve this?
I have used "helm.sh/resource-policy": delete and also the "helm.sh/hook-delete-policy": "hook-succeeded" and "helm.sh/hook-delete-policy": "hook-failed" policies all together, but I am confused at this point.
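For reference, a sketch of how those annotations might be combined on the CRD (Helm expects multiple hook delete policies as a single comma-separated annotation value; this only illustrates the combination described above, it is not a confirmed fix for the crd-install case):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  annotations:
    "helm.sh/hook": crd-install
    # several delete policies go into one comma-separated value
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded,hook-failed"
    # resource-policy value as mentioned in the question
    "helm.sh/resource-policy": delete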
I am using the following versions of helm
⟩ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
As an anecdote, I have the same YAML configuration on a Secret resource; I mean I am creating that Secret resource as a pre-install hook, also using "helm.sh/hook-delete-policy": "before-hook-creation", and when I want to upgrade or install a new instance of my chart, it works.
I have done the same, installing the CRDs as pre-install hooks, and it works.
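A minimal sketch of that pre-install variant (the same CRD as above with only the hook annotation changed; this just illustrates what is described above as working):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  annotations:
    # pre-install instead of crd-install; delete the old hook resource before re-creating it
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": "before-hook-creation"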
Why does it work as a pre-install hook and not as a crd-install hook?
Can I deploy CRD Kubernetes objects as pre-install hooks? I mean, is it a bad practice, or is it recommendable?
I will be highly grateful for any support.
UPDATE
Even having the CRD object resources as a pre-install hook, if I delete the Helm chart (helm delete --purge ...) and create it again, I get the same error:
Error: object is being deleted: customresourcedefinitions.apiextensions.k8s.io "certificates.certmanager.k8s.io" already exists
The previous hook resources are not being deleted, so the installation workflow cannot continue without generating errors.
I mean, the process only works when I run a helm install for the first time and its subsequent upgrade operations.