After creating a new app using oc new-app location/nameofapp, many things are created: a deploymentConfig, an imagestream, a service, etc. I know you can run oc delete with a label selector. I would like to know how to delete all of these given the label.
When using oc new-app, it normally adds a label named app to each resource created, with its value being the name given to the application. That name is based on the name of the git repository, or could have been supplied using the --name option. Knowing that, to delete everything you can run:
oc delete all --selector app=appname
Before you delete anything, you should check what would match by running:
oc get all --selector app=appname
Note that if creating from a template, rather than a repository, how things are labelled can depend on what the template itself sets up, so the instructions above may not apply.
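Note also that the all alias does not cover every resource type. If the app also created things like persistent volume claims, secrets, or config maps, the same label selector can remove them; this is just a sketch, adjust the resource types to whatever was actually created:
oc delete pvc,secret,configmap --selector app=appname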
I have a private repository. I want to update my container image in Kubernetes automatically when the image is updated in my private repository. How can I achieve this?
Kubernetes does not natively have a feature for automatically redeploying pods when there is a new image. Ideally what you want is a tool that enables GitOps-style deployment, where a state change in git is synced to the Kubernetes cluster. Flux and ArgoCD are open-source tools that support GitOps.
Recently there was an announcement about combining these two projects as ArgoFlux.
You should assign some sort of unique identifier to each build. This could be based on a source-control tag (if you explicitly tag releases), a commit ID, a build number, a time stamp, or something else; but the important detail is that each build creates a unique image with a unique name.
Once you do that, your CI system needs to update the Deployment spec with the new image:. If you're using a tool like Kustomize or Helm, there are standard patterns to provide this; if you are using kubectl apply directly, your CI system will need to modify the deployment spec in some way before applying it.
This combination of things means that the Deployment's embedded pod spec will have changed in some substantial way (its image: has changed), which will cause the Kubernetes deployment controller to automatically do a rolling update for you. If this goes wrong, the ordinary Kubernetes rollback mechanisms will work fine (because the image with yesterday's tag is still in your repository). You do not need to manually set imagePullPolicy: or manually cause the deployment to restart; changing the image tag in the deployment is enough to cause a normal rollout to happen.
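A minimal sketch of such a CI step, assuming a Deployment and container both named myapp and a registry at registry.example.com (all of these names are placeholders, not from the original answer):
# Tag each build uniquely, here with the short commit ID.
IMAGE_TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:$IMAGE_TAG .
docker push registry.example.com/myapp:$IMAGE_TAG
# Changing the image: field is what triggers the rolling update.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:$IMAGE_TAG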
Have a look at the various image pull policies.
imagePullPolicy: Always might come closest to what you need. I don't know if there is a way in "vanilla" K8s to achieve an automatic image pull, but I know that Red Hat's OpenShift (or OKD, the free version) works with image streams, which do exactly what you ask for.
The imagePullPolicy and the tag of the image affect when the kubelet attempts to pull the specified image.
imagePullPolicy: IfNotPresent: the image is pulled only if it is not already present locally.
imagePullPolicy: Always: the image is pulled every time the pod is started.
imagePullPolicy is omitted and either the image tag is :latest or it is omitted: Always is applied.
imagePullPolicy is omitted and the image tag is present but not :latest: IfNotPresent is applied.
imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.
So to achieve this you have to set imagePullPolicy: Always and restart your pod, and it should pull a fresh copy of the latest image. I don't think there is any other way in K8s.
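A rough sketch of that restart, assuming a Deployment named myapp whose pod spec already sets imagePullPolicy: Always (the name is a placeholder):
# New pods created by the rollout will pull the image again.
kubectl rollout restart deployment/myapp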
See the Kubernetes documentation: Container Images
I just wrote a bash script to achieve this. My imagePullPolicy option is Always and I am running this script with crontab (you could also check continuously with an infinite loop). It checks the repository and, if any change has occurred, deletes the pod; because imagePullPolicy is set to Always, the recreated pod pulls the updated image.
#!/bin/bash
registry="repository_name"
username="user_name"
password="password"
## By this you can append all images from the repository to the array:
#images=($(echo $(curl -s https://"$username":"$password"@"$registry"/v2/_catalog | jq .repositories | jq .[])))
## Or you can set your image array manually:
images=( image_name_0 image_name_1 )
for i in "${images[@]}"
do
    # Compare the manifest layers recorded on the last run with the current ones.
    old_image=$(cat /imagerecords/"$i".txt)
    new_image=$(echo $(curl -s https://"$username":"$password"@"$registry"/v2/"$i"/manifests/latest | jq ."fsLayers" | jq .[]))
    if [ "$old_image" == "$new_image" ]; then
        echo "image: $i is already up-to-date"
    else
        echo "image: $i is updating"
        # Deleting the pod makes its controller recreate it; with imagePullPolicy: Always
        # the replacement pod pulls the updated image.
        kubectl delete pod pod_name
        echo "$new_image" > /imagerecords/"$i".txt
    fi
done
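If you drive the script from cron, an entry like the following would run the check every five minutes (the script path and log file are assumptions, not part of the original answer):
*/5 * * * * /usr/local/bin/check-images.sh >> /var/log/check-images.log 2>&1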
This functionality is provided by the open-source project argocd-image-updater:
https://github.com/argoproj-labs/argocd-image-updater
I have a docker image that receives a set of environment variables to customize its execution.
A simple example would be a web-server, that has stuff like client secret for OAuth2, a secret to sign cookies, etc.
The whole app is containerized on a docker image, that receives (runtime) environment variables.
I distribute that docker image on a private registry, and I would like to document that image, so that users can understand how they can customize the image.
Is it possible to ship annotations as part of the docker image so that, for example, running something like docker describe my_image would output markdown to stdout?
I could of course use a static page on the web for documentation, but the user would still need to know where that documentation could be found, and the whole distribution would be more complex this way (e.g. documentation changes with the image tag).
Any ideas?
There is no silver bullet here as far as I know. All the solutions below work, but they require the user to be informed of how to retrieve the documentation.
There is no standard way of doing it.
The Open Container Initiative has created an image spec annotation convention suggesting that:
A link to more information about the image should be provided in a label called org.opencontainers.image.documentation.
A description of the software packaged inside the container should be provided in a label called org.opencontainers.image.description.
According to OCI, one of the variations of option 1 below is correct.
Option 1: Providing a link in a label (Preferred by OCI)
Assuming the Dockerfile and related assets are version controlled in a git repository that is publicly accessible (for example on GitHub), that git repository could also contain a README.md file. If you have a pipeline hooked up to the repo that builds and publishes the Docker image to a registry automatically, you could set up the docker build command to add a label with a link to the documentation as follows:
# Get the current commit id
commit=$(git rev-parse HEAD)
# Build the docker image and attach a link to the README as a label
docker build -t myimagename:myversion \
    --label "org.opencontainers.image.documentation=https://github.com/<user>/<repo>/blob/$commit/README.md" .
This solution links to the documentation for that particular commit, versioned alongside your Dockerfile. It does, however, require the user to have internet access to be able to read the documentation.
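To read that link back from a pulled image, something like the following should work (a sketch, not part of the original answer):
docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.documentation" }}' myimagename:myversion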
Option 1b: Providing full documentation in a label (Preferred by OCI)
A variation of option 1 where the full documentation is serialized and put into the label (there are no length restrictions on labels). This way the documentation is bundled with the image itself.
As Jorge Leitao pointed out in the comments, the image annotation spec from OCI specifies the name of such a label as org.opencontainers.image.description.
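A sketch of what that build could look like, assuming README.md sits next to the Dockerfile:
docker build -t myimagename:myversion \
    --label "org.opencontainers.image.description=$(cat README.md)" .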
Option 2: Bundling documentation inside image
If you prefer to actually bundle the Readme.md file inside the image to make it independent of any external web page, consider the following:
Upon build, make sure to copy the Readme.md file to the docker image
Also create a simple shell script describe that cats the Readme.md
describe
#!/usr/bin/env sh
cat /docs/Readme.md
Dockerfile additions
...
COPY Readme.md /docs/Readme.md
COPY describe /opt/bin/describe
RUN chmod +x /opt/bin/describe
ENV PATH="/opt/bin:${PATH}"
...
A user that has your Docker image can now run the following command to have the markdown sent to stdout:
docker run myimage:version describe
This solution bundles the documentation for this particular version of the image inside the image itself, and it can be retrieved without any external dependencies.
I'm trying to delete gcloud environments. One did not successfully create (no associated Airflow or Bucket) and one did. When I attempt to delete, I get an error message (after a really long time) of RPC Skipped due to required preoperation not finished yet. The logs don't provide any valuable information, and I wasn't able to find anything wrong in the cluster. The only solution I have found so far is to delete the entire project, but I would prefer not to. Any suggestions would be greatly appreciated!
Follow the steps below to delete the environment's resources manually; example commands for the first two steps are sketched after the list:
Delete GKE cluster that corresponds to the environment
Delete the Google Storage bucket used by the environment
Delete the related deployment with:
gcloud deployment-manager deployments delete <DEPLOYMENT_NAME> --delete-policy=ABANDON
Then try again to delete the Composer environment with:
gcloud composer environments delete <ENVIRONMENT_NAME> --location <LOCATION>
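For the first two steps, the commands might look like the following sketch; the cluster name, zone, and bucket name are placeholders you would look up in the environment's details:
gcloud container clusters delete <GKE_CLUSTER_NAME> --zone <ZONE>
gsutil -m rm -r gs://<ENVIRONMENT_BUCKET_NAME>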
I would like to share what worked for me in case someone else runs into this problem, as I followed all the steps above and still could not delete the composer environment.
My 'gcloud composer environments list' command was returning '0', but I could still see my environment in the console view, and when I tried to delete it I would get the same error message as honlicious. Additionally, I ran 'gcloud projects add-iam-policy-binding' to try to give my Compute Engine service account the composer.serviceAgent role, but this still did not resolve my issue. What eventually worked was disabling the Cloud Composer API and then re-enabling it. This removed my old environment that I had been unable to delete.
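The API toggle can be done from the console or with gcloud. A sketch, assuming the service name composer.googleapis.com (double-check the --force flag before disabling anything in a shared project, since it also disables dependent services):
gcloud services disable composer.googleapis.com --project <PROJECT_ID> --force
gcloud services enable composer.googleapis.com --project <PROJECT_ID>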
I got this issue when I tried to create and delete Cloud Composer with Terraform.
I had created a Service Account separate from the Composer environment, and that led to the service account being deleted first during a terraform destroy operation.
So the correct order is:
Delete Composer environment
Delete Composer’s Service Account
I am trying to run the artifact passing example on Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error is appearing in the first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML gives the following line highlighted.
Nothing appears on clicking LOGS.
I need to understand the correct sequence of steps in running the YAML file so that this error does not appear and artifacts are passed. I could not find many resources on this issue other than this page, where the issue is discussed in the Argo repository.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace doesn't seem to be given any roles by default.
Try granting a role to the “default” service account in a namespace:
kubectl create rolebinding argo-default-binding \
    --clusterrole=cluster-admin \
    --serviceaccount=platform:default \
    --namespace=platform
Since the default service account now gets all access via the 'cluster-admin' role, the example should work now.
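As a quick sanity check (these commands are my addition, not part of the original answer), you can confirm the binding exists and that the service account is now allowed to create pods in the namespace:
kubectl get rolebinding argo-default-binding -n platform
kubectl auth can-i create pods --as=system:serviceaccount:platform:default -n platform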
I am pushing a docker image to a private docker registry, and am having trouble marking it 'public' via a script.
For this discussion, I'm guessing the content of the Dockerfile doesn't matter... so let's assume I have the following in my current working directory:
Dockerfile
FROM ubuntu
RUN touch /tmp/foo
I build like this:
docker build -t my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04 .
Then, I am doing my push like this:
docker push my.private.docker.registry.com/foo/jdk1.8.on.ubuntu14.04
Next, I navigate to the web site that allows me to manage my private registry (at the URL http://my.private.docker.registry.com). I look at my image, and I see it has a padlock icon next to it, indicating that it is private. I can manually unlock it from the web UI, but I'd like to know if there are any options to docker's push command that will allow me to mark the image as 'public' without manual intervention.
One thing I tried was setting global settings for my namespace such that all new repos would be readable/writable by all users. Specifically: I went into the Docker web UI for my private registry and, for the namespace 'foo', I tried adding default permissions (for any newly created repos) such that all users would have 'write' access to any new repo pushed under the 'foo' namespace.
However, even after doing the above, when I pushed a new image to my private registry under namespace foo, that image was still marked with the padlock. I looked up the command line options for 'docker push', and I did not see any option that looked like it would affect the visibility of the image at the time of push.
Thanks in advance for your help!
-chris
So, according to the folks who manage the Docker registry at the company I'm at now: there is no command-line way to grant users other than the repository creator write access to a repo. You have to go to the web UI and manually mark the repo 'public', and you have to add permissions for each user (although it is possible to have groups of users and then add a whole group; this is still clunky because new employees have to be manually added to the group).
I find it hard to believe that there's no command-line way, but this is what our experts say. If there are other experts out there who have a better idea, please chime in! Otherwise I will do it manually through the web UI (grrrrRRrr).