Cannot change timezone for Kubernetes pod - Docker

I am trying to modify the config file of a pod to use the local time, but it shows as invalid when saving. Do you know what's wrong?
In the volumeMounts section I added the lines below:
- mountPath: /etc/localtime
  name: tz-config
In the volumes section I added the lines below:
- name: tz-config
  hostPath:
    path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
Here is my yaml file:
apiVersion: v1
kind: Pod
metadata:
.....
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-jgznd
      readOnly: true
    - mountPath: /etc/localtime
      name: tz-config
  dnsPolicy: ClusterFirst
.....
  volumes:
  - name: default-token-jgznd
    secret:
      defaultMode: 420
      secretName: default-token-jgznd
  - name: tz-config
    hostPath:
      path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
UPDATE: Below are the error details:
# pods "hello-75fdf45c64-w7xm8" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
# core.PodSpec{
# Volumes: []core.Volume{
# {Name: "default-token-wcf8m", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: "default-token-wcf8m", DefaultMode: &420}}},
# - {
# - Name: "tz-config",
# - VolumeSource: core.VolumeSource{
# - HostPath: &core.HostPathVolumeSource{Path: "/usr/share/zoneinfo/Asia/Ho_Chi_Minh", Type: &""},
# - },
# - },
# },
# InitContainers: nil,
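In other words, almost every field of a running Pod's spec is immutable, so adding a volume with kubectl edit on the Pod itself is rejected. Because the pod name indicates it is owned by a Deployment, the change has to be made in the Deployment's pod template instead; roughly (a sketch, using the Deployment name hello from the manifest further down):

kubectl edit deployment hello
# add the tz-config volume and volumeMount under spec.template.spec;
# the Deployment then replaces the pod with one using the new spec
kubectl rollout status deployment hello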

I resolved my problem by adding the volume mount to the Deployment YAML instead, as shown below. Many thanks @Shawlz for the help:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-02-14T15:59:50Z"
  generation: 1
  labels:
    run: hello
  name: hello
  namespace: default
  resourceVersion: "523908"
  selfLink: /apis/apps/v1/namespaces/default/deployments/hello
  uid: 43196302-0176-4ce2-9d10-c8fefcc6c316
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello
    spec:
      containers:
      - image: hello-microservice
        imagePullPolicy: Never
        name: hello
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
      volumes:
      - name: tz-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Ho_Chi_Minh
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
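After applying the updated Deployment, the new timezone can be verified inside the rolled-out pod, for example (a sketch, assuming the manifest above is saved as deployment.yaml, a hypothetical filename):

kubectl apply -f deployment.yaml
kubectl get pods -l run=hello
kubectl exec <hello-pod-name> -- date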

Related

TeamCity/EKS cluster

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-teamcity-server
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: example-teamcity-server
  template:
    metadata:
      labels:
        app: example-teamcity-server
        teamcity: server
    spec:
      containers:
      - name: example-teamcity-server
        image: jetbrains/teamcity-server
        imagePullPolicy: Always
        ports:
        - containerPort: 8111
        volumeMounts:
        - name: teamcity-server-datadir-volume
          mountPath: "/data/teamcity_server/datadir"
        - name: teamcity-server-logs-volume
          mountPath: "/opt/teamcity/logs"
      volumes:
      - name: teamcity-server-datadir-volume
        persistentVolumeClaim:
          claimName: teamcity-server-premium-datadir-disk
      - name: teamcity-server-logs-volume
        persistentVolumeClaim:
          claimName: teamcity-server-premium-logs-disk
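The Deployment above only references its PersistentVolumeClaims by name; a minimal sketch of one of them (the size and access mode are assumptions, not taken from the original) could look like:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: teamcity-server-premium-datadir-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi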

Kubernetes hostPath type check failed: is not a file

hostPath type check failed.
I deployed my pod, but something went wrong. I deployed a DaemonSet in my Kubernetes cluster; on two nodes it is correct and working, but one pod is pending. When I describe that pod, the error message says the hostPath type check failed ("is not a file").
Below is my logstash.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitor-logtash
spec:
  # replicas: 1
  # minReadySeconds: 120
  # strategy:
  #   type: RollingUpdate
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  selector:
    matchLabels:
      app: monitor-logtash
  template:
    metadata:
      labels:
        app: monitor-logtash
        version: v1
    spec:
      imagePullSecrets:
      - name: dockerlogin
      containers:
      - name: monitor-logtash
        image: xxx.xxx.xxx/xxx/logstash:7.11.2
        imagePullPolicy: Always
        volumeMounts:
        - name: log
          mountPath: /data/log/
        - name: logstash-conf
          mountPath: /usr/share/logstash/pipeline/logstash.conf
        - name: logstash-yml
          mountPath: /usr/share/logstash/config/logstash.yml
        - name: log4j-pattern
          mountPath: /data/config/patterns/log4j-pattern.conf
        ports:
        - containerPort: 9600
        - containerPort: 5044
      volumes:
      - name: log
        hostPath:
          path: /data/log/
          type: Directory
      - name: logstash-conf
        hostPath:
          path: /data/www/logstash/logstash.conf
          type: File
      - name: logstash-yml
        hostPath:
          path: /data/www/logstash/logstash.yml
          type: File
      - name: log4j-pattern
        hostPath:
          path: /data/www/logstash/log4j-pattern.conf
          type: File
Here is my config, and these files are supposed to exist on each server.
Resolved
The answer is that I had created the wrong file.
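A quick way to confirm this kind of mistake is to check, on the affected node, that each hostPath declared with type: File is actually a regular file, for example (paths taken from the manifest above):

ls -l /data/www/logstash/logstash.conf /data/www/logstash/logstash.yml /data/www/logstash/log4j-pattern.conf
# "type: File" fails if the path is missing or is a directory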
Some files are missing from your host.
I advise you to use a ConfigMap and a Secret to store your configuration, so you can update it without having to go onto the nodes.
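A sketch of that ConfigMap approach, assuming the three Logstash files are available locally (the ConfigMap name logstash-config is hypothetical):

kubectl create configmap logstash-config \
  --from-file=logstash.conf \
  --from-file=logstash.yml \
  --from-file=log4j-pattern.conf

In the DaemonSet, each hostPath file volume can then be replaced by the ConfigMap and mounted as a single file with subPath, for example:

volumes:
- name: logstash-conf
  configMap:
    name: logstash-config
volumeMounts:
- name: logstash-conf
  mountPath: /usr/share/logstash/pipeline/logstash.conf
  subPath: logstash.conf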

Kubernetes Permission denied in container

My company bought software that we're trying to deploy on IBM Cloud, using Kubernetes and a given private Docker repository. Once deployed, there is always a Kubernetes error: "Back-off restarting failed container". So I read the logs to understand why the container keeps restarting, and here is the error:
Caused by: java.io.FileNotFoundException: /var/yseop-log/yseop-manager.log (Permission denied)
So I deduced that I just had to change permissions in the Kubernetes file. Since I'm using a Deployment, I tried the following initContainer:
initContainers:
- name: permission-fix
  image: busybox
  command: ['sh', '-c']
  args: ['chmod -R 777 /var']
  volumeMounts:
  - mountPath: /var/yseop-engine
    name: yseop-data
  - mountPath: /var/yseop-data/yseop-manager
    name: yseop-data
  - mountPath: /var/yseop-log
    name: yseop-data
This didn't work, because I'm not allowed to execute chmod on read-only folders as a non-root user.
So I tried remounting those volumes, but that also failed, because I'm not a root user.
I then found out about runAsUser and runAsGroup. To find out which user and group I had to put in my security context, I read the Dockerfile, and here are the user and group:
USER 1001:0
So I thought I could just write this in my deployment file:
securityContext:
  runAsUser: 1001
  runAsGroup: 0
Obviously, that didn't work either, because I'm not allowed to run as group 0.
So I still don't know what to do in order to properly deploy this image. The image works when doing a docker pull and exec on my computer, but it's not working on Kubernetes.
Here is my complete volume file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    ibm.io/auto-create-bucket: "true"
    ibm.io/auto-delete-bucket: "false"
    ibm.io/bucket: ""
    ibm.io/secret-name: "cos-write-access"
    ibm.io/endpoint: https://s3.eu-de.cloud-object-storage.appdomain.cloud
  name: yseop-pvc
  namespace: ns
  labels:
    app: yseop-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc
  volumeMode: Filesystem
And here is my full deployment file :
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        runAsUser: 1001
        runAsGroup: 0
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chmod -R 777 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
Thanks for helping.
Can you please try including a supplementary group ID in the security context, like:
securityContext:
  runAsUser: 1001
  fsGroup: 2000
By default, runAsGroup is 0, which is root. The link below might give more insight about this:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
Working YAML content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yseop-manager
  namespace: ns
spec:
  selector:
    matchLabels:
      app: yseop-manager
  template:
    metadata:
      labels:
        app: yseop-manager
    spec:
      securityContext:
        fsGroup: 2000
      initContainers:
      - name: permission-fix
        image: busybox
        command: ['sh', '-c']
        args: ['chown -R root:2000 /var']
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      containers:
      - name: yseop-manager
        image: IMAGE
        imagePullPolicy: IfNotPresent
        securityContext:
          runAsUser: 1001
          runAsGroup: 2000
        env:
        - name: SECURITY_USERS_DEFAULT_ENABLED
          value: "true"
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /var/yseop-engine
          name: yseop-data
        - mountPath: /var/yseop-data/yseop-manager
          name: yseop-data
        - mountPath: /var/yseop-log
          name: yseop-data
      imagePullSecrets:
      - name: regcred
      volumes:
      - name: yseop-data
        persistentVolumeClaim:
          claimName: yseop-pvc
My company had not told me that we have restrictive Pod Security Policies in place. Because of that, the volumes are read-only and there was no way I could have written anything to them.
The solution is as follows:
volumes:
- name: yseop-data
  emptyDir: {}
Then I have to specify a path in volumeMounts (which was already done) and create a PVC so my data will be persistent.
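Put together, the relevant fragment of the Deployment looks roughly like this (only the yseop-log mount is shown; the other mounts from the manifest above follow the same pattern):

containers:
- name: yseop-manager
  volumeMounts:
  - mountPath: /var/yseop-log
    name: yseop-data
volumes:
- name: yseop-data
  emptyDir: {}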

Docker in Docker configuration

I have Jenkins running in K8s and now I am trying to run docker build as one of the steps in a Jenkins build. Since Jenkins is running inside Docker, I came to the solution of using Docker in Docker from this post: https://medium.com/hootsuite-engineering/building-docker-images-inside-kubernetes-42c6af855f25
However, after I modified the deployment YAML file, it still does not work.
There are 2 containers running: Jenkins (the Jenkins image) and dind (the Docker-in-Docker image). I can run docker commands inside the dind container, but I cannot run docker commands in the Jenkins container or pod.
Here is the yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "9"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.0.0.111"],"port":80,"protocol":"HTTP","serviceName":"jenkins-with-did:jenkins-with-did","ingressName":"jenkins-with-did:jenkins-with-did","hostname":"jenkins.dtl.miproad.ad","allNodes":true}]'
  creationTimestamp: "2020-04-30T06:38:40Z"
  generation: 11
  labels:
    app.kubernetes.io/component: jenkins-master
    app.kubernetes.io/instance: jenkins-with-did
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/name: jenkins
    helm.sh/chart: jenkins-1.18.0
    io.cattle.field/appId: jenkins-with-did
  name: jenkins-with-did
  namespace: jenkins-with-did
  resourceVersion: "29233038"
  selfLink: /apis/apps/v1/namespaces/jenkins-with-did/deployments/jenkins-with-did
  uid: 6439c48d-c4ce-418c-8553-d06fee13c7d1
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: jenkins-master
      app.kubernetes.io/instance: jenkins-with-did
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        cattle.io/timestamp: "2020-04-30T18:15:50Z"
        checksum/config: fda7089fede91f066c406bbba5e2a1d59f71183eebe9bca3fe7de19d13504058
        field.cattle.io/ports: '[[{"containerPort":8080,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"http","protocol":"TCP","sourcePort":0},{"containerPort":50000,"dnsName":"jenkins-with-did","hostPort":0,"kind":"ClusterIP","name":"slavelistener","protocol":"TCP","sourcePort":0}]]'
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: jenkins-master
        app.kubernetes.io/instance: jenkins-with-did
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/name: jenkins
        helm.sh/chart: jenkins-1.18.0
    spec:
      containers:
      - args:
        - --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)
        - --argumentsRealm.roles.$(ADMIN_USER)=admin
        - --httpPort=8080
        env:
        - name: JAVA_OPTS
        - name: JENKINS_OPTS
        - name: JENKINS_SLAVE_AGENT_PORT
          value: "50000"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-password
              name: jenkins-with-did
              optional: false
        - name: ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-user
              name: jenkins-with-did
              optional: false
        image: jenkins/jenkins:lts
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 90
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        env:
        - name: DOCKER_HOST
          value: tcp://localhost:2375
        name: jenkins
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 50000
          name: slavelistener
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /login
            port: http
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 50m
            memory: 256Mi
        securityContext:
          capabilities: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp
          name: tmp
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /var/jenkins_config
          name: jenkins-config
          readOnly: true
        - mountPath: /usr/share/jenkins/ref/secrets/
          name: secrets-dir
        - mountPath: /usr/share/jenkins/ref/plugins/
          name: plugin-dir
      - image: docker:18.05-dind
        imagePullPolicy: IfNotPresent
        name: dind
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - /var/jenkins_config/apply_config.sh
        env:
        - name: ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-password
              name: jenkins-with-did
              optional: false
        - name: ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: jenkins-admin-user
              name: jenkins-with-did
              optional: false
        image: jenkins/jenkins:lts
        imagePullPolicy: Always
        name: copy-default-config
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 50m
            memory: 256Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/docker
          name: dind-storage
        - mountPath: /tmp
          name: tmp
        - mountPath: /var/jenkins_home
          name: jenkins-home
        - mountPath: /var/jenkins_config
          name: jenkins-config
        - mountPath: /usr/share/jenkins/ref/secrets/
          name: secrets-dir
        - mountPath: /var/jenkins_plugins
          name: plugin-dir
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      serviceAccount: jenkins-with-did
      serviceAccountName: jenkins-with-did
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: dind-storage
      - emptyDir: {}
        name: plugins
      - emptyDir: {}
        name: tmp
      - configMap:
          defaultMode: 420
          name: jenkins-with-did
        name: jenkins-config
      - emptyDir: {}
        name: secrets-dir
      - emptyDir: {}
        name: plugin-dir
      - name: jenkins-home
        persistentVolumeClaim:
          claimName: jenkins-with-did
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-04-30T18:20:47Z"
    lastUpdateTime: "2020-04-30T18:20:47Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-04-30T06:38:40Z"
    lastUpdateTime: "2020-04-30T18:20:47Z"
    message: ReplicaSet "jenkins-with-did-5db85986b6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 11
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Thank you so much in advance!
Your idea is a valid approach.
The regular Jenkins image does not provide the Docker CLI, therefore using docker does not work out of the box. You can either build your own Jenkins image that provides the docker command, or you can use a prebuilt Jenkins image that includes the Docker CLI, for example: https://hub.docker.com/r/trion/jenkins-docker-client
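If you go the prebuilt-image route, the change to the Deployment above would be limited to the Jenkins container image (the image name comes from the link; pick a tag matching your Jenkins version):

containers:
- name: jenkins
  image: trion/jenkins-docker-client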
Alternatively, you can use hostPath volumes and mount /usr/bin/docker, /lib64 and /usr/lib64 from the node into your pod. This requires securityContext: privileged: true.
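A sketch of that host-mount variant (the volume names are made up here, and the library paths depend on the node's OS):

containers:
- name: jenkins
  securityContext:
    privileged: true
  volumeMounts:
  - name: docker-bin
    mountPath: /usr/bin/docker
  - name: lib64
    mountPath: /lib64
  - name: usr-lib64
    mountPath: /usr/lib64
volumes:
- name: docker-bin
  hostPath:
    path: /usr/bin/docker
    type: File
- name: lib64
  hostPath:
    path: /lib64
- name: usr-lib64
  hostPath:
    path: /usr/lib64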

How would I assign ConfigMap to a pod that is already running?

I cannot get a ConfigMap loaded into a pod that is currently running nginx.
I tried creating a simple pod definition and added to it a simple ConfigMap read, shown below:
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testcontainer
    image: nginx
    env:
    - name: MY_VAR
      valueFrom:
        configMapKeyRef:
          name: configmap1
          key: data1
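For this pod to start, a ConfigMap named configmap1 with a data1 key has to exist first; a minimal sketch (the value shown is just a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap1
data:
  data1: placeholder-value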
This ran successfully and its YAML file was saved and then deleted.
Here's what I got:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"testpod","namespace":"default"},"spec":{"containers":[{"env":[{"name":"MY_VAR","valueFrom":{"configMapKeyRef":{"key":"data1","name":"configmap1"}}}],"image":"nginx","name":"testcontainer"}]}}
  creationTimestamp: null
  name: testpod
  selfLink: /api/v1/namespaces/default/pods/testpod
spec:
  containers:
  - env:
    - name: MY_VAR
      valueFrom:
        configMapKeyRef:
          key: data1
          name: configmap1
    image: nginx
    imagePullPolicy: Always
    name: testcontainer
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-27x4x
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-0-1-103
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-27x4x
    secret:
      defaultMode: 420
      secretName: default-token-27x4x
status:
  phase: Pending
  qosClass: BestEffort
I then tried copying its syntax into another pod that was already running.
This is what I got using kubectl edit pod:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-08-17T18:15:22Z"
  labels:
    run: pod1
  name: pod1
  namespace: default
  resourceVersion: "12167"
  selfLink: /api/v1/namespaces/default/pods/pod1
  uid: fa297c13-c11a-11e9-9a5f-02ca4f0dcea0
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: pod1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-27x4x
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-0-1-102
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-27x4x
    secret:
      defaultMode: 420
      secretName: default-token-27x4x
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-08-17T18:15:22Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-08-17T18:15:27Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-08-17T18:15:27Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-08-17T18:15:22Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://99bfded0d69f4ed5ed854e59b458acd8a9197f9bef6d662a03587fe2ff61b128
    image: nginx:latest
    imageID: docker-pullable://nginx@sha256:53ddb41e46de3d63376579acf46f9a41a8d7de33645db47a486de9769201fec9
    lastState: {}
    name: pod1
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-08-17T18:15:27Z"
  hostIP: 10.0.1.102
  phase: Running
  podIP: 10.244.2.2
  qosClass: BestEffort
  startTime: "2019-08-17T18:15:22Z"
And here is the output of k get po pod1 -o yaml --export:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  selfLink: /api/v1/namespaces/default/pods/pod1
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: pod1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-27x4x
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: ip-10-0-1-102
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-27x4x
    secret:
      defaultMode: 420
      secretName: default-token-27x4x
status:
  phase: Pending
  qosClass: BestEffort
What am I doing wrong, or what have I missed?
You can't add configuration to a running pod; that's inherent to containers.
To put it simply: a container runs a service, and the state of the service defines the state of the container. As you know, nginx needs to reload its configuration if you change it, but that's not really a good idea in this context, so you need to stop/start the container with the new configuration.
So what you are seeing is normal: the service is still running, so it keeps the old configuration it had from before, even if you change the file.
If you need the service to pick up changes without downtime, set multiple replicas and create a rolling update rule so there is no downtime during the update.
There are some special cases, like Grafana, where the service can check whether files have changed since the last modification.
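In practice that usually means running nginx through a Deployment and rolling its pods whenever the ConfigMap changes, for example (the deployment name nginx-deploy is hypothetical; kubectl rollout restart needs kubectl 1.15+):

kubectl rollout restart deployment nginx-deploy
kubectl rollout status deployment nginx-deploy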
