Currently we have a working task_definition file for an AWS Fargate instance.
We want to migrate from Fargate to a specific EC2 instance type, e.g. z1d.
From the AWS documentation I found that the ecs.instance-type parameter needs to be added.
Unfortunately it does not state where it should be added in the task_definition.json file.
Currently we have something along the lines of:
{
"family": "generic-family",
"requiresCompatibilities": ["FARGATE"],
"cpu": "4096",
"memory": "8192",
...
"containerDefinitions": [
{
"name": "generic-docker-name",
"image": "...",
},
]
}
We think it should be something like:
{
"family": "generic-family",
"requiresCompatibilities": ["EC2"],
"ecs.instance-type": "Z1d",
...
"containerDefinitions": [
{
"name": "generic-docker-name",
"image": "...",
},
]
}
Or looking at some other documentation:
{
"family": "generic-family",
"requiresCompatibilities": ["EC2"],
...
"containerDefinitions": [
{
"name": "generic-docker-name",
"image": "...",
"Parameters": {
"InstanceTypeParameter" : {
"Type" : "String",
"Default" : "z1d.large",
"AllowedValues" : ["z1d.large"],
"Description" : "..."
}
}
},
]
}
But that doesn't seem to work.
Does anyone know how this should be done, or how I should read the AWS documentation for this specific topic?
Add this to your task_definition.json file at the top level.
"placementConstraints": [
{
"type": "memberOf",
"expression": "attribute:ecs.instance-type == z1d.large"
}
],
You can read more about it here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-constraints.html
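For reference, this is roughly how it fits into the task definition sketched in the question (placeholder values kept as-is; requiresCompatibilities switches to EC2, and the z1d container instances themselves are registered to the cluster separately, e.g. via an Auto Scaling group, not through the task definition):
{
  "family": "generic-family",
  "requiresCompatibilities": ["EC2"],
  "cpu": "4096",
  "memory": "8192",
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": "attribute:ecs.instance-type == z1d.large"
    }
  ],
  "containerDefinitions": [
    {
      "name": "generic-docker-name",
      "image": "..."
    }
  ]
}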
Related
I am trying to construct an ML pipeline DAG using Argo, and I am running into an issue where I need a value from one node in the DAG to be sent as a parameter to its subsequent node. Say the Argo DAG structure looks like the following:
{
"apiVersion": "argoproj.io/v1alpha1",
"kind": "Workflow",
"metadata": {
"generateName": "workflow01-"
},
"spec": {
"entrypoint": "workflow01",
"arguments": {
"parameters": [
{
"name": "log-level",
"value": "INFO"
}
]
},
"templates": [
{
"name": "workflow01",
"dag": {
"tasks": [
{
"name": "A",
"template": "task-container",
"arguments": {
"parameters": [
{
"name": "model-type",
"value": "INTENT-TRAIN"
}
]
}
},
{
"name": "B",
"template": "task-container",
"dependencies": ["A"],
"arguments": {
"parameters": [
{
"name": "model-type",
"value": "INTENT-EVALUATE"
}
]
}
}
]
}
},
{
"name": "task-container",
"inputs": {
"parameters": [
{
"name": "model-type",
"value": "NIL"
}
]
},
"container": {
"env": [
{
"name": "LOG_LEVEL",
"value": "{{workflow.parameters.log-level}}"
},
{
"name": "MODEL_TYPE",
"value": "{{inputs.parameters.model-type}}"
}
]
}
}
]
}
}
A -> B
The computation happening in B depends on the value that has been computed in A.
How will I be able to pass the value computed in A into B?
You can use Argo's "artifacts" for this - see the examples at https://github.com/argoproj/argo-workflows/tree/master/examples#artifacts
Another way is to set up a shared volume: https://github.com/argoproj/argo-workflows/tree/master/examples#volumes
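A minimal sketch of the artifact approach, reusing the DAG shape from the question (the template names, commands, and the /tmp/model.out path are hypothetical): task A writes its result to a file and declares it as an output artifact, and task B receives it as an input artifact.
"templates": [
  {
    "name": "workflow01",
    "dag": {
      "tasks": [
        { "name": "A", "template": "train" },
        {
          "name": "B",
          "template": "evaluate",
          "dependencies": ["A"],
          "arguments": {
            "artifacts": [
              { "name": "model", "from": "{{tasks.A.outputs.artifacts.model}}" }
            ]
          }
        }
      ]
    }
  },
  {
    "name": "train",
    "container": { "image": "...", "command": ["sh", "-c", "train.sh > /tmp/model.out"] },
    "outputs": {
      "artifacts": [ { "name": "model", "path": "/tmp/model.out" } ]
    }
  },
  {
    "name": "evaluate",
    "inputs": {
      "artifacts": [ { "name": "model", "path": "/tmp/model.out" } ]
    },
    "container": { "image": "...", "command": ["sh", "-c", "evaluate.sh /tmp/model.out"] }
  }
]
If the value is a small string rather than a file, Argo's output parameters (outputs.parameters with valueFrom.path on the producing template, referenced as {{tasks.A.outputs.parameters.<name>}}) achieve the same thing without requiring an artifact repository.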
I am using Kubernetes v1.10.3. I have one external NFS server which I am able to mount anywhere (on any physical machine). I want to mount this NFS share directly into a pod/container. I have tried, but every time I get an error. I don't want to use privileged containers; kindly help me fix this.
ERROR: MountVolume.SetUp failed for volume "nfs" : mount failed: exit
status 32 Mounting command: systemd-run Mounting arguments:
--description=Kubernetes transient mount for /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
--scope -- mount -t nfs 10.225.241.137:/stagingfs/alt/ /var/lib/kubelet/pods/d65eb963-68be-11e8-8181-00163eeb9788/volumes/kubernetes.io~nfs/nfs
Output: Running scope as unit run-43393.scope. mount: wrong fs type,
bad option, bad superblock on 10.225.241.137:/stagingfs/alt/, missing
codepage or helper program, or other error (for several filesystems
(e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so.
Mounting the export manually works: mount -t nfs 10.X.X.137:/stagingfs/alt /alt
I added the following two sections for the volume, but I get the error every time.
First:
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
Second:
"volumes": [
{
"name": "nfs",
"nfs": {
"server": "10.X.X.137",
"path": "/stagingfs/alt/"
}
}
],
--------------------- complete manifest (JSON) --------------------------------
{
"kind": "Deployment",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "jboss",
"namespace": "staging",
"selfLink": "/apis/extensions/v1beta1/namespaces/staging/deployments/jboss",
"uid": "6a85e235-68b4-11e8-8181-00163eeb9788",
"resourceVersion": "609891",
"generation": 2,
"creationTimestamp": "2018-06-05T11:34:32Z",
"labels": {
"k8s-app": "jboss"
},
"annotations": {
"deployment.kubernetes.io/revision": "2"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"k8s-app": "jboss"
}
},
"template": {
"metadata": {
"name": "jboss",
"creationTimestamp": null,
"labels": {
"k8s-app": "jboss"
}
},
"spec": {
"volumes": [
{
"name": "nfs",
"nfs": {
"server": "10.X.X.137",
"path": "/stagingfs/alt/"
}
}
],
"containers": [
{
"name": "jboss",
"image": "my.abc.com/alt:7.1_1.1",
"resources": {},
"volumeMounts": [
{
"name": "nfs",
"mountPath": "/alt"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"privileged": true
}
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {},
"schedulerName": "default-scheduler"
}
},
"strategy": {
"type": "RollingUpdate",
"rollingUpdate": {
"maxUnavailable": "25%",
"maxSurge": "25%"
}
},
"revisionHistoryLimit": 10,
"progressDeadlineSeconds": 600
},
"status": {
"observedGeneration": 2,
"replicas": 1,
"updatedReplicas": 1,
"readyReplicas": 1,
"availableReplicas": 1,
"conditions": [
{
"type": "Available",
"status": "True",
"lastUpdateTime": "2018-06-05T11:35:45Z",
"lastTransitionTime": "2018-06-05T11:35:45Z",
"reason": "MinimumReplicasAvailable",
"message": "Deployment has minimum availability."
},
{
"type": "Progressing",
"status": "True",
"lastUpdateTime": "2018-06-05T11:35:46Z",
"lastTransitionTime": "2018-06-05T11:34:32Z",
"reason": "NewReplicaSetAvailable",
"message": "ReplicaSet \"jboss-8674444985\" has successfully progressed."
}
]
}
}
Regards
Anupam Narayan
As stated in the error log:
for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program
According to this question, you might be missing the nfs-common package on your nodes, which you can install using sudo apt install nfs-common.
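For example, on apt-based worker nodes something along these lines should provide the missing mount helper (the manual mount just reuses the export from the question as a sanity check):
# Install the NFS client utilities on every worker node (Debian/Ubuntu)
sudo apt update && sudo apt install -y nfs-common

# nfs-common provides the helper the error message is asking for
ls /sbin/mount.nfs

# Optional sanity check from the node itself, reusing the export from the question
sudo mount -t nfs 10.X.X.137:/stagingfs/alt /mnt && sudo umount /mnt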
I post one JSON with RestAssured, and afterwards I need to verify that all fields are stored in the database with the correct values. My JSON is:
{
"id": "1",
"name": "name1",
"description": "description1",
"source": "source1",
"target": "target1",
"domain": "PM",
"transformation_rules": [
{
"name": "name2",
"filters": [
{
"object": "object1",
"pattern": "pattern1"
}
],
"operations": [
{
"pattern": "pattern2",
"replacement": "replacement1"
}
]
},
{
"name": "name3",
"filters": [
{
"object": "object2",
"pattern": "pattern2"
}
],
"operations": [
{
"pattern": "pattern3",
"replacement": "replacement2"
},
{
"pattern": "pattern3",
"replacement": "replacement3"
},
{
"pattern": "pattern4",
"replacement": "replacement4"
}
]
}
],
"conflict_policy": "ACCEPT_SOURCE"
}
So, I have:
responseGet = RestAssured.given().contentType(ContentType.JSON).when().get(urlApi + "/" + id);
My first verification is:
responseGet.then().body("$[0]['id']", equalTo("1"));
to verify that the field "id" equals "1", but it doesn't work, so I changed it to:
responseGet.then().body("$.id", equalTo("1"));
and I get the same result: it fails.
Please, can you give me your suggestions for testing the whole JSON?
Just for information, I tried to apply https://github.com/json-path/JsonPath.
Thank you very much in Advance,
Best Regards,
You can use jsonPath() directly to check this.
For example:
assertEquals("1", responseGet.body().jsonPath().getString("id"));
For reading more about JsonPath, see https://github.com/json-path/JsonPath (linked in the question).
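Note that Rest Assured's body() and jsonPath() take Groovy GPath-style expressions (plain id, transformation_rules[0].name, and so on), not the $-prefixed Jayway JsonPath syntax, which is why the $[0]['id'] and $.id attempts fail. A minimal sketch, assuming the GET returns the single posted object as the response root:
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;

// Field names below come from the JSON in the question.
responseGet.then()
    .body("id", equalTo("1"))
    .body("transformation_rules", hasSize(2))
    .body("transformation_rules[0].filters[0].object", equalTo("object1"))
    .body("transformation_rules[1].operations[2].replacement", equalTo("replacement4"));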
Kubernetes: creating a Service and a Deployment that uses an rbd volume succeeds in the default namespace, but the same configuration fails in a non-default namespace. Why?
The config (JSON):
{
"kind": "Deployment",
"spec": {
"replicas": "1",
"template": {
"spec": {
"volumes": [
{
"rbd": {
"secretRef": {
"name": "ceph-secret"
},
"image": "zhaosiyi.24",
"fsType": "ext4",
"readOnly": false,
"user": "admin",
"monitors": [
"xxx.xxx.xxx.6:6789",
"xxx.xxx.xxx.7:6789",
"xxx.xxx.xxx.8:6789"
],
"pool": "rrkd.rbd"
},
"name": "aa"
}
],
"imagePullSecrets": [
{
"name": "registrykey-m3-1"
}
],
"containers": [
{
"image": "ccr.ccs.tencentyun.com/rrkd/rrkd-nginx:1.0",
"volumeMounts": [
{
"readOnly": false,
"mountPath": "/mnt",
"name": "aa"
}
],
"name": "aa",
"ports": [
{
"protocol": "TCP",
"containerPort": 80
}
]
}
]
},
"metadata": {
"labels": {
"name": "aa"
}
}
},
"selector": {
"matchLabels": {
"name": "aa"
}
}
},
"apiVersion": "extensions/v1beta1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
{
"kind": "Service",
"spec": {
"type": "NodePort",
"ports": [
{
"targetPort": 80,
"protocol": "TCP",
"port": 80
}
],
"selector": {
"name": "aa"
}
},
"apiVersion": "v1",
"metadata": {
"labels": {
"name": "aa"
},
"name": "aa"
}
}
The strangest thing is that the output of kubectl describe pod shows success, without any error information, but kubectl get pod shows it is not actually successful, as in the screenshots below:
[screenshots of the kubectl describe pod and kubectl get pod output]
The problem has been solved. The non-default namespace fails because the new namespace does not have the Secrets, so authentication fails when pulling the image. You need to manually create the image-pull Secret in that namespace. If you use RBD or PVC mode, you also need to manually create the Secret for mounting the storage there.
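For example, a rough sketch of the image-pull Secret that has to exist in the non-default namespace (the namespace name and the base64 value here are placeholders):
{
  "apiVersion": "v1",
  "kind": "Secret",
  "metadata": {
    "name": "registrykey-m3-1",
    "namespace": "your-namespace"
  },
  "type": "kubernetes.io/dockerconfigjson",
  "data": {
    ".dockerconfigjson": "<base64-encoded docker config for ccr.ccs.tencentyun.com>"
  }
}
The ceph-secret referenced by the rbd volume likewise has to be created in the same namespace, typically as a Secret of type kubernetes.io/rbd whose data field key holds the base64-encoded Ceph client key.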
Can you show more detail from the describe output of the pod? I think it will show the failure log. Usually it is because you don't have the secret or the secret is not set up well, and the Ceph configuration may be wrong too.
Can someone give an example of how to use the gitRepo type of volume in Kubernetes?
The doc says it's a plugin; I'm not sure what that means. I could not find an example anywhere and I don't know the proper syntax.
In particular, are there parameters to pull a specific branch, use credentials (username, password, or SSH key), etc.?
EDIT:
Going through the Kubernetes code this is what I figured so far:
- name: data
gitRepo:
repository: "git repo url"
revision: "hash of the commit to use"
But I can't seem to make it work, and I'm not sure how to troubleshoot this issue.
This is a sample application I used:
{
"kind": "ReplicationController",
"apiVersion": "v1",
"metadata": {
"name": "tess.io",
"labels": {
"name": "tess.io"
}
},
"spec": {
"replicas": 3,
"selector": {
"name": "tess.io"
},
"template": {
"metadata": {
"labels": {
"name": "tess.io"
}
},
"spec": {
"containers": [
{
"image": "tess/tessio:0.0.3",
"name": "tessio",
"ports": [
{
"containerPort": 80,
"protocol": "TCP"
}
],
"volumeMounts": [
{
"mountPath": "/tess",
"name": "tess"
}
]
}
],
"volumes": [
{
"name": "tess",
"gitRepo": {
"repository": "https://<TOKEN>:x-oauth-basic#github.com/tess/tess.io"
}
}
]
}
}
}
}
And you can use the revision too.
PS: The repo above does not exist anymore.
UPDATE:
gitRepo is now deprecated
https://github.com/kubernetes/kubernetes/issues/60999
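The issue above recommends cloning in an init container into an emptyDir instead. A rough sketch of that pattern inside the pod template spec (alpine/git is just one example of an image that ships git; the repository and mount path reuse the no-longer-existing example from the answer below):
{
  "spec": {
    "initContainers": [
      {
        "name": "git-clone",
        "image": "alpine/git",
        "args": ["clone", "--single-branch", "--", "https://github.com/tess/tess.io", "/tess"],
        "volumeMounts": [ { "name": "tess", "mountPath": "/tess" } ]
      }
    ],
    "containers": [
      {
        "name": "tessio",
        "image": "tess/tessio:0.0.3",
        "volumeMounts": [ { "name": "tess", "mountPath": "/tess" } ]
      }
    ],
    "volumes": [ { "name": "tess", "emptyDir": {} } ]
  }
}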
ORIGINAL ANSWER:
Going through the code, this is what I figured out:
- name: data
gitRepo:
repository: "git repo url"
revision: "hash of the commit to use"
After fixing typos in my mountPath, it works fine.