I am making a website that follows John Papa's Code Camper SPA Jumpstart Pluralsight course. My database entities have the following hierarchy:
Proficiency contains Action(s) and Level(s).
When I delete a "Proficiency", I get the following server side error:
"Object reference not set to an instance of an object."
Looking at the json JObject saveBundle in the BreezeController, I am seeing a mysterious:
"undefined": false,
in every entity. None of my entities has a Boolean property. Just like in Code Camper, I am adding an "isPartial" property in the constructor of each entity, as shown in the code below.
var proficiencyConstructor = function () {
    this.isPartial = false;
};

metadataStore.registerEntityTypeCtor('Proficiency', proficiencyConstructor, proficiencyInitializer);

function proficiencyInitializer(proficiency) {
    var empty = "00000000-0000-0000-0000-000000000000";
    if (proficiency.id() === empty) {
        proficiency.id(breeze.core.getUuid());
    }
}
My gut says the mysterious "undefined": false is the "isPartial" property. According to the documentation, "Breeze adds the isPartial property to the Entity metadata as an unmapped property. The values of unmapped properties are not transmitted to the service." I am stuck. Can anyone recommend things I can do to figure this out?
Thanks,
Dan
Here is a sample from the saveBundle:
{
  "entities": [
    {
      "Id": "a0223d7c-35e5-458f-ba83-65ec7ec189fa",
      "Name": "AST Prof0",
      "IsEnabled": true,
      "Description": "AST Prof0",
      "ProficiencyType": "TBD",
      "ApplicationId": "7ba4b47f-06a3-4ceb-bca6-de3fd3699bbd",
      "undefined": false,
      "entityAspect": {
        "entityTypeName": "Proficiency:#LobGame.Model",
        "entityState": "Deleted",
        "originalValuesMap": { "IsPartial": true },
        "autoGeneratedKey": null
      }
    },
This is likely due to a bug that was fixed in Breeze 1.2.8. Upgrading fixed it for me.
From their release notes:
Bug fix for the case where a save involving a delete would fail when
that save also involved a modification to an unmapped property.
http://www.breezejs.com/documentation/download
Has anyone had luck with placing a GraphQL custom type argument as a Postman or GraphQL variable? I'm kind of spinning in circles right now; I hope a fresh pair of eyes can point me in the right direction.
What I'm trying to do is send a mutation request using Postman. The problem I'm having is that the method I'm calling takes a custom type as an argument. Placing the content of that variable as a GraphQL variable or a Postman variable is giving me a headache. I can't embed pictures yet, so here are the links (they are safe).
Schema
This custom type is a JSON-like structure, consisting of two enums and a set of primitive types (strings, ints...). I can screenshot the entire thing but basically that's it: two enums followed by strings, ints...
Custom type definition
What I've tried so far:
Simply hardcoding the request in Postman works, but I wish to send multiple requests with varying data.
Placing it in a GraphQL variable results in this error message:
{
"errors": [
{
"message": "Bad request - invalid request body.",
"locations": []
}
],
"data": null
}
Placing the custom type content as a Postman environment variable works, but I'm getting a syntax error (although the request passes...).
The request body is below. Hardcoding it and using a Postman variable produce the same request body, apart from the syntax error.
query: "mutation {
createApplication(request: {
applicationKind: NEW_ISSUANCE,
documentKind: REGULAR_PASSPORT,
personalData: {
timestamp: null,
firstname: "NAME",
lastname: "LASTNAME",
middlename: "MIDDLENAME",
dateOfBirth: "2011-09-28",
citizenshipCountryCode: "USA",
gender: MALE,
personalNumber: "3344",
placeOfBirth: "CHICAGO",
municipalityOfBirth: "SOUTH",
countryCodeOfBirth: "USA"},
addressData:{
street: "WEST",
municipality: "EAST",
place: "CHICAGO",
country: {
code: "USA",
name: null
},
entrance: "б",
flat: "13",
number: "35"}
})
{
__typename
... on AsyncTaskStatus {
taskID
state
payload {
... on ApplicationUpdated {
applicationID
applicationNumber
__typename
}
__typename
}
__typename
}
... on Error {
...errorData
__typename
}
}
}
fragment errorData on Error {
__typename
code
message
}"
Postman variable with a squiggly line
I'm spinning in circles right now. Has anyone had any luck with Postman requests of this kind?
I can post more info, screenshots...just let me know. I'll be watching this topic closely and provide feedback.
Thank you for your time.
Add the variable in the GraphQL variables section as:
{
  "request": {{request}}
}
and then refer to it in your query as
$request
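To make this concrete, here is a minimal sketch of the resulting raw request body. The operation name CreateApplication and the input type name CreateApplicationRequest! are assumptions (read the exact input type name from your schema), and the selection set and variables are trimmed for brevity. Note that enum values such as NEW_ISSUANCE are passed as plain JSON strings inside the variables object:
{
  "query": "mutation CreateApplication($request: CreateApplicationRequest!) { createApplication(request: $request) { __typename ... on Error { code message } } }",
  "variables": {
    "request": {
      "applicationKind": "NEW_ISSUANCE",
      "documentKind": "REGULAR_PASSPORT",
      "personalData": { "firstname": "NAME", "lastname": "LASTNAME", "dateOfBirth": "2011-09-28" },
      "addressData": { "street": "WEST", "place": "CHICAGO" }
    }
  }
}
The squiggly line you saw is most likely just Postman's editor flagging the {{request}} placeholder, which is not valid JSON until the variable is substituted at send time; the request that is actually sent is well-formed.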
We are creating a Zapier app to expose our APIs to the public, so anyone can use it. The main endpoint people use returns a very large and complex JSON object. Zapier, it seems, has a really difficult time parsing complex nested JSON, but it does wonderfully with a very simple response object such as
{ "field": "value" }
The data being returned has the structure below, and we want to move some of the fields to the root of the response so it's easily parsed by Zapier.
"networkSections": [
{
"identifier": "Deductible",
"label": "Deductible",
"inNetworkParameters": [
{
"key": "Annual",
"value": " 600.00",
"message": null,
"otherInfo": null
},
{
"key": "Remaining",
"value": " 600.00",
"message": null,
"otherInfo": null
}
],
"outNetworkParameters": null
},
So, can we do something to return, for example, the remaining deductible?
I got this far (adding outputFields), but this returns an array of values. I'm not sure how to parse through this array, either in the Zap or in the App.
{ key: 'networkSections[]inNetworkParameters[]key', label: 'xNetworkSectionsKey', type: 'string' },
i.e. this returns an array of "Annual", "Remaining", etc.
Great question. In this case, there's a lot going on, and outputFields can't quite handle it all. :(
In your example, inNetworkParameters contains an array of objects. Throughout our documentation, we refer to these as line items. These line items can be passed to other actions, but the different expected structures present a bit of a problem. The way we've handled this is by letting users map line items from one step's output to another step's input per field. So if step 1 returns
{
"some_array": [
{
"some_key": "some_value"
}
]
}
and the next step needs to send
{
"data": [
{
"some_other_key": "some_value"
}
]
}
users can accomplish that by mapping some_array.some_key to data.some_other_key.
All of that being said, if you want to always return a Remaining Deductible object, you'll have to do it by modifying the result object itself. As long as this data is always in that same order, you can do something akin to
var data = z.JSON.parse(bundle.response.content);
data["Remaining Deductible"] = data.networkSections[0].inNetworkParameters[1].value;
return data;
If the order differs, you'll have to implement some sort of search to find the objects you'd like to return.
I hope that all helps!
Caleb got me where I wanted to go. For completeness this is the solution.
In the creates directory I have a js file for the actual call. The perform part is below.
perform: (z, bundle) => {
  const promise = z.request({
    url: 'https://api.example.com/API/Example/' + bundle.inputData.elgRequestID,
    method: 'GET',
    headers: {
      'content-type': 'application/json',
    }
  });
  return promise.then(function (result) {
    var data = JSON.parse(result.content);
    // Walk every section/parameter pair and copy the values we care
    // about up to the root of the response object.
    for (var i = 0; i < data.networkSections.length; i++) {
      for (var j = 0; j < data.networkSections[i].inNetworkParameters.length; j++) {
        // DEDUCT
        if (data.networkSections[i].identifier == "Deductible" &&
            data.networkSections[i].inNetworkParameters[j].key == "Annual") {
          data["zAnnual Deductible"] = data.networkSections[i].inNetworkParameters[j].value;
        }
      } // inner for
    } // outer for
    return data;
  });
}
I've got the following JSON payload:
"user": {
"id": 1,
"username": "bla",
"first_name": "bla",
"self": {
"info": "MyInfo",
"website": "MyWebsite"
},
// ... some more properties, doesn't matter
}
I'm trying to map that nested object self into the user model as well, and set up the following property mappings:
mapping.addAttributeMappingsFromDictionary(
["id" : "id",
"username" : "username",
"first_name" : "firstname",
"self.info" : "info",
"self.website" : "website"])
Now when I trigger a GET request, everything maps fine except the nested properties self.info and self.website. A relationship mapping works as well, but then I need a separate model, which is a bit ugly for this information.
I'm using RestKit 0.25.
I just discovered that the problem lies in the method [object valueForKeyPath:], which RestKit uses for its mappings. This method returns the object itself when it is called with self, so when I change the JSON keyPath to something different, e.g. personal.info, it works as expected! I think this changed in a recent RestKit release, because another app using RestKit 0.23.x works with a self keypath.
I am using swagger 2.0. I have a response object defined in "definitions" by the name "mobilePrice".
I have another response object named "Offer" which has properties "PriceOne" and "PriceTwo" referencing "mobilePrice".
Code looks like this:
"mobilePrice": {
"properties": {
"amount": {
"type": "string"
}
}
}
"Offer": {
"properties": {
"PriceOne": {
"$ref": "mobilePrice"
},
"PriceTwo": {
"$ref": "mobilePrice"
}
}
}
When I view it in Swagger UI, it does not show the "PriceTwo" property at all.
After trying various things, I figured out that the problem occurs because the response object "mobilePrice" is referenced more than once. Can someone help me reference the same object more than once?
Thank you in advance.
First, you should fix your references. It may work now, but officially that's not the right way, and support for it may be dropped. The correct form would be:
"$ref": "#/definitions/mobilePrice"
Second, the behavior you describe is a known issue. You can follow its progress here: https://github.com/swagger-api/swagger-js/issues/186.
Elasticsearch experts,
I have been unable to find a simple way to just tell ElasticSearch to insert the _timestamp field for all the documents that are added in all the indices (and all document types).
I see an example for specific types:
http://www.elasticsearch.org/guide/reference/mapping/timestamp-field/
and also see an example for all indices for a specific type (using _all):
http://www.elasticsearch.org/guide/reference/api/admin-indices-put-mapping/
but I am unable to find any documentation on adding it by default for all documents that get added irrespective of the index and type.
Elasticsearch used to support automatically adding timestamps to documents being indexed, but it deprecated this feature in 2.0.0.
From the version 5.5 documentation:
The _timestamp and _ttl fields were deprecated and are now removed. As a replacement for _timestamp, you should populate a regular date field with the current timestamp on application side.
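For example, a minimal sketch of that application-side replacement (index name, field name, and values here are placeholders) is to map an ordinary date field and have the application supply the current time with every index request:
PUT myindex/_doc/1
{
  "created_at": "2021-04-27T10:51:38Z",
  "message": "some document"
}
Here created_at is just a regular date field; the application computes the timestamp at index time.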
You can do this by providing it when creating your index.
curl -XPOST localhost:9200/test -d '{
"settings" : {
"number_of_shards" : 1
},
"mappings" : {
"_default_":{
"_timestamp" : {
"enabled" : true,
"store" : true
}
}
}
}'
That will then automatically create a _timestamp for everything you put in the index.
Then, after indexing something, the _timestamp field will be returned when you request it.
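For instance, on those pre-2.0 versions the stored _timestamp could be fetched back with the fields parameter (the type name and document id here are placeholders):
curl -XGET 'localhost:9200/test/type1/1?fields=_timestamp,_source'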
Adding another way to get an indexing timestamp. Hope this may help someone.
An ingest pipeline can be used to add a timestamp when a document is indexed. Here is a sample example:
PUT _ingest/pipeline/indexed_at
{
"description": "Adds indexed_at timestamp to documents",
"processors": [
{
"set": {
"field": "_source.indexed_at",
"value": "{{_ingest.timestamp}}"
}
}
]
}
Earlier, Elasticsearch used named pipelines, which meant the 'pipeline' param had to be specified in the Elasticsearch endpoint used to write/index documents. (Ref: link) This was a bit troublesome, as you would need to make changes to endpoints on the application side.
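For example, with only a named pipeline, every index request has to name it explicitly via the pipeline query param (the index name here is hypothetical):
PUT my-index/_doc/1?pipeline=indexed_at
{
  "message": "some document"
}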
With Elasticsearch version >= 6.5, you can now specify a default pipeline for an index using the index.default_pipeline setting. (Refer to the link for details.)
Here is the command to set the default pipeline:
PUT ms-test/_settings
{
"index.default_pipeline": "indexed_at"
}
I haven't tried it out yet, as I haven't upgraded to ES 6.5, but the above command should work.
You can make use of default index pipelines, leverage the script processor, and thus emulate the auto_now_add functionality you may know from Django and DEFAULT GETDATE() from SQL.
The process of adding a default yyyy-MM-dd HH:mm:ss date goes like this:
1. Create the pipeline and specify which indices it'll be allowed to run on:
PUT _ingest/pipeline/auto_now_add
{
"description": "Assigns the current date if not yet present and if the index name is whitelisted",
"processors": [
{
"script": {
"source": """
// skip if not whitelisted
if (![ "myindex",
"logs-index",
"..."
].contains(ctx['_index'])) { return; }
// don't overwrite if present
if (ctx['created_at'] != null) { return; }
ctx['created_at'] = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
"""
}
}
]
}
Side note: the ingest processor's Painless script context is documented here.
2. Update the default_pipeline setting in all of your indices:
PUT _all/_settings
{
"index": {
"default_pipeline": "auto_now_add"
}
}
Side note: you can restrict the target indices using the multi-target syntax:
PUT myindex,logs-2021-*/_settings?allow_no_indices=true
{
"index": {
"default_pipeline": "auto_now_add"
}
}
3. Ingest a document to one of the configured indices:
PUT myindex/_doc/1
{
"abc": "def"
}
4. Verify that the date string has been added:
GET myindex/_search
An example for ElasticSearch 6.6.2 in Python 3:
from elasticsearch import Elasticsearch
es = Elasticsearch(hosts=["localhost"])
timestamp_pipeline_setting = {
"description": "insert timestamp field for all documents",
"processors": [
{
"set": {
"field": "ingest_timestamp",
"value": "{{_ingest.timestamp}}"
}
}
]
}
es.ingest.put_pipeline("timestamp_pipeline", timestamp_pipeline_setting)
conf = {
"settings": {
"number_of_shards": 2,
"number_of_replicas": 1,
"default_pipeline": "timestamp_pipeline"
},
"mappings": {
"articles":{
"dynamic": "false",
"_source" : {"enabled" : "true" },
"properties": {
"title": {
"type": "text",
},
"content": {
"type": "text",
},
}
}
}
}
response = es.indices.create(
index="articles_index",
body=conf,
ignore=400 # ignore 400 already exists code
)
print ('\nresponse:', response)
doc = {
'title': 'automatically adding a timestamp to documents',
'content': 'prior to version 5 of Elasticsearch, documents had a metadata field called _timestamp. When enabled, this _timestamp was automatically added to every document. It would tell you the exact time a document had been indexed.',
}
res = es.index(index="articles_index", doc_type="articles", id=100001, body=doc)
print(res)
res = es.get(index="articles_index", doc_type="articles", id=100001)
print(res)
For ES 7.x, the example should work after removing the doc_type-related parameters, as doc_type is no longer supported.
First create the index and the properties of the index, such as fields and datatypes, and then insert the data using the REST API.
Below is the way to create an index with field properties. Execute the following in the Kibana console:
PUT /vfq-jenkins
{
  "mappings": {
    "properties": {
      "BUILD_NUMBER": { "type": "double" },
      "BUILD_ID": { "type": "double" },
      "JOB_NAME": { "type": "text" },
      "JOB_STATUS": { "type": "keyword" },
      "time": { "type": "date" }
    }
  }
}
The next step is to insert the data into that index:
curl -u elastic:changeme -X POST 'http://elasticsearch:9200/vfq-jenkins/_doc/?pretty' \
  -H 'Content-Type: application/json' -d '{
  "BUILD_NUMBER": "83", "BUILD_ID": "83", "JOB_NAME": "OMS_LOG_ANA", "JOB_STATUS": "SUCCESS",
  "time": "2019-09-08T12:39:00"
}'