Ruby on Rails MongoDB existing dataset

I have a RoR application which will read info from a MongoDB collection.
I have my model set up as:
class Vulnerability
  include Mongoid::Document

  field :id, type: String
  field :description, type: String
  field :type, type: String
end
The first record in the collection is formatted like this:
> db.vulnerabilities.findOne()
{
    "_id" : "NGINX:CVE-2009-3896",
    "_index" : "bulletins",
    "_type" : "bulletin",
    "_score" : null,
    "_source" : {
        "lastseen" : "2016-09-26T17:22:32",
        "references" : [ ],
        "edition" : 1,
        "description" : "Null pointer dereference vulnerability\nSeverity: major\nCVE-2009-3896\nNot vulnerable: 0.8.14+, 0.7.62+, 0.6.39+, 0.5.38+\nVulnerable: 0.1.0-0.8.13",
        "reporter" : "Nginx",
        "published" : "2009-11-24T12:30:00",
        "type" : "nginx",
        "title" : "Null pointer dereference vulnerability",
        "bulletinFamily" : "software",
        "affectedSoftware" : [
            {
                "name" : "nginx",
                "version" : "0.8.13",
                "operator" : "le"
            }
        ],
        "cvelist" : [
            "CVE-2009-3896"
        ],
        "modified" : "2009-11-24T12:30:00",
        "id" : "NGINX:CVE-2009-3896",
        "href" : "http://nginx.org/en/security_advisories.html",
        "cvss" : {
            "score" : 5,
            "vector" : "AV:NETWORK/AC:LOW/Au:NONE/C:NONE/I:NONE/A:PARTIAL/"
        }
    },
    "sort" : [
        38985
    ]
}
I want to pull the id, the type and the description. When I try to view the index page I get
NameError at /vulnerabilities uninitialized constant Bulletin
I believe the issue is the type: the document has a top-level _type key, which Mongoid treats as its single-collection-inheritance discriminator, so on load it camelizes the stored value ("bulletin") and tries to instantiate a Bulletin class, hence the NameError. The type I actually want is the one under the _source key, so in the example above it should display nginx, not bulletin.

Related

Does POST /reviews-v1 using Crucible REST API allow us to add reviewers?

Is there a way to add reviewers when creating a code review through the REST API? I tried adding "reviewers" with an array of usernames, but that didn't work; I got this error:
Unrecognized field "reviewers" (Class com.atlassian.crucible.spi.data.ReviewData), not marked as ignorable
The request body example in their API docs is below:
{
    "reviewData" : {
        "projectKey" : "CR-FOO",
        "name" : "Example review.",
        "description" : "Description or statement of objectives for this example review.",
        "author" : {
            "userName" : "auth",
            "displayName" : "Jane Authy",
        },
        "moderator" : {
            "userName" : "scott",
            "displayName" : "Scott the Moderator",
        },
        "creator" : {
            "userName" : "joe",
            "displayName" : "Joe Krustofski",
        },
        "permaId" : {
            "id" : "CR-FOO-21"
        },
        "summary" : "some review summary.",
        "state" : "Review",
        "type" : "REVIEW",
        "allowReviewersToJoin" : true,
        "metricsVersion" : 4,
        "createDate" : "2022-06-20T09:37:11.621+0000",
        "dueDate" : "2022-06-23T09:37:11.621+0000",
        "reminderDate" : "2022-06-21T09:37:11.621+0000",
        "linkedIssues" : [ "DEF-456", "ABC-123", "GHI-789" ],
        "jiraIssueKey" : "ABC-123"
    },
    "patch" : "Index: emptytests/notempty/a.txt\n===================================================================\ndiff -u -N -r1.31 -r1.32\n--- emptytests/notempty/a.txt\t22 Sep 2004 00:38:15 -0000\t1.31\n+++ emptytests/notempty/a.txt\t5 Dec 2004 01:04:25 -0000\t1.32\n@@ -4,4 +4,5 @@\n hello there :D\n CRU-123\n http://madbean.com/blog/\n-!\n\\ No newline at end of file\n+!\n+foobie\n\\ No newline at end of file\nIndex: test/a.txt\n===================================================================\ndiff -u -N -r1.31 -r1.32\n--- test/a.txt\t22 Sep 2004 00:38:15 -0000\t1.31\n+++ test/a.txt\t5 Dec 2004 01:04:25 -0000\t1.32\n@@ -4,4 +4,5 @@\n hello there :D\n CRU-123\n http://madbean.com/blog/\n-!\n\\ No newline at end of file\n+!\n+foobie\n\\ No newline at end of file",
    "anchor" : {
        "anchorPath" : "/",
        "anchorRepository" : "REPO",
        "stripCount" : 2
    },
    "changesets" : {
        "changesetData" : [ {
            "id" : "63452"
        } ],
        "repository" : "REPO"
    }
}
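The error above is Jackson telling you that ReviewData simply has no "reviewers" field, so the create payload cannot carry them. One workaround, sketched below as an assumption rather than a confirmed recipe, is to create the review first and then add reviewers in a second request; several Crucible versions expose a separate reviewers sub-resource (POST /rest-service/reviews-v1/{id}/reviewers with a comma-separated list of usernames as a plain-text body). The host, credentials and usernames here are placeholders; check the REST API docs for your version.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class AddReviewersExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes());
        // Second call, after POST /reviews-v1 has created CR-FOO-21:
        // add reviewers via the (assumed) reviewers sub-resource.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://crucible.example.com/rest-service/reviews-v1/CR-FOO-21/reviewers"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString("alice,bob"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect a 2xx on success
    }
}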

Avro Schema Evolution with Enum – Deserialization Crashes

I defined two versions of a record in two separate AVSC schema files and used the namespace to distinguish the versions.
SimpleV1.avsc
{
    "type" : "record",
    "name" : "Simple",
    "namespace" : "test.simple.v1",
    "fields" : [
        {
            "name" : "name",
            "type" : "string"
        },
        {
            "name" : "status",
            "type" : {
                "type" : "enum",
                "name" : "Status",
                "symbols" : [ "ON", "OFF" ]
            },
            "default" : "ON"
        }
    ]
}
Example JSON
{"name":"A","status":"ON"}
Version 2 just adds a description field with a default value.
SimpleV2.avsc
{
    "type" : "record",
    "name" : "Simple",
    "namespace" : "test.simple.v2",
    "fields" : [
        {
            "name" : "name",
            "type" : "string"
        },
        {
            "name" : "description",
            "type" : "string",
            "default" : ""
        },
        {
            "name" : "status",
            "type" : {
                "type" : "enum",
                "name" : "Status",
                "symbols" : [ "ON", "OFF" ]
            },
            "default" : "ON"
        }
    ]
}
Example JSON
{"name":"B","description":"b","status":"ON"}
Both schemas were compiled to Java classes.
In my example I wanted to test backward compatibility: a record written with V1 should be readable by a reader using V2, and the default values should be filled in. This works as long as I do not use enums.
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.SchemaCompatibility;
import org.apache.avro.SchemaCompatibility.SchemaCompatibilityType;
import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificDatumWriter;
import org.apache.avro.specific.SpecificRecord;
import org.junit.Assert;

public class EnumEvolutionExample {

    public static void main(String[] args) throws IOException {
        Schema schemaV1 = new org.apache.avro.Schema.Parser().parse(new File("./src/main/resources/SimpleV1.avsc"));
        //works as well
        //Schema schemaV1 = test.simple.v1.Simple.getClassSchema();
        Schema schemaV2 = new org.apache.avro.Schema.Parser().parse(new File("./src/main/resources/SimpleV2.avsc"));

        test.simple.v1.Simple simpleV1 = test.simple.v1.Simple.newBuilder()
                .setName("A")
                .setStatus(test.simple.v1.Status.ON)
                .build();

        SchemaPairCompatibility schemaCompatibility = SchemaCompatibility.checkReaderWriterCompatibility(
                schemaV2,
                schemaV1);

        //Checks that writing with v1 and reading with v2 is compatible
        Assert.assertEquals(SchemaCompatibilityType.COMPATIBLE, schemaCompatibility.getType());

        byte[] binaryV1 = serealizeBinary(simpleV1);

        //Crashes with: AvroTypeException: Found test.simple.v1.Status, expecting test.simple.v2.Status
        test.simple.v2.Simple v2 = deSerealizeBinary(binaryV1, new test.simple.v2.Simple(), schemaV1);
    }

    public static byte[] serealizeBinary(SpecificRecord record) {
        DatumWriter<SpecificRecord> writer = new SpecificDatumWriter<>(record.getSchema());
        byte[] data = new byte[0];
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        Encoder binaryEncoder = EncoderFactory.get().binaryEncoder(stream, null);
        try {
            writer.write(record, binaryEncoder);
            binaryEncoder.flush();
            data = stream.toByteArray();
        } catch (IOException e) {
            System.out.println("Serialization error: " + e.getMessage());
        }
        return data;
    }

    public static <T extends SpecificRecord> T deSerealizeBinary(byte[] data, T reuse, Schema writer) {
        Decoder decoder = DecoderFactory.get().binaryDecoder(data, null);
        // Resolve from the writer schema (v1) to the reader schema (v2)
        DatumReader<T> datumReader = new SpecificDatumReader<>(writer, reuse.getSchema());
        try {
            T datum = datumReader.read(null, decoder);
            return datum;
        } catch (IOException e) {
            System.out.println("Deserialization error: " + e.getMessage());
        }
        return null;
    }
}
The checkReaderWriterCompatibility method confirms that the schemas are compatible, but when I deserialize I get the following exception:
Exception in thread "main" org.apache.avro.AvroTypeException: Found test.simple.v1.Status, expecting test.simple.v2.Status
    at org.apache.avro.io.ResolvingDecoder.doAction(ResolvingDecoder.java:309)
    at org.apache.avro.io.parsing.Parser.advance(Parser.java:86)
    at org.apache.avro.io.ResolvingDecoder.readEnum(ResolvingDecoder.java:260)
    at org.apache.avro.generic.GenericDatumReader.readEnum(GenericDatumReader.java:267)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:181)
    at org.apache.avro.specific.SpecificDatumReader.readField(SpecificDatumReader.java:136)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:247)
    at org.apache.avro.specific.SpecificDatumReader.readRecord(SpecificDatumReader.java:123)
    at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:179)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:160)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
    at test.EnumEvolutionExample.deSerealizeBinary(EnumEvolutionExample.java:70)
    at test.EnumEvolutionExample.main(EnumEvolutionExample.java:45)
I don't understand why Avro thinks it got a v1.Status; namespaces are not part of the binary encoding. Is this a bug, or does anyone have an idea how to get this running?
Try adding an "aliases" attribute to the enum in the reader schema.
For example:
v1
{
    "type" : "record",
    "name" : "Simple",
    "namespace" : "test.simple.v1",
    "fields" : [
        {
            "name" : "name",
            "type" : "string"
        },
        {
            "name" : "status",
            "type" : {
                "type" : "enum",
                "name" : "Status",
                "symbols" : [ "ON", "OFF" ]
            },
            "default" : "ON"
        }
    ]
}
v2
{
    "type" : "record",
    "name" : "Simple",
    "namespace" : "test.simple.v2",
    "fields" : [
        {
            "name" : "name",
            "type" : "string"
        },
        {
            "name" : "description",
            "type" : "string",
            "default" : ""
        },
        {
            "name" : "status",
            "type" : {
                "type" : "enum",
                "name" : "Status",
                "aliases" : [ "test.simple.v1.Status" ],
                "symbols" : [ "ON", "OFF" ]
            },
            "default" : "ON"
        }
    ]
}
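With the alias in place, schema resolution can match the writer's test.simple.v1.Status against the reader's test.simple.v2.Status by name. As a minimal sketch of the round trip, here is the same experiment with Avro's generic API instead of the generated classes, assuming the two .avsc files above (V2 including the alias) are on disk:
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.Encoder;
import org.apache.avro.io.EncoderFactory;

public class AliasResolutionDemo {
    public static void main(String[] args) throws IOException {
        Schema v1 = new Schema.Parser().parse(new File("SimpleV1.avsc"));
        Schema v2 = new Schema.Parser().parse(new File("SimpleV2.avsc")); // carries the alias

        // Encode a record with the v1 (writer) schema
        GenericRecord recordV1 = new GenericData.Record(v1);
        recordV1.put("name", "A");
        recordV1.put("status", new GenericData.EnumSymbol(v1.getField("status").schema(), "ON"));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Encoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(v1).write(recordV1, encoder);
        encoder.flush();

        // Decode with writer = v1 and reader = v2; the alias lets the
        // ResolvingDecoder map test.simple.v1.Status to test.simple.v2.Status
        Decoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord recordV2 = new GenericDatumReader<GenericRecord>(v1, v2).read(null, decoder);
        System.out.println(recordV2); // {"name": "A", "description": "", "status": "ON"}
    }
}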
Found a workaround: I moved the enum to an "unversioned" namespace, so it is the same in both versions.
It still looks like a bug to me, though: resolving a record across namespaces is not an issue, but an enum is, and both are named complex types in Avro.
{
    "type" : "record",
    "name" : "Simple",
    "namespace" : "test.simple.v1",
    "fields" : [
        {
            "name" : "name",
            "type" : "string"
        },
        {
            "name" : "status",
            "type" : {
                "type" : "enum",
                "name" : "Status",
                "namespace" : "test.model.unversioned",
                "symbols" : [ "ON", "OFF" ]
            },
            "default" : "ON"
        }
    ]
}

Extracting a specific part from JSON

Hello, I am trying to extract only the id value from a JSON response. However, if I call
print(response!["id"])
the result "not created" is printed. response is already in JSON format.
Is there something that I am doing wrong?
Update 1
[
    {
        "user" : {
            "last_name" : "test",
            "email" : "test@test.com"
        },
        "id" : 902,
        "scale" : 7,
        "created_at" : "2018-02-24 06:45:33"
    },
    {
        "user" : {
            "last_name" : "test",
            "email" : "test@test.com"
        },
        "id" : 903,
        "scale" : 7,
        "created_at" : "2018-02-24 06:45:33"
    },
    {
        "user" : {
            "last_name" : "test",
            "email" : "test@test.com"
        },
        "id" : 904,
        "scale" : 7,
        "created_at" : "2018-02-24 06:45:33"
    }
]
The best way to access the id of any object in the response array is:
let id = response[index]["id"]
// Here index is the index of the object you want to access.
If you want to extract all the ids from this JSON array, you can use
response.map({ $0["id"] })
which will give you another array containing only the ids.

Spring Data Neo4j corrupted json

I'm using Neo4j with Spring Data. When I store an object with a relation inside it, findAll returns corrupted JSON. I never get this error when I query the objects one at a time.
Stranger still, the first element in the list is correct but the second has the error, in the edge field. Any idea?
[{
    "uuid" : "e5c90af5-6259-4ddf-ae1f-c0cff5a41296",
    "name" : "test",
    "createdBy" : {
        "uuid" : "319535cc-288f-4a23-bc02-a3b01bf6e93f",
        "createdAt" : "2017-03-10T02:06:55.925+0000",
        "user" : {
            "uuid" : "9e91032e-a54d-4297-8a6a-1506589b7529",
        },
        "edge" : { "id" : 6514
        },
        "graphId" : 664
    }
},
{
    "uuid" : "e5c90af5-6259-4ddf-ae1f-c0cff5a41296",
    "name" : "test",
    "createdBy" : {
        "uuid" : "319535cc-288f-4a23-bc02-a3b01bf6e93f",
        "createdAt" : "2017-03-10T02:06:55.925+0000",
        "user" : {
            "uuid" : "9e91032e-a54d-4297-8a6a-1506589b7529",
        },
        "edge" : { : 6514
        },
        "graphId" : 664
    }
}]
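Since the first element serializes cleanly and only the repeated occurrence loses the field name inside edge, one thing worth trying (an assumption, not a confirmed diagnosis) is to give Jackson an explicit object identity, so that a node appearing a second time in the result list is written as a plain id rather than re-walked. The Edge class and its fields below are invented for illustration:
import com.fasterxml.jackson.annotation.JsonIdentityInfo;
import com.fasterxml.jackson.annotation.ObjectIdGenerators;

// Hypothetical entity: with @JsonIdentityInfo, the second time this object
// shows up in the serialized graph Jackson emits just its "id" value
// instead of serializing the nested object again.
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id")
public class Edge {
    private Long id;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
}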

Why isn't my PassBook valid, or is it outdated?

I generated a pass using this gem in Rails and it seems to work, but when I open the .pkpass file I see this message:
It's in Spanish, but basically it says that this card isn't valid anymore.
Here is my JSON:
{
    "formatVersion" : 1,
    "passTypeIdentifier" : "{MY PASS ID HERE}",
    "serialNumber" : "E5982H-I2",
    "teamIdentifier" : "{MY TEAM ID HERE}",
    "webServiceURL" : "https://example.com/passes/",
    "authenticationToken" : "vxwxd7J8AlNNFPS8k0a0FfUFtq0ewzFdc",
    "barcode" : {
        "message" : "123456789",
        "format" : "PKBarcodeFormatPDF417",
        "messageEncoding" : "iso-8859-1"
    },
    "locations" : [
        {
            "longitude" : -122.3748889,
            "latitude" : 37.6189722
        },
        {
            "longitude" : -122.03118,
            "latitude" : 37.33182
        }
    ],
    "organizationName" : "CROCANTICKETS SL",
    "description" : "Paw Planet Coupon",
    "logoText" : "Paw Planet",
    "foregroundColor" : "rgb(255, 255, 255)",
    "backgroundColor" : "#FF4B33",
    "coupon" : {
        "primaryFields" : [
            {
                "key" : "offer",
                "label" : "Any premium dog food",
                "value" : "20% off"
            }
        ],
        "auxiliaryFields" : [
            {
                "key" : "expires",
                "label" : "EXPIRES",
                "value" : "2016-04-24T10:00-05:00",
                "isRelative" : true,
                "dateStyle" : "PKDateStyleShort"
            }
        ]
    }
}
Any idea? Thanks!
According to the Expiration Keys section of the Passbook Package Format Reference, check the expirationDate and voided keys. Since you do not have those in your JSON, they might be added by the gem you are using.
