So I think I have done everything needed to serve images, yet I am unable to see an image, even though I have followed the instructions on this site precisely:
http://jwage.com/2010/07/27/storing-files-with-mongodb-gridfs/
When I do:
$dm = $this->get('doctrine.odm.mongodb.document_manager');
$image = $dm->createQueryBuilder('MyBundle:Image')
->field('id')->equals($imageID) //I have verified imageID to be correct
->getQuery()
->getSingleResult();
var_dump($image->getFile()->getBytes());
I get:
string(194992) ""
Which tells me that the image is there, but I am unable to get the bytes; the following comes back null:
echo $image->getFile()->getBytes();
Please note that I am able to see the image using RockMongo etc., so I know the image is stored properly, plus I can see it in the database stored as chunks. So what is wrong?
UPDATE: I have attached the var_dump of the file object.
object(Doctrine\MongoDB\GridFSFile)#427 (4) {
["mongoGridFSFile":"Doctrine\MongoDB\GridFSFile":private]=>
object(MongoGridFSFile)#430 (3) {
["file"]=>
array(7) {
["_id"]=>
object(MongoId)#431 (1) {
["$id"]=>
string(24) "503302b3c7e24c401a000004"
}
["name"]=>
string(6) "Image2"
["filename"]=>
string(14) "/tmp/phpBhZDAy"
["uploadDate"]=>
object(MongoDate)#432 (2) {
["sec"]=>
int(1345520307)
["usec"]=>
int(983000)
}
["length"]=>
float(194992)
["chunkSize"]=>
float(262144)
["md5"]=>
string(32) "5bbc9ede74f50f93a3f7d1f7babe3170"
}
["gridfs":protected]=>
object(MongoGridFS)#437 (5) {
["w"]=>
int(1)
["wtimeout"]=>
int(10000)
["chunks"]=>
object(MongoCollection)#438 (2) {
["w"]=>
int(1)
["wtimeout"]=>
int(10000)
}
["filesName":protected]=>
string(12) "assets.files"
["chunksName":protected]=>
string(13) "assets.chunks"
}
["flags"]=>
int(0)
}
["filename":"Doctrine\MongoDB\GridFSFile":private]=>
NULL
["bytes":"Doctrine\MongoDB\GridFSFile":private]=>
NULL
["isDirty":"Doctrine\MongoDB\GridFSFile":private]=>
bool(false)
}
null
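For reference, getBytes() returns raw binary data, so echoing it straight into an HTML page will not display an image; the bytes need to be sent as the whole response body with an image Content-Type header. A minimal sketch of that inside a Symfony controller action (assuming the standard HttpFoundation Response class; the image/jpeg MIME type is a placeholder you would replace with the type you actually stored):

use Symfony\Component\HttpFoundation\Response;

// Minimal sketch: send the GridFS bytes back as an image response.
// 'image/jpeg' is a placeholder; use the MIME type stored with the file.
$bytes = $image->getFile()->getBytes();

return new Response($bytes, 200, array(
    'Content-Type'   => 'image/jpeg',
    'Content-Length' => strlen($bytes),
));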
I'm using the following function_score query for outfits purchased:
{
    "query": {
        "function_score": {
            "field_value_factor": {
                "field": "purchased",
                "factor": 1.2,
                "modifier": "sqrt",
                "missing": 1
            }
        }
    }
}
However, when I run a search, I get the following error:
"type":"illegal_argument_exception","reason":"Fielddata is disabled on text fields by default. Set fielddata=true on [purchased] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
The syntax of the search is correct, as I've run it locally and it works perfectly. I'm now running it on my server and it's not working. Do I need to define purchased as an integer somewhere, or is this due to something else?
The purchased field is an analyzed string field, hence the error you see.
When indexing your documents, make sure that the numbers are not within double quotes, i.e.
Wrong:
{
    "purchased": "324"
}
Right:
{
    "purchased": 324
}
...or if you can't change the source documents (because you're not responsible for producing them), make sure that you create a mapping that defines the purchased field as being an integer field.
{
    "your_type": {
        "properties": {
            "purchased": {
                "type": "integer"
            }
        }
    }
}
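If you take the mapping route, note that the type of an existing field cannot be changed in place; you typically have to create a new index with the correct mapping and reindex your documents into it. A sketch of the body for such a create-index request, using the same placeholder index/type names as the mapping above:

{
    "mappings": {
        "your_type": {
            "properties": {
                "purchased": {
                    "type": "integer"
                }
            }
        }
    }
}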
I am new to Hyperledger. I am using Docker to run Hyperledger: I pulled hyperledger/fabric-peer:latest from Docker Hub and am able to run stub.CreateTable(), stub.GetRows(), stub.InsertRows() and some other functions in my chaincode. But when I try to run stub.GetHistoryKeys() or stub.GetCompositeKeys() etc. in my chaincode, it reports an error:
stub.GetHistoryForKey undefined (type shim.ChaincodeStubInterface has no field
or method GetHistoryForKey)
I found that there are no such functions in my interface.go file. I googled a lot but found nothing. Can anyone tell me the correct hyperledger/fabric-peer image to pull so that the above functions can run in chaincode?
Please download the latest version of the Fabric image (hyperledger/fabric-peer x86_64-1.1.0). It can be downloaded with the script mentioned on the official Hyperledger website under Install binaries: https://hyperledger-fabric.readthedocs.io/en/latest/install.html. Once you have it, create a Go chaincode: add one JSON record under a key, then change some field of the JSON and put it to the same key again. Once you have done that, run the code below to get the history:
func (s *SmartContract) getAllTransactionForid(APIstub shim.ChaincodeStubInterface, args []string) sc.Response {
    fmt.Println("getAllTransactionForNumber called")
    id := args[0]

    resultsIterator, err := APIstub.GetHistoryForKey(id)
    if err != nil {
        return shim.Error(err.Error())
    }
    defer resultsIterator.Close()

    // buffer is a JSON array containing the historic values for the key
    var buffer bytes.Buffer
    buffer.WriteString("[")

    bArrayMemberAlreadyWritten := false
    for resultsIterator.HasNext() {
        response, err := resultsIterator.Next()
        if err != nil {
            return shim.Error(err.Error())
        }
        // Add a comma before array members, suppress it for the first array member
        if bArrayMemberAlreadyWritten {
            buffer.WriteString(",")
        }
        buffer.WriteString("{\"TxId\":")
        buffer.WriteString("\"")
        buffer.WriteString(response.TxId)
        buffer.WriteString("\"")

        buffer.WriteString(", \"Value\":")
        // If it was a delete operation on the given key, set the corresponding
        // value to null. Otherwise, write response.Value as-is (the value
        // itself is JSON).
        if response.IsDelete {
            buffer.WriteString("null")
        } else {
            buffer.WriteString(string(response.Value))
        }

        buffer.WriteString(", \"Timestamp\":")
        buffer.WriteString("\"")
        buffer.WriteString(time.Unix(response.Timestamp.Seconds, int64(response.Timestamp.Nanos)).String())
        buffer.WriteString("\"")

        buffer.WriteString(", \"IsDelete\":")
        buffer.WriteString("\"")
        buffer.WriteString(strconv.FormatBool(response.IsDelete))
        buffer.WriteString("\"")

        buffer.WriteString("}")
        bArrayMemberAlreadyWritten = true
    }
    if !bArrayMemberAlreadyWritten {
        buffer.WriteString("No record found")
    }
    buffer.WriteString("]")

    fmt.Printf("- getAllTransactionForNumber returning:\n%s\n", buffer.String())

    return shim.Success(buffer.Bytes())
}
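For reference, the snippet above assumes imports along these lines (the exact shim and protobuf package paths depend on the Fabric version you vendor; these match the 1.1 layout):

import (
    "bytes"
    "fmt"
    "strconv"
    "time"

    "github.com/hyperledger/fabric/core/chaincode/shim"
    sc "github.com/hyperledger/fabric/protos/peer"
)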
If you're still in doubt, please reply and I will give you my whole source code to make it work. But I hope this will make your problem go away :-)
At last I was able to figure out how to get the Hyperledger images that support my chaincode.
A Twitter API request may get a tweet with an image, returning JSON that contains something like this (non-relevant bits removed):
["media"]=>
array(1) {
...
[0]=>
array(14) {
...
["media_url"]=>
string(46) "http://pbs.twimg.com/media/XXXXXXXXXXXXXXX.jpg"
...
["type"]=>
string(5) "photo"
["sizes"]=>
array(4) {
["medium"]=>
array(3) {
["w"]=>
int(1200)
["h"]=>
int(1200)
["resize"]=>
string(3) "fit"
}
["large"]=>
array(3) {
["w"]=>
int(2048)
["h"]=>
int(2048)
["resize"]=>
string(3) "fit"
}
...
}
...
}
}
How do you access the different sized versions of the image? I tried entering "http://pbs.twimg.com/media/XXXXXXXXXXXXXXX_medium.jpg" or "http://pbs.twimg.com/media/XXXXXXXXXXXXXXX_1200x1200.jpg" into the address bar (obviously with real images, as opposed to this example URL), but that doesn't come up with anything.
The syntax is slightly confusing, but is described in the Twitter documentation.
A size name (thumb, small, medium, large, or orig) needs to be appended to the end of the URL after a colon.
For example:
https://pbs.twimg.com/media/CqnMeKkW8AAYDsc.jpg:thumb
https://pbs.twimg.com/media/CqnMeKkW8AAYDsc.jpg:small
https://pbs.twimg.com/media/CqnMeKkW8AAYDsc.jpg:medium
https://pbs.twimg.com/media/CqnMeKkW8AAYDsc.jpg:large
https://pbs.twimg.com/media/CqnMeKkW8AAYDsc.jpg:orig (original size image, with metadata stripped out)
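In code, you just append the suffix to the media_url you already have in the decoded response. A rough PHP sketch (the entities path below is an assumption based on the dump in the question):

// Hypothetical example: build the size variants from a tweet's media_url.
$mediaUrl = $tweet['entities']['media'][0]['media_url']; // e.g. http://pbs.twimg.com/media/XXXXXXXXXXXXXXX.jpg
foreach (array('thumb', 'small', 'medium', 'large', 'orig') as $size) {
    echo $mediaUrl . ':' . $size . PHP_EOL;
}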
I run several experiments a day, storing each experiment's error and a boolean value (whether the result is OK) in Elasticsearch.
Now I would like to display the results in a graph (using Highcharts JS).
I use an aggregation query like this to receive the aggregated errors for each day including the standard deviation:
query: {
    filtered: {
        filter: {
            range: {
                date: {
                    "gte": "2015-1-1",
                    "lte": "2016-1-1",
                    "time_zone": "+1:00"
                }
            }
        }
    }
},
// Aggregate on the results
aggs: {
    group_by_date: {
        terms: {
            field: "date",
            order: { _term: "asc" }
        },
        aggs: {
            error_stats: {
                extended_stats: {
                    field: "error"
                }
            }
        }
    }
}
The problem I face is that I cannot retrieve the boolean values the same way as I get the double errors from the DB.
When I just change the field name to "ok" in
aggs: {
    error_stats: {
        extended_stats: {
            field: "ok"
        }
    }
}
I receive this error message:
ClassCastException[org.elasticsearch.index.fielddata.plain.PagedBytesIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.IndexNumericFieldData
However, it would be OK to aggregate all the boolean values using true as 1 and false as 0, and then to receive a mean value for each day.
Can anyone help me with this?
Thanks a lot!
First, a 0/1 representation is not exactly the ES boolean representation; there is a dedicated boolean type for true/false.
Second, a stats aggregation can only be done on a numeric field, not on a string field. That is why it worked for the 0/1 representation.
You can transform the value using a script in extended_stats:
{
    "aggs": {
        ...
        "aggs": {
            "grades_stats": {
                "extended_stats": {
                    "field": "grade",
                    "script": "_value == 'T' ? 1 : 0"
                }
            }
        }
    }
}
To see some example usage of scripting in aggregations, you can look here.
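Applied to the ok field and the per-day terms aggregation from the question, that would look roughly like this (a sketch; in older Elasticsearch versions the boolean field values come back to the script as the strings 'T' and 'F', hence the comparison):

{
    "aggs": {
        "group_by_date": {
            "terms": {
                "field": "date",
                "order": { "_term": "asc" }
            },
            "aggs": {
                "ok_stats": {
                    "extended_stats": {
                        "field": "ok",
                        "script": "_value == 'T' ? 1 : 0"
                    }
                }
            }
        }
    }
}

The avg in the returned ok_stats then gives you the per-day mean of the boolean values.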
I'm currently trying Neo4j 2.0.0 M3 and see some strange behaviour. In my unit tests everything works as expected (using a newImpermanentDatabase), but in the real thing I do not get results from graphDatabaseService.findNodesByLabelAndProperty.
Here is the code in question:
ResourceIterator<Node> iterator = graphDB
        .findNodesByLabelAndProperty(Labels.User, "EMAIL_ADDRESS", emailAddress)
        .iterator();
try {
    if (iterator.hasNext()) { // => returns false
        return iterator.next();
    }
} finally {
    iterator.close();
}
return null;
This returns no results. However, when running the following code, I see my node is there (the MATCH!!!!!!!!! is printed), and I also have an index set up via the schema (although, if I read the API correctly, this is not necessary, but it is important for performance):
ResourceIterator<Node> iterator1 = GlobalGraphOperations.at(graphDB).getAllNodesWithLabel(Labels.User).iterator();
while (iterator1.hasNext()) {
    Node result = iterator1.next();
    UserDao.printoutNode(emailAddress, result);
}
And UserDao.printoutNode:
public static void printoutNode(String emailAddress, Node next) {
    System.out.print(next);
    ResourceIterator<Label> iterator1 = next.getLabels().iterator();
    System.out.print("(");
    while (iterator1.hasNext()) {
        System.out.print(iterator1.next().name());
    }
    System.out.print("): ");
    for (String key : next.getPropertyKeys()) {
        System.out.print(key + ": " + next.getProperty(key).toString() + "; ");
        if (emailAddress.equals(next.getProperty(key).toString())) {
            System.out.print("MATCH!!!!!!!!!");
        }
    }
    System.out.println();
}
I already debugged through the code, and what I found out is that I pass via InternalAbstractGraphDatabase.map2Nodes to DelegatingIndexProxy.getDelegate and end up in the IndexReader.Empty class, which returns IteratorUtil.EMPTY_ITERATOR, thus getting false for iterator.hasNext().
Any ideas what I am doing wrong?
Found it:
I had only included neo4j-kernel:2.0.0-M03 in the classpath. The moment I added neo4j-cypher:2.0.0-M03, everything worked.
Hope this answer helps save some time for other users.
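For anyone pulling the jars with Maven, the missing dependency would look something like this (coordinates assumed from the artifact name; adjust the version to the milestone you are actually on):

<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-cypher</artifactId>
    <version>2.0.0-M03</version>
</dependency>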
#Neo4j: it would be nice if an exception were thrown instead of just returning nothing.
#Ricardo: I wanted to, but I was not allowed yet, as my reputation wasn't good enough as a new SO user.