I have a list of dicts that I'm trying to save to a file.
The dicts contain different value types: str, int, tuples, and a QFrame object.
Writing this with QDataStream's writeQStringList fails. Is that because of the mix of types in the dicts, or because it's a list of dicts? Either way, it comes back as an empty list (with the correct length, though).
I tested with just a list of strings, and that works fine.
How do I go about doing this?
Should I use writeRawData? How does that work?
Or do I need some custom workaround?
Any ideas?
best /Sam
I recently started checking out the new Java 8 features.
I've come across the forEach method, which iterates over a Collection.
Let's say I have an ArrayList<Integer> with the values {1, 2, 3, 4, 5}:
list.forEach(i -> System.out.println(i));
This statement iterates over the list and prints the values in it.
I'd like to know how to specify that it should iterate over some specific values only.
For example, I want it to start from the 2nd value and iterate until the 2nd-to-last value, or something like that, or over alternate elements.
How am I going to do that?
To iterate over a section of the original list, use the subList method:
list.subList(1, list.size() - 1)
    .stream() // This line is optional, since List already has a forEach method that takes a Consumer
    .forEach(...);
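For the sample list from the question, a quick end-to-end sketch (the class name is just for illustration) looks like this:

import java.util.Arrays;
import java.util.List;

public class SubListExample {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);

        // Skip the first and last element: prints 2, 3 and 4
        list.subList(1, list.size() - 1)
            .forEach(System.out::println);
    }
}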
This is the concept of streams. After one operation, the results of that operation become the input for the next.
So for your specific example, you can follow Joni's suggestion. But if you're asking in general, you can create a filter to only get the values you want to loop over.
For example, if you only wanted to print the even numbers, you could add a filter to the stream before the forEach, like this:
List<Integer> intList = Arrays.asList(1,2,3,4,5);
intList.stream()
       .filter(e -> (e & 1) == 0)
       .forEach(System.out::println);
You can similarly pick out the stuff you want to loop over before reaching your terminal operation (in your case the forEach) on the stream. I suggest you read this stream tutorial to get a better idea of how they work: http://winterbe.com/posts/2014/07/31/java8-stream-tutorial-examples/
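Since the question also mentions alternate elements, one possible approach (just a sketch) is to stream over the indices and filter on those:

import java.util.Arrays;
import java.util.List;
import java.util.stream.IntStream;

public class AlternateElements {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);

        // Keep every other index (0, 2, 4) and look up the element: prints 1, 3, 5
        IntStream.range(0, list.size())
                 .filter(i -> i % 2 == 0)
                 .mapToObj(list::get)
                 .forEach(System.out::println);
    }
}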
I want to redefine the order of a list of tuples by looking for specific words.
For example, I have a list of tuples like this:
[{"a",["r001"]},
{"bi",["bidder"]},
{"bo",["an"]}]
But sometimes the order of the tuples can change, for example:
[{"bi",["bidder"]},
{"a",["r001"]},
{"bo",["an"]}]
or
[{"bo",["an"]},
{"a",["r001"]},
{"bi",["bidder"]}]
The first element of each tuple is my unique key ("bo", "a", "bi").
But I want to be able to reorder the list of tuples so it always looks like this:
[{"a",["r001"]},
{"bi",["bidder"]},
{"bo",["an"]}]
How can I achieve this?
This will do it:
lists:sort(fun({A,_},{B,_}) -> A =< B end, List).
Or this, which will sort by the tuple's second element after the first:
lists:sort(List).
I offer the second version because, without the custom sort function, it is faster for data like this.
If you just need to sort by a specific element of the tuple, use keysort on that element:
lists:keysort(1, List).
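With the sample data from the question, a quick shell session (just to illustrate) looks like this:

1> List = [{"bo", ["an"]}, {"a", ["r001"]}, {"bi", ["bidder"]}].
[{"bo",["an"]},{"a",["r001"]},{"bi",["bidder"]}]
2> lists:keysort(1, List).
[{"a",["r001"]},{"bi",["bidder"]},{"bo",["an"]}]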
I need to use a for loop to create a 2d array. So far "+=" and .append have not yielded any results. Here is my code. Please excuse the rushed variable naming.
let firstThing = contentsOfFile!.componentsSeparatedByString("\n")
var secondThing: [AnyObject] = []
for i in firstThing {
let temp = i.componentsSeparatedByString("\"")
secondThing.append(temp)
}
The idea is that it takes the contents of a CSV file, then separates the individual lines. It then tries to separate each of the lines by quotation marks. This is where the problem arises. I am successfully making the quotation-separated array (stored in temp); however, I cannot collect these into one array (i.e. a 2d array) using the for loop. The above code generates an error. Does anybody have the answer for how to construct this 2d array?
You can do this using higher-order functions...
let twoDimensionalArray = contentsOfFile!.componentsSeparatedByString("\n").map {
    $0.componentsSeparatedByString("\"")
}
The map function takes an array of items and maps each one to another type. In this case I'm mapping each string from the first array into an array of strings, and so creating a 2d array.
The type of the resulting array is inferred, so there's no need to write [[String]] explicitly.
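For completeness, the original loop also works if the result is declared with a concrete nested type instead of [AnyObject]; a small sketch, assuming the same Swift 2 / Foundation string API as the question:

var secondThing: [[String]] = []
for line in contentsOfFile!.componentsSeparatedByString("\n") {
    // Each line becomes its own [String] row in the 2d array
    secondThing.append(line.componentsSeparatedByString("\""))
}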
This comes up when you parse a list of strings and want to split each one in two, afterwards building a hash map.
Say we have a list of strings, each one with an ID on the first line and data on the rest:
("#ID
data
More data",
"#another ID
Some more data still")
Now suppose that we use the following method that returns a nested structure:
(map #(clojure.string/split % #"\n" 2) data)
Now if we want to put this into a hash map, it first has to be flattened and then have apply hash-map called on it. Is there a way to skip the flatten step by having some flat-map return a non-nested structure?
You can use into:
(into {} (map #(clojure.string/split % #"\n" 2) data))
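With the sample data from the question (keeping the limit of 2 so each split yields exactly one key/value pair), the result looks roughly like this:

(def data ["#ID\ndata\nMore data"
           "#another ID\nSome more data still"])

(into {} (map #(clojure.string/split % #"\n" 2) data))
;; => {"#ID" "data\nMore data", "#another ID" "Some more data still"}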
I'm using Riak to store JSON documents right now, and I want to sort them based on some attribute. Let's say there's a key, i.e.
{
"someAttribute": "whatever",
"order": 1
}
So I want to sort the documents based on the "order" value.
I am currently retrieving the documents from Riak with the Erlang interface. I can get a document back as a string, but I don't really know what to do after that. I'm thinking the map function just emits the JSON document itself, and in the reduce function I'd check whether the item I'm looking at has a higher "order" than the head of the rest of the list, and if so append it to the beginning, and then return lists:reverse.
Despite my ideas above, I've had zero results after almost an entire day; I'm so confused by the Erlang interface in Riak. Can someone provide insight on how to write this map/reduce function, or just how to parse the JSON document?
As far as I know, you do not have access to the full input list in the map phase. A map function emits each document as a 1-element list.
Inputs (all the docs to handle, as {Bucket, Key}) -> Map (handles a single doc) -> Reduce (handles the whole list emitted from Map).
Map is executed once per doc, on many nodes, whereas Reduce is done once on the so-called coordinator node (the one where the query was called).
Solution:
Define Inputs (as a list or bucket)
Retrieve the value in Map and emit the whole doc or {Id, Val_to_sort_by}
Sort in Reduce (using a regular lists:keysort); see the sketch below
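A rough Erlang sketch of those two phases (untested; it assumes the stored values are JSON that mochijson2, which ships with Riak, can decode):

%% Map phase: called once per object; emit a 1-element list of
%% {Order, Fields} pairs so the reduce phase has something to sort by.
map_order(RiakObject, _KeyData, _Arg) ->
    Body = riak_object:get_value(RiakObject),
    {struct, Fields} = mochijson2:decode(Body),
    Order = proplists:get_value(<<"order">>, Fields),
    [{Order, Fields}].

%% Reduce phase: receives the concatenated output of every map call
%% and sorts it by the first tuple element (the "order" value).
reduce_sort(Values, _Arg) ->
    lists:keysort(1, Values).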
This is not a map/reduce solution, but you should check out Riak Search.
so i "solved" the problem using javascript, still can't do it using erlang.
here is my query
{"inputs":"test",
"query":[{"map":{"language":"javascript",
"source":"function(value, keyData, arg){ var data = Riak.mapValuesJson(value)[0]; var obj = {}; obj[data.order] = data; return [ obj ];}"}},
{"reduce":{"language":"javascript",
"source":"function(values, arg){ return [ values.reduce(function(acc, item){ for(var order in item){ acc[order] = item[order]; } return acc; }) ];}",
"keep":true}}
]
}
So in the map phase, all I do is create a new object, obj, with the order as the key and the data itself as the value. So visually, obj looks like this:
{"1": {"firstName": "John", "order": 1}}
In the reduce phase, I'm just putting everything into the accumulator, so basically that's the sort if you think about it, because when you're done everything will be put in order for you. I put in 2 JSON documents for testing: one is above, the other just has firstName: Billie, order: 2. And here is my result for the query above:
[{"1":{"firstName":"John","order":1},"2":{"firstName":"Billie","order":2}}]
So it works! But I still need to do this in Erlang. Any insights?