I'm building a neural network and need to test it on a modified CIFAR-10. I used keras.datasets.cifar10.load_data() to retrieve the dataset and then parsed it into a dict using pickle.load(datafile, encoding='bytes'). After some modifications I wrote the images back in a Keras-like format using pickle.dump().
I noticed that the resulting file after pickle.dump() is 53 bytes bigger than the source file. Even if I don't make any modifications and call dump() right after load(), the resulting file has 53 extra bytes. The structure of the resulting file doesn't seem to be violated, because I'm able to restore the images, labels and filenames from it and they are correct. But when I train and test the neural network (even the simplest NN from the example!) I get a very bad score (~0.5).
Please help me figure out how the load-dump cycle affects the NN's results if the structure in general doesn't change.
How can I load and dump so that the structure and the size of the files stay unchanged? How can I avoid the inconsistency of the load-dump operation?
P.S. It looks like dump() writes some header to the file and doesn't write it again if the header already exists (I've tried applying load-dump twice, but the size changed only on the first pass). How can I avoid writing this header?
Please help me figure out how the load-dump cycle affects the NN's results if the structure in general doesn't change.
If the structure does not change, it has no effect on the network. You can simply dump with human-readable output and compare the files. First, read about pickle protocols (see the pickle documentation).
How can I load and dump so that the structure and the size of the files stay unchanged? How can I avoid the inconsistency of the load-dump operation?
There is no real inconsistency; it's probably just using another pickle protocol or attaching a header. You should not have to deal with these internals. If you want to make your data human-readable or put it under version control, you can use JSON or another human-readable format (e.g. simplejson). A minimal check of the protocol difference is sketched below.
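As a minimal sketch of that check, assuming the size difference comes from the pickle protocol (the file names are placeholders and the source's protocol is an assumption you may need to adjust):

    import pickle

    # Hypothetical file names; substitute your own paths.
    SRC = "cifar_batch.pkl"
    DST = "cifar_batch_redumped.pkl"

    with open(SRC, "rb") as f:
        data = pickle.load(f, encoding="bytes")

    # pickle.dump() defaults to a newer protocol than the source file may
    # have been written with; a different protocol changes the on-disk
    # framing (and therefore the size) without changing the unpickled content.
    with open(DST, "wb") as f:
        pickle.dump(data, f, protocol=2)  # try matching the source's protocol

    # Round-trip check: reload and confirm the keys are still intact.
    with open(DST, "rb") as f:
        data2 = pickle.load(f, encoding="bytes")
    print(sorted(data.keys()) == sorted(data2.keys()))

If the re-dumped object round-trips correctly, the extra bytes are just protocol framing and should not by themselves affect the network's score.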
I'm trying to make a game on Scratch that generates a special code, and when that code is entered in a certain area it loads the stats that existed when the code was generated. I've run into a problem, however: I don't know how to build this, and I couldn't find a clear-cut answer.
I would prefer that the solution be:
Able to save information for as long as needed (from 1 second to however long it takes until it's input again).
Doesn't take too many blocks to make, so the project won't take forever to load.
Of course I'm willing to accept any solution to get my game up and running; those are just preferences.
You can put all of the programs in a custom block with "Run without screen refresh" on so that the program runs instantly.
If you save the stats using variables, you could combine those variable values into one string separated by "/"s, i.e. join ([highscore]) (join ("/") (join ([kills]) ("/"))).
NOTE: Don't put any "/" in your stats; you can probably guess why.
Now "bear" (pun intended) with me; this is going to take a while to read.
Then you need the variables:
[read] for reading the inputted code
[input] for storing the numbers
Then you could make another function that reads the code like so: letter ([read]) of (code), and stores that information in the [input] variable like this: set [input] to (letter ([read]) of (code)). Then change [read] by (1) so the function can read the next character of the code. Once letter ([read]) of (code) equals "/", this tells the program to set [*stat variable*] to (input) (in our example this would be [highscore], since it was the first variable we saved) and set [input] to (0), and repeat until all of the stat variables are filled (in this case it repeats 2 times, because we saved two variables: [highscore] and [kills]).
This is the least amount of code that it takes. Jumbling it up takes more code. I will later edit this answer with a screenshot showcasing whatever I just said before, hopefully clearing up the mess of words above.
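Scratch has no text syntax to paste here, but the encode/decode logic above maps directly onto ordinary string operations. A rough sketch of the same idea in Python (the variable names are just stand-ins for your Scratch variables):

    # Hypothetical stats; in Scratch these would be your variables.
    highscore = 1200
    kills = 37

    # Encode: join the values with "/" separators; this is the save code.
    code = "/".join(str(v) for v in (highscore, kills))
    print(code)  # "1200/37"

    # Decode: walk the string character by character, just like the
    # letter-by-letter loop described above, splitting on "/".
    values = []
    current = ""
    for ch in code + "/":          # the trailing "/" flushes the last value
        if ch == "/":
            values.append(current)
            current = ""
        else:
            current = current + ch
    highscore, kills = int(values[0]), int(values[1])
    print(highscore, kills)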
The technique you mentioned is used in many Scratch games, but there are two options when making the save/load system. You can either do it the simpler way, which makes the code SUPER long (not joking), or do what most Scratchers do: encode the data into a string that is as short as possible, so it's easy to transfer.
If you want to go the second way, have a look at griffpatch's video on the Mario platformer remake, where he used an encoding system to save levels: https://www.youtube.com/watch?v=IRtlrBnX-dY. The tip is to encode your data (maybe score/item names/progress) into numbers and letters, for example converting repeated letters to a shorter string that the game can still decode and read without errors.
If you are worried that it will take too long to load, I am pretty sure it won't be a problem unless you save a really large amount of data. The common compression method everyone uses works pretty well. If you want to store more data, you may have to think of some other method; there is no single best approach, since different data call for different encodings. Good luck.
I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before running via -dbc.filter, but I would like to look at the original data records, not the normalized ones, in the output.
It seems that there is some flag called -normUndo, which can be set if using the command-line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but has effectively been removed (for now), for several reasons:
Only a few normalizations ever supported this; most would fail.
There is no longer a well-defined "end" to the visualization. Some users will want to visualize the normalized data, others will not.
It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
Due to the numerical imprecision of floating-point math, you would frequently not get back exactly the same values you put in.
Keeping the original data in memory may be too expensive for some use cases, so we would need another parameter, "keep non-normalized data"; furthermore, you would need to choose which data (normalized or non-normalized) to use for analysis and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (It would be easy to do from Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest way is this: add a (non-numerical) label column, and you can then identify the original objects in your original data by this label.
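For example (made-up values), an input file could look like this; with ELKI's default parser, the trailing non-numeric token is typically treated as a label, so it survives normalization and lets you map each result row back to its original record:

    0.23 1.45 7.01 record-001
    0.25 1.39 6.88 record-002
    0.91 2.10 5.42 record-003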
I run Caffe using an image_data_layer and don't want to create an LMDB or LevelDB for the data, but the compute_image_mean tool only works with LMDB/LevelDB databases.
Is there a simple solution for creating a mean file from a list of files (the same format that image_data_layer is using)?
You may notice that recent models (e.g., GoogLeNet) do not use a mean file the same size as the input image, but rather a 3-vector representing the mean value per image channel. These values are quite "immune" to the specific dataset used (as long as it is large enough and contains "natural images").
So, as long as you are working with natural images, you may use the same values as, e.g., GoogLeNet: B=104, G=117, R=123.
The simplest solution is to create an LMDB or LevelDB database of the image set.
The more complicated solution is to write a tool similar to compute_image_mean that takes image inputs, does the transformations, and finds the mean.
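A minimal sketch of that second route, assuming a listing file in the same "image_path label" format that image_data_layer consumes (the file name is a placeholder) and using numpy/PIL rather than Caffe's own tooling:

    import numpy as np
    from PIL import Image

    LIST_FILE = "train.txt"   # hypothetical: one "image_path label" per line

    channel_sum = np.zeros(3, dtype=np.float64)
    pixel_count = 0

    with open(LIST_FILE) as f:
        for line in f:
            path = line.split()[0]
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
            channel_sum += img.reshape(-1, 3).sum(axis=0)
            pixel_count += img.shape[0] * img.shape[1]

    mean_rgb = channel_sum / pixel_count
    # Caffe expects BGR channel order, so flip before using these values.
    print("mean BGR:", mean_rgb[::-1])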
I'm a new user of Hadoop and MapReduce, and I would like to create a MapReduce job to do some measurements on images. That's why I would like to know whether I can pass an image as input to MapReduce, and if yes, is there any kind of example?
Thanks.
No, you cannot pass an image directly to a MapReduce job, as it uses specific datatypes optimized for network serialization. I am not an image-processing expert, but I would recommend having a look at the HIPI framework. It allows image processing on top of the MapReduce framework in a convenient manner.
Or, if you really want to do it the native Hadoop way, you could do this by first converting the image files into a Hadoop SequenceFile and then using SequenceFileInputFormat to process the file.
Yes, you can totally do this.
With the limited information provided, I can only give you a very general answer.
Either way, you'll need to:
1) You will need to write a custom InputFormat that, instead of taking chunks of files at HDFS locations (like TextInputFormat and SequenceFileInputFormat do), passes each map task the image's HDFS path name. Reading the image from that won't be too hard.
If you plan to have a Reduce phase in which images are passed around through the framework, you'll need to:
2) You will need to make an "ImageWritable" class that implements Writable (or WritableComparable if you're keying on the image). In your write() method, you'll need to serialize your image to a byte array. When you do this, first write to the output an int/long which is the size of the array you're going to write, and then write the array as bytes.
In your readFields() method, you'll read the int/long first (which describes the size of the image payload), create a byte array of that size, and then read the bytes fully into your byte array, up to the length you captured.
I'm not entirely sure what you're doing, but that's how I'd go about it.
I'm trying to do the following in Hadoop MapReduce (written in Java, on a Linux kernel OS).
Text files 'rules-1' and 'rules-2' (3 GB in total) contain rules; each rule is separated by a newline character, so the files can be read using the readLine() function.
These files, 'rules-1' and 'rules-2', need to be imported as a whole from HDFS in every map function in my cluster, i.e. these files are not splittable across different map functions.
The input to the mapper's map function is a text file called 'record' (each line is terminated by a newline character), so from the 'record' file we get the (key, value) pairs. This file is splittable and can be given as input to the different map functions used in the whole MapReduce process.
What needs to be done is to compare each value (i.e. each line from the record file) with the rules inside 'rules-1' and 'rules-2'.
The problem is, if I pull each line of the rules-1 and rules-2 files into a static ArrayList only once, so that each mapper can share the same ArrayList and compare its elements with each input value from the record file, I get a memory overflow error, since 3 GB cannot be stored in the ArrayList at one time.
Alternatively, if I import only a few lines from the rules-1 and rules-2 files at a time and compare them with each value, MapReduce takes a long time to finish its job.
Could you provide any alternative ideas for how this can be done without the memory overflow error? Will it help if I put those rules files inside an HDFS-backed database or something? I'm running out of ideas, actually. I would really appreciate your valuable suggestions.
If your input files are small, you can load them into static variables and use the rules as the input.
If that is not the case, I can suggest the following approaches:
a) Give rules-1 and rules-2 a high replication factor, close to the number of nodes you have. Then you can read rules-1 and rules-2 from HDFS for each record in the input relatively efficiently, because it will be a sequential read from the local datanode.
b) If you can find some hash function which, when applied to a rule and to an input string, predicts without false negatives whether they can match, then you can emit this hash for both rules and input records and resolve all possible matches in the reducer. This is very similar to the way a join is done with MapReduce (see the sketch after this list).
c) I would also consider other optimization techniques, like building search trees or sorting, since otherwise the problem looks computationally expensive and will take forever...
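A plain-Python sketch of idea (b), not an actual MapReduce job; hash_key() below is a made-up stand-in for whatever match-preserving hash you choose, and full_match() for your real rule check:

    from collections import defaultdict

    def hash_key(text):
        # Stand-in for a hash with no false negatives: here, the first token.
        # Any rule and record that could match must share this key.
        tokens = text.split()
        return tokens[0] if tokens else ""

    def full_match(rule, record):
        # Placeholder for the actual (expensive) rule-matching logic.
        return rule.strip() in record

    def mapreduce_style_join(rules, records):
        # "Map" phase: bucket rules and records by their hash key, the way
        # an MR job would by emitting (key, tagged value) pairs.
        buckets = defaultdict(lambda: ([], []))
        for rule in rules:
            buckets[hash_key(rule)][0].append(rule)
        for rec in records:
            buckets[hash_key(rec)][1].append(rec)

        # "Reduce" phase: only co-bucketed pairs need the full comparison.
        for key, (rs, recs) in buckets.items():
            for rec in recs:
                for rule in rs:
                    if full_match(rule, rec):
                        yield rec, rule

    # Usage (hypothetical files):
    # matches = list(mapreduce_style_join(open("rules-1"), open("record")))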
On this page, find "Real-World Cluster Configurations"; it covers file size configuration.
You could use the parameter "mapred.child.java.opts" in conf/mapred-site.xml to increase the memory available to your mappers. You might not be able to run as many map slots per server, but with more servers in your cluster you could still parallelize your job.
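For example (the 2 GB heap below is just an illustrative value, not a recommendation):

    <!-- conf/mapred-site.xml -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx2048m</value>
    </property>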
Read the content text file through the MapReduce input, and read the keyword (rules) text file inside your mapper function (reading it from HDFS). Split the input using a StringTokenizer on value.toString(). In your mapper, the HDFS-reading code will read the rules file line by line, so use two while loops and do the comparison there. Whenever you want to pass data on, send it to the reducer.
Split the 3 GB text file into several text files and apply your previous MapReduce program to all of those text files as usual.
For splitting the text file I wrote a Java program, and you decide how many lines you want to write to each text file.
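That Java program isn't shown here; as an illustration of the same idea, here is a small Python sketch that splits a large text file into chunks of a chosen number of lines (the file name and chunk size are placeholders):

    LINES_PER_CHUNK = 100000          # choose how many lines per output file
    SRC = "rules-1"                   # hypothetical input file

    chunk_index = 0
    out = None
    with open(SRC) as src:
        for line_no, line in enumerate(src):
            if line_no % LINES_PER_CHUNK == 0:
                if out:
                    out.close()
                out = open("%s.part-%05d" % (SRC, chunk_index), "w")
                chunk_index += 1
            out.write(line)
    if out:
        out.close()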