I'm a new user of Hadoop and MapReduce, and I would like to create a MapReduce job to compute some measures on images. That's why I would like to know whether I can pass an image as input to MapReduce. If yes, is there any kind of example?
Thanks
No, you cannot pass an image directly to a MapReduce job, because it uses specific data types optimized for network serialization. I am not an image processing expert, but I would recommend having a look at the HIPI framework. It allows image processing on top of the MapReduce framework in a convenient manner.
Or, if you really want to do it the native Hadoop way, you could first convert the image file into a Hadoop SequenceFile and then use SequenceFileInputFormat to process it.
Yes, you can totally do this.
With the limited information provided, I can only give you a very general answer.
Either way, you'll need to:
1) You will need to write a custom InputFormat that, instead of handing out chunks of files at HDFS locations (as TextInputFormat and SequenceFileInputFormat do), passes each map task the image's HDFS path name. Reading the image from that won't be too hard.
If you plan to have a Reduce phase in which Images are passed around through the framework, you'll need to:
2) You will need to make an "ImageWritable" class that implements Writable (or WritableComparable if you're keying on the image). In your write() method, serialize your image to a byte array: first write an int/long giving the size of the array you're about to write, then write the array itself as bytes.
In your readFields() method, read that int/long first (it describes the size of the image payload), create a byte array of that size, and then read exactly that many bytes fully into the array. A minimal sketch of this length-prefixed scheme follows.
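In a real job this would be a Java Writable, but the length-prefix idea itself is easy to show; here is a minimal sketch of it in Python (the in-memory stream and the fake image bytes are placeholders), purely to illustrate the framing described above.

import io
import struct

def write_image(out, image_bytes):
    # Length-prefix the payload: 4-byte big-endian size, then the raw bytes
    # (the Java equivalent would be DataOutput.writeInt followed by write).
    out.write(struct.pack(">i", len(image_bytes)))
    out.write(image_bytes)

def read_image(inp):
    # Read the 4-byte size first, then exactly that many payload bytes.
    (size,) = struct.unpack(">i", inp.read(4))
    data = inp.read(size)
    assert len(data) == size, "truncated image payload"
    return data

# Round-trip demo with a fake payload standing in for real image bytes.
buf = io.BytesIO()
write_image(buf, b"\x89PNG...fake image bytes...")
buf.seek(0)
print(len(read_image(buf)))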
I'm not entirely sure what you're doing, but that's how I'd go about it.
I want to create a 3D dask array from data that I have that is already chunked. My data consists of 216 blocks of 1024x1024x1024 uint8 voxels each, each stored as a compressed HDF5 file with one key called data. Compressed, my data is only a few megabytes per block, but decompressed, it takes 1 GB per block. Furthermore, my data is currently stored in Google Cloud Storage (GCS), although I could potentially mirror it locally inside a container.
I thought the easiest way would be to use zarr, following these instructions (https://pangeo.io/data.html). Would xarray have to decompress my data before saving to zarr format? Would it have to shuffle data and try to communicate across blocks? Is there a lower-level way of assembling a zarr array from HDF5 blocks?
There are a few questions there, so I will try to be brief and hope that some edits can flesh out details I may have omitted.
You do not need to do anything in order to view your data as a single dask array, since you can reference the individual chunks as arrays (see here) and then use the stack/concatenate functions to build them up into a single array. That does mean opening every file in the client in order to read the metadata, though.
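For example, assuming the 216 blocks form a 6x6x6 grid and each file exposes its voxels under the key data (the file-name pattern below is made up), something along these lines should give you one lazy dask array; da.block is just nested concatenation under the hood.

import h5py
import dask.array as da

# Hypothetical naming scheme: one compressed HDF5 file per block of the 6x6x6 grid.
def load_block(i, j, k):
    f = h5py.File(f"block_{i}_{j}_{k}.h5", "r")  # kept open; dask reads from it lazily
    return da.from_array(f["data"], chunks=(1024, 1024, 1024))

# Nested list-of-lists-of-lists -> da.block assembles the full volume.
grid = [[[load_block(i, j, k) for k in range(6)]
         for j in range(6)]
        for i in range(6)]
volume = da.block(grid)

print(volume.shape, volume.dtype)  # (6144, 6144, 6144) uint8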
Similarly, xarray has some functions for reading sets of files, where you should be able to assume consistency of dtype and dimensionality - please see their docs.
As far as zarr is concerned, you could use dask to create the set of files for you on GCS (or not), and choose the same chunking scheme as the input - then there will be no shuffling. Since zarr is very simple to set up and understand, you could even create the zarr dataset yourself and write the chunks one by one, without having to build the dask array up front. That would normally be via the zarr API, and writing a chunk of data does not require any change to the metadata file, so it can be done in parallel. In theory, you could simply copy a block in, if you understood the low-level data representation (e.g., int64 in C-array layout); however, I don't know how likely it is that the exact same compression mechanism will be available in both the original HDF5 and zarr (see here).
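As a rough sketch of the "write the chunks one by one" route (local paths and file names here are assumptions; for GCS you would pass a gcsfs mapping as the store instead of a local path):

import h5py
import zarr

# Create the zarr array once, with the same chunking as the input blocks.
z = zarr.open(
    "volume.zarr",
    mode="w",
    shape=(6144, 6144, 6144),
    chunks=(1024, 1024, 1024),
    dtype="uint8",
)

# Each assignment below lands in exactly one chunk and touches no metadata,
# so these writes could also be run in parallel (e.g. as dask.delayed tasks).
for i in range(6):
    for j in range(6):
        for k in range(6):
            with h5py.File(f"block_{i}_{j}_{k}.h5", "r") as f:  # hypothetical file names
                z[i * 1024:(i + 1) * 1024,
                  j * 1024:(j + 1) * 1024,
                  k * 1024:(k + 1) * 1024] = f["data"][:]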
I'm building a neural network and need to test it on a modified CIFAR-10. I used keras.datasets.cifar10.load_data() to retrieve the dataset and then parsed it into a dict using pickle.load(datafile, encoding='bytes'). After some modifications I wrote the images back in the Keras-like format using pickle.dump().
I noticed that the resulting file after pickle.dump() is 53 bytes bigger than the source file. Even if I don't make any modifications and call dump() right after load(), the resulting file has 53 extra bytes. The structure of the resulting file doesn't seem to be violated, because I'm able to restore the images, labels, and filenames from it and they are correct. But when I train and test the neural network (even the simplest NN from the example!) I get a very bad score (~0.5).
Please help me figure out how the load-dump cycle affects the NN's results if the structure in general doesn't change.
How can I load and dump while leaving the structure and size of the files unchanged? How can I avoid this inconsistency in the load-dump operation?
P.S. It looks like dump() writes some header to the file and doesn't write it if the header already exists (I tried applying load-dump twice, but the size changed only on the first application). How can I avoid writing this header?
Please help me figure out how the load-dump cycle affects the NN's results if the structure in general doesn't change.
If the structure does not change, it has no effect on the network. You can simply dump with human-readable output and compare the files. First, read about pickle protocols here.
How can I load and dump while leaving the structure and size of the files unchanged? How can I avoid this inconsistency in the load-dump operation?
There is no real inconsistency; it is probably just using another protocol or attaching headers. You should not have to deal with these internals. If you want to make your data human-readable or put it under version control, you can use JSON or another human-readable format (e.g., simplejson).
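A quick way to convince yourself that only the serialization differs, not the data, is to round-trip the same object under different protocols and compare sizes and contents. A minimal sketch (the dict here is a stand-in for your dataset):

import pickle
import numpy as np

data = {b"data": np.arange(12, dtype=np.uint8).reshape(3, 4), b"labels": [1, 2, 3]}

for proto in (0, 2, pickle.HIGHEST_PROTOCOL):
    blob = pickle.dumps(data, protocol=proto)       # different protocols -> different byte sizes
    restored = pickle.loads(blob)                   # ...but identical contents after loading
    same = (restored[b"labels"] == data[b"labels"]
            and np.array_equal(restored[b"data"], data[b"data"]))
    print(f"protocol={proto}: {len(blob)} bytes, contents identical: {same}")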
I have input data stored as a single large file on S3.
I want Dask to chop the file automatically, distribute it to workers, and manage the data flow. Hence the idea of using a distributed collection, e.g. a bag.
On each worker I have a command-line tool (Java) that reads the data from file(s). Therefore I'd like to write a whole chunk of data to a file, call the external CLI/code to process it, and then read the results from an output file. This looks like processing batches of data instead of record-at-a-time.
What would be the best approach to this problem? Is it possible to write a partition to disk on a worker and process it as a whole?
P.S. It is not necessary, but desirable, to stay within a distributed-collection model, because other operations on the data might be simpler Python functions that process data record by record.
You probably want the read_bytes function. This breaks the file into many chunks cleanly split by a delimiter (like a newline). It gives you back a list of dask.delayed objects that point to those blocks of bytes.
There is more information on this documentation page: http://dask.pydata.org/en/latest/bytes.html
Here is an example from the docstring:
>>> sample, blocks = read_bytes('s3://bucket/2015-*-*.csv', delimiter=b'\n')
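Building on that, here is a hedged sketch of the pattern you describe: write each block of bytes to a temporary file on the worker, run the external tool on it with subprocess, and read back its output. The tool name and its arguments are placeholders, and depending on your dask version, blocks may be a list of delayed blocks per input file, hence the flattening step.

import subprocess
import tempfile

import dask
from dask.bytes import read_bytes

def process_block(block_bytes):
    # Write one partition to a local file, run the external CLI on it, return its stdout.
    with tempfile.NamedTemporaryFile(suffix=".csv", delete=False) as tmp:
        tmp.write(block_bytes)
        path = tmp.name
    # "my-java-tool.jar" is a stand-in for your actual command line.
    result = subprocess.run(["java", "-jar", "my-java-tool.jar", path],
                            capture_output=True, check=True)
    return result.stdout

sample, blocks = read_bytes("s3://bucket/big-file.csv", delimiter=b"\n")
flat = [b for file_blocks in blocks for b in file_blocks]   # one inner list per file
outputs = [dask.delayed(process_block)(b) for b in flat]    # chain lazily on the delayed bytes
results = dask.compute(*outputs)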
I run Caffe using an image_data_layer and don't want to create an LMDB or LevelDB for the data, but the compute_image_mean tool only works with LMDB/LevelDB databases.
Is there a simple solution for creating a mean file from a list of files (the same format that image_data_layer is using)?
You may notice that recent models (e.g., GoogLeNet) do not use a mean file the same size as the input image, but rather a 3-vector representing the mean value per image channel. These values are quite "immune" to the specific dataset used (as long as it is large enough and contains "natural images").
So, as long as you are working with natural images, you may use the same values that, e.g., GoogLeNet uses: B=104, G=117, R=123.
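For instance, with an image loaded as a numpy array in BGR channel order (the Caffe-style layout is assumed here), the preprocessing is just a broadcast subtraction:

import numpy as np

# Per-channel means in BGR order, as used by GoogLeNet-style models.
mean_bgr = np.array([104.0, 117.0, 123.0], dtype=np.float32)

def subtract_mean(image_bgr):
    # image_bgr: H x W x 3 array in BGR channel order.
    return image_bgr.astype(np.float32) - mean_bgr  # broadcast over H and W

# Example with a dummy image.
img = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
print(subtract_mean(img).mean(axis=(0, 1)))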
The simplest solution is to create an LMDB or LevelDB database of the image set.
The more involved solution is to write a tool similar to compute_image_mean that takes image inputs, applies the transformations, and computes the mean, as sketched below.
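A minimal sketch of such a tool in Python (it assumes the image_data_layer-style list format of "path label" per line, and it computes a per-channel mean rather than a full mean image; the list file name is hypothetical):

import numpy as np
from PIL import Image

def compute_channel_mean(list_file):
    # Average each channel over all images named in an image_data_layer-style list file.
    total = np.zeros(3, dtype=np.float64)
    count = 0
    with open(list_file) as fh:
        for line in fh:
            path = line.split()[0]  # "path label" per line
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
            total += img.reshape(-1, 3).sum(axis=0)
            count += img.shape[0] * img.shape[1]
    mean_rgb = total / count
    return mean_rgb[::-1]  # flip to BGR to match the Caffe convention

print(compute_channel_mean("train_list.txt"))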
I'm trying to do the following in Hadoop MapReduce (written in Java, on Linux):
Text files 'rules-1' and 'rules-2' (3 GB in total) contain rules, one per line, so the files can be read with the readLine() function.
The files 'rules-1' and 'rules-2' need to be imported as a whole from HDFS in every map function in my cluster, i.e. these files are not split across different map functions.
The input to the mapper's map function is a text file called 'record' (each line is terminated by a newline character), so from the 'record' file we get the (key, value) pairs. This file is splittable and can be given as input to the different map functions used in the whole MapReduce process.
What needs to be done is to compare each value (i.e., each line from the 'record' file) with the rules inside 'rules-1' and 'rules-2'.
The problem is that if I pull every line of the rules-1 and rules-2 files into a static ArrayList only once, so that each mapper can share the same ArrayList and compare its elements against each input value from the 'record' file, I get an out-of-memory error, since 3 GB cannot be held in the ArrayList at once.
Alternatively, if I import only a few lines from the rules-1 and rules-2 files at a time and compare them to each value, MapReduce takes a very long time to finish its job.
Could you suggest any other way to do this without the memory overflow error? Would it help to put rules-1 and rules-2 inside an HDFS-backed database or something similar? I'm running out of ideas and would really appreciate your suggestions.
If your input files are small, you can load them into static variables and use the rules as input.
If that is not the case, I can suggest the following approaches:
a) Give rules-1 and rules-2 a high replication factor, close to the number of nodes you have. Then you can read rules-1 and rules-2 from HDFS for each record in the input relatively efficiently, because it will be a sequential read from the local datanode.
b) If you can find a hash function which, when applied to a rule and to an input string, predicts without false negatives whether they can match, then you can emit this hash for both rules and input records and resolve all possible matches in the reducer. It would be very similar to the way a join is done with MapReduce.
c) I would also consider other optimization techniques, like building search trees or sorting, since otherwise the problem looks computationally expensive and will take forever.
On this page, see "Real-World Cluster Configurations"; it covers file-size configuration.
You could use the parameter "mapred.child.java.opts" in conf/mapred-site.xml to increase the memory available to your mappers. You might not be able to run as many map slots per server, but with more servers in your cluster you could still parallelize your job.
Read the content text file through the normal MapReduce input, and read the keyword text file inside the mapper yourself (from HDFS). Split the MapReduce input using StringTokenizer on value.toString(); in your mapper, write the HDFS code that reads the keyword file line by line, and use two nested while loops to do the comparison. Whenever you have matching data, send it to the reducer.
Split the 3 GB text file into several smaller text files and feed all of them to your previous MapReduce program as usual.
For splitting the text file I wrote a Java program; you decide how many lines you want to write into each text file.
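The original answer used a Java program for the splitting step; here is a minimal Python sketch of the same idea (the file names, prefix, and lines-per-part value are arbitrary choices):

def split_by_lines(src, lines_per_part=1_000_000, prefix="rules-part-"):
    # Split a large text file into numbered parts of at most lines_per_part lines.
    part, out, written = 0, None, 0
    with open(src, "r") as fh:
        for line in fh:
            if out is None or written >= lines_per_part:
                if out:
                    out.close()
                out = open(f"{prefix}{part:04d}.txt", "w")
                part += 1
                written = 0
            out.write(line)
            written += 1
    if out:
        out.close()
    return part

print(split_by_lines("rules-1"))  # number of part files produced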