I have a Flume agent that ingests data into Elasticsearch. The agent uses a spoolDir source, and another agent writes the files into that spoolDir.
Over time the number of files grows and the backlog of unprocessed files keeps increasing.
I want to increase the number of events processed by the Flume agent to speed up ingestion.
Here is the configuration of the flume agent.
agent04.sources = s1
agent04.channels = memoryChannel
agent04.channels.memoryChannel.type = memory
agent04.channels.memoryChannel.capacity=100000
agent04.channels.memoryChannel.transactionCapacity=1000
agent04.sources.s1.channels = memoryChannel
agent04.sources.s1.type = spooldir
agent04.sources.s1.spoolDir = /DataCollection/Flume_Cleaner_Output/Json_Elastic/
agent04.sources.s1.deserializer.maxLineLength = 100000
agent04.sinks = elasticsearch
agent04.sinks.elasticsearch.channel = memoryChannel
agent04.sinks.elasticsearch.type=org.css.cssElasticsearchSink
agent04.sinks.elasticsearch.batchSize=400
agent04.sinks.elasticsearch.hostNames = elastic-node01.css.org
agent04.sinks.elasticsearch.indexName = all_collections
agent04.sinks.elasticsearch.indexType = live_tweets
agent04.sinks.elasticsearch.indexNameBuilder= org.css.sa.flume.elasticsearch.sink.indexNameBuilder.HeaderValueBasedIndexNameBuilder
agent04.sinks.elasticsearch.clusterName = css_rai_social
agent04.sinks.elasticsearch.serializer = org.jai.flume.sinks.elasticsearch.serializer.ElasticSearchJsonBodyEventSerializer
agent04.sinks.elasticsearch.cache_period_ms=90d
Why are you chaining two Flume agents using the spooldir? That'll be really slow and is a surprising configuration. You're incurring the cost of frequent fsyncs as each batch gets processed.
I recommend you chain them using the Avro Sink and Avro Source. I would also raise the batch size to at least 1000 (computers really like batches, and Flume is set up to take advantage of that).
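For illustration only, a rough sketch of that change (the upstream agent's name, its channel name, and the host/port are placeholders to adapt to your setup):
# Upstream agent: send events over Avro instead of writing files into the spool dir
upstream.sinks = avroSink
upstream.sinks.avroSink.type = avro
upstream.sinks.avroSink.channel = memoryChannel
upstream.sinks.avroSink.hostname = elastic-agent-host
upstream.sinks.avroSink.port = 4545
upstream.sinks.avroSink.batch-size = 1000
# agent04: replace the spooldir source with an Avro source
agent04.sources.s1.type = avro
agent04.sources.s1.bind = 0.0.0.0
agent04.sources.s1.port = 4545
agent04.sources.s1.channels = memoryChannel
# and raise the Elasticsearch sink's batch size (keep it <= the channel's transactionCapacity)
agent04.sinks.elasticsearch.batchSize = 1000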
Related
I have about 1M "result" files in an S3 bucket that I want to process. Each result file should be merged with additional columns from an associated "context" file, of which I have about 50k (i.e. each context is associated with about 20 results).
Processing it serially is slow so I am using dask to parallelize some of the work.
In my serial code, I just load everything up-front and merge them, e.g.
contexts_map = {get_context_id(ctx_file): load_context(ctx_file) for ctx_file in ctx_files}
data = []
for result_file in result_files:
    ctx_id, res_id = get_context_and_res_id(result_file)
    ctx = contexts_map[ctx_id]
    data.append(process_result(ctx))
df = pd.DataFrame(data)
Initially I thought to divide the data and process it in batches using dask (i.e. run the above in parallel on several batches), but then I read about dask bag and dask dataframe's from_delayed and thought to use them. What I have:
import dask.bag as db
import dask.dataframe as ddf
from dask import delayed

delayed_get_context = delayed(get_context)
# load the contexts
ctx_map = {}
for ctx_file in ctx_files:
    ctx_id = get_context_id(ctx_file)
    ctx_map[ctx_id] = delayed_get_context(ctx_file)
# process the contexts
delayed_get_context_stats = delayed(get_context_stats)
ctx_stat_map = {ctx_id: delayed_get_context_stats(ctx) for ctx_id, ctx in ctx_map.items()}
# the main bag of result files to process
res_bag = db.from_sequence(res_items, npartitions=num_workers * 2)
# prepare a list of corresponding delayed per results
# the order in this list corresponds to order of res_bag
res_context_list = [
    ctx_stat_map[get_context_and_res_id(item)[0]] for item in res_items
]
# then create a bag from that list
ctx_bag = db.from_sequence(res_context_list, npartitions=num_workers * 2)
# create delays for the results
delayed_extract = delayed(extract_stats)
# from what I understand, if one of the arguments is also a bag
# it is distributed in accordance with the "main" bag
results = res_bag.map(delayed_extract, ctx_stats=ctx_bag)
df = ddf.from_delayed(results)
df = df.compute()
df.to_csv("results.csv")
This creates a computation graph similar to the following (graph image not shown here):
When I run this on a small subset it works OK. Running the code on 1M items, I don't see anything happening (maybe I didn't wait long enough for it to finish building the graph and moving things around?).
With that, does the code above make sense? Should I have done it another way?
One of the things I am "afraid" of with the above implementation is that there's a lot of data movement.
I could potentially spend some time up-front to arrange context+results and then treat that as the "unit of work", and maybe get better results? (A rough sketch of that idea follows after my questions below.)
Any feedback here would be appreciated - is there a better approach?
And another question - what number of partitions should I use? I saw in the docs that it defaults to about 100, but is there some rule of thumb to use here?
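To make that "unit of work" idea concrete, here is a rough, untested sketch of what I mean (it reuses the functions from above; the ctx_file_by_id map is the only new piece):
import dask.bag as db
from collections import defaultdict

# group result files by their context id up-front (pure metadata work, cheap)
groups = defaultdict(list)
for item in res_items:
    ctx_id, _res_id = get_context_and_res_id(item)
    groups[ctx_id].append(item)

# map each context id back to its file so a worker can load the context itself
ctx_file_by_id = {get_context_id(f): f for f in ctx_files}

def process_group(group):
    # one unit of work: load the context once, then process all of its results
    ctx_id, items = group
    ctx_stats = get_context_stats(get_context(ctx_file_by_id[ctx_id]))
    return [extract_stats(item, ctx_stats=ctx_stats) for item in items]

bag = db.from_sequence(list(groups.items()), npartitions=num_workers * 2)
records = bag.map(process_group).flatten()
# assuming extract_stats returns a flat dict per result
df = records.to_dataframe()
df.to_csv("results-*.csv")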
I am trying to connect twitter source to hive sink using flume.
My properties file is given below.
# Naming the components on the current agent.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = k1
# Describing/Configuring the source
TwitterAgent.sources.Twitter.type = org.apache.flume.source.twitter.TwitterSource
#TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.consumerKey = xxx
TwitterAgent.sources.Twitter.consumerSecret = xxx
TwitterAgent.sources.Twitter.accessToken = xxx
TwitterAgent.sources.Twitter.accessTokenSecret = xxx
TwitterAgent.sources.Twitter.keywords = kafka, flume, hadoop, hive
# Describing/Configuring the sink
TwitterAgent.channels = MemChannel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.sinks = k1
TwitterAgent.sinks.k1.type = hive
TwitterAgent.sinks.k1.channel = MemChannel
TwitterAgent.sinks.k1.hive.metastore = thrift://xxxx:9083
TwitterAgent.sinks.k1.hive.database = sample
TwitterAgent.sinks.k1.hive.table = tweets_twitter
#TwitterAgent.sinks.k1.hive.partition = user_location
TwitterAgent.sinks.k1.useLocalTimeStamp = false
TwitterAgent.sinks.k1.round = true
TwitterAgent.sinks.k1.roundValue = 10
TwitterAgent.sinks.k1.roundUnit = minute
TwitterAgent.sinks.k1.serializer = DELIMITED
TwitterAgent.sinks.k1.serializer.delimiter = "\t"
TwitterAgent.sinks.k1.serializer.serdeSeparator = '\t'
#TwitterAgent.sinks.k1.serializer.fieldnames =user_friends_count,user_location,user_email
# Describing/Configuring the channel
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 100000
TwitterAgent.channels.MemChannel.transactionCapacity = 1000
TwitterAgent.channels.MemChannel.byteCapacity = 6912212
# Binding the source and sink to the channel
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sinks.k1.channel = MemChannel
I am not creating any database or table in Hive here. Do I need to create the database, the table, the partition column and the field names before starting the agent?
If so, where can I get the schema of the Twitter streaming data?
I am starting the Flume agent using the command below:
bin/flume-ng agent --conf ./conf/ -f conf/twitter_hive.conf -Dflume.root.logger=DEBUG,console -n TwitterAgent --classpath "/usr/hdp/2.6.3.0-235/hive-hcatalog/share/hcatalog/*":"/usr/hdp/2.6.3.0-235/hive/lib/*"
Where can I get the schema of the Twitter data to create the Hive tables that are referenced in the twitter.conf properties file?
HiveSink was introduced in version 1.6 and, as per the documentation, yes: the metastore, the database name and the table name are mandatory. The partition, however, is optional, as Flume can create the missing partitions.
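For illustration, a minimal sketch of what creating such a table could look like (the column names are simply taken from your commented-out fieldnames and the bucket count is arbitrary); the Hive sink writes through Hive's streaming API, so the table has to be bucketed, stored as ORC and marked transactional:
CREATE DATABASE IF NOT EXISTS sample;

CREATE TABLE sample.tweets_twitter (
  user_friends_count INT,
  user_location STRING,
  user_email STRING
)
CLUSTERED BY (user_location) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');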
As for the schema for Twitter, it seems to be a problem others have also faced, and I found this link quite useful (you may have already come across it). It mentions some of the data structures available in Hive that you may need when working with data coming in JSON format. You may need to alter some bits and pieces for your scenario, but it should give you a good start.
I hope this helps.
I am setting the properties of a Flume agent and I am not sure what value I should use for the batchSize (the number of events to batch together per send).
In my particular case I will use the console as the sink. As I understand it, the logger sink is the type used in this case, but the Flume documentation doesn't mention a batchSize parameter for this kind of sink. Isn't it necessary to define a batchSize for logger sinks?
Well, I found an answer for the question: Isn't it necessary to define a batchSize for logger-sinks?
https://flume.apache.org/FlumeUserGuide.html#logger-sink There is no batchSize; instead there is a parameter called maxBytesToLog, which defines the maximum number of bytes of the event body to log (by default its value is 16). Here is a simple example I found of a Flume agent that uses the console as the sink:
node.sources = my-source
node.channels = my-channel
node.sinks = my-sink
# Since node 1 sink is avro-type, here we indicate avro as source type
node.sources.my-source.type = avro
node.sources.my-source.bind = 0.0.0.0
node.sources.my-source.port = 11112
node.sources.my-source.channels = my-channel
node.channels.my-channel.type = memory
node.channels.my-channel.capacity = 10000
node.channels.my-channel.transactionCapacity = 100
node.sinks.my-sink.type = logger
node.sinks.my-sink.channel = my-channel
node.sinks.my-sink.maxBytesToLog = 256
Source from: https://medium.com/#DCA/something-about-flume-3cb720ba00e8#.37zs23dnt
And about the main question: how to determine the batchSize of the sink?
With regards to the hdfs batch size, the larger your batch size the better performance will be. However, keep in mind that if a transaction fails the entire transaction will be replayed which could have the implication of duplicate events downstream.
From:
https://cwiki.apache.org/confluence/display/FLUME/BatchSize,+ChannelCapacity+and+ChannelTransactionCapacity+Properties
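One practical constraint worth keeping in mind: the sink's batchSize should not exceed the channel's transactionCapacity, since the sink takes each batch from the channel in a single transaction. A purely illustrative combination (names and values are placeholders):
agent.channels.memoryChannel.capacity = 100000
agent.channels.memoryChannel.transactionCapacity = 10000
# the sink takes each batch in one channel transaction,
# so keep batchSize <= transactionCapacity
agent.sinks.hdfsSink.hdfs.batchSize = 10000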
We copied a 150 MB CSV file into Flume's spool directory. When it is loaded into HDFS, the file is split into smaller files of around 80 KB each. Is there a way to load the file with Flume without it being split into smaller files? Smaller files generate more metadata in the NameNode, so we need to avoid them.
My flume-ng configuration looks like this:
# Initialize agent's source, channel and sink
agent.sources = TwitterExampleDir
agent.channels = memoryChannel
agent.sinks = flumeHDFS
# Setting the source to spool directory where the file exists
agent.sources.TwitterExampleDir.type = spooldir
agent.sources.TwitterExampleDir.spoolDir = /usr/local/flume/live
# Setting the channel to memory
agent.channels.memoryChannel.type = memory
# Max number of events stored in the memory channel
agent.channels.memoryChannel.capacity = 10000
# agent.channels.memoryChannel.batchSize = 15000
agent.channels.memoryChannel.transactionCapacity = 1000000
# Setting the sink to HDFS
agent.sinks.flumeHDFS.type = hdfs
agent.sinks.flumeHDFS.hdfs.path = hdfs://info3s7:54310/spool5
agent.sinks.flumeHDFS.hdfs.fileType = DataStream
# Write format can be text or writable
agent.sinks.flumeHDFS.hdfs.writeFormat = Text
# use a single csv file at a time
agent.sinks.flumeHDFS.hdfs.maxOpenFiles = 1
# rollover file based on maximum size of 10 MB
agent.sinks.flumeHDFS.hdfs.rollCount=0
agent.sinks.flumeHDFS.hdfs.rollInterval=2000
agent.sinks.flumeHDFS.hdfs.rollSize = 0
agent.sinks.flumeHDFS.hdfs.batchSize =1000000
# never rollover based on the number of events
agent.sinks.flumeHDFS.hdfs.rollCount = 0
# rollover file based on max time of 1 min
#agent.sinks.flumeHDFS.hdfs.rollInterval = 0
# agent.sinks.flumeHDFS.hdfs.idleTimeout = 600
# Connect source and sink with channel
agent.sources.TwitterExampleDir.channels = memoryChannel
agent.sinks.flumeHDFS.channel = memoryChannel
What you want is this:
# rollover file based on maximum size of 10 MB
agent.sinks.flumeHDFS.hdfs.rollCount = 0
agent.sinks.flumeHDFS.hdfs.rollInterval = 0
agent.sinks.flumeHDFS.hdfs.rollSize = 10000000
agent.sinks.flumeHDFS.hdfs.batchSize = 10000
From the flume documentation
hdfs.rollSize: File size to trigger roll, in bytes (0: never roll based on file size)
In your example you use rollInterval of 2000 which will roll over the file after 2000 seconds, resulting in small files.
Also note that batchSize reflects the number of events before the file is flushed to HDFS, not necessarily the number of events before the file is closed and a new one created. You'll want to set that to some value small enough to not time out writing a large file but large enough to avoid overhead of many requests to HDFS.
I have tested Apache Flume to transfer files from a local filesystem to HDFS. But if the source files come from multiple servers (transferring files from different servers' local filesystems to HDFS), can I run just one Flume instance and simply add more agents to flume-conf.properties?
If I can, how should I edit the following parameters in flume-conf.properties:
agent1.sources.spooldirSource1.spoolDir = ?(server1/path)
agent2.sources.spooldirSource2.spoolDir = ?(server2/path)
And also, how can I run flume?
./flume-ng agent -n agent -c conf -f apache-flume-1.4.0-bin/conf/flume-conf.properties
runs only one agent. What about two or more?
Add multiple sources for what you need, but configure them to use the same channel - which will then feed the same sink. So it's something like (note that this snippet is incomplete):
agent1.sources.spooldirSource1.spoolDir = server1/path
agent1.sources.spooldirSource1.channels = myMemoryChannel
agent1.sources.spooldirSource2.spoolDir = server2/path
agent1.sources.spooldirSource2.channels = myMemoryChannel
Using the same channel for two sources isn't good practice; in this case you can easily get an OutOfMemory error on the channel (for a MemoryChannel).
It's better to use a separate channel for every source (within the same agent):
a1.sources = r1 r2
a1.sinks = k1 k2
a1.channels = c1 c2
then link source r1 to channel c1 and source r2 to channel c2.
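For example, the bindings could look like this (pairing sink k1 with channel c1 and sink k2 with c2 is just one reasonable choice):
a1.sources.r1.channels = c1
a1.sources.r2.channels = c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2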