Usable space exhausted in Flume using file channel

I’m working on Flume with a spooling directory as the source, HDFS as the sink, and a file channel. A memory channel works fine, but we need to implement the same flow using a file channel, and when executing the Flume job with the file channel I get the issue below.
I have configured the JVM memory size to 3 GB in the flume-env.sh file. Please let me know if there are any other settings we need to change.
20 Jan 2016 20:05:27,099 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:160) - Unable to deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=Artiva-memory-channel]. Due to java.io.IOException: Usable space exhausted, only 427515904 bytes remaining, required 524288000 bytes

The file channel has nothing to do with memory; it works with the HDD (disk). Such a channel uses the file system for storing the data. Thus, check how much free space is available on the disks where the checkpoint file and data files are written (please have a look at the FileChannel parameters).

The ERROR message is about the Memory Channel. See "Channel closed [channel=Artiva-memory-channel]".
Check what channel is assigned to the HDFS sink in question.
It would be in the flume.conf file, as a property like:
agent_name.sinks.hdfs_sink_name.channel
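For comparison, a working file-channel section in flume.conf looks roughly like this; the agent, channel, and sink names and the directory paths below are illustrative, not taken from the question. The "Usable space exhausted" error is raised when the disk holding the channel's directories drops below minimumRequiredSpace, which defaults to 524288000 bytes (500 MB) - exactly the "required" figure in the error above:
agent1.channels.file-ch.type = file
agent1.channels.file-ch.checkpointDir = /var/flume/checkpoint
agent1.channels.file-ch.dataDirs = /var/flume/data
# default is 524288000 (500 MB); lower it only if you accept the risk of filling the disk
agent1.channels.file-ch.minimumRequiredSpace = 524288000
agent1.sinks.hdfs-sink.channel = file-ch
So either free up space on (or relocate) checkpointDir/dataDirs, or adjust minimumRequiredSpace.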

Related

Thingsboard rule chain: How to decompress telemetry?

I am trying to use thingsboard to allow users to request log files from devices. When requested, the devices send the log files to my TB server as telemetry. First, the logs are compressed with gzip and then base64 encoded. I want to have the rule chain decompress these logs and email to the requestor. I've found code to convert the base64 string to a byte array, but I haven't found a way to decompress the resulting byte array. I tried to invoke zlib using:
var zlib = require('zlib');
but it results in a message that 'require' is not defined.
Any hints? What language exactly is the TB rule node environment?
We send it to S3 and then have a link to it available in the TB GUI: the user can request a log file in the UI and a few minutes later can click on the log file, which downloads to their computer/device as a zip file. The device is Linux based.
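For reference, the transformation described in the question (base64 decode, then gunzip) looks like this in plain Node.js, where require is available; the rule-node sandbox does not expose it, so this is only a sketch of the step itself, and the variable names are illustrative:
var zlib = require('zlib');
// base64Log is assumed to hold the base64-encoded, gzip-compressed log text
var base64Log = process.argv[2];
var compressed = Buffer.from(base64Log, 'base64');
var logText = zlib.gunzipSync(compressed).toString('utf8');
console.log(logText);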

TraceView able to capture logs occurring during driver installation, but shown as Unknown

I am developing a UMDF driver, and I am able to use its PDB file to confirm events/function calls during its lifetime. However, I am also capturing events from before its DriverEntry function is called. These events have become a concern for me, because I suspect that they alter some values initialized by the driver, thereby causing issues. I would like to know more about these events, but TraceView shows them as 'Unknown'.
Is there a way to capture these trace logs better? It seems like the driver PDB does not contain the information needed for these logs to show up correctly.
EDIT: I extracted TMF files from my PDB file using tracepdb, and it seems like I do not have a TMF file that corresponds to the message GUIDs that are marked "No format information found". Could it be that these trace messages are from external entities, and not coming from the driver?
Fortunately, we have the complete list of PDB files that have been released. We found a matching trace file after going through each of these files, and so got the information we wanted.
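For reference, the TMF extraction mentioned in the edit is typically done with the WDK's tracepdb tool, and the resulting TMF directory is then pointed to from TraceView (or tracefmt); the paths and file names here are illustrative:
tracepdb -f C:\symbols\mydriver.pdb -p C:\tmf
tracefmt mytrace.etl -p C:\tmf -o decoded.txt
If a message GUID still shows up as "No format information found", the matching TMF is simply not in that directory, which usually means the events came from a different binary/PDB.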

Converted pcap file cannot be loaded in VeloView

I have captured UDP log files. I can create a pcap file with the captured UDP data from the log file using PcapDotNet, but I'm not able to open the newly created pcap files in veloview.exe. The tool itself crashes, but I can open the same pcap files in Wireshark.
Wireshark captures and opens network-level traffic (meaning all kinds of packets).
VeloView expects packets generated by Velodyne lidars, with a specific data format (it works at the application level). Please check that your pcap file contains the original data-packet format.
For now, VeloView only reads Velodyne-generated data saved as legacy ".pcap" files (not .pcapng or newer formats), so please try saving in this format.
Best regards,
Bastien Jacquet, VeloView Lead Developer
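For reference, if a capture was saved in the newer pcapng format, Wireshark's editcap tool can usually convert it back to a legacy pcap file; the file names below are illustrative, and older editcap versions call the target format "libpcap" rather than "pcap":
editcap -F pcap capture.pcapng capture.pcap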

OpenCart import/export tool: out of memory

I'm having problems with a free OpenCart module and was hoping to get some help.
While using the import/export tool I'm getting the following error:
"Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 25165824 bytes) in /home3/haas12/public_html/breslovcenter.org/system/PHPExcel/Classes/PHPExcel/Style/Supervisor.php on line 126"
I only have about 700 items, and my xlsx file is only 291 KB, but the error message says 256 MB.
I created a php info file and it is at:
http://breslovcenter.org/phpinfo.php
Anyone have any ideas on how to fix this? Any help would be greatly appreciated. I'm guessing this problem has to be due to some bug that makes it leak memory. I'm kind of stuck and not sure what to do.
The file might be small, but PHP uses a lot of memory and processing to open an Excel file. While it does seem like a lot of memory, PHPExcel is pretty well known for this issue, and you will need to increase your memory limit or find a better way to import (there are numerous other ways to import; they're just not as convenient).
The error message:
Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 25165824 bytes) ...
This says that your PHP configuration allows a memory size of 268435456 bytes (256 MB) per script. Your script is trying to allocate 25165824 bytes (24 MB) more than that limit allows (the exact numbers can differ per request and depend on the size of the file being loaded).
PHPExcel, though an excellent PHP library for working with XLS(X) files, has one critical weakness: it requires a lot of resources, especially memory. On shared hosting with 32 or 64 MB of allowed memory there is practically no chance of running this library.
The solutions
If you have the chance to modify the memory limit for your PHP, then do so. Open your php.ini file and search for the memory_limit setting. It should currently read memory_limit = 256M; change it to, for example, memory_limit = 350M, or even 512M if you want to be completely sure this won't happen again. If you can modify PHP settings from within your scripts using ini_set(), that may be even better, because the higher limit then applies only to the import script rather than to every request: ini_set('memory_limit', '350M'); - it is best to call this on the first line of your import script.
If option one is not possible (you do not have the rights to access or modify the PHP settings on your hosting), then the other possibility is to export the XLS(X) file to a CSV file and import the data from the CSV, which is perhaps less convenient but uses far fewer resources.
As Jay Gilford says, PHPExcel is well known for this issue. You can try either of the following:
Editing the php.ini files
If your website is hosted on a shared server or you do not have access to the PHP Configuration you will need to amend two 'php.ini' files in your OpenCart installation. The first is in the root folder of your OpenCart installation and the second is in the '/admin' folder. Change:
memory_limit = 64M;
To:
memory_limit = 256M;
If you’re on a shared server there may be a limit imposed by your provider (from experience, 1&1 is around 80 MB) which would override these 'php.ini' files, in which case you may need to upgrade to a dedicated server or VPS if you want to increase your PHP memory limit beyond this.
Increasing the PHP memory limit on your server
If you have access to the server PHP Configuration you can increase the PHP memory limit directly on the server through your Control Panel or via SSH. You will most likely need to restart your server for the changes to take effect.
Of course, deleting some old products would do the trick and free up some memory, but you will hit the issue again once you get back up to the same number of products. Alternatively you could try a different extension that is not so memory-hungry; however, the import/export functionality of this extension still seems to be the best of its kind.

Why am I sometimes getting files filled with zeros at their end after being downloaded?

I'm developing a download manager using Indy and Delphi XE (the application uses multithreading to make several connections to the server). Everything works fine, but sometimes the final downloaded file is broken, and when I check the downloaded temp files I see that 2 or 3 of them are filled with zeros at the end. (Each temp file is the download result of one connection.)
The larger the file is, the more broken temp files I get.
For example, in one of the temp files, which was 65,536,000 bytes, only the range 0-34,359,426 was valid, and from 34,359,427 to 64,535,999 it was full of zeros. If I delete those zeros, the application automatically downloads the missing segments, and the result (if the problem doesn't happen again) is a healthy downloaded file.
I want to get rid of those zeros at the end of the temp files without losing download speed.
P.S. I'm passing a TFileStream directly to TIdHTTP and downloading the files using the GET method.
Additional info: I handle the OnWork event, which assigns AWorkCount to a public Int64 variable. Each time a file is downloaded, the downloaded file size (that Int64 variable) is logged to a text file, and according to the log the file has been downloaded completely (including those zero bytes).
Make sure the server actually supports downloading byte ranges before you request a range to download. If the server does not support ranges, a requested range will be ignored and the entire file will be sent instead. If you are not already doing so, you should use TIdHTTP.Head() to test for range support before calling TIdHTTP.Get(). You also need to do this anyway to detect whether the remote file has been altered since the last time you downloaded it. Any decent download manager needs to be able to handle things like that.
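A rough sketch of that check with Indy 10 follows; the URL, local file name, and range bounds are illustrative, not taken from the question:
uses
  SysUtils, Classes, IdHTTP;

procedure DownloadFirstPiece;
var
  HTTP: TIdHTTP;
  Part: TFileStream;
begin
  HTTP := TIdHTTP.Create(nil);
  try
    // HEAD fills in the response headers without downloading the body
    HTTP.Head('http://example.com/bigfile.zip');
    // a server that supports partial downloads replies with "Accept-Ranges: bytes"
    if SameText(HTTP.Response.AcceptRanges, 'bytes') then
    begin
      Part := TFileStream.Create('bigfile.part1', fmCreate);
      try
        // ask only for bytes 0..1048575 of the remote file
        with HTTP.Request.Ranges.Add do
        begin
          StartPos := 0;
          EndPos := 1048575;
        end;
        HTTP.Get('http://example.com/bigfile.zip', Part);
      finally
        Part.Free;
      end;
    end;
  finally
    HTTP.Free;
  end;
end;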
Also keep in mind that if TIdHTTP knows up front how many bytes are being transferred, it will pre-allocate the size of the destination TStream before downloading data into it. This speeds up the transfer and optimizes disk I/O when using a TFileStream. So you should NOT use TFileStream objects that access the same file as the destination for multiple simultaneous downloads, even if they are writing to different areas of the file: the pre-allocations of the multiple TFileStream objects will likely trample over each other as each tries to set the file size to a different value. If you need to download a file in multiple pieces simultaneously then either:
1) download each piece to a separate file and copy them into the final file as needed once you have all of the pieces that you need.
2) use a custom TStream class, or Indy's TIdEventStream class, to manage the file I/O yourself, so you can ignore TIdHTTP's pre-allocation attempts and ensure that multiple file I/O operations do not overlap each other incorrectly.
