Problem receiving data on HandleHttpRequest - processor stops receiving on its port

I have a problem with the HandleHttpRequest processor. I'm using it to receive data from sensors on a given path and port. The problem is that even though the sensors are sending data, and Wireshark shows the data arriving from the sensors at my integration server on the assigned port, the processor doesn't show anything until I restart NiFi. After a restart, data reaches the processor for a few minutes, or sometimes for up to two days, and then it stops again. I don't know what the problem is, and restarting NiFi every time is not a good solution for me. Can someone help me with that?

I solved the issue by increasing the Request Expiration property, which was set to 1 min.
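In case anyone else hits this: one way to tell whether the listener itself has stopped answering (rather than the sensors not sending) is to probe the port directly while the flow appears stuck. A minimal sketch in Python, assuming the processor listens on port 8080 and path /sensors (host, port and path are placeholders, not values from the question):

import requests

# Placeholder host/port/path -- replace with the values configured on HandleHttpRequest.
url = "http://my-nifi-host:8080/sensors"
try:
    # A short timeout makes a hung listener obvious instead of blocking forever.
    r = requests.post(url, data="test-payload", timeout=5)
    print("listener answered:", r.status_code)
except requests.exceptions.RequestException as e:
    print("listener did not answer:", e)

If the probe times out while Wireshark still shows packets arriving, the requests are reaching the socket but expiring before the processor hands them on, which is consistent with the Request Expiration fix above.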

Related

Slowness in the geolocation API

I'm working on a project that uses HERE's geolocation service.
The project is basically a feature in our system that will route a list of addresses. This routing will happen every day and will have around 7000 points, at least.
Today we use the HERE service to geolocate these addresses and send them to our routing service. However, we are facing a huge bottleneck in this implementation: of the 7000 points we use for testing, we were able to geolocate only about 200; if we send a larger number of points, we simply stop receiving responses, with no timeout or any other error returned.
About the implementation: we do not send all points in the same request; each point to be geocoded is sent in its own request. We adjusted our software to send only four requests per second, thinking there could be a QPS limit, but that did not solve the problem. We also thought about implementing a message queue, but this could end up increasing the total time of geolocation + routing, which for us makes the solution unfeasible.
In the code, we have an array that stores the addresses to be geocoded, and for each position of the array we execute a GET request to the following URL: https://geocoder.ls.hereapi.com/6.2/geocode.json?apiKey=TOKEN&searchtext=ADDRESS
I would appreciate any help finding a solution.
For a large number of geocodes you may wish to consider the Batch Geocoder API:
https://developer.here.com/documentation/batch-geocoder/dev_guide/topics/quick-start-batch-geocode.html
I cannot replicate a problem with more than 200 Geocoder requests in a row, so we may need to see some code before we can help further.
Are you using our freemium service? Just to let you know, version 6.2 of the Geocoder API no longer supports new feature development, so if you are still implementing this use case, please switch to v7. Do you mean that you are unable to send all 7000 addresses and fetch responses even in chunks? It could also be that your Linux system restricts the number of pooled network connections open at the same moment; try sending requests from a home endpoint (one that is not behind a firewall) and from a Windows system.
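For reference, a minimal sketch of the throttled one-request-per-address loop described in the question, using the 6.2 endpoint quoted above (the answer recommends moving to v7). Reusing a single session and setting an explicit timeout helps rule out the exhausted-connection and silent-hang scenarios mentioned here; the API key and address list are placeholders:

import time
import requests

API_KEY = "TOKEN"        # placeholder
addresses = ["ADDRESS"]  # placeholder; in practice the ~7000 addresses to geocode

session = requests.Session()  # reuse one connection instead of opening thousands
results = []

for address in addresses:
    resp = session.get(
        "https://geocoder.ls.hereapi.com/6.2/geocode.json",
        params={"apiKey": API_KEY, "searchtext": address},
        timeout=10,             # surface hangs instead of waiting forever
    )
    resp.raise_for_status()     # surface HTTP errors such as 429 rate limiting
    results.append(resp.json())
    time.sleep(0.25)            # ~4 requests per second, as in the question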

Handling streaming data from a mobile app (via POST)

At some point a dedicated IoT device and app may be created, but for now I'm working with an app on an iPhone that doesn't fully address the requirements yet is still helpful.
The app can stream its data via POST. I have a PHP file set up that captures the data and writes it out to a CSV file.
The data is a time series with several columns that is sent as a POST every second, for about 10 minutes in total.
Instead of writing to a CSV file, the data needs to be persisted to a database.
What I'm unsure about...
Since this is just testing a proof of concept, it may not be an issue until later, but can the high frequency of new connections and inserts be expensive? I'm assuming that a new connection is needed for each POST. For now I have no way of authenticating the device, so I'm assuming I can use a local account for all known devices.
Is there a better way of handling the data than running a web server with a PHP script that grabs it? I was thinking of Kafka plus a connector for a database to persist the data, but I have no way of configuring the mobile app to tell it what it needs to do to send data to the server; communication is not two-way. Otherwise, my experience with POST requests is limited to typical web form inputs.
Is anyone able to give some guidance?
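Not a full answer, but on the cost question above: the expensive part is usually opening a new database connection and committing a separate transaction for every POST, not the inserts themselves. A minimal sketch of buffering rows and flushing them in batches over one long-lived connection (SQLite is used purely for illustration, and the table name and columns are made up):

import sqlite3

conn = sqlite3.connect("readings.db")  # one long-lived connection, opened once
conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, device TEXT, value REAL)")

buffer = []

def handle_post(ts, device, value):
    """Called once per incoming POST; flushes to the database only in batches."""
    buffer.append((ts, device, value))
    if len(buffer) >= 50:  # roughly 50 seconds of once-per-second data per commit
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", buffer)
        conn.commit()
        buffer.clear()

The same buffering idea applies whether the receiver stays a PHP script or moves behind Kafka and a database connector; what changes is only where the batching happens.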

Axibase Time-Series Database data sampling maximum rate

I am using Axibase Time Series Database Community Edition, version 10379. I am trying to store data that comes from a force sensor, saving a reading every 2 milliseconds. How can I configure the portal to accept this time resolution?
I attempted to send the data at that rate using an Arduino board with a WiFi shield, but the TCP connection disconnected after sending a small amount of data.
Time resolution in Axibase Time-Series Database is 1 millisecond by default, so the problem is probably occurring for other reasons such as:
Invalid timestamp
Missing end-of-line character at the end of the series command
Same timestamp for multiple commands with the same entity/metric/tags. For example, these commands are duplicates and one of them will be discarded:
series ms:1445762625574 e:e-1 m:m-1=100
series ms:1445762625574 e:e-1 m:m-1=125
Overflow of the receiving queue in ATSD. This can occur if the ingestion rate is higher than the disk write speed for a long period of time. Open the ATSD portal in the GUI and check the top right chart to see whether the rejected_count metric is greater than zero. This can be addressed by changing the default configuration settings.
Other reasons specified in https://axibase.com/docs/atsd/api/data/#errors
I would recommend starting netcat in server mode and recording the data from the Arduino board to a file to see exactly what commands are sent into ATSD:
Stop ATSD with ./atsd-tsd.sh stop
Launch netcat in server mode and record received data to command.log file:
netcat -lk 8081 > command.log
Restart the Arduino and send some data into ATSD (now netcat), then review the command.log file
Start ATSD with ./atsd-tsd.sh start
Disclosure: I work for Axibase.
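To illustrate the first three causes in the list above, here is a minimal sketch (in Python rather than Arduino code, purely for brevity) that sends series commands over TCP with a unique millisecond timestamp and an explicit end-of-line character on every command; the host name is a placeholder and the entity/metric names are taken from the example commands above:

import socket
import time

sock = socket.create_connection(("atsd-host", 8081))  # placeholder host, TCP command port as above

for _ in range(1000):
    ts = int(time.time() * 1000)                            # millisecond timestamp for each command
    cmd = "series ms:%d e:e-1 m:m-1=%.2f\n" % (ts, 100.0)   # trailing \n is required
    sock.sendall(cmd.encode("ascii"))
    time.sleep(0.002)                                       # 2 ms sampling interval from the question

sock.close()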

Why is networking so slow in this Fig/Docker container?

I'm using Fig and Docker to containerise a sample Rails app. Currently it works fine: the database and server start up, and when I have an active Internet connection it all works perfectly. However, when I don't have an Internet connection it takes a long time (about 20 seconds from the browser requesting the localhost page) to connect to the Rails/WEBrick server.
I've looked into the logs and nothing is out of the ordinary. It just takes a long time for the container to receive the initial connection, and then a long time to transmit the data.
Okay, I tested it, and it was because of DNS resolution. When you "disable" the typical Google DNS and instead use localhost, the latency goes away. This is probably because, without doing this, Docker assumes that 127.0.0.1 is an address that needs to be looked up via a name server and spends a lot of time waiting for a response (and presumably, because the lookup is sent via UDP, it waits even longer on lost or dropped packets). This is also why the request wasn't recorded immediately, as DNS sits at a lower level in the network stack.
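For anyone wanting to try the same fix, the resolver a container uses can be pinned per service in fig.yml; a hedged sketch, assuming a service named web (the option corresponds to docker run --dns):

web:
  build: .
  dns: 127.0.0.1   # use the local resolver instead of an external DNS server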

MVC cold startup not connecting

Notice how I say not connecting rather than just being slow.
This has been very difficult to reproduce; I have yet to get it to happen consistently, and I even went so far as to move the application onto a fresh machine thinking it was hardware related, but alas: new machine, same issue.
Some captures with Fiddler seem to indicate that the connection is never completed.
Any suggestions on further investigative measures?
Apologies in advance for the vagueness of the question, I am just at a loss.
Can you instrument it from the IIS side? Request tracing could tell you if something is taking so long that the request times out before it returns.
Are you attempting to connect via HTTPS? I've had issues in the past when trying to run on my local box and initiating an HTTPS connection, which isn't supported by Cassini. When an HTTPS connection fails, it's not always obvious why.
