Has anyone posted a response to this problem? There have been other posts with no answers. Our situation is that we are pushing messages onto a topic that backs a KTable in the first step of our stream process. We then pull a small amount of data from those messages and pass it along. We do multiple computations on that smaller amount of data for grouping and aggregation. At the end of the streaming process, we simply want to join back to the original topic via a KTable to pick up the full message content again. The results of the join are only a subset of the data because the join cannot find the entries in the KTable.
This is just the beginning of the problem. In another case, we are using KTables as indexes for lookups meant to enrich the incoming data. Think of these lookups as identifying whether we have seen a specific pattern in the streaming message before. If we have seen the pattern, we want to tag it with an ID (used for grouping) pulled from an existing KTable. If we have not seen the pattern before, we assign it an ID and place it back into the KTable to tag future messages. What we have found is that there is no guarantee that the information will be present in the KTable for future messages. This lack of a guarantee seems to make KTables useless. We cannot figure out why there is so little discussion of this on the forums.
Finally, none of this seemed to be a problem when running with a single instance of the streams application. However, as soon as our data got large and we were forced to run 10 instances of the app, everything broke. Also, there is no way we could use something like GlobalKTables, because there is too much data to load into a single machine's memory.
What can we do? We are currently planning to abandon KTables altogether and use something like Hazelcast to store the lookup data. Should we just move to Hazelcast Jet and drop Kafka Streams altogether?
Adding flow: [Kafka data flow diagram]
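To make the shape of the flow concrete, here is a simplified sketch of the kind of topology we are running. Topic names, value types, and the helper method are placeholders, and the intermediate grouping/aggregation steps are elided:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class FlowSketch {

    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // The original topic, materialized as a KTable so we can join back to it later.
        KTable<String, String> fullMessages = builder.table("messages-topic");

        // Work on a small projection of the same data (read via the table's changelog stream).
        KStream<String, String> projection = fullMessages.toStream()
                .mapValues(FlowSketch::extractSmallPart);

        // ... several groupBy / aggregate steps on the projection happen here ...

        // Join back by the original message key to pick up the full content again.
        projection
                .join(fullMessages, (small, full) -> full)
                .to("enriched-topic");

        return builder.build();
    }

    // Placeholder for pulling out the small part of the message we actually process.
    private static String extractSmallPart(String full) {
        return full;
    }
}
```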
I'm sorry for this non-answer answer, but I don't have enough points to comment...
The behavior you describe is definitely inconsistent with my understanding and experience with streams. If you can share the topology (or a simplified one) that is causing the problem, there might be a simple mistake we can point out.
Once we get more info, I can edit this into a "real" answer...
Thanks!
-John
Related
I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects that mirror the data in the database. However, it is tedious and error-prone to ensure that any changes, i.e., adding/deleting/modifying objects, are saved to the database in real time. Is there any good example or advice I can refer to in order to improve my approach?
Another thing is that the values of some objects' properties will change on the fly according to the values of other objects' properties, something like a spreadsheet where a cell's value changes automatically when the cell its formula refers to changes. I do not have a solution for this yet and would appreciate any example I can refer to. This will also add another layer of complexity to syncing the in-memory objects to the database.
At the moment, I am unsure if there is any better approach. Appreciate if anyone can help. Thanks!
Basically, you're facing a problem called eventual consistency. Something changes, and two or more systems need to become aware of it at the same time. The problem here is that both changes need to be applied in order to consider the operation successful. If either one fails, you need to know.
In your case, I would use Azure Service Bus. You can create queues and put messages on a queue, and an Azure Function would handle these queue messages. You would create two queues, one for database updates and one for the in-memory update (I think changing this to a cache service may be something to think of). The advantage of these queues is that you can easily drop messages on them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic, with two subscriptions. One forwarding messages to Queue 1, and the other to Queue 2. This will solve your primary problem. In case an object changes, just send it to the topic. Both changes (database and memory) will be executed automagically.
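For illustration, a minimal sketch of dropping a change message on such a topic, shown with the Java Service Bus SDK (the .NET ServiceBusClient has the same shape); the topic name, connection string, and payload are placeholders:

```java
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class ChangePublisher {
    public static void main(String[] args) {
        // Topic with two subscriptions: one forwards to the database-update queue,
        // the other to the cache/in-memory-update queue.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION"))
                .sender()
                .topicName("object-changes")
                .buildClient();

        // One message per changed object; each subscription gets its own copy.
        sender.sendMessage(new ServiceBusMessage("{\"objectId\":42,\"change\":\"update\"}"));
        sender.close();
    }
}
```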
The only problem you have now is that you mentioned you wanted to update the database in real time. With this scenario, you're going to have to let that go.
Also, make sure you have proper alerts in place for the queues, so that if you do miss a message, or your functions don't handle it well, you'll receive an alert and can check and correct the errors.
I totally agree with #nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application, MemoryCache can be enough with properly specified expiration options.
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.
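To make the "in-memory cache with an expiration policy" idea concrete, here is a sketch of the cache-aside pattern, written in Java with Caffeine purely for illustration (MemoryCache/IDistributedCache in ASP.NET Core are used the same way); the Product type and the loader are placeholders:

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class ProductCache {
    // Entries expire five minutes after they are written, so staleness is bounded.
    private final LoadingCache<Long, Product> cache = Caffeine.newBuilder()
            .expireAfterWrite(5, TimeUnit.MINUTES)
            .maximumSize(10_000)
            .build(this::loadFromDatabase);

    public Product get(long id) {
        return cache.get(id);          // loads from the database on a miss
    }

    public void invalidate(long id) {
        cache.invalidate(id);          // call this whenever the underlying row changes
    }

    // Placeholder for the real database query.
    private Product loadFromDatabase(Long id) {
        return new Product();
    }
}

class Product { }
```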
We want to submit unique GCS files to our streaming pipeline, each file containing information about multiple events, and each event containing a key (for example, device_id). As part of the processing, we want to shuffle by this device_id so as to achieve some form of worker-to-device_id affinity (more background on why we want to do this is in another SO question). Once all events from the same file are complete, we want to reduce (GroupBy) by their source GCS file (which we will make a property of the event itself, something like file_id) and finally write the output to GCS (could be multiple files).
The reason we want to do the final GroupBy is because we want to notify an external service once a specific input file has completed processing. The only problem with this approach is that since the data is shuffled by the device_id and then grouped at the end by the file_id, there is no way to guarantee that all data from a specific file_id has completed processing.
Is there something we could do about it? I understand that Dataflow provides exactly-once guarantees, which means all the events will eventually be processed, but is there a way to set a deterministic trigger that says all data for a specific key has been grouped?
EDIT
I wanted to highlight the broader problem we are facing here. The ability to mark file-level completeness would help us checkpoint different stages of the data as seen by external consumers. For example, it would allow us to signal per-hour or per-day completeness, which is critical for us to generate reports for that window. Given that these stages/barriers (hour/day) are clearly defined on the input (GCS files are date/hour partitioned), it is only natural to expect the same of the output. But with Dataflow's model, this seems impossible.
Similarly, although Dataflow guarantees exactly-once, there will be cases where the entire pipeline needs to be restarted since something went horribly wrong - in those cases, it is almost impossible to restart from the correct input marker since there is no guarantee that what was already consumed has been completely flushed out. The DRAIN mode tries to achieve this but as mentioned, if the entire pipeline is messed up and draining itself cannot make progress, there is no way to know which part of the source should be the starting point.
We are considering using Spark, since its micro-batch-based streaming model seems to fit better. We would still like to explore Dataflow if possible, but it seems that we won't be able to achieve this without storing these checkpoints externally from within the application. If there is an alternative way of providing these guarantees from Dataflow, that would be great. The idea behind broadening this question was to see if we are missing an alternate perspective that would solve our problem.
Thanks
This is actually tricky. Neither Beam nor Dataflow have a notion of a per-key watermark, and it would be difficult to implement that level of granularity.
One idea would be to use a stateful DoFn instead of the second shuffle. This DoFn would need to receive the number of elements expected in the file (from either a side-input or some special value on the main input). Then it could count the number of elements it had processed, and only output that everything has been processed once it had seen that number of elements.
This would be assuming that the expected number of elements can be determined ahead of time, etc.
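A minimal sketch of what such a stateful DoFn could look like, assuming the input is keyed by file_id and that each element carries the expected element count for its file (all names here are hypothetical):

```java
import org.apache.beam.sdk.coders.VarLongCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

// Input: KV<file_id, expectedCountForThatFile>, one element per record in the file.
// Output: the file_id, emitted once, after all of its elements have been seen.
public class FileCompletionFn extends DoFn<KV<String, Long>, String> {

    // Per-key (i.e. per-file) counter of how many elements have been processed so far.
    @StateId("seen")
    private final StateSpec<ValueState<Long>> seenSpec = StateSpecs.value(VarLongCoder.of());

    @ProcessElement
    public void process(ProcessContext ctx, @StateId("seen") ValueState<Long> seen) {
        long expected = ctx.element().getValue();
        long count = (seen.read() == null ? 0L : seen.read()) + 1;
        seen.write(count);
        if (count == expected) {
            // Every element of this file has been observed; signal completion downstream
            // (e.g. notify the external service).
            ctx.output(ctx.element().getKey());
        }
    }
}
```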
How much data can a column of Mnesia store? Is there any limit, or can we store as much as we want? Any pointer? (The table is disc_only_copy.)
As with any potentially large data set (in terms of total entries, not total volume of bytes) the real question isn't how much you can cram into a single table, but how you want to partition the data and how unified or distinct those partitions should appear to the system.
In the context of a chat system, for example, you may want to be able to save the chat history forever, which is a reasonable goal. But you may not want all chat entries to be in the same table forever and ever (10 years? how long? who knows!) right next to chat entries made yesterday. You may also discover as time moves on that storing every chat message in a single table was a painfully naive decision you'll have to overcome later on down the road.
So this brings up the issue of partitioning. How do you want to do it? (Staying within the context of a chat system, but easily transferrable to another problem...) By time? By channel? By user? By time and channel?
How do you want to locate the data later? This brings up obvious answers that are the same as above: By time? By channel? By user? By time and channel?
This issue exists whether you're dealing with Mnesia or with Postgres -- or any database -- when you're contemplating the storage of lots of entries. So think about your problem in the context of how you want to partition the data.
The second issue is the volume of the data in bytes, and the most natural representation of that data. Considering basic chat data, it's not that hard to imagine simply plugging everything into the database. But if it's a chat system that can have large files attached within a message, I would probably want to have those files stored as what they are (files) somewhere in a system made for that (like a file system!) and store only a reference to them in the database. If I were creating a movie archive I would certainly feel comfortable using Mnesia to store titles, actors, years, and a pointer (URL or file system path) to the movie, but I wouldn't dream of storing movie file data in my database, even if I was using Postgres (which can actually stand up to that sort of abuse... but think about the new awkwardness of database dumps and backups, and the massive bottleneck introduced in the form of everyone's download/upload speed being whatever the core service's bandwidth to the database backend is!).
In addition to these issues, you want to think about how the data backend will interface with the rest of the system. What is the API you wish you could use? Write it now and think it through to see if it's silly. Once it seems perfect, go back through critically and toss out any elements you don't have an immediate need to actually use right now.
So, that gives us:
Partition scheme
Context of future queries
Volume of data in bytes
Natural state of the different elements of data you want to store
Interface to the overall system you wish you could use
When you start wondering how much data you can put into a database these are the questions you have to start asking yourself.
Now that all that's been written, here is a question that discusses Mnesia in terms of entries, bytes, and how many bytes different types of entries might represent: What is the storage capacity of a Mnesia database?
Mnesia started as an in-memory database. That means it is not designed to store large amounts of data. When you ask yourself this question, it means you should look at another ejabberd backend.
I searched a lot but couldn't quite find what I was specifically looking for. The question is simple and straightforward.
I have a database table, which gets populated every second!
Next, I have more or less defined the analysis methods/classes in the Apache Storm Spout/Bolt classes.
All I wish to do is send those new rows, inserted every second, to the Spout class as a stream input.
How Do I do this?
Thanks,
There are several ways you could accomplish this, but without knowing more about the nature of the data it's hard to give a good answer. One way would be to use another table to track which records have already been processed by storm based on some field in the original table. For instance, if you used a timestamp column you could track the maximum timestamp you have already processed. There are some potential race conditions you have to be careful of with both the reading/updating of the metadata table as well as the actual data table, but both of those can be managed with transactions and proper time synchronization.
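For illustration, here is a rough sketch of a spout that polls the table for rows newer than the last emitted timestamp (Storm 2.x API; the table, columns, and connection string are placeholders, and persisting lastSeen to the metadata table mentioned above is left out for brevity):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TableTailSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private Connection conn;
    private Timestamp lastSeen = new Timestamp(0);   // kept in memory here; persist it in practice

    @Override
    public void open(Map<String, Object> conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            conn = DriverManager.getConnection("jdbc:your-database-url");  // placeholder
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void nextTuple() {
        // Poll for rows newer than the last timestamp we emitted.
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload, created_at FROM events WHERE created_at > ? ORDER BY created_at")) {
            ps.setTimestamp(1, lastSeen);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    lastSeen = rs.getTimestamp("created_at");
                    // Use the row id as the message id so failed tuples can be replayed.
                    collector.emit(new Values(rs.getLong("id"), rs.getString("payload")), rs.getLong("id"));
                }
            }
        } catch (Exception e) {
            collector.reportError(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "payload"));
    }
}
```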
Teradata provides queue table functionality. These tables support a "select and consume" operation, which means rows are removed from the table as soon as you select them. For more information: http://www.info.teradata.com/htmlpubs/DB_TTU_14_00/index.html#page/SQL_Reference/B035_1146_111A/ch01.032.045.html#ww798205
This approach assumes that table in Teradata is used as buffer and nobody else needs it.
If you need both a permanent full table (for some other application) and streaming of this data to Storm, you may want to modify your loading process to populate the permanent table as well as a queue table. In this case other applications can use the whole data depth in the permanent table, and Storm will consume data from the queue table with minimal space impact.
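A hypothetical sketch of how a spout's nextTuple() could drain such a queue table over JDBC; the database, table, and column names are made up, and the behaviour on an empty queue is not shown here, so treat this as an outline rather than a tested implementation:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueueTableReader {

    /** Removes and returns the next payload from the queue table, or null if nothing was read. */
    public static String consumeNext(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             // SELECT AND CONSUME deletes the row it returns, so the table acts as a buffer.
             ResultSet rs = st.executeQuery(
                     "SELECT AND CONSUME TOP 1 payload FROM mydb.event_queue")) {
            return rs.next() ? rs.getString("payload") : null;
        }
    }
}
```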
Here's an example of what I am talking about:
Take Twitter for iOS. Whenever you tweet, the tweet is sent to the database, and then it is also displayed on your device as part of the list of tweets.
How is the list of tweets that you see on your device updated after just sending one tweet? Here are some possible ways I thought it could be done; what I'm asking is which one is the best method:
The whole list of recent Tweets is re-downloaded from the remote Twitter server after sending a tweet (I highly doubt this, as this would take a relatively long time, when it really is just appending one Tweet to the array of Tweets displayed)
The local array that holds the Tweet objects is updated separately from the database (For example, it updates the database, and then updates its array with the same data you sent to the database, and never downloads the Tweet you just sent since you don't need to, because you already have it locally, since you composed it)
Is Core Data capable of updating the remote data server AND the array all in one (or relatively few) step(s)? (Sorry if this is the obvious answer and it sounds like I didn't look into it, but I did read about Core Data and started a tutorial. It's just that there is so much content that it would take me a whole day or two just to figure out if it's appropriate for my application.)
Is there an alternative way of managing this?
Also, if it's one of the latter two ideas above, are you able to update the table view cells by just updating the local array and reloading the cells from that array, without loading your one tweet from the database? I'm just curious about what would be the most efficient way of doing this.
So again, my main question reworded is: how do you keep data that you sent to a remote database and the local data (stored in a mutable array) in sync whenever you make a tiny single update (such as sending a Tweet), without having to reload all of the data from the database when there is other content (i.e., other Tweets) already loaded?
(I am aware that no one except Twitter developers knows exactly how Twitter is actually done, but I'm just using this Twitter functionality as an example. The same concept could be applied to any similar app.)
(Also, this is a conceptual question about dataflow, so I don't need to see any code, but suggestions to use different technologies like Core Data, or just updating an array will be appreciated.)
(I've been looking into this, and all the different ways of doing it, and it is becoming very time consuming, so I figured to ask you guys who have experience. Additionally, this could help someone else who has similar questions.)
(Sorry if it looks like I'm asking a bunch of questions, but I'm basically asking the same question in different ways, and offering possible solutions.)
Any insight is appreciated!
Immutable messages like tweets are actually quite easy to handle -- server side, and in your app.
When you send a tweet from your client to the server, you also update your "main context" (see "Managed Object Context"), which in turn sends notifications to your controller (see NSFetchedResultsController), which in turn updates your table view according to your local model residing in the Managed Object Context.
Updating from the server is just merging the local tweets with the new ones added in the meantime.
Since there is no mutable tweet, synchronization is really no big deal. As mentioned in the comment, if there were mutable tweets (or any kind of mutable messages), the synchronization would become much more complex.
Core Data will NOT automatically update a remote server. But there are solutions to "view" a remote database through Core Data - see NSIncrementalStore and related third-party libraries (e.g., AFIncrementalStore).
This is ridiculously trivial. You update your local database and send off the remote update at the same time.
You use the remote response to mark your local record as synched or try updating again later.