Specifically, I am concerned with inserts, though it would be interesting to know the answer for any kind of write.
I have a critical system where it is a problem if even a single insert gets lost.
Is there a way to prevent this?
The same code has already been written; you can get an idea from the source of "rabbit_mnesia.erl". I have used it directly in my project with only minor changes.
https://www.rabbitmq.com/download.html
https://github.com/rabbitmq/rabbitmq-server/tree/rabbitmq_v3_5_1
I am currently implementing a web application in .NET Core (C#) using Entity Framework. While working on the project I have encountered quite a few challenges, but I will start with the ones I think are most important. My questions are as follows:
Instead of frequently loading data from the database, I keep a set of static objects that mirror the data in the database. However, it is tedious and error-prone to ensure that every change, i.e., adding/deleting/modifying objects, is saved to the database in real time. Is there a good example or any advice I can refer to, to improve my approach?
Another thing is that the values of some objects' properties change on the fly according to the values of other objects' properties, something like a spreadsheet, where a cell's value changes automatically when the cell its formula refers to changes. I do not have a solution for this yet and would appreciate any example I can refer to. But this will add another layer of complexity to syncing the changes of the in-memory objects to the database.
At the moment, I am unsure if there is a better approach. I'd appreciate any help. Thanks!
Basically, you're facing a problem called eventual consistency: something changes, and two or more systems need to be made aware of it. The problem here is that both changes need to be applied for the operation to be considered successful. If either one fails, you need to know.
In your case, I would use Azure Service Bus. You can create queues and put messages on a queue, and an Azure Function would handle these queue messages. You would create two queues: one for database updates, and one for the in-memory update (I think changing that to a cache service may be something to think about). The advantage of these queues is that you can easily drop messages on them from anywhere. Because you mentioned the object is going to evolve, you may need to update these objects either in the database or in memory (cache).
Once you've done that, I'd create a topic with two subscriptions: one forwarding messages to Queue 1, and the other to Queue 2. This solves your primary problem: whenever an object changes, just send it to the topic, and both changes (database and memory) are executed automagically.
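For illustration, here is a minimal sketch of the publishing side, assuming the Azure.Messaging.ServiceBus package; the topic name, connection string, and message shape are placeholders, not anything from your project:

    using System;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    public record ObjectChanged(Guid Id, string Property, string NewValue);

    public static class ChangePublisher
    {
        // Publish one message to the topic; the two subscriptions fan it
        // out to the database-update queue and the cache-update queue.
        public static async Task PublishAsync(ObjectChanged change)
        {
            await using var client = new ServiceBusClient("<connection-string>");
            ServiceBusSender sender = client.CreateSender("object-changes");

            await sender.SendMessageAsync(
                new ServiceBusMessage(JsonSerializer.Serialize(change)));
        }
    }

In a real application you'd reuse a single ServiceBusClient (it's meant to be long-lived) rather than creating one per call.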
The only problem you have now is that you mentioned you wanted to update the database in real time. With this scenario, you're going to have to give that up.
Also, make sure you have proper alerts in place for the queues, so that if a message is missed, or your functions don't handle one well enough, you'll receive an alert and can check and correct the errors.
I totally agree with @nineedm's answer, but there are also other solutions.
If you introduce a cache, you will always face the cache invalidation problem: you have to mark the cache as invalid when the data changes. Sometimes this is easy, depending on the nature of the cached data and how often it changes.
If you have just a single application, MemoryCache can be enough, with properly specified expiration options; a sketch follows.
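A rough sketch of what that can look like with Microsoft.Extensions.Caching.Memory; the key, value, and expiration times are only illustrative:

    using System;
    using Microsoft.Extensions.Caching.Memory;

    var cache = new MemoryCache(new MemoryCacheOptions());

    // Give the entry both an absolute and a sliding expiration so a stale
    // mirror of the database row falls out of memory on its own.
    cache.Set("customer:42", "cached value", new MemoryCacheEntryOptions
    {
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(30),
        SlidingExpiration = TimeSpan.FromMinutes(5)
    });

    // On a miss, reload from the database and repopulate the entry.
    if (!cache.TryGetValue("customer:42", out string? value))
    {
        // value = LoadFromDatabase(42);  // hypothetical reload helper
    }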
If there is a cluster, you have to look at distributed cache solutions, for example Redis. There is an MS article about that: Distributed caching in ASP.NET Core.
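The registration is roughly this, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package; the connection string and instance name are placeholders:

    using Microsoft.Extensions.DependencyInjection;

    public static class CacheSetup
    {
        // Wires IDistributedCache up to Redis so every node in the
        // cluster shares one cache instead of each holding its own copy.
        public static void AddRedisCache(IServiceCollection services)
        {
            services.AddStackExchangeRedisCache(options =>
            {
                options.Configuration = "localhost:6379";
                options.InstanceName = "MyApp:";
            });
        }
    }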
I'm considering overwriting the same small file thousands to hundreds of thousands of times in an iOS app. Are there any downsides to this, given that flash memory is rated for thousands of writes (but not, say, hundreds of thousands)?
Will the system's file cache save me if I stick to standard FileHandle operations (without my having to implement such a cache myself)?
This has been addressed before: Reading/Writing to/from iPhone's Documents folder performance
Any new insights?
Update in response to some comments below: in general I agree with you that sometimes examining the choice of solution is more critical than helping with the proposed solution itself.
However, for this case, I feel the question is legit. Basically, it applies to any program where there is a small amount of very volatile data that needs to be persisted often: say, a position in a game, or a stock tick, or some counter, or the last key pressed, or something like that. It needs to be reliably read after process restart, so the app can pick up where it left off, hence the question:
Can I use the iOS file system for that? I know I can't write tens of thousands of times to the actual flash memory - that would burn it out. But will file system operations solve this for me through some form of caching? Or do I need to do that myself, 'by hand'?
I sort of assume 'yes' (the file system will solve it) - otherwise other apps that do this (there must be some) would be burning out phones all the time! But it's hard to know for sure...
Update again: I asked this question on the Apple forums:
https://forums.developer.apple.com/thread/116740
Still no clear answer. One answer is: just cache it yourself to avoid any such potential problems (and there can be problems: a file write can fail, and increasing the frequency increases the probability of failure in weird ways). Another is: iOS itself logs so much that there's no way I could write more frequently than that, and that's fine, so no worries... I guess I'll leave this question open for now.
I am using gocraft/web in a project and am trying to debug some high memory usage. gocraft/web uses reflection to call handlers. I've set up the net/http/pprof profiler, which works very well, but the largest block of memory, and the one that I am interested in, only shows reflect.Value.call as the function. That's not very helpful.
How can I get around the fact that gocraft/web is using reflection and dig deeper into the memory profile?
Here's an example of the profile output I am seeing:
Thanks to @thwd for filing http://golang.org/issue/11786 about this. This is a display issue in pprof: all the data is there, just hidden. You can get the data you need by invoking pprof with the -runtime flag. It will also show data you don't need, but it should serve as a decent workaround until Go 1.6 is out.
The short answer is that you can't, directly. reflect.Value.call calls reflect.call, which forwards to runtime.reflectcall, an assembly routine implemented in the runtime (for example, for amd64, here). This circumvents what the profiler can see.
Your best bet is to invoke your handlers without reflection and test them individually that way.
Also, enabling the profiler to follow reflective calls would arguably be an acceptable change to propose for the next Go iteration. You should follow the change proposal process for this.
Edit: issue created.
I'm having an issue with an Umbraco site of mine: for some reason, some of the nodes time out when I try to click on them in the back-end of the site.
The front-end works fine and there are no slowdown issues there; however, I'm unable to edit these same nodes in the back-end, as the system seems to just hang. This makes it incredibly difficult to debug, as I have no idea which properties are actually causing the problems. What's strange is that I can create a node of the same document type, enter some dummy values, and that works fine, yet I can't edit the existing nodes.
I've tried republishing the entire site, republishing the individual nodes, and deleting the umbraco.config file, and nothing has worked up to this point.
What's also interesting is that if I close the browser, the system seems to stop hanging and I can log in and try again.
Has anyone encountered this before or know where to begin?
Thanks
I have encountered something similar. The longer you work with Umbraco, the slower it becomes, and if you check the memory usage in Chrome's task manager, you can see that certain actions on nodes bump the memory usage up a little further. The answer is just to close the tab and open a new one.
I have reported this, but Umbraco cannot replicate it. However, I do think it is possibly due to a package installed into Umbraco, maybe uComponents. It's very difficult to pinpoint.
Update:
If you can access some nodes but not others, then this is actually slightly easier to debug. I would check what similarities the nodes that time out have.
Are they all of the same document type?
Do they all use the same data type?
I would guess that the nodes in question use a data type that performs an operation when the node loads, and that operation is timing out. For example, do you have any data types that load data from the database, like enums? Do you have any data types that load data from a web service?
Do you have any user-control data types wrapped in the UserControlWrapper data type? Those would be somewhere to check.
Finally, check:
The database's [umbracoLog] table: any Umbraco-specific errors will be listed there.
The computer's Event Viewer: this will show any unhandled errors.
My money's on a database timeout.
A year ago, I found a post about how a program can kill itself. It suggested writing some value to the registry, the Windows directory, or some location on disk the first time it runs. When it tries to run a second time, the program checks the value at that location and, if it doesn't match, terminates itself.
This is simple and a little naive, as any real-time anti-virus application could easily watch what value your program wrote to disk, and where. And in a true sense, that method does not 'kill' the program: it just lies there and sleeps, intact and complete, only for lack of a trigger.
Is there a method that kills the program in the true sense, such as deleting itself permanently, disemboweling itself, disrupting its classes or functions, or fragmenting itself?
Thank you.
+1 to this question.
It is unfortunate that people often vote down questions about tricky ways of doing things! There is nothing illegal here, though at times such a question may sound unnecessary to other people. But there are situations where a program needs to delete itself once it has been executed.
To be clear - it is possible to delete an EXE once it has been executed.
(1) As indicated in the earlier answer, it is not possible for an EXE to be deleted from disk while it is executing from disk, because the OS simply doesn't allow that.
(2) However, to achieve this, all we need to do is execute the EXE in memory! That is pretty easy, and the same EXE can then be deleted from disk once it is running in memory.
Read more on this unconventional technique here:
execute exe in memory
Please follow the post above to see how you can execute an EXE from a memory stream, or google it and find yet other ways; there are numerous examples that show how to execute an EXE in memory. Once it is running, you can safely delete the file from disk.
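For a managed (.NET) EXE, the idea looks roughly like this; treat it as a sketch under that assumption, since native executables need a different technique, and the path here is a placeholder:

    using System;
    using System.IO;
    using System.Reflection;

    // Read the EXE into memory and run it from there; the file on disk is
    // then not locked by execution, so it can be deleted afterwards.
    byte[] bytes = File.ReadAllBytes(@"C:\temp\target.exe");
    Assembly asm = Assembly.Load(bytes);

    // Invoke the entry point, passing an empty args array if Main takes one.
    object?[]? args = asm.EntryPoint?.GetParameters().Length > 0
        ? new object?[] { Array.Empty<string>() }
        : null;
    asm.EntryPoint?.Invoke(null, args);

    File.Delete(@"C:\temp\target.exe");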
Hope this throws some light on your question.
An application cannot delete itself from the disk directly, because while the application is running the disk file is 'open' - hence it cannot be deleted.
See if MoveFileEx with the MOVEFILE_DELAY_UNTIL_REBOOT flag fits your requirement.
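From C#, that would be a P/Invoke along these lines (a sketch; note that writing to the pending-rename list normally requires administrator rights):

    using System.Runtime.InteropServices;

    public static class SelfDelete
    {
        private const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        private static extern bool MoveFileEx(
            string lpExistingFileName, string? lpNewFileName, int dwFlags);

        // Passing null as the new name tells Windows to delete the file
        // the next time the machine reboots.
        public static bool DeleteOnReboot(string path) =>
            MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT);
    }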
If you can't wait for a reboot, you'll have to write a second application (or batch file) that runs when the first application closes, waits for it to finish closing, and then deletes it.
It's chicken and egg, though - how do you delete the second application/batch file? It can't delete itself. But you could put it in the %temp% directory and use MoveFileEx() to delete it the next time the machine reboots.
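One common variation that avoids shipping a second binary is to spawn a short-lived cmd.exe that waits for the parent to exit and then deletes it. A hedged sketch (Environment.ProcessPath needs .NET 6+, and the ping delay is a crude race-avoidance trick, not a guarantee):

    using System;
    using System.Diagnostics;

    // Launch a detached cmd.exe that pings localhost for a few seconds
    // (long enough for this process to exit) and then deletes our EXE.
    string exePath = Environment.ProcessPath!;
    Process.Start(new ProcessStartInfo
    {
        FileName = "cmd.exe",
        Arguments = $"/C ping -n 4 127.0.0.1 > nul & del \"{exePath}\"",
        CreateNoWindow = true,
        UseShellExecute = false
    });
    Environment.Exit(0);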