For our system we mark important messages with delivery mode = 2 and send them on durable exchanges and queues. The problem is that RabbitMQ is hosted in a Docker container, and if that container goes down, the messages that have been persisted are lost upon container restart.
I want to know if there is a way to change the location where messages are persisted to a mounted volume instead of the container-backed disk, and if so, how. I also currently can't figure out where the messages are actually being persisted right now, so finding the config for that is definitely a start; I'm just not sure where this is set, as I can't find anything related to mnesia, and that seems to be the default for some people. Whether this change of location happens at runtime or not is unimportant to me.
Also, keep in mind that all of this is very new to me, so I'm not the most educated on how this system functions in all of its glory; simple explanations will help a good deal more than unnecessarily complex solutions. Let me know if I can provide any other helpful info.
It's right here in the RabbitMQ documentation.
Create the /etc/rabbitmq/rabbitmq-env.conf file with the following contents to change the persistent data location:
MNESIA_DIR=/path/to/mounted/volume
Note that the RABBITMQ_ prefix is not necessary for variables defined in rabbitmq-env.conf
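Since RabbitMQ here runs inside a Docker container, that directory only survives restarts if it lives on a mounted volume. A minimal sketch of what that could look like with the official rabbitmq image (the host path /srv/rabbitmq and the container name are assumptions):
docker run -d --name rabbitmq \
  -v /srv/rabbitmq:/var/lib/rabbitmq \
  rabbitmq:3
The official image keeps its mnesia directory under /var/lib/rabbitmq, so bind-mounting that path to the host is often enough; alternatively, the RABBITMQ_MNESIA_DIR environment variable (or MNESIA_DIR in rabbitmq-env.conf, as above) points RabbitMQ at a different location of your choosing.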
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I've been trying to connect Freeboard to visualize context information from OCB, but I came across difficulties that prevent me from receiving any data from it. My thinking is that there is a problem with connecting Freeboard to OCB, because there are no new entries in OCB's subscription list, and the datasource in Freeboard shows that it has never been updated.
OCB is running as a Docker container. Freeboard runs on the Docker host.
I tried setting the IP to the one I extracted from Docker with:
sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' orion1
It gave me 172.17.0.3, but it didn't work with that either. I guess it shouldn't have anyway, because I can communicate with OCB via localhost:1026 as long as I do it via curl or Insomnia. I can push new entities, update them and so on.
The accumulation server that had not been working (link here) is OK right now. But the thing is, I add the subscription myself and can't run the accumulation server on localhost (the loopback interface), but rather on another available interface, and then add the IP of that interface to the subscription payload that I send to OCB. Maybe there is a conflict with Freeboard somewhere.
The issue here was connected to a lack of CORS support. The easy solution is simply to enable CORS functionality when launching Orion Context Broker, as described here.
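For reference, a rough sketch of what enabling CORS at launch can look like (allowing any origin with __ALL; other broker options are omitted, and the exact flag should be checked against your Orion version):
contextBroker -corsOrigin __ALL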
I conducted quite a lot of (actually unnecessary) research on this topic and came up with an over-the-top solution to the problem, which is described in this GitHub post. It is a proxy server approach to solving the issue. I wanted to propose adding CORS support to Orion Context Broker, and was pleasantly surprised to find out it had already been implemented.
There are posts like this, this and this which were very helpful in solving the case.
However, I have two requests. I guess #fgalan is the go-to person right now regarding the back end and documentation of OCB and its peripheral software.
Could a stronger emphasis be put on the CORS and Access-Control-Allow-Origin solution? The reasoning behind it is that it gives a seamless connection between OCB and any front-end application or site (i.e. Freeboard) running in a web browser. It shouldn't be so hidden that I came across the solution to my problems only by accident while looking for something else. I suggest putting it in some walkthrough documentation or some other visible place. The problem is that I spent two weeks trying to solve it and in the end went for the over-the-top and unnecessary solution while the easy and accessible one was right under my nose. The good thing is that I have good contacts on Stack Overflow and GitHub, so it was resolved. There are probably people who gave up on Freeboard after any slip with it, and that's a shame, because for now there is no better open-source piece of software for visualization than Freeboard. And the problem is not only with Freeboard; as I said, it concerns many more front-end applications and solutions. If we follow FIWARE's way of thinking, these things should be resolved differently.
The FIWARE datasource plugin for Freeboard is not worth a dime at the moment. As #fgalan pointed out in a comment, it was developed for the v1 version of the Orion Context Broker API and has not been updated, so it's way more complicated than it's supposed to be. As the OCB documentation fairly points out, the v1 approach is not really RESTful. After conducting a short code review of the OCB plugin for Freeboard, I can say it's not worth using. As far as I understand it should still work, because OCB still accepts v1 requests (although it doesn't work for me anyway), but those requests are deprecated. In my opinion a new post on this topic should appear (I'm not sure who I should contact about it), because this is a bit misleading. What's the point of using a piece of software that's deprecated and spreading bad habits regarding interacting with OCB?
The solution for this is, in my opinion, simple: just use the JSON datasource in Freeboard. I understand the motivation behind creating an individual datasource plugin for Freeboard in 2015, when there was no RESTful v2 version of the OCB API, but there is one now, so why not use it? I have used it ever since I got rid of the CORS difficulties and it works pretty well in my opinion. Freeboard, as I said earlier, provides great opportunities while being easy to set up and maintain. It should not be abandoned so easily.
By using a GET request for a JSON payload in Freeboard, we now have full access to query context from OCB. It doesn't need any POST methods as long as we use Freeboard as it is supposed to be used (by querying for data to visualize). Throw in
?options=keyValues
to the request's URL and we've gotten ourselves a really smart and compact way of visualizing data coming from the Broker.
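As an illustration, the URL that the Freeboard JSON datasource points at could look something like this (host, port and the entity id Room1 are assumptions; this is the NGSI v2 style of request):
http://localhost:1026/v2/entities/Room1?options=keyValues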
That's just the way I think it should be resolved. The last update on this topic, dating back to 2015, is just not enough in my opinion, especially since better methods of accessing context data from OCB have been developed since.
I am building an application that uses the native neo4j JavaScript driver. I want to make sure that my code will work if we migrate to a causal cluster.
The online documentation doesn't seem to be clear about how to do this: I notice sparse references to things like "bookmarks" and "reading what you have written", etc. But how it all fits together is unclear.
Can someone please provide a synopsis?
To use a causal cluster you will need to change:
1) The connection URL: replace bolt://localhost:7687 with bolt+routing://localhost:7687
This allows your application to load-balance queries across the cluster and to be fault tolerant without doing anything else.
2) When you open a new session, you should specify what you will do in that session, i.e. READ or WRITE.
This helps the driver choose the right server (i.e. a core or a read replica). Otherwise it assumes you will do some WRITE operations, and the driver will always choose a core server.
3) Because you will be in a cluster environment, there is some lag (a few seconds) for an update to propagate across the cluster.
Sometimes you also need to read your own writes across two sessions. That's where you will need the bookmark functionality.
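Putting the three points together, a minimal sketch with the JavaScript driver (v1.x API; the URL, credentials, labels and queries are assumptions):
const neo4j = require('neo4j-driver').v1;

// 1) routing URL instead of a plain bolt:// URL
const driver = neo4j.driver('bolt+routing://localhost:7687',
                            neo4j.auth.basic('neo4j', 'password'));

// 2) declare the access mode when opening a session
const writeSession = driver.session(neo4j.session.WRITE);

writeSession.run('CREATE (p:Person {name: $name})', {name: 'Alice'})
  .then(() => {
    // 3) hand the bookmark to the next session so it can read our own write
    const bookmark = writeSession.lastBookmark();
    writeSession.close();
    const readSession = driver.session(neo4j.session.READ, bookmark);
    return readSession.run('MATCH (p:Person) RETURN count(p) AS n')
      .then(result => {
        console.log(result.records[0].get('n').toString());
        readSession.close();
        driver.close();
      });
  });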
Documentation is here : https://neo4j.com/docs/developer-manual/current/drivers/
Cheers.
I have a Docker swarm full of containers. I need to monitor when something is up or down. I can do this in 2 ways:
attaching to the swarm and listening to events
polling the service list
The issue with events is that there might be huge traffic, plus if some event is not processed, we will simply lose information on what's going on.
For me it is not super important to get immediate results, but it is important to have correct information on what's going on.
Any pros/cons from real-life projects?
Listening to events: it's immediate, but risky, because if your event-listening program crashes for any reason, you will miss important information and end up with a wrong result. The Registrator program is based on events.
Polling: an eventually consistent result, but if it solves your problem it is the less painful way of grabbing the data, no matter whether your program crashes or restarts. We are using this approach for service discovery in our project and so far it has served the purpose.
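As a rough illustration of the two approaches with the Docker CLI (the filter and the service name are placeholders):
# events: immediate, but nothing is replayed if the listener was down
docker events --filter type=container
# polling: eventually consistent, survives listener crashes and restarts
docker service ls
docker service ps <service-name>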
From my experience, checking if something is up or down should be done using a health check, and should be agnostic to the underlying architecture running your service (otherwise you will have to write a new health check every time you change platform). Of course - you might have services with specific needs that cannot be monitored that way - if this is the case you're welcome to comment on that.
If you are using Swarm for stateless services only, I suggest creating a health check route that can verify the service is healthy and even disconnect faulty containers from the service.
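A minimal sketch of wiring such a check into the image itself (the /health endpoint, the port and curl being available inside the image are assumptions):
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
When the check fails, Swarm marks the task unhealthy and can replace it with a fresh one.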
If you are running stateful stuff this might be trickier, but there are solutions for that too, usually using some kind of monitoring agent over your stateful container (we are using CloudWatch since we run on AWS, but there are many alternatives).
Hope this helps.
Can we share a common/single named volume across multiple hosts in Docker Engine swarm mode? What's the easiest way to do it?
If you have an NFS server set up, you can use an NFS folder as a volume from Docker Compose like this:
volumes:
  grafana:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xxx.xx,rw
      device: ":/PathOnServer"
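A service can then mount that named volume as usual; for example, a sketch for a Grafana service (the image and the mount path inside the container are assumptions):
services:
  grafana:
    image: grafana/grafana
    volumes:
      - grafana:/var/lib/grafana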
In the grand scheme of things
The other answers are definitely correct. If you feel like you're still missing something or are coming to the conclusion that things might never really improve in this space, then you might want to reconsider the use of the typical POSIX-like hierarchical filesystem abstraction. Not all applications really need it (I might go as far as to say that few do). Maybe yours doesn't either.
In defense of filesystems
It is still very common in many circles, but usually these people know their remote/distributed filesystems very well and know how to set them up and leverage them properly (and they might be very good systems too, though often not with existing Docker volume drivers). Sometimes it's also in part because they're simply forced to (codebases that can't or shouldn't be rewritten to support other storage backends). Using, configuring or even writing arbitrary Docker volume drivers would be a secondary concern only.
Alternatives
If you have the option however, then evaluate other persistence solutions for your applications. Many implementations won't use POSIX filesystem interfaces but network interfaces instead, which pose no particular infrastructure-level difficulties in clusters such as Docker Swarm.
Solutions managed by third-parties (e.g. cloud providers)
Should you succeed in removing all dependencies on filesystems for persistent and shared data (it's still fine for transient local state), then you might claim to have fully "stateless" applications. Of course there is almost always state persisted somewhere still, but the idea is that you don't handle it yourself. Many cloud providers (if that's where you're hosting things) will offer fully managed solutions for handling persistent state such that you don't have to care about it at all. If you're going this route, do consider managed services that use APIs compatible with implementations that you can use locally for testing (for example by running a Docker container based on an image for that implementation that is provided by a third party or that you can maintain yourself).
DIY solutions
If you do want to manage persistent state yourself within a Docker Swarm cluster, then the filesystem abstraction is often inevitable (and you'd probably have more difficulties targeting block devices directly anyway). You'll want to play with node and service constraints to ensure the requirements of whatever you use to persist data are fulfilled. For certain things like a central DBMS server it could be easy ("always run the task on that specific node only"), for others it could be way more involved.
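For instance, pinning a database service to the one node that holds its data might look roughly like this (the node hostname, host path and image are assumptions):
docker service create --name db \
  --constraint 'node.hostname == storage-node-1' \
  --mount type=bind,src=/srv/db-data,dst=/var/lib/postgresql/data \
  postgres:9.6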
The task of setting up, scaling and monitoring such a setup is definitely not trivial, which is why many application developers are happy to let somebody else (e.g. cloud providers) do it. It's still a very cool space to explore however, though given you had to ask that question it's likely not something you should focus on if you're on a deadline.
Conclusion
As always, use the right abstraction for the job, and pause to think about what your strengths are and where to spend your resources.
Out of the box, Docker does not support this by itself. You must use additional components: either a Docker volume plugin, which provides you with a new type of driver for your volumes, or a sync tool running directly on your filesystem which syncs the data for you.
From my point of view, the easiest solution is rsync, or more precisely lsyncd, the daemon version of rsync. But I have never tried it for Docker volumes, so I can't tell whether it handles them fine.
Another solution is offered by Infinit.sh. It basically does the same thing as lsyncd: it's a one-way sync, so if your Docker containers are RW on their volumes it won't match your expectations. I tried this solution and it works pretty well for RO operations, though not in production; it's still an alpha version. Infinit is also on its way to providing a Docker driver, but it hasn't been released yet, so I didn't even try it. Too risky.
Other solutions I found but was unable to install (and therefore to try) are Flocker and GlusterFS. Both are designed to create filesystem volumes based on several disks from several machines, but none of their repositories were working these past weeks.
Sorry for giving you only weak solutions, but I'm facing the same problem and haven't found a perfect solution yet.
Cheers,
Olivier
I am receiving SAP broadcasts, which I can normally use and play using the standalone VLC application.
I have been asked to provide a dump of the same. I have 2 questions:
I don't clearly understand what exactly a dump is.
How can I obtain the same?
There are multiple types of dumps, so you should first find out what kind of dump is meant. It could be a database dump, which is similar to a backup, but usually it's a memory dump.
A memory dump or crash dump is a copy of the application including its memory at a specific point in time. Usually you want to create a dump exactly at the time an application is crashing or hanging. The dump will then be helpful to find the cause of the problem.
There are many ways to obtain a dump. First, Windows might do that for you, when it asks "Send information to Microsoft". Second, you can create it using Task Manager. Right click a process and choose "Create dump file". Third, there are many tools out there, e.g. Process Explorer or ProcDump, which all have pros and cons and serve different purposes.
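For example, grabbing a full memory dump of a running process with ProcDump from the command line could look like this (the process name is a placeholder):
procdump -ma MyApp.exe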
To suggest a tool for your specific case, we would need more information. Exact wording might matter in this situation.
Update
In your particular case it looks like SAP means Service Advertising Protocol, which is related to the network. A broadcast is a message which is sent to everybody.
You could capture that with Wireshark, but you would need a fair amount of network knowledge to get the filters set up. In this case the term "dump" probably refers to something similar to a database dump, because SAP uses tables to store lists of services.
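If a network capture is indeed what is being asked for, a rough sketch from the command line would be something like this (the interface name is a placeholder, and a capture filter matching the actual broadcast traffic would still need to be added):
tcpdump -i eth0 -w sap-broadcast.pcap
The resulting .pcap file can then be opened and filtered in Wireshark.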