I want to create isolated environments on a single MQTT server, much as a database server can have multiple schemas, with table names repeatable across schemas. I want an "MQTT schema" where topics/subscriptions in one schema are isolated from those in another, so that the same topic name can be used in different schemas. It would be even better if security could be applied on a per-schema basis, but that may be asking for a lot. Right now, I am just looking for a way to have isolated environments on the same server. It will probably require a separate TCP port per schema just to identify the destination schema of a connecting client, as the protocol itself does not have any concept of a schema. Alternatively, clients could be mapped to a particular schema based on their username or client ID.
Note: I am aware of how to use ACLs to restrict topic access for each user. ACLs do not solve this problem. I don't simply want to restrict topic access; I want to create separate environments where users are free to do what they want with the topics, without me telling them which topic names they cannot use, etc.
The other option is the mount_point configuration option that can be used with a listener declaration (man page).
mount_point topic prefix
This option is used with the listener option to isolate groups of clients. When a client connects to a listener which uses this option, the string argument is attached to the start of all topics for this client. This prefix is removed when any messages are sent to the client. This means a client connected to a listener with mount point example can only see messages that are published in the topic hierarchy example and below.
The difference between this and the other option (a Docker container per schema) is that you can also have a listener that sees all the traffic of all the different partitions, by declaring it with no mount point.
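For illustration, here is a minimal mosquitto.conf sketch (the port numbers and prefix strings are placeholder values, not anything prescribed): each listener gets its own topic namespace, and one listener with no mount_point sees everything.

    # clients on 1884 are confined to topics under tenantA/
    listener 1884
    mount_point tenantA/

    # clients on 1885 are confined to topics under tenantB/
    listener 1885
    mount_point tenantB/

    # listener with no mount_point sees all traffic
    listener 1883

A client that connects on port 1884 and publishes to sensors/temp is actually publishing to tenantA/sensors/temp; a client on port 1883 could subscribe to tenantA/# to observe that partition.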
Just use a Docker container running Mosquitto and spin up a new instance for each schema. Map each instance to a separate external port. That gives you total isolation, and if you include the auth plugin you can map the security to a separate DB table for each schema with environment variables.
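As a sketch (the container names and host ports are placeholders), spinning up one broker per schema could look like:

    # one isolated broker per schema, each on its own external port
    docker run -d --name mqtt-schema-a -p 1883:1883 eclipse-mosquitto
    docker run -d --name mqtt-schema-b -p 1884:1883 eclipse-mosquitto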
Related
Considering MQTT's pub/sub behavior, the topic namespace is not isolated, and any user can access every other user's data on a topic.
I've seen services like flespi which claim to provide isolated namespaces, but some of them use containers to isolate users...
Is it possible to modify an MQTT broker, e.g. Mosquitto, for that purpose? Or is there an open-source broker that does this?
Mosquitto can apply access control to topics based on the authenticated username. This allows the administrator to restrict which clients can subscribe, publish, or receive messages on particular topics. This is covered in Mosquitto's documentation.
For greater flexibility you can also use the dynamic security plugin, or the mosquitto-go-auth plugin, which allows you to use a variety of different data sources for authorization and ACL configuration.
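For instance, a small acl_file sketch (the usernames and topic trees here are invented for illustration) that gives each authenticated user a private subtree could look like:

    # static per-user rule
    user alice
    topic readwrite users/alice/#

    # pattern rules apply to every user; %u expands to the username
    pattern readwrite users/%u/#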
I have recently started working on Docker, K8s and Argo. I am currently working on creating 2 containerized applications and then link them up in such a way that they can run on Argo. The 2 containerized applications would be as follows:
ReadDataFromAFile: This container would have the code that would receive a URL/file with some random names. It would separate out all those names and return an array/list of names.
PrintData: This container would accept the list of names and then print them out with some business logic involved.
I am currently not able to understand how to:
Pass the text/file to the ReadDataFromAFile container.
Pass the processed array of names from the first container to the second container.
I have to write an Argo Workflow that would regularly perform these steps!
Posting this as Community wiki for better visibility with a general solution.
Feel free to expand it.
Since you don't need to store any artifacts, the best options to pass data between Kubernetes Pods are (as @David Maze mentioned in his comment):
1. Pass the data in the body of HTTP POST requests.
There is a good article with examples of HTTP POST requests here.
POST is an HTTP method designed to send data to the server from an HTTP client. The HTTP POST method requests that the web server accept the data enclosed in the body of the POST message.
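As a minimal sketch (the service name printdata, the port and the path are assumptions, not anything defined by your workflow), the first container could POST its list of names to the second one like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PostNames {
        public static void main(String[] args) throws Exception {
            // "printdata" is a hypothetical Kubernetes Service name for
            // the second container; port and path are placeholders too
            String body = "[\"alice\",\"bob\",\"carol\"]";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://printdata:8080/names"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("PrintData replied: " + response.statusCode());
        }
    }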
2. Use a message broker, for example, RabbitMQ.
RabbitMQ is the most widely deployed open source message broker. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
RabbitMQ provides a wide range of developer tools for most popular languages.
You can install RabbitMQ into the Kubernetes cluster using the Bitnami Helm chart.
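And a minimal publisher sketch with the RabbitMQ Java client (the host name rabbitmq and the queue name names are placeholders for whatever you deploy):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import java.nio.charset.StandardCharsets;

    public class PublishNames {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            // "rabbitmq" is a hypothetical in-cluster Service name
            factory.setHost("rabbitmq");
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // durable queue named "names" (name is a placeholder)
                channel.queueDeclare("names", true, false, false, null);
                channel.basicPublish("", "names", null,
                        "[\"alice\",\"bob\"]".getBytes(StandardCharsets.UTF_8));
            }
        }
    }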
I'm developing an app which live-streams video and/or audio from different entities. Those entities' IDs and configurations are stored as records in my DB. My app's current architecture is something like the following:
a CRUD API endpoint for system-wide functionalities, such as logging in or editing an entity's configuration.
N other endpoints (where N is the number of entities, and each endpoint's route is defined by the specific entity's ID, like so: "/:id/api/") for each entity's specific functionality. Each entity is loaded by the app on initialization. Each of these endpoints is both a REST API handler and a WebSocket server for live-streaming media received from the backend configured for that entity.
On top of that, there's an NGINX instance which acts as a proxy and hosts our client files.
Obviously, this isn't very scalable at the moment (a single server instance handles an ever-growing number of entities) and requires restarting my server instance when adding/deleting an entity, which isn't ideal. I was thinking of splitting my app's server into micro-services: one for system-wide CRUD, and N others for each entity defined in my DB. Ultimately, I'd like those micro-services to run as Docker containers. The problems (or questions to which I don't know the answers) I'm facing at the moment are:
How does one run Docker containers dynamically, according to a DB (or programmatically)? Is it even possible?
How does one update the running Docker container to be able to reconfigure that entity during run-time?
How would one even configure NGINX to proxy those dynamic micro-services? I'm guessing I'll have to use something like Consul?
I'm not very knowledgeable, so pardon me if I'm too naive to think I can achieve such an architecture. Also, if you can think of a better architecture, I'd love to hear your suggestions.
Thanks!
I have a Jenkins server named "jenkins" on a remote machine, and I currently use its actual IP address to access it. I also have a domain name to use for my web server on another machine: www.mysite.com.
Is it possible to configure DNS names to use "jenkins.mysite.com" to access my Jenkins server machine without registering another independent domain name?
Further, I might have another machine to host my wiki, so I would like to access it as "wiki.mysite.com".
Thanks.
Yes, it is not only possible, but extremely common. It is a perfectly ordinary use of DNS. The entity controlling mysite.com can add whatever names they want under it (barring some technical limitations).
The details of what you personally need to do to add those other names will, of course, depend entirely on your environment. It can be anything from editing a zone file or using a web administration interface to talking to a sysadmin.
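For example, if you edit the zone file yourself, the added records might look something like this (the IP addresses are placeholders from the documentation range):

    ; fragment of the mysite.com zone
    jenkins  IN  A  203.0.113.10   ; the Jenkins machine
    wiki     IN  A  203.0.113.11   ; the wiki machine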
Our Java app writes to MQ Series queues via a WebLogic JMS Message Bridge. The actual MQ Series connection/queue details are stored in the MQ Series .bindings file on the app server. I've never really got my head around the bindings file and what all the entries mean. Can anyone provide guidance to understand this file?
Before addressing the .bindings file, we need to step back a bit and look at JNDI - the Java Naming and Directory Interface - and how it is used by JMS. The Queue, Topic and various types of Connection Factory are all run-time JMS objects with methods and attributes. But you can pre-define them and store them in a registry where the JMS application can retrieve them using JNDI lookups.
This is helpful because the objects are like coins in that they have a JMS side and a provider-specific side. On the JMS side, any administered object looks about the same. Regardless of the underlying transport provider, a ConnectionFactory has the same methods and attributes. However, on the provider-specific side, the administered objects look very different from one transport provider to the next. For example, the ConnectionFactory used with a WebSphere MQ transport will have an attribute for the Queue Manager. No other transport provider has a "queue manager" so this attribute is only valid in a WMQ context.
The two aspects of administered objects are the "glue" that allows JMS to work independently of transport provider. In your code you just have to look up a ConnectionFactory and you get an object suitable to perform method calls against. Under the covers, the provider's JMS classes use the provider-specific object attributes to supply context to convert the generic JMS API calls into provider-specific calls. Thus the connection object that you instantiate results in a WMQ CONNECT call which specifies a QMgr name, host, port, channel and a variety of other parameters.
OK, I promised to get to the .bindings file. I said previously that the JNDI lookup was against "a registry" and that usually means LDAP or similar. But Sun engineered JNDI like JMS in that there is an API that your program uses and an SPI or Service Provider Interface that is used by the registry. So, although JNDI can be implemented in LDAP, there is nothing that says it must be implemented in LDAP. One of the base implementations that Sun provided right out of the box was to use the local filesystem as the registry. In this implementation, the root context is a file folder. Each context can store either another sub-context (another file folder) or object definitions. Typically there is one folder for the root context and all of the objects are defined at that level. The file that holds the object definitions is...you guessed it... the .bindings file.
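To make that concrete, here is a minimal lookup sketch against the file-system provider (the directory path and the object name myQCF are placeholders for whatever your JMSAdmin setup defined):

    import java.util.Hashtable;
    import javax.jms.ConnectionFactory;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class BindingsLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            // the file-system JNDI provider; PROVIDER_URL points at the
            // folder that contains the .bindings file (path is a placeholder)
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.sun.jndi.fscontext.RefFSContextFactory");
            env.put(Context.PROVIDER_URL, "file:///var/mqm/jndi");
            Context ctx = new InitialContext(env);
            // "myQCF" is a hypothetical administered-object name
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("myQCF");
            System.out.println("Looked up: " + cf);
        }
    }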
The objects in the .bindings file are represented as Name/Type/Value triplets. Each .bindings file typically holds many objects, each object has many attributes, and each attribute has a name, a value, and the type of variable that holds the value. The best way to get a handle on the .bindings file is to sort it, which puts each object's attributes together and makes the file more human-readable. For a list of possible properties, see the manual.
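As a hypothetical excerpt (the object name, attribute indices and values are invented for illustration), a sorted fragment for a queue connection factory looks roughly like this:

    myQCF/ClassName=com.ibm.mq.jms.MQQueueConnectionFactory
    myQCF/RefAddr/0/Type=VER
    myQCF/RefAddr/0/Encoding=String
    myQCF/RefAddr/0/Content=7
    myQCF/RefAddr/1/Type=TRAN
    myQCF/RefAddr/1/Content=1
    myQCF/RefAddr/2/Type=QMGR
    myQCF/RefAddr/2/Content=QM1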
Of course, the .bindings file is supposed to be a compiled artifact and not intended to be human readable. IBM provides the JMSAdmin tool to generate and read the .bindings file. You can also use WMQ Explorer to manage the administered objects in a .bindings file. These are also discussed in the manual linked above. There is also a (some say) good tutorial in developerWorks here.