Understanding MQ Series bindings files

Our Java app writes to MQ Series queues via a Weblogic JMS Message Bridge. The actual MQ Series connection/queue details are stored in the MQ Series .bindings file on the app server. I've never really got my head around the bindings file and what all the entries mean. Can anyone provide guidance to understand this file?

Before addressing the .bindings file, we need to step back a bit and look at JNDI - the Java Naming and Directory Interface - and how it is used by JMS. The Queue, Topic and various types of Connection Factory are all run-time JMS objects with methods and attributes. But you can pre-define them and store them in a registry where the JMS application can retrieve them using JNDI lookups.
This is helpful because the objects are like coins in that they have a JMS side and a provider-specific side. On the JMS side, any administered object looks about the same. Regardless of the underlying transport provider, a ConnectionFactory has the same methods and attributes. However, on the provider-specific side, the administered objects look very different from one transport provider to the next. For example, the ConnectionFactory used with a WebSphere MQ transport will have an attribute for the Queue Manager. No other transport provider has a "queue manager" so this attribute is only valid in a WMQ context.
The two aspects of administered objects are the "glue" that allows JMS to work independently of transport provider. In your code you just have to look up a ConnectionFactory and you get an object suitable to perform method calls against. Under the covers, the provider's JMS classes use the provider-specific object attributes to supply context to convert the generic JMS API calls into provider-specific calls. Thus the connection object that you instantiate results in a WMQ CONNECT call which specifies a QMgr name, host, port, channel and a variety of other parameters.
OK, I promised to get to the .bindings file. I said previously that the JNDI lookup was against "a registry" and that usually means LDAP or similar. But Sun engineered JNDI like JMS in that there is an API that your program uses and an SPI or Service Provider Interface that is used by the registry. So, although JNDI can be implemented in LDAP, there is nothing that says it must be implemented in LDAP. One of the base implementations that Sun provided right out of the box was to use the local filesystem as the registry. In this implementation, the root context is a file folder. Each context can store either another sub-context (another file folder) or object definitions. Typically there is one folder for the root context and all of the objects are defined at that level. The file that holds the object definitions is...you guessed it... the .bindings file.
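To make this concrete, here is a minimal sketch (not from the original answer) of a JMS client reaching the objects stored in a .bindings file through the file-system JNDI provider. The directory /var/mqm/jndi and the object names myQCF and myQueue are assumptions made up for the example; com.sun.jndi.fscontext.RefFSContextFactory is the file-system provider class that ships with the WMQ JMS support jars.

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class BindingsLookup {
    public static void main(String[] args) throws Exception {
        // Point JNDI at the folder that contains the .bindings file.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.fscontext.RefFSContextFactory");
        env.put(Context.PROVIDER_URL, "file:///var/mqm/jndi");
        Context ctx = new InitialContext(env);

        // Purely JMS from here on: the provider-specific side of the objects
        // (queue manager, host, port, channel, ...) lives in the .bindings file.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("myQCF");
        Queue queue = (Queue) ctx.lookup("myQueue");

        Connection conn = cf.createConnection();
        try {
            conn.start();
            // ... create a session, producer/consumer, send or receive messages ...
        } finally {
            conn.close();
        }
    }
}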
The objects in the .bindings file are represented in Name/Type/Value triplets. So each .bindings file typically has many objects. Each object has many attributes. Each attribute has a name, a value and the type of variable that holds the value. The best way to get a handle on the .bindings file is to sort it which will put all the objects and their attributes together and make it more human-readable. For a list of possible properties, see the manual.
Of course, the .bindings file is supposed to be a compiled artifact and not intended to be human readable. IBM provides the JMSAdmin tool to generate and read the .bindings file. You can also use WMQ Explorer to manage the administered objects in a .bindings file. These are also discussed in the manual linked above. There is also a (some say) good tutorial in developerWorks here.

Related

Run Docker containers dynamically according to DB?

I'm developing an app which live-streams video and/or audio from different entities. Those entities' IDs and configurations are stored as records in my DB. My app's current architecture is something like the following:
a CRUD API endpoint for system-wide functionalities, such as logging in or editing an entity's configuration.
N other endpoints (where N is the number of entities, and every endpoint's route is defined by the specific entity's ID, like so: "/:id/api/") for each entity's specific functionality. Each entity is loaded by the app on initialization. Those endpoints act as both a REST API handler and a WebSocket server for live-streaming media received from the backend configured for that entity.
On top of that, there's an NGINX instance which acts as a proxy and hosts our client files.
Obviously, this isn't very scalable at the moment (a single server instance handles an ever-growing number of entities) and requires restarting my server instance when adding/deleting an entity, which isn't ideal. I was thinking of splitting my app's server into micro-services: one for system-wide CRUD, and N others for each entity defined in my DB. Ultimately, I'd like those micro-services to be run as Docker containers. The problems (or questions to which I don't know the answers) I'm facing at the moment are:
How does one run Docker containers dynamically, according to a DB (or programmatically)? Is it even possible?
How does one update the running Docker container to be able to reconfigure that entity during run-time?
How would one even configure NGINX to proxy those dynamic micro-services? I'm guessing I'll have to use something like Consul?
I'm not very knowledgeable, so pardon me if I'm too naive to think I can achieve such architecture. Also, if you can think of a better architecture, I'd love to hear your suggestions.
Thanks!

Partitioning a Mosquitto MQTT Server

I want to create isolated environments on a single MQTT server, just as a database server can have multiple schemas and table names can be repeated in different schemas. I want to have an "MQTT schema" where topics/subscriptions in one schema are isolated from those in another "MQTT schema", so that the same topic can be used in different schemas. It would be even better if security could be applied on a per-schema basis, but that would be asking for a lot. Right now, I am just looking for a way to have isolated environments on the same server - it will probably require a separate TCP port per schema just to identify the destination schema of a connecting client, as the protocol itself does not have any concept of schema. Or the clients can be mapped to a particular schema based on the username or client ID.
Note: I am aware of how to use ACLs to restrict topic access for each user. ACLs do not solve this problem. I don't simply want to restrict topic access; I want to create a separate environment where users are free to do what they want with the topics without me telling them which topic names they cannot use, etc.
The other option is the mount_point configuration option that can be used with a listener declaration (man page).
mount_point topic prefix
This option is used with the listener option to isolate groups of clients. When a client connects to a listener which uses this option, the string argument is attached to the start of all topics for this client. This prefix is removed when any messages are sent to the client. This means a client connected to a listener with mount point example can only see messages that are published in the topic hierarchy example and above.
The difference between this and the other option (docker container) is that you can have listener declarations that can see all the traffic of all the different partitions by having a listener with no mount point.
Just use a docker container running mosquitto and spin up new instances for each schema. Map each instance to a separate external port. Total isolation, and if you include the auth plugin you can map the security to a separate db table for each schema with environment variables.

Programmatically getting the stream name

System Information
Spring Cloud Data Flow Cloud Foundry: v1.1.0.RELEASE
Pivotal Cloud Foundry: v1.7.12
CF Client (Windows): cf.exe version 6.23.1+a70deb3.2017-01-13
cf-v3-plugin: 0.6.7
I would like to inject the stream name into a bean defined in my custom source module. From reviewing the /env end-point of a deployed stream I found the SPRING_CLOUD_APPLICATION_GROUP system property so I've injected this into my bean like so.
/**
* application name
*/
#Value("#{ systemProperties['SPRING_CLOUD_APPLICATION_GROUP'] }")
private String applicationName;
The issue here is that this appears to be tied to the Cloud Foundry deployer, which from my perspective is not good for portability.
In Spring XD the xd.stream.name placeholder existed for this purpose.
Is there any way to do this that is portable?
Thank you
All deployer implementations should honor this variable name, so you should be good to go.
There is no strong requirement that this is passed as an environment variable though (your code assumes system property, not even sure it works, does it?). Using the Spring Environment abstraction is the best way to stay portable here.
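For example, here is a hedged sketch of reading the group/stream name through the Environment abstraction rather than systemProperties. The property key relies on Spring Boot's relaxed binding of the SPRING_CLOUD_APPLICATION_GROUP environment variable; the bean itself is invented for illustration.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class StreamNameAware {

    // Relaxed binding maps the SPRING_CLOUD_APPLICATION_GROUP environment variable
    // onto this key; a -Dspring.cloud.application.group system property works too.
    // The empty default keeps the module usable when run standalone.
    @Value("${spring.cloud.application.group:}")
    private String streamName;

    private final Environment environment;

    public StreamNameAware(Environment environment) {
        this.environment = environment;
    }

    public String streamName() {
        // Equivalent programmatic lookup through the Environment abstraction.
        return environment.getProperty("spring.cloud.application.group", streamName);
    }
}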

OData on top of 2+ OData Feeds

Say I have the following model
I would like to present a unified front for these OData feeds to my clients.
Is there a nice way with OData to do this? Or should I just take IQueryables from the OData feeds and make a reflection endpoint on top of these?
If I use the reflection stuff on top of the OData that talks to the database (via Entity Framework) what kind of problems am I going to encounter?
I would not use the reflection provider over the client library, mainly because the client library LINQ provider doesn't support all the constructs used by the server. As a result some queries would simply not work at all (projections and expansions usually get broken).
Assuming you don't want to create any associations between the databases, you should be able to simply point the users at the right service. You can still expose something which looks like a unified endpoint without the need of having the same URL for all of them.
The main idea is that you unify the $metadata (if your model is static you can do this manually, if not you should be able to write some kind of "merge" tool pretty easily) and then provide a service document which points to the respective URLs for each entity set. In the WCF Data Services client, there's now support for these kind of services through entity set resolver: http://blogs.msdn.com/b/astoriateam/archive/2010/11/29/entity-set-resolver.aspx
The latest CTP with that support is here: http://blogs.msdn.com/b/astoriateam/archive/2011/06/30/announcing-wcf-data-services-june-2011-ctp-for-net4-amp-sl4.aspx
Not happy with the current accepted answer for this question; for me it's more of an anti-answer, of what not to do. My solution here applies as much today as it did in '11.
To support a tenancy scenario where each user's/client's data will always reside in the same database, and the data schemas all match, all you need to do is change the connection string when the data context is instantiated.
Another term for this concept is sharding. MS has some tools and APIs that can help; this is a simple enough walkthrough: Azure SQL Database Elastic database tools: Shard Elasticity. But you can do this pretty easily from first principles.
If swapping out the connection string will work for your scenario, we need to identify the mechanism you will use to determine the connection string. There are two common solutions to this:
The simple way out is to use fixed host headers, a route, or a token in each request to the service; then you can hardcode the logic for determining the connection string without complicated mapping logic.
Use a master / header / mapping DB to store your configuration.
This database has a separate schema whose primary purpose is retrieving the correct connection string for each request.
In most cases we combine this with the Authentication process, in which case
you keep the authentication in this central database, not in the individual databases.
In terms of the OData Controller, even with WCF Data Services, you just need to implement your logic for retrieving the connection string and use that when you instantiate your data context.
Of course, this doesn't help you if your client's data is spread across multiple databases, but it is a pretty common pattern for scaling out large databases without having to deploy a new farm of services for each database.

What is JNDI? What is its basic use? When is it used?

What is JNDI?
What is its basic use?
When is it used?
What is JNDI ?
It stands for Java Naming and Directory Interface.
What is its basic use?
JNDI allows distributed applications to look up services in an abstract, resource-independent way.
When is it used?
The most common use case is to set up a database connection pool on a Java EE application server. Any application that's deployed on that server can gain access to the connections they need using the JNDI name java:comp/env/FooBarPool without having to know the details about the connection.
This has several advantages:
If you have a deployment sequence where apps move from devl->int->test->prod environments, you can use the same JNDI name in each environment and hide the actual database being used. Applications don't have to change as they migrate between environments.
You can minimize the number of folks who need to know the credentials for accessing a production database. Only the Java EE app server needs to know if you use JNDI.
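For illustration, the lookup in application code boils down to something like the following minimal sketch. The pool name java:comp/env/FooBarPool comes from the example above; the surrounding class is invented.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustomerDao {

    public Connection openConnection() throws NamingException, SQLException {
        // The app server binds the pool under this name; the application never
        // sees the JDBC URL, the credentials, or which physical database is behind it.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/FooBarPool");
        return ds.getConnection();
    }
}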
What is JNDI ?
The Java Naming and Directory Interface (JNDI) is an application programming interface (API) that provides naming and directory functionality to applications written using the Java programming language. It is defined to be independent of any specific directory service implementation. Thus a variety of directories (new, emerging, and already deployed) can be accessed in a common way.
What is its basic use?
Most of it is covered in the above answer, but I would like to provide the architecture here so that the above will make more sense.
To use the JNDI, you must have the JNDI classes and one or more service providers. The Java 2 SDK, v1.3 includes three service providers for the following naming/directory services:
Lightweight Directory Access Protocol (LDAP)
Common Object Request Broker Architecture (CORBA) Common Object Services (COS) name service
Java Remote Method Invocation (RMI) Registry
So basically you create objects and register them with the directory service, which you can later look up and execute operations on.
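As a small self-contained sketch of that register-then-look-up flow, the following example uses the JDK's RMI registry service provider listed above; the Greeter interface and all names are invented for illustration.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RmiRegistryJndiDemo {

    // The RMI registry provider can only store java.rmi.Remote objects
    // (or JNDI References), so the example object is a tiny remote service.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public static class GreeterImpl implements Greeter {
        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) throws Exception {
        // Start an in-process RMI registry so the sketch runs standalone.
        LocateRegistry.createRegistry(1099);

        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.rmi.registry.RegistryContextFactory");
        env.put(Context.PROVIDER_URL, "rmi://localhost:1099");
        Context ctx = new InitialContext(env);

        // Register (bind) the object under a name...
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterImpl(), 0);
        ctx.rebind("greeter", stub);

        // ...and look it up later, possibly from another JVM.
        Greeter found = (Greeter) ctx.lookup("greeter");
        System.out.println(found.greet("JNDI"));
    }
}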
JNDI in layman's terms is basically an Interface for being able to get instances of internal/External resources such as
javax.sql.DataSource,
javax.jms.ConnectionFactory,
javax.jms.QueueConnectionFactory,
javax.jms.TopicConnectionFactory,
javax.mail.Session, java.net.URL,
javax.resource.cci.ConnectionFactory,
or any other type defined by a JCA resource adapter.
It provides a syntax for accessing them, whether they are internal or external, e.g. (comp/env in this instance means component/environment; there are lots of other prefixes as well):
jndiContext.lookup("java:comp/env/persistence/customerDB");
JNDI Overview
JNDI is an API specified in Java technology that provides naming and directory functionality to applications written in the Java programming language. It is designed especially for the Java platform using Java's object model. Using JNDI, applications based on Java technology can store and retrieve named Java objects of any type. In addition, JNDI provides methods for performing standard directory operations, such as associating attributes with objects and searching for objects using their attributes.
JNDI is also defined independent of any specific naming or directory service implementation. It enables applications to access different, possibly multiple, naming and directory services using a common API. Different naming and directory service providers can be plugged in seamlessly behind this common API. This enables Java technology-based applications to take advantage of information in a variety of existing naming and directory services, such as LDAP, NDS, DNS, and NIS(YP), as well as enabling the applications to coexist with legacy software and systems.
Using JNDI as a tool, you can build new powerful and portable applications that not only take advantage of Java's object model but are also well-integrated with the environment in which they are deployed.
Reference
What is JNDI ?
JNDI stands for Java Naming and Directory Interface. It comes standard with J2EE.
What is its basic use?
With this API, you can access many types of data, like objects, devices, and files of naming and directory services; e.g., it is used by EJB to find remote objects. JNDI is designed to provide a common interface to access existing services like DNS, NDS, LDAP, CORBA and RMI.
When is it used?
You can use JNDI to perform naming operations, including read operations and operations for updating the namespace, such as lookup, bind, rebind, unbind, and list.
I will use one example to explain how JNDI can be used to configure a database connection without any application developer knowing the username and password of the database.
1) We have configured the data source in the JBoss server's standalone-full.xml. Additionally, we can configure pool details.
<datasource jta="false" jndi-name="java:/DEV.DS" pool-name="DEV" enabled="true" use-ccm="false">
<connection-url>jdbc:oracle:thin:#<IP>:1521:DEV</connection-url>
<driver-class>oracle.jdbc.OracleDriver</driver-class>
<driver>oracle</driver>
<security>
<user-name>usname</user-name>
<password>pass</password>
</security>
<security>
<security-domain>encryptedSecurityDomain</security-domain>
</security>
<validation>
<validate-on-match>false</validate-on-match>
<background-validation>false</background-validation>
<background-validation-millis>1</background-validation-millis>
</validation>
<statement>
<prepared-statement-cache-size>0</prepared-statement-cache-size>
<share-prepared-statements>false</share-prepared-statements>
<pool>
<min-pool-size>5</min-pool-size>
<max-pool-size>10</max-pool-size>
</pool>
</statement>
</datasource>
Now, this jndi-name and its associated datasource object will be available to our application.
2) We can retrieve this datasource object using the JndiDataSourceLookup class.
Spring will instantiate the datasource bean once we provide the jndi-name.
Now, we can change the pool size, user name or password as per our environment or requirement, but it will not impact the application.
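For example, a hedged sketch of the Spring side (the configuration class is invented; the JNDI name java:/DEV.DS comes from the XML above):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        // The name below is already fully qualified, so do not prepend java:comp/env/.
        lookup.setResourceRef(false);
        return lookup.getDataSource("java:/DEV.DS");
    }
}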
Note: encryptedSecurityDomain needs to be configured separately in the JBoss server, like this:
<security-domain name="encryptedSecurityDomain" cache-type="default">
    <authentication>
        <login-module code="org.picketbox.datasource.security.SecureIdentityLoginModule" flag="required">
            <module-option name="username" value="<usernamefordb>"/>
            <module-option name="password" value="894c8a6aegc8d028ce169c596d67afd0"/>
        </login-module>
    </authentication>
</security-domain>
This is one of the use cases. Hope it clarifies.
A naming service associates names with objects and finds objects based on their given names. (The RMI registry is a good example of a naming service.) JNDI provides a common interface to many existing naming services, such as LDAP and DNS.
Without JNDI, the location or access information of remote resources would have to be hard-coded in applications or made available in a configuration. Maintaining this information is quite tedious and error prone.
The best explanation to me is given here
What is JNDI
It is an API providing access to a directory service, that is, a service mapping names (strings) to objects, references to remote objects, or simple data. This is called binding. The set of bindings is called the context. Applications use the JNDI interface to access resources.
To put it very simply, it is like a hashmap with a String key and Object values representing resources on the web.
What Issues Does JNDI Solve
Without JNDI, the location or access information of remote resources would have to be hard-coded in applications or made available in a configuration. Maintaining this information is quite tedious and error prone.
If a resource has been relocated to another server, with another IP address, for example, all applications using this resource would have to be updated with this new information. With JNDI, this is not necessary. Only the corresponding resource binding has to be updated. Applications can still access it by its name and the relocation is transparent.
I am just curious why the official docs, which already elaborate the details meticulously, are so ignored.
But if you'd like to understand the cases, please refer to duffymo's answer.
The Java Naming and Directory Interface (JNDI) is an application programming interface (API) that provides naming and directory functionality to applications written using the Java programming language. It is defined to be independent of any specific directory service implementation. Thus a variety of directories--new, emerging, and already deployed--can be accessed in a common way.
The official docs also illustrate its architecture and how you normally use it.
The Java Naming and Directory Interface (JNDI) is an application programming interface (API) that provides naming and directory functionality to applications written using the Java programming language. It is defined to be independent of any specific directory service implementation. Thus a variety of directories--new, emerging, and already deployed--can be accessed in a common way.
While JNDI plays less of a role in lightweight, containerized Java applications such as Spring Boot, there are other uses. Three Java technologies that still use JNDI are JDBC, EJB, and JMS. All have a wide array of uses across Java enterprise applications.
For example, a separate DevOps team may manage environment variables such as username and password for a sensitive database connection in all environments. A JNDI resource can be created in the web application container, with JNDI used as a layer of consistent abstraction that works in all environments.
This setup allows developers to create and control a local definition for development purposes while connecting to sensitive resources in a production environment through the same JNDI name.
Reference: https://docs.oracle.com/javase/tutorial/jndi/overview/index.html
