This is a basic question. I want to apply for an entry-level Java developer position with the following requirement:
Familiarity with the SailPoint IdentityIQ standard adapters/connectors
By standard connectors do they basically mean how Sailpoint exchanges data with third party tools? And by adapter do they mean that the adapter pattern would be used? Thanks
This is probably going to appear well after your interview, but to answer the question:
1) Standard adapters/connectors:
SailPoint ships with a "standard" set of connectors that are included in the purchase price; there are others (e.g. Epic) that do not ship as part of the standard product and must be enabled. To give you a deeper view into connectors:
Connectivity Methods:
Direct Connectivity - This is where a connector communicates directly with a system using APIs or data sources. Some advantages of direct connectivity are that you don't have to generate or transmit files, and you can be more efficient by processing only things that have changed. Some disadvantages are that direct connections are subject to availability and downtime concerns like any connected system, and they inherit whatever advantages and disadvantages the underlying APIs impose.
Some people also refer to this as an 'online' method of connectivity.
File-Based Connectivity - This is where a connector reads from a snapshot of data presented in a file, rather than connecting directly to the system. Some advantages of using a file are that files are portable, easily inspected for data issues, and not typically subject to availability concerns. Some disadvantages are that files are usually processed in their entirety, and may require processing or transformation in order to work effectively.
Some people also refer to this as a 'decoupled' or 'offline' method of connectivity.
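As a rough illustration of the file-based approach, here is a minimal Java sketch that parses a comma-delimited account snapshot; the file layout and column names (accountId, displayName, ...) are assumptions for the example, not anything IdentityIQ prescribes.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DelimitedSnapshotReader {

    // Reads a comma-delimited snapshot whose first line is a header row,
    // e.g. "accountId,displayName,email,enabled", and returns one map per data row.
    public static List<Map<String, String>> read(Path snapshot) throws IOException {
        List<Map<String, String>> accounts = new ArrayList<>();
        try (BufferedReader reader = Files.newBufferedReader(snapshot)) {
            String headerLine = reader.readLine();
            if (headerLine == null) {
                return accounts; // empty file, nothing to aggregate
            }
            String[] header = headerLine.split(",");
            String line;
            while ((line = reader.readLine()) != null) {
                String[] fields = line.split(",", -1);
                Map<String, String> account = new LinkedHashMap<>();
                for (int i = 0; i < header.length && i < fields.length; i++) {
                    account.put(header[i].trim(), fields[i].trim());
                }
                accounts.add(account);
            }
        }
        return accounts;
    }
}

Note how the whole file is read on every run, which is exactly the "processed in their entirety" trade-off mentioned above.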
Connector Implementations
Source-Specific Implementation - These are connectors built with a specific target system in mind. They typically use APIs specific to the system they integrate with. Because the systems and APIs are known, these typically require less configuration to get working.
Examples of these are Active Directory, Workday, Salesforce, SAP, etc.
General Implementation - These are general-purpose connectors which can be used to connect to a variety of sources or systems. These tend to be more flexible in general, but typically do require a bit more setup and configuration to meet needs.
Examples of these are Web Services, SCIM, JDBC, Delimited Files, etc.
Custom Implementation - These are completely custom connectors, tailored to the system and API of your choice. This approach offers the most flexibility of all connector options; however, building custom connectors is definitely a development-level activity and is not to be taken lightly. The code written for custom connectors is maintained and supported by the customer who owns the connector.
Examples of these are custom in-house applications, etc.
Understanding these connector implementations is important, because if a source-specific implementation isn't available, another general or custom connector implementation may be used instead.
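To make the general-purpose case concrete, here is a minimal sketch of a JDBC-style read in plain Java (not IdentityIQ's own connector API); the connection URL, credentials, and table/column names are assumptions for illustration only.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class JdbcAccountAggregation {

    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; a real deployment would pull these from configuration.
        String url = "jdbc:postgresql://hr-db.example.com:5432/hr";
        try (Connection conn = DriverManager.getConnection(url, "iiq_reader", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT user_id, display_name, department, active FROM app_users");
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                // A real connector would map each row to an account object;
                // here we just print the values.
                System.out.printf("%s | %s | %s | active=%b%n",
                        rs.getString("user_id"),
                        rs.getString("display_name"),
                        rs.getString("department"),
                        rs.getBoolean("active"));
            }
        }
    }
}

The point of a general-purpose connector is that the same mechanism works against any system for which a driver exists; the extra setup is mostly in supplying the query and column mapping yourself.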
Related
How do AppDynamics and similar products retrieve data from apps? I read somewhere here on SO that it is based on bytecode injection, but is there some official or reliable source for this information?
Data retrieval by APM tools is done in several ways, each with its own pros and cons:
Bytecode injection (for both Java and .NET) is one technique, which is somewhat intrusive but allows you to get data from places the application owner (or even third-party frameworks) did not intend to expose; a minimal sketch follows this list.
Native function interception is similar to bytecode injection, but allows you to intercept unmanaged code
Application plugins - some applications (e.g. Apache, IIS) give access to monitoring and application information via well-documented APIs and a plugin architecture
Network sniffing allows you to see all the communication to/from the monitored machine
OS-specific documented (and undocumented) APIs - just like application plugins, but at the Windows/*nix operating-system level
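For the bytecode-injection item above, here is a minimal Java sketch using the standard java.lang.instrument API; a real APM agent would rewrite method bodies (typically with a bytecode library such as ASM or Byte Buddy), whereas this skeleton only shows where that hook lives, and the com/example/ package filter is just a placeholder.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Packaged in a jar whose manifest declares "Premain-Class: ApmAgent" and loaded with:
//   java -javaagent:apm-agent.jar -jar yourapp.jar
public class ApmAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // An APM tool would inspect classfileBuffer here and inject timing/tracing
                // calls into interesting methods before returning the modified bytes.
                // Returning null means "leave this class unchanged".
                if (className != null && className.startsWith("com/example/")) {
                    System.out.println("Would instrument: " + className);
                }
                return null;
            }
        });
    }
}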
Disclaimer: I work for Correlsense, provider of APM software SharePath, which uses all of the above methods to give you complete end-to-end transaction visibility.
Is there any decent alternative to OPC-UA as a solution for accessing process data of a system composed of various PLCs? Something that is platform-independent and can "speak" with products of different brands?
I've heard of MQTT but it seems to be much more like a transport protocol, and only that. It does not have all the higher level stuff like the information modeling, etc.
Thanks for your help!
OPC is the only standard way of communicating with PLCs. OPC DA is the old alternative; OPC UA is the new one and the recommended choice nowadays. Before OPC there were just proprietary protocols and shared protocols like Modbus, but those are just lower-level transport protocols, as you've mentioned.
OPC UA is fairly unique in its Information Modeling, especially. That feature enables new communication possibilities for higher-level systems and applications as well, in addition to plain PLC communication.
Note that some PLCs can also talk OPC UA natively, which makes it a standard in that way.
And OPC UA is genuinely standardised as IEC 62541, ensuring that it is vendor- and platform-independent.
Update 17/07/19: OPC UA is now also defined as the Industry 4.0 communication standard, as I wrote in my recent article.
Update 20/05/05: OPC UA version 1.04 defines Pub/Sub alternatives, using UDP for secure data multicast in local networks and AMQP/MQTT for secure broker-based data and event delivery to cloud systems. Version 1.04 also defines a WebSocket/JSON protocol alternative, which enables easier usage in web applications. None of these are broadly available yet, but hopefully they will become popular in the 2020-21 time frame.
OPC-UA has some very interesting parts, especially concerning information modelling, interoperability and the publish/subscribe pattern.
However, even though it's a standard in the strictest of senses, I've found that to use it in a webapp you need to code a gateway server, because it uses raw sockets and a binary (although fast) serialization protocol.
This is why we created an alternative protocol called Woopsa at my university. We decided to base it on HTTP + JSON. We tried to make a protocol that's similar to OPC-UA: it has Information Modelling, publish/subscribe, and even multi-requests. It's all completely open-source.
We've just released version 1.0 here: http://www.woopsa.org/
You can get the source code directly on our GitHub: https://github.com/woopsa-protocol/Woopsa
Basically, our protocol is just a standardized RESTful API using HTTP+JSON. For example, you can read a value by making a GET /woopsa/read/Temperature request, and it will reply in JSON:
{"Value":24.2,"Type":"Real"}
You can also get the object tree by using the meta word, for example GET /woopsa/meta/, which will give you something like this:
{
  "Name": "WeatherStation",
  "Properties": [
    {"Name": "Temperature", "Type": "Real"},
    ...
  ],
  "Methods": { ... },
  "Items": [
    "Thermostat",
    ...
  ]
}
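As a rough sketch of how little client code an HTTP+JSON protocol like this needs, here is the read call above issued from Java 11's built-in HttpClient; the host and port are assumptions, and the response is simply printed rather than parsed.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WoopsaReadExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Assumed server address; the path mirrors the /woopsa/read example above.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/woopsa/read/Temperature"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Expected body, per the example above: {"Value":24.2,"Type":"Real"}
        System.out.println(response.body());
    }
}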
In a practical industrial application, MQTT is not an alternative to OPC-UA. The original goal of OPC, back in the '90s, was to provide a standard communication mechanism and data model that would provide interoperability among clients and servers that implemented the specification. OPC-UA expands and generalizes the data model and the communication without giving up on that core goal. In order to do this, the standard must specify things like the format of a time stamp, the encoding of data types, historical values, alarms, etc.
MQTT is a message transport layer that does not provide interoperability by design. It does not stipulate the format of the payload, does not specify how one transmits a particular data type, timestamp, value, hierarchy, or anything else that would allow an application to understand the data being transmitted. You can create a valid MQTT server that emits XML, JSON, or custom formatted data that is plain-text, encrypted, base-64 encoded, or anything else you like. The only way a client application can interact with your server is by knowing in advance what data format the server will produce and accept.
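To illustrate that point, the following Java sketch (using the Eclipse Paho MQTT client) publishes a JSON payload; the broker address, topic name, and payload shape are entirely the publisher's invention, which is exactly why a subscriber must know the format in advance.

import java.nio.charset.StandardCharsets;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttPayloadExample {

    public static void main(String[] args) throws MqttException {
        // Assumed broker and client id; MQTT itself imposes neither.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "demo-publisher");
        client.connect();

        // The payload is opaque bytes to MQTT: this could just as well be XML,
        // protobuf, or a proprietary binary format.
        String json = "{\"tag\":\"tank1.level\",\"value\":42.5,\"ts\":\"2020-01-01T00:00:00Z\"}";
        MqttMessage message = new MqttMessage(json.getBytes(StandardCharsets.UTF_8));
        message.setQos(1);
        client.publish("plant/tank1/level", message);

        client.disconnect();
        client.close();
    }
}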
OPC-UA has recently introduced a publish/subscribe mechanism to improve bandwidth utilization, reducing a communication bandwidth advantage that MQTT currently offers. At the same time, the MQTT specification will need to grow to specify data formats in order to promote interoperability. Expect to see a convergence of functionality between MQTT and OPC-UA, mostly MQTT growing to meet OPC-UA.
MQTT is a much simpler implementation at the moment, which holds advantages for embedded and resource-constrained systems. The addition of a data modeling specification would act to reduce this advantage.
The bottom line is that OPC-UA is for interoperability and MQTT is for simple custom communication. MQTT needs to grow before it can be an alternative to OPC-UA.
MQTT is growing in popularity as the protocol of choice for IoT. It does have its shortcomings; however, its simplicity is often seen as a strength, whereas OPC UA carries the overhead of design by committee.
If you need to combine the two, you may like to consider trying our simple gateway mqtt2opcua
Unserver is a product designed to solve the exact problem described in this question.
It is capable of talking to different field devices and providing a unified HTTP API on top of them.
It integrates with devices via Modbus RTU, but other common protocols will be added in the future.
In short, first you configure a data 'tag' like this:
{
  "name": "tank1",
  "device": "plc1",
  "properties": [
    {
      "name": "level",
      "address": "HR0",
      "type": "numeric",
      "raw": "int16"
    }
  ]
}
Then you can work with the tag using an API endpoint created automatically:
GET http://localhost:9000/tags/tank1
{
  "data": {
    "level": 1
  }
}
Check out the documentation for more info.
The product is free for evaluation and non-commercial use.
Disclaimer: I'm part of the team. Hope this is useful.
I just released another approach to this challenge. The project is called ELTRA IoT.
It's a cloud service acting as a mediator, plus end-user components that act as the device representation or the operator interface (https://www.eltra.ch/).
Primarily, it was created to simplify integration of CANopen devices with smartphone applications, but I quickly realized that it can be used for any IoT project.
This project is inspired mainly by CANopen and FDT architecture.
The first idea was to deliver a solution that lets you bring your device onto the internet in a short time using web standards like REST/JSON (avoiding binary protocols, gateways, firewall and proxy issues, and all the other stuff that makes the whole process more complicated).
Web standards like HTTP/REST/JSON/WebSocket play well with all operating systems and architectures, and also allow easy end-user app integration in any modern language.
Main features:
Same API both sides (device and operator)
CANopen CiA-311 data model representation
Nodes, object dictionary, index, subindex, strong data typing, ranges, etc. If you know CANopen, you are at home.
History data
RPC support - custom commands execution
Simple cloud service API https://eltra.ch/docs
Standard authentication scheme
SSL encryption
Cross-platform solution for Windows, Linux, Android, iPhone, Raspberry Pi
SDK is available as open source on Github:
https://github.com/eltra-ch/eltra-sdk
At the moment, the library is implemented in .NET Standard and tested on Windows, Linux (x64 and ARM32), Android, and iPhone.
Nuget package is available under:
https://www.nuget.org/packages/Eltra.Connector/
If the complexity of OPC UA is overkill and Woopsa doesn't fit your design, then ELTRA could be an alternative.
Disclaimer: This project is part of my master's thesis, and the eltra.ch service is my privately owned website.
I need to make a recommendation on approaches for allowing web service (WCF) documentation (WSDL, schemas, locations, etc.) to be stored and found. Being able to monitor the services would be a definite bonus.
This needs to be considered in the wider context of moving to an SOA built, where possible, with Microsoft technologies that should be accessible by clients from other frameworks. The aim is to develop a system in which clients do not need to change if a service is moved or new versions are brought online - it should be possible to write the client 'knowing' just one address / location which is capable of directing them appropriately.
Having a central location for the service documentation is important too; our Business Analysts should be able to find all they need to about the services we provide from a central place. We would also want (potentially) to expose that repository of service information to partners as well. I know we could generate wsdls and manually manage them (create a folder somewhere and zip them up before sending them out) but that seems very labour intensive and prone to error (on my part).
As I see it at the moment there are two broad approaches:
Write something bespoke that uses WS-Discovery and a dynamic routing service which can respond to the client requests.
Get an off-the-shelf solution.
I have to say that an off-the-shelf solution is the most likely approach to be accepted, but I have to at least consider the alternatives. For the off-the-shelf solutions I have identified
BizTalk
WSO2 ESB and WSO2 Governance Registry
as possibly providing the features.
What I need to know
Am I right with my understanding of the broad approaches?
Are there any other approaches I should consider evaluating?
Specifically I also need to know pros and cons of any approach I consider and have an idea of how it could be implemented.
To start with, I would definitely not go with BizTalk or any WS-Whatever SOAP-based protocol.
Go simpler and you'll be a happy man in the end.
For the middleware I would go with MassTransit, or if you prefer, NServiceBus, which I'm not a big fan of, but which provides another level of enterprise support. If you choose to go with event-driven SOA, you'd get async operations as a bonus.
With the middleware layer defined, it is time to define the API layer. I would not expose my services to the outside world; if the middleware is event-based, the services within it can only respond to events placed on the bus. So I would use ASP.NET Web API with a REST interface to accept requests from the outside, and based on the request type create the related message (command) and place it on the bus.
Way too high-level, but I hope it helps.
I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over http won't work in this situation and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types. But it would have to be re-implemented in a few different languages for this project, and support Unicode. I have read plenty (and done some) with handling sockets, but don't have much knowledge in handling a protocol I create. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XML-RPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
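As a rough sketch of what that looks like in Java, assume a message type has been generated from a small .proto file; the SensorReading name, fields, and package below are hypothetical, but the newBuilder/toByteArray/parseFrom pattern is how the generated code is used.

// Hypothetical message generated with protoc --java_out=... from a .proto such as:
//   message SensorReading { string sensor_id = 1; double value = 2; int64 timestamp_ms = 3; }
import com.example.readings.SensorReading;

public class ProtobufExample {

    public static void main(String[] args) throws Exception {
        // Build and serialize on the sending side.
        SensorReading reading = SensorReading.newBuilder()
                .setSensorId("tank1.level")
                .setValue(42.5)
                .setTimestampMs(System.currentTimeMillis())
                .build();
        byte[] wireBytes = reading.toByteArray();

        // Parse on the receiving side -- possibly in a different language,
        // since protoc generates matching code for C++, Python, Go, and others.
        SensorReading parsed = SensorReading.parseFrom(wireBytes);
        System.out.println(parsed.getSensorId() + " = " + parsed.getValue());
    }
}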
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you are wishing to connect span more than a single platform and/or technology you are probably better using an existing transport mechanism and protocol to maximize the chance your base platform will already have a library (or multiple) to interact over it. Also, integrating security and other features in a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I've less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment and security can be convoluted (particularly if you are passing objects between different languages).
With regards to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern, a kind of don't-call-us-we'll-call-you approach; a minimal sketch follows). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN will have different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final selected technology profile.
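For the observer-pattern remark, here is a minimal, framework-free Java sketch of the idea: consumers register interest once and are called back when data changes instead of polling for it; all the names here are illustrative.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ObserverSketch {

    // Consumers implement this to be told about new data instead of asking for it.
    interface DataListener {
        void onData(String key, double value);
    }

    // The "subject": services subscribe once, then get called back on every update.
    static class DataFeed {
        private final List<DataListener> listeners = new CopyOnWriteArrayList<>();

        void subscribe(DataListener listener) {
            listeners.add(listener);
        }

        void publish(String key, double value) {
            for (DataListener listener : listeners) {
                listener.onData(key, value);
            }
        }
    }

    public static void main(String[] args) {
        DataFeed feed = new DataFeed();
        feed.subscribe((key, value) -> System.out.println(key + " -> " + value));
        feed.publish("tank1.level", 42.5);
    }
}

Over a network, the same shape shows up as callbacks, long polling, or publish/subscribe middleware rather than in-process listeners.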
The only other major consideration (that I can think of just now... ;-)) would be to consider the type of data you need to pass about. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML or JSON or bJSON) do the trick? You mention "less than ten message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great, but commercial projects usually benefit from well-known behaviors, even if they aren't the coolest or slickest (kind of the "as long as it works..." approach).
hth!
We've got a bunch of data that we'd like to expose to the world, hosted on an ASP.NET MVC website. I'd like to ensure that we deliver it using technology that is easy for end developers to implement and not tied to any particular platform, rather than using technology that is unpopular/incompatible with developers.
The kind of requests we expect are mainly to retrieve search results (not many parameters), but down the line we'd like to be able to provide catalogue lookups and the like, which may be more complex.
Bearing this in mind, what is the preferred means of doing this?
Windows Communication Foundation can be used to create both SOAP services (great if your consumers are businesses, using Visual Studio/.NET or Java) or REST services (for people on other platforms). Those are the preferred means of exposing public APIs.
If you want maximum exposure, probably best to use the REST approach, since it is easier to consume from "web" languages like JavaScript. Microsoft has extensive resources on putting together a REST API using WCF.
Honestly, for the kinds of requests you say you need to handle, which all seem to be looking up data as opposed to modifying it, the difference is almost trivial - you can switch from SOAP to REST simply by changing a few attributes/configuration options and you could technically even host both at the same time using very little additional code. As long as you stick to WCF and don't use outdated technology like ASMX/WSE then you will be fine.
Reasons to use REST:
Consumable from almost anywhere (including JavaScript, RSS readers, etc.);
It's popular (in use by Google, Twitter, etc.)
Supports many different data formats (JSON, Atom, etc.)
Reasons to use SOAP:
Standardized security protocol (encryption, non-repudiation, etc.)
Distributed transactions
Message Queuing
That's not an exhaustive list but it should give you an idea of who the target markets are for each. If you're hosting a very open, very public site designed to be consumed by anyone and everyone, go with REST. If the service is part of a business system and you need to guarantee reliability, security, and consistency of data, you'll want to go with SOAP. Choose the appropriate technology based on your target market.
Create a RESTful API. As a developer who often consumes web services, it's what I would expect and prefer.
Many popular services (digg/twitter/netflix/google) are moving to REST over SOAP, so you would be wise to follow suit.
If you do create a REST API you should also create a WADL file. It's WSDL for REST. They're not well supported yet, but they're not hard to create and they'll become more useful as support increases.
You will want to check out OData. Look at odata.org and live.visitmix.com/videos.
This will give you REST access, metadata support like in SOAP, and interoperability with the whole Office stack; and if you are using WCF Data Services you can implement it in a matter of hours, days at most.
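As a rough sketch of why OData is easy to consume from any platform, here is a plain Java HTTP call using standard OData query options; the service root and the Products entity set are hypothetical.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ODataQueryExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Standard OData system query options: $filter, $orderby, $top.
        // The service root (/odata) and entity set (Products) are made up for this example.
        String url = "http://services.example.com/odata/Products"
                + "?$filter=" + URLEncoder.encode("Price gt 10", StandardCharsets.UTF_8)
                + "&$orderby=Name&$top=5";

        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());

        // The same service also describes its model at .../odata/$metadata,
        // which is where the "metadata support like in SOAP" comes from.
    }
}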
Take a look at netflix.com, they have done it right (IMHO).