S7-1500 communication with SCADA system

I'm working with a client who is in the process of installing a smart waste solution in a new building. The solution will use an S7-1500 PLC with ET 200SP stations as remote I/O units. Communication between the PLC and the I/O units will use Modbus master/slave over CM PtP (RS485) modules. This system also needs to communicate with a top system (central operations) based on Schneider technology that runs Modbus.
My question: what is the best way to set up the communication between the PLC and the top system?
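For context, one common route on the S7-1500 is to carry the SCADA link over Modbus TCP on the PLC's Ethernet interface (the MB_SERVER / MB_CLIENT instructions in TIA Portal), so the Schneider top system can poll the PLC directly; if the top system only speaks Modbus RTU over RS485, an additional CM PtP channel configured as a Modbus slave is the usual serial alternative. As a rough sketch of what such a Modbus TCP poll looks like from the top-system side, here is a raw-socket read of holding registers in C; the PLC IP address, unit ID and register range are placeholder assumptions, not values from the question.

    /* Minimal Modbus TCP poll: read holding registers from a server (sketch).
     * Placeholders: server at 192.168.0.10:502, unit ID 1, 4 registers
     * starting at address 0. Error handling kept deliberately brief. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = { .sin_family = AF_INET, .sin_port = htons(502) };
        inet_pton(AF_INET, "192.168.0.10", &srv.sin_addr);   /* placeholder PLC IP */
        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect");
            return 1;
        }

        /* MBAP header (7 bytes) + PDU: function 0x03, start address, quantity */
        uint8_t req[12] = {
            0x00, 0x01,   /* transaction ID                  */
            0x00, 0x00,   /* protocol ID = 0 (Modbus)        */
            0x00, 0x06,   /* remaining byte count            */
            0x01,         /* unit ID (placeholder)           */
            0x03,         /* read holding registers          */
            0x00, 0x00,   /* start address 0                 */
            0x00, 0x04    /* quantity: 4 registers           */
        };
        write(fd, req, sizeof req);

        uint8_t resp[256];
        ssize_t n = read(fd, resp, sizeof resp);
        /* Response: 7-byte MBAP, function code, byte count, then register data */
        if (n >= 9 && resp[7] == 0x03) {
            int count = resp[8] / 2;
            for (int i = 0; i < count; i++) {
                unsigned reg = (resp[9 + 2 * i] << 8) | resp[10 + 2 * i];
                printf("register %d = %u\n", i, reg);
            }
        } else {
            fprintf(stderr, "unexpected or exception response\n");
        }
        close(fd);
        return 0;
    }

Whichever transport is chosen, the part that actually has to be agreed between the Siemens and Schneider sides is the register map, i.e. which holding registers or coils carry which signals.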

Related

What are possible scalability options for an application supporting only a single TCP socket connection?

There is a legacy implementation (to an extent company proprietary, in Pascal and C with some Java macros) which processes TCP socket based requests from TCP client applications. It supports multiple client applications (around 5K) connecting over TCP sockets; however, it only supports a single socket connection to the backend (database). There are two instances of the server, so in total it supports 10K client applications over two TCP socket connections to the database. All database-related communication happens synchronously over that single socket connection. There are massive issues in this application, especially high RTT (round-trip time) and occasional outages due to back-pressure. We have an ops team for such issues; they mostly resolve them by restarting the server. Hardly anyone on our team knows the coding details of this application, and there is not much documentation. As this is a critical application we cannot afford to mess with it, so we don't want to touch the code, at least for now. This becomes even more critical due to a shift in business priorities: there is a need to add another 30K client applications from another business to this setup.
The task before us is to integrate it with another application which is based on a microservice architecture with middleware using RabbitMQ. That is a customer-facing application sensitive to QoS; we cannot afford outages or downtime in it. As part of this integration, request messages coming from the legacy application over the TCP socket need to be processed before being passed to the database. In other words, we want to introduce a component which processes the legacy application's requests before handing them over to the database. This additional processing is part of our client's request. Some of the processing requirements are very intensive and resource-hungry in terms of CPU cycles, memory and socket I/O. As a result, there is a chance that such processing leads to server downtime and higher RTT. This layer of ours is very flexible; we can easily add more servers or replace faulty ones. But that doesn't sound very efficient in this integration, as we are limited by the legacy application's single socket connection. So in total, at most, we can only have 2 (+ 6 for the new 30K client applications) servers. This is our cause of concern.
I want to know what options are available to address the high-availability, scalability and latency issues of such an integration. Especially given the limitation of a single TCP socket connection, how can we make this integration efficient: something that can handle back-pressure, give better application uptime, etc.?
We were thinking of leveraging RabbitMQ, a Layer 4 load balancer (like HAProxy or NGINX), IPVS, NAT, etc., but all of these lead toward making some changes in the legacy code (or are not very efficient techniques), which we don't want.
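For what it's worth, the usual pattern when the single backend socket cannot be touched is to put a small gateway in front of it that serializes requests and applies back-pressure with a bounded limit, so bursts are rejected or queued before the socket rather than piling up inside the legacy server. A minimal sketch of that idea in C follows; the framing, the queue bound and the backend connection are made-up placeholders, not details of the legacy protocol.

    /* Sketch: many worker threads funnelling requests through ONE backend socket.
     * A counting semaphore bounds the number of requests waiting for the socket
     * (back-pressure); a mutex serializes the actual write/read on the socket.
     * Framing, limits and the backend protocol are placeholder assumptions. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <unistd.h>

    #define MAX_PENDING 64            /* bound on queued requests (placeholder) */

    static int backend_fd;            /* the single legacy backend socket */
    static pthread_mutex_t backend_lock = PTHREAD_MUTEX_INITIALIZER;
    static sem_t pending_slots;       /* back-pressure: free request slots */

    /* Send one request and wait for its reply; returns reply length or -1. */
    static ssize_t backend_rpc(const char *req, size_t req_len,
                               char *reply, size_t reply_cap)
    {
        if (sem_trywait(&pending_slots) != 0)
            return -1;                /* queue full: shed load instead of piling up */

        pthread_mutex_lock(&backend_lock);      /* only one request on the wire */
        ssize_t n = -1;
        if (write(backend_fd, req, req_len) == (ssize_t)req_len)
            n = read(backend_fd, reply, reply_cap);   /* synchronous reply */
        pthread_mutex_unlock(&backend_lock);

        sem_post(&pending_slots);
        return n;
    }

    int main(void)
    {
        sem_init(&pending_slots, 0, MAX_PENDING);
        /* backend_fd = connect_to_legacy_backend();  -- placeholder hook */
        /* worker threads serving the 5K/30K clients would call backend_rpc() */
        return 0;
    }

The sketch mainly makes the bottleneck explicit: however many front-end servers are added, throughput is capped at one synchronous round trip at a time per backend socket, so the practical gains come from shedding or queueing load early (which is where RabbitMQ fits) rather than from more hardware behind the socket.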

What is the fastest way of communicating from a PC to a controller using LabVIEW?

I am working on a project wherein I need to communicate 8 boolean outputs to the controller based on the result generated by a program built using LabVIEW on a PC.
I have discussed this with a few colleagues, who suggest using the parallel port data bus with TTL signals to communicate with a microcontroller, which they say will give the maximum transfer speed.
I understand that it is a cost-effective solution, but will it be the fastest way to communicate with a microcontroller? Also, since it is a legacy technology with limited availability on standard PCs, I would have to buy an additional PCI-E card with a parallel port interface.

What is the most suitable virtual machine software for sharing hardware ports (COM, LPT etc) at register level?

I'm using Delphi to develop real-time control software, and over the last couple of years I have done some work running older Windows installations under Microsoft's Virtual PC; it works fine for 'pure software' development (i.e. no or limited access to the outside world). Such tools seem able to work with network connections, but I have to maintain software which performs I/O via the parallel port (through a device driver), and we also use USB I/O. In the past I've liked Microsoft's virtual tools because it takes time to install a new operating system and then (in my case) install Delphi and a load of libraries and components to provide development support. In those circumstances I've not been too bothered by my lack of access to the low-level I/O ports.
I want to up my game and I'm happy to pay for a good virtualisation tool IF I can have access from it to the outside world, i.e. I want to be able to configure it to allow access to my machine's parallel and COM ports in the same way as if it were running natively. This access has to expose the port in register terms, i.e. to 'see' it at its base address (e.g. $0378 for LPT1 or $03F8 for COM1) and to support I/O operations on those registers (via the appropriate kernel access), as my Windows 7 64-bit installation is able to do.
I see that there are a number of virtualisation solutions out there now, but it's quite hard to ascertain the capability of each at such a low level. Does anyone have any experience or knowledge in this area?
The VMware products would be suited best for this. You can add virtual serial and parallel ports and forward them to a physical port on the host, or even to a file or a named pipe.
You can also connect any USB device that is connected to the host machine.
This works with VMware Workstation, but might even work with the free VMware player too.
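For completeness: inside the guest, a forwarded serial port simply appears as an ordinary COM device handled by the guest's own driver, so existing code that opens the port "natively" keeps working unchanged. A small Win32 sketch of that, where COM1 and the 9600-8-N-1 settings are placeholder assumptions:

    /* Sketch: opening a VMware-forwarded serial port from inside the guest
     * exactly as if it were native hardware (Win32). COM1 and the line
     * settings below are placeholder assumptions. */
    #include <stdio.h>
    #include <string.h>
    #include <windows.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "cannot open COM1 (error %lu)\n", GetLastError());
            return 1;
        }

        DCB dcb;
        memset(&dcb, 0, sizeof dcb);
        dcb.DCBlength = sizeof dcb;
        GetCommState(h, &dcb);
        dcb.BaudRate = CBR_9600;      /* placeholder line settings: 9600-8-N-1 */
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);

        DWORD written = 0;
        WriteFile(h, "hello\r\n", 7, &written, NULL);   /* goes out the host's physical port */
        CloseHandle(h);
        return 0;
    }

The same applies to a forwarded parallel port, which the guest sees as a standard LPT device.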

Scaling a TCP/IP based system and ensuring high availability

I have a TCP/IP based component which communicates with a C++ based system. In fact, it reads raw bytes from that system, marshals those raw bytes into objects and stores them in the DB. This multi-threaded TCP/IP component is written in Java and could be deployed on a dual-core or quad-core processor (not sure if that is important for my question, but it's a detail I'm giving nevertheless). Now I have a few questions:
How can I scale this TCP/IP based component? It is deployed on a server and listens on a port. If, in future, more data comes from the C++ system than is envisaged at this point, we should be able to scale this Java component.
What about security? One thing I could probably do is run this communication over secure sockets, or perhaps receive encrypted data (is there any particular encryption I could use here?). Is there any other way to take care of security?
There is also a requirement for high availability to be satisfied. How do I handle that? How could I possibly have redundancy here?
Yes, we are working on the system architecture of a product, and I was therefore wondering if some experienced architect or designer could help me.
How can I scale this TCP/IP based component? It is deployed on a server and listens on a port. If, in future, more data comes from the C++ system than is envisaged at this point, we should be able to scale this Java component.
You normally use a network load balancer to scale this kind of service across multiple servers. The load balancer can distribute load using a variety of algorithms, such as:
CPU load (usually measured with SNMP)
Client IP address (if you need persistence when mapping clients to your services)
Number of active sockets
etc.
Look at HAProxy for a popular open-source load-balancer. F5 has the most popular commercial load-balancer solution.
What about security? One thing I could probably do is run this communication over secure sockets, or perhaps receive encrypted data (is there any particular encryption I could use here?). Is there any other way to take care of security?
As mentioned, SSL is an option, but understand that it is a big performance hit on your services if you encrypt on the same hardware that performs your customer services. One option along these lines is to use a commercial load-balancer that implements SSL in hardware; that load-balancer would then forward unencrypted sockets to your TCP services farm (see the sketch after these options).
Under some circumstances you can use IPsec network-level encryption; often this is another network hardware solution. Typically your clients download an IPsec client that resides on their PC, then make a connection to your IPsec server, which encrypts traffic between their client and your IPsec termination point.
SSH Tunneling with port-forwarding (low-tech solution)
tcpcrypt looks interesting as a future technology, but I'm not sure how mature it is right now.
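To illustrate the first (SSL/TLS) option above in code terms: wrapping the existing TCP stream in TLS means doing a handshake on the already-connected socket and then swapping the plain read/write calls for their TLS equivalents. A minimal OpenSSL client sketch, where the endpoint name is a placeholder and certificate verification is deliberately omitted for brevity (it must be added for real use):

    /* Sketch: wrapping an existing TCP connection in TLS with OpenSSL.
     * "example.internal:9000" is a placeholder endpoint; certificate
     * verification is omitted here. Assumes OpenSSL 1.1.0+; link with
     * -lssl -lcrypto. */
    #include <netdb.h>
    #include <openssl/ssl.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int tcp_connect(const char *host, const char *port)
    {
        struct addrinfo hints = { .ai_socktype = SOCK_STREAM }, *res;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }

    int main(void)
    {
        int fd = tcp_connect("example.internal", "9000");   /* placeholder endpoint */
        if (fd < 0) {
            fprintf(stderr, "tcp connect failed\n");
            return 1;
        }

        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);                 /* hand the plain socket to the TLS layer */
        if (SSL_connect(ssl) != 1) {         /* TLS handshake */
            fprintf(stderr, "TLS handshake failed\n");
            return 1;
        }

        /* From here on, use SSL_read/SSL_write instead of read/write. */
        const char msg[] = "raw bytes, now encrypted in transit";
        SSL_write(ssl, msg, (int)sizeof msg - 1);

        SSL_shutdown(ssl);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        close(fd);
        return 0;
    }

If a load-balancer terminates SSL in hardware instead, the service code keeps using plain sockets and only the network path in front of it changes.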
There is also a requirement for high availability to be satisfied. How do I handle that? How could I possibly have redundancy here?
A lot depends on what you mean by high availability, and what kind of recovery timing you need. At a high level, you have a few options:
DNS-based HA works if you don't need client-to-socket mapping persistence; if you use DNS, you need to be willing to accept typical DNS A-record timeouts (people usually don't go lower than ~5 minutes / 300 seconds). This also assumes you find a way to synchronize your databases across multiple sites.
Load-balancer solutions. Same issue with synchronizing back-end databases
To do any kind of HA, you probably want to hire a consultant that has a proven track record of implementing these services (if you don't have this kind of resource in-house).

Writing Device Drivers for a Microcontroller (any)

I am very enthusiastic about writing device drivers for microcontrollers (like PIC, Atmel etc.).
Since I am a newbie in this controller-coding area, I just want to know whether writing device drivers for a controller is the same as writing them for Linux (or any other OS).
Also, can anyone suggest some online tutorials on building device drivers for these?
Thanks,
If you are thinking about developing device drivers to interface your device with a host computer (probably using USB), then note that most microcontrollers nowadays implement default USB classes that rely on native drivers.
A concrete example:
If you use a PIC18F4555, you can use the regular HID (human interface device) Windows driver to communicate with your microcontroller (given that you implemented it correctly). There is no need to develop any driver.
Writing a device driver for an MCU is a pretty far cry from writing one for an OS. Most MCUs won't have an OS running on them at all. You'll generally end up writing some low-level interrupt service routines (ISRs) that fill up buffers, which your application software then empties. You don't have to fit into any device-driver paradigm that an OS has defined. You basically have to read the datasheet for the device you want to interface with, and read from and write to its registers over whatever interface it uses (e.g. SPI, I2C, UART, etc.). Ultimately the device driver ought to provide intuitive function calls to the application software.
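To make that concrete, the shape most bare-metal "drivers" take is an ISR that moves bytes from a peripheral into a ring buffer, plus a read function the application calls from its main loop. In this sketch the register names (UART_DATA, UART_STATUS, RX_READY) and the ISR hook are hypothetical placeholders standing in for whatever your particular MCU's datasheet defines:

    /* Sketch of a bare-metal UART receive "driver": the ISR fills a ring buffer,
     * the application drains it. UART_DATA / UART_STATUS / RX_READY and the
     * uart_rx_isr() hook are hypothetical names -- substitute the registers and
     * interrupt vector defined in your MCU's datasheet. */
    #include <stdbool.h>
    #include <stdint.h>

    #define UART_DATA    (*(volatile uint8_t *)0x4000u)   /* hypothetical RX data register */
    #define UART_STATUS  (*(volatile uint8_t *)0x4001u)   /* hypothetical status register  */
    #define RX_READY     0x01u                            /* hypothetical "byte ready" bit */

    #define RX_BUF_SIZE  64u

    static volatile uint8_t rx_buf[RX_BUF_SIZE];
    static volatile uint8_t rx_head;   /* written by the ISR  */
    static volatile uint8_t rx_tail;   /* read by the application */

    /* Interrupt service routine: called by hardware on every received byte.
     * Keep it short -- just move the byte into the buffer. */
    void uart_rx_isr(void)
    {
        uint8_t byte = UART_DATA;                      /* reading often clears the flag */
        uint8_t next = (uint8_t)((rx_head + 1u) % RX_BUF_SIZE);
        if (next != rx_tail) {                         /* drop the byte if the buffer is full */
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }

    /* "Driver API" the application software calls from its main loop. */
    bool uart_read_byte(uint8_t *out)
    {
        if (rx_tail == rx_head)
            return false;                              /* nothing buffered yet */
        *out = rx_buf[rx_tail];
        rx_tail = (uint8_t)((rx_tail + 1u) % RX_BUF_SIZE);
        return true;
    }

There is no open()/read()/ioctl() layer unless you build one yourself; the "driver" is just these two functions plus the initialisation code that configures the peripheral's registers (baud rate, interrupt enable, and so on).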
If you are using an AVR MCU like the ATmega and it doesn't have hardware USB, you can use V-USB (https://www.obdev.at/products/vusb/index.html), which implements USB in firmware by connecting the USB D+ and D- lines to digital I/O pins of the MCU and handling the signalling in interrupts.
The ATmega U2 parts have their own hardware USB interface and can implement HID directly.
