I'm new to Ruby and Rails. I'm working on Windows 10. The Rails server says it is starting on tcp://0.0.0.0:3000 instead of http://localhost:3000. I'm using the following command.
rails server
When your Rails server says that it started on tcp://0.0.0.0:3000, it means the server is listening on TCP port 3000 on all of the machine's network interfaces. The tcp:// prefix describes the transport the server binds to, not the URL you type: open http://localhost:3000 (or http://127.0.0.1:3000) in your browser and you will reach it, on Windows just as on any other platform.
I think you are a bit confused with some basics in networks. I'll take the chance to clarify this to you.
Based on the Open Systems Interconnection model (OSI model, https://en.wikipedia.org/wiki/OSI_model), there are 7 layers that standardize the communication functions.
TCP is at the Transport layer.
The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host, while maintaining the quality of service functions.
The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and re-transmit those that fail delivery. The transport layer also provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred. The transport layer creates segments out of the message received from the application layer. Segmentation is the process of dividing a long message into smaller messages.
OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0-TP4 classes are listed in a table in the linked Wikipedia article.
HTTP is at the Application layer.
The application layer is the OSI layer closest to the end user, which means both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application-entity and the application. For example, a reservation website might have two application-entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols have anything to do with reservations. That logic is in the application itself. The application layer per se has no means to determine the availability of resources in the network.
This means that TCP is not an alternative to HTTP. Basically, HTTP (layer 7) is built on top of TCP (layer 4).
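To make that layering concrete, here is a minimal, self-contained Python sketch: it starts a tiny HTTP server and then talks to it over a raw TCP socket, showing that an HTTP exchange is just structured text written onto a TCP byte stream (the server and its "hello" body are invented for the demo):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal HTTP (layer 7) server; HTTPServer itself listens on a TCP (layer 4) socket.
class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = let the OS pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Talk to it over a *raw* TCP socket: the HTTP request is just text on the stream.
sock = socket.create_connection(("127.0.0.1", server.server_port))
sock.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()
server.shutdown()

status_line = response.split(b"\r\n")[0]
print(status_line)  # b'HTTP/1.0 200 OK'
```

The same TCP connection could just as well carry SMTP, HL7, or any other application protocol; the transport layer neither knows nor cares.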
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
Related
There's something I don't understand about TCP/IP stack : ports.
There's an IP to identify a machine and port for a specific process on that machine.
To me, ports seem to belong to the application layer; certain ports are reserved for certain processes (80 for HTTP, 25 for SMTP, etc.). In my view, ports have nothing to do with the TCP (transport) layer and should be implemented at a higher level (the application layer). So why do we say "TCP port" and not "application port"?
Thanks
TCP or UDP ports are defined in either layer 4 of the OSI model or layer 3 of the TCP/IP model, both are defined as the 'transport' layer.
OSI layer 5 'session layer' uses the ports defined in layer 4 to create sockets and sessions between communicating devices/programs/etc.
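A small Python sketch illustrates this: the ports below are assigned entirely by the transport machinery (the OS kernel and the socket API), with no application-layer protocol involved:

```python
import socket

# Ports are a transport-layer concept: a TCP connection is identified by the
# 4-tuple (source IP, source port, destination IP, destination port).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free TCP port
listener.listen(1)
server_port = listener.getsockname()[1]

client = socket.create_connection(("127.0.0.1", server_port))
conn, client_addr = listener.accept()

# The client's source port is an ephemeral one picked by the kernel; no
# application-layer protocol was consulted when assigning either port.
source_port = client.getsockname()[1]
dest_port = client.getpeername()[1]
print((source_port, dest_port))

conn.close()
client.close()
listener.close()
```

Only by convention do applications then sit on well-known ports (80 for HTTP, 25 for SMTP); the port field itself lives in the TCP header.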
Reminder about OSI model:
It is a conceptual model. That means it describes an idealized, abstract, theoretical group of networking functions. It does not describe anything that someone actually built (at least nothing that is in use today).
It is not the only model. There are other models, most notably the TCP/IP protocol suite (RFC-1122 and RFC-1123), which is much closer to what is currently in use.
The most important things to understand about the OSI (or any other) model are:
We can divide up the protocols into layers
Layers provide encapsulation
Layers provide abstraction
Layers decouple functions from others
Dividing the protocols into layers allows us to talk about their different aspects separately. It makes the protocols easier to understand and easier to troubleshoot. We can isolate specific functions easily, and group them with similar functions of other protocols.
Each “function” (broadly speaking) encapsulates the layer(s) above it. The network layer encapsulates the layers above it. The data link layer encapsulates the network layer, and so on.
Layers abstract the layers below them. Your web browser doesn't need to know whether you're using TCP/IP or something else at the network layer (as if there were something else). To your browser, the lower layers just provide a stream of data. How that stream manages to show up is hidden from the browser. TCP/IP doesn't know (or care) if you're using Ethernet, a cable modem, a T1 line, or satellite. It just processes packets. Imagine how hard it would be to design an application that would have to deal with all of that. The layers abstract the lower layers so software design and operation become much simpler.
I need to create a distributed system, where I have the following node types:
Client [1-n instances]
Server [1-n instances]
Proxy [1 instance - basically a forwarder to any Server]
Cloud Server [1 instance - basically a forwarder to any Proxy]
[Client] -> [Cloud Server] -> [Proxy -> Server] - a Distributed Setup
[Client -> Server] - a Local Setup
[The Proxy and Server are running on the same node or network]
The Client, once on the same network as the Server, should also be allowed to connect directly to the Server instead of going through the Cloud Server / Proxy.
The Server can have multiple Clients connected to it, but it can also publish messages for the Client(s) apart from responding to requests from Client(s). The Server / Cloud Server need to differentiate the client nodes by id and know at any time whether they are connected / disconnected.
To my understanding, the Server should provide a REQ/REP endpoint in order to allow message exchange with the Proxy / Local Client and also a PUB endpoint, where the Proxy / Local Client will be subscribed for any notifications coming from the Server.
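That arrangement can be sketched with pyzmq roughly as follows; the message strings and the single-request server loop are invented for the demo, and the SUB side of the notification channel is omitted for brevity:

```python
import threading
import zmq  # pyzmq

ctx = zmq.Context.instance()

# --- Server side: one REP endpoint for request/reply, plus one PUB endpoint
# --- that the Proxy / local Clients would subscribe to for notifications.
rep = ctx.socket(zmq.REP)
rep_port = rep.bind_to_random_port("tcp://127.0.0.1")
pub = ctx.socket(zmq.PUB)            # notifications would go out via pub.send_string(...)
pub_port = pub.bind_to_random_port("tcp://127.0.0.1")

def serve_one():
    msg = rep.recv_string()          # blocks until a request arrives
    rep.send_string("reply:" + msg)

threading.Thread(target=serve_one, daemon=True).start()

# --- Client side (local Client or Proxy): REQ for commands; a SUB socket
# --- connected to pub_port would receive the published notifications.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:%d" % rep_port)
req.send_string("status?")
reply = req.recv_string()
print(reply)                         # reply:status?
req.close(0)
```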
Concerning the Proxy, it looks like I will need endpoints both on the inside and on the outside: ROUTER/DEALER endpoints for REQ/REP and XPUB/XSUB endpoints for PUB/SUB notifications targeting remote Clients. But my concern is that the Proxy on the outside will always have exactly one node to reply to and exactly one node subscribed to notifications, and that is the Cloud Server.
Concerning the Cloud Server, it looks like I will have something similar to the Proxy described above, and here too ROUTER/DEALER and XPUB/XSUB seem to fit the bill.
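A rough pyzmq sketch of the request path of such a Proxy / Cloud Server node might look like this (ports, message strings, and the single-shot server are invented; the XPUB/XSUB notification path would be a mirror-image proxy running in its own thread):

```python
import threading
import zmq  # pyzmq

ctx = zmq.Context.instance()

# Front side faces the Cloud Server / remote Clients; back side faces the
# local Server(s). zmq.proxy forwards frames in both directions.
frontend = ctx.socket(zmq.ROUTER)
front_port = frontend.bind_to_random_port("tcp://127.0.0.1")
backend = ctx.socket(zmq.DEALER)
back_port = backend.bind_to_random_port("tcp://127.0.0.1")
threading.Thread(target=zmq.proxy, args=(frontend, backend), daemon=True).start()

# A Server behind the proxy: a plain REP socket connected to the DEALER side.
server = ctx.socket(zmq.REP)
server.connect("tcp://127.0.0.1:%d" % back_port)

def serve_one():
    server.send_string("pong:" + server.recv_string())

threading.Thread(target=serve_one, daemon=True).start()

# A Client in front of the proxy: a plain REQ socket on the ROUTER side.
client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:%d" % front_port)
client.send_string("ping")
reply = client.recv_string()
print(reply)                          # pong:ping
client.close(0)
```

The notification path would pair an XSUB socket (connected to the Server's PUB) with an XPUB socket (bound for subscribers) through a second `zmq.proxy` call.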
Obviously I am new to ZeroMQ and it offers a lot. I would like to focus on what is needed for the time being and I would really appreciate your help.
Well, ZeroMQ is a great tool to design & build systems with, but the first thing I would recommend to anyone, be they a keen young novice or a seasoned, hands-on Computer Science veteran: forget about considering the patterns a Plug-and-Play solution to everything.

Having built "a few" distributed, low-latency systems, there are many similarities one will, sooner or later, meet in person. Some of ZeroMQ's built-in primitives for the Formal Scalable Communication Patterns have "almost" matching behaviour, except that one needs a different ordering than the built-in round-robin stepper; some other primitive is "almost" matching, but has particular sequence-of-steps requirements which one cannot guarantee in the reality of the world of distributed agents. Simply put, there are many moments when one feels "almost" done, but some tiny (or hidden) detail makes a (hidden) call of "Houston, we have a problem..."
How to focus on what is needed?
Stop thinking in a classical, sequential manner.
Distributed systems are several orders of magnitude more complex than a plain, sequentially programmed monolithic system. Besides the principal design targets, there are many more things that can and will go wrong in production.
Revisit Design-rules and carefully check for new:
1. scaling aspects: define hidden needs (number of nodes, message sizes, traffic peaks)
2. blocking states: define additional signalling needs (to allow getting out of any potential distributed-state dead-/live-locking)
3. survivability needs: a distributed system will encounter lost messages and lost node(s)
4. incoherent protocol versions: for cases where no one can guarantee an enforced unity across the distributed system
5. self-defensive needs: in case a remote node starts some panicked / flawed flooding of a signalling/messaging channel. OOP as-is does not provide self-defensive tools and cannot limit calls injected by remote requestors, so build a set of protective tools for internal self-healing in cases when an object's service is over-consumed or mis-used by an external caller, hardening your design against such erroneous / malicious modes of operation, which a normal, typical OOP-model method cannot protect itself from.
The Best Next Step:
Real-world System Architecture simply must contain more "wires"
If your code strives to go into production, not to remain just an academic example, there will be much more work to be done to provide survivability measures for the hostile real-world production environments.
An absolutely great perspective for doing this, and a good read for realistic designs with ZeroMQ, is Pieter HINTJENS' book "Code Connected, Vol. 1" (you may check my posts on ZeroMQ to find the book's direct pdf link).
Plus another good read comes from Martin SUSTRIK, co-author of ZeroMQ, on low-level truths about the ZeroMQ implementation details & scalability.
Epilogue: As a bonus, the REQ/REP primitive communication pattern is dangerous per se in the real world, as it can self-deadlock the pair of communicating processes in case the transport (for whatever reason) loses a packet and the "pendulum"-style message delivery becomes incomplete and locked forever.
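One common mitigation, assuming pyzmq, is a receive timeout plus a socket rebuild (the "Lazy Pirate" pattern from the ZeroMQ guide); the dead endpoint address below is invented for the demo:

```python
import zmq  # pyzmq

ctx = zmq.Context.instance()

# Nobody is listening on this port, so a plain REQ socket would block in
# recv() forever once the request is "in flight" - the self-deadlock above.
req = ctx.socket(zmq.REQ)
req.setsockopt(zmq.RCVTIMEO, 200)     # fail recv() after 200 ms
req.setsockopt(zmq.LINGER, 0)         # don't block on close with queued data
req.connect("tcp://127.0.0.1:59999")  # assumed-dead endpoint for the demo

req.send_string("are you there?")
try:
    reply = req.recv_string()
except zmq.Again:
    # Timed out: close the now-stuck REQ socket; a real client would rebuild
    # it and retry, possibly against a different peer.
    req.close()
    reply = None
print(reply)                          # None
```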
I am designing an application that communicates with devices over various connection types / transport mechanisms. For example, USB Virtual COM, serial port, and TCP connections. In each case I will be using a custom/device-specific application protocol (e.g. to send commands, receive data, etc.) passed through the underlying transport. For the cases mentioned so far it seems pretty clear to me that the "Application Protocol" is the proprietary command/response one and TCP connection (or serial port or "whatever magically transports the bytes") is the Transport Protocol.
However, what is the best way to talk about intermediate protocols, for example, when encapsulating the aforementioned proprietary application layer protocol inside of another application layer protocol like SSH, HTTP, or SSL/TLS?
This answer to a different question suggests that:
It's not so important what you call it (with which OSI layer it is associated)
Something like this might be correctly described as an application layer protocol that happens to have another application layer protocol built on top of it.
Bottom line: How should I label this detail in the GUI? Perhaps one of the following?
✔ "Tunnel" or "Tunneling Protocol" (as suggested by @EJP)
Other possibilities included:
"Application protocol" (misleading because there is actually a higher level protocol that would be more appropriately given this name)
"Transport protocol" (may be misleading since there would be a lower level protocol like TCP underneath, which is usually what this name would make me think of)
"Encapsulation" or "Encapsulation protocol"
"Wrapping protocol" (similar to above)
"Underlying protocol" (vague / ambiguous -- there are probably many more underlying protocols...TCP, IP, Ethernet, ...)
"Protocol" (too generic?)
Nothing seems to jump out at me as fitting.
Update: As @EJP pointed out, SSL/TLS is an application layer protocol, not a transport protocol; the question now reflects this.
However, what is the best way to talk about intermediate protocols, for example, when encapsulating the aforementioned proprietary application layer protocol inside of another application layer protocol like SSH or HTTP or even another transport layer protocol like SSL/TLS?
Every protocol you have mentioned here is an application layer protocol.
You might want to use the word 'tunnelling'.
It's not so important what you call it (with which OSI layer it is associated)
There's no reason to associate it with any of the OSI layers at all. The OSI model is for the OSI protocol suite, which is defunct, ergo so is the model. It's unfortunate that generations of teachers have taught it as though it was a fundamental law of nature. It isn't. If you're using TCP/IP, it has its own layer model, for example, and even the OSI guys admit that nobody ever knew what went into the presentation layer.
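As one illustration of such tunnelling, here is a Python sketch that carries a made-up proprietary command/response protocol inside HTTP POST bodies; the endpoint path, the command string, and the ACK semantics are all invented for the example:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical proprietary protocol: one opaque command byte string out,
# one opaque response byte string back. Here it is tunnelled inside HTTP.
def handle_device_command(command: bytes) -> bytes:
    return b"ACK:" + command          # stand-in for the real device logic

class Tunnel(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        inner = self.rfile.read(length)          # unwrap the inner protocol
        outer = handle_device_command(inner)
        self.send_response(200)
        self.send_header("Content-Length", str(len(outer)))
        self.end_headers()
        self.wfile.write(outer)                  # wrap the response again

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Tunnel)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/device" % server.server_port
reply = urlopen(Request(url, data=b"READ_SENSOR_1")).read()
server.shutdown()
print(reply)                          # b'ACK:READ_SENSOR_1'
```

The device protocol never learns it travelled inside HTTP, which is exactly what makes "tunnel" a fitting label for the outer protocol.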
I have already developed a Clinic management application for Allergy Control Clinics which stores patients' medical files and test results in a database and generates reports for analysis.
There's a section for storing spirometry results in the database. Currently I get the results from an Excel file exported by WinspiroPro (the application that comes with spirolab devices) and store them in the database.
A few days ago I came across "HL7", which seems to be a standard protocol for communicating with these medical devices, so I could get the results directly from the device using Delphi.
The spirolab device's user manual also mentions that the device is compatible with this standard.
Now my question is: how can I implement this standard (HL7) in Delphi?
Thanks
As is usual with these kinds of inter-professional standards, you need to pay to get them, at least on http://www.hl7.org in this case.
If you search around on the net, there may be existing tools that you can use, or whose internals you can study:
http://code.ohloh.net/search?s=HL7
https://code.google.com/hosting/search?q=HL7&sa=Search
http://sourceforge.net/directory/?q=HL7
HL7 is not bound to a specific transport layer. It is a protocol at the application level, the seventh layer of the ISO 7-layer model, hence "Level 7". It describes messages and the events that trigger when these messages should be sent.
It just gives some recommendations on how to do message transfer on the underlying layers, e.g. MLLP with TCP socket communication. But in principle you are free to use any transport you want, be it direct socket communication, file transfer, or whatever.
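For example, MLLP (the Minimal Lower Layer Protocol commonly recommended for HL7 v2 over TCP) is just a byte-level frame around the message. A small Python sketch of the framing (the truncated MSH segment is invented purely for illustration):

```python
# MLLP frames an HL7 message on a byte stream as <VT> message <FS><CR>,
# i.e. 0x0B ... 0x1C 0x0D.
START_BLOCK = b"\x0b"
END_BLOCK = b"\x1c\x0d"

def mllp_wrap(hl7_message: bytes) -> bytes:
    """Wrap one HL7 message for transmission over a byte stream."""
    return START_BLOCK + hl7_message + END_BLOCK

def mllp_unwrap(frame: bytes) -> bytes:
    """Recover the HL7 message from one complete MLLP frame."""
    if not (frame.startswith(START_BLOCK) and frame.endswith(END_BLOCK)):
        raise ValueError("not a complete MLLP frame")
    return frame[len(START_BLOCK):-len(END_BLOCK)]

# A made-up, truncated HL7 v2 message header segment for illustration only.
msg = b"MSH|^~\\&|SPIROLAB|CLINIC|LIS|CLINIC|..."
frame = mllp_wrap(msg)
assert mllp_unwrap(frame) == msg
print(frame[:1], frame[-2:])          # b'\x0b' b'\x1c\r'
```

Parsing the HL7 message content itself (segments, fields, encoding characters) is a separate job from this transport-level framing.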
Although most systems can now use TCP, it is also possible to use HL7 with different underlying transport protocols such as RS232. If I remember correctly, there was also an example about message transfer / coupling over RS232 in the implementation guides of the documentation. And yes, the documentation and protocol standard documentation are free after registering.
Did you ask your provider for the WinspiroPRO version with HL7 capability? Maybe it already supports socket communication over TCP.
Otherwise you either need access to the source code of IdTCPClient and replace the TCP part with an RS232 part, or you have to use software just for parsing/building (unmarshalling/marshalling) HL7 messages together with software that handles the transport level.
By the way, judging just from the name, I guess that IdTCPClient is not apt for your need, as you will probably need a host and not a client component.
I have a software architecture problem.
I have to design an iOS application which will communicate with a Linux application to get the state of a sensor and to publish an actuator command. The two applications run on a local network, with an ad-hoc WiFi connection between the iOS device and the Linux computer.
So I have to synchronize two values between two applications (as described in figure 1). On a Linux/Linux system, I would solve this kind of problem with any publisher / subscriber middleware. But how can I solve this problem in an iOS / Linux world?
Currently the Linux application embeds an asynchronous TCP server, and the iOS application is an asynchronous TCP client. Both applications communicate through the TCP socket. I think this is a low-level method, and I would like to migrate the communication layer to a much higher-level, service-based communication framework.
After some bibliographic research I found three ways to resolve my problem:
The REST way:
I can create a RESTful web service which models the sensor state and which is able to send commands to the actuator. A RESTful web service client implementation exists for iOS, namely "RestKit", and I think I can use Apache/Axis2 on the server side.
The RPC way:
I can create an RPC service provider on my Linux computer thanks to libmaia. On the iOS side, I can use xmlrpc (https://github.com/eczarny/xmlrpc). My two programs will communicate through the service described in the figure below.
The ZeroConf way:
I didn't get into the details of this method, but I suppose I can use Bonjour on the iOS side and Avahi on the Linux side, and then create a custom service, as in the RPC way, on both sides.
Discussion about these methods:
The REST way doesn't seem to be the right way, because "the REST interface is designed to be efficient for large-grain hypermedia data transfer" (from Chapter 5 of the Fielding dissertation). My data are very fine-grained: my command is just a float, and so is my sensor state.
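For scale, a REST exchange for these two floats is still tiny. A self-contained Python sketch (the /sensor and /actuator resource names and the JSON shapes are invented for the example) might look like:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

state = {"sensor": 0.0, "actuator": 0.0}   # the two values to synchronize

class Api(BaseHTTPRequestHandler):
    def do_GET(self):                      # GET /sensor -> current reading
        body = json.dumps({"sensor": state["sensor"]}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_PUT(self):                      # PUT /actuator -> new command
        length = int(self.headers["Content-Length"])
        state["actuator"] = json.loads(self.rfile.read(length))["actuator"]
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Api)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_port

reading = json.loads(urlopen(base + "/sensor").read())
urlopen(Request(base + "/actuator", data=b'{"actuator": 0.5}', method="PUT"))
server.shutdown()
print(reading, state["actuator"])      # {'sensor': 0.0} 0.5
```

The per-request HTTP overhead is real, but for two floats synchronized at human timescales it is rarely the bottleneck.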
I think there is no big difference between the ZeroConf way and the RPC way. ZeroConf provides "only" the service-discovery mechanism, and I don't need that kind of mechanism because my application is rigid: both sides know which services exist.
So my questions are:
Is an XML-RPC-based method a good choice to solve my problem of variable synchronization between an iPhone and a computer?
Do other methods exist?
I actually recommend you use "TCP socket + protobuf" for your application.
A socket is very efficient for pushing messages to your iOS app, and protobuf saves you effort by delivering structured messages instead of raw character bytes. Your other, higher-level proposals actually introduce more complications...
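A sketch of that approach in Python: since real protobuf needs generated message classes, the payload below is an opaque byte string, but the length-prefix framing is what you need either way, because TCP is a byte stream with no message boundaries:

```python
import socket
import struct
import threading

# 4-byte big-endian length prefix around an opaque payload; a serialized
# protobuf message would simply be that payload.
def send_msg(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock: socket.socket) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def echo_once():
    conn, _ = listener.accept()
    send_msg(conn, recv_msg(conn))     # echo one framed message back
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()
client = socket.create_connection(("127.0.0.1", listener.getsockname()[1]))
send_msg(client, b"sensor=0.25")       # a serialized protobuf would go here
echoed = recv_msg(client)
client.close()
print(echoed)                          # b'sensor=0.25'
```

On the iOS side the same framing can be implemented over NSStream/CFStream, with the payload decoded by the protobuf runtime.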
I can provide no answers; just some things to consider in no particular order.
I am also assuming that your model is that the iOS device polls the server to synchronize state.
It is probably best to stay away from directly using Berkeley sockets on the iOS device. iOS used to have issues with low-level sockets not connecting after a period of inactivity. At the very least I would use NSStream or CFStream objects for transport or, if possible, I'd use NSURL, NSURLConnection, NSURLRequest. NSURLConnection's asynchronous data loading capability fits well with iOS's GUI update loop.
I think you will have to implement some form of data definition language independent of your implementation method (REST, XML-RPC, CORBA, roll your own, etc.)
The data you send and receive over the wire would probably be XML or JSON. If you use XML you would have to write your own XML document handler, as iOS implements the NSXMLParser class but not the NSXMLDocument class. I would prefer JSON, as the JSON parser will return an NSArray or NSDictionary hierarchy of NSObjects containing the unserialized data.
I have worked on a GSOAP implementation that used CFStreams for transport. Each request and response was handled by a request-specific class to create request-specific objects. Each new request required a new class definition for the returned data. Interactivity was maintained by firing the requests through an NSOperationQueue. Lots of shim here. The primary advantage of this method was that the interface was defined in a wsdl schema (all requests, responses, and data structures were defined in one place).
I have not looked at CORBA on iOS - you would have to tie C++ libraries into your code and change the transport to use CFStreams. Again, lots of shim, but with the advantage of having the protocol defined in the idl file. Also you would have a single connection to the server instead of making and breaking TCP connections for each request.
My $.02
XML RPC and what you refer to as "RESTful Web Service" will both get the job done. If you can use JSON instead of XML as the payload format, that would simplify things somewhat on the iOS side.
Zeroconf (aka Bonjour) can be used in combination with either approach. In your case it would allow the client to locate the server dynamically, as an alternative to hard-coding a URL or other address in the client. Zeroconf doesn't play any role in the actual application-level data transfer.
You probably want to avoid having the linux app call the iOS app, since that will complicate the iOS app a lot, plus it will be hard on the battery.
You seem to have cherry picked some existing technologies and seem to be trying to make them fit the problem.
I would like to migrate the communication layer to a much higher level Service based communication framework
Why?
You should be seeking the method which meets your requirements in terms of available resources (should you assume that the client can maintain a consistent connection? how secure does it need to be?). Besides functionality, availability, and security, the biggest concern should be how to implement this with the least amount of effort.
I'd be leaning towards the REST approach because:
I do a lot of web development so that's where my skills lie
it has minimal dependencies
there is well supported code implementing the protocol stack at both ends
it's trivial to replace either end of the connection to test out the implementation
it's trivial to monitor the communications (if they're not encrypted) to test the implementation
adding encryption / authentication does not change the data exchange
Regarding your citation: no, HTTP is probably not the most sensible choice for SCADA - but then neither is iOS.