I want to set the ingress and egress rates of a Solace appliance at the three levels mentioned below.
1. Appliance level
2. Message-VPN level
3. Queue level
Please let me know whether this is possible and share the CLI commands for it.
I assume you're asking whether there is a way to rate-limit ingress or egress traffic.
There is an egress traffic-shaping facility available on some NABs (e.g. NAB-0610EM, NAB-0210EM-04, NAB-0401ET-04, and NAB-0801ET-04), but it is per physical interface (port), e.g.:
solace(configure/interface)# traffic-shaping
solace(configure/interface/traffic-shaping)# egress
solace(configure/interface/traffic-shaping/egress)# rate-limit <number-in-MBPS>
If your network RTT is significant enough, you might be able to approximate a rate limit by constraining the TCP maximum send window to the desired bandwidth-delay product. Note that this is not a generic solution and will only work in certain cases, depending on the network environment. It can be set on the client-profile for egress traffic:
solace(configure)# client-profile <name> message-vpn <vpn-name>
solace(configure/client-profile)# tcp max-wnd <num-kilo-bytes>
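For example (the numbers are purely illustrative, and the kilobyte units follow the <num-kilo-bytes> parameter above): to cap a client at roughly 100 Mb/s over a path with a 20 ms RTT, the bandwidth-delay product is 100,000,000 bits/s / 8 x 0.020 s = 250,000 bytes, i.e. about 250 KB:
solace(configure/client-profile)# tcp max-wnd 250
Keep in mind that on a very low-RTT link even a small window still allows a high throughput, which is why this trick only helps when the RTT is significant.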
There is nothing you can do for rate-limiting ingress traffic on the appliance.
I don't believe SolOS-TR supports rate limiting on the appliance side.
https://sftp.solacesystems.com/Portal_Docs/#page/Solace_Router_Feature_Provisioning/B_Managing_Consumption_of_Router_Resources.html
In the Solace CLI, I type in the following command:
solace> show message-spool message-vpn Solace_VPN
The output shows a difference between the "actual" and "configured" values:
Flows
Max Egress Flows: 100
Configured Max Egress Flows: 1000
Current Egress Flows: 60
Max Ingress Flows: 100
Configured Max Ingress Flows: 1000
Current Ingress Flows: 22
How do I get "Max Egress Flows" and "Configured Max Egress Flows" to align?
Is it as simple as restarting my Message VPN (though this would disconnect all my existing clients)?
Or is this just a limitation of the community edition?
From the output, it would appear that your message broker is configured to operate at only the default 100-connection scaling tier.
There are two options to get the limits to align:
If you meet the system requirements for the 1000 connections scaling tier, you can increase your connection scaling tier to 1000 using the procedure here.
Manually lower the VPN limits. This can easily be done from the "Message VPNs, ACLs & Bridges" tab in SolAdmin: click "Edit Message VPN" and adjust the limits on the "Advanced Properties" tab.
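If you prefer the CLI over SolAdmin, the equivalent settings live under the message-spool context of the VPN. A rough sketch (command names quoted from memory, so please verify them against the CLI reference for your SolOS version; Solace_VPN is just the VPN name from the question):
solace> enable
solace# configure
solace(configure)# message-spool message-vpn Solace_VPN
solace(configure/message-spool)# max-egress-flows 100
solace(configure/message-spool)# max-ingress-flows 100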
I have started reading some details about the MQTT protocol and its implementations, and I came across the term 'cluster' a lot. Can anyone help me understand what 'cluster' means for the MQTT protocol?
In this comparison of various MQTT implementations, there is a column for the term 'cluster'.
Forwarding messages over topic bridges does not result in a true MQTT broker cluster, and it comes with drawbacks (see the notes on bridge loops and sessions later in this thread).
A true MQTT broker cluster is a distributed system that represents one logical MQTT broker. A cluster consists of individual MQTT broker nodes that are typically installed on separate physical or virtual machines and connected over a network.
Typical advantages of MQTT broker clusters include:
Elimination of the single point of failure
Load distribution across multiple cluster nodes
The ability for clients to resume sessions on any node in the cluster
Scalability
Resilience and fault tolerance - especially useful in cloud environments
I recommend this blog post if you're looking for a more detailed explanation.
A cluster is a collection of MQTT brokers set up to bridge all topics between each other, so that a client can connect to any one of the cluster members and still publish messages to, and receive messages from, all other clients, no matter which member those clients are connected to.
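As a quick illustration of what that means in practice (the hostnames are made up, and this assumes the mosquitto command-line clients plus two bridged cluster members as described above): subscribe through one member, publish through another, and the message still arrives.
# subscribe via cluster member A
mosquitto_sub -h broker-a.example.com -p 1883 -t "sensors/+/temperature" -q 1
# publish via cluster member B; the bridge forwards it to the subscriber on A
mosquitto_pub -h broker-b.example.com -p 1883 -t "sensors/kitchen/temperature" -m "21.5" -q 1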
A few things to be aware of:
Topic bridge loops: a message is published to one cluster member, which forwards it to another, then another, and finally back to the original. When this happens, the original broker has no way of knowing that it was the one that first pushed the message to the other cluster members, so the message can end up in a loop. A shared message-state database or a single central replication broker can fix this.
Persistent subscriptions/sessions: unless the brokers share a pooled session cache, clients will not retain their session or subscription state if they reconnect to a different cluster member.
I have been doing some experiments with OVS these days. I have two physical machines running OpenStack, with a GRE tunnel configured between them. I added two internal ports to br-int (the integration bridge) of each machine, assigned them to different namespaces (ns1, ns2, ns3, ns4), and gave them IPs from the same subnet (172.16.0.200, 172.16.0.201, 172.16.0.202, 172.16.0.203). After configuration was done, VM <-> virtual port (same subnet) and virtual port <-> virtual port on the same or different nodes were all reachable (tested with ping). However, something weird showed up when I used iperf to test the bandwidth; the results are as follows:
Physical node<-> Physical node: 1GB/s
VM<->VM on same machine: 10GB/s
VM<->VM on different machines: 1GB/s
VM<->Virtual port same machine: 10GB/s
VM<->Virtual port different machines: 1GB/s
Virtual port<->Virtual port same machine: 16GB/s
Virtual port<->Virtual port different machines: 100~200kb/s (WEIRD!)
I have tried replacing the internal ports with veth pairs; the same behavior shows up.
As I expected, a veth pair should behave similarly to a VM, because both have their own namespace, and an OpenStack VM uses the same mechanism (veth pairs) to connect to br-int. But the experiment shows that VM (node1) -> virtual port (node2) gets 1GB/s of bandwidth, while virtual port (node1) -> virtual port (node2) only gets 100kb/s. Does anybody have any idea?
Thanks for your help.
When using GRE (or VXLAN, or other overlay network), you need to make sure that the MTU inside your virtual machines is smaller than the MTU of your physical interfaces. The GRE/VXLAN/etc header adds bytes to outgoing packets, which means that an MTU sized packet coming from a virtual machine will end up larger than the MTU of your host interfaces, causing fragmentation and poor performance.
This is documented, for example, here:
Tunneling protocols such as GRE include additional packet headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. Internet protocol (IP) networks contain the path MTU discovery (PMTUD) mechanism to detect end-to-end MTU and adjust packet size accordingly. However, some operating systems and networks block or otherwise lack support for PMTUD causing performance degradation or connectivity failure.
Ideally, you can prevent these problems by enabling jumbo frames on the physical network that contains your tenant virtual networks. Jumbo frames support MTUs up to approximately 9000 bytes which negates the impact of GRE overhead on virtual networks. However, many network devices lack support for jumbo frames and OpenStack administrators often lack control over network infrastructure. Given the latter complications, you can also prevent MTU problems by reducing the instance MTU to account for GRE overhead. Determining the proper MTU value often takes experimentation, but 1454 bytes works in most environments. You can configure the DHCP server that assigns IP addresses to your instances to also adjust the MTU.
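Applied to the namespace experiment above, a quick way to test this explanation (the interface names and the namespace-to-IP mapping are assumptions; adjust them for your setup) is to lower the MTU on the namespace ports so the GRE header fits, then rerun iperf:
# check the current MTU inside a namespace
ip netns exec ns1 ip link show
# leave ~46 bytes of headroom for the GRE encapsulation, then retest across nodes
ip netns exec ns1 ip link set dev ns1-port mtu 1454
ip netns exec ns3 ip link set dev ns3-port mtu 1454
ip netns exec ns3 iperf -s
ip netns exec ns1 iperf -c 172.16.0.202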
If MQTT is already a lightweight protocol that uses a small amount of power and bandwidth, then why do we have MQTT-SN? When is it appropriate to use MQTT, and when MQTT-SN?
There are a few advantages of MQTT-SN (SN stands for Sensor Networks) over MQTT, especially for embedded devices.
Advantages
MQTT-SN uses a topic ID instead of a topic name. The client first sends a registration request with the topic name and a topic ID (2 octets) to the broker. After the registration is accepted, the client uses the topic ID to refer to the topic name. This saves bandwidth on the medium and device memory - it is quite expensive to keep a topic name such as home/livingroom/socket2/meter in memory and send it with every publish message.
The topic-name-to-topic-ID mapping can be preconfigured in the MQTT-SN gateway, so that the topic registration message can be skipped before publishing.
MQTT-SN does not require a TCP/IP stack. It can be used over a serial link (the preferred way), where, with a simple link protocol (to distinguish different devices on the line), the overhead is really small. Alternatively, it can be used over UDP, which is less resource-hungry than TCP.
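To put the topic-ID saving in concrete terms (using the example topic above): home/livingroom/socket2/meter is 29 bytes, while a registered topic ID is 2 bytes, so every publish on that topic carries roughly 27 bytes less topic overhead on the wire - which adds up quickly on a low-bandwidth sensor link.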
Disadvantages
You need some sort of gateway, which is essentially nothing more than a TCP or UDP stack moved to a different device. This can be a simple device (e.g. an Arduino Uno) that just serves multiple MQTT-SN devices and does nothing else.
MQTT-SN is not well supported.
If you are short on resources, or your device does not have Ethernet/Wi-Fi, use MQTT-SN.
MQTT-SN (where SN stands for Sensor Networks) is different from MQTT.
MQTT runs over TCP/IP and can be used for LAN communication or over the Internet and the cloud (e.g. if you have a client inside your network but the broker is outside, on the Internet).
MQTT-SN can be used over protocols better suited to sensor networks, such as ZigBee, Z-Wave, and so on.
The specification is different from MQTT ... so it isn't simply MQTT running over something other than TCP/IP.
It's more lightweight and needs a bridge to translate MQTT-SN messages into MQTT messages.
Paolo.