SCTP Multihoming INIT messages

In an SCTP multihoming configuration, I have configured two sets of paths: primary and secondary. My question is: where is the INIT sent by default, on the primary or the secondary path (assuming both paths are up)?
Is there any condition that the INIT must be sent on the primary path only if that path is in the UP state?

By default, any SCTP packet should be sent using the primary path. This is what I found in RFC 4960, Section 6.4, paragraph 3:
By default, an endpoint SHOULD always transmit to the primary path,
unless the SCTP user explicitly specifies the destination transport
address (and possibly source transport address) to use.
Regarding the second question ("Is there any condition that the INIT must be sent on the primary path only if that path is in the UP state?"):
It does not look logical to send via an interface that is down. Below is RFC 4960, Section 6.4.1, paragraph 2:
When there is outbound data to send and the primary path becomes
inactive (e.g., due to failures), or where the SCTP user explicitly
requests to send data to an inactive destination transport address,
before reporting an error to its ULP, the SCTP endpoint should try to
send the data to an alternate active destination transport address if
one exists.
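To make the two quotes concrete, here is how the path choice typically surfaces at the sockets level. This is only a minimal sketch, assuming the Linux lksctp-tools API (RFC 6458); the addresses, ports, and missing error handling are illustrative and not taken from the question. connect() is what triggers the INIT, the stack sends it toward the destination the application supplies, and SCTP_PRIMARY_ADDR is how an application can explicitly pin the primary destination afterwards.

/* Build and link with: gcc sctp_demo.c -lsctp (Linux, lksctp-tools) */
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/sctp.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

    /* Bind both local addresses (primary and secondary interface) so the
       association is multihomed on our side. */
    struct sockaddr_in locals[2];
    memset(locals, 0, sizeof(locals));
    locals[0].sin_family = AF_INET;
    locals[0].sin_port   = htons(5000);
    inet_pton(AF_INET, "192.0.2.1", &locals[0].sin_addr);      /* primary   */
    locals[1].sin_family = AF_INET;
    locals[1].sin_port   = htons(5000);
    inet_pton(AF_INET, "198.51.100.1", &locals[1].sin_addr);   /* secondary */
    sctp_bindx(fd, (struct sockaddr *)locals, 2, SCTP_BINDX_ADD_ADDR);

    /* connect() sends the INIT to the peer address given here; per RFC 4960
       Section 6.4, transmission then stays on the primary path by default. */
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);
    inet_pton(AF_INET, "203.0.113.10", &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof(peer));

    /* The "unless the SCTP user explicitly specifies" part: an application
       can pin the primary destination address with SCTP_PRIMARY_ADDR. */
    struct sctp_prim prim;
    memset(&prim, 0, sizeof(prim));
    memcpy(&prim.ssp_addr, &peer, sizeof(peer));
    setsockopt(fd, IPPROTO_SCTP, SCTP_PRIMARY_ADDR, &prim, sizeof(prim));

    close(fd);
    return 0;
}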

Can the INIT OData Source Kafka Source Connector pull data from XSODATA services?

I'd like to preface this with the fact that I am completely new to SAP, SAP HANA, and OData.
I was tasked with pulling changes from an SAP HANA table and transferring them to Kafka.
I noticed there was a Kafka source connector already written, which can be found here.
For this task, I was given a URL, a username and a password.
The URL looks like this:
https://blablabla.companyName.com/companyName/Foo/Bar/Baz/Foo/Table/Resource.xsodata
And this is a sample of the source connector's configs:
# The first few settings are required for all connectors:
# a name, the connector class to run, and the maximum number of
# tasks to create.
name = odatav4-source-connector
connector.class = org.init.ohja.kafka.connect.odatav4.source.OData4SourceConnector
tasks.max = 1
# The remaining configs are specific to the OData v4 source connector.
# OData server host as either DNS or IP
sap.odata.host.address = services.odata.org
# OData server port
sap.odata.host.port = 443
# OData protocol (supported values are http or https)
sap.odata.host.protocol = https
# OData user name for basic authentication
# For services not requiring authentication this can be set to any value
sap.odata.user.name = anonymous
# OData user password for basic authentication
# For services not requiring authentication this can be set to any value
sap.odata.user.pwd = anonymous
# Optional list of service URL query parameters in the form of "param1=value1,param2=value2", e.g. sap-client=200
#sap.odata.query-params=
# none(default): DECIMALs will be mapped to Connect Decimal data type
# primitive: DECIMALs will be mapped to INT64 (if scale = 0) and FLOAT64
#sap.odata.decimal.mapping = none
# maximum amount of retries in case of service connection/communication errors (e.g. HTTP status codes 400-599)
#sap.odata.max.retries = 30
# The backoff strategy applied will select a random number of milliseconds
# to wait between min.retry.backoff.ms and max.retry.backoff.ms before starting
# the next retry.
#sap.odata.min.retry.backoff.ms = 20000
#sap.odata.max.retry.backoff.ms = 180000
# Timeout in milliseconds for establishing http connections
#sap.odata.connection.connect.timeout.ms=3000
# Timeout in milliseconds for reading data from a http connection
#sap.odata.connection.read.timeout.ms=10000
# Individual configurations for each OData v4 service entity.
# service and entityset build up the primary key for each OData configuration.
# OData v4 URL service path
sap.odata#00.service = /V4/Northwind/Northwind.svc/
# OData v4 entity set name
# The entity set name can be queried from the /$metadata service URL
sap.odata#00.entityset = Order_Details
# Kafka topic name the data for this OData service entity set will be pushed to
sap.odata#00.topic = Order_Details
# Execution interval in seconds for the scheduled data extractions
# Set to -1 to process subscription events only
#sap.odata#00.exec-period = 900
# If changes to entities selected by the first query should be tracked and returned as deltas in subsequent polls
# Set to 1 to enable odata delta mode
#sap.odata#00.track-changes = 0
# Paging mode (server or client) determines the type of paging
# server: use HTTP prefer-headers to request a maximum package size from the odata server
# client: use query functions skip and top (not compatible to change tracking)
#sap.odata#00.paging.mode = server
# Packaging size in count of entity set records
#sap.odata#00.paging.size = 50000
# Optional: Hierarchy level up to which recommendations for the expand.list configuration (query option $expand) will
# be shown in the Confluent Control Center
#sap.odata#00.expand.level = 1
# Optional: List of expand query options that will define the deep structure of returned entity messages
#sap.odata#00.expand.list =
# Optional: comma separated list of selected non-key fields to be extracted
#sap.odata#00.projection =
# Optional: filter query options
# Supported logical operations/options are: eq, ne, le, lt, ge, gt, bt, nb, in
#sap.odata#00.select#00.fieldname =
#sap.odata#00.select#00.option =
#sap.odata#00.select#00.low =
#sap.odata#00.select#00.high =
# If set to 1 the connector will subscribe to push-notifications issued by the corresponding OData service entity
#sap.odata#00.subscription.enable = 0
So I tried to create my own, like so:
{
  "name": "sap-hana-source-connector",
  "config": {
    "connector.class": "org.init.ohja.kafka.connect.odatav4.source.OData4SourceConnector",
    "sap.odata.user.name": "username",
    "sap.odata.host.address": "blablabla.companyName.com",
    "sap.odata.host.port": "443",
    "sap.odata.host.protocol": "https",
    "sap.odata#00.service": "/companyName/Foo/Bar/Baz/Foo/Table/Resource.xsodata",
    "sap.odata#00.entityset": "Resource",
    "sap.odata.user.pwd": "pwd"
  }
}
The issue is that the only error I get is this:
{
"error_code": 400,
"message": "Connector configuration is invalid and contains the following 14 error(s):\nInvalid configuration sap.odata.host.address: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.host.protocol: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.host.port: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.user.name: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.user.pwd: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.max.retries: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.min.retry.backoff.ms: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.max.retry.backoff.ms: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.connection.connect.timeout.ms: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.connection.read.timeout.ms: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.query-params: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.trace.mode: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.trace.path: No configured service reachable. Maybe invalid destination configuration?\nInvalid configuration sap.odata.decimal.mapping: No configured service reachable. Maybe invalid destination configuration?\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"
}
As someone who is completely new to OData and SAP, I don't know how I'd debug this.
I noticed that, in the OData Kafka source connector's documentation, the services end in .svc and not .xsodata, so maybe it has something to do with that?
Also, what am I supposed to put for the sap.odata#00.entityset config?
Is there a way to get a more detailed error message?
Thanks.

Google MQTT broker - is IP address stable from mqtt.googleapis.com

We are testing the new NB-IoT demo modules from O2, and they only accept an IP address as the broker host rather than a hostname (mqtt.googleapis.com). If I run a DNS lookup this works fine, but how stable is the IP address associated with mqtt.googleapis.com?
The DNS lookup currently returns 74.125.201.206.
How long will it remain stable / the same?
stream {
    upstream google_mqtt {
        server mqtt.googleapis.com:8883;
    }
    server {
        listen 8883;
        proxy_pass google_mqtt;
    }
}
Instead of the MQTT hostname, I want to insert the IP address.
Why would you want to hard-code the IP address? You are just setting yourself up for it to fail at the moment you can't fix it (e.g. while on vacation).
You shouldn't assume an IP address returned by a DNS query is good for any longer than the TTL value returned with the response.
Hostnames are a deliberate abstraction so you don't have to worry about whether the IP address changes, be it due to a failure, maintenance, or load balancing.
Just DON'T hardcode the IP address.
If the module you mentioned REALLY only accepts IP addresses, then you need to raise a bug against the supplier saying this needs fixing, especially as this is a field-deployed device that you probably can't easily update once deployed.
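If a numeric address really is unavoidable at some layer, the least fragile pattern is to resolve mqtt.googleapis.com freshly on every (re)connect rather than baking in 74.125.201.206, so a changed record is picked up as soon as the cached answer expires. Below is a minimal C sketch of that lookup with getaddrinfo(); it assumes IPv4 only and port 8883 as in the nginx snippet above, with no MQTT or TLS handling.

#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res, *p;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;       /* IPv4 only, to match the module */
    hints.ai_socktype = SOCK_STREAM;

    /* Resolve at (re)connect time instead of hard coding the address. */
    if (getaddrinfo("mqtt.googleapis.com", "8883", &hints, &res) != 0) {
        fprintf(stderr, "DNS lookup failed\n");
        return 1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *sa = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &sa->sin_addr, ip, sizeof(ip));
        printf("current broker address: %s\n", ip);   /* connect() would go here */
    }

    freeaddrinfo(res);
    return 0;
}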

What is the RADIUS server response code for the following test cases?

I am testing my RADIUS server implementation, but I'm not sure about the correct response code in the following cases:
1. Client logs in without a password
2. Client sends a bad request code
Do you have any idea?
According to RFC 2865, zero or one instance of the User-Password attribute is allowed in a given Access-Request, and one of the User-Password, CHAP-Password, or State attributes must be present.
An Access-Request MUST contain either a User-Password or a
CHAP-Password or State. An Access-Request MUST NOT contain both a
User-Password and a CHAP-Password. If future extensions allow other
kinds of authentication information to be conveyed, the attribute for
that can be used in an Access-Request instead of User-Password or
CHAP-Password.
The RFC is silent on what should happen if none of these attributes are present, however.
If you wish to emulate popular RADIUS solutions (such as FreeRADIUS), you should return an Access-Reject in this instance.
The second case (an invalid request code) is dealt with in RFC 2865:
The Code field is one octet, and identifies the type of RADIUS
packet. When a packet is received with an invalid Code field, it
is silently discarded.
i.e. no response should be sent.
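Putting both answers together, the decision logic for the two test cases could look roughly like the C sketch below. The packet codes and attribute numbers come from RFC 2865; everything else (function names, the bare-bones attribute scan, the accept path) is made up for illustration.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

enum { ACCESS_REQUEST = 1, ACCESS_ACCEPT = 2, ACCESS_REJECT = 3 };        /* RFC 2865 codes      */
enum { ATTR_USER_PASSWORD = 2, ATTR_CHAP_PASSWORD = 3, ATTR_STATE = 24 }; /* RFC 2865 attributes */

/* Scan the attribute area (Type, Length incl. header, Value) for a given type. */
static int has_attr(const uint8_t *attrs, size_t len, uint8_t type)
{
    size_t i = 0;
    while (i + 2 <= len) {
        uint8_t t = attrs[i], l = attrs[i + 1];
        if (l < 2 || i + l > len)
            return 0;                 /* malformed attribute: stop scanning */
        if (t == type)
            return 1;
        i += l;
    }
    return 0;
}

/* Returns the Code to answer with, or -1 for "silently discard". */
static int handle_packet(const uint8_t *pkt, size_t len)
{
    if (len < 20)
        return -1;                    /* shorter than a RADIUS header: drop */

    /* Case 2: an invalid Code field is silently discarded, no response at all.
       (This sketch only authenticates, so anything but Access-Request is dropped.) */
    if (pkt[0] != ACCESS_REQUEST)
        return -1;

    /* Case 1: no User-Password, CHAP-Password or State attribute present.
       The RFC leaves this open; mirroring FreeRADIUS, answer Access-Reject. */
    const uint8_t *attrs = pkt + 20;  /* attributes follow the 20-byte header */
    size_t alen = len - 20;
    if (!has_attr(attrs, alen, ATTR_USER_PASSWORD) &&
        !has_attr(attrs, alen, ATTR_CHAP_PASSWORD) &&
        !has_attr(attrs, alen, ATTR_STATE))
        return ACCESS_REJECT;

    return ACCESS_ACCEPT;             /* real credential checking would go here */
}

int main(void)
{
    /* A minimal Access-Request carrying no attributes at all (case 1). */
    uint8_t pkt[20] = { ACCESS_REQUEST, 0, 0, 20 };
    printf("reply code: %d\n", handle_packet(pkt, sizeof(pkt)));
    return 0;
}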

How can I access HTTPS using direct ip without editing /etc/hosts in iOS?

By default, example.com resolves to 123.123.123.123, but I want it to resolve to 100.100.100.100.
For HTTP, I can simply change the URL to http://100.100.100.100 and send the header "Host: example.com".
But this does not work for HTTPS (error: "SSL certificate problem: Invalid certificate chain").
My question is not why, and I do not want to skip certificate validation.
How can I get the same effect in Objective-C as curl's --resolve option:
--resolve <host:port:address>
Provide a custom address for a specific host and port pair. Using this, you can make the curl request(s)
use a specified address and prevent the otherwise normally resolved address to be used. Consider it a sort
of /etc/hosts alternative provided on the command line. The port number should be the number used for the
specific protocol the host will be used for. It means you need several entries if you want to provide
address for the same host but different ports.
In other words, how can I make a custom DNS lookup for HTTPS requests in Objective-C?
When you are using HTTPS, the address that you use in your request and the address in the certificate returned by the server must agree.
If you send a request to https://100.100.100.100, then the server must return a certificate for 100.100.100.100. Even if you connected successfully to https://www.xyz.com, and www.xyz.com resolved to 100.100.100.100, connecting to https://100.100.100.100 isn't going to work, cannot work, and absolutely must not work, because the server will return a certificate for www.xyz.com and not for 100.100.100.100.
I see the following options:
Use your own DNS server with a corresponding host/IP entry.
If you want to stick with Objective-C, there is a guideline from Apple: Overriding SSL Chain Validation Correctly.
Use libcurl, which supports the feature you mentioned: http://curl.haxx.se/libcurl/c/resolve.html
Example:
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res = CURLE_OK;
    struct curl_slist *host = NULL;

    /* Each single name resolve string should be written using the format
       HOST:PORT:ADDRESS where HOST is the name libcurl will try to resolve,
       PORT is the port number of the service where libcurl wants to connect to
       the HOST and ADDRESS is the numerical IP address
     */
    host = curl_slist_append(NULL, "example.com:80:127.0.0.1");

    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
        res = curl_easy_perform(curl);

        /* always cleanup */
        curl_easy_cleanup(curl);
    }

    curl_slist_free_all(host);
    return (int)res;
}
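Since the question is specifically about HTTPS, note that the same CURLOPT_RESOLVE mechanism covers the TLS case: the URL keeps the real hostname, so SNI and certificate validation still run against example.com while the connection goes to the pinned address. Below is a sketch adapted to the addresses in the question; it only works if the server at 100.100.100.100 actually presents a valid certificate for example.com.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res = CURLE_OK;
    struct curl_slist *host = NULL;

    /* Pin example.com:443 to 100.100.100.100. Certificate validation still
       checks the name "example.com", so it is not weakened in any way. */
    host = curl_slist_append(NULL, "example.com:443:100.100.100.100");

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
        curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }

    curl_slist_free_all(host);
    curl_global_cleanup();
    return (int)res;
}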
Update:
Since the author does not want to skip certificate validation, this is not an option anymore:
You could otherwise ignore the SSL certificate in AFNetworking in your case; see "I want to allow invalid SSL certificates with AFNetworking".

IdUDPServer sending header checksum as 0x00

I am making a simple UDP P2P chat program with a well-known server.
The clients send and receive data to and from the server and other clients through a single IdUDPServer.
As of now, the clients can log in and log out, i.e., they can send data to the server.
Whenever the server sends any data, it gets dropped at the NIC of the receiving node because the embedded IP header checksum is 0x00, as reported by Wireshark.
IdUDPServer Settings (Client/Server)
Active : True
Bindings :
Broadcast : False
BufferSize : 8192
DefaultPort : 10000
IPVersion : Id_IPv4
ThreadedEvent : False
Command Used
Only one command is used:
UDPServer.SendBuffer ( ED_Host.Text, StrToInt ( ED_Port.Text ), Buffer );
A similar configuration is working perfectly in another program of mine.
Most NICs will perform checksum validation and generation these days instead of the OS network stack. This is done to improve performance and is known as checksum offloading. As such, Wireshark will report the missing checksum as an error, but it can usually be ignored, or the error can be turned off in the Wireshark settings.
Some NIC drivers allow you to turn off checksum offloading. Try this and retest the code.
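For reference, on Linux this kind of offloading can be queried and disabled programmatically through the ethtool ioctl interface; the sketch below assumes a Linux host, the interface name eth0, and root privileges, and is only an illustration of the idea (on Windows the equivalent switch usually lives in the NIC driver's advanced properties).

/* Linux-only sketch: query and disable TX checksum offloading on eth0. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct ifreq ifr;
    struct ethtool_value eval;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&eval;

    eval.cmd = ETHTOOL_GTXCSUM;                 /* query current state */
    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("TX checksum offload: %s\n", eval.data ? "on" : "off");

    eval.cmd  = ETHTOOL_STXCSUM;                /* turn it off (needs root) */
    eval.data = 0;
    if (ioctl(fd, SIOCETHTOOL, &ifr) != 0)
        perror("SIOCETHTOOL");

    close(fd);
    return 0;
}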
