Send all traffic over VPN connection Swift - iOS

I am creating an application to connect to a VPN server. I have worked out how to do this, but I need to be able to send all traffic over the connection. Is there any code that can do this? So far I have tried:
import NetworkExtension

let manager: NEVPNManager = NEVPNManager.sharedManager()
let p = NEVPNProtocolIPSec()
p.serverAddress = ""
p.username = ""
// Note: passwordReference and sharedSecretReference expect keychain
// persistent references rather than plain encoded strings.
let pw = ""
p.passwordReference = pw.dataUsingEncoding(NSUTF8StringEncoding)
p.authenticationMethod = NEVPNIKEAuthenticationMethod.SharedSecret
//p.sharedSecretReference = getPasscodeNSData("vpnSharedSecret")
p.useExtendedAuthentication = true
p.disconnectOnSleep = false
manager.`protocol` = p

From Apple:
This is the default routing method. The IP routes are specified by the
Packet Tunnel Provider extension at the time that the VPN tunnel is
fully established. See NETunnelProvider for more details.
This is something that is typically specified from the server end; L2TP does allow the user to toggle this setting. For that, take a look at the routingMethod property of NETunnelProvider. Scroll down to the "Routing Network Data to the VPN" section for the information you are looking for.
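To illustrate the NETunnelProvider route the quote describes, here is a minimal sketch (modern Swift syntax, placeholder tunnel and DNS addresses) of a Packet Tunnel Provider extension claiming the default route so that all IPv4 traffic is sent over the tunnel:
import NetworkExtension

class PacketTunnelProvider: NEPacketTunnelProvider {
    override func startTunnel(options: [String: NSObject]?,
                              completionHandler: @escaping (Error?) -> Void) {
        // Placeholder addresses; a real provider gets these from its server.
        let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "203.0.113.1")
        let ipv4 = NEIPv4Settings(addresses: ["10.0.0.2"], subnetMasks: ["255.255.255.0"])
        ipv4.includedRoutes = [NEIPv4Route.default()]   // route all IPv4 traffic into the tunnel
        settings.ipv4Settings = ipv4
        settings.dnsSettings = NEDNSSettings(servers: ["10.0.0.1"])

        setTunnelNetworkSettings(settings) { error in
            // Packet handling via packetFlow would start here once routes are applied.
            completionHandler(error)
        }
    }
}
Note that this only applies if you ship your own NEPacketTunnelProvider; for the built-in NEVPNProtocolIPSec configuration in the question, the routes are pushed by the VPN server.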

Related

Creating transaction to send BNB via BNB Smart Chain with Wallet Core on iOS

I'm trying to create, sign, and broadcast transactions through Tatum, but I can't find any example of how to do this. All I know is that the BSC network is based on the Ethereum Virtual Machine, so my transaction should look like this:
let signerInput = EthereumSigningInput.with {
    $0.chainID = chainID
    $0.gasPrice = gasPrice
    $0.gasLimit = gasLimit
    $0.to = address
    $0.nonce = transactionsCount
    $0.transaction = EthereumTransaction.with {
        $0.transfer = EthereumTransaction.Transfer.with {
            $0.amount = amountData
        }
    }
    $0.privateKey = wallet.getKeyForCoin(coin: .smartChain).data
}
let output: EthereumSigningOutput = AnySigner.sign(
    input: signerInput,
    coin: .smartChain
)
Am I wrong? Should I prepare the transaction in some other way?
Thank you!
I'm trying to use Ethereum-style transactions on the BNB Smart Chain network.
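One detail worth checking: Wallet Core's Ethereum signing input takes chainID, nonce, gasPrice, gasLimit, and the transfer amount as big-endian byte Data rather than integers. A minimal sketch of building those values (the helper name and the example numbers are mine, not part of Wallet Core):
import Foundation

// Encode an unsigned integer as minimal big-endian bytes, as expected by the
// bytes fields of Wallet Core's Ethereum SigningInput.
func bigEndianData(_ value: UInt64) -> Data {
    var be = value.bigEndian
    let bytes = withUnsafeBytes(of: &be) { Data($0) }
    if let first = bytes.firstIndex(where: { $0 != 0 }) {
        return bytes.subdata(in: first..<bytes.endIndex)
    }
    return Data([0])
}

let chainID = bigEndianData(56)                          // BNB Smart Chain mainnet
let transactionsCount = bigEndianData(0)                 // nonce (example)
let gasPrice = bigEndianData(5_000_000_000)              // 5 gwei (example)
let gasLimit = bigEndianData(21_000)
let amountData = bigEndianData(10_000_000_000_000_000)   // 0.01 BNB in wei (example)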

How to use "android.net.wifi.NetworkSpecifier.Builder" with jnius?

The issue is:
I was using "android.net.wifi.WifiConfiguration" for connecting to Wi-Fi like this, and it was working correctly:
from jnius import autoclass

ssid = str("ssid_name")
print("app says-->connecting to wifi:", ssid)
String = autoclass('java.lang.String')
WifiConfigure = autoclass('android.net.wifi.WifiConfiguration')
PythonActivity = autoclass('org.kivy.android.PythonActivity')
activity = PythonActivity.mActivity
service = activity.getSystemService("wifi")
WifiManager = autoclass('android.net.wifi.WifiManager')
WifiConfig = WifiConfigure()
Connectname = String(ssid)
connectkey = String("Wifi Password")
WifiConfig.SSID = "\"" + Connectname.toString() + "\""
WifiConfig.hiddenSSID = True
WifiConfig.preSharedKey = "\"" + connectkey.toString() + "\""
added = WifiManager.addNetwork(WifiConfig)
WifiManager.enableNetwork(added, True)
But after API 29 that Java class is deprecated, and I need to deploy the Android App Bundle to the Play Store targeting at least API 30.
So:
On https://developer.android.com they talk about using "android.net.wifi.NetworkSpecifier.Builder" instead of "android.net.wifi.WifiConfiguration"; can anyone tell me how to integrate it with jnius and autoclass?
I'm hoping Python programmers can help me solve the problem.
You cannot silently connect to Wi-Fi programmatically for apps targeting API 29 (Android 10) and above: the WifiConfiguration/addNetwork route is deprecated, and its replacements (WifiNetworkSpecifier for temporary peer-to-peer connections, WifiNetworkSuggestion for networks the system may later join) both require user involvement.
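If you still want to try the newer API from jnius, the closest equivalent is the suggestion API. Below is a minimal, untested sketch (it assumes the Kivy PythonActivity setup from the question, a placeholder SSID/password, and the CHANGE_WIFI_STATE permission); it only suggests the network, and the system decides whether to join it:
from jnius import autoclass, cast

String = autoclass('java.lang.String')
ArrayList = autoclass('java.util.ArrayList')
SuggestionBuilder = autoclass('android.net.wifi.WifiNetworkSuggestion$Builder')
PythonActivity = autoclass('org.kivy.android.PythonActivity')

activity = PythonActivity.mActivity
# getSystemService returns a plain Object to jnius, so cast it explicitly.
wifi_manager = cast('android.net.wifi.WifiManager', activity.getSystemService('wifi'))

suggestion = SuggestionBuilder() \
    .setSsid(String('ssid_name')) \
    .setWpa2Passphrase(String('Wifi Password')) \
    .build()

suggestions = ArrayList()
suggestions.add(suggestion)

# 0 (STATUS_NETWORK_SUGGESTIONS_SUCCESS) means the suggestion was accepted.
status = wifi_manager.addNetworkSuggestions(suggestions)
print('addNetworkSuggestions status:', status)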

Kafka sink to InfluxDB

I'm trying to get data from my kafka topic into InfluxDB using the Confluent/Kafka stack. At the moment, the messages in the topic have a form of {"tag1":"123","tag2":"456"} (I have relatively good control over the message format, I chose the JSON to be as above, could include a timestamp etc if necessary).
Ideally, I would like to add many tags without needing to specify a schema/column names in the future.
I followed https://docs.confluent.io/kafka-connect-influxdb/current/influx-db-sink-connector/index.html (the "Schemaless JSON tags example") as this matches my use case quite closely. The "key" of each message is currently just the MQTT topic name (the topic's source is an MQTT connector), so I set "key.converter" to StringConverter (instead of the JsonConverter used in the example).
Other examples I've seen online seem to suggest the need for a schema to be set, which I'd like to avoid. Using InfluxDB v1.8, everything on Docker/maintained on Portainer.
I cannot get the connector to start, and no data ever moves across.
Below is the config for my InfluxDBSink Connector:
{
  "name": "InfluxDBSinkKafka",
  "config": {
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false",
    "name": "InfluxDBSinkKafka",
    "connector.class": "io.confluent.influxdb.InfluxDBSinkConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "topics": "KAFKATOPIC1",
    "influxdb.url": "http://URL:PORT",
    "influxdb.db": "tagdata",
    "measurement.name.format": "${topic}"
  }
}
The connector fails, and each time I click "start" (the play button) the following pops up in the connect container's logs:
[2022-03-22 15:46:52,562] INFO [Worker clientId=connect-1, groupId=compose-connect-group]
Connector InfluxDBSinkKafka target state change (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
[2022-03-22 15:46:52,562] INFO Setting connector InfluxDBSinkKafka state to STARTED (org.apache.kafka.connect.runtime.Worker)
[2022-03-22 15:46:52,562] INFO SinkConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.influxdb.InfluxDBSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class org.apache.kafka.connect.storage.StringConverter
name = InfluxDBSinkKafka
predicates = []
tasks.max = 1
topics = [KAFKATOPIC1]
topics.regex =
transforms = []
value.converter = class org.apache.kafka.connect.json.JsonConverter
(org.apache.kafka.connect.runtime.SinkConnectorConfig)
[2022-03-22 15:46:52,563] INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.confluent.influxdb.InfluxDBSinkConnector
errors.deadletterqueue.context.headers.enable = false
errors.deadletterqueue.topic.name =
errors.deadletterqueue.topic.replication.factor = 3
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
header.converter = null
key.converter = class org.apache.kafka.connect.storage.StringConverter
name = InfluxDBSinkKafka
predicates = []
tasks.max = 1
topics = [KAFKATOPIC1]
topics.regex =
transforms = []
value.converter = class org.apache.kafka.connect.json.JsonConverter
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig)
I am feeling a little out of my depth and would appreciate any and all help.
The trick here is getting the data to Kafka in the right format in the first place. My MQTT source connector needed to have its value converter set to ByteArrayConverter with a schema URL and schemas enabled; the InfluxDB sink then started working once I used the JsonConverter with schemas.enable=false. This is deceptive because the messages in the topic look the same with different value converters on the MQTT source connector, so it took a while to figure out that this was the problem.
After getting this working, and realising the Confluent stack was perhaps a little overkill for this task, I went with the (much) easier route of pushing MQTT directly into Telegraf and having Telegraf push into InfluxDB. I would recommend this.
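For reference, a sketch of the kind of MQTT source connector config described above (the connector class, broker URI, and topic names here are placeholders, and exact property names can differ between connector versions):
{
  "name": "MQTTSourceKafka",
  "config": {
    "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",
    "tasks.max": "1",
    "mqtt.server.uri": "tcp://MQTT_BROKER:1883",
    "mqtt.topics": "sensors/#",
    "kafka.topic": "KAFKATOPIC1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter"
  }
}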

Offlineimap stops retrieving after first 20-30 messages

I have been trying to set up offlineimap to sync mail from gmail to the local folders on my mac machine.
The issue with my current set-up is that offlineimap will start to sync the mail from both accounts, and I can see lines like:
Copy message 3 (3 of 10966) repo1_remote:[Gmail]/Important -> repo1_local
But after around 20-30 "Copy message" lines, they just stop. Offlineimap is still connected though; it refreshes after 10 minutes and syncs again, but I can't see any more "Copy message" lines for either repo, it just stops. I can see these 20-30 new messages in mutt, but no more. Killing and restarting offlineimap again copies 20-30 new messages and then stops again. I have no clue as to what is wrong; I'd expect it to copy all messages locally. Here is my offlineimaprc. I have the Python file set up correctly.
[general]
metadata = ~/.offlineimap
accounts = repo1, repo2
maxsyncaccounts = 10
#ui = blinkenlights
ui = ttyui
pythonfile = ~/Development/OfflineIMAP/mail/offlineimap.py
#socktimeout = 60
[mbnames]
[Account repo2]
localrepository = repo2_local
remoterepository = repo2_remote
autorefresh = 10
status_backend = sqlite
synclabels = yes
[Account repo1]
localrepository = repo1_local
remoterepository = repo1_remote
autorefresh = 10
status_backend = sqlite
synclabels = yes
[Repository repo2_local]
type = GmailMaildir
nametrans = get_remote_name
localfolders = ~/Development/OfflineIMAP/mail/repo2
sep = /
restoreatime = yes
[Repository repo1_local]
type = GmailMaildir
nametrans = get_remote_name
localfolders = ~/Development/OfflineIMAP/mail/repo1
sep = /
restoreatime = yes
[Repository repo2_remote]
type = Gmail
folderfilter = is_included
nametrans = get_local_name
cert_fingerprint = 3ffdb8519c1c8242ce8387d3d9fccc208a776b4a
remoteuser = asd@gmail.com
remotepasseval = get_password('asd')
usecompression = yes
maxconnections = 3
[Repository repo1_remote]
type = Gmail
folderfilter = is_included
nametrans = get_local_name
cert_fingerprint = 3ffdb8519c1c8242ce8387d3d9fccc208a776b4a
remoteuser = qwe@gmail.com
remotepasseval = get_password('qwe')
usecompression = yes
maxconnections = 3
I would like to know what is preventing offlineimap from copying further messages, and what I should change in the config to make it work properly.
I've just recently run into the same problem with Gmail. In my case, disabling compression and setting the connection limit to 1 resolved the issue (I didn't have time to investigate fully). Have you tried doing this?
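That corresponds to changing these two lines in each remote [Repository] section of the offlineimaprc above:
usecompression = no
maxconnections = 1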

Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow the carbon-relay is not working.
My cache runs on 2003/2004 and the relay on 2013/2014.
Following are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers will be helpful
As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of carbon-relay should point at carbon-cache's pickle receiver port instead.
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify the relay-rules.conf with the same destinations specified in carbon.conf
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
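To check that the relay is actually forwarding, a quick test (with a hypothetical metric name) is to push a value at the relay's plaintext line receiver and watch for it on the caches:
echo "test.relay.metric 1 $(date +%s)" | nc 127.0.0.1 2013
If a whisper file for test.relay.metric appears under the storage directory on both destinations, the relay and relay-rules are working.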
