Offlineimap stops retrieving after first 20-30 messages - imap

I have been trying to set up OfflineIMAP to sync mail from Gmail to local folders on my Mac. With my current setup, OfflineIMAP starts to sync mail from both accounts and I can see lines like:
Copy message 3 (3 of 10966) repo1_remote:[Gmail]/Important -> repo1_local
But after around 20-30 copy messages, these lines just stop. OfflineIMAP is still connected; it refreshes after 10 minutes and syncs again, but I can't see any more copy-message lines for either repository. I can see the 20-30 new messages in mutt, but no more. Killing and restarting OfflineIMAP copies another 20-30 new messages and then stops again. I have no clue what is wrong; I expect it to copy all messages locally. Here is my offlineimaprc (the referenced Python file is set up correctly).
[general]
metadata = ~/.offlineimap
accounts = repo1, repo2
maxsyncaccounts = 10
#ui = blinkenlights
ui = ttyui
pythonfile = ~/Development/OfflineIMAP/mail/offlineimap.py
#socktimeout = 60
[mbnames]
[Account repo2]
localrepository = repo2_local
remoterepository = repo2_remote
autorefresh = 10
status_backend = sqlite
synclabels = yes
[Account repo1]
localrepository = repo1_local
remoterepository = repo1_remote
autorefresh = 10
status_backend = sqlite
synclabels = yes
[Repository repo2_local]
type = GmailMaildir
nametrans = get_remote_name
localfolders = ~/Development/OfflineIMAP/mail/repo2
sep = /
restoreatime = yes
[Repository repo1_local]
type = GmailMaildir
nametrans = get_remote_name
localfolders = ~/Development/OfflineIMAP/mail/repo1
sep = /
restoreatime = yes
[Repository repo2_remote]
type = Gmail
folderfilter = is_included
nametrans = get_local_name
cert_fingerprint = 3ffdb8519c1c8242ce8387d3d9fccc208a776b4a
remoteuser = asd@gmail.com
remotepasseval = get_password('asd')
usecompression = yes
maxconnections = 3
[Repository repo1_remote]
type = Gmail
folderfilter = is_included
nametrans = get_local_name
cert_fingerprint = 3ffdb8519c1c8242ce8387d3d9fccc208a776b4a
remoteuser = qwe@gmail.com
remotepasseval = get_password('qwe')
usecompression = yes
maxconnections = 3
I would like to know what is preventing OfflineIMAP from copying further messages and what I should change in the config to make it work properly.

I've just recently run into the same problem with Gmail. In my case, disabling compression and limiting the number of connections to 1 resolved the issue (I didn't have time to investigate fully). Have you tried doing this?
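For reference, in the offlineimaprc above that would mean changing these two lines in both [Repository repo1_remote] and [Repository repo2_remote] (worth a try, though I can't promise it's the root cause here):
usecompression = no
maxconnections = 1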

Related

How to use "android.net.wifi.NetworkSpecifier.Builder" with jnius?

The issue is: I was using "android.net.wifi.WifiConfiguration" to connect to Wi-Fi, like this, and it works correctly:
from jnius import autoclass, cast
ssid = "ssid_name"
print("app says --> connecting to wifi:", ssid)
String = autoclass('java.lang.String')
WifiConfigure = autoclass('android.net.wifi.WifiConfiguration')
PythonActivity = autoclass('org.kivy.android.PythonActivity')
activity = PythonActivity.mActivity
# getSystemService() returns a plain Object, so cast it to WifiManager before use
service = cast('android.net.wifi.WifiManager', activity.getSystemService("wifi"))
WifiConfig = WifiConfigure()
Connectname = String(ssid)
connectkey = String("Wifi Password")
WifiConfig.SSID = "\"" + Connectname.toString() + "\""
WifiConfig.hiddenSSID = True
WifiConfig.preSharedKey = "\"" + connectkey.toString() + "\""
added = service.addNetwork(WifiConfig)
service.enableNetwork(added, True)
But after API 29 that Java class is deprecated, and to publish the Android App Bundle on the Play Store I need to target at least API 30. On https://developer.android.com they suggest using "android.net.wifi.NetworkSpecifier.Builder" instead of "android.net.wifi.WifiConfiguration". Can anyone tell me how to integrate it with jnius and autoclass? I'm hoping Python programmers can help me solve this.
You cannot programmatically connect to Wi-Fi after API 30.
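The closest replacement the newer API offers is a temporary, app-local connection through android.net.wifi.WifiNetworkSpecifier.Builder (the system does not route general traffic over it). A rough, untested pyjnius sketch of that approach, with hypothetical SSID and passphrase values, could look like this:
from jnius import autoclass, cast

# Inner Java classes are referenced with '$' in pyjnius
SpecifierBuilder = autoclass('android.net.wifi.WifiNetworkSpecifier$Builder')
RequestBuilder = autoclass('android.net.NetworkRequest$Builder')
NetworkCapabilities = autoclass('android.net.NetworkCapabilities')
PythonActivity = autoclass('org.kivy.android.PythonActivity')

activity = PythonActivity.mActivity
connectivity = cast('android.net.ConnectivityManager',
                    activity.getSystemService('connectivity'))

# Hypothetical credentials, for illustration only
specifier = SpecifierBuilder() \
    .setSsid('ssid_name') \
    .setWpa2Passphrase('Wifi Password') \
    .build()

request = RequestBuilder() \
    .addTransportType(NetworkCapabilities.TRANSPORT_WIFI) \
    .setNetworkSpecifier(specifier) \
    .build()

# connectivity.requestNetwork(request, callback) would complete the request, but
# the callback must subclass ConnectivityManager.NetworkCallback; pyjnius can only
# implement Java interfaces from Python, so that part usually needs a small Java
# helper class compiled into the app.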

The first few messages are lost when transmitted to mqtt clients that were offline

I have a vernemq server and mqtt clients using the paho mqtt library (Python or C, it doesn't matter). Both subscribers and publishers use QoS 2 and clean_session == False. The problem is that when a subscriber is offline and I send some messages, some of them are lost. After a detailed study of the parameters, I found that the first max_inflight_messages are lost. What I mean: in the config file vernemq.conf I set max_inflight_messages = 20 (the default). The subscriber goes offline, I send 21 messages, the subscriber comes back online, and the first 20 are lost while the 21st is delivered. I tried it many times with different numbers of messages, with the same result: the first 20 messages are lost, and from the 21st onward they are received. With max_inflight_messages = 1, the first message is lost and the others are received. Any ideas? My vernemq.conf:
allow_anonymous = on
allow_register_during_netsplit = off
allow_publish_during_netsplit = off
allow_subscribe_during_netsplit = off
allow_unsubscribe_during_netsplit = off
allow_multiple_sessions = off
coordinate_registrations = on
max_inflight_messages = 20
max_online_messages = 1000
max_offline_messages = 1000
max_message_size = 0
upgrade_outgoing_qos = off
listener.max_connections = 10000
listener.nr_of_acceptors = 10
listener.tcp.default = 0.0.0.0:1883
listener.vmq.clustering = 0.0.0.0:44053
listener.http.default = 0.0.0.0:8888
systree_enabled = on
systree_interval = 20000
graphite_enabled = off
graphite_host = localhost
graphite_port = 2003
graphite_interval = 20000
shared_subscription_policy = prefer_local
plugins.vmq_passwd = off
plugins.vmq_acl = on
plugins.vmq_diversity = off
plugins.vmq_webhooks = off
plugins.vmq_bridge = off
metadata_plugin = vmq_plumtree
vmq_acl.acl_file = ./etc/vmq.acl
vmq_acl.acl_reload_interval = 10
vmq_passwd.password_file = ./etc/vmq.passwd
vmq_passwd.password_reload_interval = 10
vmq_diversity.script_dir = ./share/lua
vmq_diversity.auth_postgres.enabled = off
vmq_diversity.postgres.ssl = off
vmq_diversity.postgres.password_hash_method = crypt
vmq_diversity.auth_cockroachdb.enabled = off
vmq_diversity.cockroachdb.ssl = on
vmq_diversity.cockroachdb.password_hash_method = bcrypt
vmq_diversity.auth_mysql.enabled = off
vmq_diversity.mysql.password_hash_method = password
vmq_diversity.auth_mongodb.enabled = off
vmq_diversity.mongodb.ssl = off
vmq_diversity.auth_redis.enabled = off
vmq_bcrypt.pool_size = 1
log.console = both
log.console.level = debug
log.console.file = ./log/console.log
log.error.file = ./log/error.log
log.syslog = off
log.crash = on
log.crash.file = ./log/crash.log
log.crash.maximum_message_size = 64KB
log.crash.size = 10MB
log.crash.rotation = $D0
log.crash.rotation.keep = 5
nodename = VerneMQ@127.0.0.1
distributed_cookie = vmq
erlang.async_threads = 64
erlang.max_ports = 262144
leveldb.maximum_memory.percent = 70
The problem was in the paho mqtt library usage. When the client connects to the broker, it immediately receives all queued messages, but the handlers for those messages were only assigned when it subscribed to the concrete topic, so the earlier deliveries were dropped.
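In other words, the fix is to register a generic on_message handler before connecting, so the messages the broker redelivers right after CONNACK are handled even though subscribe() has not run yet. A minimal paho-mqtt sketch (hypothetical broker address and topic):
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Handles every incoming PUBLISH, including those queued while we were offline
    print(msg.topic, msg.payload)

# clean_session=False keeps the broker-side session (and its offline queue) alive
client = mqtt.Client(client_id="subscriber-1", clean_session=False)
client.on_message = on_message   # assign BEFORE connect, not only after subscribe
client.connect("localhost", 1883)
client.subscribe("some/topic", qos=2)
client.loop_forever()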

Flume Multiplexing not working

I have configured my Flume agent as below. Somehow, the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header called state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
# Define a source on the agent and connect it to the channels.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
## Define sinks that output to HDFS.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
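For example, a sketch only (it assumes each line of country.txt starts with the two-letter state code; adjust the regex and the hypothetical interceptor names to your actual file layout):
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2})
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state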

Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My cache runs on 2003/2004 and the relay on 2013/2014.
Here are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers would be helpful.
As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of carbon-relay should point at the carbon-cache's pickle receiver port instead.
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify relay-rules.conf with the same destinations as specified in carbon.conf:
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b

Creating JIRA issues with Jira4R

I'm trying to write a small Rails app that will interface with Jira using the Jira4R gem. However, whilst I'm having no problem creating an issue, I'm having real trouble attaching a custom field to an issue.
Any ideas on how I would do this?
At the moment I'm creating the issue like:
issue = Jira4R::V2::RemoteIssue.new
issue.project = "TEST"
issue.summary = params[:issue][:summary]
issue.description = params[:issue][:description]
issue.type = 6
issue = @jira.createIssue(issue)
A custom field can be attached by building a RemoteCustomFieldValue and assigning it to the issue's customFieldValues before calling createIssue:
issue = Jira4R::V2::RemoteIssue.new
issue.project = "TEST"
issue.summary = params[:issue][:summary]
issue.description = params[:issue][:description]
issue.type = 6
c = Jira4R::V2::RemoteCustomFieldValue.new
c.customfieldId = "customfield_10000"
c.values = [current_user.full_name]
issue.customFieldValues = [c]
issue = @jira.createIssue(issue)
