S3cmd not working in Ubuntu - ruby-on-rails

I have installed s3cmd on my machine, which runs Ubuntu 11.10, and when I try to download some data from S3 it gives me the error below. I have also configured s3cmd with my access keys (the .s3cfg file is in my home folder).
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the following lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Problem: KeyError: 'content-length'
S3cmd: 1.0.0
Traceback (most recent call last):
File "/usr/bin/s3cmd", line 2006, in <module>
main()
File "/usr/bin/s3cmd", line 1950, in main
cmd_func(args)
File "/usr/bin/s3cmd", line 513, in cmd_object_get
response = s3.object_get(uri, dst_stream, start_position = start_position, extra_label = seq_label)
File "/usr/share/s3cmd/S3/S3.py", line 285, in object_get
response = self.recv_file(request, stream, labels, start_position)
File "/usr/share/s3cmd/S3/S3.py", line 691, in recv_file
size_left = int(response["headers"]["content-length"])
KeyError: 'content-length'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the above lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Please try a newer version of s3cmd, such as 1.5.0-alpha2, released last night on the s3tools project on SourceForge or on GitHub. In this specific case, you are trying to download a 0-length file, which triggers this bug.
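For reference, the crash comes from recv_file assuming a Content-Length header is always present; here is a minimal sketch of a defensive guard (illustrative only, not the actual patch shipped in 1.5.0-alpha2):
# In S3/S3.py, recv_file() does roughly this and raises KeyError when the
# header is missing, which happens for a 0-byte object:
#     size_left = int(response["headers"]["content-length"])
# A guarded version falls back to 0 instead of raising:
size_left = int(response["headers"].get("content-length", 0))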

Related

Why does my Python Dataflow job get stuck at the Write phase?

I wrote a Python Dataflow job which managed to process 300 files; unfortunately, when I try to run it on 400 files it gets stuck in the Write phase forever.
The logs aren't really helpful, but I think the issue comes from the writing logic of the code. Initially, I only wanted one output file, so I wrote:
| 'Write' >> beam.io.WriteToText(
    known_args.output,
    file_name_suffix=".json",
    num_shards=1,
    shard_name_template=""
))
Then I removed num_shards=1 and shard_name_template="", and I was able to process more files, but the job would still get stuck.
Extra Information
The files to process are small, less than 1 MB each.
Also, when I removed the num_shards and shard_name_template fields, I noticed that the data got written to a temporary folder in the target path, but the job never finished.
I get the following DEADLINE_EXCEEDED exception. I tried solving it by increasing --num_workers to 6 and --disk_size_gb to 30, but that didn't work.
Error message from worker: Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 638, in do_work
work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 179, in execute
op.start()
File "dataflow_worker/shuffle_operations.py", line 63, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
File "dataflow_worker/shuffle_operations.py", line 64, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
File "dataflow_worker/shuffle_operations.py", line 79, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
File "dataflow_worker/shuffle_operations.py", line 80, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
File "dataflow_worker/shuffle_operations.py", line 82, in dataflow_worker.shuffle_operations.GroupedShuffleReadOperation.start
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 441, in __iter__
for entry in entries_iterator:
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 282, in __next__
return next(self.iterator)
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/shuffle.py", line 240, in __iter__
chunk, next_position = self.reader.Read(start_position, end_position)
File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 133, in shuffle_client.PyShuffleReader.Read
OSError: Shuffle read failed: b'DEADLINE_EXCEEDED: (g)RPC timed out when extract-fields-three-mont-10090801-dlaj-harness-fj4v talking to extract-fields-three-mont-10090801-dlaj-harness-1f7r:12346. Server unresponsive (ping error: Deadline Exceeded, {"created":"@1602260204.931126454","description":"Deadline Exceeded","file":"third_party/grpc/src/core/ext/filters/deadline/deadline_filter.cc","file_line":69,"grpc_status":4}). Typically one can self manage this issue, please read: https://cloud.google.com/dataflow/docs/guides/common-errors#tsg-rpc-timeout'
Can you please recommend ways to troubleshoot this type of issue?
After trying to throw more resources at the job, I got it working: enabling the Dataflow Shuffle service fixed the situation.
Just add --experiments=shuffle_mode=service to your PipelineOptions.
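A minimal sketch of setting that flag programmatically, assuming the job builds its options with PipelineOptions (the project, region, and bucket below are placeholders):
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                  # placeholder project ID
    region="us-central1",                  # placeholder region
    temp_location="gs://my-bucket/tmp",    # placeholder temp bucket
    experiments=["shuffle_mode=service"],  # enable the Dataflow Shuffle service
)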

How to close Appium driver properly in Python?

I am using Python 3.7 with Appium 1.15.1 on a real Android device.
When my script finishes its job, I close the driver with these lines:
if p_driver:
    p_driver.close()
but I get this error output:
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 688, in close
self.execute(Command.CLOSE)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 29, in check_response
raise wde
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 24, in check_response
super(MobileErrorHandler, self).check_response(response)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: An unknown server-side error occurred while processing the command. Original error: Could not proxy. Proxy error: Could not proxy command to remote server. Original error: 404 - undefined
I would like to understand what I am doing wrong.
What is the proper way to close the driver?
Can you help me, please?
Step 1: you should create the Appium session (the driver instance) first:
session_instance = webdriver.Remote(str(url), caps_dic)
where url is your Appium server URL, something like "http://127.0.0.1:4723/wd/hub",
and caps_dic is a dictionary of all your desired capabilities.
Step 2: you call the quit() method on that session:
session_instance.quit()
So the whole snippet is:
session_instance = webdriver.Remote(str(url), caps_dic)
session_instance.quit()
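A more complete, self-contained sketch, assuming the Appium Python client with placeholder capabilities; quit() tears down the whole session, whereas close() only closes the current window and appears to be what triggered the 404 proxy error above:
from appium import webdriver

caps_dic = {
    "platformName": "Android",
    "deviceName": "Android Device",    # placeholder device name
    "appPackage": "com.example.app",   # placeholder app package
    "appActivity": ".MainActivity",    # placeholder launch activity
}

driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps_dic)
try:
    pass  # ... your test steps go here ...
finally:
    # quit() ends the whole Appium session cleanly.
    driver.quit()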

Dataflow jobs fail with: Shuffle close failed: FAILED_PRECONDITION: Precondition check failed

My Dataflow jobs fail with the following error:
INFO:root:2018-10-15T18:55:37.417Z: JOB_MESSAGE_ERROR: Workflow failed.
Causes: S17:fold2/Write/WriteImpl/WindowInto(WindowIntoFn)+write instances fold2/Write/WriteImpl/GroupByKey/Reify+write instances fold2/Write/WriteImpl/GroupByKey/Write failed.,
A work item was attempted 4 times without success.
Each time the worker eventually lost contact with the service. The work item was attempted on:
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-41dd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd
Digging into the logs shows only one error:
An exception was raised when trying to execute the workitem 6479210647275353150 :
Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 642, in do_work work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 158, in execute op.finish()
File "dataflow_worker/shuffle_operations.py", line 144, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish def finish(self):
File "dataflow_worker/shuffle_operations.py", line 145, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish with self.scoped_finish_state:
File "dataflow_worker/shuffle_operations.py", line 147, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish self.writer.__exit__(None, None, None)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 599, in __exit__ self.writer.Close()
File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 202, in shuffle_client.PyShuffleWriter.Close IOError: Shuffle close failed: FAILED_PRECONDITION: Precondition check failed.
Any ideas?
I finally figured out the problem by removing various pieces of code, printing tons of logs, and running the job again. It turned out that I had a regular expression that blew up on one particular entry. Unfortunately, the Dataflow logs were not helpful at all.
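For anyone debugging something similar, one way to surface the offending record is to wrap the suspect parsing in a DoFn that logs and skips bad elements; the sketch below uses illustrative names, not code from the original job:
import logging
import re

import apache_beam as beam

PATTERN = re.compile(r"id=(\d+)")  # stand-in for the problematic expression

class ExtractId(beam.DoFn):
    def process(self, element):
        try:
            match = PATTERN.search(element)
            if match:
                yield match.group(1)
        except Exception:
            # Log and skip the bad record instead of failing the whole bundle.
            logging.exception("Failed to parse element: %r", element)
If the expression hangs through catastrophic backtracking rather than raising, logging each element before matching is what actually exposes the bad entry.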

DataStax OpsCenter 6.0 won't start if HTTPS is enabled

I recently upgraded DSE to 5.X and OpsCenter to 6.0. Everything works great except that OpsCenter fails to start if I enable HTTPS using our own wildcard certificates. I have a .pem file with the cert chain and a .key file with the password removed. The same setup works perfectly with OpsCenter 5.2.4, but with 6.0 I get the error below:
[opscenterd] ERROR: Traceback (most recent call last):
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 111, in setupWebServer
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/WebServer.py", line 108, in makeWebServer
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/SslUtils.py", line 24, in make_ssl_context_factory
File "/usr/share/opscenter/lib/py/twisted/internet/legacy_ssl.py", line 1133, in __init__ self.cacheContext()
File "/usr/share/opscenter/lib/py/twisted/internet/legacy_ssl.py", line 1142, in cacheContext ctx.load_cert_chain(self.certificateFileName, keyfile=self.privateKeyFileName) # Automatically checks against private key
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/ssl.py", line 1035, in load_cert_chain self._key_managers = _get_openssl_key_manager(certfile, keyfile, password, _key_store=self._key_store)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/ssl.py", line 1035, in load_cert_chain self._key_managers = _get_openssl_key_manager(certfile, keyfile, password, _key_store=self._key_store)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 113, in _get_openssl_key_manager _certs, _private_key = _extract_certs_for_paths([cert_file], password)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 211, in _extract_certs_for_paths _certs, _private_key = _extract_cert_from_data(f, password, key_converter, cert_converter)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 230, in _extract_cert_from_data certs, private_key = _read_pem_cert_from_data(f, password, key_converter, cert_converter)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 246, in _read_pem_cert_from_data for br in _extract_readers(f):
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 94, in _extract_readers raise SSLError(SSL_ERROR_SSL, "PEM lib (no start line or not enough data)")
SSLError: [Errno 1] PEM lib (no start line or not enough data)
(MainThread)
[opscenterd] ERROR: There was an error starting the OpsCenterd process: Traceback (most recent call last):
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 49, in startService
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 123, in setupWebServer
NameError: global name 'System' is not defined
(MainThread)
Note that OpsCenter starts if I use the default certificate and key included in the installation.
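For reference, the failing custom-certificate setup is presumably configured along these lines in opscenterd.conf (the option names are assumed from the stock [webserver] section, and the paths are placeholders):
[webserver]
port = 8888
interface = 0.0.0.0
ssl_port = 8443
ssl_certfile = /etc/opscenter/ssl/wildcard-chain.pem
ssl_keyfile = /etc/opscenter/ssl/wildcard.key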
Thanks for your help!
Michael

Running Reddit - Paster Shell errors re: webhelpers.rails deprecated

ubuntu@ubuntu-02:/reddit/r2$ paster serve --reload example.ini http_port=8080
Starting subprocess with file monitor
/usr/local/lib/python2.6/dist-packages/Pylons-0.9.6.2-py2.6.egg/pylons/middleware.py:11: DeprecationWarning: The webhelpers.rails package is deprecated.
- Please begin migrating to the new helpers in webhelpers.html,
webhelpers.text, webhelpers.number, etc.
- Import url_for() directly from routes, and redirect_to() from
pylons.controllers.util (if using Pylons) or from routes.
- All Javascript support has been deprecated. You can write link_to_remote()
yourself or use one of the third-party Javascript libraries.
from webhelpers.rails.asset_tag import javascript_path
/reddit/r2/r2/lib/manager/tp_manager.py:22: DeprecationWarning: the sha module is deprecated; use the hashlib module instead
import pylons, sha
Traceback (most recent call last):
File "/usr/local/bin/paster", line 8, in <module>
load_entry_point('PasteScript==1.7.3', 'console_scripts', 'paster')()
File "/usr/local/lib/python2.6/dist-packages/PasteScript-1.7.3-py2.6.egg/paste/script/command.py", line 84, in run
invoke(command, command_name, options, args[1:])
File "/usr/local/lib/python2.6/dist-packages/PasteScript-1.7.3-py2.6.egg/paste/script/command.py", line 123, in invoke
exit_code = runner.run(args)
File "/usr/local/lib/python2.6/dist-packages/PasteScript-1.7.3-py2.6.egg/paste/script/command.py", line 218, in run
result = self.command()
File "/usr/local/lib/python2.6/dist-packages/PasteScript-1.7.3-py2.6.egg/paste/script/serve.py", line 276, in command
relative_to=base, global_conf=vars)
File "/usr/local/lib/python2.6/dist-packages/PasteScript-1.7.3-py2.6.egg/paste/script/serve.py", line 313, in loadapp
**kw)
File "/usr/local/lib/python2.6/dist-packages/PasteDeploy-1.3.4-py2.6.egg/paste/deploy/loadwsgi.py", line 203, in loadapp
return loadobj(APP, uri, name=name, **kw)
File "/usr/local/lib/python2.6/dist-packages/PasteDeploy-1.3.4-py2.6.egg/paste/deploy/loadwsgi.py", line 224, in loadobj
return context.create()
File "/usr/local/lib/python2.6/dist-packages/PasteDeploy-1.3.4-py2.6.egg/paste/deploy/loadwsgi.py", line 617, in create
return self.object_type.invoke(self)
File "/usr/local/lib/python2.6/dist-packages/PasteDeploy-1.3.4-py2.6.egg/paste/deploy/loadwsgi.py", line 109, in invoke
return fix_call(context.object, context.global_conf, **context.local_conf)
File "/usr/local/lib/python2.6/dist-packages/PasteDeploy-1.3.4-py2.6.egg/paste/deploy/util/fixtypeerror.py", line 57, in fix_call
val = callable(*args, **kw)
File "/reddit/r2/r2/config/middleware.py", line 558, in make_app
load_environment(global_conf, app_conf)
File "/reddit/r2/r2/config/environment.py", line 54, in load_environment
config['pylons.g'] = app_globals.Globals(global_conf, app_conf, paths)
File "/reddit/r2/r2/lib/app_globals.py", line 173, in __init__
self.memcache = CMemcache(self.memcaches, num_clients = num_mc_clients)
File "/reddit/r2/r2/lib/cache.py", line 108, in __init__
client.behaviors.update(behaviors)
File "build/bdist.linux-x86_64/egg/pylibmc.py", line 105, in update
File "build/bdist.linux-x86_64/egg/pylibmc.py", line 172, in set_behaviors
_pylibmc.MemcachedError: memcached_behavior_set returned 45
I'm an absolute noob when it comes to running web services, just started learning Linux, and only know T-SQL and ActionScript 2. So, suffice it to say that I'm a bit out of my depth here.
I know there are issues with various versions of python-webhelpers, and with libmemcached at least, and at this point I'm pretty stuck. I'm not great at Linux, so I'm never sure which version of a program I've got installed, nor am I sure which versions work with what's in the git repository at the moment. What I'd like to do is uninstall libmemcached and webhelpers and reinstall the correct versions. I get the feeling that doing this would require me to redo much of the process, which is fine, provided it works.
Any help on how to resolve this error would be MUCH appreciated. I've gotten a lot of help previously from answered questions on this site, and I'm hoping someone much smarter than me has the answer to this one!
I encountered this exact same problem, and downgrading to libmemcached-0.48 fixed it.
Make sure you rebuild pylibmc after you do this.
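A rough sketch of that downgrade; the build and reinstall steps below are assumptions about a typical source install, not taken from the original answer:
# Build and install libmemcached 0.48 from source (grab the
# libmemcached-0.48.tar.gz tarball from a mirror first).
tar xzf libmemcached-0.48.tar.gz
cd libmemcached-0.48
./configure
make
sudo make install
sudo ldconfig  # refresh the linker cache so the older library is picked up

# Rebuild pylibmc so it links against the downgraded libmemcached.
sudo pip uninstall -y pylibmc
sudo pip install pylibmc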
