I am using Python 3.7 with Appium 1.15.1 on a real Android device.
When my script finishes its job, I close the driver with these lines:
if p_driver:
    p_driver.close()
but I get this error output:
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 688, in close
self.execute(Command.CLOSE)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 29, in check_response
raise wde
File "C:\Users\Nino\AppData\Roaming\Python\Python37\site-packages\appium\webdriver\errorhandler.py", line 24, in check_response
super(MobileErrorHandler, self).check_response(response)
File "C:\Users\Nino\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: An unknown server-side error occurred while processing the command. Original error: Could not proxy. Proxy error: Could not proxy command to remote server. Original error: 404 - undefined
I would like to understand what I am doing wrong. What is the proper way to close the driver?
Can you help me, please?
Step 1: create the Appium session (the driver instance) first:
session_instance = webdriver.Remote(str(url), caps_dic)
where url is your Appium server URL, something like "http://127.0.0.1:4723/wd/hub", and caps_dic is a dictionary of all your desired capabilities.
Step 2: call the quit() method on that session:
session_instance.quit()
So the whole snippet is:
session_instance = webdriver.Remote(str(url), caps_dic)
session_instance.quit()
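For example, here is a minimal sketch (the capability values below are placeholders, not taken from the question) that makes sure the session is torn down even if the script fails midway:

from appium import webdriver

# Placeholder capabilities -- adjust for your own device and app.
caps_dic = {
    'platformName': 'Android',
    'deviceName': 'my_real_device',      # hypothetical name
    'appPackage': 'com.example.app',     # hypothetical package
    'appActivity': '.MainActivity',      # hypothetical activity
}

session_instance = webdriver.Remote('http://127.0.0.1:4723/wd/hub', caps_dic)
try:
    pass  # ... do the actual work here ...
finally:
    session_instance.quit()  # ends the Appium session on the server side

Unlike close(), quit() ends the whole session on the Appium server, which is why it does not trigger the proxy 404.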
I have a Docker container running PySpark, Hadoop, and all the required dependencies. I am using spark-submit to query MinIO, and I want to write the output DataFrame to a file. Reading the file works, but writing does not. If I execute Python in that container and try to create a file at the same path, it works.
Am I missing some Spark configuration?
This is the error I get:
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o38.save
: java.net.ConnectException: Call From 10d3463d04ce/10.0.1.132 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Relevant code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark_context = spark.sparkContext
hadoop_conf = spark_context._jsc.hadoopConfiguration()
hadoop_conf.set('fs.s3a.access.key', 'minio')
hadoop_conf.set('fs.s3a.secret.key', AWS_SECRET_ACCESS_KEY)
hadoop_conf.set('fs.s3a.path.style.access', 'true')
hadoop_conf.set('fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem')
hadoop_conf.set('fs.s3a.endpoint', AWS_S3_ENDPOINT)
hadoop_conf.set('fs.s3a.connection.ssl.enabled', 'false')

df = spark.sql(query)
df.show()  # this works perfectly fine
df.coalesce(1).write.format('json').save(output_path)  # here I get the error
The solution was to prepend file:// to output_path.
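Continuing from the snippet above, a hedged illustration (the directory name is made up): without a scheme, Spark resolves a bare path against fs.defaultFS, which in this container points at the unreachable localhost:9000, so the local filesystem has to be named explicitly.

# Bare path: resolved against fs.defaultFS (HDFS at localhost:9000 here) -> ConnectException
# df.coalesce(1).write.format('json').save('/data/output')

# Explicit file:// scheme: written to the container's local filesystem
df.coalesce(1).write.format('json').save('file:///data/output')  # '/data/output' is a made-up path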
I recently purchased a used dev board to start working on. I managed to flash it with Mendel 5 (Eagle), enabled SSH, and followed the Get Started instructions without any issue until I tried the demo.
I can't get it to work at all!
If I run edgetpu_demo --stream, I get:
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Press 'q' to quit.
Press 'n' to switch between models.
Traceback (most recent call last):
File "/usr/bin/edgetpu_detect_server", line 11, in <module>
load_entry_point('edgetpuvision==1.0', 'console_scripts', 'edgetpu_detect_server')()
File "/usr/lib/python3/dist-packages/edgetpuvision/detect_server.py", line 33, in main
run_server(add_render_gen_args, render_gen)
File "/usr/lib/python3/dist-packages/edgetpuvision/apps.py", line 43, in run_server
camera = make_camera(args.source, next(gen), args.loop)
File "/usr/lib/python3/dist-packages/edgetpuvision/detect.py", line 144, in render_gen
engines, titles = utils.make_engines(args.model, DetectionEngine)
File "/usr/lib/python3/dist-packages/edgetpuvision/utils.py", line 53, in make_engines
engine = engine_class(model_path)
File "/usr/lib/python3/dist-packages/edgetpu/detection/engine.py", line 80, in __init__
super().__init__(model_path)
File "/usr/lib/python3/dist-packages/edgetpu/basic/basic_engine.py", line 92, in __init__
self._engine = BasicEnginePythonWrapper.CreateFromFile(model_path)
RuntimeError: No Edge TPU device detected!
Running it as root gives me:
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Press 'q' to quit.
Press 'n' to switch between models.
job-working-directory: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused
Unable to init server: Could not connect: Connection refused
(edgetpu_detect_server:7283): Gtk-WARNING **: 02:24:31.914: cannot open display:
That seems closer, but I have no idea how to fix either one. I've searched as much as I can (added the user to plugdev, made sure everything was installed correctly).
Any ideas? I'm pretty sure that if this doesn't work, something is fairly broken.
Thanks for reaching out; this is definitely not a good sign.
RuntimeError: No Edge TPU device detected!
It sounds like the board somehow lost access to the Edge TPU. Could you send us an email at coral-support@google.com about this issue?
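If you want to confirm from Python whether the runtime can see the device at all, here is a rough sketch; it assumes the legacy edgetpu package that ships with Mendel, and the function and constant names are from memory, so treat it as a pointer rather than gospel:

# Assumption: the legacy `edgetpu` Python API available on Mendel images.
from edgetpu.basic import edgetpu_utils

# List device paths the runtime considers available (not yet assigned to an engine).
paths = edgetpu_utils.ListEdgeTpuPaths(edgetpu_utils.EDGE_TPU_STATE_UNASSIGNED)
print(paths)  # an empty tuple is consistent with "No Edge TPU device detected!"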
My Dataflow jobs fail with the following error:
INFO:root:2018-10-15T18:55:37.417Z: JOB_MESSAGE_ERROR: Workflow failed.
Causes: S17:fold2/Write/WriteImpl/WindowInto(WindowIntoFn)+write instances fold2/Write/WriteImpl/GroupByKey/Reify+write instances fold2/Write/WriteImpl/GroupByKey/Write failed.,
A work item was attempted 4 times without success.
Each time the worker eventually lost contact with the service. The work item was attempted on:
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd,
yuri-nine-gag-recommender-10151140-3kmq-harness-41dd,
yuri-nine-gag-recommender-10151140-3kmq-harness-mdgd
Digging into the logs shows only one error:
An exception was raised when trying to execute the workitem 6479210647275353150:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 642, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 158, in execute
op.finish()
File "dataflow_worker/shuffle_operations.py", line 144, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
def finish(self):
File "dataflow_worker/shuffle_operations.py", line 145, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
with self.scoped_finish_state:
File "dataflow_worker/shuffle_operations.py", line 147, in dataflow_worker.shuffle_operations.ShuffleWriteOperation.finish
self.writer.__exit__(None, None, None)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/shuffle.py", line 599, in __exit__
self.writer.Close()
File "third_party/windmill/shuffle/python/shuffle_client.pyx", line 202, in shuffle_client.PyShuffleWriter.Close
IOError: Shuffle close failed: FAILED_PRECONDITION: Precondition check failed.
Any ideas?
I finally figured out the problem by removing various pieces of code, printing tons of logs, and running the job again. It turned out that I had a regular expression that blew up for one particular entry. Unfortunately, the Dataflow logs were not helpful at all.
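For anyone trying to localize a similar failure, here is a hedged sketch (the step name and pattern are made up, this is not the code from my job) of wrapping the regex step so the offending element gets logged before the worker dies:

import logging
import re

PATTERN = re.compile(r'...')  # stand-in for the problematic expression

def safe_extract(element):
    # Log first: if the regex hangs instead of raising, the last logged
    # element identifies the input that killed the worker.
    logging.debug('Applying regex to element: %r', element)
    try:
        match = PATTERN.search(element)
        return match.group(0) if match else ''
    except Exception:
        logging.error('Regex failed on element: %r', element)
        raise

# Hypothetical usage inside the pipeline:
# records | 'ExtractField' >> beam.Map(safe_extract)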
When I run this script as the jenkins user (Linux Mint) I get this error, but when running it as my own user it works. The jenkins user was created by the Jenkins service. I have virtualenv installed.
import unittest

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait

DRIVER = None

def getOrCreateWebdriver():
    global DRIVER
    DRIVER = DRIVER or webdriver.Firefox()
    return DRIVER

class LoginTest(unittest.TestCase):
    def setUp(self):
        self.browser = getOrCreateWebdriver()

    def test_Loggin(self):
        browser = self.browser

    def tearDown(self):
        self.browser.close()

if __name__ == '__main__':
    unittest.main(verbosity=2)
When I run this script as the jenkins user, I get this error:
test_Loggin (__main__.LoginTest) ... ERROR
/usr/lib/python3.4/unittest/case.py:602: ResourceWarning: unclosed file <_io.BufferedWriter name='/dev/null'>
outcome.errors.clear()
======================================================================
ERROR: test_Loggin (__main__.LoginTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "Test.py", line 16, in setUp
self.browser = getOrCreateWebdriver()
File "Test.py", line 10, in getOrCreateWebdriver
DRIVER = DRIVER or webdriver.Firefox()
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/webdriver.py", line 64, in __init__
self.binary, timeout),
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 51, in __init__
self.binary.launch_browser(self.profile)
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 70, in launch_browser
self._wait_until_connectable()
File "/usr/local/lib/python3.4/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 100, in _wait_until_connectable
raise WebDriverException("The browser appears to have exited "
selenium.common.exceptions.WebDriverException: Message: The browser appears to have exited before we could connect. If you specified a log_file in the FirefoxBinary constructor, check it for details.
When you're logged in as yourself, run echo $DISPLAY and note the display info it prints, and run xhost + so that other local users are allowed to connect to your X server. Then, when you log in as the jenkins service user, run DISPLAY=[display-info]; export DISPLAY (display-info is what you got from echo $DISPLAY; ignore the square brackets, they shouldn't be included in the command).
Hopefully this should work. I don't have a similar env to test; I'm just mentioning what I recollect from having done this quite some time back.
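An alternative that avoids sharing your own display, in case it helps: run the tests against a virtual X display. This is only a sketch, assuming Xvfb and the pyvirtualdisplay package are installed on the Jenkins box (neither is part of the original setup):

from pyvirtualdisplay import Display
from selenium import webdriver

display = Display(visible=0, size=(1024, 768))  # starts an Xvfb virtual display
display.start()
driver = None
try:
    driver = webdriver.Firefox()
    # ... run the tests ...
finally:
    if driver:
        driver.quit()
    display.stop()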
I have installed s3cmd on my machine, which is Ubuntu 11.10, and when I try to download some data from S3 it gives me this error. I have also configured s3cmd with the access keys I have (the .s3cfg file is in my home folder).
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the following lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Problem: KeyError: 'content-length'
S3cmd: 1.0.0
Traceback (most recent call last):
File "/usr/bin/s3cmd", line 2006, in <module>
main()
File "/usr/bin/s3cmd", line 1950, in main
cmd_func(args)
File "/usr/bin/s3cmd", line 513, in cmd_object_get
response = s3.object_get(uri, dst_stream, start_position = start_position, extra_label = seq_label)
File "/usr/share/s3cmd/S3/S3.py", line 285, in object_get
response = self.recv_file(request, stream, labels, start_position)
File "/usr/share/s3cmd/S3/S3.py", line 691, in recv_file
size_left = int(response["headers"]["content-length"])
KeyError: 'content-length'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the above lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please try a newer version of s3cmd, such as 1.5.0-alpha2, released last night on the s3tools project on SourceForge or on GitHub. In this specific case, you are trying to download a 0-length file, which triggers this bug.
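To illustrate why a 0-length object trips it up: the failing line in the traceback assumes the content-length header is always present. A defensive variant (a sketch of the idea, not the actual s3cmd patch) would fall back to 0:

def content_length(response):
    # `response` mimics the dict s3cmd builds: {"headers": {...}, ...}
    # 1.0.0 behaviour, which raises KeyError when the header is missing:
    #   return int(response["headers"]["content-length"])
    # Defensive variant:
    return int(response["headers"].get("content-length", 0))

# A 0-byte object whose response carries no content-length header:
print(content_length({"headers": {}}))  # -> 0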