In Spyder I can run the script without errors:
filename = '$tmp$.tmp'
# create binary file with all bytes
with open(filename, 'wb') as f:
    f.write(bytearray(list(range(255))))
# read bytes back
with open(filename, 'rb') as f:
    buffer = f.read()
print(buffer)
But during debugging, an exception is thrown after executing the line "buffer = f.read()":
...
File "C:\Miniconda3\lib\site-packages\spyder\widgets\variableexplorer\utils.py", line 458, in value_to_display
display = display[:70].rstrip() + ' ...'
TypeError: can't concat str to bytes
No error during debugging occurs with:
buffer=bytearray(list(range(255)))
print(buffer)
I use conda 4.5.0 and spyder 3.2.8 py36_0. Before a new debugging session, the buffer variable must be deleted or reassigned to a number or a string in the console.
Is there a workaround?
(Spyder maintainer here) This is a bug. I created an issue for it here.
We'll try to fix it in our next release (Spyder 3.3).
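Until that release is out, the practical workaround is what you already found: keep the value as a bytearray (or delete/reassign the variable in the console) so the Variable Explorer does not hit the faulty code path while debugging. For reference, the failure mode from the traceback is easy to reproduce on its own; the sketch below only illustrates the bug and a type-aware truncation, it is not Spyder's actual patch:
# Slicing a bytes object yields bytes, so appending the str suffix ' ...'
# raises TypeError in Python 3, exactly as in value_to_display above.
value = bytes(range(255))
try:
    display = value[:70].rstrip() + ' ...'
except TypeError as exc:
    print(exc)  # can't concat str to bytes

# A type-aware truncation that always works on a str representation
# (illustrative sketch only):
def truncate_display(value, limit=70, suffix=' ...'):
    text = value if isinstance(value, str) else repr(value)
    return text if len(text) <= limit else text[:limit].rstrip() + suffix

print(truncate_display(value))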
I have a Docker container running PySpark, Hadoop and all the required dependencies. I am using spark-submit to query MinIO, and I want to write the output DataFrame to a file. Reading the file works but writing does not. If I run Python in that container and try to create a file at the same path, it works.
Am I missing some spark configuration?
This is the error I get:
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o38.save
: java.net.ConnectException: Call From 10d3463d04ce/10.0.1.132 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Relevant code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark_context = spark.sparkContext
spark_context._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'minio')
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.secret.key', AWS_SECRET_ACCESS_KEY
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.path.style.access', 'true')
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.endpoint', AWS_S3_ENDPOINT)
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.connection.ssl.enabled', 'false'
)
df = spark.sql(query)
df.show()  # this works perfectly fine
df.coalesce(1).write.format('json').save(output_path)  # here I get the error
The solution was to prepend file:// to output_path.
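In other words, without a scheme Spark resolves the path against fs.defaultFS (here HDFS on localhost:9000, which is not running in the container), hence the ConnectException. A minimal sketch, assuming output_path is a plain path inside the container (the path itself is hypothetical):
output_path = '/data/output/result'          # hypothetical local path
local_output_path = 'file://' + output_path  # becomes file:///data/output/result
df.coalesce(1).write.format('json').save(local_output_path)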
I'm trying to run a Dask job on a YARN cluster. This job reads from and writes to HDFS using the hdfs3 library.
When I run it on a cluster without a Kerberos security layer, it runs fine.
But on a cluster with a Kerberos security layer, I had to implement the solution here to avoid Kerberos-related errors. Running the same job led to the following error:
File "/fsstreamdevl/f6_development/acoustics/acoustics_analysis_dask/acoustics_analytics/task_runner/task_runner.py", line 123, in run
dask.compute(tasks)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/dask/base.py", line 446, in compute
results = schedule(dsk, keys, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 2568, in get
results = self.gather(packed, asynchronous=asynchronous, direct=direct)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1822, in gather
asynchronous=asynchronous,
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 753, in sync
return sync(self.loop, func, *args, **kwargs)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 735, in run
value = future.result()
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/tornado/gen.py", line 742, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/distributed/client.py", line 1653, in _gather
six.reraise(type(exception), exception, traceback)
File "/anaconda_env/projects/f6acoustics/dev/dask_yarn_test/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
distributed.scheduler.KilledWorker: ('__call__-6af7aa29-2a09-45f3-a5e2-207c06562672', <Worker 'tcp://10.194.211.132:11927', memory: 0, processing: 1>)
Strangely enough, when I run the same solution on the former cluster without a Kerberos security layer, I get the same error.
Looking at the YARN application logs, I see the following traceback, but cannot tell what it means.
distributed.nanny - INFO - Closing Nanny at 'tcp://10.194.211.133:17659'
Traceback (most recent call last):
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/queues.py", line 242, in _feed
send_bytes(obj)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
self._send(header + buf)
File "/opt/hadoop/data/05/hadoop/yarn/local/usercache/hdfsf6/appcache/application_1560931326013_171773/container_e47_1560931326013_171773_01_000003/environment/lib/python3.7/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
End of LogType:dask.worker.log
I do not see any explicit messages in the logs about low memory. Would anyone know how to diagnose this issue?
hdfs3 is not actively maintained any more. You have two main choices for interacting with HDFS:
- pyarrow's HDFS driver (via the libhdfs JNI library), which requires you to have the Java and Hadoop requirements correctly set up and available to the session calling it
- WebHDFS, such as in fsspec, which does not need Java libraries and can interact with Kerberos if HTTP authentication is allowed on your system.
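A rough sketch of both options; the host names, ports and Kerberos flags below are placeholders, and the exact API depends on your pyarrow/fsspec versions:
# Option 1: pyarrow's libhdfs-based driver (needs Java and the Hadoop client
# libraries visible to the calling session, e.g. via HADOOP_HOME/CLASSPATH).
import pyarrow.fs as pafs
hdfs = pafs.HadoopFileSystem(host='namenode.example.com', port=8020)  # hypothetical host

# Option 2: WebHDFS via fsspec (no Java needed; can authenticate with
# Kerberos/SPNEGO if HTTP authentication is enabled on the cluster).
import fsspec
fs = fsspec.filesystem('webhdfs', host='namenode.example.com', port=9870,
                       kerberos=True)
print(fs.ls('/'))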
I'm very new to Lua development and file manipulation, and I'm now trying to import the LuaSocket package into my project according to this post, but I can't even run the code below.
I guess the error message indicates I need to import not only socket.lua but also .\socket\core (probably a .dll, since there is no core.lua), while a reply to that post suggested importing only the one file.
I'm stuck at the very beginning... What do I have to do for the next step?
local function main()
local socket = require("socket")
end
main()
Exception in thread "main" com.naef.jnlua.LuaRuntimeException: ...n32.win32.x86_64\workspace\TestForCiv\src\socket.lua:13: module 'socket.core' not found:
no field package.preload['socket.core']
no file '.\socket\core.lua'
no file 'C:\Program Files\Java\jre1.8.0_151\bin\lua\socket\core.lua'
no file 'C:\Program Files\Java\jre1.8.0_151\bin\lua\socket\core\init.lua'
...(a bunch of no file errors continues)
Edit: I added the folder structure. Even if I add the .dll file, it returns the same error.
I don't know the details of your configuration, but try this:
require ("src.socket")
You should require the module relative to the root path of the library.
I am trying to compile the hello world program from the ROS tutorials for the BeagleBone Black using bitbake. I am using an Ubuntu PC and have set up the workspace as described in the user manual provided in the vmayoral GitHub link.
I have modified the local.conf file in the /build/conf folder, and its contents look like this:
DL_DIR = "${OEBASE}/sources"
BBFILES = "${OEBASE}/openembedded/recipes/*/*.bb"
ASSUME_PROVIDED += "help2man-native"
PREFERRED_PROVIDERS = "virtual/qte:qte virtual/libqpe:libqpe-opie"
PREFERRED_PROVIDERS += " virtual/libsdl:libsdl-x11"
PREFERRED_PROVIDERS += " virtual/${TARGET_PREFIX}gcc-initial:gcc-cross-initial"
PREFERRED_PROVIDERS += " virtual/${TARGET_PREFIX}gcc-intermediate:gcc-cross-intermediate"
PREFERRED_PROVIDERS += " virtual/${TARGET_PREFIX}gcc:gcc-cross"
PREFERRED_PROVIDERS += " virtual/${TARGET_PREFIX}g++:gcc-cross"
MACHINE = "beaglebone"
DISTRO = "angstrom-2008.1"
IMAGE_FSTYPES = "jffs2 tar"
BBINCLUDELOGS = "yes"
The bitbake recipe is as below:
DESCRIPTION = "Beginner_tutorials, talker/listener ROS package"
SECTION = "devel"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://package.xml;beginline=16;endline=16;md5=05c8b019cf5b0834bc5e547a14f26ca3"
DEPENDS = "roscpp catkin rospy std-msgs"
RDEPENDS = "roscpp rospy std-msgs"
SRC_URI = "git://github.com/vmayoral/beginner_tutorials.git"
SRCREV = "${AUTOREV}"
PV = "1.0.0+gitr${SRCPV}"
S = "${WORKDIR}/git"
inherit catkin
When I run bitbake test.bb from the oe/build folder, I get the following error:
ERROR: Traceback (most recent call last):
  File "/home/srijit/oe/bitbake/lib/bb/cookerdata.py", line 175, in wrapped
    return func(fn, *args)
  File "/home/srijit/oe/bitbake/lib/bb/cookerdata.py", line 185, in parse_config_file
    return bb.parse.handle(fn, data, include)
  File "/home/srijit/oe/bitbake/lib/bb/parse/__init__.py", line 107, in handle
    return h['handle'](fn, data, include)
  File "/home/srijit/oe/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 145, in handle
    feeder(lineno, s, abs_fn, statements)
  File "/home/srijit/oe/bitbake/lib/bb/parse/parse_py/ConfHandler.py", line 182, in feeder
    raise ParseError("unparsed line: '%s'" % s, fn, lineno);
ParseError: ParseError at /home/srijit/oe/openembedded/conf/bitbake.conf:377: unparsed line: 'IMAGE_EXTRA_SPACE = 10240'
ERROR: Unable to parse conf/bitbake.conf: ParseError at /home/srijit/oe/openembedded/conf/bitbake.conf:377: unparsed line: 'IMAGE_EXTRA_SPACE = 10240'
I don't know what to do.
Thanks for the help in advance.
As I did more searching on Google, I found here that we can't use the latest bitbake with OpenEmbedded-Classic. So I tried with bitbake 1.10 and this error went away, but now I have a new error:
Unknown Event: <bb.event.NoProvider instance at 0x7f05e40ee248>
ERROR: Nothing PROVIDES 'mobile-unit.bb'
Command execution failed: Traceback (most recent call last):
File "/home/srijit/oe/bitbake/lib/bb/command.py", line 88, in runAsyncCommand commandmethod(self.cmds_async, self, options)
File "/home/srijit/oe/bitbake/lib/bb/command.py", line 174, in buildTargets command.cooker.buildTargets(pkgs_to_build, task)
File "/home/srijit/oe/bitbake/lib/bb/cooker.py", line 782, in buildTargets
taskdata.add_provider(localdata, self.status, k)
File "/home/srijit/oe/bitbake/lib/bb/taskdata.py", line 354, in add_provider
self.add_provider_internal(cfgData, dataCache, item)
File "/home/srijit/oe/bitbake/lib/bb/taskdata.py", line 383, in add_provider_internal
raise bb.providers.NoProvider(item)
NoProvider: mobile-unit.bb
Finally I solved the issue; I thought it would be helpful for somebody else. I think the main issue was my lack of understanding of the meta-ros layer and how it works, and the overall (mis)direction in installing ROS on the BBB. I was trying to compile beagle-ros against the Angstrom distribution that came with the BBB. That was the problem.
Instead, I downloaded the latest Angstrom distribution source on my Ubuntu PC and compiled it for the BBB as described here, with a few tweaks here and there.
Then flash that Angstrom distribution to an SD card and boot the BBB from it.
Then follow the instructions here to compile the beagle-ros layer and ROS packages, using the same bitbake setup you used for Angstrom, as discussed here and here.
Finally, copy the compiled .ipk files to the BBB, install them using opkg, and you can then run them on the BBB.
I have installed s3cmd on my machine, which is Ubuntu 11.10, and when I try to download some data from S3 it gives me this error. I have also configured s3cmd with my access keys (the .s3cfg file is in my home folder).
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the following lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Problem: KeyError: 'content-length'
S3cmd: 1.0.0
Traceback (most recent call last):
File "/usr/bin/s3cmd", line 2006, in <module>
main()
File "/usr/bin/s3cmd", line 1950, in main
cmd_func(args)
File "/usr/bin/s3cmd", line 513, in cmd_object_get
response = s3.object_get(uri, dst_stream, start_position = start_position, extra_label = seq_label)
File "/usr/share/s3cmd/S3/S3.py", line 285, in object_get
response = self.recv_file(request, stream, labels, start_position)
File "/usr/share/s3cmd/S3/S3.py", line 691, in recv_file
size_left = int(response["headers"]["content-length"])
KeyError: 'content-length'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
An unexpected error has occurred.
Please report the above lines to:
s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Please try a newer version of s3cmd, such as 1.5.0-alpha2, released last night on the s3tools project on SourceForge or on GitHub. In this specific case, you are trying to download a 0-length file, which triggers this bug.
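For context, the traceback shows the crash is an unconditional lookup of the Content-Length header in recv_file, and that header is evidently missing from the response for a 0-length object. A sketch of the failure mode and a defensive lookup (illustrative only, not the actual upstream patch):
# Hypothetical response dict for a 0-length object: no Content-Length header.
response = {"headers": {}}
try:
    size_left = int(response["headers"]["content-length"])  # what s3cmd 1.0.0 does
except KeyError:
    size_left = 0

# Defensive variant: default to 0 when the header is absent.
size_left = int(response["headers"].get("content-length", 0))
print(size_left)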