I have a working Plotly Dash project that generates PNG files when I run it in my local environment. I have now set it up in a Docker container, and generating PNGs suddenly no longer works.
Error:
"File "C:\venv\lib\site-packages\plotly\basedatatypes.py", line 3821, in write_image
return pio.write_image(self, *args, **kwargs)
File "C:\venv\lib\site-packages\plotly\io\_kaleido.py", line 268, in write_image
img_data = to_image(
File "C:\venv\lib\site-packages\plotly\io\_kaleido.py", line 145, in to_image
img_bytes = scope.transform(
File "C:\venv\lib\site-packages\kaleido\scopes\plotly.py", line 153, in transform
response = self._perform_transform(
File "C:\venv\lib\site-packages\kaleido\scopes\base.py", line 293, in _perform_transform
self._ensure_kaleido()
File "C:\venv\lib\site-packages\kaleido\scopes\base.py", line 198, in _ensure_kaleido
raise ValueError(message)
ValueError: Failed to start Kaleido subprocess"
Docker environment used:
Windows Server Core base image from Microsoft, "mcr.microsoft.com/windows/servercore:ltsc2019"
python 3.8.6
kaleido 0.2.1
plotly 5.3.1
dash 2.0.0
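For reference, the failing call can be reproduced inside the container with a minimal script like the following (a sketch, not the actual project code; it only assumes plotly and kaleido are installed):
import plotly.graph_objects as go
# Any write_image() call goes through Kaleido, so this is enough to trigger the error
fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[3, 1, 2]))
fig.write_image("test.png", engine="kaleido")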
Is there a way to solve this?
I'm trying to run a Python script with ROS 2 in my Docker container. Everything up to running the script works; I can even run Gazebo via a launch file, and it works.
The error ROS gives me is the following:
root@86d8bf3a6eb9:/# ros2 run field_robot robot_spawner.py
Traceback (most recent call last):
File "/opt/ros/foxy/bin/ros2", line 11, in <module>
load_entry_point('ros2cli==0.9.11', 'console_scripts', 'ros2')()
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2cli/cli.py", line 67, in main
rc = extension.main(parser=parser, args=args)
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2run/command/run.py", line 70, in main
return run_executable(path=path, argv=args.argv, prefix=prefix)
File "/opt/ros/foxy/lib/python3.8/site-packages/ros2run/api/__init__.py", line 61, in run_executable
process = subprocess.Popen(cmd)
File "/usr/lib/python3.8/subprocess.py", line 858, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.8/subprocess.py", line 1704, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py'
And yes, I checked: the file actually exists:
root@86d8bf3a6eb9:/# ls -l /field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py
-rwxr-xr-x 1 root root 1964 Apr 12 14:37 /field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py
Also, I'm running the host system on Windows, so it could be that something Windows-related is messed up. If you have an idea of what the problem could be there, that might also be it.
Based on the comments, it appears you're running into this issue because of the file type. If the files are being edited on Windows first, they are likely DOS (CRLF) files rather than UNIX (LF) files. I know this causes issues with ROS 1, so I assume it's the case in ROS 2 as well. To fix this, you have a couple of options.
Usually the easiest is to use dos2unix. It isn't installed by default, but you can get it via apt install dos2unix, assuming your image is Ubuntu-based. The files can then be converted by running dos2unix <filename> inside your container; a manual alternative is sketched right after this paragraph.
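If installing packages isn't an option, a rough Python equivalent of dos2unix looks like this (an illustration only; the path is the one from your error message, and it assumes the file fits in memory):
path = "/field_robot/dev_ws/install/field_robot/lib/field_robot/robot_spawner.py"
with open(path, "rb") as f:
    data = f.read()
# Replace Windows CRLF line endings with UNIX LF so the shebang line is parsed correctly
with open(path, "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))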
I have set up an MLflow server locally at http://localhost:5000
I followed the instructions at https://github.com/mlflow/mlflow/tree/master/examples/docker and tried to run the Docker example with
/mlflow/examples/docker$ mlflow run . -P alpha=0.5
but I encountered the following error.
2021/05/09 17:11:20 INFO mlflow.projects.docker: === Building docker image docker-example:7530274 ===
2021/05/09 17:11:20 INFO mlflow.projects.utils: === Created directory /tmp/tmp9wpxyzd_ for downloading remote URIs passed to arguments of type 'path' ===
2021/05/09 17:11:20 INFO mlflow.projects.backend.local: === Running command 'docker run --rm -v /home/mlf/mlf/0/ae69145133bf49efac22b1d390c354f1/artifacts:/home/mlf/mlf/0/ae69145133bf49efac22b1d390c354f1/artifacts -e MLFLOW_RUN_ID=ae69145133bf49efac22b1d390c354f1 -e MLFLOW_TRACKING_URI=http://localhost:5000 -e MLFLOW_EXPERIMENT_ID=0 docker-example:7530274 python train.py --alpha 0.5 --l1-ratio 0.1' in run with ID 'ae69145133bf49efac22b1d390c354f1' ===
/opt/conda/lib/python2.7/site-packages/mlflow/__init__.py:55: DeprecationWarning: MLflow support for Python 2 is deprecated and will be dropped in a future release. At that point, existing Python 2 workflows that use MLflow will continue to work without modification, but Python 2 users will no longer get access to the latest MLflow features and bugfixes. We recommend that you upgrade to Python 3 - see https://docs.python.org/3/howto/pyporting.html for a migration guide.
"for a migration guide.", DeprecationWarning)
Traceback (most recent call last):
File "train.py", line 56, in <module>
with mlflow.start_run():
File "/opt/conda/lib/python2.7/site-packages/mlflow/tracking/fluent.py", line 122, in start_run
active_run_obj = MlflowClient().get_run(existing_run_id)
File "/opt/conda/lib/python2.7/site-packages/mlflow/tracking/client.py", line 96, in get_run
return self._tracking_client.get_run(run_id)
File "/opt/conda/lib/python2.7/site-packages/mlflow/tracking/_tracking_service/client.py", line 49, in get_run
return self.store.get_run(run_id)
File "/opt/conda/lib/python2.7/site-packages/mlflow/store/tracking/rest_store.py", line 92, in get_run
response_proto = self._call_endpoint(GetRun, req_body)
File "/opt/conda/lib/python2.7/site-packages/mlflow/store/tracking/rest_store.py", line 32, in _call_endpoint
return call_endpoint(self.get_host_creds(), endpoint, method, json_body, response_proto)
File "/opt/conda/lib/python2.7/site-packages/mlflow/utils/rest_utils.py", line 133, in call_endpoint
host_creds=host_creds, endpoint=endpoint, method=method, params=json_body)
File "/opt/conda/lib/python2.7/site-packages/mlflow/utils/rest_utils.py", line 70, in http_request
url=url, headers=headers, verify=verify, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/mlflow/utils/rest_utils.py", line 51, in request_with_ratelimit_retries
response = requests.request(**kwargs)
File "/opt/conda/lib/python2.7/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=5000): Max retries exceeded with url: /api/2.0/mlflow/runs/get?run_uuid=ae69145133bf49efac22b1d390c354f1&run_id=ae69145133bf49efac22b1d390c354f1 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f5cbd80d690>: Failed to establish a new connection: [Errno 111] Connection refused',))
2021/05/09 17:11:22 ERROR mlflow.cli: === Run (ID 'ae69145133bf49efac22b1d390c354f1') failed ===
Any ideas how to fix this? I tried adding the following to the MLproject file, but it doesn't help:
environment: [["network", "host"], ["add-host", "host.docker.internal:host-gateway"]]
Thanks for your help! =)
Run the MLflow server in such a way that it uses your machine's IP instead of localhost, then point mlflow run to that IP instead of http://localhost:5000. The main reason is that localhost inside the Docker process refers to the container itself, not to your machine; see the sketch below.
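As an illustration (a sketch, not the exact setup from the question; the IP address is a placeholder for your machine's LAN IP), the tracking URI can be pointed at that IP from Python, or exported as MLFLOW_TRACKING_URI before calling mlflow run:
import mlflow
# Placeholder address: replace 192.168.1.10 with the IP of the machine running the MLflow server
mlflow.set_tracking_uri("http://192.168.1.10:5000")
with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)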
I am trying to build the Dart VM on Windows, following the steps described here:
https://github.com/dart-lang/sdk/wiki/Building
When I run the build.py command as below:
.\tools\build.py --mode release --arch x64 create_sdk
I get the following error:
gn gen --check in out\ReleaseX64
Traceback (most recent call last):
File "D:\ops\dart\sdk\tools\gn.py", line 436, in <module>
sys.exit(main(sys.argv))
File "D:\ops\dart\sdk\tools\gn.py", line 423, in main
results = pool.map(run_command, commands, chunksize=1)
File "C:\app\Python27\lib\multiprocessing\pool.py", line 251, in map
return self.map_async(func, iterable, chunksize).get()
File "C:\app\Python27\lib\multiprocessing\pool.py", line 567, in get
raise self._value
WindowsError: [Error 2] The system cannot find the file specified.
Tried to run GN, but it failed. Try running it manually:
$ python D:\ops\dart\sdk\tools\gn.py -m release -a x64 --os host -v
Traceback (most recent call last):
File "D:\ops\dart\sdk\tools\build.py", line 658, in <module>
sys.exit(Main())
File "D:\ops\dart\sdk\tools\build.py", line 651, in Main
mode, arch, cross_build) != 0:
File "D:\ops\dart\sdk\tools\build.py", line 491, in BuildOneConfig
args = BuildNinjaCommand(options, target, target_os, mode, arch)
File "D:\ops\dart\sdk\tools\build.py", line 473, in BuildNinjaCommand
if UseGoma(out_dir):
File "D:\ops\dart\sdk\tools\build.py", line 431, in UseGoma
return 'use_goma = true' in open(args_gn, 'r').read()
IOError: [Errno 2] No such file or directory: 'out\\ReleaseX64\\args.gn'
It seems the args.gn file is missing from the out\ReleaseX64 folder, but I cannot find args.gn anywhere in the Dart source folders. Is it generated during the build process, or did I do something wrong that led to it not being generated?
When running IPython Notebook on Windows 7 64-bit and launching a notebook with the Python 2 kernel, I get an error:
Traceback (most recent call last):
File "C:\Users\USER1\Anaconda2\lib\site-packages\notebook\base\handlers.py", line 436, in wrapper
result = yield gen.maybe_future(method(self, *args, **kwargs))
File "C:\Users\USER1\Anaconda2\lib\site-packages\notebook\services\sessions\handlers.py", line 56, in post
model = sm.create_session(path=path, kernel_name=kernel_name)
File "C:\Users\USER1\Anaconda2\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 66, in create_session
kernel_name=kernel_name)
File "C:\Users\USER1\Anaconda2\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 84, in start_kernel
**kwargs)
File "C:\Users\USER1\Anaconda2\lib\site-packages\jupyter_client\multikernelmanager.py", line 109, in start_kernel
km.start_kernel(**kwargs)
File "C:\Users\USER1\Anaconda2\lib\site-packages\jupyter_client\manager.py", line 244, in start_kernel
**kw)
File "C:\Users\USER1\Anaconda2\lib\site-packages\jupyter_client\manager.py", line 190, in _launch_kernel
return launch_kernel(kernel_cmd, **kw)
File "C:\Users\USER1\Anaconda2\lib\site-packages\jupyter_client\launcher.py", line 115, in launch_kernel
proc = Popen(cmd, **kwargs)
File "C:\Users\USER1\Anaconda2\lib\subprocess.py", line 710, in __init__
errread, errwrite)
File "C:\Users\USER1\Anaconda2\lib\subprocess.py", line 958, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
I have investigated further and added the following print lines before proc = Popen(cmd, **kwargs) inside the launcher.py file:
print cmd
print kwargs
Now I see that proc = Popen(cmd, **kwargs) is called with cmd=
['C:\\Users\\USER1\\Anaconda2_32bit\\python.exe', '-m', 'ipykernel', '-f', 'C:\\Users\\USER1\\AppData\\Roaming\\jupyter\\runtime\\kernel-a3f46334-4491-4fef-aeb1-6772b8392954.json']
This is a problem because my python.exe is not in
C:\\Users\\USER1\\Anaconda2_32bit\\python.exe
but in
C:\\Users\\USER1\\Anaconda2\\python.exe
However, I have checked the paths under Computer / Advanced system settings / Advanced / Environment Variables, and \\Anaconda2_32bit\\ is never specified there.
Thus I suspect that the wrong path is specified somewhere else. Where could this be, and how can I fix it?
Also, I previously had an installation of Anaconda in \\Anaconda2_32bit\\, but I have uninstalled it.
IPython has its kernels registered in special configuration files.
I ran the command:
ipython kernelspec list
the output was:
Available kernels:
python2 C:\ProgramData\jupyter\kernels\python2
I looked into the C:\ProgramData\jupyter\kernels\python2\kernel.json file, and there was a wrong path set for python2. I fixed the path, and it works now; a sketch of the fix is below.
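For illustration, the same fix can be scripted (a sketch only; the paths are the ones from the question):
import json
spec_path = r"C:\ProgramData\jupyter\kernels\python2\kernel.json"
with open(spec_path) as f:
    spec = json.load(f)
# argv[0] is the interpreter the kernel launches; point it at the existing Anaconda2 install
spec["argv"][0] = r"C:\Users\USER1\Anaconda2\python.exe"
with open(spec_path, "w") as f:
    json.dump(spec, f, indent=1)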
I've followed the directions here to build the HTML documentation: https://code.ros.org/trac/opencv/browser/trunk/opencv/doc/README.txt (actually my local copy that came with OpenCV 2.2, but they are the same). However, I get this error after running sh buildall:
parsing
Traceback (most recent call last):
File "latex.py", line 714, in <module>
fulldoc = latexparser(sys.argv[1])
File "/home/user/OpenCV-2.2.0/doc/latex2sphinx/latexparser.py", line 114, in latexparser
tokens = tokenize(filename)
File "/home/user/OpenCV-2.2.0/doc/latex2sphinx/latexparser.py", line 106, in tokenize
pickle.dump(r, open(cache_filename, 'w'))
IOError: [Errno 2] No such file or directory: 'parse-cache/c83e178b805f6b01ff8d55cda4bd4a29'
The PDF file built fine, but I prefer HTML.
I'm using Ubuntu 10.10.
I was able to get this working by simply running mkdir parse-cache in the opencv/doc/latex2sphinx directory; a trivial equivalent is sketched below.
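Equivalently, the missing cache directory can be created from Python before rerunning the build (a trivial sketch; it assumes you run it from opencv/doc/latex2sphinx):
import os
# latexparser.py writes its pickle cache into parse-cache/ but does not create the directory itself
if not os.path.isdir("parse-cache"):
    os.mkdir("parse-cache")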