I have been facing this issue for about 24 hours: I try to start JupyterLab via the conda prompt and receive the error message: A connection to the Jupyter server could not be established. JupyterLab will continue trying to reconnect. Check your network connection or Jupyter server configuration.
It worked earlier; however, I have since installed some needed packages, e.g. xarray, cartopy, and their dependencies. Since then I cannot connect to the server anymore. Any ideas?
Many thanks!
(base) C:\Users\Judith Marina>jupyter lab
[I 2021-12-15 10:16:58.336 ServerApp] jupyterlab | extension was successfully linked.
[I 2021-12-15 10:16:58.753 ServerApp] nbclassic | extension was successfully linked.
[I 2021-12-15 10:16:58.785 ServerApp] nbclassic | extension was successfully loaded.
[I 2021-12-15 10:16:58.785 LabApp] JupyterLab extension loaded from C:\Users\Judith Marina\miniconda3\lib\site-packages\jupyterlab
[I 2021-12-15 10:16:58.785 LabApp] JupyterLab application directory is C:\Users\Judith Marina\miniconda3\share\jupyter\lab
[I 2021-12-15 10:16:58.800 ServerApp] jupyterlab | extension was successfully loaded.
[I 2021-12-15 10:16:58.800 ServerApp] Serving notebooks from local directory: C:\Users\Judith Marina
[I 2021-12-15 10:16:58.800 ServerApp] Jupyter Server 1.13.1 is running at:
[I 2021-12-15 10:16:58.800 ServerApp] http://localhost:8888/lab?token=f35d2d2631a8bd064aa299fefb0ca4da0d60c940c351d5b1
[I 2021-12-15 10:16:58.800 ServerApp] or http://127.0.0.1:8888/lab?token=f35d2d2631a8bd064aa299fefb0ca4da0d60c940c351d5b1
[I 2021-12-15 10:16:58.800 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 2021-12-15 10:16:58.837 ServerApp]
To access the server, open this file in a browser:
file:///C:/Users/Judith%20Marina/AppData/Roaming/jupyter/runtime/jpserver-5216-open.html
Or copy and paste one of these URLs:
http://localhost:8888/lab?token=f35d2d2631a8bd064aa299fefb0ca4da0d60c940c351d5b1
or http://127.0.0.1:8888/lab?token=f35d2d2631a8bd064aa299fefb0ca4da0d60c940c351d5b1
[I 2021-12-15 10:17:02.723 LabApp] Build is up to date
[W 2021-12-15 10:17:04.374 ServerApp] Notebook phd/Data/Satellite_data/NorthSea_Storms/Generating_statistics_from_EO_data.ipynb is not trusted
[I 2021-12-15 10:17:05.015 ServerApp] Kernel started: b2477d08-fc4c-4989-9907-675635cb706f
[I 2021-12-15 10:17:05.019 ServerApp] Kernel started: d2370dfe-d3d9-4e6f-bd0f-1ff1a3addd11
[W 2021-12-15 10:17:06.208 ServerApp] Got events for closed stream <zmq.eventloop.zmqstream.ZMQStream object at 0x000001E010746760>
Traceback (most recent call last):
  File "C:\Users\Judith Marina\miniconda3\Scripts\jupyter-lab-script.py", line 9, in <module>
    sys.exit(main())
  File "C:\Users\Judith Marina\miniconda3\lib\site-packages\jupyter_server\extension\application.py", line 577, in launch_instance
    serverapp.start()
  File "C:\Users\Judith Marina\miniconda3\lib\site-packages\jupyter_server\serverapp.py", line 2669, in start
    self.start_ioloop()
  File "C:\Users\Judith Marina\miniconda3\lib\site-packages\jupyter_server\serverapp.py", line 2655, in start_ioloop
    self.io_loop.start()
  File "C:\Users\Judith Marina\miniconda3\lib\site-packages\tornado\platform\asyncio.py", line 199, in start
    self.asyncio_loop.run_forever()
  File "C:\Users\Judith Marina\miniconda3\lib\asyncio\base_events.py", line 596, in run_forever
    self._run_once()
  File "C:\Users\Judith Marina\miniconda3\lib\asyncio\base_events.py", line 1875, in _run_once
    handle = self._ready.popleft()
IndexError: pop from an empty deque
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
The same error popped up after creating a new conda environment and running jupyter-lab for the second time.
Again, as mentioned in dukeelo's comment above, jupyter-notebook ran fine. I did not try the browser link to jupyter-lab.
Since it was jupyter-lab specific, I looked in the ~/.jupyter folder for recently updated files:
(ccdev) curt@asimov:~$ cd ~/.jupyter/lab/workspaces
(ccdev) curt@asimov:~/.jupyter/lab/workspaces$ ls -lh
total 12K
-rw-rw-r-- 1 curt curt 734 Mar  2 20:31 auto-f-ff01.jupyterlab-workspace
-rw-rw-r-- 1 curt curt 449 Feb  8 19:15 auto-i-81a0.jupyterlab-workspace
-rw-rw-r-- 1 curt curt 685 Mar 15 19:08 default-37a8.jupyterlab-workspace
and deleting the recent 'default-37a8.jupyterlab-workspace' file seemed to fix the server error.
How the 'default-......' file got corrupted in the first place, I do not know. Possibly due to running jupyter-lab in one environment and then in another?
Yes - deleting the recent 'default-37a8.jupyterlab-workspace' file seemed to fix the server error.
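The cleanup described above can be sketched like this, assuming the default workspace location; moving the files aside is safer than deleting them outright:

```shell
# Sketch: move cached JupyterLab workspace files out of the way.
# The path below is the default location; adjust it for your installation.
WORKSPACES="${HOME}/.jupyter/lab/workspaces"
mkdir -p "$WORKSPACES" "${WORKSPACES}.bak"
# Move rather than delete, so a workspace file can be restored if needed.
find "$WORKSPACES" -maxdepth 1 -name '*.jupyterlab-workspace' \
  -exec mv {} "${WORKSPACES}.bak/" \;
```

JupyterLab regenerates a fresh workspace file on the next launch, so typically nothing is lost beyond the saved tab layout.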
I have an RPi running NGINX and uWSGI, serving a webpage and an API via uWSGI.
Web page works fine, both locally and from the web.
API works locally, but not via the web. My guess is that it's either the router or the NGINX configuration.
I am using cloudflare for the DNS, and all appears fine there.
I can GET / POST locally using Postman, but not via the web address. I would greatly appreciate any ideas on where to look.
Output from uwsgi is:
*** Starting uWSGI 2.0.20 (32bit) on [Sat May 14 12:35:08 2022] ***
compiled with version: 8.3.0 on 06 October 2021 05:59:48
os: Linux-5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022
nodename: xxx
machine: armv7l
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /var/www/xxx.xxx/public
detected binary path: /home/pi/.local/bin/uwsgi
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 12393
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :9090 fd 4
spawned uWSGI http 1 (pid: 3176)
uwsgi socket 0 bound to TCP address 127.0.0.1:34881 (port auto-assigned) fd 3
Python version: 3.7.3 (default, Jan 22 2021, 20:04:44) [GCC 8.3.0]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0xd5c950
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 64408 bytes (62 KB) for 1 cores
*** Operational MODE: single process ***
<<<<<<<<<<<<<<<< Loaded script >>>>>>>>>>>>>>>>
WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0xd5c950 pid: 3175 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 3175, cores: 1)
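One way to narrow down where the request dies is to compare the HTTP status code locally and through the public hostname (a sketch: `probe` is a hypothetical helper, and `api.example.com` is a placeholder for the real domain):

```shell
# probe prints the HTTP status code for a URL; curl prints 000 when the
# connection itself fails, which distinguishes routing/DNS problems from
# application errors returned by uWSGI or NGINX.
probe() { curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1" || true; }

echo "local:  $(probe http://127.0.0.1:9090/api)"   # uWSGI http port from the log
echo "public: $(probe https://api.example.com/api)" # placeholder public hostname
```

If the local call returns a real status but the public one returns 000 or a Cloudflare error page, the problem sits in the router/NGINX/DNS chain rather than in uWSGI.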
To restore our production db locally, I'm adding a Postgres dump to a Docker build file. Until recently this was a smooth process, but as the db steadily grows (now 80+ GB), it seems I've hit an unknown threshold. The build crashes at a simple ADD dmp.sql.gz /tmp/dmp.sql.gz line in the Dockerfile (so before it actually unzips or executes the contents of the file):
Sending build context to Docker daemon 87.42GB
Step 1/6 : FROM ecr.url/postgres96
---> 36f64c15a938
...
Step 5/6 : ADD dmp.sql.gz /tmp/dmp.sql.gz
Error processing tar file(exit status 1): unexpected EOF
The logs of the Docker daemon don't give me much of a clue:
Aug 15 10:02:55 raf-P775DM3-G dockerd[2498]: time="2018-08-15T10:02:55.902896948+02:00" level=error msg="Can't add file /var/lib/docker/overlay2/84787e6108e9df6739cee9905989e2aab8cc72298cbffa107facda39158b633d/diff/tmp/dmp.sql.gz to tar: io: read/write on closed pipe"
Aug 15 10:02:55 raf-P775DM3-G dockerd[2498]: time="2018-08-15T10:02:55.904099449+02:00" level=error msg="Can't close tar writer: io: read/write on closed pipe"
I followed up on the actual copying of the file to the overlay fs, expecting to see it crash somewhere in the process, but it actually crashes after the whole file is transferred:
root@raf-P775DM3-G:/home/raf# ls /var/lib/docker/overlay2/e1d241ba14524cff6a7ef3dff8222d4f1ffbc4de05f60cd15d6afbdb2bb9f754/diff/tmp/ -lrta
total 85150928
-rw-r--r-- 1 root root 87194526754 Aug 14 00:01 dmp.sql.gz // -> this is the whole file
drwxr-xr-x 3 root root 4096 Aug 14 17:30 ..
drwxrwxrwt 2 root root 4096 Aug 14 17:30 .
When this dmp file was in the 70GB range, restoring it in this fashion was a time-consuming but smooth process, on different OSes and Docker versions.
Can anyone help figure out the gist of the problem?
Currently experiencing this issue on Docker version 18.06.0-ce, build 0ffa825
P.S.: I read about a tar header limit of 8GB which causes an EOF exception (https://github.com/moby/moby/issues/37581), but again, we were restoring 70GB+ dumps without issue.
Try upgrading to 18.09. They changed the tar backend which should fix this issue. As for why the 70GB file worked, I suspect it has something to do with compression in the layers since you cannot trigger this issue with an 8GB file of zeros. See https://github.com/moby/moby/pull/37771.
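For reference, the classic tar header stores the entry size in a 12-character octal field, which caps a single file at 8 GiB unless an extended header format is used. A quick way to check a dump against that limit (a sketch: `check_tar_limit` is a hypothetical helper; the fallback covers the GNU vs. BSD `stat` flag difference):

```shell
# Report whether a file exceeds the classic tar size field (8 GiB),
# the limit discussed in moby/moby#37581.
TAR_LIMIT=$((8 * 1024 * 1024 * 1024))  # 8 GiB in bytes
check_tar_limit() {
  # GNU stat uses -c %s, BSD/macOS stat uses -f %z
  size=$(stat -c %s "$1" 2>/dev/null || stat -f %z "$1")
  if [ "$size" -gt "$TAR_LIMIT" ]; then echo over; else echo within; fi
}
```

For example, `check_tar_limit dmp.sql.gz` on the 87GB dump above would print `over`, which is consistent with the pre-18.09 tar backend failing on it.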
https://github.com/linkerd/linkerd#docker
From the instructions in the Readme, I have executed the following commands:
; linkerd/docker ;namerd/docker
I get the following exception,
[info] Done packaging.
[trace] Stack trace suppressed: run last linkerd/bundle:docker for the full output.
[error] (linkerd/bundle:docker) java.io.IOException: Cannot run program "docker" (in directory "/home/shaikk/linkerd/linkerd/target/docker"): error=2, No such file or directory
[error] Total time: 284 s, completed Mar 6, 2017 9:13:49 AM
I think the No such file or directory error message is referring to the docker binary itself. Can you try running which docker to see if it's in your path? If it's not there, you can install it using the instructions here: https://docs.docker.com/engine/installation/#platform-support-matrix
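The check can be done with a plain PATH lookup before re-running the sbt task (`have` is just a hypothetical wrapper around `command -v`):

```shell
# have prints the full path of a command if it is on PATH, and fails otherwise.
have() { command -v "$1"; }

# If this prints nothing to stdout, docker is missing or not on PATH.
have docker || echo "docker is not on PATH" >&2
```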
I'm trying to set up a uWSGI server with a Lua script.
For now I just have a little test script (more or less the one shown in the uWSGI docs: http://uwsgi-docs.readthedocs.org/en/latest/Lua.html#your-first-wsapi-application).
Here is my script:
function run(wsapi_env)
    local headers = { ["Content-type"] = "text/html" }
    local function hello_text()
        coroutine.yield("<html><body>")
        coroutine.yield("<p>Hello Wsapi!</p>")
        coroutine.yield("<p>PATH_INFO: " .. wsapi_env.PATH_INFO .. "</p>")
        coroutine.yield("<p>SCRIPT_NAME: " .. wsapi_env.SCRIPT_NAME .. "</p>")
        coroutine.yield("</body></html>")
    end
    return 200, headers, coroutine.wrap(hello_text)
end
return run
I launch uWSGI with this command line (until I manage to launch it successfully once; then I will use a config file):
uwsgi --socket :63031 --plugins lua --lua main.lua --master
I run this command from the directory where main.lua is stored (I've also tried with main.lua's full path).
But uWSGI doesn't load the Lua script:
*** Starting uWSGI 2.0.7-debian (64bit) on [Thu Feb 5 15:45:00 2015] ***
compiled with version: 4.9.1 on 25 October 2014 19:17:54
os: Linux-3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt2-1 (2014-12-08)
nodename: ns342653.ip-91-121-135.eu
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /home/vincent/web
detected binary path: /usr/bin/uwsgi-core
your processes number limit is 63906
your memory page size is 4096 bytes
detected max file descriptor number: 65536
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :63031 fd 3
Initializing Lua environment... (1 lua_States)
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 145536 bytes (142 KB) for 1 cores
*** Operational MODE: single process ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 8148)
spawned uWSGI worker 1 (pid: 8149, cores: 1)
How can I make uWSGI load my script?
Thanks for your answer.
(P.S.: I've successfully launched uWSGI with the psgi plugin and a Perl script using almost the same config.)
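For the later config-file step mentioned above, the command line maps onto an ini file along these lines (a sketch; the option names mirror the flags already used):

```ini
; hypothetical uwsgi.ini equivalent of:
;   uwsgi --socket :63031 --plugins lua --lua main.lua --master
[uwsgi]
socket  = :63031
plugins = lua
lua     = main.lua
master  = true
```

It can then be launched with `uwsgi --ini uwsgi.ini`.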
I am in the process of setting up a server to run a Ruby on Rails application on Fedora 12, using Passenger.
I am at the stage where I've installed Passenger, set it up as prescribed, but get the following errors when I restart Apache:
[Wed Jan 13 15:41:38 2010] [notice] caught SIGTERM, shutting down
[Wed Jan 13 15:41:40 2010] [notice] SELinux policy enabled; httpd running as context unconfined_u:system_r:httpd_t:s0
[Wed Jan 13 15:41:40 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
[Wed Jan 13 15:41:40 2010] [notice] Digest: generating secret for digest authentication ...
[Wed Jan 13 15:41:40 2010] [notice] Digest: done
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
[Wed Jan 13 15:41:40 2010] [error] python_init: Python version mismatch, expected '2.6', found '2.6.2'.
[Wed Jan 13 15:41:40 2010] [error] python_init: Python executable found '/usr/bin/python'.
[Wed Jan 13 15:41:40 2010] [error] python_init: Python path being used '/usr/lib/python26.zip:/usr/lib/python2.6/:/usr/lib/python2.6/plat-linux2:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload'.
[Wed Jan 13 15:41:40 2010] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
[Wed Jan 13 15:41:40 2010] [notice] mod_python: using mutex_directory /tmp
[Wed Jan 13 15:41:40 2010] [notice] Apache/2.2.14 (Unix) DAV/2 Phusion_Passenger/2.2.9 PHP/5.3.0 mod_python/3.3.1 Python/2.6.2 mod_ssl/2.2.14 OpenSSL/1.0.0-fips-beta3 mod_perl/2.0.4 Perl/v5.10.0 configured -- resuming normal operations
As you can see, there is a permissions problem when Passenger is trying to initialize:
[Wed Jan 13 15:41:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /tmp/passenger.25235/.guard: Permission denied (13)
When Apache starts, it does create a file in /tmp:
d-ws--x--x. 2 root root 4096 2010-01-13 16:04 passenger.26117
If instead I run the app by firing up mongrel directly with mongrel_rails start -e production, I see the following:
ActiveRecord::StatementInvalid (Mysql::Error: Can't create/write to file '/tmp/#sql_5d3_0.MYI' (Errcode: 13): SHOW FIELDS FROM `users`):
Again the error points to permission issues with the /tmp directory.
I am at a loss as to what the solution is. I'm not sure if it is related to simply directory permissions or Fedora's SELinux security.
Any help would be appreciated. Thanks.
I did the same as Fred, except that instead of doing it one error at a time:
Go into permissive mode by running setenforce 0
Restart apache, and hit your site and use it for a while as normal
Run grep httpd /var/log/audit/audit.log | audit2allow -M passenger
semodule -i passenger.pp
Go back to enforcing mode by running setenforce 1
Restart apache and test your site - hopefully it should all be working as before!
Note that this is basically a specific example of the procedure in the CentOS SELinux help - check it out.
I'm having the same issue in CentOS 5.4, SELinux getting in the way of Passenger.
Setting PassengerTempDir to /var/run/passenger simply gives you the same permission errors in the new directory instead of /tmp:
[Mon Feb 22 11:42:40 2010] [error] *** Passenger could not be initialized because of this error: Cannot create directory '/var/run/passenger/passenger.3686'
I can then change the security context of /var/run/passenger to get past this error:
chcon -R -h -t httpd_sys_content_t /var/run/passenger/
...and that lets Passenger create the temp directory, but not files within that directory:
[Mon Feb 22 12:07:06 2010] [error] *** Passenger could not be initialized because of this error: Cannot create FIFO file /var/run/passenger/passenger.3686/.guard: Permission denied (13)
Oddly, re-running the recursive chcon doesn't get past this error; it keeps dying at this point, and this is where my SELinux knowledge gets murky.
The Phusion Passenger guide sections 6.3.5 and 6.3.7 have some useful thoughts, but they don't seem to completely resolve the problem.
You need more than just the httpd_sys_content_t permission. I use the following technique to get things started:
start a tail on the audit log: tail -f /var/log/audit/audit.log
reload apache: apachectl restart
Go to the /tmp directory: cd /tmp
If just one line is added, use the command: tail -1 /var/log/audit/audit.log | audit2allow -M httpdfifo
Note that the name 'httpdfifo' is just a name chosen to reflect the kind of error that has been observed.
This will create a file named 'httpdfifo.pp'. To allow apache to create a FIFO from here on after you have to issue the command: semodule -i httpdfifo.pp
Continue to do this until all audit errors have been resolved (it took 4 different kinds of permissions on my system running CentOS 5.4).
Running setenforce 0 before starting will let you test if it's SELinux. Don't forget to run setenforce 1 afterwards.
I tried what Dan Sketcher and Fred Appleman suggested, i.e. repeat the following:
yum install setroubleshoot
echo > /var/log/audit/audit.log # clear irrelevant errors
cd ~
service httpd restart # try booting passenger -- audit.log now shows the relevant permission errors
tail -f /var/log/httpd/error_log # check that passenger is still failing due to permission errors
sealert -a /var/log/audit/audit.log > selinux-diag.txt # translate the permission errors
# read and check that you are happy with selinux-diag.txt
# and either follow its specific advice, or if it just wants you to grep into audit2allow, then:
cat /var/log/audit/audit.log | audit2allow -M mypol # grant everything just denied
semodule -i mypol.pp # commit new permissions
But after doing this 5 or 6 times, I kept coming up against new errors, and some of the same errors came up even after I had tried to permit them with "audit2allow".
In the end I just turned off SELinux, with:
echo 0 >/selinux/enforce
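Note that writing to /selinux/enforce only lasts until the next reboot; on Fedora/CentOS the persistent setting lives in /etc/selinux/config, along these lines:

```
# /etc/selinux/config
SELINUX=permissive        # enforcing | permissive | disabled
SELINUXTYPE=targeted
```

Permissive mode still logs denials to audit.log without blocking anything, so it is a gentler choice than disabling SELinux outright.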