Airflow on Windows 10 - Module not found errors - environment-variables

I'm new to data science and wanted to do a little tutorial, which requires Airflow, among other things. I installed it on Windows using Git Bash in VS Code. When I tried running it, it failed to import sqlite3 (module not found error). I figured out that if I added the directory of sqlite3.py to the path it would run, but now it gives me a similar error: the pwd module is not found, from daemon.py:
File "C:\ProgramData\Anaconda3\lib\site-packages\daemon\daemon.py", line 18, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
It's strange to me that it can't find pwd. Obviously pwd works natively in both Git Bash and PowerShell; it seems like a basic, universal command. I'd love to learn more about what's going on. I don't want to end up adding 100 things to the path just to get this program to run. I'd appreciate any insights.
PS I'm using Anaconda.

It seems to be a side effect of spawning new Python daemons.
You can likely fix this by downgrading python-daemon:
pip install python-daemon==2.1.2
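For context, the pwd in that traceback is not the shell command but Python's Unix-only standard-library module for reading the user account database; CPython on Windows simply does not ship it, so no amount of path editing will make the import succeed. A quick way to see the difference (a sketch; assumes python is on your PATH):
python -c "import pwd"                                # fails on Windows: No module named 'pwd'
python -c "import getpass; print(getpass.getuser())"  # portable way to get the current user
getpass.getuser() is the portable equivalent that cross-platform code usually falls back to on Windows.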

Related

running PyCharm interpreter using nvidia-docker2

I'm working on Ubuntu 20. I've installed Docker and nvidia-docker2. In PyCharm, I've followed the JetBrains guide, but the advanced steps aren't consistent with what I see in my setup. I use PyCharm Professional 2022.2.
In this step of the guide (screenshot omitted), in the run options I additionally put --runtime=nvidia and --gpus=all.
Step 4 finishes the same as in the guide (almost, but it doesn't seem to affect anything; more on that later), and in step 5 I manually put the path to the interpreter in the virtual environment I created using the Dockerfile.
That way I am able to run nvidia-smi and see the GPU correctly, but I don't see any of the packages I installed during the Dockerfile build.
There is another, slightly different way to connect the interpreter, in which I do see the packages, but then I can't run nvidia-smi and torch.cuda.is_available() returns False.
That way, instead of doing it as in the guide (screenshot omitted),
I press the little down arrow to the left of the Add Interpreter button and then click Show all:
After which I can press the + button:
and then I can choose Docker:
This results in the difference in functionality mentioned above, and also in the displayed path (the first is the first remote interpreter, top to bottom, and the second is the second, correspondingly; screenshots omitted).
Here, correspondingly, are the effects of the first and of the second.
Here are the results of running the following test code with the interpreter connected via the first method, and then via the second (screenshots omitted).
Here is the Dockerfile if you want to take a look:
Has anyone configured this correctly who can help?
Thanks in advance.
P.S.: if I run the Docker container from Services and enter its terminal, nvidia-smi works fine, torch imports, and torch.cuda.is_available() returns True.
P.S.2:
What has worked for me for now is to change the Dockerfile to install torch directly with pip, without creating a conda environment.
Then I set the path to the python2.7 interpreter, and I can run the code but not debug it.
For running, the result is as expected (the package list shown before is still empty, but it works; I guess my IDE somehow cannot access the remote interpreter's package list in that case, I don't know why).
But the debugger outputs the following error (screenshot omitted):
Any suggestions for the debugger issue will also be welcome, although it is a separate issue.
Please update to 2022.2.1 as it looks like a known regression that has been fixed.
Let me know if it still does not work well.
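If it still misbehaves after updating, a quick sanity check (a minimal sketch; run it both in the container's own terminal and through the PyCharm-configured interpreter) makes the two setups easy to compare:
nvidia-smi
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If the container terminal prints True but the PyCharm-connected interpreter prints False, the interpreter connection rather than the image itself is at fault.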

Ghostscript installed but not found RGhost::Config::GS[:path]='/path/to/my/gs'

I've been trying for a few hours now to solve this problem; I've looked everywhere for a solution and haven't found one. I'm trying to run a spec test for my project, and the following error comes up:
RuntimeError:
Ghostscript not found in your system environment (linux-gnu).
Install it and set the variable RGhost::Config::GS[:path] with the executable.
Example: RGhost::Config::GS[:path]='/path/to/my/gs' #unix-style
RGhost::Config::GS[:path]="C:\\gs\\bin\\gswin32c.exe" #windows-style
And when I try to run RGhost::Config::GS[:path]='/usr/local/bin/gs', it returns:
bash: RGhost::Config::GS[:path]=/usr/local/bin/gs: No such file or directory
But Ghostscript is installed; I did everything (make, sudo make install, etc.), and when I run gs -v it returns:
GPL Ghostscript 9.26 (2018-11-20)
Copyright (C) 2018 Artifex Software, Inc. All rights reserved.
When I use the user interface and search for "gs" in the "Files" application, it is found in /home/marcelle/projects/ghostscript-9.26/bin/gs and I also tried:
RGhost::Config::GS[:path]='/home/marcelle/projects/ghostscript-9.26/bin/gs'
and it returns the same error:
bash: RGhost::Config::GS[:path]=/home/marcelle/projects/ghostscript-9.26/bin/gs: No such file or directory
I also tried deleting Ghostscript from my notebook with autoremove and purge and installing it again as mentioned before (./configure, make, sudo make install), then restarted the notebook and the terminal; nothing.
PS: I'm using Ubuntu 20.04.
I managed to figure out a solution. Looking at the code for RGhost, I saw in its spec that the expected path was different from the path where Ghostscript really is.
When I run whereis or which on my terminal, it returns:
which gs
/usr/local/bin/gs
So I was trying to point to this path. But looking at the spec test for RGhost, which I found on GitHub, we can see that it expects /usr/bin/gs:
it 'should detect linux env properly' do
  RbConfig::CONFIG.should_receive(:[]).twice.with('host_os').and_return('linux')
  File.should_receive(:exists?).with('/usr/bin/gs').and_return(true)
  RGhost::Config.config_platform
  RGhost::Config::GS[:path].should == '/usr/bin/gs'
end
So it expects /usr/bin and not /usr/local/bin.
Then I just copied the binary to that path, and the spec ran with no problems:
sudo cp /usr/local/bin/gs /usr/bin
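A symlink would be a less invasive alternative to copying, since /usr/bin/gs then keeps tracking the source-installed binary (a sketch, assuming which gs resolves to /usr/local/bin/gs as above):
sudo ln -s "$(which gs)" /usr/bin/gs
ls -l /usr/bin/gs    # should now point at /usr/local/bin/gs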
I've no experience with Ruby at all, but I also got this error when using Asciidoctor, which uses RGhost for PDF generation.
The command RGhost::Config::GS[:path]='/path/to/my/gs' mentioned in the error is not a shell command, which is why bash couldn't handle it. However, it wasn't immediately clear to me what to do with it either. I expected an easy way to set this variable somewhere, but couldn't find it.
What worked for me was searching for the rghost.rb file, which is where this variable is set and can be changed. On Windows, it was located in C:\Ruby30-x64\lib\ruby\gems\3.0.0\gems\rghost-0.9.7\lib\rghost.rb.
In this file, I added the following line, which solved the problem:
RGhost::Config::GS[:path]="C:\\Program Files\\gs\\gs9.55.0\\bin\\gswin64c.exe"
NB: the mentioned paths can differ per system, so make sure to use the paths that are valid for yours.

Apache Jena Commands not found

I'm trying to set up my system (Ubuntu 16.04) with Apache Jena 3.10.0, and followed the provided instructions, but I'm unable to access any of the commands that I should have access to.
For example, sparql --version and bin/sparql --version both return:
sparql: command not found
I have downloaded and extracted the files to /home/[user]/apache-jena-3.10.0, then run:
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
The command cd $JENA_HOME successfully goes to the apache-jena-3.10.0 directory.
I feel there is a basic Linux thing here that I'm missing, but I've tried a lot of things and had no luck so far. Any help would be greatly appreciated. Thanks!
The files in the download from Apache were not marked as executable. From the main apache-jena-3.10.0 directory, chmod -R 775 bin changed all the files so I could run them from the command line.
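Putting the permissions fix together with the environment variables from the question, the whole sequence looks roughly like this (a sketch; the install path is the one from the question, and the exports belong in ~/.bashrc if you want them in every new shell):
cd /home/[user]/apache-jena-3.10.0
chmod -R 775 bin                        # mark the scripts executable
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
sparql --version                        # should now print version info instead of "command not found"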

Ejabberd installation strange issue

OS: Debian 8.1 x64
I'm trying to install the ejabberd Community Server based on this tutorial.
At the end of the installation, it pops up an error message:
Error: Error running Post Install Script.
The installation may have not completed correctly
What am I doing wrong?
It looks like /bin/sh is Dash on your system (apparently the default since Debian Squeeze). However, the postinstall.sh script inside the package uses brace expansion, which while widely supported in various shells is not required by the POSIX standard, and thus Dash is not in error by not supporting it. The postinstall.sh script should either specify /bin/bash instead of /bin/sh in its first line, or abstain from using Bash-specific features.
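The difference is easy to demonstrate (a quick illustration; any brace pattern will do):
bash -c 'echo /opt/{bin,lib}'    # prints: /opt/bin /opt/lib
dash -c 'echo /opt/{bin,lib}'    # prints: /opt/{bin,lib} -- no expansion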
You should be able to get a functioning ejabberd install by explicitly running the postinstall script with Bash:
sudo bash /opt/ejabberd-15.07/bin/postinstall.sh

dockerfile: vim (compiled python), vim-ipython, and ipython notebook

I would like to build a Dockerfile on Linux which:
1. compiles vim with Python support
2. installs the Python stack (numpy, scipy, ipython, etc.)
3. creates an SSL certificate for IPython Notebook, to view the notebooks from the host machine
It seemed straightforward enough, but I have run into problems despite a variety of approaches: linking separate containers, using Anaconda, a single unified image vs. separate layers, and creating a user vs. running everything as root.
When running vim, simply installing as root does not activate the pathogen bundle vim-ipython. Creating a user allows pathogen bundles to install (i.e., NERDTree works), but :IPython throws an error:
:IPython failed
^-- failed '' not found .
I've tried the above with no layers (one large Dockerfile), and with separate layers for the Python stack, vim, and the IPython notebook.
Dockerfile
What am I not seeing here? What is the ^-- failed '' not found referring to?
I've tried running the IPython notebook with --no-browser &, then running vim, and also running two shells on the same container... but I can't get past this error.
Here is a working Dockerfile for anyone trying to get vim-ipython working in Docker.
issues:
a user/shared home is needed for vim, despite runtimepath in .vimrc pointing to pathogen/bundle
%connect_info is required with containers (see the sketch below)
I am running as root; I'm not sure why vim required a USER to install packages, but changing to USER would throw errors with CMD
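Since everything lives in one container, the rough workflow for attaching vim-ipython to an already-running kernel looks like this (a sketch; the file name is made up, and the exact :IPython invocation depends on the plugin version):
ipython notebook --no-browser --ip=0.0.0.0 &   # shell 1: start the notebook
# in a notebook cell, run %connect_info to print the kernel's connection details
vim analysis.py                                # shell 2, same container
# inside vim, :IPython then attaches to the running kernel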
--best
