How to make a UDF handle default arguments together with other required arguments? - udf

I have a Python function which takes a few required arguments together with a few default arguments, but when I tried it as fzumstein mentioned, it doesn't work as expected. What am I doing wrong?
def Doublesum(a, b=1):
    return (a + b)**2
In Excel:
=Doublesum(1)
This returns no value, i.e., #VALUE!. I have installed xlwings version 0.7.2.

I would recommend trying a new install in a conda environment (as it looks like you have that). Try:
conda create -n xlwings-test python=2.7
activate xlwings-test
conda install xlwings pandas numpy
Then try doing from xlwings import func. Then try the UDF example -- i.e. download the two files (.py and .xlsm) and open the .xlsm from the same command prompt that you are using for your xlwings-test environment.
That worked when I just tried it (on Windows). If you keep getting that stack trace about the KeyError, then please post it.
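For reference, xlwings UDFs are normally decorated with @xw.func; here is a minimal sketch of the function from the question written that way (whether =Doublesum(1) actually picks up the Python default when the second argument is omitted depends on the xlwings version, which is what this question is about):

import xlwings as xw

@xw.func
def Doublesum(a, b=1):
    # Intended behaviour: =Doublesum(1) falls back to b=1 and returns 4.
    return (a + b)**2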

Related

Running PyCharm interpreter using nvidia-docker2

I'm working on Ubuntu 20. I've installed docker and nvidia-docker2. In PyCharm, I've followed the JetBrains guide, but the advanced steps aren't consistent with what I see in my setup. I use PyCharm Professional 2022.2.
In this step:
in the run options I additionally put --runtime=nvidia and --gpus=all.
Step 4 finishes the same as in the guide (almost, but it doesn't seem to affect anything, more on that later), and in step 5 I manually put the path to the interpreter in the virtual environment I created with the Dockerfile.
That way I am able to run nvidia-smi and see the GPU correctly, but I don't see any of the packages I installed during the Dockerfile build.
There is another option to connect the interpreter a little differently, in which I do see the packages, but I can't run nvidia-smi and torch.cuda.is_available() returns False.
That way is, instead of doing this as in the guide:
I press the little down arrow to the left of the Add Interpreter button and then click Show all:
After which I can press the + button:
and then I can choose Docker:
which results in the difference in functionality mentioned above, and also in the path displayed (the first is the first remote interpreter, top to bottom, and the second corresponds to the second):
Here, of course, is the effect of the first and the second respectively:
Here are the results of running the interpreter connected with the first method:
and here is the second:
Of the following code:
Here is the Dockerfile, if you want to take a look:
Has anyone configured this correctly and can help?
Thank you in advance.
P.S.: if I run the Docker container from Services and enter the terminal, nvidia-smi works fine, importing torch works, and torch.cuda.is_available() returns True.
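For reference, a quick sanity check that can be run from whichever interpreter PyCharm actually connects to (a minimal sketch using standard PyTorch calls, nothing specific to this setup):

import sys
import torch

print(sys.executable)              # which interpreter is actually running
print(torch.__version__)           # confirms torch is importable at all
print(torch.cuda.is_available())   # True only if the container sees the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should match what nvidia-smi reports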
P.S.2:
What has worked for me for now is to change the Dockerfile to install torch directly with pip, without creating a conda environment.
Then I set the path to the python2.7 and I can run the code, but not debug it.
For Run, the result is as expected (the packages list shown before is still empty, but it works; I guess my IDE somehow cannot access the packages list of the remote interpreter in that case, I don't know why):
But the debugger outputs the following error:
Any suggestions for the debugger issue also will be welcome, although it is a different issue.
Please update to 2022.2.1, as this looks like a known regression that has been fixed.
Let me know if it still does not work well.

Apache Jena: riot does not produce output

I recently installed Apache Jena 3.17.0, and have been trying to use it to convert nquads files to ntriples.
As per the instructions here (https://jena.apache.org/documentation/tools/), I first set up my WSL (Ubuntu 20.04) environment:
$ export JENA_HOME=apache-jena-3.17.0/
$ export PATH=$PATH:$JENA_HOME/bin
and then attempted to run riot to do the conversion (triail.nq is my nquads file).
$ riot --output=NTRIPLES -v triail.nq
When I ran this, I got no output to the terminal. I'm not sure what is going wrong here, since there is no error message. Does anyone know what could be causing this / what the solution could be?
Thanks in advance!
The command will read the quad (multiple graph) data and output only the default graph. Presumably there is no default graph data in triail.nq.
If "convert" means combine all the quads into a single graph, then remove the graph field on each line of the data file with a text editor.
Otherwise, read it into an RDF dataset, copy the named graphs into a single graph, and output that.
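Since riot is a Jena (Java) command-line tool, here is a hedged sketch of that last approach using Python's rdflib instead (a different library from Jena; the filename triail.nq is taken from the question):

from rdflib import ConjunctiveGraph, Graph

# Read the N-Quads file as a dataset (default graph plus named graphs).
ds = ConjunctiveGraph()
ds.parse("triail.nq", format="nquads")

# Copy every triple from every graph into one plain graph,
# dropping the graph component of each quad.
merged = Graph()
for s, p, o in ds.triples((None, None, None)):
    merged.add((s, p, o))

# Write the merged result as N-Triples.
merged.serialize(destination="triail.nt", format="nt")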

Airflow on Windows 10 - Module not found errors

I'm new to data science and wanted to do a little tutorial, which requires Airflow, among other things. I installed it on Windows using Git Bash in VS Code. I tried running it, but it had a problem being unable to load the sqlite3 import (module not found error). I figured out that if I added the directory of sqlite3.py to the path it would run, but now it gives me a similar error: pwd module not found, from daemon.py:
File "C:\ProgramData\Anaconda3\lib\site-packages\daemon\daemon.py", line 18, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
It's strange to me that it can't find pwd. Obviously pwd works natively in both Git Bash and PowerShell; it seems like a basic universal command. I'd love to learn more about what's going on. I don't want to end up adding 100 things to the path just to get this program to run. I'd love any insights anyone can provide.
PS I'm using Anaconda.
It seems to be a side effect of spawning new Python daemons.
You can likely fix this by downgrading python-daemon:
pip install python-daemon==2.1.2
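For context: the pwd in that traceback is Python's POSIX-only pwd module (the Unix user/password database), not the shell's print-working-directory command, so it simply does not exist on Windows. A small sketch illustrating the difference:

import sys

# The `pwd` module wraps the Unix password database (getpwuid/getpwnam);
# it has nothing to do with the shell's `pwd` command and is absent on Windows.
if sys.platform.startswith("win"):
    print("pwd is not available on this platform")  # importing it here raises ModuleNotFoundError
else:
    import pwd
    print(pwd.getpwuid(0).pw_name)  # typically "root" on a Unix system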

Pytorch errors: "received an invalid combination of arguments" in Jupyter Notebook

I'm trying to learn PyTorch, but whenever I try any online tutorial (https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html#sphx-glr-beginner-blitz-tensor-tutorial-py), I get errors when running certain functions, but only in Jupyter Notebook.
When running
x = torch.empty(5, 3)
I get an error:
module 'torch' has no attribute 'empty'
Furthermore, when running
x = torch.zeros(5, 3, dtype=torch.long)
I get the error:
module 'torch' has no attribute 'long'
Some other functions work fine like:
x = torch.rand(5, 3)
But generally, most code I try to run seems to run into an error really quickly. I couldn't find any resolution online.
When I go into my docker container and simply run python in the shell, I can run these lines just fine with no errors.
I'm running pytorch in a Docker image that I extended from a fastai image, as it already included things like jupyter notebook and pytorch. I used anaconda to update everything, and committed it to a new image for myself.
I have absolutely no idea what the issue could be. I've tried updating packages through anaconda, pip, and aptitude in my docker container, making sure to commit my changes, but nothing seems to work. I also tried creating a new kernel with Python 3.7, as I noticed that my Jupyter Notebook only runs on 3.6.4, while python in the shell is at 3.7.
I've also tried getting different docker images and extending them with what I need, but all the images I've tried have had errors with anaconda, where it gets stuck on the "Solving environment" step.
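One way to confirm that the notebook kernel is picking up an older torch than the shell is a quick diagnostic from inside the notebook (a minimal sketch; to the best of my knowledge, torch.empty and the torch.long dtype only arrived around PyTorch 0.4):

import sys
import torch

print(sys.executable)       # the interpreter behind this Jupyter kernel
print(sys.version)          # e.g. 3.6.4 here vs. 3.7 in the shell
print(torch.__version__)    # torch.empty / torch.long need a reasonably recent torch
print(torch.__file__)       # which torch installation is actually being imported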
OK, so the fix for me was to update pytorch through conda using the following command:
conda update pytorch
If it's not installed yet, I've gotten it to work in other environments by simply installing it through conda:
conda install pytorch
Kind of stupid that I didn't try this earlier, but I was confused about the difference between conda and pip.

Conda: what happens when you activate an environment?

How does running source activate <env-name> update the $PATH variable? I've been looking in the CONDA-INSTALLATION/bin/activate script and do not understand how conda updates my $PATH variable to include the bin directory for the recently activated environment. Nowhere can I find the code that conda uses to prepend my $PATH variable.
Disclaimer: I am not a conda developer, and I'm not a Bash expert. The following explanation is based on me tracing through the code, and I hope I got it all right. Also, all of the links below are permalinks to the master commit at the time of writing this answer (7cb5f66). Behavior/lines may change in future commits. Beware: Deep rabbit hole ahead!
Note that this explanation is for the command source activate env-name, but in conda>=4.4, the recommended way to activate an environment is conda activate env-name. I think if one uses conda activate env-name, you should pick up the explanation around the part where we get into the cli.main function.
For conda >=4.4,<4.5, looking at CONDA_INST_DIR/bin/activate, we find on the second to last and last lines (GitHub link):
. "$_CONDA_ROOT/etc/profile.d/conda.sh" || return $?
_conda_activate "$@"
The first line sources the script conda.sh in the $_CONDA_ROOT/etc/profile.d directory, and that script defines the _conda_activate bash function, to which we pass the arguments "$@", which is basically all of the arguments that we passed to the activate script.
Taking the next step down the rabbit hole, we look at $_CONDA_ROOT/etc/profile.d/conda.sh and find (GitHub link):
_conda_activate() {
    # Some code removed...
    local ask_conda
    ask_conda="$(PS1="$PS1" $_CONDA_EXE shell.posix activate "$@")" || return $?
    eval "$ask_conda"
    _conda_hashr
}
The key is the line ask_conda=..., and particularly $_CONDA_EXE shell.posix activate "$@". Here, we are running the conda executable with the arguments shell.posix, activate, and then the rest of the arguments that got passed to this function (i.e., the environment name that we want to activate).
Another step into the rabbit hole... From here, the conda executable calls the cli.main function and since the first argument starts with shell., it imports the main function from conda.activate. This function creates an instance of the Activator class (defined in the same file) and runs the execute method.
The execute method processes the arguments and stores the passed environment name into an instance variable, then decides that the activate command has been passed, so it runs the activate method.
Another step into the rabbit hole... The activate method calls the build_activate method, which calls another function to process the environment name to find the environment prefix (i.e., which folder the environment is in). The build_activate method then adds the prefix to the PATH via the _add_prefix_to_path method. Finally, the build_activate method returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
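As a toy illustration of that last step (not conda's actual code; the names here are made up), the idea is just turning a dictionary of environment changes into shell lines that will eventually be evaluated:

# Hypothetical sketch, not conda's implementation: a build_activate-style result
# is a mapping of environment variables, rendered into shell code for eval.
def yield_commands(env_changes):
    for name, value in env_changes.items():
        yield f"export {name}='{value}'"

changes = {
    "CONDA_DEFAULT_ENV": "myenv",
    "CONDA_PREFIX": "/opt/conda/envs/myenv",
    "PATH": "/opt/conda/envs/myenv/bin:/usr/bin",
}
print("\n".join(yield_commands(changes)))  # roughly the kind of output that gets eval'd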
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
Phew! I hope I got everything right. As I said, I'm not a conda developer, and far from a Bash expert, so please excuse any explanation shortcuts I took that aren't 100% correct. Just leave a comment, I'll be happy to fix it!
I should also note that the recommended method to activate environments in conda >=4.4 is conda activate env-name, which is one of the reasons this is so convoluted - the activation is mostly handled in Python now, whereas (I think) previously it was more-or-less handled directly in Bash/CMD.
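If you just want to see the final shell code without tracing through all of this, conda can print it directly; a hedged sketch (recent conda versions print the activation commands to stdout, though the exact output varies by version and shell):

import subprocess

# Ask conda for the POSIX shell code it would have the shell eval
# when activating the "base" environment.
result = subprocess.run(
    ["conda", "shell.posix", "activate", "base"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # expect lines setting PATH, CONDA_PREFIX, CONDA_DEFAULT_ENV, etc.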
