Kubeflow pipeline error "module 'kfp.dsl' has no attribute 'RUN_ID_PLACEHOLDER'"

I am running the Kubeflow pipeline example below in a Jupyter notebook. In the def gh_summ() part, it gave me the error "module 'kfp.dsl' has no attribute 'RUN_ID_PLACEHOLDER'". Any suggestions? Thank you!
The Kubeflow pipeline notebook needs to be downloaded first:
curl -O https://raw.githubusercontent.com/kubeflow/examples/master/github_issue_summarization/pipelines/example_pipelines/pipelines-notebook.ipynb

You might have a really old SDK package installed.
Try running:
python3 -m pip install --upgrade kfp --user
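After upgrading, restart the notebook kernel so the new package is actually imported; then a quick sanity check along these lines should pass (a minimal sketch; RUN_ID_PLACEHOLDER is just a placeholder string that gets substituted with the real run ID when the pipeline executes):
import kfp
from kfp import dsl

print(kfp.__version__)          # confirm the upgraded SDK is in use
print(dsl.RUN_ID_PLACEHOLDER)   # AttributeError here means an old SDK is still loaded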


Drake Mathematical Program Tutorial

I am running Drake on Ubuntu 20.04 under WSL2, with Python 3.8.10 and Drake 1.2.0.
I tried running the "Mathematical Program Tutorial" obtained from Deepnote on my PC, but the IPOPT solver behaves strangely and does not give the expected results.
The first error occurs in the section that uses the IPOPT solver: all components of the solution are printed as "nan".
The second error, shown below, concerns get_solver_details().status:
RuntimeError: The solver_details has not been set yet.
I can see both errors in the "Demo on manually choosing a solver" section of the tutorial.
The output is the following:
SolutionResult.kUnknownError
x* = [nan nan]
Solver is IPOPT
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-12-2d1b3835c54a> in <module>
25 print("x* = ", result.GetSolution(x))
26 print("Solver is ", result.get_solver_id().name())
---> 27 print("Ipopt solver status: ", result.get_solver_details().status,
28 ", meaning ", result.get_solver_details().ConvertStatusToString())
RuntimeError: The solver_details has not been set yet.
Thank you in advance.
P.S. I installed pydrake in a venv using the following pip commands:
python3 -m venv env
env/bin/pip install --upgrade pip
env/bin/pip install drake
sudo apt-get install --no-install-recommends \
libpython3.8 libx11-6 libsm6 libxt6 libglib2.0-0
source env/bin/activate
I just downloaded the "Tutorial" folder from Deepnote and put it under env.
Then I ran it via Jupyter Notebook as
jupyter notebook
and open env/Tutorials/mathematical_program.ipynb
It turns out that the drake==1.2.0 pip package has a bug in its IpoptSolver compilation.
As a workaround, you can use SnoptSolver instead, or else use the https://drake.mit.edu/from_binary.html release (unpacking a zipped binary instead of using pip).
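For example, a minimal sketch of the workaround on a toy problem (module paths as in Drake 1.2; newer releases moved these classes into pydrake.solvers):
from pydrake.solvers.mathematicalprogram import MathematicalProgram
from pydrake.solvers.snopt import SnoptSolver

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")
prog.AddConstraint(x[0] + x[1] == 1)   # toy equality constraint
prog.AddCost(x[0] ** 2 + x[1] ** 2)    # toy quadratic cost

solver = SnoptSolver()                 # manually chosen solver, as in the tutorial
result = solver.Solve(prog)
print("Success:", result.is_success())
print("x* =", result.GetSolution(x))
print("Solver is", result.get_solver_id().name())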
It's possible that the pydrake.solvers.ipopt.IpoptSolver class (which is a wrapper around the https://coin-or.github.io/Ipopt/ library) does not run correctly under WSL2, due to its use of some odd libc API that doesn't work on Windows. We will need more information to reproduce the problem and try to debug it.
Can you state exactly how you installed pydrake (i.e., show us the command lines you used)? Was it via pip (https://drake.mit.edu/pip.html) or via binary (https://drake.mit.edu/from_binary.html)?
Can you state exactly how you ran Jupyter (the command line) to launch the notebook? Was it python3 -m pydrake.tutorials or something else?
Looks like this may not be tied to WSL but instead to the pip build (or possibly the binary build too). I ran into this on Ubuntu 20.04 (no WSL). Per Drake Slack, I filed an issue:
https://github.com/RobotLocomotion/drake/issues/17162

Mamba installing a package into the wrong environment

The background: I'm responsible for maintaining a fancy Docker image that our team uses for analytics. It uses a Jupyter notebook image as the base, and then adds various customisations, extra packages, etc.
One of the team members recently wanted to run Tensorflow. No problem, I thought, I'll just run mamba install and add it to the image. However, this created an issue: Tensorflow 2.4.3 (the latest version) is somehow incompatible with R 4.1.1 (also the latest version), or with something else in the ecosystem, causing R to be downgraded to 3.6.3. So I created a new environment and installed TF into that:
FROM hongooi/jupytermodelrisk:1.0.0
RUN mamba create -n tensorflow --clone base
# Make RUN commands use the new environment
RUN echo "conda activate tensorflow" >> ~/.bashrc
SHELL ["/bin/bash", "--login", "-c"]
RUN mamba install -y 'tensorflow=2.4.3'
But when I rebuilt the image, I found that while the tensorflow env had been created, the Tensorflow package had been installed into the base env, not the tensorflow env. Has anyone else encountered this? If I log in to the container, I can verify that the tensorflow env exists: it just doesn't contain the Tensorflow package.
I don't get this problem if I run the create, activate and install commands from inside the container. It's only when I try to do it in the Dockerfile.
I use mamba instead of conda because the latter takes forever to run, given the number of packages installed. In fact, trying to run conda install tensorflow crashes after ~5 hours.
Not an expert on Dockerfiles, but in general you can use the -n flag on the install command to specify the target environment, like so:
mamba install -n tensorflow -y tensorflow=2.4.3
This sidesteps the activation problem: each RUN step starts a fresh shell, and the conda activate line appended to ~/.bashrc is not necessarily sourced by the shell Docker invokes, so an unqualified mamba install falls through to the base environment.
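Applied to the Dockerfile from the question, a minimal sketch looks like this:
FROM hongooi/jupytermodelrisk:1.0.0
RUN mamba create -n tensorflow --clone base
# Target the named environment explicitly; no activation needed.
RUN mamba install -n tensorflow -y 'tensorflow=2.4.3'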

How to add community plugins in kong installed using docker

We are trying to install the community plugin Kong Service Virtualization. As I am completely new to Kong, I am not able to find any solution with detailed installation steps, e.g. where and how to add the plugin, how to edit kong.conf, etc. Can anyone help me with this issue?
Thanks in advance.
You can install any plugin in Kong using LuaRocks.
For example, here is a sample Dockerfile:
FROM kong
ENV LUA_PATH /usr/local/share/lua/5.1/?.lua;/usr/local/kong-oidc/?.lua;;
# For lua-cjson
ENV LUA_CPATH /usr/local/lib/lua/5.1/?.so;;
# Install unzip for luarocks, gcc for lua-cjson
RUN yum install -y unzip gcc
RUN luarocks install luacov
Here is an example of an OIDC plugin: https://github.com/nokia/kong-oidc
You can install a plugin with: luarocks install <plugin name>
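Note that installing the rock only puts the plugin code into the image; Kong also has to be told to load it. A sketch of the two usual ways to enable a plugin (the plugin name below is a placeholder; check the plugin's README for its registered name):
# In kong.conf:
plugins = bundled,<plugin name>
# Or, when running Kong in Docker, via an environment variable in the Dockerfile:
ENV KONG_PLUGINS=bundled,<plugin name>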
Build your own custom Docker image, using the Kong image as the base.
Here is a whole working example Dockerfile:
FROM kong:latest
USER root
RUN apk update && apk add git unzip luarocks
RUN luarocks install kong-oidc
USER kong
Here is an example of the Dockerfile I use to install the kong-oidc plugin with its dependencies:
FROM kong:2.0.2-alpine
USER root
ENV KONG_PLUGINS=bundled,oidc
# Add libs
ADD lib/resty/openidc.lua /usr/local/openresty/lualib/resty/openidc.lua
# Add oidc plugin
ADD plugins/oidc /usr/local/share/lua/5.1/kong/plugins/oidc
# Install dependencies
RUN luarocks install lua-resty-http
RUN luarocks install lua-resty-session
RUN luarocks install lua-resty-jwt 0.2.2
USER kong
I'm adding the oidc plugin from my own source code instead of from LuaRocks because the repository is unmaintained, and you will need to update some dependencies to make it work.
If you need a functional example of Kong + OpenID + Keycloak, check this repository and this article.

RuntimeWarning: 'nltk.downloader' found in sys.modules after import of package 'nltk', but prior to execution of 'nltk.downloader'

I'm using Docker to run an NLP system that uses nltk, languagetool, etc.
When I run docker-compose build --build-arg env=dev I get the warning message:
/usr/local/lib/python3.6/runpy.py:125: RuntimeWarning: 'nltk.downloader' found in sys.modules after import of package 'nltk', but prior to execution of 'nltk.downloader'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
Then, when I run docker-compose up, I get errors when trying to run my system.
Please help me figure out how to fix this!
In your Dockerfile, after the pip install nltk step, download what you need with:
RUN python -c "import nltk; nltk.download('your_library')"
Calling nltk.download() directly avoids executing the nltk.downloader module via runpy (python -m nltk.downloader), which is what triggers that RuntimeWarning.
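A minimal sketch of the relevant Dockerfile steps (the base image and corpus names are placeholders; substitute whatever your system actually needs):
FROM python:3.6-slim
RUN pip install nltk
# Fetch the corpora at build time so the container doesn't download them at startup.
RUN python -c "import nltk; nltk.download('punkt'); nltk.download('stopwords')"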

Dataflow wordcount.py example: "Import by filename is not supported"

Using Ubuntu 14.04 and the Dataflow Python SDK,
following the instructions at https://github.com/GoogleCloudPlatform/DataflowPythonSDK#status-of-this-release, after everything is loaded, when I try the wordcount example I get the error "Import by filename is not supported".
I suspect the issue is at line 23 of the wordcount.py example:
import google.cloud.dataflow as df
Is there a workaround for this issue?
I have tried the solution posted at "Python / ImportError: Import by filename is not supported", but that does not solve the problem.
Since this fails at the first import statement, the immediate thing to check is whether the Python Dataflow package is installed at all. The way to do that is by running pip freeze. Here is some output from running this in a virtual environment:
$ pip freeze
... Nothing since it is a clean virtual environment ...
$ pip install https://github.com/GoogleCloudPlatform/DataflowPythonSDK/archive/v0.2.3.tar.gz
... Output from installing packages ...
$ pip freeze
...
python-dataflow==0.2.3
...
Now you can run python and execute 'import google.cloud.dataflow as df' and it should work.
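For a quick one-shot check from the shell (the same import the example performs; it exits silently if the package is importable):
$ python -c "import google.cloud.dataflow as df"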
Hopefully this helps!
