VersionMismatchWarning: Mismatched versions found - blosc - dask

I cannot do a 'pip install blosc' on Windows. I develop on Windows and have my workers and scheduler running on VMs with dask-docker. Anyone have any ideas? It seems like Dask really wants all Linux all the time.
blosc
+-----------------------+---------+
|                       | version |
+-----------------------+---------+
| client                | None    |
| scheduler             | 1.9.1   |
| tcp://127.0.0.1:38323 | 1.9.1   |
+-----------------------+---------+
(venv) D:\dev\code\datacrunch>pip install -U blosc
Collecting blosc
Using cached blosc-1.9.1.tar.gz (809 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheels for collected packages: blosc
Building wheel for blosc (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: 'd:\dev\code\netsense.support\datacrunch\venv\scripts\python.exe' 'd:\dev\code\netsense.support\datacrunch\venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' build_wheel 'C:\Users\H166631\AppData\Local\Temp\tmpwgt4t634'
cwd: C:\Users\H166631\AppData\Local\Temp\pip-install-r1476vwy\blosc
Complete output (162 lines):
Not searching for unused variables given on the command line.
-- The C compiler identification is unknown
CMake Error at CMakeLists.txt:3 (ENABLE_LANGUAGE):
No CMAKE_C_COMPILER could be found.

The compression library has to match throughout the Dask cluster, and because you don't have blosc installed, you run into this mismatch. As a side note, there is an effort to improve the error messaging in PR #3742. I can think of two solutions:
1. Switch to conda instead of pip (though this is perhaps a non-starter for you)
2. Use a different compression (one that you have installed or can easily install on your machine)
For option 2, you can set the compression programmatically like the following:
In [1]: import dask
In [2]: import distributed
In [3]: dask.config.set({'distributed.comm.compression': 'lz4'})
Or on the CLI:
DASK_DISTRIBUTED__COMM__COMPRESSION=zlib dask-worker
Or with the dask config file. For more info, I would recommend reading through: https://docs.dask.org/en/latest/configuration.html and https://docs.dask.org/en/latest/configuration-reference.html#distributed.comm.compression
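For completeness, a minimal sketch of the config-file route, assuming the default user config directory ~/.config/dask/ (the key is the same distributed.comm.compression setting referenced above):

mkdir -p ~/.config/dask
cat >> ~/.config/dask/distributed.yaml <<'EOF'
distributed:
  comm:
    compression: lz4    # any algorithm installed on every machine in the cluster
EOF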

You can always just not install blosc on your Linux machines. Dask is happy to run on Windows. It's even happy (to a certain extent) to mix Windows and Linux. But it's not happy if you have libraries on some of your machines that you don't have on others. Library uniformity is key.
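As a quick sanity check for that uniformity, here is a sketch using distributed's built-in version report; Client.get_versions(check=True) raises if the client, scheduler, and workers disagree (the scheduler address below is hypothetical):

python -c "
from distributed import Client
client = Client('tcp://10.0.0.1:8786')  # hypothetical scheduler address
client.get_versions(check=True)  # raises on any package version mismatch
"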

Related

NixOS 22.05 system build fails when referencing channel: "file 'nixos-2111' was not found in the Nix search path (add it using $NIX_PATH or -I)"

When running sudo nixos-rebuild --upgrade boot I run into the following error:
error: file 'nixos-2111' was not found in the Nix search path (add it using $NIX_PATH or -I)

       at /etc/nixos/chris.nix:30:23:

        29|   };
        30|   nixos-2111 = import <nixos-2111> {
          |                       ^
        31|     config = config.nixpkgs.config;
This only occurred after I updated the nixos channel with sudo nix-channel --add https://nixos.org/channels/nixos-22.05 nixos (for the purpose of upgrading my NixOS system). This config built correctly previously (on 20.09).
What might be the cause of this issue? How can I diagnose this further? That channel definitely exists according to:
sudo nix-channel --list
nixos https://nixos.org/channels/nixos-22.05
nixos-2003 https://nixos.org/channels/nixos-20.03
nixos-2111 https://nixos.org/channels/nixos-21.11
nixos2003 https://nixos.org/channels/nixos-20.03
nixos2009 https://nixos.org/channels/nixos-20.09
unstable https://nixos.org/channels/nixpkgs-unstable
And:
nix repl '<nixos-2111>'
Welcome to Nix version 2.3.11. Type :? for help.
Loading '<nixos-2111>'...
Added 15491 variables.
My only suspicion is the new Nix version, 2.8, up from the prior 2.3.11. Perhaps it uses a different set of channels?
Update: I just removed the references to this channel (and the associated packages), and it built successfully. Which is weird, as there were other channels referenced in this exact format, so maybe it's something specific to 21.11...
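One thing worth checking before (or instead of) dropping the channel, offered as a sketch rather than a confirmed fix: nix-channel --add only registers a channel, and sudo nixos-rebuild resolves <nixos-2111> against root's NIX_PATH, so the channel may be registered but never actually fetched for root under the new Nix:

sudo nix-channel --update    # actually fetch every registered channel
sudo nix-instantiate --find-file nixos-2111    # does <nixos-2111> resolve on root's search path?
sudo -i sh -c 'echo $NIX_PATH'    # inspect the search path the rebuild will see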

nix-shell slow execution of program

I am working with clash (Haskell -> Verilog) compilation. The demo project at https://github.com/mheinzel/clash-yosys-demo provides both a Nix branch and one using Stack.
running
time clash -v --verilog src/Top.hs -outputdir build
takes
~ 1m if I run it inside a nix shell, compared to
~ 2s if I use the version I installed with stack install --resolver lts-12.12 clash-ghc
I am very much a Nix beginner and using Nixpkgs, so can anyone give me pointers on where to look to find out why my Nix environment version is so much slower? It seems the linking of the used libraries is taking ages (this is what I gathered from using -v, the verbose flag).
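A few places to start digging (a diagnostic sketch, assuming the project's shell.nix provides clash; none of these commands are from the original post):

nix-shell --pure --run 'time clash -v --verilog src/Top.hs -outputdir build'    # rule out the stack-installed clash leaking in via PATH
nix-shell --run 'type -a clash'    # confirm which clash binary the shell actually resolves
nix-shell --run 'ghc-pkg list | tail'    # inspect the package database GHC links against

If the slow phase really is linking, comparing the dynamic dependencies of the two binaries (ldd $(which clash) inside and outside the shell) can show whether the Nix build links far more, or different, libraries.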

Run Vagrant inside Docker container

I am looking for a way to run Vagrant inside a Docker container. I tried using an Ubuntu base container, but I ran into issues: vagrant up failed.
root@991baf290ddc:/srv# vagrant up
VirtualBox is complaining that the installation is incomplete. Please
run VBoxManage --version to see the error message which should contain
instructions on how to fix this error.
root@991baf290ddc:/srv# VBoxManage --version
WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-dkms package and the appropriate headers, most likely linux-headers-.
You will not be able to start VMs until this problem is fixed.
5.0.40_Ubuntur115130
I tried installing the virtualbox-dkms package, but it didn't help.
Deleting module version: 5.0.40 completely from the DKMS tree.
Done.
Loading new virtualbox-5.0.40 DKMS files...
dpkg: warning: version '*-*' has bad syntax: version number does not start with digit
dpkg: warning: version '3.10.0-514.16.1.el7.x86_64' has bad syntax: invalid character in revision number
It is likely that 3.10.0-514.16.1.el7.x86_64 belongs to a chroot's host
Building only for *
Building initial module for *
: using /lib/modules/4.4.0-83-generic/build/.config
(I hope this is the correct config for this kernel)
Done.
Error! The directory /lib/modules/* doesn't exist.
You cannot install a module onto a non-existant kernel.
The command '/bin/sh -c dpkg-reconfigure virtualbox-dkms' returned a non-zero code: 6
[root@test-docker vagrant-in-docker]#
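For what it's worth, the underlying problem is that a container shares the host kernel: the vboxdrv module cannot be built or loaded inside the container, only on the host. A sketch of the usual workaround, assuming VirtualBox is installed on the Docker host (the image name is hypothetical):

# on the host, so /dev/vboxdrv exists there
sudo apt-get install virtualbox virtualbox-dkms linux-headers-$(uname -r)
# then hand the device through to the container
docker run -it --privileged --device /dev/vboxdrv:/dev/vboxdrv my-vagrant-image vagrant up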

SUSE Linux Dockerfile

I have a SUSE Linux 12 EC2 instance. I have activated an image, sles11sp3-docker-image, using sledocker. In the Dockerfile, when I try to install IBM Java 1.6 using
RUN zypper in java-1_6_0-ibm, I get the following error.
Refreshing service 'container-suseconnect'.
Problem retrieving the repository index file for service 'container-suseconnect':
[|]
Skipping service 'container-suseconnect' because of the above error.
Warning: No repositories defined. Operating only with the installed resolvables. Nothing can be installed.
Loading repository data...
Reading installed packages...
'java-1_6_0-ibm' not found in package names. Trying capabilities.
Resolving package dependencies...
No provider of 'java-1_6_0-ibm' found.
Nothing to do.
The command '/bin/sh -c zypper in java-1_6_0-ibm' returned a non-zero code: 104
Please help
According to the docs (https://www.suse.com/documentation/sles-12/singlehtml/dockerquick/dockerquick.html), running zypper ref -s only gets you repo URLs with 12-hour tokens. Moreover, this command only appears to work while running in Docker on a SLES12 host.
Once I push the image to a repo and run it on another host, zypper ref -s no longer works (same error as yours). I'm basically stuck pre-installing all the base stuff before I publish the image.
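In practice that means doing all the zypper work at build time on a registered SLES12 host, along these lines (a sketch; the package name is from the question):

# Dockerfile line, built on a registered SLES12 host where
# container-suseconnect can hand out the 12-hour repo tokens
RUN zypper ref -s && zypper -n in java-1_6_0-ibm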

How to get past MongoDB/PCRE “symbol lookup error” when upgrading from MongoDB 1.8 to 2.4

I’m doing some Ruby development on servers that I inherited (that is, I never originally set them up) and that haven’t been maintained in a while, and I noticed that the installed MongoDB version was 1.8 when a 2.4-series upgrade was available. Since the box is running a RedHat/CentOS variant that uses yum to install RPMs, I went ahead and did what I usually do to upgrade. First, stop the running MongoDB instance like this:
sudo service mongod stop
And then upgrade the packages from the repo.
sudo yum install mongodb mongodb-server libmongodb
All went well, including dependencies being installed. But when I went to restart MongoDB via this command:
sudo service mongod start
Nothing appeared to happen. Connections were dead. Checking the MongoDB log showed this one sad error line:
/usr/bin/mongod: symbol lookup error: /usr/bin/mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE
What the heck is that about? I saw this question-and-answer thread that recommended rebuilding from the RPM source, as well as other posts online that advise some variant of the same: download the source code to recompile, or download the RPM directly from the MongoDB site. But all of those solutions seem too radical for what should be a simple package-installer update. What could be happening?
I figured it out. Somewhat accidentally, but I'm fairly certain this is the solution. The short answer? If you get that /usr/bin/mongod: symbol lookup error: /usr/bin/mongod: undefined symbol: _ZN7pcrecpp2RE4InitEPKcPKNS_10RE_OptionsE error, then you should install pcre and pcre-devel from the repository like this:
sudo yum install pcre pcre-devel
Details on how I discovered this: basically, I was resigning myself to recompiling from scratch as outlined in this answer. That is something I do not want to do unless there is a very good reason. But as the answerer states, before recompiling one should install the following compiler items and related libraries:
sudo yum install rpm-build redhat-rpm-config gcc gcc-c++ make
sudo yum install openssl-devel snappy-devel v8-devel boost-devel python-devel python-nose scons pcre-devel readline-devel libpcap-devel gperftools-devel
Okay, so I did that to lay the groundwork for a source-code rebuild. But I also noticed during the install that pcre was being installed, as it was an apparently missing but required dependency of pcre-devel; this is key. As I was getting ready to recompile, I just decided to attempt to start mongod again like this:
sudo service mongod start
And checked. Lo and behold, the MongoDB install was running again! But why? This answer here holds the clue:
The error was caused by libpcre changing the signature of
RE::Init() to only take a std::string, rather than a char*. This
is fixed if you get a newer version of libpcrecpp, which adds the
old interface for backwards compat.
That answer also recommended recompiling from source, but now that made little sense since it was clear my MongoDB install was up and running again. So I ran lsof on the development box and saw this:
sudo lsof | grep pcre
nginx 892 deploy mem REG 253,2 97140006 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
nginx 893 deploy mem REG 253,2 97140006 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
nginx 1369 root mem REG 253,2 97140006 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
mongod 26841 mongodb mem REG 253,2 1052673 /usr/lib64/libpcrecpp.so.0.0.0 (path dev=0,53)
mongod 26841 mongodb mem REG 253,2 97126735 /lib64/libpcre.so.0.0.1 (path dev=0,53)
grep 28590 deploy mem REG 253,2 97126735 /lib64/libpcre.so.0.0.1 (path dev=0,53)
Note how the mongod process is loading /lib64/libpcre.so.0.0.1. That has to be it, right?
I confirmed this by jumping onto the partner/twin production box of this setup (where I had not yet upgraded MongoDB) and running the same lsof command; this was the result:
sudo lsof | grep pcre
nginx 922 root mem REG 253,2 81795343 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
nginx 923 deploy mem REG 253,2 81795343 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
nginx 924 deploy mem REG 253,2 81795343 (deleted)/lib64/libpcre.so.0.0.1 (stat: No such file or directory)
grep 8067 deploy mem REG 253,2 81791051 /lib64/libpcre.so.0.0.1 (path dev=0,61)
Note how, in comparison, there is no instance of mongod loading /lib64/libpcre.so.0.0.1. So the solution to this issue was not to recompile from source (and thus deal with the headaches of a no-RPM install) but rather just to install pcre from the repository.
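If you want a quicker check than lsof after applying the fix, a small verification sketch (paths taken from the lsof output above):

ldd /usr/bin/mongod | grep -i pcre    # should resolve with no "not found" entries
nm -D /usr/lib64/libpcrecpp.so.0.0.0 | grep _ZN7pcrecpp2RE4Init    # the previously missing symbol should now be exported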
