I am trying to install the google-cloud-bigquery==1.5.0 PyPI package on a GCP Composer environment that was created recently. I get this error:
Successfully installed google-cloud-bigquery-1.5.0 google-cloud-core-0.28.1 pypd-1.1.0 strict-rfc3339-0.7
+ [[ -z fail ]]
+ python3 -m pipdeptree --warn fail
Warning!!! Possibly conflicting dependencies found:
* google-cloud-translate==2.0.1
- google-cloud-core [required: >=1.1.0,<2.0dev, installed: 0.28.1]
* google-cloud-storage==1.29.0
I tried another version (2.2.0), and it conflicted with some other pre-installed Google packages.
The new environment image version is composer-1.12.2-airflow-1.10.6.
There is another environment, created a few months ago, where all PyPI packages installed successfully and the Airflow DAGs run smoothly; its image version is composer-1.10.0-airflow-1.10.6.
Question 1: I think the current issue is linked to the image version, and I probably have to recreate the new environment with an older image version. Am I correct?
Question 2: To create a new environment, I have only three options for the image version: composer-1.12.2-airflow-1.10.6, composer-1.12.2-airflow-1.10.9, and composer-1.12.2-airflow-1.10.10. How can I create an environment with image version composer-1.10.0? We have several other projects and environments in the same location/zone that use composer-1.10.0.
Please have a look at the official documentation for the Apache Airflow and Python versions that Cloud Composer supports. You can refer to the section listing the Python packages that come with composer-1.12.2-airflow-1.10.6. Each Cloud Composer image already comes with a set of included packages, so when you upgrade/downgrade an installed PyPI package or try to use other packages, you can run into conflicts.
As of now, there is no way to check for conflicts ahead of time within a Cloud Composer environment. I would suggest adjusting your packages so that they work in one of the current Composer images: composer-1.12.4-airflow-1.10.10, composer-1.12.4-airflow-1.10.9, or composer-1.12.4-airflow-1.10.6.
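You can also reproduce Composer's conflict check locally before touching the environment, since Composer runs python3 -m pipdeptree --warn fail after installing your packages. A minimal sketch, assuming a local Python 3 with venv (the version pins below are illustrative examples, not the complete composer-1.12.2 preinstalled list):

# Recreate the relevant preinstalled pins in a throwaway virtualenv and
# test your additions against them (pins here are examples only).
python3 -m venv composer-check
source composer-check/bin/activate
pip install pipdeptree
pip install google-cloud-translate==2.0.1 google-cloud-storage==1.29.0
pip install google-cloud-bigquery==1.24.0   # a release whose google-cloud-core pin fits
python3 -m pipdeptree --warn fail           # the same check Composer runs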
I am using Ubuntu 20.04.
I have followed all the TurtleBot simulation instructions here, but I don't know what the issue is. I have also checked my .bashrc file, and all the paths are correct. Can anyone help me?
Update: my .bashrc file
Running the TurtleBot launch file:
You're running 20.04, which means your ROS version will be Noetic. However, the tutorial you're referencing is written for Kinetic, which means you're installing a package for the wrong version.
You can either repeat the install instructions with git clone -b noetic-devel https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git, or cd into the cloned package and check out the right branch via git checkout noetic-devel.
Note that you'll most likely need to clean your workspace, as it will not have built correctly.
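A minimal sketch of the full sequence, assuming the conventional ~/catkin_ws workspace from the TurtleBot3 tutorials (adjust the path if yours differs):

cd ~/catkin_ws/src
rm -rf turtlebot3_simulations            # drop the kinetic-devel copy
git clone -b noetic-devel https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
cd ~/catkin_ws
rm -rf build devel                       # clean the stale build artifacts
catkin_make
source devel/setup.bash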
The GitHub repo for the Spyder IDE Unittest plugin lists only two options for installing the plugin: using the conda spyder-ide channel, or using pip.
I have been able to install the plugin using the conda-forge channel, as indicated here.
Does it make a difference which channel is used to install the plugin?
Short answer: no, it shouldn't make a difference.
Longer answer: before pressing y at the Proceed ([y]/n)? prompt, you may want to check which versions of any dependencies are going to be installed, and from which channels, especially if you are installing into an existing environment where you may want to upgrade other packages later. If you're happy for your environment to depend on packages from conda-forge, there's no issue with using the conda-forge package; otherwise (unless someone more knowledgeable can correct me) I would try to stick to the spyder-ide channel package.
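For example, a dry run prints the planned changes, including the source channel for each package, without modifying anything (the environment name here is a placeholder):

# Preview the transaction without installing anything:
conda install -n myenv -c spyder-ide spyder-unittest --dry-run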
This article on the conda-forge website says
The conda-forge and defaults are not 100% compatible. (...) that
mismatch can lead to errors when the install environment is mixing
packages from multiple channels.
For a longer discussion see the answers to this question.
As always, this advice from the conda-forge page is worth following:
we recommend always installing your packages inside a new environment
instead of the base environment from anaconda/miniconda. Using envs
make it easier to debug problems with packages and ensure the
stability of your root env.
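A sketch of that approach, keeping everything on conda-forge in a fresh environment (the environment name is arbitrary):

# Create a dedicated environment with the IDE and plugin from one channel:
conda create -n spyder-env -c conda-forge spyder spyder-unittest
conda activate spyder-env
# Verify which channel each installed package actually came from:
conda list --show-channel-urls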
I'm trying to deploy an ASP.NET Core 3.1 API on Cloud Foundry. I don't have admin rights, just developer rights. Is there a way to specify the URL of these libraries (libc6-dev, libgdiplus, and libx11-dev), perhaps a Git repo or some official repository, so that the manifest.yml can install these dependencies during deployment? Also, I cannot turn on Docker support on Cloud Foundry, as I get an "insufficient rights" message.
I would suggest you give the apt-buildpack a try. You can give it additional Ubuntu package names, and it will install those for you.
You do that through an apt.yml file. Check out this post for instructions.
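For reference, a minimal apt.yml at the app root looks roughly like this (the package names are the ones from your question; see the apt-buildpack README for the exact format):

# Write a minimal apt.yml at the app root:
cat > apt.yml <<'EOF'
---
packages:
  - libc6-dev
  - libgdiplus
  - libx11-dev
EOF
# Then list the apt-buildpack ahead of your main buildpack, e.g.:
#   cf push my-api -b https://github.com/cloudfoundry/apt-buildpack -b dotnet_core_buildpack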
It's important to understand that the apt-buildpack will install these packages into a non-standard location. Since it also runs as a non-root user, it cannot install them into standard locations.
To work around this limitation, it sets variables like $PATH and $LD_LIBRARY_PATH to point to the locations where it has installed items. Most build tools will pick up these env variables and be able to locate what you install.
It's not perfect, though, and some tools require additional env variables to be set. If you still get errors when building, look at your build tools and check whether there are ways to point them at where the apt-buildpack installs things. If you print out $PATH you can see the location; it's often /home/vcap/deps/0/..., but the index can change based on your buildpack order.
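If you have ssh access to the app, you can inspect the staged layout directly (my-api is a placeholder app name; note that a plain ssh session does not load the app's runtime env vars, so list the deps directory rather than echoing $PATH):

# See which dependency index the apt-buildpack used and what it installed:
cf ssh my-api -c 'ls /home/vcap/deps'
cf ssh my-api -c 'ls /home/vcap/deps/0/apt'   # index 0 assumes apt-buildpack ran first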
Managing Apache Beam alongside the correct google-cloud libraries for Dataflow has been frustrating for me.
I discovered that for what I'm doing I need apache-beam==2.3.0 rather than 2.4.0 (2.4.0 gives a pickling error that I cannot resolve; see Dataflow Error: 'Clients have non-trivial state that is local and unpickleable').
I need DataflowRunner to use apache-beam==2.3.0 as well, so following this person's instructions (Custom Apache Beam Python version in Dataflow), I just need the actual tar.gz file. I thought I had installed it via pip with pip install apache-beam==2.3.0, but I can't find any tar.gz on my system. When I go to the Apache website to download the source code, the link is broken.
Where can I find a tar.gz for apache-beam-2.3.0?
The latest and all historical releases of the apache-beam packages can be found on GitHub: github.com/apache/beam/releases.
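If all you need is the sdist, you can also pull it straight from PyPI; pip doesn't keep the tar.gz around after a normal install, which is why you couldn't find it on your system. A quick sketch:

# Download only the source tarball (no wheels, no dependencies) into ./beam-sdist:
pip download apache-beam==2.3.0 --no-deps --no-binary :all: -d ./beam-sdist
ls ./beam-sdist    # should contain apache-beam-2.3.0.tar.gz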
I followed the Hyperledger Fabric documentation to install and configure it on Windows 10. However, when I run the command ./byfn.sh -m generate for the first-network sample application, I get the following error:
I have gone through all the Stack Overflow questions regarding this and made sure the following steps are done:
Set the $PATH variable correctly to include the bin folder.
Downloaded the platform-specific binaries; my bin folder looks like this:
I have doubts about the following steps:
I have installed Docker for Windows and was able to verify the installation by running the hello-world image. However, I have not shared any of my local drives with Docker; I am not sure whether this is the cause of the error.
Please note that this is my first question on Stack Overflow. Forgive me for any mistakes/redundancies. Any help is greatly appreciated.
I'd suggest making sure that you run the script to download/install the binaries and images from within the fabric-samples directory.
$PATH is exported every time you run the byfn.sh script; confirm that the path configuration in byfn.sh is correct and points to your bin location:
# prepending $PWD/../bin to PATH to ensure we are picking up the correct binaries
# this may be commented out to resolve installed version of tools if desired
export PATH=${PWD}/../bin:${PWD}:$PATH
export FABRIC_CFG_PATH=${PWD}
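Once the path resolves correctly, a couple of quick checks from fabric-samples/first-network will confirm the binaries are actually found and runnable:

# Run from fabric-samples/first-network:
ls ../bin                  # should list cryptogen, configtxgen, peer, etc.
../bin/cryptogen version   # confirms the binary matches your platform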