Airflow installation with sudo doesn't take AIRFLOW_HOME setting

I am trying to install Airflow as the root user with AIRFLOW_HOME set to /etc/airflow. Here is what I ran; it succeeded, but into the wrong installation folder:
sudo su
export AIRFLOW_HOME=/etc/airflow
export SLUGIFY_USES_TEXT_UNIDECODE=yes
pip install apache-airflow
So it seems sudo lost the environment settings here; printenv in the root shell does not show the variables I exported.
What am I doing wrong here?
Thanks.
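For what it's worth, a minimal sketch of two common workarounds, assuming a default sudoers policy: pass the variables on the sudo command line, or preserve the caller's environment with -E. Note that AIRFLOW_HOME does not change where pip installs the packages; it only controls where the airflow command later creates its config, logs, and DAGs folder.
# Pass the variables through sudo explicitly
sudo AIRFLOW_HOME=/etc/airflow SLUGIFY_USES_TEXT_UNIDECODE=yes pip install apache-airflow
# Or export as a normal user and ask sudo to keep the environment
export AIRFLOW_HOME=/etc/airflow
export SLUGIFY_USES_TEXT_UNIDECODE=yes
sudo -E pip install apache-airflow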

Related

Upgrade from version 4.0.17 to 4.1

How do I upgrade an earlier version to the latest one?
I am running the 4.0.17 (Bitnami) version and trying to move to the latest 4.1 version. Platform: Debian.
Unpack the 4.1 files
cd into the folder and run composer update --no-dev
Copy the .env file from the 4.0.17 backup
Install JavaScript assets using npm install
Compile JavaScript assets using npm run dev
Has anyone seen documented upgrade steps? I am only getting a 500 error in the browser. How do I get access to detailed error logging so I can see more useful error messages?
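Since ProcessMaker 4 is a Laravel application, one way to get past the bare 500 page is to turn on debug mode and watch the Laravel log while reproducing the error. A sketch, assuming the standard Bitnami path used in the answer below (remember to turn APP_DEBUG back off afterwards):
# Enable verbose error pages (adjust the sed pattern if your .env differs)
sed -i 's/^APP_DEBUG=false/APP_DEBUG=true/' /opt/bitnami/processmaker/.env
# Follow the Laravel application log while reproducing the 500 error
tail -f /opt/bitnami/processmaker/storage/logs/laravel.log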
I was encountering similar issues when trying to upgrade the ProcessMaker 4 AMI to the latest version. After some trial and error and a bit of help from folks with Laravel experience, I seem to have resolved most issues with my ProcessMaker upgrade. These are the full steps I used to upgrade the AMI:
sudo su - bitnami
cd /opt/bitnami
sudo wget https://github.com/ProcessMaker/processmaker/releases/download/v4.1.0/pm4.1.tar.gz
sudo ./ctlscript.sh stop
sudo mv processmaker/ processmaker-old/
sudo tar -xzvf pm4.1.tar.gz -C .
sudo cp processmaker-old/.env processmaker/
sudo cp processmaker-old/laravel-echo-server.json processmaker/
sudo cp /opt/bitnami/processmaker-old/storage/oauth-p* /opt/bitnami/processmaker/storage/
sudo cp -R /opt/bitnami/processmaker-old/storage/app/* /opt/bitnami/processmaker/storage/app/
sudo chown -R bitnami:daemon processmaker/
cd processmaker/
composer install --no-dev
npm install
npm run dev
sudo find /opt/bitnami/processmaker/ | sudo xargs sudo chmod a+w
php artisan migrate
sudo /opt/bitnami/ctlscript.sh start
My current sticking point is that previously uploaded media is not displayed on the site, but I am no longer getting errors from laravel-echo-server or MySQL.
Aside from the files that needed to be copied over from the old installation (.env, laravel-echo-server.json, the OAuth keys, and app data), the biggest hurdle for me was php artisan migrate, which modifies tables in the processmaker database to support changes in Laravel/ProcessMaker.
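If the migrate step is where things go wrong, it can help to see which migrations have and have not run before retrying. These are standard Artisan commands, run from the processmaker directory as above:
cd /opt/bitnami/processmaker
php artisan migrate:status   # lists each migration and whether it has been applied
php artisan migrate          # safe to re-run; already-applied migrations are skipped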

Pipenv: command not found in Jenkins

I am getting /var/lib/jenkins/workspace/<workspace name>#tmp/durable-9687b918/script.sh: line 1: pipenv: command not found while running a Jenkins pipeline.
It fails while running the following command:
pipenv install --dev
If I run the same command on the server where Jenkins is hosted, it works fine. This started failing after I reinstalled Pipenv with the steps below:
Uninstalled using pip uninstall pipenv
Installed using pip3 install pipenv; I also tried sudo -H pip3 install -U pipenv, but the issue persists.
I had to switch to pip3 because I am now using Python 3 instead of 2.
Check your PATH: you might be running Python 2.x while the pipenv module was installed with pip3, so the directory holding the pipenv script may not be on the PATH that Jenkins uses. Set your PATH accordingly.
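A quick way to check is to compare where pip3 put the pipenv entry point with what the Jenkins job's shell sees; the directory below is only an example, so verify it with the first two commands:
# On the Jenkins host, find where pipenv was installed
pip3 show -f pipenv | grep -i location
which pipenv || echo "pipenv is not on PATH for this shell"
# In the pipeline's shell step, prepend that directory (example path; adjust to yours)
export PATH="$HOME/.local/bin:$PATH"
pipenv install --dev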

Unable to create superuser in cvat

I am able to build and run the CVAT tool, but when I try to create a superuser it gives me the error below.
ImportError: No module named 'gitdb.utils.compat'
I am running the command below to create the superuser.
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
Does anyone have any idea or suggestion for the above problem?
It seems the newer version of gitdb does not work with CVAT (the default version pulled in is 4.0.2). You can follow Furkan Kirac's answer, but with gitdb version 0.6.4:
# pip uninstall gitdb
# pip install gitdb==0.6.4
This problem is most probably due to a newer gitdb2 Python package.
If CVAT is already built as a Docker container, for testing you can log into the container as root, uninstall gitdb2, and install an older gitdb:
docker exec -it -u root cvat bash
pip3 uninstall gitdb2
pip3 install gitdb
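To confirm the downgrade took effect, you can check the installed version from outside the container and retry the original command from the question:
docker exec -it cvat pip3 show gitdb      # should now report the older version
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'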
Then running the Python script should work. If that is the case, a persistent solution is to rebuild the containers.
You need to edit the Dockerfile as below:
# Install git application dependencies
...
fi
RUN pip3 uninstall -y gitdb2
RUN pip3 install --no-cache-dir gitdb
Run "docker-compose build".
Hope this helps.

Why doesn't dockered CentOS recognize pip?

I want to create a container with Python and a few packages on top of CentOS. I tried running several commands inside a raw centos container and everything worked fine; I installed everything I wanted. Then I created a Dockerfile with the same commands executed via RUN, and I'm getting /bin/sh: pip: command not found. What could be wrong? Why can everything be executed on the command line but not with RUN? I've tried both variants:
RUN command
RUN command
RUN pip install ...
and
RUN command\
&& command\
&& pip install ...
Commands that I execute:
from centos
run yum install -y centos-release-scl\
&& yum install -y rh-python36\
&& scl enable rh-python36 bash\
&& pip install django
UPD: Using the full path to pip helped. What's wrong?
You need to install pip first using
yum install python-pip
or, if you need Python 3 (from EPEL):
yum install python36-pip
When not sure, ask yum:
yum whatprovides /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : #System
Matched from:
Filename : /usr/bin/pip
python2-pip-18.1-1.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : updates
Matched from:
Filename : /usr/bin/pip
python2-pip-18.0-4.fc29.noarch : A tool for installing and managing Python 2 packages
Repo : fedora
Matched from:
Filename : /usr/bin/pip
This output is from Fedora 29, but you should get a similar result on CentOS/RHEL.
UPDATE
From a comment:
But when I execute the same commands from docker run -ti centos everything is fine. What's the problem?
Maybe your PATH is broken somehow? Can you try full path to pip?
As has already been mentioned by @rkosegi, it must be a PATH issue. The following seems to work:
FROM centos
ENV PATH /opt/rh/rh-python36/root/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN yum install -y centos-release-scl
RUN yum install -y rh-python36
RUN scl enable rh-python36 bash
RUN pip install django
I "found" the above PATH by starting a centos container and typing the commands one-by-one (since you've mentioned that it is working).
There is a nice explanation on this, in the slides of BMitch which can be found here: sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#24
Q: Why doesn't RUN work?
Why am I getting ./build.sh is not found?
RUN cd /app/src
RUN ./build.sh
The only part saved from a RUN is the filesystem (as a new layer).
Environment variables, launched daemons, and the shell state are all discarded with the temporary container when pid 1 exits.
Solution: merge multiple lines with &&:
RUN cd /app/src && ./build.sh
I know this was asked a while ago, but I just had this issue when building a Docker image, and wasn't able to find a good answer quickly, so I'll leave it here for posterity.
Adding the scl enable command wouldn't work for me in my Dockerfile, so I found that you can enable scl packages without the scl command by running:
source /opt/rh/<package-name>/enable
If I remember correctly, you won't be able to do:
RUN source /opt/rh/<package-name>/enable
RUN pip install <package>
Because each RUN command creates a different layer, and shell sessions aren't preserved, so I just ran the commands together like this:
RUN source /opt/rh/rh-python36/enable && pip install <package>
I think the scl command has issues running in Dockerfiles because scl enable <package> bash will open a new shell inside your current one, rather than adding the package to the path in your current shell.
Edit:
Found that you can add packages to your current shell by running:
source scl_source enable <package>
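Putting that together, a minimal Dockerfile sketch using the scl_source approach with the rh-python36 collection from the question (the default centos shell is bash, so source works inside RUN):
FROM centos
RUN yum install -y centos-release-scl && yum install -y rh-python36
# Enable the collection and install in the same RUN so the shell state isn't lost
RUN source scl_source enable rh-python36 && pip install django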

Invoke Ansible playbook in Jenkins

I have a Jenkins build and I am trying to invoke an Ansible playbook for an S3 upload. When I execute a post-build script to invoke the playbook, I end up with the error below.
Cannot run program "ansible-playbook" (in directory "/var/jenkins_home/workspace/mybuild"): error=2, No such file or directory
(The original question included a screenshot of the Ansible post-build-script configuration.)
FYI: there is a file (ansibledemo.yml) in my build folder. I tried giving the absolute path (/var/jenkins_home/workspace/mybuild/ansibledemo.yml). Still no go.
When I try running ansible-playbook myplaybook.yml directly in the Jenkins image (terminal), I end up with bash: ansible-playbook: command not found.
When I tried installing Ansible on my Jenkins server, I couldn't execute any installation commands.
Ansible is not installed on your Jenkins machine. First you need to install Ansible there:
On Ubuntu/Debian:
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible
On CentOS/RedHat:
sudo yum install epel-release
sudo yum install ansible
After that you will be able to run ansible-playbook.
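Once it is installed, a quick sanity check from the same shell Jenkins uses, pointing at the playbook path from the question:
ansible-playbook --version
ansible-playbook /var/jenkins_home/workspace/mybuild/ansibledemo.yml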
Alternatively, you can install via pip. See the steps below:
$ virtualenv venv
$ source venv/bin/activate
$ pip install ansible-container[docker,openshift]
You can see more options to install in docs: https://docs.ansible.com/ansible-container/installation.html
That said, it is always a good option to keep a separate VM or Docker container as an "ansible-controller" and use it as a Jenkins slave, so that you don't need Ansible set up in Jenkins itself, and Jenkins stays stable without the extra load.
Download package information from the configured sources:
# apt update
Install Ansible:
# apt install ansible
That's it.
If you run the official Jenkins container (based on Debian), the repository already provides an Ansible package, so you don't need apt-add-repository. But you can get apt-add-repository by installing software-properties-common if you need it later.
dpkg -S apt-add-repository shows that this program belongs to the software-properties-common package.
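For example, inside the official Jenkins container (where you are already root):
apt update
apt install -y software-properties-common   # provides apt-add-repository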
The error appears because container authors try to keep images as light as possible and strip the cached package lists, hence the need for apt update first.
You don't need sudo, because you are root in the container by default; you become another user only if you specify one intentionally.
Please add to your question the information that you are working in a container.
