Installing Apache Airflow on Amazon Lightsail Instance - memory

So I figured out the basics of Apache Airflow and I can run DAGs/tasks on my computer (so sleek!). However, I want these to run even when my computer is off, so I bought a $5/month Lightsail instance and tried to install Airflow on there with pip install airflow.
I keep getting the attached output. It seems as though there isn't enough memory on the instance to finish the command, but I feel like if that were true, it would output an error message...
Thoughts?

I've found an answer to my own question. I tried out the solution provided for this question, and it worked:
First - I created a virtual environment and entered it by typing these commands into the command line:
virtualenv my-env
source my-env/bin/activate
Second - Once in the virtual environment, instead of inputting pip install airflow, I input pip --no-cache-dir install airflow. This avoided the memory error!
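Putting the fix together, a minimal recap of the commands above (the environment name is arbitrary):
virtualenv my-env                    # create an isolated environment
source my-env/bin/activate           # activate it
pip --no-cache-dir install airflow   # skip pip's download cache to stay within the instance's limited memory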

Related

Let webserver look at fresh pip install

I'm working on a PoC to run Airflow on k8s.
I'm missing a pip package, and I'm adding it through the shell via kubectl.
That works: when I look in the shell and run pip list, I see the new package.
I'm adding it to the Airflow webserver.
But the webserver UI still gives me an error about the missing package.
What should I do to let the webserver know that there is a new package?
Thanks in advance
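For reference, a sketch of the manual install described in the question; the pod and package names are placeholders:
kubectl exec -it airflow-webserver-0 -- /bin/bash   # open a shell in the webserver pod (pod name is hypothetical)
pip install some-package                            # install the missing package (placeholder name)
pip list                                            # the new package shows up here
# note: packages installed into a running pod live on the container's filesystem and are lost when the pod restarts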

How to modify 'docker-compose-local.yml' for it to install all requirements (needed to run Amazon MWAA environment locally)

I am running aws-mwaa-local-runner in order to run a local Apache Airflow environment (in Docker for Windows).
However, after creating the container using ./mwaa-local-env start, I repeatedly get a Broken DAG: ModuleNotFoundError. When I compare my /docker/config/requirements.txt file (which has a few more requirements that I need in it) with the output of the pip freeze command run in the Airflow container, I can see that the requirements I need for my DAGs are missing.
I tried to pip install my other requirements in the Airflow container, but to no avail.
Is there a way to modify the docker-compose-local.yml file so that it installs all of my requirements.txt when creating the container (i.e., when running Airflow)?
Is there maybe something I might be missing? Any help or suggestion would be greatly appreciated.
Look at this: https://github.com/aws/aws-mwaa-local-runner . You should install the requirements file located in dags locally:
pip install -r dags/requirements.txt
Add your extra requirements to dags/requirements.txt, not docker/config/requirements.txt. The former is installed every time you start the service, but the latter is only installed when you build or rebuild the image.
Additionally, keeping your added requirements separate is important because you will need to upload the list to your MWAA environment.
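A hedged sketch of that workflow; the package name is only an example:
echo "pymysql" >> dags/requirements.txt   # extra requirements go in the file that is installed on every start
./mwaa-local-env start                    # restarting the local runner picks the file up again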

When Jenkins job is running, IP gets frozen and inaccessible

I have set up some python script in Jenkins with AWS/Ubuntu server.
However, when I run a job, my IP address becomes inaccessible (http://3.82.243.44:8080/ just spins) and I can't do anything within the Jenkins app.
My AWS instance is showing as Running, so I don't think it's an issue there.
This is the latest thing I installed on it:
sudo apt-get install python3-pip
And this is what I'm trying to build (custom Python build) in Jenkins:
pip3 install -r requirements.txt
sbase install chromedriver latest
pytest --headless
If anyone has experience with this and can see what I may be doing wrong, please let me know.
The same thing is happening to me. The only solution I have found for the moment is to stop the instance from the AWS console and start it again. I'm still looking for a good solution to this problem, but I'm new to this "world".
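As a stopgap, that stop/start cycle can also be scripted with the AWS CLI; the instance ID is a placeholder:
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0
# note: without an Elastic IP, the instance's public address changes after a stop/start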

What does the USER keyword mean in Ubuntu

I just recently started using the Windows Subsystem for Linux. I was trying to install Angular and ran into an error. I found a potential solution, but I don't understand part of it. In the snippet below, what do the keywords USER, ENV, and RUN mean, and what are they called? I tried running "USER node" and I got an error.
USER node
RUN mkdir /home/node/.npm-global
ENV PATH=/home/node/.npm-global/bin:$PATH
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
RUN npm install -g @angular/cli
Here is the entire answer in case you need more context https://github.com/angular/angular-cli/issues/7389
USER sets the username to use when executing the commands that follow it in a Dockerfile. See the Dockerfile docs.
That is not a script, and those directives have no meaning in Ubuntu.
That is a Dockerfile. It is used by Docker to build images.
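As a minimal sketch of how such a file is consumed, assuming the snippet above is saved as Dockerfile with a suitable base image on its first line (e.g. FROM node:18, an assumption; the image tag below is arbitrary):
docker build -t angular-env .          # Docker reads the Dockerfile and executes USER/ENV/RUN at build time
docker run --rm -it angular-env bash   # inside the container, commands now run as the "node" user with the modified PATH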

dockerfile: vim (compiled python), vim-ipython, and ipython notebook

I would like to write a Dockerfile on Linux which
1. compiles vim with python
2. installs python stack (such as numpy, scipy, ipython, etc)
3. creates ssl certificate for ipython-notebook, to view the notebooks on host machine
It seemed straightforward enough, but I have run into problems despite a variety of approaches, such as linking separate containers, using Anaconda, building a single unified image vs. separate layers, and creating a user vs. running everything as root.
Simply installing everything as root does not activate the pathogen bundle/vim-ipython when running vim. Creating a user allows the pathogen bundles to install (i.e. NERDTree works), but :IPython throws an error:
:IPython failed
^-- failed '' not found .
I've tried the above with no layers/one large Dockerfile, and with separate layers for the python stack, vim, and the ipython notebook.
Dockerfile
What am I not seeing here ?
What is the ^-- failed '' not found error referring to?
I've tried running the ipython notebook with --no-browser & and then running vim, and running two shells on the same container... but I can't get past this error.
Here is a working Dockerfile for anyone trying to get vim-ipython working in Docker.
issues:
a user/shared home was needed for vim, despite the runtimepath in .vimrc pointing to pathogen/bundle
%connect_info is required when working with containers (see the sketch below)
I am running as root; I'm not sure why vim required a USER to install packages, but changing to USER would throw errors with CMD
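A hedged sketch of the %connect_info step above; the kernel file name is illustrative and the exact argument form may differ between vim-ipython versions:
# in the IPython notebook/console running inside the container:
#   %connect_info                          # prints the kernel's connection details
# in vim, from a second shell on the same container:
#   :IPython --existing kernel-1234.json   # hand the connection info to vim-ipython explicitly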
--best

Resources