CircleCI 2.1 build failing - circleci

I am having some issues setting up my CircleCI config.yml file to accommodate a Cypress e2e test after I upgraded it to version 2.1. It keeps failing with the following error:
#!/bin/sh -eo pipefail
# ERROR IN CONFIG FILE:
# [#/jobs/build] 0 subschemas matched instead of one
# 1. [#/jobs/build] only 1 subschema matches out of 2
# | 1. [#/jobs/build] 2 schema violations found
# | | 1. [#/jobs/build] extraneous key [branches] is not permitted
# | | | Permitted keys:
# | | | - description
# | | | - parallelism
# | | | - macos
# | | | - resource_class
# | | | - docker
# | | | - steps
# | | | - working_directory
# | | | - machine
# | | | - environment
# | | | - executor
# | | | - shell
# | | | - parameters
# | | | Passed keys:
# | | | - working_directory
# | | | - docker
# | | | - steps
# | | | - branches
# | | 2. [#/jobs/build/docker/0] extraneous key [env] is not permitted
# | | | Permitted keys:
# | | | - image
# | | | - name
# | | | - entrypoint
# | | | - command
# | | | - user
# | | | - environment
# | | | - aws_auth
# | | | - auth
# | | | Passed keys:
# | | | - image
# | | | - env
# 2. [#/jobs/build] expected type: String, found: Mapping
# | Job may be a string reference to another job
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
This is my yml file:
version: 2.1
jobs:
  build:
    working_directory: ~/myapp-web
    docker:
      - image: node:10.13.0-stretch
        env:
          - DISPLAY=:99
          - CHROME_BIN=/usr/bin/google-chrome
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: Install Dependencies
          command: |
            npm install -g @angular/cli
            npm install
            npm install -g firebase-tools
            apt-get -y -qq update
            apt-get -y -qq install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              apt-get -y -qq update
              apt-get -y -qq install python-dev
              curl -O https://bootstrap.pypa.io/get-pip.py
              python get-pip.py --user
              echo 'export PATH=/root/.local/bin:$PATH' >> ~/.bash_profile
              source ~/.bash_profile
              pip install awscli --upgrade --user
              ~/.local/bin/aws configure set default.s3.signature_version s3v4
            fi
            cd /root/myapp-web/src/app/functions/ && npm install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Run Tests
          command:
            npm run test-headless
      - run:
          name: Deploy to AWS
          command: |
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              ng build --prod --configuration=production --progress=false
              ~/.local/bin/aws --region eu-west-2 s3 sync /root/myapp-web/dist/myapp-web/ s3://$AWS_BUCKET_TARGET --delete --exclude '.git/*'
            fi
      - run:
          name: Deploy to Firebase
          command: |
            cd /root/myapp-web/src/app/functions/
            if [[ "$CIRCLE_BRANCH" == "develop" ]]; then
              firebase use myapp-dev
            fi
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              firebase use myapp-live
            fi
            firebase deploy --token=$FIREBASE_TOKEN --non-interactive
    branches:
      only:
        - develop
        - master
orbs:
  cypress: cypress-io/cypress@1
workflows:
  test_then_build:
    jobs:
      - cypress/run:
          start: npm run serve
          wait-on: 'http://localhost:4200'

I think the problem is where you are filtering branches. You should filter branches in the workflow, not in the job. I haven't worked with orbs, so I'm not sure about the orb placement either.
branches:
  only:
    - develop
    - master

This may help: https://support.circleci.com/hc/en-us/articles/115015953868-Filter-branches-for-jobs-and-workflows
You can only filter branches in jobs if you have only one job and you aren't using the workflows keyword.
i.e.
jobs:
  build:
    branches:
      only:
        - master
        - /rc-.*/
Otherwise you need to use it like so with workflows:
workflows:
  version: 2
  build:
    jobs:
      - test:
          filters:
            branches:
              only:
                - master
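Applied to the config in the question, moving the filter into the workflow would look roughly like this (a sketch only - I haven't run it against the cypress orb):
workflows:
  test_then_build:
    jobs:
      - cypress/run:
          start: npm run serve
          wait-on: 'http://localhost:4200'
          filters:
            branches:
              only:
                - develop
                - master
and the branches: block under the build job would be removed. The schema error also flags env: under the docker image entry; per the permitted keys it lists, environment: is the key to use there.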

Related

Permission denied when running Ruby on Rails in a docker container

I'm trying to run a Ruby on Rails app; here is the docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres:11
    volumes:
      - ./tmp/db:/app/tmp/db
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
  app:
    build: .
    command: bash -c "yarn install --check-files ; rm -f tmp/pids/server.pid ; /app/bin/webpack-dev-server --host 0.0.0.0 & bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/app
      - /home/$USER/.ssh:/root/.ssh
    ports:
      - "3000:3000"
      - "3035:3035"
    depends_on:
      - postgres
      - solr
  solr:
    image: solr:8.2
And here is the Dockerfile:
FROM <OMITTED>/ruby_2.5.1:latest
MAINTAINER <OMITTED>
# Install apt-get dependencies and nodejs
RUN apt-get update && apt-get install -y \
build-essential
# Set working directory for following commands
RUN mkdir -p /app
WORKDIR /app
# Copy Gemfile and then install gems through bundler
COPY Gemfile Gemfile.lock ./
RUN gem install bundler && bundle install
# Installing nodejs, yarn, and the java runtime environment
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y yarn postgresql-client-12
RUN yarn install --check-files
RUN apt-get install -y default-jre ffmpeg nano
RUN apt-get install -y ruby-chromedriver-helper
# Copy the main application.
COPY . ./
# Start SSH agent and add key
RUN echo 'eval "$(ssh-agent -s)" &>/dev/null' >> ~/.bashrc
RUN echo 'ssh-add $(find /root/.ssh -type f ! -name "*.*" | grep id) &>/dev/null' >> ~/.bashrc
RUN yarn install --check-files
#Searches in the /.ssh directory in the container for files that do not have an extension,
#searches them for 'id' which will be a key, and then adds to the ssh agent
RUN ln -sf /usr/local/rvm/rubies/ruby-2.5.1/bin/* /usr/local/bin
# Expose port 3000 so it can be seen outside of the container
EXPOSE 3000
EXPOSE 3035
# Run the rails server
CMD ["rails", "server"]
The project is a pretty standard Ruby on Rails app; it uses PostgreSQL for the database, Solr for search functionality on the website, and Yarn for JavaScript packages. When I run docker-compose up, the project builds successfully (it gets through all 25 steps in the Dockerfile) and the solr and postgres containers start up fine, but when it comes to the web app itself I get this error:
app_1 | yarn install v1.22.18
app_1 | Error: EACCES: permission denied, open '/app/package.json'
app_1 | at Object.openSync (fs.js:462:3)
app_1 | at Object.readFileSync (fs.js:364:35)
app_1 | at onUnexpectedError (/usr/share/yarn/lib/cli.js:88608:106)
app_1 | at /usr/share/yarn/lib/cli.js:88727:9
app_1 | bash: line 1: /app/bin/webpack-dev-server: Permission denied
app_1 | Your RubyGems version (3.0.9) has a bug that prevents `required_ruby_version` from working for Bundler. Any scripts that use `gem install bundler` will break as soon as Bundler drops support for your Ruby version. Please upgrade RubyGems to avoid future breakage and silence this warning by running `gem update --system 3.2.3`
app_1 | Could not locate Gemfile or .bundle/ directory
<OMITTED>-rails_app_1 exited with code 10
I've tried so far:
Setting permissions (sudo chown -R $USER:$USER . and variations)
Pruning docker (running docker system prune -a)
Changing git branches (I originally thought it was some code on my feature branch causing an issue)
Rebooting computer and updating packages
None of these have remedied the issue.
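If it helps narrow things down, a quick check (just a debugging sketch - it assumes the app service name from the compose file above) is to compare what the container actually sees on the bind mount with the host-side ownership:
# What the container sees on the bind-mounted /app
docker-compose run --rm app ls -la /app/package.json /app/bin/webpack-dev-server
# Host-side ownership and your current UID/GID for comparison
ls -la package.json bin/webpack-dev-server
id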

Docker Compose MariaDB ends with code 0 when using command

I have a docker-compose setup with MariaDB + phpMyAdmin. I'm running some commands inside the db service, but after they finish it exits with code 0, while I'm expecting a MariaDB server to keep running.
I checked my docker-compose.yml without the commands and it worked fine.
This is my compose:
version: '3.1'
services:
  db:
    image: mariadb:10.3
    command: |
      sh -c "echo 'Starting Commands' && apt-get update && apt-get install -y wget && wget https://downloads.mysql.com/docs/sakila-db.tar.gz && tar xzf sakila-db.tar.gz && echo 'Extraction Finished' && mv sakila-db/sakila-schema.sql /docker-entrypoint-initdb.d/1.sql && mv sakila-db/sakila-data.sql /docker-entrypoint-initdb.d/2.sql && echo 'Finished Commands'"
    environment:
      MYSQL_ROOT_PASSWORD: notSecureChangeMe
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
This is the output:
db_1 | Starting Commands
db_1 | Get:1
db_1 | Get:2
db_1 | Get:3
db_1 | Get:BLA BLA BLA
db_1 | Unpacking wget (1.20.3-1ubuntu1) ...
db_1 | Setting up wget (1.20.3-1ubuntu1) ...
db_1 | BLA BLA BLA
db_1 | Connecting to downloads.mysql.com (downloads.mysql.com)|137.254.60.14|:443... connected.
db_1 | HTTP request sent, awaiting response... 200 OK
db_1 | Length: 732133 (715K) [application/x-gzip]
db_1 | Saving to: 'sakila-db.tar.gz'
db_1 | BLA BLA BLA
db_1 | 2021-11-10 23:28:49 (1.25 MB/s) - 'sakila-db.tar.gz' saved [732133/732133]
db_1 |
db_1 | Extraction Finished
db_1 | Finished Commands
root_db_1 exited with code 0
I suppose the command could be overriding something, but I cannot find what.
If you look at the original Dockerfile for mariadb you will see that they have an ENTRYPOINT and CMD which start the database.
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]
So try adding this to the list of commands you run, like so (notice the last line in the command listing):
db:
  image: mariadb:10.3
  command: |
    sh -c "echo 'Starting Commands' && \
      apt-get update && \
      apt-get install -y wget && \
      wget https://downloads.mysql.com/docs/sakila-db.tar.gz && \
      tar xzf sakila-db.tar.gz && \
      echo 'Extraction Finished' && \
      mv sakila-db/sakila-schema.sql /docker-entrypoint-initdb.d/1.sql && \
      mv sakila-db/sakila-data.sql /docker-entrypoint-initdb.d/2.sql && \
      echo 'Finished Commands' && \
      docker-entrypoint.sh mysqld"
  environment:
    MYSQL_ROOT_PASSWORD: notSecureChangeMe
Exec'ing into the container and checking the existing databases:
root@c875454e15cb:/# mysql -u root -pnotSecureChangeMe -e "show databases"
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sakila | <<<<<<<<<<<<<<<<<<<
+--------------------+
This is the MariaDB definition in my docker-compose.yaml and I don't have a problem with it.
services:
  mariadb:
    image: mariadb:10.6-focal
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: <password>
      MYSQL_DATABASE: <database>
    volumes:
      - mariadb-data:/var/lib/mysql
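One small note if you reuse this snippet: mariadb-data is a named volume, so the complete file also needs a top-level volumes: declaration (omitted above), something like:
volumes:
  mariadb-data: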

Conda fails to build, when inside docker container

I am trying to build a docker image. This is the full dockerfile:
FROM ubuntu
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
ENV PATH /opt/conda/bin:$PATH
ENV TZ=Europe/Athens
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update --fix-missing && apt-get install -y wget bzip2 ca-certificates \
libglib2.0-0 libxext6 libsm6 libxrender1 \
git mercurial subversion
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda2-4.5.11-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
RUN apt-get install -y curl grep sed dpkg && \
TINI_VERSION=`curl https://github.com/krallin/tini/releases/latest | grep -o "/v.*\"" | sed 's:^..\(.*\).$:\1:'` && \
curl -L "https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini_${TINI_VERSION}.deb" > tini.deb && \
dpkg -i tini.deb && \
rm tini.deb && \
apt-get clean
ENTRYPOINT [ "/usr/bin/tini", "--" ]
CMD [ "/bin/bash" ]
#SECOND PART
RUN apt install -y libgl1-mesa-glx
RUN conda install conda-build
RUN apt-get install -y git
WORKDIR /
RUN git clone https://github.com/cadquery/cadquery.git
WORKDIR /cadquery
RUN conda env create -n cq -f environment.yml
RUN echo "source activate cq" > ~/.bashrc
ENV PATH /opt/conda/envs/cq/bin:$PATH
WORKDIR /testing
However, when it is time to install conda-build - more specifically at STEP 12, this line:
RUN conda install conda-build
I get errors.
It seems to install the packages normally, and then it fails:
Proceed ([y]/n)?
ruamel_yaml-0.15.100 | 268 KB | ########## | 100%
readline-8.1 | 464 KB | ########## | 100%
bzip2-1.0.8 | 105 KB | ########## | 100%
tzdata-2020f | 123 KB | ########## | 100%
xz-5.2.5 | 438 KB | ########## | 100%
tk-8.6.10 | 3.2 MB | ########## | 100%
conda-build-3.21.4 | 585 KB | ########## | 100%
cffi-1.14.5 | 227 KB | ########## | 100%
ld_impl_linux-64-2.3 | 645 KB | ########## | 100%
urllib3-1.26.4 | 99 KB | ########## | 100%
pyyaml-5.4.1 | 180 KB | ########## | 100%
pip-21.1.1 | 2.0 MB | ########## | 100%
lz4-c-1.9.3 | 216 KB | ########## | 100%
beautifulsoup4-4.9.3 | 86 KB | ########## | 100%
python-3.9.5 | 22.7 MB | ########## | 100%
pkginfo-1.7.0 | 42 KB | ########## | 100%
tqdm-4.59.0 | 90 KB | ########## | 100%
setuptools-52.0.0 | 880 KB | ########## | 100%
python-libarchive-c- | 50 KB | ########## | 100%
cryptography-3.4.7 | 1.0 MB | ########## | 100%
icu-58.2 | 22.7 MB | ########## | 100%
pysocks-1.7.1 | 31 KB | ########## | 100%
libxml2-2.9.10 | 1.3 MB | ########## | 100%
certifi-2020.12.5 | 143 KB | ########## | 100%
openssl-1.1.1k | 3.8 MB | ########## | 100%
libgcc-ng-9.1.0 | 8.1 MB | ########## | 100%
patchelf-0.12 | 92 KB | ########## | 100%
glob2-0.7 | 12 KB | ########## | 100%
idna-2.10 | 52 KB | ########## | 100%
liblief-0.10.1 | 2.0 MB | ########## | 100%
pycparser-2.20 | 94 KB | ########## | 100%
chardet-4.0.0 | 198 KB | ########## | 100%
py-lief-0.10.1 | 1.3 MB | ########## | 100%
markupsafe-2.0.1 | 22 KB | ########## | 100%
zlib-1.2.11 | 120 KB | ########## | 100%
wheel-0.36.2 | 31 KB | ########## | 100%
conda-4.10.1 | 3.1 MB | ########## | 100%
libffi-3.3 | 54 KB | ########## | 100%
yaml-0.2.5 | 87 KB | ########## | 100%
libarchive-3.4.2 | 1.6 MB | ########## | 100%
ca-certificates-2021 | 120 KB | ########## | 100%
conda-package-handli | 967 KB | ########## | 100%
filelock-3.0.12 | 10 KB | ########## | 100%
requests-2.25.1 | 51 KB | ########## | 100%
ncurses-6.2 | 1.1 MB | ########## | 100%
pytz-2021.1 | 244 KB | ########## | 100%
pycosat-0.6.3 | 108 KB | ########## | 100%
psutil-5.8.0 | 342 KB | ########## | 100%
sqlite-3.35.4 | 1.4 MB | ########## | 100%
zstd-1.4.9 | 809 KB | ########## | 100%
jinja2-3.0.0 | 99 KB | ########## | 100%
brotlipy-0.7.0 | 349 KB | ########## | 100%
ripgrep-12.1.1 | 1.5 MB | ########## | 100%
_libgcc_mutex-0.1 | 3 KB | ########## | 100%
six-1.15.0 | 13 KB | ########## | 100%
soupsieve-2.2.1 | 30 KB | ########## | 100%
pyopenssl-20.0.1 | 48 KB | ########## | 100%
Downloading and Extracting Packages
UnicodeDecodeError('ascii', '/info/test/tests/data/\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e\xa8.zip.json', 22, 23, 'ordinal not in range(128)')
The command '/bin/sh -c conda install conda-build' returned a non-zero code: 1
Conda is Too Old
I replicated this error with the continuumio/miniconda2:4.5.11 Docker image:
$ docker run --rm -it continuumio/miniconda2:4.5.11 bash
(base) root@a285050719ad:/# conda install -y conda-build
# ... similar output as OP ...
UnicodeDecodeError('ascii', '/info/test/tests/data/\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e\xa8.zip.json', 22, 23, 'ordinal not in range(128)')
Additionally, attempting to upgrade the conda package fails with some extra advice:
(base) root@a285050719ad:/# conda update conda
Solving environment: done
EncodingError: A unicode encoding or decoding error has occurred.
Python 2 is the interpreter under which conda is running in your base environment.
Replacing your base environment with one having Python 3 may help resolve this issue.
If you still have a need for Python 2 environments, consider using 'conda create'
and 'conda activate'. For example:
$ conda create -n py2 python=2
$ conda activate py2
Error details: UnicodeDecodeError('ascii', '/info/test/tests/data/\xed\x94\x84\xeb\xa1\x9c\xea\xb7\xb8\xeb\x9e\xa8.zip.json', 22, 23, 'ordinal not in range(128)')
That is, you really shouldn't be using these old Miniconda2 images because the conda is no longer compatible with the Anaconda Cloud repository.
Use Newer Miniconda Installer
A clean solution is to install a newer Miniconda (or Miniforge or Mambaforge). The latest ones all have Python 3 in the base. If for some reason one must have Python 2 in the base, which means you can't have the latest conda nor the latest conda-build, then it seems Miniconda up to 4.8.3 supported Python 2.
If possible, use the latest version, as in Python 3. One can always create a Python 2 environment if needed - just better that it not be in the base. Suggested solution:
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh && \
...
Also, consider whether you can more simply start from an existing Docker image with Conda preinstalled.
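For example, a minimal sketch of that route, assuming the continuumio/miniconda3 image (which already ships a Python 3 base environment) and reusing the packages from the Dockerfile in the question:
# Start from an image that already has conda with a Python 3 base
FROM continuumio/miniconda3:latest
# System packages used later by cadquery, plus git for cloning
RUN apt-get update && apt-get install -y libgl1-mesa-glx git && rm -rf /var/lib/apt/lists/*
# Install conda-build into the Python 3 base environment
RUN conda install -y conda-build && conda clean -qafy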
Update Conda in Place (Not Recommended)
A dirty version would be to keep using the same 4.5.11 installer, but then upgrade immediately. In the Docker image I can get it to upgrade to 4.8 and keep Python 2.7; then conda-build will install at 3.18.11 (current as of May 2021 is 3.21.4).
This could similarly be done in the Dockerfile with something like
RUN conda install -y python=2.7 conda=4.8 && \
conda clean -qafy && \
conda install -y conda-build && \
conda clean -qafy
Note: I needed the first clean to get conda-build to install.

cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64

I have a Docker image which was built from this Dockerfile:
FROM nvidia/cuda
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip
COPY requirement.txt /tmp/requirement.txt
WORKDIR /tmp
RUN pip install -r requirement.txt
requirement.txt contains (the important parts):
Keras==2.4.3
Keras-Preprocessing==1.1.2
tensorboard==2.4.0
tensorboard-plugin-wit==1.7.0
tensorflow-estimator==2.3.0
tensorflow-gpu==2.3.1
When I run nvidia-smi on the running machine I'm getting:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 2080 Off | 00000000:01:00.0 On | N/A |
| 32% 28C P8 16W / 215W | 740MiB / 7973MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 473 G /usr/bin/gnome-shell 281MiB |
| 0 1726 G /usr/lib/xorg/Xorg 18MiB |
| 0 2094 G /usr/bin/gnome-shell 77MiB |
| 0 30481 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 146MiB |
| 0 32763 G /usr/lib/xorg/Xorg 212MiB |
When I run my Docker container I get the following errors:
Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2020-11-30 09:10:10.385750: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
What is missing in the Dockerfile?
How can I fix the errors?
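One thing worth checking (not verified on this setup): tensorflow-gpu 2.3.x is built against CUDA 10.1, which is why it looks for libcudart.so.10.1, while an unpinned FROM nvidia/cuda can resolve to a different CUDA version. A hedged sketch of a fix would be pinning the base image to a matching 10.1 runtime tag, assuming that tag is still published:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
and making sure the container is started with GPU access (for example docker run --gpus all ...), so the driver libraries referenced by LD_LIBRARY_PATH are actually mounted.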

Include man pages in ubuntu docker image?

https://github.com/tianon/docker-brew-ubuntu-core/issues/122
RUN apt-get -y update && \
dpkg -l | grep ^ii | cut -d' ' -f3 | xargs apt-get install -y --reinstall
I use the above command to install man pages in an Ubuntu Docker image.
But I get this error. Does anybody know how to fix the problem? Thanks.
...
update-alternatives: warning: forcing reinstallation of alternative /usr/bin/w.procps because link group w is broken
Processing triggers for libc-bin (2.31-0ubuntu9) ...
E: Could not configure 'libc6:amd64'.
E: Could not perform immediate configuration on 'libgcc-s1:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
The command '/bin/sh -c apt-get -y update && dpkg -l | grep ^ii | cut -d' ' -f3 | xargs apt-get install -y --reinstall' returned a non-zero code: 123
The problem seems to be just with reinstalling the libgcc-s1:amd64 package. I found I could 'skip' that one and everything works okay.
Here is the modified RUN line that I use
RUN apt-get -y update && \
dpkg -l | grep ^ii | cut -d' ' -f3 | grep -v '^libgcc-s1:amd64$' | xargs apt-get install -y --reinstall
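A quick way to verify the result after rebuilding (the image tag here is hypothetical, for illustration only):
docker build -t ubuntu-with-man .
docker run --rm -it ubuntu-with-man man apt-get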
