Deploy to PyPI after success in Travis CI

I've successfully deployed a package to PyPI using Travis CI; however, it deploys whether or not the scripts succeed. How can I deploy only if the unit tests pass?
language: python
python:
  - '3.6'
os:
  - linux
install:
  - pip install -q -r requirements-dev.txt
  - pip install coverage
  - pip install coveralls
script:
  - python test.py
  - coverage run test.py
after_success:
  - coverage report
deploy:
  provider: pypi
  user: user
  distributions: "bdist_wheel"
  password:
    secure: secure_pw
  on:
    tags: false
    branch: dev

It sounds like you want to define two separate build stages, e.g. "Test" and "Deploy", with the PyPI deployment attached to the "Deploy" stage. Travis runs stages in sequence and only starts a stage if every job in the previous stage succeeded, so a failing test prevents the upload.
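A minimal sketch of that setup, reusing the test script and the PyPI credentials from the question (untested here):

language: python
python:
  - '3.6'
install:
  - pip install -q -r requirements-dev.txt
jobs:
  include:
    - stage: test
      script:
        - python test.py
    - stage: deploy
      # This stage runs only if every job in the "test" stage passed.
      script: skip
      deploy:
        provider: pypi
        user: user
        distributions: "bdist_wheel"
        password:
          secure: secure_pw
        on:
          branch: dev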

Related

How to cache/reuse a Docker image instead of downloading it for each step in a Bitbucket pipeline?

I'm trying to streamline my pipeline a bit to speed things up (using parallel steps).
The issue I'm running into is that it downloads the image for each step; is there a way to avoid that?
I'd like it to use the image I state, but then use a cached version of it for each step afterward. Is that possible?
I've tried a few things but haven't been able to manage it. My pipelines file looks like this:
######
# Docker Image
######
image: my_user/test_ci:latest

######
# Step Definitions
######
definitions:
  steps:
    - step: &build-dev
        name: Build Project With Dev Branches
        caches:
          - pip
        script:
          - git clone --branch dev git@bitbucket.org:my_user/launcher.git
          - git clone --branch dev git@bitbucket.org:my_user/reports.git
        artifacts:
          - reports/**
          - launcher/**
    - step: &install-requirements
        name: Install Requirements
        caches:
          - pip
        script:
          - virtualenv venv
          - source venv/bin/activate
          - pip install -r requirements-dev.txt
        artifacts:
          - reports/**
          - launcher/**
          - venv/**

######
# Pipelines
######
pipelines:
  pull-requests:
    dev:
      - step: *build-dev
      - step: *install-requirements
      - parallel:
          - step:
              name: Test Launcher
              caches:
                - pip
              script:
                - source venv/bin/activate
                - pytest launcher
          - step:
              name: Test Reports
              caches:
                - pip
              script:
                - source venv/bin/activate
                - pytest reports
This works fine, but downloading the image for each step really, really slows it all down.
Is there a way around this?

Do I need to install newman on my desktop if I want to run a newman command in GitLab?

I'm new to GitLab, newman, and Docker, and I'm sort of in a crash course on how to integrate everything.
On my desktop (Windows), I've installed newman and managed to run "newman run [postman collections]" from the Windows command line.
What I ultimately want is to run a newman command in GitLab.
In my .gitlab-ci.yml, I have this:
stages:
  - test

Test_A:
  stage: test
  image: postman/newman
  script:
    - newman run Collection1.json
A few questions come to mind:
Do I also need to run the "npm install -g newman" command in the .gitlab-ci.yml file?
If not, how does GitLab know the syntax of a newman command (e.g. newman run)?
Do I need to specify a docker command in my .gitlab-ci.yml file (e.g. docker pull postman/newman)?
Update #2
stages:
  - test

before_script:
  - npm install -g newman
  - npm install -g npm

Test_A:
  stage: test
  script:
    - newman run Collection1.json
The first thing to identify is how your GitLab pipelines are executed.
My personal choice is a Docker-based runner.
If you're using a GitLab Docker runner for your pipeline, you just have to define the container image in the .gitlab-ci.yml file.
Here's a tested (on GitLab.com) version of the pipeline YAML:
stages:
  - test

run_postman_tests:
  stage: test
  image:
    name: postman/newman
    entrypoint: [""]  # This overrides the default entrypoint for this image.
  script:
    - newman run postman_collection.json
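This also answers the original questions: with a Docker-based runner you don't need newman installed locally, you don't need npm install -g newman in the job (the postman/newman image already ships the binary), and you don't need an explicit docker pull (the runner pulls the image for you). The entrypoint override matters because, as far as I know, the image's default entrypoint is newman itself, so without the override the shell commands from script: would be passed to newman as arguments and fail.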

Tests in Travis CI are not found

I am trying to implement Travis CI in my Django/Vue.js project.
I added this .travis.yml file to my root folder:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - ./pizza/manage.py test --keepdb
But when I run the build I get this output:
pip install -r requirements.txt
./pizza/manage.py test --keepdb
System check identified no issues (0 silenced).
Ran 0 tests in 0.000s
OK
The command "./pizza/manage.py test --keepdb" exited with 0.
Done. Your build exited with 0.
Running my tests locally with 'python3 manage.py test --keepdb' works perfectly.
My manage.py is not in my root folder.
Looks like my tests are not found… How can I fix it?
If I get it right, your manage.py is not in your root directory but in a /pizza/ directory, so Travis needs to run the test command from inside that directory.
Change your .travis.yml this way:
language: python
python:
  - '3.7.3'
sudo: required
before_install:
  - chmod +x ./pizza/manage.py
before_script:
  - pip install -r requirements.txt
  - cd ./pizza/
env: DJANGO_SETTINGS_MODULE="pizzago.settings"
services:
  - postgresql
script:
  - python manage.py test --keepdb
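Note that the cd in before_script carries over into the script phase because, to my understanding, Travis runs all phases of a job in a single shell session. This matters here because manage.py test discovers tests starting from the current working directory, which is why running it from the repository root reported 0 tests.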

Bitbucket: is it possible to have one bitbucket-pipelines.yml file per branch?

I'd like to have one pipeline per branch.
I put one on branch develop and another on branch master, but they're not taken into account.
Yes, it's possible.
But you don't need a separate file for each branch: according to the documentation, you can organize the pipelines for each and every branch within the same file.
The best way to set up your pipelines is to define each step once and then call the steps for each branch you want.
Don't forget to define the default steps, which run for every branch you haven't configured explicitly.
Your bitbucket-pipelines.yml file would look something like this:
image: python:3.7.3

definitions:
  steps:
    - step: &test
        name: Test project
        caches:
          - pip
        script:
          - apt-get -y update
          - pip install --upgrade pip
          - pip install -r requirements.txt
          - python -m unittest discover tests
    - step: &lint
        name: Execute linter
        script:
          - pip install flake8
          - chmod a+x ./linter.sh
          - ./linter.sh
    - step: &bump
        name: Bump version
        script:
          - git config remote.origin.url $BITBUCKET_URL_ORIGIN
          - python bump.py

pipelines:
  branches:
    master:
      - step: *test
      - step: *lint
      - step: *bump
    develop:
      - step: *test
      - step: *lint
  default:
    - step: *lint

Share the result of a step between different jobs in CircleCI

I have this generic config.yml in CircleCI.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run lint
  deploy:
    machine: true
    steps:
      - checkout
      - run: npm install
      - run: npm run build
As you can see, npm install is called twice, which duplicates work.
Is it possible to share the result of npm install between the two jobs?
The end goal is to install the packages only once.
What you're looking for is Workspaces: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
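For reference, a minimal sketch of that approach adapted to the config above (untested against this project): the build job persists node_modules to the workspace, and deploy attaches it instead of running npm install again. A workflow is needed so the jobs run in sequence.

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:7.10
    steps:
      - checkout
      - run: npm install
      - run: npm run lint
      # Save the installed packages for downstream jobs.
      - persist_to_workspace:
          root: .
          paths:
            - node_modules
  deploy:
    machine: true
    steps:
      - checkout
      # Reuse the node_modules produced by the build job.
      - attach_workspace:
          at: .
      - run: npm run build
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build

One caveat: since the two jobs use different executors (a Docker image vs. a machine VM), packages with native bindings may need a rebuild after attaching the workspace.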
