I am trying to deploy a Python Flask application on Cloud Foundry, but it fails.
It shows the following output:
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
Here is what my travis.yml looks like:
stages:
  - test
  - deploy
language: python
python:
  - '3.6'
env:
  - PORT=8080
cache: pip
script: python hello.py &
jobs:
  include:
    - stage: test
      install:
        - pip install -r requirements.txt
        - pip install -r tests/requirements_test.txt
      script:
        - python hello.py &
        - python tests/test.py
    - stage: deploy
      deploy:
        provider: cloudfoundry
        username: vaibhavgupta0702@gmail.com
        password:
          secure: myencryptedpassword
        api: https://api.run.pivotal.io
        organization: Hello_Flask
        space: development
        on:
          repo: vaibhavgupta0702/flask_helloWorld
Here is what my manifest.yml file looks like:
---
applications:
- name: hello
  memory: 128M
  buildpacks:
    - https://github.com/vaibhavgupta0702/flask_helloWorld.git
  command: python hello.py &
  timeout: 60
  env:
    PORT: 8080
I do not understand why this error is occurring. Any help would be highly appreciated.
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
This means exactly what it says. The domain cfapps.io is a shared domain which can be used by many people on the platform. When you see this error, it is telling you that someone (possibly you, in another space) has already pushed an app which is using that route.
There are a couple of possibilities here:
Routes are scoped to a space. If you have multiple spaces, it's possible that the route in question is being used by an app in one of your other spaces. What you can do is run cf routes --orglevel, which will list all the routes in all the spaces under your organization. If you see the route hello listed under one of your spaces, simply run cf delete-route cfapps.io --hostname hello in the space where the route exists. That will delete it. Then deploy again (see the command sketch after this list).
Someone else is using the route. This means it would be in another org & space where you can't see it being used. In this case, there's not much you can do. You just need to pick another route or use a custom, private domain (note that custom, private domains require you to register a domain name & configure DNS as described here).
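For the first possibility, a quick sketch of that cleanup (the space name here is just an illustration):

cf routes --orglevel                        # list routes across every space in your org
cf target -s other-space                    # switch to the space that owns the route
cf delete-route cfapps.io --hostname hello  # free up hello.cfapps.io

Then switch back to your own space and deploy again.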
You can pick another route in a couple of ways.
Use a random route. This works OK for testing, but not for anything where you want a consistent address. To use, just add random-route: true to your manifest (see the sketch after the routes example below).
Change your app name. By default, the route assigned to your app will be <app-name>.<default-domain>. Thus you get hello.cfapps.io because hello is your app name and cfapps.io is the default domain on PWS. If you change your app name to something unique, that'll result in a unique route that no one else is using.
Specifically define one or more routes. You can do this in your manifest.yml file. You need to add a routes: block and then add one or more routes.
Example:
---
...
routes:
- route: route1.example.com
- route: route2.example.com
- route: route3.example.com
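And for option 1, a minimal manifest sketch (the rest of the application entry stays as it is):

---
applications:
- name: hello
  random-route: true   # Cloud Foundry generates a unique route instead of hello.cfapps.io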
I have an issue with a redirect middleware in Traefik v2.
We want to add a trailing slash to a sub-location and then remove the path matched
by a PathPrefix rule, so the Docker service (MkDocs) receives correct paths.
We defined the rule in dynamic_conf.toml for Traefik as a general middleware:
[...]
[http.middlewares.add-trailing-slash.redirectregex]
regex= "(https?://[^/]+/[a-z0-9_]+)$$"
replacement= "$${1}/"
permanent = true
[...]
At the moment this is our label-file included with docker-run:
traefik.enable=true
traefik.http.routers.dockerservice.entryPoints=websecure
traefik.http.routers.dockerservice.rule=PathPrefix(`/dockerservice`)
traefik.http.routers.dockerservice.tls=true
traefik.http.middlewares.dockerservice-strip.stripprefix.prefixes=/dockerservice
traefik.http.routers.dockerservice.middlewares=add-trailing-slash@file,doc-strip
At https://regex101.com/ the rule seems to work fine, e.g. for https://domain.tld/dockerservice
If the service is up and we navigate to https://domain.tld/dockerservice
it redirects to https://domain.tld/${1}/
The variable is not expanded. Instead we get a 404 Not Found error (as expected, because a service route with this name does not exist in our Traefik setup).
So the trailing slash is added as desired, but the captured dockerservice path is not expanded.
We've also tried this as a @docker rule in the label file for the docker run command, but the "error" remains.
We also tried the following, which we found on the web first (as @file in dynamic_conf or @docker as a label file for docker run):
traefik.http.middlewares.add-trailing-slash.chain.middlewares=strip-prefix-1,strip-prefix-2
traefik.http.middlewares.strip-prefix-1.redirectregex.regex=^(https?://[^/]+/[a-z0-9_]+)$$
traefik.http.middlewares.strip-prefix-1.redirectregex.replacement=$${1}/
traefik.http.middlewares.strip-prefix-1.redirectregex.permanent=true
traefik.http.middlewares.strip-prefix-2.stripprefixregex.regex=/[a-z0-9_]+
We were also trying ${0}, and multiple other attempts were made using double quotes, single quotes, or extra $ signs.
Our toolchain is as follows:
pushing into the git repo on the master branch
gitlab-runner executes a .sh file with docker build and docker run commands
label-file is provided in the git-repo
We would like to have a generic redirect for all services which have this middleware added: add a trailing slash if only one path element is present and the trailing slash is missing.
So:
https://domain.tld/dockerservice should redirect to https://domain.tld/dockerservice/
A request like https://domain.tld/dockerservice/page should not be changed, because after the strip only /page is needed in the MkDocs container.
At this point we have tried a lot, and we don't know why Traefik is not expanding the variable.
Does anyone know what we are doing wrong?
Best wishes
Exa.Byte
I've finally found a solution which suits our purpose well:
I used a single $ sign for the replacement, in conjunction with the doubled $$ in the regex option.
Added in dynamic.toml for Traefik itself:
[http.middlewares.add-trailing-slash.redirectRegex]
regex= "(https?://[^/]+/[a-z0-9_]+)$$"
replacement= "${1}/"
permanent = true
Best regards,
exa.byte
Somehow I feel this should be very simple, and I cannot seem to find a way to do it with the CircleCI config.
I have a simple task—take one value and map it to another based on a predefined map.
Something like this:
buckets:
  staging:
    dashboard: staging-dashboard.example.com
    referrer: referrer-stage.example.com
YAML allows me to define this without problems. My issue is I cannot figure out how to use it in CircleCI config:
deploy:
  executor: gcp/google
  parameters: *build_parameters
  steps:
    - attach_workspace:
        root: .
    - gcp/install
    - configure_google_sdk
    - gcp/initialize:
        gcloud-service-key: GCLOUD_SERVICE_KEY_DECODED
    - get_bucket: <<< HOW_TO_DO_THIS >>>
        environment: <<parameters.environment>>
        dashboard_type: <<parameters.dashboard_type>>
    - gcp-storage/upload:
        source_path: dist/apps/<<parameters.dashboard_type>>-admin/**
        destination_bucket: $BUCKET_NAME
My only solution so far has been to write a custom bash script which would do the mapping in code and set the $BUCKET_NAME environment variable. It seems like massive overkill for such a simple thing, and the last thing I want is to store a mapping in some cryptic bash script.
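For reference, a minimal sketch of that bash workaround, using the map from the top of the question and CircleCI's $BASH_ENV file to pass the variable on to later steps (the step itself is purely illustrative):

- run:
    name: Resolve bucket name
    command: |
      case "<<parameters.dashboard_type>>" in
        dashboard) BUCKET_NAME="staging-dashboard.example.com" ;;
        referrer)  BUCKET_NAME="referrer-stage.example.com" ;;
      esac
      # export for the following steps in this job
      echo "export BUCKET_NAME=${BUCKET_NAME}" >> "$BASH_ENV"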
Any better ideas?
With GitHub Actions I can read from JSON files directly. Maybe something like this exists for CircleCI?
I have a circle config which includes the following custom command:
remove-circle-ip:
  description: "remove current Circle CI box IP from inbound security group rules for DB"
  steps:
    - aws-white-list-circleci-ip/remove:
        tag-key: circleci
        tag-value: whitelistmeplease
        port: 5432
which I use in my job as follows:
jobs:
  test:
    docker:
      - image: nikolaik/python-nodejs:python3.8-nodejs12
        environment:
          AWS_DEFAULT_REGION: us-east-2
    steps:
      - setup
      - install-python-deps
      - add-circle-ip
      - run:
          name: run tests
          command: |
            poetry run coverage run --source='.' manage.py test
      - run:
          name: remove circle IP
          command: remove-circle-ip
          when: always
I'd like the step for remove circle IP to run even if the tests which run before it fail. I can't seem to figure out the syntax for this. Previously, I had just used - remove-circle-ip to run the command rather than putting a run block, i.e.:
jobs:
  test:
    docker:
      ...
    steps:
      - setup
      - ...
      - add-circle-ip
      - ...
      - remove-circle-ip
But now, when switching to calling my command as part of a run block, it fails with "remove-circle-ip: command not found"
So how can I make this command always run even if steps before fail?
I'm fairly new to CircleCI, so there may be a better way to do this, or maybe this shouldn't be done at all; however, something similar was done (before I joined) on a project I'm working on. It was achieved by making every step report success, whether it actually succeeded or failed, which allows the command at the end to always run. The commands are all terminal commands, so they just have || true at the end. I'm not sure how you would achieve that with a more complex command or with a built-in command.
In our case the steps that can fail are optional, and we don't care whether they actually fail or not. However, if you want to report the failure, I think you should be able to store the failure from a previous step somewhere and add a final step that reports it.
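Applied to the job from the question, that workaround might look roughly like this (a sketch, not a definitive fix):

steps:
  - setup
  - install-python-deps
  - add-circle-ip
  - run:
      name: run tests
      command: |
        # '|| true' makes this step report success even if the tests fail,
        # so the cleanup step below still runs
        poetry run coverage run --source='.' manage.py test || true
  - remove-circle-ip

The trade-off, as noted above, is that the job no longer goes red when the tests themselves fail.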
I created my first pipeline yesterday and I wanted to replace a placeholder in my build.gradle file with the CIRCLE_BUILD_NUM environment variable. The only method I could find was writing my own sed command and executing the regex in a run statement. This worked fine to get up and running, since there was only one variable to replace; however, this method obviously won't scale down the road. Is there a CircleCI feature/orb or other method to do a more comprehensive placeholder/env var swap throughout my project?
- run:
    name: Increment build id
    command: sed "s/_buildNum/${CIRCLE_BUILD_NUM}/g" -i build.gradle
EDIT
I'm looking for a utility/tool/orb/CircleCI best practice similar to what they have in Azure DevOps (Jenkins has a similar feature as well): simply replace all placeholders in specified files with environment variables matching the same name.
https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens
There is the envtpl tool, with a myriad of implementations in various languages.
It allows interpolating variables in templates with values set in environment variables.
The command described below installs an implementation in Rust.
commands:
  replace-vars-from-env:
    description: Replace variables in file from environment variables.
    parameters:
      filename:
        type: string
    steps:
      - run:
          name: Replace variables in build.gradle file
          command: |
            if ! [ -x /usr/local/bin/envtpl ]; then
              curl -L https://github.com/niquola/envtpl/releases/download/0.0.3/envtpl.linux > /usr/local/bin/envtpl
              chmod +x /usr/local/bin/envtpl
            fi
            mv <<parameters.filename>> <<parameters.filename>>.tpl
            cat <<parameters.filename>>.tpl | envtpl > <<parameters.filename>>
            rm <<parameters.filename>>.tpl
and use that in other commands or as a part of your jobs. For example,
executors:
  linux:
    machine:
      image: ubuntu-1604:201903-01

jobs:
  build:
    executor: linux
    steps:
      - replace-vars-from-env:
          filename: build.gradle
You could use envsubst which provides that basically out of the box.
Depending on your primary container you can install envsubst on top of alpine/your distro, or use some image that has that already, like datasailors/envsubst.
In that case, you would just need a run step configured like:
- run:
    name: Increment build id
    command: envsubst < build.gradle.template > build.gradle
And in your template file you can have ${CIRCLE_BUILD_NUM}, as well as many other variables, directly.
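For instance, with a hypothetical version line in the template, the substitution would look like this (the property name is purely illustrative):

# build.gradle.template contains:  version = "1.0.${CIRCLE_BUILD_NUM}"
envsubst < build.gradle.template > build.gradle
# build.gradle now contains:       version = "1.0.<actual build number>"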
I'm aware of the variable substitution available, where I could use a single .env at the root of the project and be done with it, but in this case I'm adapting an existing project where the existing .env file locations are expected, and I would like to avoid having variable entries in multiple files!
See the documentation for more info; all the code is available as WIP on the docker-support branch of the repo, but I'll succinctly describe the project and the issue below:
Project structure
|- root
|  |- .env                      # mongo and mongo-express vars (not on git!)
|  |- docker-compose.yaml       # build and ups a staging env
|  |- docker-compose.prod.yaml  # future wip
|  |- api                       # the saas-api service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # api relative vars (not on git!)
|  |- app                       # the saas-app service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # app relative vars (not on git!)
Or see the whole thing here. It works great for the moment, by the way, but there's one problem with saas-app when building an image for staging/production that I could identify so far.
Issue
At build time Next.js builds a static version of the pages, using webpack to do its process.env substitution, so it requires the actual eventual runtime vars to be included at the docker build stage. That way Next.js doesn't need to rebuild again at runtime, and I can also safely spawn multiple instances when traffic requires it!
I'm aware that if the same vars are not sent at runtime it will have to rebuild again, defeating the point of this exercise, but that's precisely what I'm trying to prevent here, so that if the wrong values are sent it's on us and not on the project!
And I also need to consider Next.js BUILD ID management, but that's for another time/question.
Attempts
I've been testing including ARG and ENV declarations for each of the variables expected by the app in its Dockerfile, e.g.:
ARG GA_TRACKING_ID=
ENV GA_TRACKING_ID ${GA_TRACKING_ID}
This works as expected; however, it forces me to manually declare them in the docker-compose.yml file, which is not ideal:
saas-app:
  build:
    context: app
    args:
      GA_TRACKING_ID: UA-xXxXXXX-X
I cannot use variable substitution here because my root .env does not include this var (it's in ./app/.env), and I also tested leaving the value empty, but it does not pick it up from the env_file or environment definitions, which I believe is as expected.
I've pastebinned a full output of docker-compose config for the existing version in the repository:
Ideally, I'd like:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
To become:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
      BUCKET_FOR_POSTS: xxxxxx
      BUCKET_FOR_TEAM_AVATARS: xxxxxx
      GA_TRACKING_ID: ''
      LAMBDA_API_ENDPOINT: xxxxxxapi
      NODE_ENV: development
      STRIPEPUBLISHABLEKEY: pk_test_xxxxxxxxxxxxxxx
      URL_API: http://api.saas.localhost:8000
      URL_APP: http://app.saas.localhost:3000
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
Questions
How would I be able to achieve this, if possible, but:
Without merging the existing .env files into a single root, or having to duplicate vars on multiple files.
Without manually declaring the values in the compose file, or having to pass them on the command line, e.g. docker-compose build --build-arg GA_TRACKING_ID=UA-xXxXXXX-X?
Without having to COPY each .env file during the build stage, because it doesn't feel right and/or secure?
Maybe an args_file option under the compose build options seems to me to be a valid feature request for the Compose team, would you also say so?
Or perhaps a root option on the compose file where you could set more than one .env file for variable substitution?
Or perhaps another solution i'm not seeing? Any ideas?
I wouldn't mind sending each .env file as a config or secret; it's a cleaner solution than splitting the compose files. Is anyone running such an example in production?
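For reference, a rough sketch of what that secret idea could look like in the compose file, assuming file-based secrets and the .env paths from the tree above (untested):

secrets:
  app_env:
    file: ./app/.env
services:
  saas-app:
    secrets:
      - app_env   # mounted at /run/secrets/app_env inside the container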
Rather than trying to pass around and merge values in multiple .env files, would you consider making one master .env and having the API and APP services inherit the same root .env?
I've managed to achieve a compromise that does not affect any of the existing development workflows, nor does it allow the app to build without env variables (a requirement that will be more crucial for production builds).
I've basically decided to reuse Docker Compose's built-in ability to read the root .env file and use those values for variable substitution in the compose file. Here's an example:
# compose
COMPOSE_TAG_NAME=stage
# common to api and app (build and run)
LOG_LEVEL=notice
NODE_ENV=development
URL_APP=http://app.saas.localhost:3000
URL_API=http://api.saas.localhost:8000
API_PORT=8000
APP_PORT=3000
# api (run)
MONGO_URL=mongodb://saas:secret@saas-mongo:27017/saas
SESSION_NAME=saas.localhost.sid
SESSION_SECRET=3NvS3Cr3t!
COOKIE_DOMAIN=.saas.localhost
GOOGLE_CLIENTID=
GOOGLE_CLIENTSECRET=
AMAZON_ACCESSKEYID=
AMAZON_SECRETACCESSKEY=
EMAIL_SUPPORT_FROM_ADDRESS=
MAILCHIMP_API_KEY=
MAILCHIMP_REGION=
MAILCHIMP_SAAS_ALL_LIST_ID=
STRIPE_TEST_SECRETKEY=
STRIPE_LIVE_SECRETKEY=
STRIPE_TEST_PUBLISHABLEKEY=
STRIPE_LIVE_PUBLISHABLEKEY=
STRIPE_TEST_PLANID=
STRIPE_LIVE_PLANID=
STRIPE_LIVE_ENDPOINTSECRET=
# app (build and run)
STRIPEPUBLISHABLEKEY=
BUCKET_FOR_POSTS=
BUCKET_FOR_TEAM_AVATARS=
LAMBDA_API_ENDPOINT=
GA_TRACKING_ID=
See the updated docker-compose.yml. I've also made use of extension fields to make sure only the correct and valid vars are sent across on build and run.
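As a rough illustration of the approach (service and variable names are the ones above, not the actual file), the compose file can now pick those values up directly from the root .env via substitution:

services:
  saas-app:
    build:
      context: app
      args:
        GA_TRACKING_ID: ${GA_TRACKING_ID}
        STRIPEPUBLISHABLEKEY: ${STRIPEPUBLISHABLEKEY}
        BUCKET_FOR_POSTS: ${BUCKET_FOR_POSTS}
    environment:
      NODE_ENV: ${NODE_ENV}
      URL_API: ${URL_API}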
It breaks rule 1 from the question, but I feel it's a good enough compromise, because it no longer relies on the other .env files, which would potentially hold development keys most of the time anyway!
Unfortunately we will need to maintain the compose file if the vars change in the future, and the same .env file has to be used for a production build, but since that will probably be done externally on some CI/CD, that does not worry me much.
I'm posting this but not fully closing the question; if anyone else could chip in with a better idea, it would be greatly appreciated.