Serverless - Service configuration is expected to be placed in a root of a service (working directory)

I have this warning in a GitHub Action:
Serverless: Deprecation warning: Service configuration is expected to be placed in a root of a service (working directory). All paths, function handlers in a configuration are resolved against service directory.
Starting from next major, Serverless will no longer permit configurations nested in sub directories.
Does it mean I have to put serverless.yml (the service configuration) in the working directory?
If yes, which is the working directory?
.github/workflows/deploy.yml
service: myservice
jobs:
  deploy:
    steps:
      # other steps here #
      - name: Serverless
        uses: serverless/github-action@master
        with:
          args: deploy --config ./src/MyStuff/serverless.yml
I store the serverless.yml in that path because it is related to MyStuff.
I want to use multiple serverless.yml files.
For AnotherStuff I will create src/AnotherStuff/serverless.yml.
So, what does the warning mean, and what is the right way to do this?
[edit 21/02/2022]
I'm using the following workaround.
In GitHub Actions I have this step in my build job:
- name: Serverless preparation
  run: |
    # --config wants the serverless files in the root, so I move them there
    echo move configuration file to the root folder
    mv ./serverless/serverless.fsharp.yml ./serverless.fsharp.yml
Essentially, they want the file in the root folder... I put the file in the root folder.
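An alternative sketch, in case you'd rather not move files around: skip --config and run the Serverless CLI from the service directory itself with a plain run step, so serverless.yml really is in the working directory. This is untested against serverless/github-action; it assumes Node.js is already set up in the job, that the service lives in ./src/MyStuff, and the secret names are placeholders:
- name: Deploy MyStuff service
  working-directory: ./src/MyStuff
  run: npx serverless deploy
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
The same idea would apply to src/AnotherStuff/serverless.yml with its own step.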

Related

circleci python -t flag when running tests does not work

I have this run step in my circle.yaml file, with no checkout path or working_directory set:
- run:
    name: Running dataloader tests
    command: venv/bin/python3 -m unittest discover -t dataloader tests
The problem with this is that the working directory implied by the -t flag does not get set. I get ModuleNotFoundError when the tests try to find an assertions folder inside the dataloader directory.
My tree:
├── dataloader
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── __pycache__
│   ├── assertions
But this works:
version: 2
defaults: &defaults
  docker:
    - image: circleci/python:3.6
jobs:
  dataloader_tests:
    working_directory: ~/dsys-2uid/dataloader
    steps:
      - checkout:
          path: ~/dsys-2uid
      ...
      - run:
          name: Running dataloader tests
          command: venv/bin/python3 -m unittest discover -t ~/app/dataloader tests
Any idea as to what might be going on?
Why doesn't the first one work with just the -t flag?
What do working_directory and checkout with a path actually do? I don't even know why my solution works.
The exact path to the tests folder from the top has to be specified for discovery to work, for example: python -m unittest discover src/main/python/tests. That must be why it's working in the second case.
It's most likely a bug in unittest discovery: discovery works when you explicitly specify a namespace package as the target, but it does not recurse into any namespace packages inside that namespace package. So when you simply run python3 -m unittest discover it doesn't go under all namespace packages (basically folders) in the cwd.
Some PRs are underway (for example issue35617) to fix this, but they are yet to be released.
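A minimal sketch of what being explicit looks like in the original job, assuming the checkout lands in the default working directory and that the tests live in dataloader/tests (both assumptions based on the tree above):
- run:
    name: Running dataloader tests
    # -s is the directory discovery starts in, -t the top-level directory that
    # imports are resolved against; both are spelled out instead of relying on defaults
    command: venv/bin/python3 -m unittest discover -s dataloader/tests -t dataloader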
checkout = Special step used to check out source code to the configured path (defaults to the working_directory). The reason this is a special step is because it is more of a helper function designed to make checking out code easy for you. If you require doing git over HTTPS you should not use this step as it configures git to checkout over ssh.
working_directory = In which directory to run the steps. Default: ~/project (where project is a literal string, not the name of your specific project). Processes run during the job can use the $CIRCLE_WORKING_DIRECTORY environment variable to refer to this directory. Note: Paths written in your YAML configuration file will not be expanded; if your store_test_results.path is $CIRCLE_WORKING_DIRECTORY/tests, then CircleCI will attempt to store the test subdirectory of the directory literally named $CIRCLE_WORKING_DIRECTORY, dollar sign $ and all.

Docker compose build time args from file

I'm aware of the variable substitution that is available, where I could use a .env at the root of the project and be done with it, but in this case I'm adapting an existing project where the existing .env file locations are expected, and I would like to avoid having variable entries in multiple files!
See the documentation for more info; all the code is available as WIP on the docker-support branch of the repo, but I'll succinctly describe the project and the issue below:
Project structure
|- root
|  |- .env                      # mongo and mongo-express vars (not on git!)
|  |- docker-compose.yaml       # build and ups a staging env
|  |- docker-compose.prod.yaml  # future wip
|  |- api                       # the saas-api service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # api related vars (not on git!)
|  |- app                       # the saas-app service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # app related vars (not on git!)
Or see the whole thing here; it works great for the moment, by the way, but there's one problem with saas-app when building an image for staging/production that I could identify so far.
Issue
At build time, Next.js builds a static version of the pages, using webpack to do its thing about process.env substitution, so it requires the actual eventual runtime vars to be included at the docker build stage. That way Next.js doesn't need to rebuild again at runtime, and I can safely spawn multiple instances when traffic requires it!
I'm aware that if the same vars are not sent at runtime it will have to rebuild again, defeating the point of this exercise, but that's precisely what I'm trying to prevent here, so that if the wrong values are sent it's on us and not the project!
And I also need to consider Next.js BUILD ID management, but that's for another time/question.
Attempts
I've been testing with including the ARG and ENV declarations for each of the variables expected by the app in its Dockerfile, e.g.:
ARG GA_TRACKING_ID=
ENV GA_TRACKING_ID ${GA_TRACKING_ID}
This works as expected; however, it forces me to manually declare them in the docker-compose.yml file, which is not ideal:
saas-app:
  build:
    context: app
    args:
      GA_TRACKING_ID: UA-xXxXXXX-X
I cannot use variable substitution here because my root .env does not include this var; it's in ./app/.env. I also tested leaving the value empty, but then it is not picked up from the env_file or environment definitions, which I believe is as expected.
I've pastebinned a full output of docker-compose config with the existing version on the repository:
Ideally, I'd like:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
To become:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
      BUCKET_FOR_POSTS: xxxxxx
      BUCKET_FOR_TEAM_AVATARS: xxxxxx
      GA_TRACKING_ID: ''
      LAMBDA_API_ENDPOINT: xxxxxxapi
      STRIPEPUBLISHABLEKEY: pk_test_xxxxxxxxxxxxxxx
      URL_API: http://api.saas.localhost:8000
      URL_APP: http://app.saas.localhost:3000
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
Questions
How would I be able to achieve this, if possible:
Without merging the existing .env files into a single root one, or having to duplicate vars across multiple files?
Without manually declaring the values in the compose file, or having to pass them on the command line, e.g. docker-compose build --build-arg GA_TRACKING_ID=UA-xXxXXXX-X?
Without having to COPY each .env file during the build stage, because it doesn't feel right and/or secure?
Maybe an args_file option on the compose build section would be a valid feature request for the compose team; would you also say so?
Or perhaps a root option on the compose file where you could set more than one .env file for variable substitution?
Or perhaps another solution I'm not seeing? Any ideas?
I wouldn't mind sending each .env file as a config or secret; it's a cleaner solution than splitting the compose files. Is anyone running such an example in production?
Rather than trying to pass around and merge values from multiple .env files, would you consider making one master .env and having the API and APP services inherit from the same root .env?
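If you went that route, a minimal sketch could look like the following; the service names are taken from the question, everything else is assumed, and note that env_file only covers runtime variables, not build args:
services:
  saas-api:
    build:
      context: api
    env_file:
      - .env
  saas-app:
    build:
      context: app
    env_file:
      - .env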
I've managed to achieve a compromise that does not affect any of the existing development workflows, nor does it allow app to build without env variables (a requirement that will be even more crucial for production builds).
I've basically decided to reuse Docker Compose's built-in ability to read a root .env file and to use those values for variable substitution in the compose file. Here's an example:
# compose
COMPOSE_TAG_NAME=stage
# common to api and app (build and run)
LOG_LEVEL=notice
NODE_ENV=development
URL_APP=http://app.saas.localhost:3000
URL_API=http://api.saas.localhost:8000
API_PORT=8000
APP_PORT=3000
# api (run)
MONGO_URL=mongodb://saas:secret@saas-mongo:27017/saas
SESSION_NAME=saas.localhost.sid
SESSION_SECRET=3NvS3Cr3t!
COOKIE_DOMAIN=.saas.localhost
GOOGLE_CLIENTID=
GOOGLE_CLIENTSECRET=
AMAZON_ACCESSKEYID=
AMAZON_SECRETACCESSKEY=
EMAIL_SUPPORT_FROM_ADDRESS=
MAILCHIMP_API_KEY=
MAILCHIMP_REGION=
MAILCHIMP_SAAS_ALL_LIST_ID=
STRIPE_TEST_SECRETKEY=
STRIPE_LIVE_SECRETKEY=
STRIPE_TEST_PUBLISHABLEKEY=
STRIPE_LIVE_PUBLISHABLEKEY=
STRIPE_TEST_PLANID=
STRIPE_LIVE_PLANID=
STRIPE_LIVE_ENDPOINTSECRET=
# app (build and run)
STRIPEPUBLISHABLEKEY=
BUCKET_FOR_POSTS=
BUCKET_FOR_TEAM_AVATARS=
LAMBDA_API_ENDPOINT=
GA_TRACKING_ID=
See the updated docker-compose.yml. I've also made use of extension fields to make sure only the correct and valid vars are sent across on build and run.
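For anyone reading along, the shape of that approach is roughly the following trimmed sketch (not the actual file; the anchor name and the selection of vars are assumptions, and extension fields need compose file format 2.1/3.4 or newer):
# vars are declared once in an extension field and merged into the service with a
# YAML anchor; the values come from the root .env via compose variable substitution
x-app-build-args: &app-build-args
  NODE_ENV: ${NODE_ENV}
  URL_API: ${URL_API}
  URL_APP: ${URL_APP}
  GA_TRACKING_ID: ${GA_TRACKING_ID}
  STRIPEPUBLISHABLEKEY: ${STRIPEPUBLISHABLEKEY}
services:
  saas-app:
    build:
      context: app
      args:
        <<: *app-build-args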
It breaks rule 1 from the question, but I feel it's a good enough compromise, because it no longer relies on the other .env files, which would potentially hold development keys most of the time anyway!
Unfortunately we will need to maintain the compose file if the vars change in the future, and the same .env file has to be used for a production build, but since that will probably be done externally on some CI/CD, that does not worry me much.
I'm posting this but not fully closing the question; if anyone else could chip in with a better idea, it would be greatly appreciated.

DreamFactory how to disable wrapper "resource" in docker container

I'm using the DreamFactory REST API in a Docker container and I need to disable the "resource" wrapper in the payload. How can I achieve this?
I have replaced the following in all of these four files:
opt/bitnami/dreamfactory/.env-dist
opt/bitnami/dreamfactory/vendor/dreamfactory/df-core/config/df.php
opt/bitnami/dreamfactory/installer.sh
bitnami/dreamfactory/.env
DF_ALWAYS_WRAP_RESOURCES=true
with:
DF_ALWAYS_WRAP_RESOURCES=false
but this doesn't fix my problem.
The change you describe is indeed the correct one, as found in the DreamFactory wiki. Therefore I suspect the configuration has been cached. Navigate to your DreamFactory project's root directory and run this command:
$ php artisan config:clear
This will wipe out any cached configuration settings and force DreamFactory to read the .env file anew. Also, keep in mind you only need to change the .env file (or manage your configuration variables in your server environment). Those other files won't play any role in configuration changes.
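If the container is managed with docker-compose, one possibility (an untested sketch; it assumes DreamFactory also honours the variable when set in the container environment rather than only in the .env file, and the service name and image tag are assumptions) is to set it there:
services:
  dreamfactory:
    image: bitnami/dreamfactory:latest
    environment:
      - DF_ALWAYS_WRAP_RESOURCES=false
Then run docker-compose exec dreamfactory php artisan config:clear (or restart the container) so the cached configuration is rebuilt.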

How to copy a file or jar file that has been built by Jenkins to a different host server

I have a Jenkins job where I am building a jar file. After the build is done I need to copy that jar file to a different server and deploy it there.
I am trying this YAML file (an Ansible playbook) to achieve the same, but it is looking for the file on the remote server rather than on the Jenkins server.
---
# ansible_ssh_private_key_file: "{{inventory_dir}}/private_key"
- hosts: host
  remote_user: xuser
  tasks:
    - service: name=nginx state=started
      become: yes
      become_method: sudo
    - name: test a shell script
      command: sh /home/user/test.sh
    - name: copy files
      synchronize:
        src: /var/jenkins_home/hadoop_id_rsa
        dest: /home/user/
Could you please suggest whether there is any other way, or what the approach could be, to copy a build file to the server and deploy it using Jenkins?
Thanks.
As per my knowledge you can use the Publish Over SSH plugin in Jenkins. Actually I am not clear about your problem, but I hope this can help you. Plugin details: https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin. If it won't help you, please comment and be more specific (a screenshot if possible).
Use a remote SSH script in the build step; no plugin is required:
scp -P 22 Desktop/url.txt user@192.168.1.50:~/Desktop/url.txt
Set up passwordless authentication; use the link below for help:
https://www.howtogeek.com/66776/how-to-remotely-copy-files-over-ssh-without-entering-your-password/
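If you stay with the Ansible route, keep in mind that copy (and synchronize) read src on the machine the playbook runs from, so running the playbook on the Jenkins box lets you push the freshly built jar. A minimal sketch, where the jar path, host group and destination are assumptions:
- hosts: host
  remote_user: xuser
  tasks:
    - name: copy the built jar to the target server
      copy:
        src: /var/jenkins_home/workspace/myjob/target/myapp.jar   # hypothetical build output path
        dest: /home/user/myapp.jar
        mode: '0644'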

AWS Elastic Beanstalk Post Deploy Script

I'm attempting to add a post-deployment script to my Elastic Beanstalk environment. I've read a few blog posts (see the references below) that describe adding a config file to the .elasticbeanstalk directory. The config file is supposed to create a shell script and copy it to the directory:
/opt/elasticbeanstalk/hooks/appdeploy/post
In my .elasticbeanstalk folder, I have the following files:
config.yml
branch-defaults:
  staging:
    environment: env-name
global:
  application_name: app-name
  default_ec2_keyname: aws-eb
  default_platform: 64bit Amazon Linux 2015.03 v1.3.1 running Ruby 2.2 (Passenger Standalone)
  default_region: us-west-2
  profile: eb-cli
  sc: git
test.config (the config I added to test out how the post deployment script works):
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/01_test.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # This is my test shell script
After I run eb deploy, though, the shell script doesn't get copied to the above directory; my /opt/elasticbeanstalk/hooks/appdeploy/post directory is empty. The .elasticbeanstalk folder is present as expected in the /var/app/current/ directory. I also cannot see anything in the log files that references the new script or an error. I've checked eb-tools.log, eb-activity.log, cron, and grepped all logs for keywords related to those files. Still, no luck.
Thanks in advance for any help/suggestions!
References:
https://forums.aws.amazon.com/thread.jspa?threadID=137136
http://www.dannemanne.com/posts/post-deployment_script_on_elastic_beanstalk_restart_delayed_job
The right place to drop your config file is the .ebextensions directory in your app, not .elasticbeanstalk. You should then see your shell script in the /opt/elasticbeanstalk/hooks/appdeploy/post directory.
As noted in the forum post
"Dropping files directly into the hooks directories is risky as this is not the documented method, is different in some containers and may change in the future"
