Codeception environments config in 2.1.1 (environment matrix)

I am trying to run a test suite using configs from two environments (a feature introduced in 2.1 - http://codeception.com/docs/07-AdvancedUsage#Environments), but when I run bin/codecept suite --env env1,env2 it just runs at full resolution in Chrome, which is the default setting in codeception.yml. Here are the contents of env1 and env2:
env2:
  modules:
    config:
      WebDriver:
        window_size: 320x450
        capabilities: []

env1:
  modules:
    config:
      WebDriver:
        browser: 'firefox'
env1.yml and env2.yml are correctly placed in the _envs folder, and the path to this folder is specified in codeception.yml.
The yml of the suite I am trying to run is:
class_name: AcceptanceTester
modules:
  enabled:
    - \Helper\Acceptance
    - WebDriver
This is codeception.yml:
actor: Tester
paths:
  tests: tests
  log: tests/_output
  data: tests/_data
  helpers: tests/_support
  envs: tests/_envs
settings:
  bootstrap: _bootstrap.php
  colors: true
  memory_limit: 1024M
modules:
  enabled:
    - \Helper\Acceptance
    - WebDriver
  config:
    WebDriver:
      url: 'http://myurl.com/'
      browser: 'chrome'
      host: 127.0.0.1
      port: 4444
      window_size: 1920x1080

You have to run it with separate --env flags; otherwise Codeception combines the settings into a single merged environment:
--env env1 --env env2
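For example, assuming the suite is called acceptance (the suite name here is illustrative):

bin/codecept run acceptance --env env1 --env env2

With a comma-separated list (--env env1,env2) the two configs are merged into one combined environment and the suite runs only once; with separate flags it runs once per environment.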

Related

How to access environment variables and pass to Lambda function using useDotenv: true option in serverless.yml?

I am trying to pass environment variables to my Lambda function in serverless.yml (version 2.32.0), but I am not sure how to do it. The documentation (https://www.serverless.com/framework/docs/environment-variables/) doesn't explain how. Right now I am using the useDotenv: true option and then trying to access the environment variables with ${process.env.ENV1}, but it is not working. Below is my serverless.yml file:
serverless.yml
service: service-name
frameworkVersion: "2.32.0"
useDotenv: true

provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  region: <region>
  stage: dev
  profile: default

functions:
  function-name:
    handler: handler
    timeout: 120
    environment:
      ENV1: ${process.env.ENV1}
      ENV2: ${process.env.ENV2}
      ENV3: ${process.env.ENV3}
I get no errors or warnings when I run sls deploy, but the environment variables are not being uploaded. How can I do this?
I got it working by replacing process.env. with the env: variable source:
serverless.yml:
service: service-name
frameworkVersion: "2.32.0"
useDotenv: true

provider:
  name: aws
  versionFunctions: false
  runtime: nodejs12.x
  region: <region>
  stage: dev
  profile: default

functions:
  function-name:
    handler: handler
    environment:
      ENV1: ${env:ENV1}
      ENV2: ${env:ENV2}
      ENV3: ${env:ENV3}
    timeout: 120
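With useDotenv: true the framework loads a .env file from the project root and exposes its values through the env: variable source used above. A minimal .env for illustration (the values are placeholders):

ENV1=first-value
ENV2=second-value
ENV3=third-value

Because they are declared under functions.<name>.environment, the deployed Lambda then sees them at runtime as ordinary process.env.ENV1, ENV2, and ENV3.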

aws serverless.yml file "A valid option to satisfy the declaration 'opt:stage' could not be found" error

I am getting the warning below when trying to run Serverless.
Serverless Warning --------------------------------------------
A valid option to satisfy the declaration 'opt:stage' could not be found.
Below is my serverless.yml file
# Serverless Config
service: api-service

# Provider
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, 'ap-east-1'}
  stage: ${opt:stage, 'dev'}
  # Environment variables
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${opt:stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${opt:stage}.LAMBDA_ONLINE}

# Constant variables
custom:
  # environment variables used to convert strings to upper case format
  environments:
    myStage: ${opt:stage, self:provider.stage}
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'

# Functions
functions:
  testFunc:
    handler: index.handler
    description: ${opt:stage} API's
    events:
      - http:
          method: any
          path: /{proxy+}
          cors:
            origin: '*'

# Package
package:
  exclude:
    - .env
    - node_modules/aws-sdk/**
    - node_modules/**
In the description of testFunc you're using ${opt:stage}. If you use that directly, you need to pass the --stage flag when you run the deploy command.
What you should use there instead is ${self:provider.stage}, because there the stage is already resolved.
I suggest the implementation below:
provider:
  name: aws
  runtime: nodejs8.10
  region: ${opt:region, self:custom.environments.region.${self:custom.environments.myStage}}
  stage: ${opt:stage, self:custom.environments.myStage}
  # Environment variables
  environment:
    STAGE: ${self:custom.myStage}
    MONGO_DB_URI: ${file(./serverless.env.yml):${self:provider.stage}.MONGO_DB_URI}
    LAMBDA_ONLINE: ${file(./serverless.env.yml):${self:provider.stage}.LAMBDA_ONLINE}

# Constant variables
custom:
  # environment variables used to convert strings to upper case format
  environments:
    # set the default stage if not specified
    myStage: dev
    stages:
      - dev
      - qa
      - staging
      - production
    region:
      dev: 'ap-east-1'
      stage: 'ap-east-1'
      production: 'ap-east-1'
Basically, if the stage and region are not specified on the command line, the defaults are used; otherwise the command-line values take precedence.
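For reference, the ${file(./serverless.env.yml):...} lookups above expect serverless.env.yml to be keyed by stage name; a sketch with placeholder values:

dev:
  MONGO_DB_URI: 'mongodb://example-host:27017/dev-db'
  LAMBDA_ONLINE: 'true'
qa:
  MONGO_DB_URI: 'mongodb://example-host:27017/qa-db'
  LAMBDA_ONLINE: 'true'

Passing the stage explicitly, e.g. sls deploy --stage qa, then satisfies ${opt:stage}; without the flag the defaults above apply.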

web-component-tester cannot bind to port

I have a Docker setup with the following containers:
selenium-hub
selenium-firefox
selenium-chrome
spring boot app
node/java service for wct tests
All these containers are defined via docker-compose.
The node/java service is created as follows (extract from docker-compose):
wct:
  build:
    context: ./app/src/main/webapp
    args:
      ARTIFACTORY: ${DOCKER_REGISTRY}
  image: wct
  container_name: wct
  depends_on:
    - selenium-hub
    - selenium-chrome
    - selenium-firefox
    - webapp
The wct tests are run using:
docker-compose run -d --name wct-run wct npm run test
And wct.conf.js looks like the following:
const seleniumGridAddress = process.env.bamboo_selenium_grid_address || 'http://selenium-hub:4444/wd/hub';
const hostname = process.env.FQDN || 'wct';

module.exports = {
  activeBrowsers: [{
    browserName: "chrome",
    url: seleniumGridAddress
  }, {
    browserName: "firefox",
    url: seleniumGridAddress
  }],
  webserver: {
    hostname: hostname
  },
  plugins: {
    local: false,
    sauce: false,
  }
};
The test run fails with this stack trace:
ERROR: Server failed to start: Error: No available ports. Ports tried: [8081,8000,8001,8003,8031,2000,2001,2020,2109,2222,2310,3000,3001,3030,3210,3333,4000,4001,4040,4321,4502,4503,4567,5000,5001,5050,5432,6000,6001,6060,6666,6543,7000,7070,7774,7777,8765,8777,8888,9000,9001,9080,9090,9876,9877,9999,49221,55001]
at /app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:384:15
at Generator.next (<anonymous>)
at fulfilled (/app/node_modules/polymer-cli/node_modules/polyserve/lib/start_server.js:17:58)
I tried to fix it as described in "polyserve cannot serve the app", but without success.
I also tried setting hostname to wct, as this is the known hostname for the container inside the Docker network, but it shows the same error.
I really do not know what to do next.
Any help is appreciated.
The problem was that the configured hostname was incorrect: WCT could not bind its test server to a hostname that does not resolve inside the container, which is why every port in the list was reported as unavailable.
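A minimal sketch of how the bind address can be made to always resolve inside the container (the os.hostname() fallback is an assumption for illustration, not the exact fix that was applied):

// wct.conf.js - hedged sketch, not the original fix
const os = require('os');

const seleniumGridAddress = process.env.bamboo_selenium_grid_address || 'http://selenium-hub:4444/wd/hub';
// Fall back to the container's own hostname so the web server binds to a name
// that is guaranteed to resolve locally.
const hostname = process.env.FQDN || os.hostname();

module.exports = {
  activeBrowsers: [
    { browserName: 'chrome', url: seleniumGridAddress },
    { browserName: 'firefox', url: seleniumGridAddress }
  ],
  webserver: {
    hostname: hostname
  },
  plugins: {
    local: false,
    sauce: false
  }
};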

Unable to configure my Docker container using intermediate_instructions and pid_one_command in Test Kitchen

The following .kitchen.yml file fails to configure my Docker container with the required tools listed in intermediate_instructions. The pid_one_command setting also does not work, as the container still boots with the bash shell.
Any ideas what is wrong with the file?
driver:
  name: docker
  socket: tcp://localhost:2375
  binary: docker.exe
  chef_version: latest
  privileged: true

provisioner:
  name: chef_zero
  # You may wish to disable always updating cookbooks in CI or other testing environments.
  # For example:
  #   always_update_cookbooks: <%= !ENV['CI'] %>
  always_update_cookbooks: true

verifier:
  name: inspec

platforms:
  - name: ubuntu-16.04
    driver:
      image: ubuntu:16.04
      pid_one_command: /bin/systemd
      intermediate_instructions:
        - RUN /usr/bin/apt-get install -y lsof which initscripts net-tools

suites:
  - name: default
    run_list:
      - recipe[testy::default]
    verifier:
      inspec_tests:
        - test/smoke/default
    attributes:
I think you're trying to use config options for kitchen-dokken with kitchen-docker. The two projects are unrelated.
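Those options do exist in kitchen-dokken, where the driver, transport, and provisioner are all named dokken. A minimal sketch for comparison (the dokken/ubuntu-16.04 image and the package list are illustrative, not a drop-in replacement for the file above):

driver:
  name: dokken
  chef_version: latest
  privileged: true  # running systemd as PID 1 generally needs this

transport:
  name: dokken

provisioner:
  name: dokken

platforms:
  - name: ubuntu-16.04
    driver:
      image: dokken/ubuntu-16.04
      pid_one_command: /bin/systemd
      intermediate_instructions:
        - RUN /usr/bin/apt-get update
        - RUN /usr/bin/apt-get install -y lsof net-tools

suites:
  - name: default
    run_list:
      - recipe[testy::default]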

Ansible - Include environment variables from external YML

I'm attempting to store all my environment variables in a file called variables.yml that looks like so:
---
doo: "external"
Then I have a playbook like so:
---
- hosts: localhost
  tasks:
    - name: "i can totally echo"
      environment:
        include: variables.yml
        ugh: 'internal'
      shell: echo "$doo vs $ugh"
      register: results

    - debug: msg="{{ results.stdout }}"
The result of the echo is ' vs internal'.
How can I change this so that the result is 'external vs internal'? Many thanks!
Assuming the external variable file called variables.ext is structured as follows:
---
EXTERNAL:
  DOO: "external"
then, according to Setting the remote environment and Load variables from files, dynamically within a task, a small test could look like:
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - name: Load environment variables
      include_vars:
        file: variables.ext

    - name: Echo variables
      shell:
        cmd: 'echo "${DOO} vs ${UGH}"'
      environment:
        DOO: "{{ EXTERNAL.DOO }}"
        UGH: "internal"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
resulting in the output:
TASK [Show result] ********
ok: [localhost] =>
msg: external vs internal
