Symfony 4: inject env variables into a service and "Cannot autowire service" error

I'm trying to create a service that receives two parameters defined in the .env file.
Symfony throws the following error:
Cannot autowire service "App\Modules\Sales\Infrastructure\Persistence\Http\HttpSalesRepository":
argument "$credentials" of method "__construct()" is type-hinted "array",
you should configure its value explicitly.
This is my service.yml file:
parameters:

services:
    _defaults:
        autowire: true      # Automatically injects dependencies in your services.
        autoconfigure: true # Automatically registers your services as commands, event subscribers, etc.

    App\:
        resource: '../src/*'
        exclude: '../src/{DependencyInjection,Entity,Migrations,Tests,Kernel.php}'

    App\Controller\:
        resource: '../src/Controller'
        tags: ['controller.service_arguments']

imports:
    - { resource: "../src/Modules/Sales/Infrastructure/DependencyInjection/sales_module.yml" }
This is the code inside sales_module.yml:
services:
    module_sales.http.sales.repository:
        public: false
        class: App\Modules\Sales\Infrastructure\Persistence\Http\HttpSalesRepository
        arguments: ["%env(json:CREDENTIALS)%", "%env(json:TAGS)%"]
And this is the HttpSalesRepository class:
class HttpSalesRepository implements SalesRepository
{
    private $credentials;
    private $tags;

    public function __construct(array $credentials, array $tags)
    {
        $this->credentials = $credentials;
        $this->tags = $tags;
    }

    ...
}
And this is the .env file:
###> symfony/framework-bundle ###
APP_ENV=dev
APP_SECRET=xxxxxx
###< symfony/framework-bundle ###
###> doctrine/doctrine-bundle ###
DATABASE_URL=mysql://xxx:yyy@mysql:3306/db
###< doctrine/doctrine-bundle ###
###> application/variables ###
CREDENTIALS='{"username":"email@test.com","password":"secretpass"}'
TAGS='["tag1","tag2","tag3"]'
###< application/variables ###
Any ideas?

You can name the service arguments so that each constructor parameter is explicitly matched to a value, instead of just passing a positional argument list:
services:
    module_sales.http.sales.repository:
        public: false
        class: App\Modules\Sales\Infrastructure\Persistence\Http\HttpSalesRepository
        arguments:
            $credentials: "%env(json:CREDENTIALS)%"
            $tags: "%env(json:TAGS)%"
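Naming the arguments only overrides those specific constructor parameters; autowiring still resolves everything else, so Symfony no longer has to guess what the $credentials and $tags arrays should contain.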

It's not the best option, but it's a workaround while I find the final solution. If I define the service in config/services.yaml, the variables are passed to the service perfectly:
App\Modules\Sales\Infrastructure\Persistence\Http\HttpSalesRepository:
    public: false
    arguments: ["%env(json:AMAZON_AFFILIATES_CREDENTIALS)%", "%env(json:AMAZON_AFFILIATES_TAGS)%"]
Any other solution would be welcome.
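Another option, if you want to keep the values out of the module file entirely, is Symfony's argument binding. A minimal sketch under _defaults in config/services.yaml (binding is by argument name, so any autowired service asking for $credentials or $tags would receive these values):
services:
    _defaults:
        autowire: true
        autoconfigure: true
        # Bind env values to constructor arguments by name
        bind:
            $credentials: '%env(json:CREDENTIALS)%'
            $tags: '%env(json:TAGS)%'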

Related

How to use custom ingest pipelines with docker autodiscover

I'm having a hard time using custom Elasticsearch ingest pipelines with Filebeat's Docker autodiscovery. I started out with custom processors in my filebeat.yml file, but I would prefer to shift this to custom ingest pipelines I've created.
Firstly, here is my configuration using custom processors, which works to provide custom grok-like processing for my Servarr app Docker containers (identified by applying a label to them in my docker-compose.yml file). The processors copy the 'message' field to 'log.original', then use dissect to extract 'log.level' and 'log.logger' and overwrite 'message'. The final processor is a JavaScript function used to convert log.level to lowercase (overkill perhaps, but humour me).
Filebeat configuration:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true

processors:
  - if:
      equals:
        docker.container.labels.co_elastic_logs/custom_processor: servarr
    then:
      - copy_fields:
          fields:
            - from: message
              to: log.original
          fail_on_error: false
          ignore_missing: true
      - dissect:
          tokenizer: "[%{log.level}] %{log.logger}: %{message}"
          field: message
          target_prefix: ""
          overwrite_keys: true
          ignore_failure: true
      - script:
          lang: javascript
          id: lowercase
          source: >
            function process(event) {
                var level = event.Get("log.level");
                if (level != null) {
                    event.Put("log.level", level.toString().toLowerCase());
                }
            }

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
Excerpt from docker-compose.yml file...
lidarr:
  image: ghcr.io/linuxserver/lidarr:latest
  container_name: lidarr
  labels:
    co.elastic.logs/custom_processor: "servarr"
And an example log line (in json):
{"log":"[Info] DownloadDecisionMaker: Processing 100 releases \n","stream":"stdout","time":"2021-08-07T10:10:49.125702754Z"}
This works well, and achieves my aims of extracting fields, but ideally I'd like to use Elasticsearch's (more powerful) ingest pipelines instead, and live with a cleaner filebeat.yml, so I created a working ingest pipeline "filebeat-7.13.4-servarr-stdout-pipeline" like so (ignore the fact that for now, this only does the grokking):
[
  {
    "grok": {
      "field": "message",
      "patterns": [
        "\\[%{LOGLEVEL:log.level}\\] %{WORD:log.logger}: %{GREEDYDATA:message}"
      ],
      "trace_match": true,
      "ignore_missing": true
    }
  }
]
I tested the pipeline against existing documents (not ones that have had my custom processing applied, I should note). The pipeline worked against all the documents I tested it against in the Kibana interface.
So now I come to shift my Filebeat config to use this pipeline for containers with my custom_processor label. This is the filebeat.yml I came up with, which is apparently valid and works for the most part, but doesn't apply the grokking:
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: true
    reload.period: 60s

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition.equals:
            docker.container.labels.co_elastic_logs/custom_processor: servarr
          config:
            pipeline: filebeat-7.13.4-servarr-stdout-pipeline

output.elasticsearch:
  hosts: 'elasticsearch:9200'
  username: 'elastic'
  password: '*************'

setup.kibana.host: 'kibana:5601'

logging.json: true
logging.metrics.enabled: false
If I use Filebeat's inbuilt modules for my other containers, such as nginx, by applying a label as in the example below, the inbuilt module pipelines are used:
nginx-repo:
  image: nginx:latest
  container_name: nginx-repo
  mem_limit: 2048m
  environment:
    - VIRTUAL_HOST=repo.***.***.***,repo
    - VIRTUAL_PORT=80
    - HTTPS_METHOD=noredirect
  networks:
    - default
    - proxy
  labels:
    co.elastic.logs/module: "nginx"
    co.elastic.logs/fileset.stdout: "access"
    co.elastic.logs/fileset.stderr: "error"
What am I doing wrong here? The logs still end up in Elasticsearch and Kibana, and are processed, but my grok isn't applied, new fields aren't created, and the 'message' field is unchanged.
EDIT: In response to one of the comments linking to a post on the elastic forums, which suggested both the path(s) and the pipeline need to be made explicit, I tried the following filebeat.yml autodiscovery excerpt, which also fails to work (but is apparently valid config):
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      json.keys_under_root: true
      appenders:
        - type: config
          condition:
            equals:
              docker.container.labels.co_elastic_logs/custom_processor: "servarr"
          config:
            - type: docker
              containers:
                ids:
                  - "${data.docker.container.id}"
              stream: all
              paths:
                - /var/lib/docker/containers/${data.docker.container.id}/${data.docker.container.id}-json.log
              pipeline: filebeat-7.13.4-servarr-stdout-pipeline
I tried with the docker.container.labels.co_elastic_logs/custom_processor value both quoted and unquoted. I have the same behaviour where the logs end up in Elasticsearch / Kibana, but they are processed as if they skipped my ingest pipeline.
We're using Kubernetes instead of Docker with Filebeat, but maybe our config can still help you out.
We have autodiscover enabled and have all pod logs sent to a common ingest pipeline, except for logs from any Redis pod. Those use the Redis module and send their logs to Elasticsearch via one of two custom ingest pipelines, depending on whether they're normal Redis logs or slowlog Redis logs. This is configured in the following block:
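A minimal sketch of such a Kubernetes autodiscover template (the label value and pipeline names here are placeholders, not the exact config):
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        # Redis pods (matched by a label) use the redis module,
        # with a custom ingest pipeline per fileset.
        - condition:
            equals:
              kubernetes.labels.app: redis
          config:
            - module: redis
              log:
                input:
                  pipeline: redis-log-pipeline
              slowlog:
                input:
                  pipeline: redis-slowlog-pipeline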
All other detected pod logs get sent into a common ingest pipeline using the following catch-all configuration in the "output" section:
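A sketch with a placeholder pipeline name (the elasticsearch output also supports conditional pipelines with when clauses if finer routing is needed):
output.elasticsearch:
  hosts: ['elasticsearch:9200']
  # Default ingest pipeline for events from this Filebeat
  pipeline: common-logs-pipeline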
Something else that we do is add the name of the ingest pipeline to ingested documents using the "set" processor:
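For example, appended as the last processor in the pipeline definition (the field name here is just a convention):
{
  "set": {
    "field": "ingest_pipeline",
    "value": "common-logs-pipeline"
  }
}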
This has proven to be really helpful when diagnosing whether or not a pipeline was actually executed when viewing an event document in Kibana.

How can I pass variables to Docker when I build a container with an Azure DevOps pipeline

I'm trying to build a container for a PHP project that has a database (DDBB) connection, and I'm looking to set the database settings through environment variables.
I have defined the variables as pipeline variables, and I also set them in the variables section:
variables:
  imageName: 'project'
  repositoryNameDes: 'portalweb-des/project'
  repositoryNamePro: 'portalweb-pro/project'
  connectionECRpredes: 'amazon container registry w-predes'
  connectionECRpro: 'amazon container registry w-pro'
  tName: $(Build.SourceBranchName)_$(Build.SourceVersion)
  ecrRepositoryNameBaseUrl: '513537361685.dkr.ecr.eu-west-1.amazonaws.com'
  dirNameS3: 'project'
  bucketNameDes: 'cluster-drupal-des-deploy-s3'
  deployOnECS: $[or(startsWith(variables['Build.SourceBranch'], 'refs/heads/tags/'), startsWith(variables['Build.SourceBranch'], 'refs/heads/develop'), startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/generardocker'))]
  ddbb_name: $(DDBB_NAME)
But I can't see how to load the variables when I run the build. I'm not sure whether there is an option to set environment variables, whether I have to write a custom build command that receives the variables, or how else to do it.
steps:
  - task: Docker@2
    displayName: Build an image
    inputs:
      repository: $(imageName)
      command: build
      Dockerfile: Dockerfile
      tags: $(tName)
  - task: ECRPushImage@1
    inputs:
      ENV: '$(dev)'
      awsCredentials: '$(connectionECRpredes)'
      regionName: 'eu-west-1'
      imageSource: 'imagename'
      sourceImageName: '$(imageName)'
      sourceImageTag: '$(tName)'
      repositoryName: '$(repositoryNameDes)'
      pushTag: $(tName)
In the most common case, you set the variables and use them within the YAML file. This allows you to track changes to the variable in your version control system. You can also define variables in the pipeline settings UI (see the Classic tab) and reference them in your YAML.
Here's an example that shows how to set two variables, configuration and platform, and use them later in steps. To use a variable in a YAML statement, wrap it in $().
# Set variables once
variables:
  configuration: debug
  platform: x64

steps:
  # Use them once
  - task: MSBuild@1
    inputs:
      solution: solution1.sln
      configuration: $(configuration) # Use the variable
      platform: $(platform)
  # Use them again
  - task: MSBuild@1
    inputs:
      solution: solution2.sln
      configuration: $(configuration) # Use the variable
      platform: $(platform)
You could check the following link for more details:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch
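If the goal is for $(ddbb_name) to reach the image at build time, one approach is to pass it as a Docker build argument. A sketch, assuming the Docker@2 task's arguments input and a matching ARG DDBB_NAME instruction in your Dockerfile:
- task: Docker@2
  displayName: Build an image
  inputs:
    repository: $(imageName)
    command: build
    Dockerfile: Dockerfile
    tags: $(tName)
    # Forward the pipeline variable into the Docker build
    arguments: '--build-arg DDBB_NAME=$(ddbb_name)'
The Dockerfile would then need ARG DDBB_NAME (and an ENV line if the value should also be available at runtime). For anything secret, injecting the value at runtime, for example via the container or task definition, is usually preferable to baking it into the image.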

How to access AWS SAM Mapping in swagger.yaml / openapi.yaml files

I have declared a mapping named StageMap in my sam.yaml file:
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31

Parameters:
  ProjectName:
    Type: String
  SubProjectName:
    Type: String
  Stage:
    Type: String
    AllowedValues:
      - dev
      - test
      - preprod
      - prod

...

Mappings:
  StageMap:
    dev:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
    test:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-test-AuthorizerFunction-UQ1EQ2SP5W6G/invocations
    preprod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-preprod-AuthorizerFunction-UQ1W6EQ2SP5G/invocations
    prod:
      AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-prod-AuthorizerFunction-5STBUB61RR2YJ/invocations
I would like to use this mapping in my swagger.yaml. I have tried the following:
...
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::FindInMap:
      - 'StageMap'
      - Ref: 'Stage'
      - 'AuthorizerArn'
I also tried this solution, but I got the error: Every Mappings attribute must be a String or a List.
Can you please let me know how to access one of the values in the mapping in the swagger.yaml? Thanks!
I found the following in the AWS SAM docs:
You cannot include parameters, pseudo parameters, or intrinsic functions in the Mappings section.
So I changed:
AuthorizerArn: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6/invocations
to:
AuthorizerFunctionName: auth-bk-main-dev-AuthorizerFunction-1RR2YJ5STBUB6
And in the swagger.yaml I used the following:
x-amazon-apigateway-authorizer:
  type: request
  authorizerUri:
    Fn::Sub:
      - arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:${AuthorizerFunctionName}/invocations
      - AuthorizerFunctionName:
          Fn::FindInMap:
            - 'StageMap'
            - Ref: 'Stage'
            - 'AuthorizerFunctionName'
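The list form of Fn::Sub is what makes this work: AuthorizerFunctionName becomes a substitution variable whose value is resolved with Fn::FindInMap at deploy time, which sidesteps the rule that Mappings values themselves cannot contain intrinsic functions.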

Conditional resource in serverless

I would like to add an AWS resource conditionally, based on the presence of an env var. I tried serverless-cloudformation-parameter-setter, but I get a generic error on deployment and I don't see what I need to do to fix it.
I'm trying to deploy a simple Lambda + SQS stack. If an env var is defined, the queue should also be subscribed to the topic denoted by that env var; if the var is not defined, that part should be skipped entirely, leaving just the Lambda and the queue.
This is what I tried:
plugins:
  - serverless-cloudformation-parameter-setter

provider:
  name: aws
  runtime: nodejs8.10
  region: eu-west-1

functions:
  update:
    handler: index.update
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - Queue
              - Arn

custom:
  cf-parameters:
    SourceTopicArn: "${env:UPDATE_SNS_ARN}"

resources:
  Parameters:
    SourceTopicArn:
      Type: string
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: SourceTopicArn
      Properties:
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Ref: Queue
The error I receive is: The CloudFormation template is invalid: Template format error: Unrecognized parameter type: string
If I remove all the parameter stuff, it works fine.
The Type has to be String, not string. See the supported parameter data types section in the docs.
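Beyond the type fix, the template also references a condition named SourceTopicArn that is never declared in a Conditions block. A minimal sketch of what the resources section could look like (the empty-string default and condition name are assumptions, and the Endpoint for an SQS subscription needs the queue ARN rather than a Ref, which returns the queue URL):
resources:
  Parameters:
    SourceTopicArn:
      Type: String
      Default: ""
  Conditions:
    # Create the subscription only when a topic ARN was actually supplied
    HasSourceTopic:
      Fn::Not:
        - Fn::Equals:
            - Ref: SourceTopicArn
            - ""
  Resources:
    Queue:
      Type: "AWS::SQS::Queue"
    Subscription:
      Type: "AWS::SNS::Subscription"
      Condition: HasSourceTopic
      Properties:
        Protocol: sqs
        TopicArn:
          Ref: SourceTopicArn
        Endpoint:
          Fn::GetAtt: [Queue, Arn]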

Set environment variables from external file in serverless.yml

I'm using serverless and serverless-local for local development.
I've got an external file which holds references to environment variables which I retrieve from node.env in my app.
From what I understand, I should be able to set my environment variables like this:
dev:
  AWS_KEY: 'key'
  SECRET: 'secret'
test:
  AWS_KEY: 'test-key'
  SECRET: 'test-secret'
etc:
  ...
and have those environment variables included in my app through the following lines in my serverless.yml:
provider:
  name: aws
  runtime: nodejs4.3
  stage: ${opt:stage, self:custom.default_stage}
  deploymentBucket: serverless-deploy-packages/${opt:stage, self:custom.default_stage}
  environment:
    ${file(./serverless-env.yml):${opt:stage, self:custom.default_stage}}
then on the command line, I call:
serverless offline --stage dev --port 9000
I thought this would include the correct vars in my app, but it isn't working. Is this not how it is supposed to work? Am I doing something wrong here?
From docs:
You can set the contents of an external file into a variable:
file: ${file(./serverless-env.yml)}
And later you can use this new variable to access the file variables.
secret: file.dev.SECRET
Or you can use the file directly:
secret: ${file(./serverless-env.yml):dev.SECRET}
You can also now use remote async values with the serverless framework. See https://serverless.com/blog/serverless-v1.13.0/
This means you can call values from s3 or remote databases etc.
Example:
serverless.yml
service: serverless-async-vars

provider:
  name: aws
  runtime: nodejs6.10

custom:
  secret: ${file(./vars.js):fetchSecret} # JS file running async / promised
vars.js
module.exports.fetchSecret = () => {
  // async code
  return Promise.resolve('SomeSecretKey');
};
This is how you can separate your environments by different stages:
serverless.yml:
custom:
  test:
    project: xxx
  prod:
    project: yyy

provider:
  ...
  stage: ${opt:stage, 'test'}
  project: ${self:custom.${opt:stage, 'test'}.project}
  environment:
    ${file(.env.${opt:stage, 'test'}.yml):}

package:
  exclude:
    - .env.*
.env.test.yml:
VARIABLE1: value1
VARIABLE2: value2
During deploy, pass --stage=prod, or skip it and the test project will be deployed. Then in your JS code you can access the env variables with process.env.VARIABLE1.
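For example, a handler reading one of those variables (the file and handler names are placeholders):
// handler.js -- minimal sketch of reading the injected variables
module.exports.hello = (event, context, callback) => {
  const value = process.env.VARIABLE1; // 'value1' when deployed with --stage=test
  callback(null, { statusCode: 200, body: JSON.stringify({ value }) });
};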
Set Lambda environment variables from a JSON file (using the AWS CLI):
aws lambda update-function-configuration --profile mfa --function-name test-api --cli-input-json file://dev.json
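The JSON file follows the shape of the update-function-configuration request; a minimal sketch (variable names are placeholders):
{
  "FunctionName": "test-api",
  "Environment": {
    "Variables": {
      "VARIABLE1": "value1",
      "VARIABLE2": "value2"
    }
  }
}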
I had this correct, but I was referencing the file incorrectly.
I don't see this in the docs, but passing a file to environment will include the file's YAML contents, and the above structure does work.
