Stack creation gets stuck in CREATE_IN_PROGRESS, specifically on the Service resource; without it, the stack creation completes. This is what the CloudFormation template looks like. I have checked CloudTrail but can't find anything out of the ordinary.
AWSTemplateFormatVersion: '2010-09-09'
Description: Amazon ECS Preview Quickstart Template
Parameters:
SubnetID:
Type: String
SubnetID2:
Type: String
ClusterName:
Description: Name of your Amazon ECS Cluster
Type: String
ConstraintDescription: must be a valid Amazon ECS Cluster.
Default: TestCluster
KeyName:
Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: must be the name of an existing EC2 KeyPair.
InstanceType:
Description: Container Instance type
Type: String
Default: t2.micro
AllowedValues:
- t2.micro
- t2.small
- t2.medium
- m3.medium
- m3.large
- m3.xlarge
- m3.2xlarge
- c3.large
- c3.xlarge
- c3.2xlarge
- c3.4xlarge
- c3.8xlarge
- r3.large
- r3.xlarge
- r3.2xlarge
- r3.4xlarge
- r3.8xlarge
- i2.xlarge
- i2.2xlarge
- i2.4xlarge
- i2.8xlarge
- hi1.4xlarge
- hs1.8xlarge
- cr1.8xlarge
- cc2.8xlarge
ConstraintDescription: must be a valid EC2 instance type.
SSHLocation:
Description: " The IP address range that can be used to SSH to the EC2 instances"
Type: String
MinLength: '9'
MaxLength: '18'
Default: 0.0.0.0/0
AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Mappings:
AWSInstanceType2Arch:
t2.micro:
Arch: HVM64
t2.small:
Arch: HVM64
t2.medium:
Arch: HVM64
m3.medium:
Arch: HVM64
m3.large:
Arch: HVM64
m3.xlarge:
Arch: HVM64
m3.2xlarge:
Arch: HVM64
c3.large:
Arch: HVM64
c3.xlarge:
Arch: HVM64
c3.2xlarge:
Arch: HVM64
c3.4xlarge:
Arch: HVM64
c3.8xlarge:
Arch: HVM64
r3.large:
Arch: HVM64
r3.xlarge:
Arch: HVM64
r3.2xlarge:
Arch: HVM64
r3.4xlarge:
Arch: HVM64
r3.8xlarge:
Arch: HVM64
i2.xlarge:
Arch: HVM64
i2.2xlarge:
Arch: HVM64
i2.4xlarge:
Arch: HVM64
i2.8xlarge:
Arch: HVM64
hi1.4xlarge:
Arch: HVM64
hs1.8xlarge:
Arch: HVM64
cr1.8xlarge:
Arch: HVM64
cc2.8xlarge:
Arch: HVM64
AWSRegionArch2AMI:
us-east-1:
HVM64: ami-34ddbe5c
Resources:
Cluster:
Type: AWS::ECS::Cluster
Properties:
ClusterName: ClusterName
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: deployment-example-log-group
ContainerInstance:
Type: AWS::EC2::Instance
Properties:
IamInstanceProfile:
Ref: ECSIamInstanceProfile
ImageId:
Fn::FindInMap:
- AWSRegionArch2AMI
- Ref: AWS::Region
- Fn::FindInMap:
- AWSInstanceType2Arch
- Ref: InstanceType
- Arch
InstanceType:
Ref: InstanceType
SecurityGroups:
- Ref: ECSQuickstartSecurityGroup
KeyName:
Ref: KeyName
UserData:
Fn::Base64:
Fn::Join:
- ''
- - "#!/bin/bash -xe\n"
- echo ECS_CLUSTER=
- Ref: ClusterName
- " >> /etc/ecs/ecs.config\n"
ECSQuickstartSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Enable SSH access
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: '22'
ToPort: '22'
CidrIp:
Ref: SSHLocation
ECSIamInstanceProfile:
Type: AWS::IAM::InstanceProfile
Properties:
Path: "/"
Roles:
- Ref: ECSQuickstartRole
ECSQuickstartRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- sts:AssumeRole
Path: "/"
Policies:
- PolicyName: ECSQuickstart
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action: ecs:*
Resource: "*"
TaskDefinition:
Type: AWS::ECS::TaskDefinition
Properties:
Family: deployment-example-task
Cpu: 256
Memory: 512
# NetworkMode: awsvpc
TaskRoleArn: !Ref ECSQuickstartRole
ContainerDefinitions:
-
Name: engine
Image: gcr.io/xxxxx
Environment:
- Name: db_instance
Value: "clouform2"
- Name: LOG_LEVEL
Value: 1
- Name: HOST
Value: 0.0.0.0
- Name: HTTP_PORT
Value: 8181
PortMappings:
- ContainerPort: 8181
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-region: !Ref AWS::Region
awslogs-group: !Ref LogGroup
awslogs-stream-prefix: ecs
-
Name: encyou
Image: gcr.io/xxxx3
DependsOn:
- Condition: START
ContainerName: engine
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-region: !Ref AWS::Region
awslogs-group: !Ref LogGroup
awslogs-stream-prefix: ecs
-
Name: packager
Image: gcr.io/xxxxx
DependsOn:
- Condition: START
ContainerName: engine
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-region: !Ref AWS::Region
awslogs-group: !Ref LogGroup
awslogs-stream-prefix: ecs
RequiresCompatibilities:
- EC2
Service:
Type: AWS::ECS::Service
DependsOn: ContainerInstance
Properties:
ServiceName: ServiceName
Cluster: !Ref Cluster
DeploymentConfiguration:
MaximumPercent: 200
MinimumHealthyPercent: 75
DesiredCount: 2
TaskDefinition: !Ref 'TaskDefinition'
LaunchType: EC2
I can see that the Service, although it is the resource that gets stuck, is successfully registered in the cluster, but its tasks are not.
Any idea what I am doing wrong?
It seems you are using the wrong cluster name. Your cluster will literally be called ClusterName, not TestCluster. Consequently, your instance will be trying to register with a non-existent cluster.
This is because instead of:
ClusterName: ClusterName
there should be:
ClusterName: !Ref ClusterName
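With that change the cluster name resolves to the ClusterName parameter (TestCluster by default), which is the same value your container instance writes to /etc/ecs/ecs.config in its UserData. A minimal sketch of the corrected resource:
Cluster:
  Type: AWS::ECS::Cluster
  Properties:
    # Resolves to the ClusterName parameter (default: TestCluster),
    # matching the ECS_CLUSTER value the instance registers with.
    ClusterName: !Ref ClusterName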
Please note that there could be other issues that are not yet apparent. What's more, you are using custom images (gcr.io/xxxxx), which makes it impossible to reproduce them.
Related
I am new to ECS and I am trying to deploy it with CloudFormation.
I put together the following CloudFormation template by looking at the documentation and some examples I found in blogs and articles.
However, for some reason, it got stuck updating one of the resources and eventually timed out.
I am not sure why it gets stuck and fails.
Can someone spot the mistake I am making?
For now, my goal is to deploy and see the app on the internet; I am not looking for advanced configuration.
I also pass the ECR URL to this template when deploying with the AWS CLI.
Thank you in advance.
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: >
ECS Service
Parameters:
Environment:
Type: String
Default: alpha
AllowedValues:
- alpha
- beta
- production
ECRDockerUri:
Type: String
Default: <url for ecr repo>
ContainerPort:
Type: Number
Default: 8080
Resources:
LogGroup:
Type: AWS::Logs::LogGroup
Properties:
LogGroupName: !Sub "${Environment}-fake-user-api-logGroup"
RetentionInDays: 30
ECSCluster:
Type: 'AWS::ECS::Cluster'
Properties:
ClusterName: !Sub "${Environment}-MyFargateCluster"
ExecutionRole:
Type: AWS::IAM::Role
Properties:
RoleName: !Sub "${Environment}-${AWS::AccountId}-ExecutionRole"
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: ecs-tasks.amazonaws.com
Action: 'sts:AssumeRole'
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
ECSService:
Type: AWS::ECS::Service
Properties:
ServiceName: !Sub "${Environment}-${AWS::AccountId}-ECSService"
Cluster: !Ref ECSCluster
TaskDefinition: !Ref TaskDefinition
DesiredCount: 1
TaskDefinition:
Type: AWS::ECS::TaskDefinition
Properties:
TaskRoleArn: !Ref ExecutionRole
ContainerDefinitions:
- Name: !Sub "${Environment}-${AWS::AccountId}-Container"
Image: !Ref ECRDockerUri
Memory: 1024
Essential: true
DisableNetworking: false
Privileged: true
ReadonlyRootFilesystem: true
Environment:
- Name: SPRING_PROFILES_ACTIVE
Value: !Ref Environment
PortMappings:
- ContainerPort: !Ref ContainerPort
HostPort: !Ref ContainerPort
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref LogGroup
awslogs-region: ca-central-1
I went through your CloudFormation stack and found some things missing. Your cluster is named ${Environment}-MyFargateCluster, so I am assuming your goal is to create a Fargate service. To run a Fargate service, you need to provide the networking configuration and declare that you want Fargate by specifying the launch type. Also, Fargate tasks cannot be privileged.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ecs-taskdefinition-containerdefinitions.html#cfn-ecs-taskdefinition-containerdefinition-privileged
Below is my snippet of the code:
AWSTemplateFormatVersion: 2010-09-09
Transform: 'AWS::Serverless-2016-10-31'
Description: |
ECS Service
Parameters:
Environment:
Type: String
Default: alpha
AllowedValues:
- alpha
- beta
- production
ECRDockerUri:
Type: String
Default: 'image'
ContainerPort:
Type: Number
Default: 80
Resources:
LogGroup:
Type: 'AWS::Logs::LogGroup'
Properties:
LogGroupName: !Sub '${Environment}-fake-user-api-logGroup'
RetentionInDays: 30
ECSCluster:
Type: 'AWS::ECS::Cluster'
Properties:
ClusterName: !Sub '${Environment}-MyFargateCluster'
ExecutionRole:
Type: 'AWS::IAM::Role'
Properties:
RoleName: !Sub '${Environment}-${AWS::AccountId}-ExecutionRole'
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service: ecs-tasks.amazonaws.com
Action: 'sts:AssumeRole'
ManagedPolicyArns:
- 'arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy'
ECSService:
Type: 'AWS::ECS::Service'
Properties:
ServiceName: !Sub '${Environment}-${AWS::AccountId}-ECSService'
LaunchType: FARGATE
Cluster: !Ref ECSCluster
TaskDefinition: !Ref TaskDefinition
DesiredCount: 1
NetworkConfiguration:
AwsvpcConfiguration:
AssignPublicIp: ENABLED
SecurityGroups:
- sg-XXXXXXXXXX
Subnets:
- subnet-XXXXXXXXXX
TaskDefinition:
Type: 'AWS::ECS::TaskDefinition'
Properties:
RequiresCompatibilities:
- FARGATE
TaskRoleArn: !Ref ExecutionRole
ExecutionRoleArn: !Ref ExecutionRole
ContainerDefinitions:
- Name: !Sub '${Environment}-${AWS::AccountId}-Container'
Image: !Ref ECRDockerUri
Memory: 1024
Essential: true
DisableNetworking: false
Privileged: false
ReadonlyRootFilesystem: true
Environment:
- Name: SPRING_PROFILES_ACTIVE
Value: !Ref Environment
PortMappings:
- ContainerPort: !Ref ContainerPort
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref LogGroup
awslogs-region: ca-central-1
awslogs-stream-prefix: test
Cpu: '1024'
Memory: '2048'
NetworkMode: awsvpc
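Note that the task-level Cpu and Memory and the NetworkMode: awsvpc at the bottom are also required for Fargate, and the added ExecutionRoleArn is what allows the task to pull the image and ship its logs to CloudWatch.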
I had an ECS cluster running on EC2: a service running an nginx task, an EC2 Auto Scaling group, and an ALB in front of them. The task network interface was awsvpc. It worked fine, but since I need dynamic port mapping (to run more than one task per EC2 instance), I changed my settings: the task now uses a bridge network and allows dynamic port mapping (host port = 0). Since I made those changes, my ALB receives 504 (timeout) when it tries to communicate with the EC2 instances, and I can't even SSH into the EC2 instances anymore (timeout too). Why did this small change (switching the network mode for dynamic port mapping) mess up my cluster? I suspect it is something related to the EC2 instance settings, because I can't even SSH into them anymore. Below I have pasted the key settings from my CloudFormation template:
LoadBalancer:
Condition: CreateMainResources
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Scheme: internet-facing
Subnets:
- !Ref PublicSubnet1
- !Ref PublicSubnet2
Type: application
SecurityGroups:
- !Ref ECSSecurityGroup
Listener80:
Condition: CreateMainResources
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
LoadBalancerArn: !Ref LoadBalancer
Port: !Ref ListeningOnPort
Protocol: HTTP
DefaultActions:
- TargetGroupArn: !Ref MyTargetGroup
Type: forward
MyTargetGroup:
Condition: CreateMainResources
DependsOn: LoadBalancer
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
Matcher:
HttpCode: 200-499 # 200-499 or 200,204
Port: !Ref ListeningOnPort
Protocol: HTTP
TargetType: instance # ip
VpcId: !Ref VPC
EC2LaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Condition: CreateEC2Resources
Properties:
LaunchTemplateData:
ImageId: !Ref Ec2ImageId
InstanceType: !Ref InstanceType
IamInstanceProfile:
Arn: !GetAtt EC2InstanceProfile.Arn
Monitoring:
Enabled: true
KeyName: !Ref Key
NetworkInterfaces:
- AssociatePublicIpAddress: true
DeviceIndex: '0'
Groups:
- !GetAtt EC2SecurityGroup.GroupId
SubnetId: !Ref PublicSubnet1
UserData:
Fn::Base64: !Sub
- |
#!/bin/bash
echo ECS_CLUSTER=${cluster_name} >> /etc/ecs/ecs.config
- cluster_name: !Sub ${AWS::StackName}-cluster
EC2SecurityGroup:
Type: "AWS::EC2::SecurityGroup"
Properties:
VpcId: !Ref VPC
SecurityGroupIngress:
- CidrIp: 0.0.0.0/0
FromPort: !Ref ListeningOnPort
IpProtocol: "tcp"
ToPort: !Ref ListeningOnPort
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref SSHUserIP
NginxWebServerTaskDefinition:
Condition: CreateECSResources
Type: AWS::ECS::TaskDefinition
Properties:
ContainerDefinitions:
- Name: !Ref TaskContainerName
Image: !Ref ContainerDefinitionImage
Essential: true
Privileged: false
PortMappings:
- ContainerPort: !Ref ListeningOnPort
HostPort: 0 # !Ref ListeningOnPort
Protocol: tcp
LogConfiguration:
LogDriver: awslogs
Options:
awslogs-group: !Ref LogGroup
awslogs-region: us-east-1
awslogs-stream-prefix: nginx
Cpu: !Ref TaskDefinitionCpu
Memory: !Ref TaskDefinitionMemory
ExecutionRoleArn: !Ref ExecutionRole
Family: !Sub ${AWS::StackName}-nginx-task
NetworkMode: bridge # awsvpc
RequiresCompatibilities:
- EC2
TaskRoleArn: !Ref TaskRole
ECSSecurityGroup:
Condition: CreateMainResources
Type: AWS::EC2::SecurityGroup
Properties:
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: !Ref ListeningOnPort
ToPort: !Ref ListeningOnPort
CidrIp: 0.0.0.0/0
VpcId: !Ref VPC
Service:
Condition: CreateECSResources
DependsOn:
- Listener80
- EC2AutoScalingGroup
Type: AWS::ECS::Service
Properties:
Cluster: !Ref Cluster
CapacityProviderStrategy:
- CapacityProvider: !Ref MainCapacityProvider
Weight: !Ref Weight
TaskDefinition: !Ref NginxWebServerTaskDefinition
SchedulingStrategy: REPLICA
DeploymentConfiguration:
MaximumPercent: 200
MinimumHealthyPercent: 100
DeploymentController:
Type: ECS
PlacementStrategies:
- Type: binpack
Field: memory
DesiredCount: !Ref TaskDefinitionInstantiations
LoadBalancers:
- ContainerName: !Ref TaskContainerName
ContainerPort: !Ref ListeningOnPort
TargetGroupArn: !Ref MyTargetGroup
# NetworkConfiguration: # awsvpc only
# AwsvpcConfiguration:
# Subnets:
# - !Ref PublicSubnet1
# - !Ref PublicSubnet2
# SecurityGroups:
# - !Ref ECSSecurityGroup
The issue was that my EC2 instances must accept traffic on all possible ephemeral host ports for this to work (because of the dynamic port mapping setting); otherwise those ports are unreachable and the timeouts are triggered. So I needed to change my security group settings:
EC2SecurityGroup:
Type: "AWS::EC2::SecurityGroup"
Properties:
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
# ALB ephemeral port range: 32768-65535
FromPort: !If [DynamicPortMapping, 32768, !Ref ListeningOnPort]
ToPort: !If [DynamicPortMapping, 65535, !Ref ListeningOnPort ]
SourceSecurityGroupId: !Ref ECSSecurityGroup
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: !Ref SSHUserIP
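The DynamicPortMapping condition used by the !If expressions above is not shown in this snippet; a minimal sketch of how it could be defined, assuming a toggle parameter (both names here are illustrative):
Parameters:
  UseDynamicPortMapping:
    Type: String
    Default: 'true'
    AllowedValues: ['true', 'false']
Conditions:
  # True when dynamic host port mapping (host port = 0) is in use,
  # so the security group opens the full ephemeral range instead.
  DynamicPortMapping: !Equals [!Ref UseDynamicPortMapping, 'true']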
References:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs/
https://www.youtube.com/watch?v=cmRZleI18Yg (4:52-5:12 is the key moment)
We are facing an issue with Jenkins installed on Kubernetes using the Jenkins Operator: we can't persist created jobs, because after restarting the pods we lose them. These are the configurations we used to start it:
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
name: jenkins
namespace: integration
spec:
configurationAsCode:
configurations:
groovyScripts:
configurations:
backup:
containerName: backup
action:
exec:
command:
- /home/user/bin/backup.sh
interval: 30
makeBackupBeforePodDeletion: true
restore:
containerName: backup
action:
exec:
command:
- /home/user/bin/restore.sh
master:
basePlugins:
- name: kubernetes
version: 1.25.2
- name: workflow-job
version: "2.39"
- name: workflow-aggregator
version: "2.6"
- name: git
version: 4.2.2
- name: job-dsl
version: "1.77"
- name: configuration-as-code
version: "1.38"
- name: kubernetes-credentials-provider
version: "0.13"
plugins:
- name: maven-plugin
version: "3.8"
- name: ansible
version: "1.1"
- name: bitbucket
version: 1.1.27
- name: bitbucket-build-status-notifier
version: 1.4.2
- name: docker-plugin
version: 1.2.1
- name: generic-webhook-trigger
version: "1.72"
- name: github-pullrequest
version: 0.2.8
- name: job-import-plugin
version: "3.4"
- name: msbuild
version: "1.29"
- name: nexus-artifact-uploader
version: "2.13"
- name: pipeline-npm
version: 0.9.2
- name: pipeline-utility-steps
version: 2.6.1
- name: pollscm
version: 1.3.1
- name: postbuild-task
version: "1.9"
- name: ranorex-integration
version: 1.0.2
- name: sidebar-link
version: 1.11.0
- name: sonarqube-generic-coverage
version: "1.0"
- name: sonar
version: "2.13"
- name: simple-theme-plugin
version: "0.6"
priorityClassName:
disableCSRFProtection: false
containers:
- name: jenkins-master
image: jenkins/jenkins:lts
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 80
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 1500m
memory: 3Gi
requests:
cpu: 1
memory: 500Mi
- name: backup
image: virtuslab/jenkins-operator-backup-pvc:v0.0.8
imagePullPolicy: IfNotPresent
env:
- name: BACKUP_DIR
value: /backup
- name: JENKINS_HOME
value: /jenkins-home
- name: BACKUP_COUNT
value: "3"
volumeMounts:
- mountPath: /jenkins-home
name: jenkins-home
- mountPath: /backup
name: backup
volumes:
- name: backup
persistentVolumeClaim:
claimName: jenkins-backup
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-home
securityContext:
fsGroup: 1000
runAsUser: 1000
seedJobs:
- description: Jenkins Operator repository
id: jenkins-operator
repositoryBranch: master
repositoryUrl: https://github.com/jenkinsci/kubernetes-operator.git
targets: cicd/jobs/*.jenkins
The operator has two scripts, backup and restore, and what we have seen is that our pre-configured jobs are persisted, but newly created ones (made through the GUI) are not. Any ideas about this problem? Or does the Jenkins Operator not support this kind of persistence?
From the docs (https://jenkinsci.github.io/kubernetes-operator/docs/getting-started/latest/configuring-backup-and-restore/):
Because of Jenkins Operator’s architecture, the configuration of Jenkins should be done using ConfigurationAsCode or GroovyScripts and jobs should be defined as SeedJobs. It means that there is no point in backing up any job configuration up. Therefore, the backup script makes a copy of jobs history only.
So yes, this is the intended behaviour: you should create new jobs as SeedJobs, not in the GUI.
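For example, a new seed job entry added next to the existing jenkins-operator one in the Jenkins custom resource would look like this (the id, description, and repository URL are illustrative):
seedJobs:
  - description: My team's pipeline jobs   # illustrative
    id: my-team-jobs                       # hypothetical seed job id
    repositoryBranch: master
    repositoryUrl: https://github.com/example/jenkins-jobs.git   # hypothetical repo
    targets: cicd/jobs/*.jenkins           # Job DSL scripts the seed job will create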
I am trying to set the Docker log tag with Ansible for an Amazon ECS task definition, but unfortunately I am getting the error mentioned below.
I want to display the container name in the Docker logs.
playbook.yml:
tasks:
- name: Create task definition
ecs_taskdefinition:
containers:
- name: hello-world-1
cpu: "2"
essential: true
image: "nginx"
memory: "128"
portMappings:
- containerPort: "80"
hostPort: "0"
logConfiguration:
logDriver: syslog
options:
syslog-address: udp://127.0.0.1:514
tag: '{{.Name}}'
family: "{{ taskfamily_name }}"
state: present
register: task_output
error:
TASK [Create task definition] ***************************************************************************
task path: /home/ubuntu/ansible/ecs_random.yml:14
fatal: [localhost]: FAILED! => {
"msg": "template error while templating string: unexpected '.'. String: {{.Name}}"
}
The expression below works for me.
tag: "{{ '{{' }}.Name{{ '}}' }}"
task:
tasks:
- name: Create task definition
ecs_taskdefinition:
containers:
- name: hello-world-1
cpu: "2"
essential: true
image: "nginx"
memory: "128"
portMappings:
- containerPort: "80"
hostPort: "0"
logConfiguration:
logDriver: syslog
options:
syslog-address: udp://127.0.0.1:514
tag: "{{ '{{' }}.Name{{ '}}' }}"
Related Question:
Escaping double curly braces in Ansible
Related Documentation:
http://jinja.pocoo.org/docs/dev/templates/#escaping
I tried to create a pod with particular environment variables for the uwsgi configuration, but I got this message:
failed to load "phptime.yml": json: cannot unmarshal number into Go value of type string
when I tried to run this command:
kubectl create -f phptime.yml
I found that the trouble occurs with environment variables that have names like this:
UWSGI_HTTP-MODIFIER1
or
UWSGI_PHP-SAPI-NAME
or
UWSGI_MASTER-AS-ROOT
but environment variables with names like the following are all OK:
UWSGI_HTTP
or
UWSGI_INCLUDE
A lot of our containers take their configuration from environment variables, and I need to include all of my configuration variables. This is my replication controller config:
containers:
- name: phptime
image: ownregistry/phpweb:0.5
env:
- name: UWSGI_UID
value: go
- name: UWSGI_GID
value: go
- name: UWSGI_INCLUDE
value: /var/lib/go-agent/pipelines/test/test-dev0/.uwsgi_dev.ini
- name: UWSGI_PHP-SAPI-NAME
value: apache
- name: UWSGI_HTTP
value: :8086
- name: UWSGI_HTTP-MODIFIER1
value: 14
- name: UWSGI_PIDFILE
value: '/tmp/uwsgi.pid'
- name: UWSGI_MASTER-FIFO
value: '/tmp/fifo0'
- name: UWSGI_MASTER-AS-ROOT
value: 'true'
- name: UWSGI_MASTER
value: 'true'
ports:
- containerPort: 8086
resources:
limits:
cpu: 500m
memory: 200Mi
requests:
cpu: 500m
memory: 200Mi
volumeMounts:
- mountPath: /var/lib/go-agent/pipelines/test/test-dev0/
name: site
readOnly: true
volumes:
- hostPath:
path: /home/user/www/
name: site
Is this a Kubernetes issue, or is it mine? How can I solve this? Thanks!
You must quote any value you want to set as an environment variable that the YAML parser might interpret as a non-string type.
For example, in influxdb-grafana-controller.yaml the values true and false are quoted because they could be interpreted as booleans. The same constraint applies to purely numerical values.
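In the manifest above, the entry the parser will reject is the one with an unquoted number; quoting it keeps it a string:
- name: UWSGI_HTTP-MODIFIER1
  value: '14'   # quoted so YAML parses it as a string, not a number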