"no environment variables set" when not using Matrix on Travis CI - travis-ci

We're adding support for testing ARM64 on Travis CI. We also stopped using the standard matrix and switched to exclusively using include:, since avoiding matrix: expansion saves 25 to 50 unneeded jobs.
The results of testing the change are available here. About 8 jobs in the configuration fail with the error "no environment variables set".
We think the jobs are coming from osx_image: xcode10.1 in .travis.yml. It appears the unwanted jobs are the result of osx_image applied to {Linux,OS X} x {GCC,Clang} x {amd64,arm64}.
We don't know how to stop the jobs or how to work around them.
How do we fix the jobs with "no environment variables set" failures?
Here are the relevant pieces of .travis.yml.
language: cpp
arch:
  - amd64
  - arm64
os:
  - linux
  - osx
osx_image:
  - xcode10.1
dist: xenial
sudo: required
git:
  depth: 5
compiler:
  - clang
  - gcc
env:
  global:
    - BUILD_JOBS=2
    - ANDROID_HOME="$HOME/android-sdk"
    - ANDROID_SDK="$HOME/android-sdk"
    - ANDROID_NDK="$HOME/android-ndk"
jobs:
  include:
    - os: linux
      name: Linux with GCC (all)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=all
    - os: linux
      name: Linux with GCC (native)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=native
    - os: linux
      name: Linux with GCC (no-asm)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=no-asm
    - os: linux
      name: Linux with GCC (debug)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=debug
    - os: linux
      name: Linux with GCC (asan)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=asan
    - os: linux
      name: Linux with GCC (ubsan)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=ubsan
    - os: linux
      name: Linux with GCC (pem)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=pem
    - os: linux
      name: Linux with GCC (autotools)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=autotools
    - os: linux
      name: Linux with GCC (cmake)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=cmake
    - os: linux
      name: Linux with Clang (all)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=all
    - os: linux
      name: Linux with Clang (native)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=native
    - os: linux
      name: Linux with Clang (no-asm)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=no-asm
    - os: linux
      name: Linux with Clang (debug)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=debug
    - os: linux
      name: Linux with Clang (asan)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=asan
    - os: linux
      name: Linux with Clang (ubsan)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=ubsan
    - os: linux
      name: Linux with Clang (pem)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=pem
    - os: linux
      name: Linux with Clang (autotools)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=autotools
    - os: linux
      name: Linux with Clang (cmake)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=cmake
    - os: osx
      name: OS X with Clang (all)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=all
    - os: osx
      name: OS X with Clang (native)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=native
    - os: osx
      name: OS X with Clang (no-asm)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=no-asm
    - os: osx
      name: OS X with Clang (debug)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=debug
    - os: osx
      name: OS X with Clang (asan)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=asan
    - os: osx
      name: OS X with Clang (ubsan)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ubsan
    - os: osx
      name: OS X with Clang (pem)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=pem
    - os: osx
      name: OS X with Clang (autotools)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=autotools
    - os: osx
      name: OS X with Clang (cmake)
      arch: amd64
      compiler: clang
      env:
        - BUILD_OS=osx
        - BUILD_MODE=cmake
    - os: linux
      name: Linux with GCC (all)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=all
    - os: linux
      name: Linux with GCC (native)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=native
    - os: linux
      name: Linux with GCC (no-asm)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=no-asm
    - os: linux
      name: Linux with GCC (debug)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=debug
    - os: linux
      name: Linux with GCC (asan)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=asan
    - os: linux
      name: Linux with GCC (ubsan)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=ubsan
    - os: linux
      name: Linux with GCC (pem)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=pem
    - os: linux
      name: Linux with GCC (autotools)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=autotools
    - os: linux
      name: Linux with GCC (cmake)
      arch: arm64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=cmake
    - os: linux
      name: Linux with Clang (all)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=all
    - os: linux
      name: Linux with Clang (native)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=native
    - os: linux
      name: Linux with Clang (no-asm)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=no-asm
    - os: linux
      name: Linux with Clang (debug)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=debug
    - os: linux
      name: Linux with Clang (asan)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=asan
    - os: linux
      name: Linux with Clang (ubsan)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=ubsan
    - os: linux
      name: Linux with Clang (pem)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=pem
    - os: linux
      name: Linux with Clang (autotools)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=autotools
    - os: linux
      name: Linux with Clang (cmake)
      arch: arm64
      compiler: clang
      env:
        - BUILD_OS=linux
        - BUILD_MODE=cmake
    - os: linux
      name: Android on Linux (armeabi-v7a)
      arch: amd64
      env:
        - BUILD_OS=linux
        - BUILD_MODE=android
        - PLATFORM=armeabi-v7a
    - os: linux
      name: Android on Linux (aarch64)
      arch: amd64
      env:
        - BUILD_OS=linux
        - BUILD_MODE=android
        - PLATFORM=aarch64
    - os: linux
      name: Android on Linux (x86)
      arch: amd64
      env:
        - BUILD_OS=linux
        - BUILD_MODE=android
        - PLATFORM=x86
    - os: linux
      name: Android on Linux (x86_64)
      arch: amd64
      env:
        - BUILD_OS=linux
        - BUILD_MODE=android
        - PLATFORM=x86_64
    - os: osx
      name: iOS on OS X (iPhoneOS)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=iPhoneOS
    - os: osx
      name: iOS on OS X (Arm64)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=Arm64
    - os: osx
      name: iOS on OS X (WatchOS)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=WatchOS
    - os: osx
      name: iOS on OS X (AppleTVOS)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=AppleTVOS
    - os: osx
      name: iOS on OS X (iPhoneSimulator)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=iPhoneSimulator
    - os: osx
      name: iOS on OS X (WatchSimulator)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=WatchSimulator
    - os: osx
      name: iOS on OS X (AppleTVSimulator)
      arch: amd64
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=AppleTVSimulator
  allow_failures:
    - os: osx
      name: iOS on OS X (WatchOS)
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=WatchOS
    - os: osx
      name: iOS on OS X (iPhoneSimulator)
      env:
        - BUILD_OS=osx
        - BUILD_MODE=ios
        - PLATFORM=iPhoneSimulator
before_install:
  - |
    ...
script:
  - |
    ...
branches:
  ...
notifications:
  ...

We think the jobs are coming from osx_image: xcode10.1 in .travis.yml
This was incorrect. We removed the global osx_image key, but the problem persisted.
We still don't quite understand where the jobs were coming from, other than that they were a byproduct of matrix: expansion, which we were trying to avoid.
How do we fix the jobs with "no environment variables set" failures?
The fix was to get rid of matrix: expansion, but this information is not readily available. It certainly is not stated in the docs, or at least we could not find it there.
To avoid matrix: expansion, get rid of all top-level keys for env, arch, os, and compiler. The insight is that those top-level keys are what trigger the expansion; it does not depend on the presence of the matrix: or jobs: keys. An example follows.
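A minimal sketch of the difference, with hypothetical values: the first fragment expands into 2 x 2 = 4 jobs even though neither matrix: nor jobs: appears in it, while the second yields exactly the one job it lists.
# Expands: the top-level os and compiler lists multiply into 4 jobs
os: [linux, osx]
compiler: [gcc, clang]

# Does not expand: no top-level lists, every job is spelled out
jobs:
  include:
    - os: linux
      compiler: gcc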
Our resulting .travis.yml looks like the following. We have to build the cross product of {env} x {arch} x {os} x {compiler} by hand (which we were already doing).
language: cpp
dist: xenial
sudo: required
git:
  depth: 5
jobs:
  include:
    - os: linux
      name: Linux with GCC (all)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=all
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (native)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=native
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (no-asm)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=no-asm
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (debug)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=debug
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (asan)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=asan
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (ubsan)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=ubsan
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (pem)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=pem
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (autotools)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=autotools
        - BUILD_JOBS=2
    - os: linux
      name: Linux with GCC (cmake)
      arch: amd64
      compiler: gcc
      env:
        - BUILD_OS=linux
        - BUILD_MODE=cmake
        - BUILD_JOBS=2
    ...
jobs: is an alias for matrix:, so using jobs.include instead of matrix.include does not have the intended effect. Also see the Travis YML Schema.
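In other words, the two spellings below should behave identically, which is why merely renaming the key changed nothing (a minimal sketch with one hypothetical job):
# matrix spelling
matrix:
  include:
    - os: linux
      name: the only job

# jobs spelling -- the same configuration under the alias
jobs:
  include:
    - os: linux
      name: the only job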

Related

ERROR! The requested handler 'docker status' was not found in either the main handlers list nor in the listening handlers list

I'm using Vagrant to create a simulation of a prod cluster in which there are a master and two nodes. My Vagrantfile looks like this:
IMAGE_NAME = "bento/ubuntu-16.04"
N = 2
Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  config.vm.provider "virtualbox" do |v|
    v.memory = 1024
    v.cpus = 2
  end
  config.vm.define "k8s-master" do |master|
    master.vm.box = IMAGE_NAME
    master.vm.network "private_network", ip: "192.168.50.10"
    master.vm.hostname = "k8s-master"
    master.vm.provision "ansible" do |ansible|
      ansible.playbook = "kubernetes-setup/master-playbook.yml"
      ansible.extra_vars = {
        node_ip: "192.168.50.10",
      }
    end
  end
  (1..N).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.box = IMAGE_NAME
      node.vm.network "private_network", ip: "192.168.50.#{i + 10}"
      node.vm.hostname = "node-#{i}"
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "kubernetes-setup/node-playbook.yml"
        ansible.extra_vars = {
          node_ip: "192.168.50.#{i + 10}",
        }
      end
    end
  end
end
and my master playbook looks like this:
---
- hosts: all
  become: true
  tasks:
    - name: Install packages that allow apt to be used over HTTPS
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common
    - name: Add an apt signing key for Docker
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add apt repository for stable version
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
        state: present
    - name: Install docker and its dependecies
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - docker-ce
          - docker-ce-cli
          - containerd.io
      notify:
        - docker status
    - name: Add vagrant user to docker group
      user:
        name: vagrant
        group: docker
    - name: Remove swapfile from /etc/fstab
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none
    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0
    - name: Add an apt signing key for Kubernetes
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: Adding apt repository for Kubernetes
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: kubernetes.list
    - name: Install Kubernetes binaries
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - kubelet
          - kubeadm
          - kubectl
    - name: Configure node ip
      lineinfile:
        path: /etc/default/kubelet
        line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted
    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
    - name: Setup kubeconfig for vagrant user
      command: "{{ item }}"
      with_items:
        - mkdir -p /home/vagrant/.kube
        - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
        - chown vagrant:vagrant /home/vagrant/.kube/config
    - name: Install calico pod network
      become: false
      command: kubectl create -f https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml
    - name: Generate join command
      command: kubeadm token create --print-join-command
      register: join_command
    - name: Copy join command to local file
      local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"
  handlers:
    - name: docker status
      service: name=docker state=started
while the one used for the nodes is below:
---
- hosts: all
  become: true
  tasks:
    - name: Install packages that allow apt to be used over HTTPS
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common
    - name: Add an apt signing key for Docker
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
    - name: Add apt repository for stable version
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
        state: present
    - name: Install docker and its dependecies
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - docker-ce
          - docker-ce-cli
          - containerd.io
      notify:
        - docker status
    - name: Add vagrant user to docker group
      user:
        name: vagrant
        group: docker
    - name: Remove swapfile from /etc/fstab
      mount:
        name: "{{ item }}"
        fstype: swap
        state: absent
      with_items:
        - swap
        - none
    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0
    - name: Add an apt signing key for Kubernetes
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: Adding apt repository for Kubernetes
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
        filename: kubernetes.list
    - name: Install Kubernetes binaries
      apt:
        name: "{{ packages }}"
        state: present
        update_cache: yes
      vars:
        packages:
          - kubelet
          - kubeadm
          - kubectl
    - name: Configure node ip
      lineinfile:
        path: /etc/default/kubelet
        line: KUBELET_EXTRA_ARGS=--node-ip={{ node_ip }}
    - name: Restart kubelet
      service:
        name: kubelet
        daemon_reload: yes
        state: restarted
    - name: Initialize the Kubernetes cluster using kubeadm
      command: kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
    - name: Copy the join command to server location
      copy: src=join-command dest=/tmp/join-command.sh mode=0777
    - name: Join the node to cluster
      command: sh /tmp/join-command.sh
But when I launch Vagrant, everything goes well until the Docker installation task on the node, where I am facing this issue:
Vagrant has automatically selected the compatibility mode '2.0'
according to the Ansible version installed (2.9.6).
Alternatively, the compatibility mode can be specified in your Vagrantfile:
https://www.vagrantup.com/docs/provisioning/ansible_common.html#compatibility_mode
node-1: Running ansible-playbook...
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 16.04 on host node-1 should use
/usr/bin/python3, but is using /usr/bin/python for backward compatibility with
prior Ansible releases. A future Ansible release will default to using the
discovered platform python for this host. See https://docs.ansible.com/ansible/
2.9/reference_appendices/interpreter_discovery.html for more information. This
feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [node-1]
TASK [Install packages that allow apt to be used over HTTPS] *******************
changed: [node-1]
[WARNING]: Updating cache and auto-installing missing dependency: python-apt
TASK [Add an apt signing key for Docker] ***************************************
changed: [node-1]
TASK [Add apt repository for stable version] ***********************************
changed: [node-1]
TASK [Install docker and its dependecies] **************************************
ERROR! The requested handler 'docker status' was not found in either the main handlers list nor in the listening handlers list
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
Does anyone have a clue what the reason could be? I have tried to change the syntax, but I don't think it is a typo problem.
As @zeitounator said in his comment, I forgot to add the handlers section to the node configuration playbook; once it was added, everything went well.
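For reference, the missing piece is the play-level handlers: section that the master playbook already ends with. Appending the same block to node-playbook.yml satisfies the notify: on the "Install docker and its dependecies" task:
  handlers:
    - name: docker status
      service: name=docker state=started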

ECS + EC2 using CloudFormation stuck in CREATE_IN_PROGRESS

Stack creation would get stuck in CREATE_IN_PROGRESS, specifically on the Service resource; without it, the stack creation completes. This is what the CloudFormation template looks like. I have checked CloudTrail but can't find anything out of the ordinary.
AWSTemplateFormatVersion: '2010-09-09'
Description: Amazon ECS Preview Quickstart Template
Parameters:
  SubnetID:
    Type: String
  SubnetID2:
    Type: String
  ClusterName:
    Description: Name of your Amazon ECS Cluster
    Type: String
    ConstraintDescription: must be a valid Amazon ECS Cluster.
    Default: TestCluster
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  InstanceType:
    Description: Container Instance type
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - t2.small
      - t2.medium
      - m3.medium
      - m3.large
      - m3.xlarge
      - m3.2xlarge
      - c3.large
      - c3.xlarge
      - c3.2xlarge
      - c3.4xlarge
      - c3.8xlarge
      - r3.large
      - r3.xlarge
      - r3.2xlarge
      - r3.4xlarge
      - r3.8xlarge
      - i2.xlarge
      - i2.2xlarge
      - i2.4xlarge
      - i2.8xlarge
      - hi1.4xlarge
      - hs1.8xlarge
      - cr1.8xlarge
      - cc2.8xlarge
    ConstraintDescription: must be a valid EC2 instance type.
  SSHLocation:
    Description: " The IP address range that can be used to SSH to the EC2 instances"
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Mappings:
  AWSInstanceType2Arch:
    t2.micro:
      Arch: HVM64
    t2.small:
      Arch: HVM64
    t2.medium:
      Arch: HVM64
    m3.medium:
      Arch: HVM64
    m3.large:
      Arch: HVM64
    m3.xlarge:
      Arch: HVM64
    m3.2xlarge:
      Arch: HVM64
    c3.large:
      Arch: HVM64
    c3.xlarge:
      Arch: HVM64
    c3.2xlarge:
      Arch: HVM64
    c3.4xlarge:
      Arch: HVM64
    c3.8xlarge:
      Arch: HVM64
    r3.large:
      Arch: HVM64
    r3.xlarge:
      Arch: HVM64
    r3.2xlarge:
      Arch: HVM64
    r3.4xlarge:
      Arch: HVM64
    r3.8xlarge:
      Arch: HVM64
    i2.xlarge:
      Arch: HVM64
    i2.2xlarge:
      Arch: HVM64
    i2.4xlarge:
      Arch: HVM64
    i2.8xlarge:
      Arch: HVM64
    hi1.4xlarge:
      Arch: HVM64
    hs1.8xlarge:
      Arch: HVM64
    cr1.8xlarge:
      Arch: HVM64
    cc2.8xlarge:
      Arch: HVM64
  AWSRegionArch2AMI:
    us-east-1:
      HVM64: ami-34ddbe5c
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: ClusterName
  LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: deployment-example-log-group
  ContainerInstance:
    Type: AWS::EC2::Instance
    Properties:
      IamInstanceProfile:
        Ref: ECSIamInstanceProfile
      ImageId:
        Fn::FindInMap:
          - AWSRegionArch2AMI
          - Ref: AWS::Region
          - Fn::FindInMap:
              - AWSInstanceType2Arch
              - Ref: InstanceType
              - Arch
      InstanceType:
        Ref: InstanceType
      SecurityGroups:
        - Ref: ECSQuickstartSecurityGroup
      KeyName:
        Ref: KeyName
      UserData:
        Fn::Base64:
          Fn::Join:
            - ''
            - - "#!/bin/bash -xe\n"
              - echo ECS_CLUSTER=
              - Ref: ClusterName
              - " >> /etc/ecs/ecs.config\n"
  ECSQuickstartSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable HTTP access via SSH
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp:
            Ref: SSHLocation
  ECSIamInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: "/"
      Roles:
        - Ref: ECSQuickstartRole
  ECSQuickstartRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: "/"
      Policies:
        - PolicyName: ECSQuickstart
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: ecs:*
                Resource: "*"
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: deployment-example-task
      Cpu: 256
      Memory: 512
      # NetworkMode: awsvpc
      TaskRoleArn: !Ref ECSQuickstartRole
      ContainerDefinitions:
        - Name: engine
          Image: gcr.io/xxxxx
          Environment:
            - Name: db_instance
              Value: "clouform2"
            - Name: LOG_LEVEL
              Value: 1
            - Name: HOST
              Value: 0.0.0.0
            - Name: HTTP_PORT
              Value: 8181
          PortMappings:
            - ContainerPort: 8181
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-region: !Ref AWS::Region
              awslogs-group: !Ref LogGroup
              awslogs-stream-prefix: ecs
        - Name: encyou
          Image: gcr.io/xxxx3
          DependsOn:
            - Condition: START
              ContainerName: engine
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-region: !Ref AWS::Region
              awslogs-group: !Ref LogGroup
              awslogs-stream-prefix: ecs
        - Name: packager
          Image: gcr.io/xxxxx
          DependsOn:
            - Condition: START
              ContainerName: engine
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-region: !Ref AWS::Region
              awslogs-group: !Ref LogGroup
              awslogs-stream-prefix: ecs
      RequiresCompatibilities:
        - EC2
  Service:
    Type: AWS::ECS::Service
    DependsOn: ContainerInstance
    Properties:
      ServiceName: ServiceName
      Cluster: !Ref Cluster
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 75
      DesiredCount: 2
      TaskDefinition: !Ref 'TaskDefinition'
      LaunchType: EC2
I can see that the Service, although it is the resource that gets stuck, is successfully registered in the cluster, but the tasks are not.
Any idea what I am doing wrong?
It seems you are using the wrong cluster name. Your cluster will literally be called ClusterName, not TestCluster. Consequently, your instance will be trying to register with a non-existent cluster.
This is because instead of:
ClusterName: ClusterName
there should be:
ClusterName: !Ref ClusterName
Please note that there could be other issues that are not yet apparent. What's more, you are using custom images (gcr.io/xxxxx), which makes it impossible to reproduce them.
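For completeness, the corrected resource would read as follows (everything else unchanged):
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: !Ref ClusterName
With !Ref in place, the cluster itself and the ECS_CLUSTER= line the instance writes through UserData (which already uses Ref: ClusterName) resolve to the same name, TestCluster by default, so the container instance registers with the cluster the Service deploys into.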

Kubernetes Docker version upgrade | Fix error “unexpected EOF”

I have seen that Google Kubernetes Engine is using Docker version 17.03.2-ce, build f5ec1e2, whereas I want Docker version 18.09.0, build 4d60db4.
The error "unexpected EOF" when adding an 8GB file (moby/moby#37771) has been resolved in the later version of Docker.
Is there any way I can manually upgrade the version?
Thanks
In Google Kubernetes Engine your node OS should be Ubuntu. Then you can use a DaemonSet as a start-up script, with the following YAML file:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: ssd-startup-script
  labels:
    app: ssd-startup-script
spec:
  template:
    metadata:
      labels:
        app: ssd-startup-script
    spec:
      hostPID: true
      containers:
        - name: ssd-startup-script
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: STARTUP_SCRIPT
              value: |
                #!/bin/bash
                sudo curl -s https://get.docker.com/ | sh
                echo Done
Afterwards the Docker version should look like this:
Client:
 Version:      18.09.0
 API version:  1.39
 Go version:   go1.10.4
 Git commit:   4d60db4
 Built:        Wed Nov 7 00:49:01 2018
 OS/Arch:      linux/amd64
 Experimental: false
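If only some node pools run Ubuntu, the DaemonSet can additionally be pinned to them with a nodeSelector. A minimal sketch of the addition, assuming GKE exposes its usual cloud.google.com/gke-os-distribution node label (verify with kubectl get nodes --show-labels before relying on it):
# Merged into the DaemonSet above; only the nodeSelector is new.
spec:
  template:
    spec:
      # Label name and value are an assumption about GKE's node labels.
      nodeSelector:
        cloud.google.com/gke-os-distribution: ubuntu
Apply the manifest with kubectl apply -f and the startup script runs once per matching node.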

Docker build inside kubernetes pod fails with "could not find bridge docker0"

I moved our build agents into Kubernetes / Container Engine. They used to run on container-vm (version container-vm-v20160321) with docker.sock mounted into the agent container so we could run docker build from inside it.
This used the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: gocd-agent
spec:
  containers:
    - name: gocd-agent
      image: travix/gocd-agent:16.8.0
      imagePullPolicy: Always
      volumeMounts:
        - name: ssh-keys
          mountPath: /var/go/.ssh
          readOnly: true
        - name: gcloud-keys
          mountPath: /var/go/.gcloud
          readOnly: true
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-bin
          mountPath: /usr/bin/docker
      env:
        - name: "GO_SERVER_URL"
          value: "https://server:8154/go"
        - name: "AGENT_KEY"
          value: "***"
        - name: "AGENT_RESOURCES"
          value: "docker"
        - name: "DOCKER_GID_ON_HOST"
          value: "107"
  restartPolicy: Always
  dnsPolicy: Default
  volumes:
    - name: ssh-keys
      gcePersistentDisk:
        pdName: sh-keys
        fsType: ext4
        readOnly: true
    - name: gcloud-keys
      gcePersistentDisk:
        pdName: gcloud-keys
        fsType: ext4
        readOnly: true
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: docker-bin
      hostPath:
        path: /usr/bin/docker
    - name: varlog
      hostPath:
        path: /var/log
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers
Now after moving it into a full-blown Container Engine cluster - version 1.3.5 - with the following manifest it fails.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gocd-agent
spec:
  replicas: 2
  strategy:
    type: Recreate
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: gocd-agent
  template:
    metadata:
      labels:
        app: gocd-agent
    spec:
      containers:
        - name: gocd-agent
          image: travix/gocd-agent:16.8.0
          imagePullPolicy: Always
          securityContext:
            privileged: true
          volumeMounts:
            - name: ssh-keys
              mountPath: /k8s-ssh-secret
            - name: gcloud-keys
              mountPath: /var/go/.gcloud
            - name: docker-sock
              mountPath: /var/run/docker.sock
            - name: docker-bin
              mountPath: /usr/bin/docker
          env:
            - name: "GO_SERVER_URL"
              value: "https://server:8154/go"
            - name: "AGENT_KEY"
              value: "***"
            - name: "AGENT_RESOURCES"
              value: "docker"
            - name: "DOCKER_GID_ON_HOST"
              value: "107"
      volumes:
        - name: ssh-keys
          secret:
            secretName: ssh-keys
        - name: gcloud-keys
          secret:
            secretName: gcloud-keys
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock
        - name: docker-bin
          hostPath:
            path: /usr/bin/docker
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
It seems to start building just fine, but eventually it fails with a no such interface error:
Executing "docker build --force-rm=true --no-cache=true --file=target/docker/Dockerfile --tag=****:1.0.258 ."
Sending build context to Docker daemon 557.1 kB
...
Sending build context to Docker daemon 78.04 MB
Step 1 : FROM travix/base-debian-jre8
---> a130b5e1b4d4
Step 2 : ADD ***-1.0.258.jar ***.jar
---> 8d53e68e93a0
Removing intermediate container d1a758c9baeb
Step 3 : ADD target/newrelic newrelic
---> 9dbbb1c1db58
Removing intermediate container 461e66978c53
Step 4 : RUN bash -c "touch /***.jar"
---> Running in 6a28f48c9fd1
Removing intermediate container 6a28f48c9fd1
failed to create endpoint stupefied_shockley on network bridge: adding interface veth095b905 to bridge docker0 failed: could not find bridge docker0: route ip+net: no such network interface
Is it impossible to run docker build inside a pod due to Kubernetes networking, or do I need to configure the pod differently? Or is it a bug in the particular Docker version on the host?
Client:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:20:08 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:20:08 2016
 OS/Arch:      linux/amd64
The bridge actually seems to exist on the host:
$ sudo brctl show
bridge name   bridge id           STP enabled   interfaces
cbr0          8000.063c847a631e   no            veth0a58740b
                                                veth1f558898
                                                veth8797ea93
                                                vethb11a7490
                                                vethc576cc01
docker0       8000.02428db6a46e   no
And docker info for completeness:
$ sudo docker info
Containers: 15
 Running: 14
 Paused: 0
 Stopped: 1
Images: 67
Server Version: 1.11.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 148
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 25.57 GiB
Name: gke-tooling-default-pool-1fa283a6-8ufa
ID: JBQ2:Q3AR:TFJG:ILTX:KMHV:M67A:NYEM:NK4G:R43J:K5PS:26HY:Q57S
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
And
$ uname -a
Linux gke-tooling-default-pool-1fa283a6-8ufa 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux

Detect PHP version in Travis

In my Travis file I have several PHP versions and a script entry like this:
php:
  - 5.6
  - 5.5
  - 5.4
  - 5.3
script:
  - export CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"
  - phpize # and lots of other stuff here.
  - make
I want to run the export CFLAGS line only when the PHP version matches 5.6.
I could theoretically do that with a nasty hack to detect the PHP version from the command line, but how can I do this through the Travis configuration script?
You can either use shell conditionals to do this:
php:
  - 5.6
  - 5.5
  - 5.4
  - 5.3
script:
  - if [[ ${TRAVIS_PHP_VERSION:0:3} == "5.6" ]]; then export CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"; fi
  - phpize # and lots of other stuff here.
  - make
Or use the build matrix with explicit inclusions:
matrix:
  include:
    - php: 5.6
      env: CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror"
    - php: 5.5
      env: CFLAGS=""
    - php: 5.4
      env: CFLAGS=""
    - php: 5.3
      env: CFLAGS=""
script:
  - phpize # and lots of other stuff here.
  - make
The latter is probably what you're looking for; the former is a little less verbose.
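If more versions eventually need the flags, the shell form extends naturally. A sketch using the same TRAVIS_PHP_VERSION variable; the exported value persists into the later steps because Travis runs all script entries in one shell:
script:
  - |
    # Guard the flags by version; add more patterns to the case as needed.
    case "${TRAVIS_PHP_VERSION}" in
      5.6*) export CFLAGS="-Wno-deprecated-declarations -Wdeclaration-after-statement -Werror" ;;
    esac
  - phpize # and lots of other stuff here.
  - make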
