How to get an image from a Docker repo for Test Kitchen - docker

I need to test my application with the Docker image httpd, but when I add it to my current .kitchen.yml file it fails with this error:
root@ip-172-31-1-22:~/test1# kitchen test
-----> Starting Kitchen (v1.23.2)
-----> Cleaning up any prior instances of <default-httpd>
-----> Destroying <default-httpd>...
Finished destroying <default-httpd> (0m0.00s).
-----> Testing <default-httpd>
-----> Creating <default-httpd>...
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>> Create failed on instance <default-httpd>. Please see .kitchen/logs/default-httpd.log for more details
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
root@ip-172-31-1-22:~/test1#
My .kitchen.yml file:
driver:
  name: docker
  use_sudo: false
  #image: httpd
provisioner:
  hosts: aws
  name: ansible_playbook
  #roles_path: roles
  require_ansible_repo: true
  ansible_verbose: true
  ansible_version: latest
  require_chef_for_busser: false
  playbook: default.yml
verifier:
  name: inspec
platforms:
  #- name: ubuntu
  - name: httpd
    driver_config:
      run_command: /bin/systemd
      privileged: true
      provision_command:
        - systemctl enable sshd.service
suites:
  - name: default
    verifier:
      inspec_tests:
        - path: /root/kitchen/test/integration/default/test.rb
Use case:
1. Deploy an httpd-based container
2. Use an Ansible playbook to move an index.html file into it
3. Use Ansible to start and stop the services
4. Run InSpec to check that the tests pass
What is the right procedure to deploy httpd using .kitchen.yml and to test this use case?
1. Is this the correct approach?
2. Should I simply write an Ansible playbook to install httpd and then move my HTML files?
3. If the answer to Q2 is yes, doesn't that defeat the purpose of already having an httpd container?

Since you are using Ansible, use its Docker modules to deploy httpd; a sketch follows.
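For example, a minimal hedged sketch using Ansible's docker_container module - the container name, port mapping, and copy step are illustrative, not from your setup:

# Hedged sketch: the name, ports, and file paths below are illustrative.
- name: Deploy an httpd container
  docker_container:
    name: web
    image: httpd:latest
    state: started
    ports:
      - "8080:80"

# The official httpd image serves files from /usr/local/apache2/htdocs
- name: Copy index.html into the container's document root
  command: docker cp index.html web:/usr/local/apache2/htdocs/index.html

This also addresses Q3: the stock httpd image stays generic, and Ansible layers your index.html and service handling on top of it instead of baking them into the image.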

Filebeat 7.2 - Save logs from Docker containers to Logstash

I have a few Docker containers running on my EC2 instance.
I want to save logs from these containers directly to Logstash (Elastic Cloud).
When I tried to install Filebeat manually, everything worked fine.
I have downloaded it using
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
I unpacked it and changed the filebeat.yml configuration to:
filebeat.inputs:
- type: log
  enabled: true
  fields:
    application: "myapp"
  fields_under_root: true
  paths:
    - /var/lib/docker/containers/*/*.log
cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
This worked just fine; I could find my logs after searching for application: "myapp" in Kibana.
However, when I tried to run Filebeat from Docker, I had no success.
This is the filebeat part of my docker-compose.yml:
filebeat:
  image: docker.elastic.co/beats/filebeat:7.2.0
  container_name: filebeat
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock # needed for autodiscover
My previous filebeat.yml from the manual setup doesn't work here, so I have tried many examples, but nothing worked. Below is one example which I think should work, but it doesn't. The Docker container starts without problems, but somehow it can't read the log files.
filebeat.autodiscover:
  providers:
    - type: docker
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*.log
  json.keys_under_root: true
  json.add_error_key: true
  fields_under_root: true
  fields:
    application: "myapp"
cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have also tried something like this:
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        config:
          - type: docker
            containers.ids:
              - "*"
filebeat.inputs:
- type: docker
  containers.ids:
    - "*"
processors:
  - add_docker_metadata:
fields:
  application: "myapp"
fields_under_root: true
cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"
I have no clue what else to try; the Filebeat logs still show
"harvester":{"open_files":0,"running":0}}
I am 100% sure that the container logs are under /var/lib/docker/containers/*/*.log ... as I said, Filebeat worked when installed manually, but not as a Docker image.
Any suggestions?
Output log from Filebeat
2019-07-23T05:35:58.128Z INFO instance/beat.go:292 Setup Beat: filebeat; Version: 7.2.0
2019-07-23T05:35:58.128Z INFO [index-management] idxmgmt/std.go:178 Set output.elasticsearch.index to 'filebeat-7.2.0' as ILM is enabled.
2019-07-23T05:35:58.129Z INFO elasticsearch/client.go:166 Elasticsearch url: https://123456789.us-east-1.aws.found.io:443
2019-07-23T05:35:58.129Z INFO [publisher] pipeline/module.go:97 Beat name: e3e5163f622d
2019-07-23T05:35:58.136Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2019-07-23T05:35:58.142Z INFO instance/beat.go:421 filebeat start running.
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:104 No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2019-07-23T05:35:58.142Z INFO registrar/migrate.go:112 Initialize registry meta file
2019-07-23T05:35:58.144Z INFO registrar/registrar.go:108 No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:145 Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2019-07-23T05:35:58.146Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2019-07-23T05:35:58.146Z INFO crawler/crawler.go:72 Loading Inputs: 1
2019-07-23T05:35:58.146Z WARN [cfgwarn] docker/input.go:49 DEPRECATED: 'docker' input deprecated. Use 'container' input instead. Will be removed in version: 8.0.0
2019-07-23T05:35:58.150Z INFO log/input.go:148 Configured paths: [/var/lib/docker/containers/*/*.log]
2019-07-23T05:35:58.150Z INFO input/input.go:114 Starting input of type: docker; ID: 11882227825887812171
2019-07-23T05:35:58.150Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2019-07-23T05:35:58.150Z WARN [cfgwarn] docker/docker.go:57 BETA: The docker autodiscover is beta
2019-07-23T05:35:58.153Z INFO [autodiscover] autodiscover/autodiscover.go:105 Starting autodiscover manager
2019-07-23T05:36:28.144Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s
{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":17}},"total":{"ticks":40,"time":{"ms":52},"value":40},"user":{"ticks":30,"time":{"ms":35}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"4427db93-2943-4a8d-8c55-6a2e64f19555","uptime":{"ms":30111}},"memstats":{"gc_next":4194304,"memory_alloc":2118672,"memory_total":6463872,"rss":28352512},"runtime":{"goroutines":34}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0},"writes":{"success":1,"total":1}},"system":{"cpu":{"cores":1},"load":{"1":0.31,"15":0.03,"5":0.09,"norm":{"1":0.31,"15":0.03,"5":0.09}}}}}}
Hmm, I don't see anything obvious in the Filebeat config as to why it's not working; I have a very similar config running for a 6.x Filebeat.
I would suggest doing a docker inspect on the container and confirming that the mounts are there, and maybe checking permissions, though errors would probably have shown up in the logs. For example:
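# Print the container's mounts as JSON to verify the bind mounts from
# docker-compose.yml made it in (container name taken from your compose file)
docker inspect --format '{{ json .Mounts }}' filebeat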
Could you also try the container input? I believe it is the recommended method for container logs in 7.2+: https://www.elastic.co/guide/en/beats/filebeat/7.2/filebeat-input-container.html
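As a rough, untested sketch, the equivalent of your config using the container input could look like this (the paths and fields just mirror your existing config):

filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
  fields:
    application: "myapp"
  fields_under_root: true
cloud.id: "iamnotshowingyoumycloudidthisisjustfake"
cloud.auth: "elastic:mypassword"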

Filebeat not pushing logs to Elasticsearch

I am new to Docker and all this logging stuff, so maybe I'm making a stupid mistake; thanks in advance for helping. I have ELK running in a Docker container (6.2.2) via the Dockerfile line:
FROM sebp/elk:latest
In a separate container I am installing and running Filebeat via the following Dockerfile lines:
RUN curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb
RUN dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
My Filebeat configuration is:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /jetty/jetty-distribution-9.3.8.v20160314/logs/*.log
output.logstash:
  enabled: false
  hosts: ["elk-stack:9002"]
  #index: 'audit'
output.elasticsearch:
  enabled: true
  hosts: ["elk-stack:9200"]
  #index: "audit-%{+yyyy.MM.dd}"
path.config: "/etc/filebeat"
#setup.template.name: "audit"
#setup.template.pattern: "audit-*"
#setup.template.fields: "${path.config}/fields.yml"
As you can see, I was trying to use a custom Elasticsearch index, but for now I'm just trying to get the default working. The Jetty logs all have global read permissions.
The Docker container logs show no errors, and after starting it I make sure the config and output are OK:
# filebeat test config
Config OK
# filebeat test output
elasticsearch: http://elk-stack:9200...
parse url... OK
connection...
parse host... OK
dns lookup... OK
addresses: 172.17.0.3
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
version: 6.2.2
/var/log/filebeat/filebeat shows:
2018-03-15T13:23:38.859Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-03-15T13:23:38.860Z INFO instance/beat.go:475 Beat UUID: ed5cecaf-cbf5-438d-bbb9-30bab80c4cb9
2018-03-15T13:23:38.860Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elk-stack:9200
2018-03-15T13:23:38.891Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
However, when I hit localhost:9200/_cat/indices?v, it doesn't return any indices:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
How do I get this working? I am out of ideas. Thanks again for any help.
To answer my own question: you can't start Filebeat with
RUN /usr/share/filebeat/bin/filebeat -e -d "publish" &
and have it keep running once the container starts, because RUN only executes at image build time. You need to start it manually once the container is up, or run Filebeat in its own container with an ENTRYPOINT instruction, as sketched below.
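For reference, a minimal sketch of the dedicated-container variant; the base image is an assumption, and the remaining lines mirror the Dockerfile from the question:

# Base image is illustrative; any distro with curl and dpkg works
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y curl \
 && curl -L -O -k https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.2-amd64.deb \
 && dpkg -i filebeat-6.2.2-amd64.deb
COPY resources/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
RUN chmod go-w /etc/filebeat/filebeat.yml
# ENTRYPOINT runs at container start, keeping Filebeat alive as the main process
ENTRYPOINT ["/usr/share/filebeat/bin/filebeat", "-e", "-d", "publish"]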

Using Ansible with docker-compose

I am trying to deploy a Docker setup using an Ansible playbook. For this, I am using the docker_service module.
My playbook looks like:
---
- name: Run Docker compose
  hosts: all
  gather_facts: no
  tasks:
    - debug: msg="Container - {{ inventory_hostname }}"
    - docker_service:
        project_src: "compose"
        state: absent
    - docker_service:
        project_src: "compose"
        state: present
Upon running this simple playbook as:
ansible-playbook -v playbook.yml --ask-sudo-pass
I added --ask-sudo-pass to ensure that it was not a permission issue.
OUTPUT
SUDO password:
PLAY [Run Docker compose] ******************************************************
TASK [debug] *******************************************************************
ok: [prolims-staging] => {
"msg": "Container - prolims-staging"
}
TASK [docker_service] **********************************************************
fatal: [prolims-staging]: FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}
to retry, use: --limit @/data/prolims-provision/provision-docker.retry
PLAY RECAP *********************************************************************
prolims-staging : ok=1 changed=0 unreachable=0 failed=1
I tried looking for this issue on other forums (and in similar questions here on Stack Overflow), but they were not helpful.
Note: I am able to run docker-compose successfully in the target machine from its CLI (using sudo).
Also, I tried playing around with docker_container, executing a playbook with the contents below:
...
- name: check container status
  command: docker ps
  register: result
- name: Create a container
  docker_container:
    name: db_pg
    image: "postgres:latest"
    state: present
    recreate: yes
...
and running this playbook works perfectly fine.
I assume posting my docker-compose file is not relevant here.
I followed this example, but it did not work. Maybe I am missing something stupid or really important here.
Any help on understanding and resolving this issue would be appreciated.
I am able to run docker-compose successfully in the target machine from its CLI (using sudo).
So you need to use a become declaration for the task.
I added --ask-sudo-pass to ensure that it was not a permission issue.
Just adding --ask-sudo-pass to the ansible-playbook parameters has no effect unless the relevant tasks or plays have a become declaration (with become_method set to sudo, which is the default).
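A minimal sketch of the same play with escalation enabled; everything else mirrors the playbook from the question:

---
- name: Run Docker compose
  hosts: all
  gather_facts: no
  become: yes   # escalate with sudo so the Docker daemon socket is accessible
  tasks:
    - docker_service:
        project_src: "compose"
        state: absent
    - docker_service:
        project_src: "compose"
        state: present

Note that recent Ansible versions prefer --ask-become-pass over the deprecated --ask-sudo-pass when prompting for the escalation password.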

Kitchen and Kitchen-docker

I am trying to use the kitchen-docker driver on a GNU/Linux machine. I have installed the kitchen-docker gem using the chef gem install command.
This is an extract of my .kitchen.yml file:
---
driver:
  name: docker
provisioner:
  name: chef_zero
verifier:
  name: inspec
platforms:
  - name: centos-7.2
    driver_config:
      image: centos:7.2
      platform: centos
suites:
  - name: zaz
    run_list:
      - recipe[foo::bar]
...
I have the latest version of Docker installed from the Docker repositories for CentOS. The service is running and docker is on my PATH. However, when I try to run a simple kitchen list using that .kitchen.yml, I get this error:
[FakeyMcFakeFace#workstation foo]$ kitchen list
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::UserError
>>>>>> Message: You must first install the Docker CLI tool http://www.docker.io/gettingstarted/
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration
Why is Docker not being recognized by Kitchen? If I run the diagnose --all option, I just see it failing on the dependency check:
backtrace:
- "/home/FakeyMcFakeFace/.chefdk/gem/ruby/2.3.0/gems/kitchen-docker-2.6.0/lib/kitchen/driver/docker.rb:93:in
`rescue in verify_dependencies'"
What am I missing here?
To copy down from the comments: kitchen-docker currently requires passwordless sudo (if sudo is used at all); the error message is misleading.
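Two common workarounds, sketched under the assumption that your user can simply join the docker group (the username is illustrative):

# Option 1: in .kitchen.yml, stop the driver from calling sudo
# (requires the user running kitchen to be in the docker group)
driver:
  name: docker
  use_sudo: false

# Option 2: add the user to the docker group, then log out and back in:
#   sudo usermod -aG docker FakeyMcFakeFace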

Concourse pending for long time before running task

I have a Concourse pipeline with a task using a Docker image that is stored on our local Artifactory server. Every time I start the pipeline, it takes about five minutes until the tasks finally run.
I assume that Concourse somehow checks for newer versions of the Docker image. Unfortunately, I have no way to debug this, since the log files on the Concourse worker VM offer no usable information.
My questions:
1. How can I debug what is going on while Concourse says "preparing build" and the status is "pending"?
2. Is there any way to stop Concourse from checking for a newer version of the Docker image? I tagged the Docker image latest - might this be an issue?
3. Any further ideas on how I could speed things up?
Here is the detailed configuration of my pipeline and tasks:
pipeline.yml:
---
resources:
  - name: concourse-image
    type: docker-image
    source:
      repository: OUR_DOMAIN/subpath/concourse
      username: ...
      password: ...
      insecure_registries:
        - OUR_DOMAIN
# ...
jobs:
  - name: deploy
    public: true
    plan:
      - get: concourse-image
      - task: create-manifest
        image: concourse-image
        file: concourse/tasks/create-manifest/task.yml
        params:
          # ...
task.yml:
---
platform: linux
inputs:
  - name: git
  - name: concourse
outputs:
  - name: deployment-manifest
run:
  path: concourse/tasks/create-and-upload-cloud-config/task.sh
The reason for this problem was that we pulled the Docker image from an internal Docker registry that runs on HTTP only. Concourse tried to pull the image over HTTPS, and it took around five minutes until Concourse fell back to HTTP (that's what a tcpdump on the worker showed us).
Changing the resource configuration to the following solved the problem:
resources:
  - name: concourse-image
    type: docker-image
    source:
      repository: OUR_SERVER:80/subpath/concourse
      username: docker-readonly
      password: docker-readonly
      insecure_registries:
        - OUR_SERVER:80
So basically the fix was adding the port explicitly to both repository and insecure_registries. As for avoiding the version check on every build (question 2), see the sketch below.
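As a hedged sketch, pinning a fixed tag instead of latest and raising the resource's check interval via Concourse's standard check_every setting cuts down the registry round-trips (the tag value is illustrative):

resources:
  - name: concourse-image
    type: docker-image
    check_every: 24h   # standard resource option; the default check interval is 1m
    source:
      repository: OUR_SERVER:80/subpath/concourse
      tag: 1.0.0       # illustrative; pin a version instead of relying on latest
      insecure_registries:
        - OUR_SERVER:80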
