How to Test Cookbook in CHEF using DOCKER - docker

I am trying to test a simple cookbook recipe using the Docker driver, but I get an error. Can someone please help? I have tried several things, but none of them worked. I also installed the kitchen-docker driver with: chef gem install kitchen-docker. Below are my configuration files, Chef versions, kitchen.yml, etc.
Chef version
Chef Workstation version: 22.6.973
Chef InSpec version: 4.56.20
Chef CLI version: 5.6.1
Chef Habitat version: 1.6.420
Test Kitchen version: 3.2.2
Cookstyle version: 7.32.1
Chef Infra Client version: 17.10.0
chef gem list kitchen-docker reports: kitchen-docker (2.13.0)
Below is my kitchen.yml file
---
driver:
  name: docker
  provision_command: curl -L https://www.chef.io/chef/install.sh | bash

provisioner:
  name: chef_zero
  ## product_name and product_version specifies a specific Chef product and version to install.
  ## see the Chef documentation for more details: https://docs.chef.io/workstation/config_yml_kitchen/
  # product_name: chef
  # product_version: 17

verifier:
  name: inspec

platforms:
  - name: ubuntu
  - name: centos-7
    driver_config:
      image: 'centos:7'
      platform: centos

transport:
  name: docker

suites:
  - name: default
    run_list:
      - recipe[docker-cookbook::default]
    verifier:
      inspec_tests:
        - test/integration/default
    attributes:
My recipe: default.rb
#
# Cookbook:: docker-cookbook
# Recipe:: default
#
# Copyright:: 2022, The Authors, All Rights Reserved.
file '/tmp/test.txt' do
  content 'This is managed by Rapidops'
  action :create
end
But when I run kitchen list, I get the error below:
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::UserError
>>>>>> Message: Kitchen YAML file /root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist.
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

As the error message indicates, you are running kitchen commands from the recipes directory:
/root/chef-repo/cookbooks/docker-cookbook/recipes/kitchen.yml does not exist
You should run kitchen commands from the root level of the cookbook, which in your case is /root/chef-repo/cookbooks/docker-cookbook.
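In other words, change to the directory that contains kitchen.yml before invoking Test Kitchen:

```shell
$ cd /root/chef-repo/cookbooks/docker-cookbook
$ kitchen list
```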

Related

Rasa 3.x deployed on Docker can't connect to Duckling server

I have Rasa running in one container and Duckling running in another Docker container.
I can access Duckling from the browser as well as Postman, but in the Rasa logs I get this error:
rasa4 | 2023-01-05 10:16:01 ERROR    rasa.nlu.extractors.duckling_entity_extractor - Failed to connect to duckling http server. Make sure the duckling server is running/healthy/not stale and the proper host and port are set in the configuration. More information on how to run the server can be found on github: https://github.com/facebook/duckling#quickstart Error: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /parse (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb33c0733a0>: Failed to establish a new connection: [Errno 111] Connection refused'))
this is my config.yml file:
# The config recipe.
# https://rasa.com/docs/rasa/model-configuration/
recipe: default.v1

# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: en

pipeline:
# # No configuration for the NLU pipeline was provided. The following default pipeline was used to train your model.
# # If you'd like to customize it, uncomment and adjust the pipeline.
# # See https://rasa.com/docs/rasa/tuning-your-model for more information.
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: LexicalSyntacticFeaturizer
  - name: CountVectorsFeaturizer
  - name: CountVectorsFeaturizer
    analyzer: char_wb
    min_ngram: 1
    max_ngram: 4
  - name: DIETClassifier
    epochs: 100
    constrain_similarities: true
  - name: EntitySynonymMapper
  - name: ResponseSelector
    epochs: 100
    constrain_similarities: true
  - name: FallbackClassifier
    threshold: 0.3
    ambiguity_threshold: 0.1
  - name: "DucklingHTTPExtractor"
    # url of the running duckling server
    url: "http://0.0.0.0:8000"
    # dimensions to extract
    dimensions: ["time", "number", "amount-of-money", "distance"]

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
# # No configuration for policies was provided. The following default policies were used to train your model.
# # If you'd like to customize them, uncomment and adjust the policies.
# # See https://rasa.com/docs/rasa/policies for more information.
#   - name: MemoizationPolicy
#   - name: RulePolicy
#   - name: UnexpecTEDIntentPolicy
#     max_history: 5
#     epochs: 100
#   - name: TEDPolicy
#     max_history: 5
#     epochs: 100
#     constrain_similarities: true
I have installed Docker on Linux and followed all the steps in the Rasa Docker installation instructions.
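One thing worth checking in this setup: inside the Rasa container, localhost (and 0.0.0.0) refer to the Rasa container itself, not to the host or to the Duckling container, which matches the "Connection refused" in the traceback. When both containers are on the same Docker network (for example via docker-compose), the Duckling URL usually points at the Duckling service name. A minimal sketch, where the service names and image tags are assumptions:

```yaml
version: "3"
services:
  rasa:
    image: rasa/rasa:3.4.0
    ports:
      - "5005:5005"
  duckling:
    image: rasa/duckling:latest
    ports:
      - "8000:8000"
```

With this layout, the extractor in config.yml would use url: "http://duckling:8000" rather than localhost or 0.0.0.0.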

jkube resource failed: Unknown type CRD

I am using JKube to deploy a Spring Boot hello-world application on my Kubernetes installation. I wanted to add a resource fragment defining a Traefik ingress route, but k8s:resource fails with "Unknown type 'ingressroute'".
IngressRoute has already been defined on the cluster using a custom resource definition.
How do I write my fragment?
The following works when I deploy it with kubectl.
# IngressRoute
---
kind: IngressRoute
apiVersion: traefik.containo.us/v1alpha1
metadata:
  name: demo
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.domainname.com`)
      kind: Rule
      services:
        - name: demo
          port: 80
@Rohan Kumar
Thank you for your answer. I can build and deploy it, but as soon as I add a file to use my IngressRoute, the k8s:resource goal fails.
I added files, one for each CRD, with filenames ending in -cr.yml, and added the following to the pom file:
<resources>
  <customResourceDefinitions>
    <customResourceDefinition>traefikservices.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsstores.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>tlsoptions.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>middlewares.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressrouteudps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutetcps.traefik.containo.us</customResourceDefinition>
    <customResourceDefinition>ingressroutes.traefik.containo.us</customResourceDefinition>
  </customResourceDefinitions>
</resources>
Example IngressRoute CustomResourceDefinition:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
But when running the k8s:resource I get the error:
Failed to execute goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource (default-cli) on project demo:
Execution default-cli of goal org.eclipse.jkube:kubernetes-maven-plugin:1.0.2:resource failed: Unknown type
'ingressroute' for file 005-ingressroute.yml. Must be one of : pr, lr, pv, project, replicaset, cronjob, ds,
statefulset, clusterrolebinding, pvc, limitrange, imagestreamtag, replicationcontroller, is, rb, rc, ingress, route,
projectrequest, job, rolebinding, rq, template, serviceaccount, bc, rs, rbr, role, pod, oauthclient, ns,
resourcequota, secret, persistemtvolumeclaim, istag, customerresourcedefinition, sa, persistentvolume, crb,
clusterrb, crd, deploymentconfig, configmap, deployment, imagestream, svc, rolebindingrestriction, cj, cm,
buildconfig, daemonset, cr, crole, pb, clusterrole, pd, policybinding, service, namespace, dc
I'm from the Eclipse JKube team. We have improved CustomResource support a lot in our recent v1.2.0 release. Now you only need to worry about how you name your CustomResource fragment, and Eclipse JKube will detect the CustomResourceDefinition for the specified IngressRoute.
You need to name CustomResource fragments with a *-cr.yml suffix; this distinguishes them from standard Kubernetes resources. For example, I added your IngressRoute fragment to my src/main/jkube like this:
jkube-custom-resource-fragments : $ ls src/main/jkube/
ats-crd.yml crontab-crd.yml dummy-cr.yml podset-crd.yaml traefic-crd.yaml
ats-cr.yml crontab-cr.yml ingressroute-cr.yml second-dummy-cr.yml traefic-ingressroute2-cr.yml
crd.yaml dummy-crd.yml istio-crd.yaml test2-cr.yml virtualservice-cr.yml
jkube-custom-resource-fragments : $ ls src/main/jkube/traefic-ingressroute2-cr.yml
src/main/jkube/traefic-ingressroute2-cr.yml
Then you should be able to see your IngressRoute generated after k8s:resource phase:
$ mvn k8s:resource
...
$ cat target/classes/META-INF/jkube/kubernetes.yml
You can then go ahead and apply these generated manifests to your Kubernetes Cluster with apply goal:
$ mvn k8s:apply
...
$ kubectl get ingressroute
NAME   AGE
demo   17s
foo    16s
I tried all this on this reproducer project and it seemed to be working okay for me: https://github.com/r0haaaan/jkube-custom-resource-fragments

spring-cloud-skipper : How to delete and re-deploy a package

I want to redeploy a package but I am getting an error:
skipper:>package install --package-name sg-cloud-MbakTestworld --package-version 0.0.1 --release-name MbakTestworld --file E:\skipper\apps\MbakTestworld-upgrade-local.yml
Result:
Release with the name [] already exists and it is not deleted. Details
of the error have been omitted. You can use the stacktrace command to
print the full stacktrace.
My yml File:
spec:
  applicationProperties:
    server.port: 8029
    spring.profiles.active: mbakCloud
  deploymentProperties:
    spring.cloud.deployer.memory: 512m
I found the answer:
skipper:>release delete --release-name MbakTestworld
MbakTestworld has been deleted.
Source: https://docs.spring.io/spring-cloud-skipper/docs/current/reference/htmlsingle/
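So a redeploy is a two-step sequence in the Skipper shell: delete the existing release, then install the package again:

```shell
skipper:>release delete --release-name MbakTestworld
skipper:>package install --package-name sg-cloud-MbakTestworld --package-version 0.0.1 --release-name MbakTestworld --file E:\skipper\apps\MbakTestworld-upgrade-local.yml
```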

Not able to find curator.yml (elasticsearch-curator) in linux

The official Elasticsearch documentation says the default config file is at /home/username/.curator/curator.yml:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/command-line.html
But there is no such folder.
Also, I tried creating curator.yml and giving its path with the --config option, but it throws an error:
curator --config ./curator.yml
Error: no such option: --config
Installation was done using apt
sudo apt-get update && sudo apt-get install elasticsearch-curator
Please help me create a config file, as I want to delete my log indexes.
Please note that the documentation does not say that the file exists; it says:
If --config and CONFIG.YML are not provided, Curator will look in ~/.curator/curator.yml for the configuration file.
The file must be created by the end user.
Also, if you installed via:
sudo apt-get update && sudo apt-get install elasticsearch-curator
but did not add the official Elastic repository for Curator, then you installed an older version. Please check which version you are running with:
$ curator --version
curator, version 5.4.1
If you do not see the current version (5.4.1 at the time this answer was added), then you do not have the appropriate repository installed.
The official documentation provides an example client configuration file here.
There are also many examples of action files in the documentation's examples section.
Yes, one needs to create both the curator.yml and action.yml files.
Since I am on CentOS 7, I installed Curator from the RPM, which defaults to /opt/elastic-curator. I followed this good (but badly formatted) blog: https://anchormen.nl/blog/big-data-services/monitoring-aws-redshift-with-elasticsearch/ to get the files as follows (you may modify them according to your needs):
curator.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - <host1>
    - <host2, likewise up to hostN>
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile: /var/log/curator.log
  logformat: default
  blacklist: []
and an action.yml as follows:
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: rollover
    description: >-
      Rollover the index associated with index 'name', which should be in the
      form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_metrics_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
  2:
    action: rollover
    description: >-
      Rollover the index associated with index 'name', which should be in the
      form of prefix-000001 (or similar), or prefix-YYYY.MM.DD-1.
    options:
      disable_action: False
      name: redshift_query_metadata_daily
      conditions:
        max_age: 1d
      extra_settings:
        index.number_of_shards: 2
        index.number_of_replicas: 1
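With both files in place, Curator is run by passing the client config via --config and the action file as the positional argument (the paths below are assumptions based on the /opt/elastic-curator install location above):

```shell
$ curator --config /opt/elastic-curator/curator.yml /opt/elastic-curator/action.yml
```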

SaltStack: dockerng is not available

I'm quite new to SaltStack. I've set up a salt-master and a salt-minion (via salt-cloud on my ESXi). It works fine so far. However, I cannot get dockerng to run any function on my minion; it always returns 'dockerng.xxxx' is not available:
# salt '*' test.ping
minion1:
True
$ salt '*' dockerng.version
minion1:
'dockerng.version' is not available.
However, when I call the same function with salt-call directly on the minion:
$ salt-call dockerng.version
[INFO ] Determining pillar cache
local:
----------
ApiVersion:
1.23
Any hints/ideas?
Have you installed the python docker module on the minion itself? That's a requirement.
I just encountered exactly the same situation. Installing 'docker-py' on the salt-master worked for me. Of course, as suggested by Utah_Dave, docker-py would also be needed on any minion that would be targeted by dockerng.
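A minimal sketch of that fix, assuming pip targets the Python interpreter that Salt uses (on newer Salt and Docker versions the module to install is docker rather than the legacy docker-py):

```shell
# on each minion that should run dockerng functions
$ pip install docker-py
# then have the minions reload their execution modules
$ salt '*' saltutil.sync_modules
$ salt '*' dockerng.version
```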
I encountered this problem while using an image with docker pre-installed. The solution that works for me is to restart the salt-minion daemon:
salt-minion:
  pkg:
    - installed
    - name: salt-minion
  service.running:
    - enable: True
    - require:
      - pkg: salt-minion
      - service: docker
      - pip: docker-py
    - watch:
      - pip: docker-py
taken from http://humankeyboard.com/saltstack/2013/how-to-restart-salt-minion.html
Unfortunately, the dockerng module doesn't work until the second run from the master. I'm still playing with watch and reload_modules trying to get this to work.
https://docs.saltstack.com/en/latest/ref/states/index.html#reloading-modules
