I have defined the following Jelastic configuration for my environment:
env:
  topology:
    nodes:
      - nodeGroup: bl
        nodeType: nginx-dockerized
        tag: 1.14.2
        displayName: Node balancing
        count: 1
        fixedCloudlets: 1
        cloudlets: 4
        env:
          DOCKER_EXPOSED_PORT: 22,80,443
      - image: jenkins/jenkins:lts
        count: 1
        cloudlets: 16
        nodeGroup: cp
      - nodeGroup: sqldb
Now, I want the users of my environment to access my docker application only through the load balancing node. From Jelastic's dashboard, I can't seem to configure any firewall rules for the cp node group. How can I close any connection to the Jenkins node from the outside world and only keep it open from the nginx node?
Your environment uses a custom Docker image as its cp layer, which is why the UI Firewall is not available for that node group. More details are in the Container Firewall Rules Management article.
Note that even if your cp layer were a Jelastic-certified dockerized template rather than a custom Docker image, the UI Firewall would be available, but you still could not close direct access to this node via the Shared Load Balancer, due to internal limitations that will be addressed in future releases. You can find some information on this here.
A custom Docker container, unlike other template types, gives you full root access, so you can configure the firewall from the command line.
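For example, a minimal iptables sketch run on the Jenkins (cp) node, assuming Jenkins listens on its default port 8080 and using a placeholder internal IP (10.101.1.10) for the nginx node:
# Accept traffic from the nginx balancer only, drop everything else on the Jenkins port
iptables -A INPUT -p tcp --dport 8080 -s 10.101.1.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
Remember to persist these rules (e.g. with iptables-save) so they survive a container restart.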
I am trying to build a monitoring system with Prometheus, Grafana, and node exporter. I have a docker-compose file that spins up the Prometheus and Grafana containers but not node-exporter. According to Node exporter GitHub documentation, it is not recommended to deploy node-exporter as a container because it requires access to the host system.
Is it possible to use the Node exporter installed in the host machine alongside my docker-compose? If yes, what additional configs do I need to add?
There is a good guide here. One thing to note is that node_exporter is built for Linux/BSD. If you are running your compose file on Windows, you would need to run windows_exporter instead; the process below would be different, but similar in principle.
To summarize the guide, download the latest (or a specific) release and run the binary:
wget https://github.com/prometheus/node_exporter/releases/download/v*/node_exporter-*.*-amd64.tar.gz
tar xvfz node_exporter-*.*-amd64.tar.gz
cd node_exporter-*.*-amd64
./node_exporter
you should be able to access it over port 9100 on your host machine.
curl http://localhost:9100/metrics
Then set up your prometheus.yml file (the one you have baked into your Docker image, or are bind-mounting through docker-compose) like:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['<host ip/fqdn>:9100']
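Note that if Prometheus itself runs in a container, localhost inside that container is not the Docker host. A sketch assuming Docker Desktop's host.docker.internal alias (on plain Linux you would typically use the host's LAN IP or the docker0 bridge address instead):
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['host.docker.internal:9100']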
When you run the node_exporter script you can pass it a list of collectors you want to enable/disable. As you tune your monitoring system you will find metrics that you want to track and some that just are not helpful. You can save some overhead by removing collectors that your monitoring system is not going to use.
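For example, individual collectors can be switched off with --no-collector.<name> flags (exact collector names may vary between node_exporter versions):
./node_exporter --no-collector.wifi --no-collector.hwmon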
I'm currently migrating a legacy server to Kubernetes, and I found that kubectl or dashboard only shows the latest log file, not the older versions. In order to access the old files, I have to ssh to the node machine and search for it.
In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.
So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?
Of course, in a pinch, I could probably just execute something like run_server.sh | tee /var/log/my-own.log when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.
There are a couple of approaches, depending on the scenario. If you are just interested in the logs of the same pod from before its last restart, you can use the --previous flag:
kubectl logs -f <pod-name-xyz> --previous
But since in your case, you are interested in looking at log files beyond one rotation, here is how you can do it. Add a sidecar container to your application container:
    volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
  - name: log-helper
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /tmp/logs/*.log']
    volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
  volumes:
  - name: varlog
    hostPath:
      path: /var/log
This mounts the host's /var/log directory at /tmp/logs inside the sidecar container, and the tail command keeps streaming the contents of all the log files there. Now you can run:
kubectl logs <pod-name-abc> -c log-helper
This solution does away with SSH access, but it still requires kubectl access and adding a sidecar container. I still think this is a poor solution, and you should consider one of the options from the Kubernetes cluster-level logging architecture documentation.
The hyperledger project has a built-in docker image definition for running peer nodes. Given the vagrant focused development environment documentation, it's not immediately obvious that you can set up your own chain network using docker-compose.
To do that, first build the docker image by running this test (this test step is entirely dedicated to building the image):
go test github.com/hyperledger/fabric/core/container -run=BuildImage_Peer
Once the image is built, use docker-compose to launch the peer nodes. This folder has some pre-built yaml files for docker-compose:
github.com/hyperledger/fabric/bddtests
Use the following command to launch 3 peers (for instance):
docker-compose -f docker-compose-3.yml up --force-recreate -d
After the container instances are up, use docker inspect to get the IP addresses and use port 5000 to call the REST APIs (refer to the documentation for REST API spec).
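For example (the container name below is hypothetical; check docker ps for the actual names the compose file creates):
docker inspect --format '{{ .NetworkSettings.IPAddress }}' bddtests_vp0_1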
Now that the Hyperledger Fabric project has published its inaugural release (v0.5-developer-preview), we have begun publishing official Hyperledger docker images for the fabric-baseimage, fabric-peer and fabric-membersrvc.
These images can be deployed, as noted by other respondents, using docker-compose. As noted above in the response by @tuand, the fabric/bddtests are a good source of compose files that could be repurposed.
Note that if you are running on a Mac or Windows using Docker for Mac (beta), you'll need to use port mapping to expose ports for a peer, as Docker for Mac does not support routing IP traffic to and from containers. Container linking works as expected. Hence, you will either need to map different ports for each of the peers, or only expose a single peer instance.
The following compose file will start a single peer node on a Mac using Docker for Mac. Simply run docker-compose up:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
You can look in the hyperledger/fabric github repository under the ./bddtests and ./consensus/docker-compose-files directories for examples on how to setup peer networks of 3, 4 or 5 nodes.
Remember to expose port 5000 for one of the validating peers so that you can use the REST api to interact with the peer node.
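With the "5000:5000" mapping above, a quick smoke test from the host could look like this (assuming the /chain endpoint described in that release's REST API documentation):
curl http://localhost:5000/chain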
There are two GitHub repositories that let you build Hyperledger Docker images you can run directly:
https://github.com/joequant/hyperledger
and
https://github.com/yeasy/docker-hyperledger-peer
Under yeasy there are some repositories that contain fabric deploy scripts.
I'm trying to make a Dockerfile based on the RabbitMQ repository with a customized policy set. The problem is that I can't use CMD or ENTRYPOINT, since doing so would override the base Dockerfile's and I'd have to come up with my own, which I don't want to do. Besides, if I don't use RUN, the policy would only be applied at run time, and I want it baked into the image, not just the container.
The other thing I could do is use the RUN command, but the problem with that is that the RabbitMQ server is not running at build time, and there's no --offline flag for the set_policy command of the rabbitmqctl program.
When I use docker's RUN command to set the policy, here's the error I face:
Error: unable to connect to node rabbit@e06f5a03fe1f: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@e06f5a03fe1f]

rabbit@e06f5a03fe1f:
  * connected to epmd (port 4369) on e06f5a03fe1f
  * epmd reports: node 'rabbit' not running at all
                  no other nodes on e06f5a03fe1f
  * suggestion: start the node

current node details:
- node name: 'rabbitmq-cli-136@e06f5a03fe1f'
- home dir: /var/lib/rabbitmq
- cookie hash: /Rw7u05NmU/ZMNV+F856Fg==
So is there any way I can set a policy for the RabbitMQ without writing my own version of CMD and/or ENTRYPOINT?
You're in a slightly tricky situation with RabbitMQ, as its mnesia data path is based on the hostname of the container.
root@bf97c82990aa:/# ls -1 /var/lib/rabbitmq/mnesia
rabbit@bf97c82990aa
rabbit@bf97c82990aa-plugins-expand
rabbit@bf97c82990aa.pid
For other image builds you could seed the data files, or write a script that RUN calls to launch the application or database and configure it. With RabbitMQ, the container host name will change between image build and runtime so the image's config won't be picked up.
I think you are stuck with doing the config on container creation or at startup time.
Options
Creating a wrapper CMD script that applies the policy after startup is a bit complex, as /usr/lib/rabbitmq/bin/rabbitmq-server runs rabbit in the foreground, which means you don't have access to an "after startup" hook. Docker doesn't really do background processes, so rabbitmq-server -detached isn't much help either.
If you were to use something like Ansible, Chef, or Puppet to set up the containers, you could configure a fixed hostname for the container at startup, then start it and configure the policy as the next step. This only needs to be done once, as long as the hostname is fixed and you are not using the --rm flag.
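For example, a fixed hostname can be set when the container is created (the hostname and container name below are placeholders):
docker run -d --hostname rabbit-node-1 --name rabbitmq rabbitmq:3.7-management
# once the broker has finished booting, apply the policy
docker exec rabbitmq rabbitmqctl set_policy ha-all '.*' '{"ha-mode":"all"}'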
At runtime, systemd could finish the configuration for a service with ExecStartPost; most service managers have a similar feature. You could end up dropping messages, or at least causing errors at every startup, if anything came in before the configuration was finished.
You can configure the policy as described here.
Docker compose:
rabbitmq:
  image: rabbitmq:3.7.8-management
  container_name: rabbitmq
  volumes:
    - ~/rabbitmq/data:/var/lib/rabbitmq:rw
    - ./rabbitmq/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf
    - ./rabbitmq/definitions.json:/etc/rabbitmq/definitions.json
  ports:
    - "5672:5672"
    - "15672:15672"
I am in the process of writing my first ever Ansible playbook and am in need of a bit of steering. I have a simple network that consists of 3 VMs:
ansible01 - my Ansible server (Ubuntu)
db01 - a DB (again, Ubuntu)
myapp01 - an Ubuntu VM hosting a Java app
I have configured my /etc/ansible/hosts file like so:
[databases]
db01.example.com
[app_servers]
myapp01.example.com
myapp02.example.com
I have configured SSH correctly, and I can run ansible all -m ping and Ansible is able to ping the DB and app server nodes. So far so good.
I’m trying to write three (3) Docker-related playbooks that will accomplish the following:
Ensure that Docker is running on all [databases] nodes as well as all [app_servers] nodes; if it is not installed and running, then install Docker engine and start running it. If it is installed but not running, restart it.
Stop/start/restart all containers running for a specific type of node (“role"?!?). For instance, I’d like to tell Ansible that I want to restart all containers running on all [app_servers] nodes.
Stop/start/restart an arbitrary container running on an arbitrary node. For instance, perhaps myapp01 has 2 containers running on it, fizz and buzz. I’d like to be able to tell Ansible to restart (specifically) myapp01’s fizz container, but not its buzz container, nor any myapp02 containers.
I believe these belong in three separate playbooks (correct me if I’m wrong or if there’s a better way). I took a stab at them. The first is setup_docker.yml:
- name: ensure docker engine is installed and running
  docker:
    name: *
    state: started
Then, for restarting all [app_servers], in restart_app_servers.yml:
- name: restart app servers
  docker:
    name: app_servers
    state: restarted
And for restarting an arbitrary container on a single node (restart_container.yml):
- name: restart a specific container
  docker:
    name: %name_of_container_and node%
    state: restarted
But several problems here:
In setup_docker.yml, how do I specify that all node types ([databases] and [app_servers]) should be affected? I know that asterisk (“*”) isn’t correct.
In restart_app_servers.yml, what is the proper value for the name field? How do I actually tell Ansible to restart all app_server nodes?
In restart_container.yml, how do I “inject” (pass in as arguments/variables) the node's and container’s names? Ideally I’d like to run this playbook against any node and any container.
Anything else jumping out at you as wrong?
Thanks in advance!
I think you have Plays and Playbooks mixed up in meaning here. The three things you have specified above, setup_docker.yml, restart_app_servers.yml, and restart_container.yml appear to be Plays. I recommend creating a Docker role which contains the tasks you have detailed here.
To address your problems:
In setup_docker.yml, how do I specify that all node types ([databases] and [app_servers]) should be affected? I know that asterisk (“*”) isn’t correct.
This is done at the Playbook level. You can specify which hosts you want to be affected by which tasks, e.g.:
#docker.yml
- hosts: all
  user: "{{ privileged_user }}"
  gather_facts: false
  roles:
    - install_docker
Then in your install_docker role, you would have something along the lines of:
- name: Add docker apt keys
  apt_key: keyserver=keyserver.ubuntu.com id=36A1D7869245C8950F966E92D8576A8BA88D21E9

- name: update apt
  apt_repository: repo='deb https://get.docker.com/ubuntu docker main' state=present

- name: Install docker
  apt: pkg=lxc-docker update_cache=yes
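To also cover the "installed but not running" case from your question, you could add a task along these lines (a sketch in the same shorthand style, assuming the service is named docker):
- name: Ensure docker service is running
  service: name=docker state=started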
In restart_app_servers.yml, what is the proper value for the name field? How do I actually tell Ansible to restart all app_server nodes?
I'm assuming you mean you wish to restart all Docker containers on each of the nodes which belong to the app-server group?
I would keep an inventory of all of the container names for each group (since this example is relatively simple). e.g:
#group_vars/app-server
all_containers: [ 'container1', 'container2', 'container3',.. 'containern' ]
From here you can use this inventory in your Play to restart each container. In your Playbook:
#restart_app_containers.yml
- hosts: app_servers
  user: "{{ privileged_user }}"
  gather_facts: false
  roles:
    - restart_app_servers
Then in the Play itself:
#restart_app_servers.yml
- name: restart app servers
docker:
name: {{ item }}
state: restarted
with_items: all_containers
In restart_container.yml, how do I “inject” (pass in as arguments/variables) the node's and container’s names? Ideally I’d like to run this playbook against any node and any container.
For this portion you would need to directly reference the host you want to act against. This can be done with a Dynamic Inventory, e.g.:
#sample.yml
- hosts: Tag_name_{{ public_name }}
  user: "{{ privileged_user }}"
  gather_facts: false
  roles:
    - example
This example assumes you are on AWS (hence the EC2 tag-based host pattern); the hosts pattern would vary by infrastructure.
Then in your actual play you listed, you can pass in the specific variable. Since it's a single container on a single host, you could do this via the command line:
ansible-playbook -i $INVENTORY_FILE -e container_name=$CONTAINER_NAME restart_single_container_on_single_host.yml
Where your Play would look something like:
- name: restart a specific container
  docker:
    name: "{{ container_name }}"
    state: restarted
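If you also want to choose the target node at run time rather than hard-coding a hosts pattern, one option is to template the hosts line with a variable; the playbook name and variable names below are illustrative, not part of the original setup:
#restart_single_container_on_single_host.yml
- hosts: "{{ target_host | default('all') }}"
  user: "{{ privileged_user }}"
  gather_facts: false
  tasks:
    - name: restart a specific container
      docker:
        name: "{{ container_name }}"
        state: restarted
Then run it against any node and container from the command line:
ansible-playbook -i $INVENTORY_FILE -e "target_host=myapp01.example.com container_name=fizz" restart_single_container_on_single_host.yml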