How to stop/start logstash service running in docker

I'm trying to figure out how Logstash works and runs inside Docker, and I'm stuck on something as simple as starting and stopping Logstash.
I started a Logstash Docker container with a simple run:
docker run -it --name l2 logstash
with this result:
[Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
The next thing is running /bin/bash with the exec command, to get inside the running container.
docker exec -it l2 /bin/bash
root@1b55d3a40d3f:/#
Listing the service statuses shows that there is no logstash service running.
Where can I find the logstash service so I can stop/start it?
root@1b55d3a40d3f:/# service --status-all
[ - ] bootlogs
[ - ] bootmisc.sh
[ - ] checkfs.sh
[ - ] checkroot-bootclean.sh
[ - ] checkroot.sh
[ - ] dbus
[ - ] hostname.sh
[ ? ] hwclock.sh
[ - ] killprocs
[ - ] motd
[ - ] mountall-bootclean.sh
[ - ] mountall.sh
[ - ] mountdevsubfs.sh
[ - ] mountkernfs.sh
[ - ] mountnfs-bootclean.sh
[ - ] mountnfs.sh
[ - ] procps
[ - ] rc.local
[ - ] rmnologin
[ - ] sendsigs
[ + ] udev
[ ? ] udev-finish
[ - ] umountfs
[ - ] umountnfs.sh
[ - ] umountroot
[ - ] urandom
[ - ] x11-common

Logstash in the container is not run as a system service; the entrypoint in the image starts a process and keeps the container up until that process ends or fails.
If you do a docker top l2, it will show the logstash process running (probably alone) in the container.
To stop Logstash, stop the container with docker stop l2; when you need it again later, run docker start l2. This works as long as you set the container's name to l2 when you create or first run it (see the sketch after the links below).
Docker Start help: https://docs.docker.com/engine/reference/commandline/start/
Docker stop help: https://docs.docker.com/engine/reference/commandline/stop/
Docker create: https://docs.docker.com/engine/reference/commandline/create/
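For example, the full lifecycle with a named container looks like this (a sketch using the image from the question; -d runs the container detached instead of -it):
# create and start a container named l2, as in the question
docker run -d --name l2 logstash
# show the process(es) running inside the container
docker top l2
# stop the container, and with it the logstash process
docker stop l2
# start the same container again later
docker start l2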

Related

How to enable rabbitmq plugin "rabbitmq_delayed_message_exchange" if rabbitmq was deployed using the rabbitmq operator in kubernetes

I deployed a RabbitMQ instance using the rabbitmq operator in Kubernetes, and I'm trying to enable the rabbitmq_delayed_message_exchange plugin.
I tried defining my RabbitmqCluster as:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: definition
spec:
  replicas: 1
  rabbitmq:
    additionalPlugins:
      - rabbitmq_management
      - rabbitmq_delayed_message_exchange
  service:
    type: LoadBalancer
And then I ran kubectl apply -f definition.yaml
But my pod logs showed this:
...
2020-10-05T15:42:15.081783023Z 2020-10-05 15:42:15.081 [info] <0.535.0> Server startup complete; 6 plugins started.
2020-10-05T15:42:15.081802701Z * rabbitmq_prometheus
2020-10-05T15:42:15.08180602Z * rabbitmq_peer_discovery_k8s
2020-10-05T15:42:15.081808816Z * rabbitmq_peer_discovery_common
2020-10-05T15:42:15.081811359Z * rabbitmq_management
2020-10-05T15:42:15.08181387Z * rabbitmq_web_dispatch
2020-10-05T15:42:15.081825082Z * rabbitmq_management_agent
2020-10-05T15:42:15.081951576Z completed with 6 plugins.
...
There wasn't any reference to this plugin in the logs.
I went inside my rabbitmq pod and ran: rabbitmq-plugins list
Listing plugins with pattern ".*" ...
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@definition-rabbitmq-server-0.definition-rabbitmq-headless.default
|/
[ ] rabbitmq_amqp1_0 3.8.8
[ ] rabbitmq_auth_backend_cache 3.8.8
[ ] rabbitmq_auth_backend_http 3.8.8
[ ] rabbitmq_auth_backend_ldap 3.8.8
[ ] rabbitmq_auth_backend_oauth2 3.8.8
[ ] rabbitmq_auth_mechanism_ssl 3.8.8
[ ] rabbitmq_consistent_hash_exchange 3.8.8
[ ] rabbitmq_event_exchange 3.8.8
[ ] rabbitmq_federation 3.8.8
[ ] rabbitmq_federation_management 3.8.8
[ ] rabbitmq_jms_topic_exchange 3.8.8
[E*] rabbitmq_management 3.8.8
[e*] rabbitmq_management_agent 3.8.8
[ ] rabbitmq_mqtt 3.8.8
[ ] rabbitmq_peer_discovery_aws 3.8.8
[e*] rabbitmq_peer_discovery_common 3.8.8
[ ] rabbitmq_peer_discovery_consul 3.8.8
[ ] rabbitmq_peer_discovery_etcd 3.8.8
[E*] rabbitmq_peer_discovery_k8s 3.8.8
[E*] rabbitmq_prometheus 3.8.8
[ ] rabbitmq_random_exchange 3.8.8
[ ] rabbitmq_recent_history_exchange 3.8.8
[ ] rabbitmq_sharding 3.8.8
[ ] rabbitmq_shovel 3.8.8
[ ] rabbitmq_shovel_management 3.8.8
[ ] rabbitmq_stomp 3.8.8
[ ] rabbitmq_top 3.8.8
[ ] rabbitmq_tracing 3.8.8
[ ] rabbitmq_trust_store 3.8.8
[e*] rabbitmq_web_dispatch 3.8.8
[ ] rabbitmq_web_mqtt 3.8.8
[ ] rabbitmq_web_mqtt_examples 3.8.8
[ ] rabbitmq_web_stomp 3.8.8
[ ] rabbitmq_web_stomp_examples 3.8.8
and checked the pod plugins/ directory:
README
accept-0.3.5.ez
amqp10_client-3.8.8.ez
amqp10_common-3.8.8.ez
amqp_client-3.8.8.ez
aten-0.5.5.ez
base64url-0.0.1.ez
cowboy-2.6.1.ez
cowlib-2.7.0.ez
credentials_obfuscation-2.2.0.ez
cuttlefish-2.4.1.ez
eetcd-0.3.3.ez
gen_batch_server-0.8.4.ez
getopt-1.0.1.ez
goldrush-0.1.9.ez
gun-1.3.3.ez
jose-1.10.1.ez
jsx-2.11.0.ez
lager-3.8.0.ez
observer_cli-1.5.4.ez
prometheus-4.6.0.ez
ra-1.1.6.ez
rabbit-3.8.8.ez
rabbit_common-3.8.8.ez
rabbitmq_amqp1_0-3.8.8.ez
rabbitmq_auth_backend_cache-3.8.8.ez
rabbitmq_auth_backend_http-3.8.8.ez
rabbitmq_auth_backend_ldap-3.8.8.ez
rabbitmq_auth_backend_oauth2-3.8.8.ez
rabbitmq_auth_mechanism_ssl-3.8.8.ez
rabbitmq_aws-3.8.8.ez
rabbitmq_consistent_hash_exchange-3.8.8.ez
rabbitmq_event_exchange-3.8.8.ez
rabbitmq_federation-3.8.8.ez
rabbitmq_federation_management-3.8.8.ez
rabbitmq_jms_topic_exchange-3.8.8.ez
rabbitmq_management-3.8.8.ez
rabbitmq_management_agent-3.8.8.ez
rabbitmq_mqtt-3.8.8.ez
rabbitmq_peer_discovery_aws-3.8.8.ez
rabbitmq_peer_discovery_common-3.8.8.ez
rabbitmq_peer_discovery_consul-3.8.8.ez
rabbitmq_peer_discovery_etcd-3.8.8.ez
rabbitmq_peer_discovery_k8s-3.8.8.ez
rabbitmq_prelaunch-3.8.8.ez
rabbitmq_prometheus-3.8.8.ez
rabbitmq_random_exchange-3.8.8.ez
rabbitmq_recent_history_exchange-3.8.8.ez
rabbitmq_sharding-3.8.8.ez
rabbitmq_shovel-3.8.8.ez
rabbitmq_shovel_management-3.8.8.ez
rabbitmq_stomp-3.8.8.ez
rabbitmq_top-3.8.8.ez
rabbitmq_tracing-3.8.8.ez
rabbitmq_trust_store-3.8.8.ez
rabbitmq_web_dispatch-3.8.8.ez
rabbitmq_web_mqtt-3.8.8.ez
rabbitmq_web_mqtt_examples-3.8.8.ez
rabbitmq_web_stomp-3.8.8.ez
rabbitmq_web_stomp_examples-3.8.8.ez
ranch-1.7.1.ez
recon-2.5.1.ez
stdout_formatter-0.2.4.ez
syslog-3.4.5.ez
sysmon_handler-1.3.0.ez
So the plugin does not come bundled with the image.
I also found this:
How to install rabbitmq plugin on kubernetes?
But it makes no reference to the rabbitmq operator, and it was asked in June 2018. The rabbitmq operator configuration also has no reference to using lifecycle hooks to mount the .ez file.
One idea that comes to mind is creating my own rabbitmq image, based on the official rabbitmq image, with the plugin added:
FROM rabbitmq:3.8.8-management
RUN apt-get update
RUN apt-get install -y curl
RUN curl -L https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/v3.8.0/rabbitmq_delayed_message_exchange-3.8.0.ez > $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN chown rabbitmq:rabbitmq $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
RUN rabbitmq-plugins enable --offline rabbitmq_consistent_hash_exchange
A second idea is to mount the file into the pod's plugins directory by defining a ConfigMap with the file and using volumeMounts, but I couldn't find any reference to using volumeMounts with the rabbitmq operator.
Is there a preferred way or any other way to enable it?
This plugin can be enabled by using a custom RabbitMQ Docker image with the plugin installed:
FROM rabbitmq:3.8.8-management
RUN apt-get update
RUN apt-get install -y curl
RUN curl -L https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/releases/download/v3.8.0/rabbitmq_delayed_message_exchange-3.8.0.ez > $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN chown rabbitmq:rabbitmq $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
RUN rabbitmq-plugins enable --offline rabbitmq_consistent_hash_exchange
Notice: for reproducible results, or in case you can't depend on external downloads, download the plugin to your machine and use COPY instead of RUN curl... Make sure to have the rabbitmq_delayed_message_exchange .ez file on your machine.
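A minimal sketch of that COPY variant, assuming the .ez file was downloaded next to the Dockerfile beforehand:
FROM rabbitmq:3.8.8-management
# copy the plugin from the build context instead of downloading it at build time
COPY rabbitmq_delayed_message_exchange-3.8.0.ez $RABBITMQ_HOME/plugins/
RUN chown rabbitmq:rabbitmq $RABBITMQ_HOME/plugins/rabbitmq_delayed_message_exchange-3.8.0.ez
RUN rabbitmq-plugins enable --offline rabbitmq_delayed_message_exchange
RUN rabbitmq-plugins enable --offline rabbitmq_consistent_hash_exchange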
Push your image to a container registry.
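For example (hypothetical image name, using the same placeholders as the manifest below; substitute your registry and username):
docker build -t <container-registry>/<username>/rabbitmq-delayed-message-exchange:3.8.8-management .
docker push <container-registry>/<username>/rabbitmq-delayed-message-exchange:3.8.8-management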
And then define your RabbitmqCluster as:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: definition
spec:
  image: <container-registry>/<username>/rabbitmq-delayed-message-exchange:3.8.8-management
  replicas: 1
  rabbitmq:
    additionalPlugins:
      - rabbitmq_management
      - rabbitmq_delayed_message_exchange
  service:
    type: LoadBalancer
Notice: Change the image to the one you pushed.
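To verify, you can list the plugins inside the pod again (the pod name here is taken from the question's own output; adjust it for your cluster):
kubectl exec definition-rabbitmq-server-0 -- rabbitmq-plugins list | grep delayed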

Artifactory OSS install with Docker

I'm trying to install Artifactory OSS using Docker.
I'm running Ubuntu 18.04 and Docker 19.03.8.
I followed the JFrog installation guide https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-DockerInstallation
I did all the steps, except that chown -R 1030:1030 $JFROG_HOME/artifactory/var must be run with sudo.
The container starts, but when I go to http://myhost:8082/ui/ I only see a page with the JFrog logo pulsing with a zoom-in/zoom-out effect.
In the logs I see:
################################################################
### All services started successfully in 116.053 seconds ###
################################################################
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:05.070Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:05.071Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:05.109Z [jfac ] [INFO ] [ ] [alConfigurationServiceBase:182] [c-default-executor-1] - Loading configuration from db finished successfully
2020-03-26T07:27:07.104Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on :8082
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8046
2020-03-26T07:27:07.105Z [jfrou] [INFO ] [ ] [server_configuration.go:61 ] [main ] - Server configuration reloaded on localhost:8049
2020-03-26T07:27:10.084Z [jfrou] [WARN ] [6ec6165e7fec2711] [ternal_topology_verifier.go:92] [main ] - failed pinging external node 'f461d2eebfe3' at address 'http://172.17.0.2:8082': Get http://172.17.0.2:8082/router/api/v1/system/ping: context deadline exceeded
The last line appears when I request the URL in the browser.
What can I do?
Edit:
I also tried with docker-compose, again following the JFrog guide.
On the first run, Artifactory did not start!
After editing .jfrog/artifactory/var/etc/system.yaml, changing 127.0.0.1 to my host name, and running config.sh again, Artifactory starts.
But the same problem occurs when accessing http://myhost:8082/ui/.
I don't understand what's happening and why it isn't working when I follow the JFrog guides...
In my case, it turned out that my proxy settings were blocking the HTTP client from contacting the local endpoint.
I updated docker-compose.yml to include the no_proxy and noproxy environment variables, and Artifactory runs without any complaint:
services:
  artifactory:
    environment:
      - http_proxy=*********
      - https_proxy=*********
      - no_proxy=*********
      - noproxy=172.16.0.0/12
    image: docker.bintray.io/jfrog/artifactory-oss:latest
The solution was quite simple: try another browser!
With Edge it's not working.
With Firefox it's working...

Unable to run JUnit5 tests with Bazel inside Docker container

I have a Kotlin project built with Bazel, with some JUnit 5 tests that I run with:
bazel run //my_service:tests
and this is the output:
Test run finished after 1195 ms
[ 3 containers found ]
[ 0 containers skipped ]
[ 3 containers started ]
[ 0 containers aborted ]
[ 3 containers successful ]
[ 0 containers failed ]
[ 5 tests found ]
[ 0 tests skipped ]
[ 5 tests started ]
[ 0 tests aborted ]
[ 5 tests successful ]
[ 0 tests failed ]
5 tests successful. So far, so good. But when the tests run inside the Bazel Docker container, I get this output:
Test run finished after 79 ms
[ 1 containers found ]
[ 0 containers skipped ]
[ 1 containers started ]
[ 0 containers aborted ]
[ 1 containers successful ]
[ 0 containers failed ]
[ 0 tests found ]
[ 0 tests skipped ]
[ 0 tests started ]
[ 0 tests aborted ]
[ 0 tests successful ]
[ 0 tests failed ]
As you can see, no tests are found. Why?
I run the tests inside the container with these commands:
$ docker run -it -v $(pwd):/my_service --entrypoint "" l.gcr.io/google/bazel:2.2.0 /bin/bash
$ cd my_service
$ bazel run //my_service:tests
I'm using Bazel 2.2.0 in both the local environment and the Docker image. Why am I not getting the same output?
I found the solution, and it was really weird: I was using the register_toolchains rule instead of kt_register_toolchain. Silly me.
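For reference, a minimal WORKSPACE sketch of the Kotlin toolchain registration with rules_kotlin (the load path and exact symbol names vary by rules_kotlin version; the legacy API exports kt_register_toolchains):
load("@io_bazel_rules_kotlin//kotlin:kotlin.bzl", "kotlin_repositories", "kt_register_toolchains")

kotlin_repositories()     # fetch the Kotlin compiler repositories
kt_register_toolchains()  # register the toolchain so kt_* rules can run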

Docker entrypoint doesn't find command

I am trying to run the znc Docker container with docker-compose. I have tried to follow the docs, using --makeconf, but something is wrong with my config.
$ docker-compose up
Starting server_znc_service_1 ... done
Attaching to server_znc_service_1
znc_service_1 | /entrypoint.sh: exec: line 6: znc: not found
server_znc_service_1 exited with code 127
docker-compose.yml
version: '3.2'
services:
  znc_service:
    image: library/znc
    volumes:
      - znc-cfg-volume:/znc-data
    ports:
      - "6697:6697"
    environment:
      VIRTUAL_HOST: "znc.localhost"
    command: ["znc", "--makeconf"]
volumes:
  znc-cfg-volume:
First of all, create the compose container without starting it:
docker-compose up --no-start
Then, if you try to run it, you will see the reason:
$ docker run -it znc
[ .. ] Checking for list of available modules...
[ >> ] ok
[ .. ] Opening config [/znc-data/configs/znc.conf]...
[ !! ] No such file
[ ** ] Restart ZNC with the --makeconf option if you wish to create this config.
[ ** ] Unrecoverable config error.
Then just run it with --makeconf (a note on persisting the config follows the output below):
docker run -it znc --makeconf
[ .. ] Checking for list of available modules...
[ >> ] ok
[ ** ]
[ ** ] -- Global settings --
[ ** ]
[ ?? ] Listen on port (1025 to 65534):
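Note that a config generated this way lands in the container's own filesystem unless you mount the compose volume. A sketch, assuming the compose project prefix server_ seen in the output above (check docker volume ls for the actual volume name):
docker run -it -v server_znc-cfg-volume:/znc-data znc --makeconf
This way the generated znc.conf persists in the same volume the compose service mounts at /znc-data.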

Running many Docker instances on Google Cloud with different command-line parameters

I made a computation Docker image which runs fine locally. I uploaded it to Google Cloud and could run it there. But what I really need is to run hundreds of instances, each with a different argument:
docker run -t dxyz arg0
docker run -t dxyz arg1
docker run -t dxyz arg2
...
What is the best way to do it? I tried kubectl pods, but it looks like they are supposed to be identical.
This is pretty clunky due to the nesting, and because it requires you to specify the replication controller's name and image twice, but you can technically use the following (a scripted version is sketched after the commands):
kubectl run dxyz0 --image=dxyz --overrides='{"apiVersion": "v1", "spec": {"template": {"spec": {"containers": [ {"name": "dxyz0", "image": "dxyz", "args": [ "arg0" ] } ] } } } }'
kubectl run dxyz1 --image=dxyz --overrides='{"apiVersion": "v1", "spec": {"template": {"spec": {"containers": [ {"name": "dxyz1", "image": "dxyz", "args": [ "arg1" ] } ] } } } }'
...
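If you need hundreds of instances, a shell loop can generate these commands (a sketch following the same override pattern, with the double quotes escaped so $i expands inside the JSON):
# run 100 instances, dxyz0..dxyz99, each with its own argument
for i in $(seq 0 99); do
  kubectl run "dxyz$i" --image=dxyz --overrides="{\"apiVersion\": \"v1\", \"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"dxyz$i\", \"image\": \"dxyz\", \"args\": [\"arg$i\"]}]}}}}"
done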
