Translate docker run command (tensorflow-serving) into docker-compose

Here is the docker run command:
docker run -p 8501:8501 \
--name tfserving_classifier \
-e MODEL_NAME=img_classifier \
-t tensorflow/serving
Here is what I tried, but I am not able to get MODEL_NAME to work:
tensorflow-servings:
  container_name: tfserving_classifier
  ports:
    - 8501:8501
  command:
    - -e MODEL_NAME=img_classifier
    - -t tensorflow/serving

In Compose, -e VAR=value maps to an environment: entry and the image name goes under image:; neither belongs in command:, which overrides the container's startup command:
tensorflow-servings:
  container_name: tfserving_classifier
  image: tensorflow/serving
  environment:
    - MODEL_NAME=img_classifier
  ports:
    - 8501:8501
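For completeness, the working service above would sit under a top-level services: key. A minimal full file might look like this (the version line and quoting style are assumptions about your Compose setup, not from the original answer):

```yaml
version: "3"
services:
  tensorflow-servings:
    image: tensorflow/serving
    container_name: tfserving_classifier
    environment:
      - MODEL_NAME=img_classifier
    ports:
      - "8501:8501"
```

Note that the -t in the original docker run allocates a TTY; it is usually unnecessary in Compose (the closest equivalent would be tty: true).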

Related

Backup and restore Zookeeper and Kafka data

zookeeper cluster
sudo docker run -itd --name zookeeper-1 --restart=always -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=1 -e ZOO_SERVERS=0.0.0.0:2888:3888,ip2:2888:3888,ip3:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-2 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=2 -e ZOO_SERVERS=ip1:2888:3888,0.0.0.0:2888:3888,ip3:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
sudo docker run -itd --name zookeeper-3 --restart=always --network sevenchats -e ALLOW_ANONYMOUS_LOGIN=yes -e ZOO_SERVER_ID=3 -e ZOO_SERVERS=ip1:2888:3888,ip2:2888:3888,0.0.0.0:2888:3888 -v /data/zookeeper/zoo.cfg:/opt/bitnami/zookeeper/conf/zoo.cfg -v /data/zookeeper/data:/bitnami -p 2181:2181 -p 2888:2888 -p 3888:3888 bitnami/zookeeper:3.8.0
kafka cluster
sudo docker run -itd --name kafka-1 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-2 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
sudo docker run -itd --name kafka-3 --network sevenchats -v /data/kafka/data:/bitnami --restart=always -v /data/kafka/server.properties:/bitnami/kafka/config/server.properties -e KAFKA_CFG_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e ALLOW_PLAINTEXT_LISTENER=yes -p 9092:9092 bitnami/kafka:3.1.0
After running these commands my cluster works fine.
I want to restore my Kafka data (topics) to a new cluster, so I took a backup of the Docker volumes on my host under the /data/kafka/ folder using tar:
tar cvzf data_kafka1.tar.gz data
When I check the size of the folder on my existing cluster, /data/kafka is around 200MB.
When I copy the backup to the new cluster and extract it there with
tar xvzf data_kafka1.tar.gz
and then check the size of the data folder with
du -hs data
it is 20GB on the new cluster.
I want the backup to be roughly the same size on both clusters; a small difference is fine, but this is a huge difference.
What mistake am I making?
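One likely cause (an assumption, since the question is not answered above) is sparse files: Kafka preallocates log segments with holes in them, and a plain tar extraction writes those holes out as real zeros. GNU tar's -S/--sparse flag records the holes at archive time so they are recreated on extraction. A minimal demonstration of the effect:

```shell
# Create a 10 MB sparse file: apparent size is 10M, but almost no disk blocks are used.
truncate -s 10M sparse.dat

# Archive with -S so GNU tar detects the holes instead of storing literal zeros.
tar -cSzf backup.tar.gz sparse.dat

# Extract elsewhere; GNU tar recreates the holes, keeping actual disk usage small.
mkdir restore
tar -xzf backup.tar.gz -C restore

stat -c %s restore/sparse.dat   # apparent size: 10485760 bytes
du -k restore/sparse.dat        # actual disk usage: a few KB at most
```

Without -S at creation time, the extracted file occupies the full 10 MB on disk, which matches the 200MB-to-20GB blow-up described above.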

ACME certbot: Dry run is successful although no connection open

I have a question about certbot:
At the moment, I test it using the following command:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
When I run it, it fails because it cannot connect. So I add -p 80:80 -p 443:443:
docker run -t --rm \
-v nginx-docker_certs:/etc/letsencrypt \
-v nginx-docker_certs-data:/data/letsencrypt \
-p 80:80 -p 443:443 \
certbot/certbot \
certonly --dry-run --standalone \
-d mydomain.tld
Now it works.
When I remove -p 80:80 -p 443:443 again and repeat the test run, it still succeeds. But why?

How to Update Graylog version in docker

I am new to Graylog. I installed Graylog in Docker, and after installing it I saw two notifications, one of which is about a Graylog upgrade. Can someone tell me how to update it using Docker commands?
Note: First I need to take a backup of my data and then update to version 2.4.6.
Note 2: I have already referred to the Graylog documentation.
http://docs.graylog.org/en/2.4/pages/upgrade.html
http://docs.graylog.org/en/2.4/pages/installation/docker.html
Graylog Installation process:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.2
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
To use the latest version, change the tag of the graylog image from 2.4.0-1 to 2.4 or 2.4.6-1.
Seems like the documentation you found is not completely in line with the documentation on docker hub:
If you simply want to checkout Graylog without any further customization, you can run the following three commands to create the necessary environment:
docker run --name mongo -d mongo:3
docker run --name elasticsearch \
-e "http.host=0.0.0.0" -e "xpack.security.enabled=false" \
-d docker.elastic.co/elasticsearch/elasticsearch:5.6.12
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4
First I installed Graylog with my own volumes:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.0-1
Now stop Graylog:
docker stop [graylog Container ID]
Now remove the container:
docker rm [graylog Container ID]
Now remove the Docker image:
docker rmi [graylog Image ID]
Now install Graylog again with the new version tag:
docker run --link mongo --link elasticsearch \
-p 9000:9000 -p 12201:12201 -p 514:514 \
-e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" \
-d graylog/graylog:2.4.6-1
Note: Only remove Graylog, not MongoDB/Elasticsearch. Then you won't lose any data.
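As a sketch, the same three containers can be captured in a compose file, so that an upgrade becomes a tag change followed by docker-compose up -d. The service names here are assumptions, not from the original answer, and like the original commands this declares no volumes, so add named volumes before relying on it for persistent data:

```yaml
version: "2"
services:
  mongo:
    image: mongo:3
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.12
    environment:
      - http.host=0.0.0.0
      - xpack.security.enabled=false
  graylog:
    image: graylog/graylog:2.4.6-1
    environment:
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - mongo
      - elasticsearch
    ports:
      - "9000:9000"
      - "12201:12201"
      - "514:514"
```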

How to run Kong API Gateway using docker containers?

I am very new to Kong API Gateway and am currently attempting to run a Kong container with PostgreSQL as my database container.
How can I achieve this?
1. Start your database:
$ docker run -d --name kong-database \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.4
2. Start Kong:
Start a Kong container and link it to your database container, configuring the KONG_DATABASE environment variable with postgres.
$ docker run -d --name kong \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 7946:7946 \
-p 7946:7946/udp \
kong
3. Verify Kong is running:
$ curl http://127.0.0.1:8001
You can follow the Kong installation guide. It worked for me as expected.
Step 1: Start Postgres container
docker run -d --name kong-database \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.5
Step 2: migrate database
docker run --rm \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations up
Step 3: start Kong
docker run -d --name kong \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
Step 4: verify
curl -i http://localhost:8001/
Answering @StefanWalther's question, here's an example of how to make it work with docker-compose:
version: "2.1"
services:
  kong:
    image: kong:latest
    depends_on:
      kong-database:
        condition: service_healthy
    healthcheck:
      test:
        - CMD
        - nc
        - -z
        - localhost
        - "8443"
      retries: 10
    links:
      - kong-database:kong-database
    command:
      - "kong"
      - "start"
      - "--vv"
    environment:
      - KONG_DATABASE=cassandra
      - KONG_CASSANDRA_CONTACT_POINTS=kong-database
      - KONG_ADMIN_LISTEN=0.0.0.0:8001
      - KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444
      - KONG_NGINX_DAEMON=off
    ports:
      - "443:8443"
      - "8001:8001"
    restart: always
    network_mode: "bridge"
  kong-database:
    image: cassandra:3
    healthcheck:
      test:
        - "CMD-SHELL"
        - "[ $$(nodetool statusgossip) = running ]"
    volumes:
      - ~/kong-database/cassandra:/var/lib/cassandra
    expose:
      - "9042"
    restart: always
    network_mode: "bridge"
And, as an extra, you can add kongfig to reconfigure the instance again:
kong-configurer:
  image: mashupmill/kongfig
  depends_on:
    kong:
      condition: service_healthy
  links:
    - kong:kong
  volumes:
    - ~/config.yml:/config.yml:ro
  command: --path /config.yml --host kong:8001
  network_mode: "bridge"
You can dump the configuration to use in this last container with:
kongfig dump --host localhost:8001 > ~/config.yml
More info on Kongfig, here.
Cheers.
Did you check the following repo?
https://github.com/Mashape/docker-kong
Here's my own docker-compose and it works perfectly (from Kong's Docker project on GitHub; I used kong-oidc, but you can choose whatever version you like).
kong:
  image: kong:1.3.0-alpine-oidc
  container_name: kong
  depends_on:
    - kong-db
  healthcheck:
    test: ["CMD", "kong", "health"]
    interval: 10s
    timeout: 10s
    retries: 10
  restart: on-failure
  ports:
    - "8000:8000" # Listener
    - "8001:8001" # Admin API
    - "8443:8443" # Listener (SSL)
    - "8444:8444" # Admin API (SSL)
  environment:
    KONG_DATABASE: postgres
    KONG_PG_HOST: kong-db
    KONG_PG_PORT: 5432
    KONG_PG_DATABASE: api-gw
    KONG_PROXY_ACCESS_LOG: /dev/stdout
    KONG_ADMIN_ACCESS_LOG: /dev/stdout
    KONG_PROXY_ERROR_LOG: /dev/stderr
    KONG_ADMIN_ERROR_LOG: /dev/stderr
    KONG_PROXY_LISTEN: 0.0.0.0:8000, 0.0.0.0:8443 ssl
    KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
    KONG_PLUGINS: bundled,oidc
    KONG_LOG_LEVEL: debug
kong-migrations:
  image: kong:1.3.0-alpine-oidc
  command: kong migrations bootstrap
  container_name: kong-migrations
  depends_on:
    - kong-db
  environment:
    KONG_DATABASE: postgres
    KONG_PG_DATABASE: api-gw
    KONG_PG_HOST: kong-db
    KONG_PG_PASSWORD:
    KONG_PG_USER: kong
  links:
    - kong-db:kong-db
  restart: on-failure
kong-migrations-up:
  image: kong:1.3.0-alpine-oidc
  container_name: kong-migrations-up
  # wrap in a shell so the && between the two commands is honored
  command: sh -c "kong migrations up && kong migrations finish"
  depends_on:
    - kong-db
  environment:
    KONG_DATABASE: postgres
    KONG_PG_DATABASE: api-gw
    KONG_PG_HOST: kong-db
    KONG_PG_PASSWORD:
    KONG_PG_USER: kong
  links:
    - kong-db:kong-db
  restart: on-failure
Update 2020
Create a bridge network so the containers can reach each other:
docker network create my-net
Start the kong-database container:
docker run -d --name kong-database --network my-net -p 5432:5432 -e "POSTGRES_USER=kong" -e "POSTGRES_HOST_AUTH_METHOD=trust" -e "POSTGRES_DB=kong" postgres:alpine
Run a temporary container to initialize the data for kong-database:
docker run --rm \
--network my-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
kong:latest kong migrations bootstrap
Run the kong container:
docker run -d --name kong \
--network my-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
Verify
curl http://127.0.0.1:8001
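The three steps above map naturally onto a compose file, where depends_on replaces the manual ordering and the user-defined network is created implicitly. This is a sketch under the same image tags; the version line and service ordering semantics are assumptions:

```yaml
version: "3"
services:
  kong-database:
    image: postgres:alpine
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
      - POSTGRES_HOST_AUTH_METHOD=trust
  kong-migrations:
    image: kong:latest
    command: kong migrations bootstrap
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
    depends_on:
      - kong-database
  kong:
    image: kong:latest
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-database
      - KONG_ADMIN_LISTEN=0.0.0.0:8001
    ports:
      - "8000:8000"
      - "8001:8001"
    depends_on:
      - kong-migrations
```

Note that depends_on only orders startup; it does not wait for Postgres to be ready, so a healthcheck (as in the earlier compose examples) is still advisable.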

Docker varnish start with command but not with docker-compose

I'm new to Docker and am trying to convert my current web stack to it.
Currently I use this configuration:
varnish -> nginx -> php-fpm -> mysql
I have already converted php-fpm and nginx, and am now trying varnish.
When I run my image with a command everything is fine, but when I start it from docker-compose the container restarts indefinitely.
Command:
name="varnish"
cd $installDirectory/$name
docker build -t $name .
docker rm -f $(docker ps -a | grep $name | cut -d' ' -f1)
docker run -d -P --name $name \
-p 80:80 \
--link nginx:nginx \
-v /home/webstack/varnish/:/etc/varnish/ \
-t $name
My docker-compose.yml:
php-fpm:
  restart: always
  build: ./php-fpm
  volumes:
    - "/home/webstack/www/:/var/www/"
nginx:
  restart: always
  build: ./nginx
  ports:
    - "8080:8080"
  volumes:
    - "/home/webstack/nginx/:/etc/nginx/"
    - "/home/webstack/log/:/var/log/nginx/"
    - "/home/webstack/www/:/var/www/"
  links:
    - "php-fpm:php-fpm"
varnish:
  restart: always
  build: ./varnish
  ports:
    - "80:80"
  volumes:
    - "/home/webstack/varnish/:/etc/varnish/"
  links:
    - "nginx:nginx"
I get no output from docker logs webstack_varnish_1, and docker ps -a shows:
688c5aace1b3 webstack_varnish "/bin/bash" 16 seconds ago Restarting (0) 5 seconds ago 0.0.0.0:80->80/tcp
Here you can see my Dockerfile:
FROM debian:jessie
# Update apt sources
RUN apt-get -qq update
RUN apt-get install -y curl apt-transport-https
RUN sh -c "curl https://repo.varnish-cache.org/GPG-key.txt | apt-key add -"
RUN echo "deb https://repo.varnish-cache.org/debian/ jessie varnish-4.1" > /etc/apt/sources.list.d/varnish-cache.list
# Update the package repository
RUN apt-get -qq update
# Install varnish
RUN apt-get install -y varnish
# Expose port 80
EXPOSE 80
What am I doing wrong, please?
Regards.
Your varnish Dockerfile seems to be missing ENTRYPOINT and/or CMD directives that would actually launch Varnish.
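For instance, a line such as the following at the end of the Dockerfile would run varnishd in the foreground (the -F flag) so the container stays alive; the VCL path, cache size, and listen address here are assumptions to be adapted to your setup:

```dockerfile
# Run varnishd in the foreground so the container does not exit immediately
CMD ["varnishd", "-F", "-a", ":80", "-f", "/etc/varnish/default.vcl", "-s", "malloc,256m"]
```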
We found the solution here:
https://github.com/docker/compose/issues/2563
I had to add tty: true to my varnish config.
Regards.