I'm now using docker-compose for all of my projects. It's very convenient, and much more comfortable than manually linking containers through several docker commands.
There is something that is not clear to me yet though: the logic behind the linking environment variables.
Eg. with this docker-compose.yml:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
In the Node app, I need to retrieve the MongoDB URL. And if I console.log(process.env), I get so many things that it feels very random (I've kept only the docker-compose-related ones):
MONGODB_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PORT: '27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
MONGODB_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_GOSU_VERSION: '1.7',
'MYAPP_MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GOSU_VERSION: '1.7',
MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_1_NAME: '/myapp_web_1/mongodb_1',
MONGODB_1_PORT_27017_TCP_PORT: '27017',
MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
'MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MONGODB_PORT: 'tcp://172.17.0.2:27017',
MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_ENV_GOSU_VERSION: '1.7',
MONGODB_ENV_MONGO_MAJOR: '3.2',
MONGODB_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_NAME: '/myapp_web_1/mongodb',
MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_PORT_27017_TCP_PORT: '27017',
MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_PORT_27017_TCP_PROTO: 'tcp',
MYAPP_MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
'MONGODB_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MYAPP_MONGODB_1_NAME: '/myapp_web_1/novatube_mongodb_1',
I don't know which one to pick, and why are there so many entries? Is it better to use the general ones or the MYAPP-prefixed ones? Where does the MYAPP name come from? The folder name?
Could someone clarify this?
Wouldn't it be easier to let the user define the ones he needs in the docker-compose.yml file with a custom mapping? Like:
links:
  - mongodb:
    - MONGOIP: IP
    - MONGOPORT: PORT
What I'm saying might not make sense though. :-)
Environment variables are a legacy way of defining links between containers. If you are using a newer version of Compose, you don't need the links declaration at all. Connecting to mongodb from your app container will work fine by just using the service name (mongodb) as the hostname, without any links defined in the compose file; Compose relies on Docker's built-in DNS resolution for this (check /etc/hosts, there's nothing in there either!).
As for the MYAPP prefix: you guessed right. Compose prefixes the service name with the name of the folder (the 'project', in Compose nomenclature). It does the same thing when creating custom networks and volumes.
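For illustration, here is a minimal sketch of the same setup without links (assuming a Compose file format that supports the services key; the MONGO_URL variable name is my own convention, not something Compose sets for you):
version: '2'
services:
  mongodb:
    image: mongo
  web:
    build: .
    command: npm start
    ports:
      - "3001:3000"
    environment:
      PORT: 3000
      MONGO_URL: mongodb://mongodb:27017/myapp
In the Node app you can then read process.env.MONGO_URL, or simply hard-code mongodb://mongodb:27017, since the service name resolves through Docker's built-in DNS.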
landoop/fast-data-dev:2.6
I want to change the default batch.size using 'producer.override.batch.size=65536' when creating a new connector.
But in order to do that, the override policy has to be applied on the worker side:
connector.client.config.override.policy=All
Otherwise there is an exception:
"producer.override.batch.size" : The 'None' policy does not allow
'batch.size' to be overridden in the connector configuration.
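For reference, the override itself goes into the connector configuration when the connector is created through the Connect REST API; a rough sketch (the connector name and class are placeholders for my crypto-panic connector):
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
        "name": "crypto-panic-connector",
        "config": {
          "connector.class": "com.example.CryptoPanicSourceConnector",
          "tasks.max": "1",
          "producer.override.batch.size": "65536"
        }
      }'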
It's not clear how exactly to change the default worker properties, where they are expected to be placed, and what names they should have so that Landoop sees them.
I start Landoop using the following docker-compose file:
version: '2'
services:
  kafka-cluster:
    image: landoop/fast-data-dev:2.6
    environment:
      ADV_HOST: 127.0.0.1
      RUNTESTS: 0
    ports:
      - 2181:2181             # Zookeeper
      - 3030:3030             # Landoop UI
      - 8081-8083:8081-8083   # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585   # JMX Ports
      - 9092:9092             # Kafka Broker
    volumes:
      - ./connectors/news/crypto-panic-connector-1.0.jar:/connectors/crypto-panic-connector-1.0.jar
The distributed worker properties file /connect/connect-avro-distributed.properties generated by Landoop:
offset.storage.partitions=5
key.converter.schema.registry.url=http://127.0.0.1:8081
value.converter.schema.registry.url=http://127.0.0.1:8081
config.storage.replication.factor=1
offset.storage.topic=connect-offsets
status.storage.partitions=3
offset.storage.replication.factor=1
key.converter=io.confluent.connect.avro.AvroConverter
config.storage.topic=connect-configs
config.storage.partitions=1
group.id=connect-fast-data
rest.advertised.host.name=127.0.0.1
port=8083
value.converter=io.confluent.connect.avro.AvroConverter
rest.port=8083
status.storage.replication.factor=1
status.storage.topic=connect-statuses
access.control.allow.origin=*
access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
jmx.port=9584
plugin.path=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors
bootstrap.servers=PLAINTEXT://127.0.0.1:9092
crypto-panic-connector-1.0 connector directory structure:
/config:
> worker.properties
/src:
> ...
UPDATE
Adding the following to the environment properties:
CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
Doesn't work for landoop/fast-data-dev:2.6
In the logs it's still
'connector.client.config.override.policy = None'
and there is a warning:
WARN The configuration 'connect.client.config.override.policy' was supplied but isn't a known config.
Changing this to
CONNECTOR_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
This removes the warning, but in the end the override policy is still 'None' and it's not possible to override client properties when creating a connector.
Changing to
CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
has the same effect: the policy stays 'None'.
The batch size override is also not applied, so I assume those override features are not supported in Landoop.
WARN The configuration 'producer.override.batch.size' was supplied but isn't a known config.
I assume 'confluentinc/cp-kafka-connect' doesn't have a UI built in, and for learning purposes it seems better to have one, so doing it in Landoop is preferable for now. But thanks for the recommendation to use 'confluentinc/cp-kafka-connect'; I will try this config overriding there as well.
For starters, that image is very old and no longer maintained. I'd recommend you use confluentinc/cp-kafka-connect
In any case, for both images, you can use
environment:
  CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
  CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
It's not clear, how exactly ... change the default worker properties
Look at the source code
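For what it's worth, the naming convention documented for the Confluent images is: take the worker property, upper-case it, replace dots with underscores, and prefix it with CONNECT_. Whether fast-data-dev's startup script follows exactly the same scheme is an assumption on my part, so treat this as a sketch:
environment:
  # should map to connector.client.config.override.policy=All in the generated worker config
  CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'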
I have followed the instructions on this page: https://docs.ksqldb.io/en/latest/operate-and-deploy/monitoring/
So this is the ksqldb-server part of my docker-compose file:
ksqldb-server:
  image: confluentinc/ksqldb-server:0.15.0
  hostname: ksqldb-server
  container_name: ksqldb-server
  depends_on:
    - kafka
    - schema-registry
    - kafka-connect
  ports:
    - "8088:8088"
    - "1099:1099"
  environment:
    KSQL_LISTENERS: http://0.0.0.0:8088
    KSQL_BOOTSTRAP_SERVERS: kafka:29092
    KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
    KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    KSQL_KSQL_CONNECT_URL: http://kafka-connect:8083
    KSQL_KSQL_QUERY_PULL_METRICS_ENABLED: "true"
    KSQL_JMX_OPTS: >
      -Djava.rmi.server.hostname=localhost
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=1099
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.rmi.port=1099
I have set up Prometheus in the same docker-compose file, and when I visit {prometheus-url}/targets, I see Get "http://ksqldb-server:1099/metrics": EOF
I have already tried plenty of configurations during my research, including changing -Djava.rmi.server.hostname either to the host's IP address or to the ksqldb-server container's IP address, but none of them worked. Does anyone have a solution?
Well, six months later, having dealt with this topic once again, I managed to set this up. This follows the approach suggested by swist in the GitHub issue I created back then.
You need JMX Exporter. Download it here
You need a YAML file, telling the JMX exporter which metrics to export. You can get it here. If you are only interested in the ksqlDB metrics, remove all other patterns, e.g. the kafka patterns.
Place the JMX Exporter and the YAML file on every node on which you want to monitor a ksqlDB instance
Before starting ksqlDB, create the environment variable KSQL_JMX_OPTS as follows:
export KSQL_JMX_OPTS="-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.util.logging.config.file=logging.properties \
-javaagent:[BLUB]/jmx_prometheus_javaagent.jar=7010:ksqldb.yml"
You need to either create this variable every time you have a new session or create it permanently. [BLUB] is the absolute path to your JMX JAR.
Now you can run ksqlDB and the metrics become available at port 7010 (you can specify any other free port). If you want to have a good dashboard, go with this one.
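For completeness, the corresponding Prometheus scrape configuration is just a static target on that port; a sketch (replace the hostname with wherever your ksqlDB instance runs):
scrape_configs:
  - job_name: 'ksqldb'
    static_configs:
      - targets: ['ksqldb-server:7010']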
The jmxremote.port value is also not a proper Prometheus target; it's for jconsole, VisualVM, or other JMX monitoring tools, as the documentation you've linked to says.
If you want to use Prometheus, you need to download and mount the JMX exporter agent JAR into the container and modify the JVM arguments to include the agent+scraper port+mbeans config file...
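In a compose setup that could look roughly like this (a sketch, assuming the exporter JAR and its config file sit next to the compose file; 7010 is an arbitrary free port):
ksqldb-server:
  image: confluentinc/ksqldb-server:0.15.0
  ports:
    - "7010:7010"
  volumes:
    - ./jmx_prometheus_javaagent.jar:/opt/jmx_prometheus_javaagent.jar
    - ./ksqldb.yml:/opt/ksqldb.yml
  environment:
    KSQL_JMX_OPTS: "-javaagent:/opt/jmx_prometheus_javaagent.jar=7010:/opt/ksqldb.yml"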
You could also switch to using minikube and apply the Confluent ksqlDB Helm Chart, which does this for you
I'm trying to make a Graylog Docker Container persistent.
Meaning that after restarting (docker-compose down; docker-compose up) the logs will still be there alongside the configuration.
I've used the documentation at https://docs.graylog.org/en/3.1/pages/installation/docker.html and created a yml file with the content under the topic "Persisting data".
I only edited the line "GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/" to use the external IP of the machine instead of localhost.
Docker works; I can create an input and collect log files. What does not work is the data being persistent. Also, every time I restart, the node ID changes, so I have to reconfigure the input. Running docker volume ls lists five volumes, three of which are the ones created in the yml file.
I don't understand why data is not persistent. Can anybody help?
I had the same problem and I'd been struggling for a while before I found a solution. I'm on 3.2 and also had issues with node persistence. The documentation doesn't seem to directly state that there is one more configuration folder you need to persist, which is:
/usr/share/graylog/data/config
They actually mention it in the Custom configuration files section, and when I took a look at that directory via the CLI, it turned out that this is where graylog.conf and node-id (the file Graylog uses to store information about its node) are stored as well!
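You can check this yourself on a running container (a sketch; the container name graylog is an assumption based on the compose service name):
docker exec -it graylog ls /usr/share/graylog/data/config
# graylog.conf and node-id should show up in the listing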
Here's my docker-compose.override.yml section with the necessary changes (marked with '# ADDED' comments)
services:
  graylog:
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
      - GRAYLOG_IS_MASTER=true
      #- GRAYLOG_NODE_ID_FILE=/usr/share/graylog/data/config/node-id
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
    volumes:
      - "graylogjournal:/usr/share/graylog/data/journal"
      - "graylogconfig:/usr/share/graylog/data/config" # ADDED
volumes:
  graylogjournal:
    driver: local
  graylogconfig: # ADDED
    driver: local # ADDED
Hope this helps
You can add these lines to the daemon.json file:
{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://1.2.3.4:12201"
  }
}
https://docs.docker.com/config/containers/logging/gelf/
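If you'd rather not change the daemon-wide default, the same GELF driver can be configured per service in docker-compose (a sketch; the service name and address are placeholders for your setup):
services:
  some-app:
    image: nginx
    logging:
      driver: gelf
      options:
        gelf-address: "udp://1.2.3.4:12201"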
A few days ago, my Watchtower updated Confluence on Docker with the 6.15.1-alpine tag. It's hosted using Atlassian's official image. Since that update, Confluence shows the setup screen and there is no way to get into the admin panel. When continuing the wizard and entering the server credentials of the existing installation, it gives an error that an installation already exists and would be overwritten if I continued.
It was a re-push of the exact 6.15.1 version tag, not a regular version update, so there seems to be no way to use the old, working image. Other versions also seem to have been re-pushed; I tried some older ones and also a newer one, without success.
docker-compose.yml
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1-alpine
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./confluence.cfg.xml:/var/atlassian/application-data/confluence/confluence.cfg.xml
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true
I found out that the following changes were made to the images:
Ownership
The logs threw errors about not being able to write to the log files because nearly the entire home directory was owned by a user called bin:
root#8ac38faa94f1:/var/atlassian/application-data/confluence# ls -l
total 108
drwx------ 2 bin bin 4096 Aug 19 00:03 analytics-logs
drwx------ 3 bin bin 4096 Jun 15 2017 attachments
drwx------ 2 bin bin 24576 Jan 12 2019 backups
[...]
This could be fixed by executing a chown:
docker exec -it confluence bash
chown confluence:confluence -R /var/atlassian/application-data/confluence
Mounts inside a mount
My docker-compose.yml mounts a volume to /var/atlassian/application-data/confluence, and inside that volume the confluence.cfg.xml file was mounted from the current directory. This is a bit of an older approach, meant to keep the user data in the volume separate from configuration files like docker-compose.yml and confluence.cfg.xml, and from the application itself.
This no longer seems to work properly with Docker 17.05 and Docker Compose 1.8.0 (at least in combination with Confluence), so I simply removed that second mount and placed the configuration file inside the volume.
Atlassian now creates config files dynamically
It was noticeable that my mounted configuration files, confluence.cfg.xml and server.xml, were overwritten by Atlassian's container. Their source code shows that they now use Jinja2, a common Python template engine used e.g. in Ansible. A Python script parses those templates on startup and creates Confluence's configuration files, without properly checking whether all of those files already exist.
Mounting them read-only caused the app to crash, because this case is also not handled in their Python script. By analyzing their templates, I learned that they replaced nearly every config item with environment variables. Not a bad approach, so I specified my server.xml parameters via env variables instead of mounting the entire file.
In my case, Confluence is behind a Traefik reverse proxy, and it's required to tell Confluence its final application URL for end users:
environment:
  - ATL_proxyName=confluence.my-domain.com
  - ATL_proxyPort=443
  - ATL_tomcat_scheme=https
Final working docker-compose.yml
By applying all modifications above, accessing the existing installation works again using the following docker-compose.yml file:
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
- ATL_proxyName=confluence.my-domain.com
- ATL_proxyPort=443
- ATL_tomcat_scheme=https
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true
I'm using Docker with phpMyAdmin and everything works fine, except that it times out way too quickly if I don't use it for a moment. How can I change the limit before having to reconnect?
Is setting MAX_EXECUTION_TIME in your docker-compose (under 'environment') a possible solution?
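If you want to try that, it would look roughly like this (a sketch; as far as I know the phpmyadmin/phpmyadmin image reads MAX_EXECUTION_TIME in seconds):
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    MAX_EXECUTION_TIME: 600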
After some research I have found an elegant solution. Refer to https://hub.docker.com/r/phpmyadmin/phpmyadmin/ for information on config.user.inc.php.
The idea is to create a volume mount for this file, containing the following code.
config.user.inc.php
<?php
$cfg['LoginCookieValidity'] = (86400 * 30); //86400 is 24 hours in seconds. Therefore this is 30 days.
ini_set('session.gc_maxlifetime', (86400 * 30));
?>
You can put whatever time you want (in seconds) here; by default it is 1440 seconds (24 minutes).
My docker-compose file for phpMyAdmin then looked like this:
phpmyadmin:
  depends_on:
    - db_live
    - db_dev
  container_name: phpmyadmin
  image: phpmyadmin/phpmyadmin
  volumes:
    - ./phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
  hostname: localbeta.domain.com
  environment:
    PMA_ARBITRARY: 1
    UPLOAD_LIMIT: 25000000
  restart: always
  ports:
    - "8081:80"
The key here is the volumes entry. It mounts the local config.user.inc.php into the container, so you can now change the configuration to whatever you want.
Another thing to note is that phpMyAdmin will give you a hostname error if you run it and view the console output. To fix this, add the 'hostname' field to the docker-compose file, then add that hostname to your system's hosts file and point it to 127.0.0.1 for local testing; don't change anything for the actual beta and/or live servers (if you are using this for those).
You will know you did it right if you go to the settings and see your new value for the login cookie validity.