How can I make phpMyAdmin last longer without a timeout with Docker

I'm using Docker with phpMyAdmin and everything works fine, except that it times out far too quickly if I don't use it for a moment. How can I raise the limit so I don't have to reconnect so often?

Is setting MAX_EXECUTION_TIME in your docker-compose (under 'environment') a possible solution?
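For reference, a sketch of what that suggestion would look like in the compose file (note that, per the image's documentation, MAX_EXECUTION_TIME raises PHP's max_execution_time, which is a different limit from the login cookie timeout discussed below; the value 600 is an arbitrary example):

```yaml
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  environment:
    MAX_EXECUTION_TIME: 600   # seconds, example value
```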

After some research I found an elegant solution. Refer to https://hub.docker.com/r/phpmyadmin/phpmyadmin/ for information on config.user.inc.php.
The idea is to mount this file into the container so it stores the following code.
config.user.inc.php
<?php
$cfg['LoginCookieValidity'] = (86400 * 30); // 86400 seconds is 24 hours, so this is 30 days
ini_set('session.gc_maxlifetime', (86400 * 30));
?>
You can put whatever time you want here, in seconds; the default is 1440 seconds (24 minutes).
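A quick sanity check of the arithmetic in shell:

```shell
# 86400 seconds per day, kept for 30 days
echo $((86400 * 30))   # prints 2592000
# the default of 1440 seconds works out to 24 minutes
echo $((1440 / 60))    # prints 24
```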
My docker-compose file then looked like this for phpMyAdmin:
phpmyadmin:
  depends_on:
    - db_live
    - db_dev
  container_name: phpmyadmin
  image: phpmyadmin/phpmyadmin
  volumes:
    - ./phpmyadmin/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
  hostname: localbeta.domain.com
  environment:
    PMA_ARBITRARY: 1
    UPLOAD_LIMIT: 25000000
  restart: always
  ports:
    - "8081:80"
The key here is the volumes entry: it bind-mounts your local config.user.inc.php over the container's copy, so you can change the configuration to whatever you want.
Another thing to note: phpMyAdmin will log a hostname error if you run it and view the console output. To fix this, add the hostname field to the docker-compose file. Then, for local testing, add that hostname to your system's hosts file and point it to 127.0.0.1; don't change anything for the actual beta and/or live servers (if you are using this for those).
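For local testing, the hosts entry for the hostname above would look something like this (on Linux/macOS the file is /etc/hosts; on Windows it is C:\Windows\System32\drivers\etc\hosts):

```
127.0.0.1   localbeta.domain.com
```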
You know you have done it right if you go to the settings and see your value for the login cookie validity:


Launching Keycloak 20.0.3 on RHEL 7.9 in a Docker container with compose never responds to HTTP requests

I have an application that uses Keycloak 15.0.0 on RHEL 7.9 and other OSes (RHEL 8.7, Ubuntu 22.04, Oracle Linux 8.7). I am running this behind an NGINX proxy and have had it working 100% with Keycloak 15.0.0 for about a year and a half now. We now need to update to Keycloak 20.0 for OpenJDK issues and such. I updated the image in my docker-compose YML configuration, updated the environment variables that changed in v20.0, and launched my application to have it update.
On 3 of the 4 OSes this worked 100% fine: it came up great, came up quickly, and I love the v20.0 UI changes in Keycloak. I tried this on FIPS-enabled and FIPS-disabled setups, and all worked 100%. It works as expected with my application, behind NGINX, with no issues at all that we have found in the last two weeks.
However, on Red Hat 7.9, for whatever reason, I get no logs at all and nothing happens. I can do a docker exec -it xxxxxx /bin/sh type of command and get into it, but even a curl http://localhost:8080/auth/ turns up just a connection refused. It is almost as if it is not running.
This happens whether I am updating an existing Keycloak 15.0.0 setup or removing that docker volume and starting over from scratch. It just hangs there and does nothing.
And this only happens on RHEL 7.9. The other operating systems work great after a few minutes and respond correctly. I have even left it alone for up to 30 minutes to see if there was a process running, something hidden, a timeout, or some other "ghost in the machine". But still nothing works.
I have searched for a while, read the readme files on updates, and started over fresh on the other OSes, and they all work. Just not this one. So I'm looking for guidance on what to change or try; otherwise I cannot use Keycloak 20.0 on RHEL 7.9 until its EOL in June 2024.
The Keycloak configuration that works on the other 3 OSes, with the same Docker versions installed and the same permissions set up via our Ansible setup, is below. I cannot figure out why RHEL 7.9 is the one holdout.
Any help, tips, or things to try are much appreciated. I am 8+ hours into this with nothing to show.
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
  depends_on:
    - postgres
  networks:
    - namednetwork
Voila!!
The line that #jayc suggested, JAVA_OPTS_APPEND="-Dcom.redhat.fips=false", was the answer. Thank you so much! This worked on my RHEL 7.9 box with FIPS enabled, 100% of the time, with the Keycloak 20.0.3 container I built from the steps described at https://www.keycloak.org/server/containers.
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
    - JAVA_OPTS_APPEND="-Dcom.redhat.fips=false"
  depends_on:
    - postgres
  networks:
    - namednetwork

Does docker-compose support init container?

Init containers are a great feature in Kubernetes, and I wonder whether docker-compose supports them. They allow me to run some command before launching the main application.
I came across this issue, https://github.com/docker/compose-cli/issues/1499, which mentions support for init containers, but I can't find anything related in the reference documentation.
This was a discovery for me, but yes, it is now possible to get init-container behavior with docker-compose since version 1.29, as can be seen in the issue you linked in your question.
At the time of writing, it seems this feature has not yet found its way into the documentation.
You can define a dependency on another container with a condition that is basically "when that other container has successfully finished its job". This leaves room to define containers that run any kind of script and exit when they are done, before another dependent container is launched.
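The relevant bit of syntax is the long form of depends_on (service names here are placeholders; besides service_completed_successfully, Compose also accepts service_started and service_healthy as conditions):

```yaml
my-app:
  depends_on:
    my-init-job:
      condition: service_completed_successfully
```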
To illustrate, I crafted an example with a pretty common scenario: spin up a db container, make sure the db is up, and initialize its data prior to launching the application container.
Note: initializing the db (at least as far as the official mysql image is concerned) does not require an init container, so this example is more an illustration than a rock-solid typical workflow.
The complete example is available in a public GitHub repo, so I will only show the key points in this answer.
Let's start with the compose file
---
x-common-env: &cenv
  MYSQL_ROOT_PASSWORD: totopipobingo

services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    environment:
      <<: *cenv

  init-db:
    image: mysql:8.0
    command: /initproject.sh
    environment:
      <<: *cenv
    volumes:
      - ./initproject.sh:/initproject.sh
    depends_on:
      db:
        condition: service_started

  my_app:
    build:
      context: ./php
    environment:
      <<: *cenv
    volumes:
      - ./index.php:/var/www/html/index.php
    ports:
      - 9999:80
    depends_on:
      init-db:
        condition: service_completed_successfully
You can see I define 3 services:
The database, which is the first to start.
The init container, which starts only once db is started. This one only runs a script (see below) that exits once everything is initialized.
The application container, which will only start once the init container has successfully done its job.
The initproject.sh script run by the init-db container is very basic for this demo: it simply retries connecting to the db every 2 seconds until it succeeds or reaches a limit of 50 tries, then creates a database/table and inserts some data:
#!/usr/bin/env bash

# Test we can access the db container, allowing for startup time:
# retry every 2 seconds, up to 50 times
for i in {1..50}; do
  mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -h db -e "show databases" && s=0 && break || s=$? && sleep 2
done
if [ ! $s -eq 0 ]; then exit $s; fi

# Init some stuff in db before leaving the floor to the application
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -h db -e "create database my_app"
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -h db -e "create table my_app.test (id int unsigned not null auto_increment primary key, myval varchar(255) not null)"
mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -h db -e "insert into my_app.test (myval) values ('toto'), ('pipo'), ('bingo')"
The Dockerfile for the app container is trivial (it just adds the mysqli driver for PHP) and can be found in the example repo, as can the PHP script that verifies the init was successful when you call http://localhost:9999 in your browser.
The interesting part is to observe what's going on when launching the services with docker-compose up -d.
The only limit to what can be done with such a feature is probably your imagination ;) Thanks for making me discover this.
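As an aside, the polling loop could alternatively live on the db service itself as a healthcheck, letting dependents use condition: service_healthy instead of a retry script. A sketch under the same service names (the healthcheck command is my own suggestion, not part of the original example; $$ escapes the dollar sign so the variable expands inside the container):

```yaml
db:
  image: mysql:8.0
  environment:
    MYSQL_ROOT_PASSWORD: totopipobingo
  healthcheck:
    test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -p$$MYSQL_ROOT_PASSWORD"]
    interval: 2s
    retries: 50

my_app:
  depends_on:
    db:
      condition: service_healthy
```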

Confluence on Docker runs setup assistant on existing installation after update

A few days ago, my Watchtower updated Confluence on Docker with the 6.15.1-alpine tag. It's hosted using Atlassian's official image. Since that update, Confluence shows the setup screen, and I haven't had any chance to get into the admin panel. When continuing the wizard and entering the credentials of the existing installation, it gives an error that an installation already exists and would be overwritten if I continued.
It was a re-push of the exact 6.15.1 version tag, not a regular version update, so there seems to be no way to use the old, working image. Other versions also seem to have been re-pushed; I tried some older ones and also a newer one, without success.
docker-compose.yml
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1-alpine
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./confluence.cfg.xml:/var/atlassian/application-data/confluence/confluence.cfg.xml
- ./server.xml:/opt/atlassian/confluence/conf/server.xml
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true
I found out that the following changes were made to the images:
Ownership
The logs threw errors about not being able to write to the log files, because nearly the entire home directory was owned by a user called bin:
root@8ac38faa94f1:/var/atlassian/application-data/confluence# ls -l
total 108
drwx------ 2 bin bin 4096 Aug 19 00:03 analytics-logs
drwx------ 3 bin bin 4096 Jun 15 2017 attachments
drwx------ 2 bin bin 24576 Jan 12 2019 backups
[...]
This could be fixed by executing a chown:
docker exec -it confluence bash
chown confluence:confluence -R /var/atlassian/application-data/confluence
Mounts inside a mount
My docker-compose.yml mounts a volume to /var/atlassian/application-data/confluence, and inside that volume, the confluence.cfg.xml file was mounted from the current directory. This somewhat older approach separates the user data in the volume from configuration files like docker-compose.yml and application configuration such as confluence.cfg.xml.
This no longer seems to work properly with Docker 17.05 and Docker Compose 1.8.0 (at least in combination with Confluence), so I simply removed that second mount and placed the configuration file inside the volume.
Atlassian now creates config files dynamically
It was noticeable that my mounted configuration files like confluence.cfg.xml and server.xml were overwritten by Atlassian's container. Their source code shows that they now use Jinja2, a common Python template engine used in e.g. Ansible. A Python script parses those templates on startup and creates Confluence's configuration files, without properly checking whether those files already exist.
Mounting them as read-only caused the app to crash, because this case is also not handled in their Python script. By analyzing their templates, I learned that they have replaced nearly every config item with an environment variable. Not a bad approach, so I specified my server.xml parameters via env variables instead of mounting the entire file.
In my case, Confluence is behind a Traefik reverse proxy, and it's necessary to tell Confluence its final application URL for end users:
environment:
  - ATL_proxyName=confluence.my-domain.com
  - ATL_proxyPort=443
  - ATL_tomcat_scheme=https
Final working docker-compose.yml
By applying all the modifications above, accessing the existing installation works again using the following docker-compose.yml file:
version: "2"
volumes:
confluence-home:
services:
confluence:
container_name: confluence
image: atlassian/confluence-server:6.15.1
#restart: always
mem_limit: 6g
volumes:
- confluence-home:/var/atlassian/application-data/confluence
- ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
networks:
- traefik
environment:
- "TZ=Europe/Berlin"
- JVM_MINIMUM_MEMORY=4096m
- JVM_MAXIMUM_MEMORY=4096m
- ATL_proxyName=confluence.my-domain.com
- ATL_proxyPort=443
- ATL_tomcat_scheme=https
labels:
- "traefik.port=8090"
- "traefik.backend=confluence"
- "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
traefik:
external: true

Not seeing certificate attributes when not running locally

The Problem
When I deploy 4 peer nodes with PBFT or NOOPS in the cloud, the user certificate attributes are not seen. The values are blank.
Observations
Everything works locally. This suggests that I am calling the API correctly, and the chaincode is accessing attributes correctly.
When I attach to the membership container, I see the correct membersrvc.yaml with aca.enabled set to true. This is the same yaml that works locally. For good measure, I'm also passing the ENV variable MEMBERSRVC_CA_ACA_ENABLED=true.
I can see the attributes for the users in the membership service's ACA database. (suggesting that the users were created with attributes)
When I look at the actual certificate from the log (Bytes to Hex then Base64 decode) I see the attributes. (Appending certificate [30 82 02 dd 30 8....)
All attributes are blank when deployed. No errors.
Membership Service Logs
I enabled debug logging, and it shows that Membership Services thinks ACA is enabled:
19:57:46.421 [server] main -> DEBU 049 ACA was enabled [aca.enabled == true]
19:57:46.421 [aca] Start -> INFO 04a Staring ACA services...
19:57:46.421 [aca] startACAP -> INFO 04b ACA PUBLIC gRPC API server started
19:57:46.421 [aca] Start -> INFO 04c ACA services started
This looks good. What am I missing?
Guess
Could it be that the underlying docker container that the chaincode deploys into doesn't have security enabled? Does it use the ENV passed to the parent peer? One difference is that locally I'm using "dev mode" without the base-image shenanigans.
Membership Service
membersrvc:
  container_name: membersrvc
  image: hyperledger/fabric-membersrvc
  volumes:
    - /home/ec2-user/membership:/user/membership
    - /var/hyperledger:/var/hyperledger
  command: sh -c "cp /user/membership/membersrvc.yaml /opt/gopath/src/github.com/hyperledger/fabric/membersrvc && membersrvc"
  restart: unless-stopped
  environment:
    - MEMBERSRVC_CA_ACA_ENABLED=true
  ports:
    - 7054:7054
Root Peer Service
rootpeer:
  container_name: root-peer
  image: hyperledger/fabric-peer
  restart: unless-stopped
  environment:
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp1
    - CORE_SECURITY_ENROLLID=vp1
    - CORE_SECURITY_ENROLLSECRET=xxxxxxxx
    - CORE_SECURITY_ENABLED=true
    - CORE_SECURITY_ATTRIBUTES_ENABLED=true
    - CORE_PEER_PKI_ECA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TCA_PADDR=members.x.net:7054
    - CORE_PEER_PKI_TLSCA_PADDR=members.x.net:7054
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=NOOPS
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /var/hyperledger:/var/hyperledger
  command: sh -c "peer node start"
  ports:
    - 7051:7051
    - 7050:7050
Here's the request:
{
  "jsonrpc": "2.0",
  "method": "query",
  "params": {
    "chaincodeID": {
      "name": "659cb5dcc3063054e4c90908050eebf68eb2bd193cc1520f1f2d198f0ff42268"
    },
    "ctorMsg": {
      "args": ["get_results", "{\"Id\":\"abc123\"}"]
    },
    "secureContext": "user123",
    "attributes": ["account_id", "role"]
  },
  "id": 2
}
Edit: I previously thought this was just PBFT, but it's also happening with NOOPS in the cloud, so I reduced the example to NOOPS.
My problem was that the fabric version inside the fabric-baseimage docker container was a good bit newer. This was my fault, because I had populated that image with a fabric version manually.
Background
If one is using non-Vagrant with 0.6 and not in dev mode, deploying chaincode will fail with a "cannot find :latest tag" error. To solve this, I pulled a fabric-baseimage version and populated it with what I needed, including a git clone of fabric. I should have pulled the 0.6 branch, but instead it was pulling master.
So essentially, my fabric-peer, node-sdk deployer, and baseimage were using slightly different hyperledger versions.
After about 48 hours of configuration hell, I think I have it straightened out by sending everything back to 0.6. I have terraform spinning everything up successfully now.
I do wish the documentation included something about deploying in a non-dev multi-node environment.

Docker-compose : understanding linking environment variables

I'm now using docker-compose for all of my projects. Very convenient. Much more comfortable than manual linking through several docker commands.
There is something that is not clear to me yet though: the logic behind the linking environment variables.
Eg. with this docker-compose.yml:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"

web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
In the Node app, I need to retrieve the MongoDB URL, and if I console.log(process.env), I get so many things that it feels very random (I kept just the docker-compose-related ones):
MONGODB_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PORT: '27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
MONGODB_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_GOSU_VERSION: '1.7',
'MYAPP_MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GOSU_VERSION: '1.7',
MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_1_NAME: '/myapp_web_1/mongodb_1',
MONGODB_1_PORT_27017_TCP_PORT: '27017',
MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
'MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MONGODB_PORT: 'tcp://172.17.0.2:27017',
MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_ENV_GOSU_VERSION: '1.7',
MONGODB_ENV_MONGO_MAJOR: '3.2',
MONGODB_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_NAME: '/myapp_web_1/mongodb',
MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_PORT_27017_TCP_PORT: '27017',
MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_PORT_27017_TCP_PROTO: 'tcp',
MYAPP_MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
'MONGODB_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MYAPP_MONGODB_1_NAME: '/myapp_web_1/novatube_mongodb_1',
I don't know which ones to pick, and why are there so many entries? Is it better to use the general ones or the MYAPP-prefixed ones? Where does the MYAPP name come from? The folder name?
Could someone clarify this?
Wouldn't it be easier to let the user define the ones they need in the docker-compose.yml file with a custom mapping? Like:
links:
  - mongodb:
      - MONGOIP: IP
      - MONGOPORT: PORT
What I'm saying might not make sense, though. :-)
Environment variables are a legacy way of defining links between containers. If you are using a newer version of Compose, you don't need the links declaration at all. Connecting to mongodb from your app container will work fine by just using the name of the service (mongodb) as a hostname, without any links defined in the compose file; Docker's built-in DNS resolution handles it (check /etc/hosts: nothing in there either!).
In answer to your question about the MYAPP prefix: you're right, Compose prefixes the service name with the name of the folder (or "project", in Compose nomenclature). It does the same thing when creating custom networks and volumes.
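Applied to the compose file from the question, that means the links block and the generated variables can be dropped entirely, and the app reaches the database at the service name. A minimal sketch (the MONGO_URL variable name and the myapp database name are my own choices, not something Compose defines):

```yaml
mongodb:
  image: mongo

web:
  build: .
  environment:
    # "mongodb" resolves via Docker's built-in DNS to the service's container
    MONGO_URL: mongodb://mongodb:27017/myapp
```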
