How to restore redis dump from version 3.2 (without docker) to 6.0 (running with bitnami docker image) - docker

Initial goal
Currently we have an old version of Redis (v3.2.1) running on a single physical server. The goal is to have a master/replica topology with Sentinel for HA.
One way to accomplish this is to create a replica of this single instance running version 6. Unfortunately, there is an incompatibility between versions 3 and 6.
What I try to do
I tried to restore the dump to version 4.0.2-r0, which worked fine.
Then I tried to restore it to version 4.0.2-r1; it failed, and
dump.rdb was automatically replaced by an empty file.
How to reproduce
Install a single instance of redis with :
apt install -y redis-server redis-tools
Create a redis container with this docker-compose.yml file
version: '2'
services:
  redis-master:
    #image: 'bitnami/redis:6.0'
    #image: 'bitnami/redis:4.0.2-r1'
    image: 'bitnami/redis:4.0.2-r0'
    container_name: redis-master
    volumes:
      - 'redis_data_master:/bitnami/redis/data'
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
volumes:
  redis_data_master:
    driver: local
docker-compose up -d
docker stop redis-master
# Replace test_redis_data_master by the right folder
sudo cp dump.rdb /var/lib/docker/volumes/test_redis_data_master/_data/dump.rdb
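The RDB file format is versioned: the file starts with the magic bytes REDIS followed by a four-digit ASCII format version (Redis 3.2 writes version 7, 4.0 writes 8, 5/6 write 9), and a server cannot load a dump whose format version is newer than it understands. Before copying the file around, it can help to check what you actually have; a minimal sketch:

```python
def rdb_version(path: str) -> int:
    """Return the RDB format version from the 9-byte file header.

    An RDB file begins with b"REDIS" followed by a four-digit ASCII
    version, e.g. b"REDIS0007" for a dump written by Redis 3.2.
    """
    with open(path, "rb") as f:
        header = f.read(9)
    if len(header) != 9 or header[:5] != b"REDIS":
        raise ValueError("not an RDB file")
    return int(header[5:9])
```

If this reports a version higher than the target server supports, the target will not load the file.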
With this docker-compose file, it works with v4.0.2-r0 but not with v4.0.2-r1.
cf logs here:
v4.0.2-r0
30:M 01 Dec 13:52:55.993 # Server initialized
30:M 01 Dec 13:52:58.049 * DB loaded from disk: 2.056 seconds
30:M 01 Dec 13:52:58.049 * Ready to accept connections
The database is loaded successfully
v4.0.2-r1
20:M 01 Dec 13:49:05.472 # Server initialized
20:M 01 Dec 13:49:05.472 * Ready to accept connections
The database is empty
Conclusion
Can you please help with this issue? Has anyone faced the same issue or been in the same situation?
Thanks in advance for your help :D

Have you tried upgrading directly from 4.0 to 5.0?
I don't think upgrading revision by revision is a good idea, since we release a new revision every day and some of them have bugs that are patched in a later revision.
Apart from that, I recommend following the official documentation to upgrade your databases. A major version usually means incompatible changes, so I am pretty sure you will run into some issues along the way. It is highly probable that just changing the image tag will not work.
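As an alternative to copying dump.rdb between containers, the initial data transfer can be done over the replication protocol itself, since a newer replica can usually sync from an older master. A command sketch (the host names old-redis and new-redis are placeholders for your actual instances):

```shell
# Point the new Redis 6 instance at the old 3.2 master.
# REPLICAOF is the Redis 5+ name; SLAVEOF works on all versions.
redis-cli -h new-redis REPLICAOF old-redis 6379

# Wait for the initial sync to finish (master_link_status:up).
redis-cli -h new-redis INFO replication | grep master_link_status

# Then promote the replica to a standalone master.
redis-cli -h new-redis REPLICAOF NO ONE
```

This sidesteps the RDB file-version question entirely, because the replica receives the data over the wire rather than from disk.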

Related

Launching Keycloak 20.0.3 on REL 7.9 in a Docker container with compose never responds to HTTP requests

I have an application that uses Keycloak 15.0.0 on REL 7.9 and other OSes (REL 8.7, Ubuntu 22.04, Oracle Linux 8.7). I am running this behind an NGINX proxy and have had it 100% working with Keycloak 15.0.0 for about a year and a half now. Now we need to update to Keycloak 20.0 for OpenJDK issues and such. I updated the image in my docker compose YML configuration and the environment variables that changed in v20.0, and launched my application to have it update.
On 3 of the 4 OSes this worked 100% fine: it came up great, came up quick, and I love the v20.0 UI changes in Keycloak. I tried this on FIPS-enabled and FIPS-disabled setups, and all worked 100%. It works as expected with my application, behind NGINX, with no issues whatsoever found in the last two weeks.
However, on Red Hat 7.9, for whatever reason, I get no logs at all and nothing happens. I can get into the container with a docker exec -it xxxxxx /bin/sh type of command, but even a curl http://localhost:8080/auth/ turns up just a connection refused. It is almost like it is not running.
This happens whether I am updating a Keycloak 15.0.0 already setup, or if I remove that docker volume and start over from scratch. It just hangs there and does nothing.
And this only happens on REL 7.9. The other operating systems work great after a few minutes and respond correctly. I have even left it alone for up to 30 minutes to see if there was a process running, something hidden, a timeout, something else as a "ghost in the machine". But still nothing works.
I have searched for a while, read the readme files on updates, and started over fresh on the other OSes, and they all work - just not this one. So I am looking for guidance here on what to change/try; otherwise I cannot use Keycloak 20.0 on REL 7.9 until its EOL in June 2024.
The keycloak configuration that works on the other 3 OSes, with the same docker versions installed and all the same permissions setup via our Ansible setup, is below. I cannot figure out why REL 7.9 is the one holdout on this.
Any help, tips, or things to try are much appreciated. I am 8+ hours into this with nothing to show.
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
  depends_on:
    - postgres
  networks:
    - namednetwork
Voila!!
The line that #jayc mentioned - JAVA_OPTS_APPEND="-Dcom.redhat.fips=false" - was the answer. Thank you so much! This worked on my REL 7.9 box with FIPS enabled, 100% every time, with the Keycloak 20.0.3 container I built from the steps mentioned at https://www.keycloak.org/server/containers.
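If you need to confirm whether a given host is actually running with FIPS mode enabled (the condition behind this failure), the Linux kernel exposes it under /proc; a quick check (a missing file simply means FIPS is not enabled):

```shell
# 1 = FIPS mode enabled, 0 or a missing file = disabled.
fips=$(cat /proc/sys/crypto/fips_enabled 2>/dev/null || echo 0)
echo "FIPS mode: ${fips}"
```

That makes it easy to apply the JAVA_OPTS_APPEND workaround only on the hosts that need it.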
keycloak:
  image: keycloak-mybuild:20.0.3
  restart: on-failure:5
  container_name: keycloak
  command: start --optimized
  ports:
    - "8080"
  environment:
    - KC_DB=postgres
    - KC_DB_URL=jdbc:postgresql://postgres:5432/xxxxxxxx
    - KC_DB_USERNAME=xxxxxxxx
    - KC_DB_PASSWORD=xxxxxxxx
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=xxxxxxxx
    - KC_HOSTNAME_STRICT=false
    - KC_HOSTNAME_STRICT_HTTPS=false
    - PROXY_ADDRESS_FORWARDING=true
    - KC_HTTP_RELATIVE_PATH=/auth
    - KC_HTTP_ENABLED=true
    - KC_HTTP_PORT=8080
    - JAVA_OPTS_APPEND="-Dcom.redhat.fips=false"
  depends_on:
    - postgres
  networks:
    - namednetwork

ERROR Disk error while locking directory /var/kafka-logs in Kafka 3.1.0

I am using Kafka 3.1.0, Portainer 2.9.0 and Docker 20.10.11 to build a cluster with 1 broker, 1 consumer and 1 producer.
I am trying to map the log dirs from the container to the host machine via docker-compose in order to persist the contents of that directory (because if the container goes down, that information would be lost). I know it is recommended to have more than 1 broker, but since I am just testing this feature, I don't want to overcomplicate things.
The problem I get is
ERROR Disk error while locking directory /var/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /var/kafka-logs/.lock
[2022-03-31 12:00:53,986] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I have checked that the user that executes the broker has all permissions (since I created that directory in my Dockerfile).
RUN mkdir /var/kafka-logs \
    && chown -R kafka:kafka /var/kafka-logs \
    && chmod -R 777 /var/kafka-logs
I have seen that this problem existed in version 3.0 and was fixed in 3.1, and also that it only happened on Windows, so I don't know the source of this problem.
Edit: I have checked and even without the mapping it still prints that error. It must be a problem with changing the log.dirs property to a non-/tmp directory, because if I leave the default configuration it works just fine.
By default I mean the following:
log.dirs=/tmp/kafka-logs
My docker-compose:
version: "3.8"
networks:
  net:
    external: true
services:
  kafka-broker1:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    volumes:
      - /var/volumes/kafka/config/server1.properties:/opt/kafka/config/server.properties
    networks:
      - net
  kafka-producer:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    stdin_open: true
    tty: true
    networks:
      - net
  kafka-consumer:
    image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
    stdin_open: true
    tty: true
    networks:
      - net
The problem was that I had been creating several Docker images and a container with the same name, and the container didn't pick up the newest image.
Once I erased the old images and the container picked up the latest one, it all worked just fine, so it was basically a problem of not having enough permissions to get the lock on that directory.
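Kafka guards each directory in log.dirs by taking an exclusive flock()-style lock on a .lock file inside it, so a stale broker process (for example an old container still attached to the same volume) or a directory the broker user cannot write to both surface as this "Disk error while locking directory". A minimal sketch of the locking behaviour, using Python's fcntl as a stand-in for Kafka's FileLock:

```python
import fcntl
import os
import tempfile

# Kafka creates a ".lock" file in each log dir and flocks it
# exclusively; only one broker at a time can hold the lock.
lock_path = os.path.join(tempfile.mkdtemp(), ".lock")

fd1 = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o644)
fcntl.flock(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # first broker: acquired

# A second open file description on the same file cannot take the lock.
fd2 = os.open(lock_path, os.O_RDWR)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_acquired = True
except BlockingIOError:
    second_acquired = False  # what the failing broker experiences

print("second lock acquired:", second_acquired)  # False
```

This is why removing the stale container (or fixing directory ownership) resolves the error.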

Running Percona server in Docker fails with socket error

I've been trying, and failing, to get Percona Server (version 8 on CentOS) running as a lone service inside a docker-compose.yml file. The error that keeps coming up is:
mysql | 2020-03-16T23:04:25.189164Z 0 [ERROR] [MY-010270] [Server] Can't start server : Bind on unix socket: File name too long
mysql | 2020-03-16T23:04:25.189373Z 0 [ERROR] [MY-010258] [Server] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
mysql | 2020-03-16T23:04:25.190581Z 0 [ERROR] [MY-010119] [Server] Aborting
mysql | 2020-03-16T23:04:26.438533Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.18-9) Percona Server (GPL), Release 9, Revision 53e606f.
My docker-compose.yml file is as follows:
version: '3.7'
services:
  mysql:
    container_name: mysql
    image: percona:8-centos
    volumes:
      - ./docker/mysql/setup:/docker-entrypoint-initdb.d
      - ./docker/mysql/data:/var/lib/mysql
      - ./docker/mysql/conf:/etc/mysql/conf.d:ro
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=<redacted>
      - MYSQL_USER=<redacted>
      - MYSQL_PASSWORD=<redacted>
    stop_grace_period: 20s
    restart: always
A few things to note:
My my.cnf file, which lives on the host under docker/mysql/conf/, declares the location of the socket file as /var/run/mysql.sock instead of /var/lib/mysql/mysql.sock. Why would mysqld still be trying to use a different socket file path than the one I declared in my own config file? (And yes, my config file IS being picked up, because when it used to have deprecated options declared inside it, mysqld complained and failed to start.)
In the beginning, I left the socket file path setting alone and allowed it to use the default location; however, it resulted in the same exact error.
The documentation at the Percona Docker Hub page contains contradictions, one of the important ones being that they mention the config directory /etc/my.cnf.d inside the container, and then when they give an example they instead mention /etc/mysql/conf.d; the discrepancy makes me lose confidence in the entire rest of the documentation. Indeed, my lack of confidence now seems well-placed, since the official image fails to run properly out of the box.
So, does anyone know how to use the official Percona images? (Or am I going to be forced to roll my own service using my own Dockerfile?)
I was also getting the same error, on macOS.
Taking a hint from the error "File name too long", I moved my entire project into my home directory, so that my compose file was at ~/myproject/docker-compose.yml. (Maybe you can try moving it to the root dir, just to avoid any confusion about what ~/ expands to.)
That did the trick, and the mysql image was up again without any error.
PS: I am not saying that you need to place your project in your home dir, but you need to find the shortest folder path that works for your project.
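The reason path length matters here: Unix domain socket paths must fit in the sun_path field of struct sockaddr_un, which is only about 104-108 bytes depending on the OS. A deeply nested project directory can push the bind-mounted mysql.sock path over that limit, which mysqld reports as "File name too long". A small demonstration:

```python
import os
import socket
import tempfile

def can_bind(path: str) -> bool:
    """Try to bind a Unix domain socket at `path`."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.bind(path)
        return True
    except OSError:  # e.g. "AF_UNIX path too long"
        return False
    finally:
        s.close()
        try:
            os.unlink(path)
        except OSError:
            pass

short_path = os.path.join(tempfile.mkdtemp(), "mysql.sock")
long_path = os.path.join(tempfile.mkdtemp(), "a" * 200 + ".sock")

print(can_bind(short_path))  # True
print(can_bind(long_path))   # False: path exceeds sun_path
```

So "find the shortest folder path that works" is exactly right: the total socket path, not the file name alone, has to stay under the limit.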

Confluence on Docker runs setup assistant on existing installation after update

A few days ago, my Watchtower updated Confluence on Docker with the 6.15.1-alpine tag. It's hosted using Atlassian's official image. Since that update, Confluence shows the setup screen, and I haven't had any chance to get into the admin panel. When continuing the wizard and entering the database credentials of the existing installation, it gave an error that an installation already exists and would be overwritten if I continued.
It was a re-push of the exact version tag 6.15.1, not a regular version update, so there seems to be no possibility to use the old, working image. Other versions also seem to have been re-pushed; I tried some older ones and also a newer one, without success.
docker-compose.yml
version: "2"
volumes:
  confluence-home:
services:
  confluence:
    container_name: confluence
    image: atlassian/confluence-server:6.15.1-alpine
    #restart: always
    mem_limit: 6g
    volumes:
      - confluence-home:/var/atlassian/application-data/confluence
      - ./confluence.cfg.xml:/var/atlassian/application-data/confluence/confluence.cfg.xml
      - ./server.xml:/opt/atlassian/confluence/conf/server.xml
      - ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
    networks:
      - traefik
    environment:
      - "TZ=Europe/Berlin"
      - JVM_MINIMUM_MEMORY=4096m
      - JVM_MAXIMUM_MEMORY=4096m
    labels:
      - "traefik.port=8090"
      - "traefik.backend=confluence"
      - "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
  traefik:
    external: true
I found out that there were the following changes on the images:
Ownership
The logs threw errors about not being able to write to log files, because nearly the entire home directory was owned by a user called bin:
root@8ac38faa94f1:/var/atlassian/application-data/confluence# ls -l
total 108
drwx------ 2 bin bin 4096 Aug 19 00:03 analytics-logs
drwx------ 3 bin bin 4096 Jun 15 2017 attachments
drwx------ 2 bin bin 24576 Jan 12 2019 backups
[...]
This could be fixed by executing a chown:
docker exec -it confluence bash
chown confluence:confluence -R /var/atlassian/application-data/confluence
Mounts inside a mount
My docker-compose.yml mounts a volume to /var/atlassian/application-data/confluence, and inside that volume, the confluence.cfg.xml file was mounted from the current directory. This somewhat older approach separates the user data in the volume from configuration files like docker-compose.yml and confluence.cfg.xml, and also from the application itself.
This seems to no longer work properly with Docker 17.05 and Docker Compose 1.8.0 (at least in combination with Confluence), so I simply removed that second mount and placed the configuration file inside the volume.
Atlassian now creates config files dynamically
It was noticeable that my mounted configuration files like confluence.cfg.xml and server.xml were overwritten by Atlassian's container. Their source code shows that they now use Jinja2, a common Python template engine used in e.g. Ansible. A Python script parses those templates on startup and creates Confluence's configuration files, without properly checking whether all of those files already exist.
Mounting them as read-only caused the app to crash, because this case is also not handled in their Python script. By analyzing their templates, I learned that they have replaced nearly every config item with environment variables. Not a bad approach, so I specified my server.xml parameters via env variables instead of mounting the entire file.
In my case, Confluence is behind a Traefik reverse proxy, and it's required to tell Confluence its final application URL for end users:
environment:
  - ATL_proxyName=confluence.my-domain.com
  - ATL_proxyPort=443
  - ATL_tomcat_scheme=https
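For reference, those ATL_* variables correspond to standard Tomcat Connector attributes; the rendered conf/server.xml should end up containing a connector roughly like the following (a sketch based on Tomcat's documented attributes, not Atlassian's exact template output):

```xml
<Connector port="8090"
           protocol="HTTP/1.1"
           proxyName="confluence.my-domain.com"
           proxyPort="443"
           scheme="https" />
```

This is what tells Confluence to generate end-user URLs pointing at the reverse proxy instead of the container.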
Final working docker-compose.yml
By applying all modifications above, accessing the existing installation works again using the following docker-compose.yml file:
version: "2"
volumes:
  confluence-home:
services:
  confluence:
    container_name: confluence
    image: atlassian/confluence-server:6.15.1
    #restart: always
    mem_limit: 6g
    volumes:
      - confluence-home:/var/atlassian/application-data/confluence
      - ./mysql-connector-java-5.1.42-bin.jar:/opt/atlassian/confluence/lib/mysql-connector-java-5.1.42-bin.jar
    networks:
      - traefik
    environment:
      - "TZ=Europe/Berlin"
      - JVM_MINIMUM_MEMORY=4096m
      - JVM_MAXIMUM_MEMORY=4096m
      - ATL_proxyName=confluence.my-domain.com
      - ATL_proxyPort=443
      - ATL_tomcat_scheme=https
    labels:
      - "traefik.port=8090"
      - "traefik.backend=confluence"
      - "traefik.frontend.rule=Host:confluence.my-domain.com"
networks:
  traefik:
    external: true

Set up Drone continuous integration with GitHub

I'm trying to set up a CI server inside a corporate network with Drone (open source edition). Its author describes Drone as a very simple solution, even for a programmer like me, though some points are not clear to me (maybe the official documentation misses them).
First, I've built a Docker image for my Rails application: rails-qna.
Next, composing drone images:
docker-compose.yml:
version: '2'
services:
  drone-server:
    image: drone/drone:0.5
    ports:
      - 80:8000
    volumes:
      - ./drone:/var/lib/drone/
    restart: always
    environment:
      - DRONE_OPEN=true
      - DRONE_ADMIN=khataev
      - DRONE_GITHUB_CLIENT=github-client-string
      - DRONE_GITHUB_SECRET=github-secret-string
      - DRONE_SECRET=drone-secret-string
  drone-agent:
    image: drone/drone:0.5
    command: agent
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=ws://drone-server:8000/ws/broker
      - DRONE_SECRET=drone-secret-string
The application is registered on GitHub and the secret/client strings are provided.
I placed a .drone.yml file in my project repository:
pipeline:
  build:
    image: rails-qna
    commands:
      - bundle exec rake db:drop
      - bundle exec rake db:create
      - bundle exec rake db:migrate
      - bundle exec rspec
Unclear moments:
1) While registering the OAuth application on GitHub, we should specify a Homepage URL and an authorization callback URL. Where should they point to? The drone-server container? Guessing so, I specified
mycorporatedomain.com:3005
and
mycorporatedomain.com:3005/authorize
and set up port forwarding from port 3005 to port 80 of the host where the Drone containers are running. Maybe I'm wrong?
2) What should I specify in the key DRONE_GITHUB_URL?
https://github.com or the full path to my project repository, i.e.
https://github.com/khataev/qna?
3) What if I want to build some branch other than master? Where should I specify it? For now the Drone-ready branch (the one with .drone.yml) is not the master branch - will that work?
4) Why are DRONE_GITHUB_GIT_USERNAME and DRONE_GITHUB_GIT_PASSWORD optional? How is it supposed to work if I don't specify the username and password for my GitHub account?
5) When I start the Drone images with docker-compose up, I get these errors:
→ docker-compose up
Starting drone_drone-server_1
Starting drone_drone-agent_1
Attaching to drone_drone-server_1, drone_drone-agent_1
drone-server_1 | time="2017-03-04T17:00:33Z" level=fatal msg="version control system not configured"
drone-agent_1 | 1:M 04 Mar 17:00:35.208 * connecting to server ws://drone-server:8000/ws/broker
drone-agent_1 | 1:M 04 Mar 17:00:35.229 # connection failed, retry in 15s. websocket.Dial ws://drone-server:8000/ws/broker: dial tcp: lookup drone-server on 127.0.0.11:53: no such host
drone_drone-server_1 exited with code 1
drone-server_1 | time="2017-03-04T16:53:38Z" level=fatal msg="version control system not configured"
UPD
5) This was solved - I forgot to specify
DRONE_GITHUB=true
The Homepage URL is the address of the server that Drone is running on,
e.g. http://155.200.100.0
The authorize URL is the same address appended with /authorize,
e.g. http://155.200.100.0/authorize
You don't have to specify that; DRONE_GITHUB=true tells Drone to use the GitHub URL.
You can limit a single section to a branch, or the whole Drone build.
Single Section:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
    when:
      branch: master
Whole build process:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
branches: master
You don't need a username and password when using OAuth.
Source:
http://readme.drone.io/admin/setup-github/
http://readme.drone.io/usage/skipping-builds/
http://readme.drone.io/usage/skipping-build-steps/
UPDATE:
The documentation has moved to http://docs.drone.io/ as of version 0.6 of Drone.
