Docker hosted Redis losing data

I am running standalone Redis using the Docker Hub image with a volume
for persistent storage (--appendonly yes); however, after a while all the keys in Redis disappear.
I haven't set an EXPIRE time for any of the keys.
I am running the container with the following command:
docker run -p 6379:6379 -v redis-vol:/data -d redis redis-server --appendonly yes
Can anyone please let me know what might be going wrong?
Thank you.

Yeah, you will keep losing the keys each time a new container is created, even though you have permanent storage (a volume).
What you are missing is setting the environment variables ALLOW_EMPTY_PASSWORD=yes and DISABLE_COMMANDS=FLUSHDB,FLUSHALL,CONFIG.
If you are using a docker-compose file, you can simply add them as:
redis:
  image: 'bitnami/redis:latest'
  environment:
    - ALLOW_EMPTY_PASSWORD=yes
    - DISABLE_COMMANDS=FLUSHDB,FLUSHALL,CONFIG
  container_name: haproxy_redis_auth_redis
  ports:
    - "6379:6379"
  volumes:
    - redis-data:/bitnami/redis/data
In that case, do not forget to make your application depend on the redis service:
application:
  build: .
  depends_on:
    - db
    - redis
I had the same issue, which was fixed after I took a look here:
https://hub.docker.com/r/bitnami/redis/
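Independent of that, it can also be worth confirming that persistence itself is working before assuming the volume is at fault. A rough sketch against the official image from the question (the <container_id> placeholder comes from docker ps):
# confirm AOF persistence is enabled inside the running container
docker exec -it <container_id> redis-cli CONFIG GET appendonly
# confirm the named volume exists and see where Docker keeps its data
docker volume inspect redis-vol
# the appendonly file should show up under /data in the container
docker exec -it <container_id> ls /data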

Related

Cannot exec into container using GitBash when using Docker Compose

I'm new to Docker Compose, but have used Docker for years. The screen shot below is of PowerShell and of GitBash. If I run containers without docker-compose I can docker exec -it <container_ref> /bin/bash with no problems from either of these shells.
However, when running with docker-compose up, both shells give no error when attempting to use docker-compose exec. They both just hang a few seconds and return to the prompt.
Lastly, for some reason I do get an error in GitBash when using what I know: docker exec.... I've used this for years, so I'm perplexed and posting a question. What does Docker Compose do that messes with Git Bash's docker ability, but not with PowerShell's? And why the hang when using docker-compose exec..., but no error?
I am using tty: true in the docker-compose.yml, but that honestly doesn't seem to make a difference. Not to throw a bunch of questions into one post, but could whatever is going on also be the reason I can't hit my web server in the browser, only when using Docker Compose to run it?
version: '3.8'
volumes:
  pgdata:
    external: true
services:
  db:
    image: postgres
    container_name: trac-db
    tty: true
    restart: 'unless-stopped'
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: iol
    volumes:
      - pgdata:/var/lib/postgresql/data
    network_mode: 'host'
    expose:
      - 5432
  web:
    image: lindben/trac-server
    container_name: trac-server
    tty: true
    restart: 'unless-stopped'
    environment:
      ADDRESS: localhost
      PORT: 3000
      NODE_ENV: development
    depends_on:
      - db
    network_mode: 'host'
    privileged: true
    expose:
      - 1234
      - 3000
I'm going to assume you're using Docker for Desktop, so the reason you can docker exec just fine using PowerShell is that on Windows docker is a native program/command, while GitBash is based on bash, a Linux shell (bash = Bourne-Again SHell), which doesn't get that for free.
So when using a Windows command that needs a TTY, you need some sort of "adapter" like winpty, for example, to bridge the gap between docker's interface and GitBash's.
Here's a more detailed explanation on winpty.
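In practice, prefixing the command with winpty from Git Bash is the usual workaround; a rough sketch (the container reference is the placeholder from your question, and the same idea may help with compose, addressing the service by name):
# run from Git Bash; winpty provides the TTY handling docker exec expects
winpty docker exec -it <container_ref> /bin/bash
# the same prefix can be tried with compose as well
winpty docker-compose exec web /bin/bash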
Putting all of this aside, if you're trying to use only the compose options, it may be better for you to consult this question.
Now, regarding your web service issue, I think you're not actually publicly exposing your application by using the expose tag. Take a look at the docker-compose expose reference. What you need is to add a "ports" tag like so, as referenced here:
db:
  ports:
    - "5432:5432"
web:
  ports:
    - "1234:1234"
    - "3000:3000"
Hope this solves your pickle ;)

How do I retain my content types within a dockerized strapi

I've been using strapi for docker (https://github.com/strapi/strapi-docker), but whenever I rebuild my container the data all disappears. I can still see it in the database, but the admin isn't recognizing it.
I tried recreating the content type, and then the records from the database appeared, but when I rebuilt the container again the content type disappeared.
Where are content definitions stored? Is this a bug with the app? (I think strapi-docker is using an alpha release)
How do I get strapi to retain my content definitions in the database, so I can use a stateless container?
UPDATE
I tried looking at the attached volume -
api:
  build: .
  env_file: './dev.env'
  ports:
    - 1337:1337
  volumes:
    - ./strapi-app:/usr/src/api/strapi-app
    #- /usr/src/api/strapi-app/node_modules
  restart: always
But there's nothing in it -
Aidans-MacBook:strapi-docker aidan$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
02b098286ada strapi-docker_api "docker-entrypoint.s…" 24 minutes ago Up 5 minutes (healthy) 0.0.0.0:1337->1337/tcp strapi-docker_api_1
Aidans-MacBook:strapi-docker aidan$ docker inspect -f "{{.Mounts}}" 02b098286ada
[{bind /Users/aidan/Documents/Code/beefbook/strapi-docker/strapi-app /usr/src/api/strapi-app rw true rprivate}]
Aidans-MacBook:strapi-docker aidan$ ls /Users/aidan/Documents/Code/beefbook/strapi-docker/strapi-app
Aidans-MacBook:strapi-docker aidan$
You need to mount the directories to keep the files persistent:
- ./strapi-app:/usr/src/api/strapi-app for the application
- ./db:/data/db for the DB
version: '3'
services:
  api:
    build: .
    image: strapi/strapi
    environment:
      - APP_NAME=strapi-app
      - DATABASE_CLIENT=mongo
      - DATABASE_HOST=db
      - DATABASE_PORT=27017
      - DATABASE_NAME=strapi
      - DATABASE_USERNAME=
      - DATABASE_PASSWORD=
      - DATABASE_SSL=false
      - DATABASE_AUTHENTICATION_DATABASE=strapi
      - HOST=localhost
    ports:
      - 1337:1337
    volumes:
      - ./strapi-app:/usr/src/api/strapi-app
      #- /usr/src/api/strapi-app/node_modules
    depends_on:
      - db
    restart: always
  db:
    image: mongo
    environment:
      - MONGO_INITDB_DATABASE=strapi
    ports:
      - 27017:27017
    volumes:
      - ./db:/data/db
    restart: always
Run docker-compose up and you will see that the data has now been persisted.
Updated:
After investigation by @Aidan:
Strapi uses the APP_NAME env var (the default is "strapi-app"), so the correct mount is /usr/src/api/beef-content (since, in my case, APP_NAME is beef-content). I'll use that to mount my volume.
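As an illustration, with APP_NAME=beef-content the relevant part of the api service would look roughly like this (a sketch showing only the keys that change):
api:
  environment:
    - APP_NAME=beef-content
  volumes:
    # the container path follows /usr/src/api/<APP_NAME>
    - ./beef-content:/usr/src/api/beef-content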
I may have a solution with my project "strapidocker-tools":
https://github.com/OliCpg/strapidocker-tools
This will let you back up, move and restore a full dockerised strapi project.
Be aware that it will work with two docker containers named strapi and strapi_db. You should not rename them (I'll change that later on). They get recreated upon restore.
Still a work in progress and not very elegant at the present time but it works for me.
Feedback is welcome.

Sidekiq in dockerised rails application on AWS

I have a docker compose file with this content.
version: '3'
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
    volumes:
      - postgres_data:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/var/lib/redis/data'
  sidekiq:
    build: .
    links:
      - db
      - redis
    command: bundle exec sidekiq
    volumes:
      - '.:/app'
  web:
    image: production_image
    ports:
      - "80:80"
    links:
      - db
      - redis
      - sidekiq
    restart: always
volumes:
  postgres_data:
  redis:
In this setup, to run sidekiq we run bundle exec sidekiq in the current directory. This works on my local machine in the development environment. But on the AWS EC2 instance, I am sending my docker-compose.yml file and running docker-compose up, and since the project code is not there, sidekiq fails. How should I run sidekiq on the EC2 instance without sending my code there, using only the Docker image of my code in the compose file?
The two important things you need to do are to remove the volumes: declaration that gets the actual application code from your local filesystem, and upload your built Docker image to some registry. Since you're otherwise on AWS, ECR is a ready option; public Docker Hub will work fine too.
Depending on how your Rails app is structured, it might make sense to use the same image with different commands for the main application and the Sidekiq worker(s), and it might work to just make it say
sidekiq:
  image: production_image
  command: bundle exec sidekiq
Since you're looking at AWS anyway, you should also consider the possibility of using hosted services for data storage (RDS for the database, ElastiCache for Redis). The important thing is to pass the locations of those data stores as environment variables so that you can change them later (maybe they would default to localhost for developer use, but always be something different when deployed).
You'll also notice that my examples don't have links:. Docker provides an internal DNS service for containers to find each other, and Docker Compose arranges for containers to be found via their service key in the YAML file.
Finally, you should be able to test this setup locally before deploying it to EC2. Run docker build and docker-compose up as needed; debug; and if it works then docker push the image(s) and launch it on Amazon.
version: '3'
volumes: *volumes_from_the_question
services:
  db: *db_from_the_question
  redis: *redis_from_the_question
  sidekiq:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/sidekiq:1.0
    environment:
      - PGHOST=db
      - REDIS_HOST=redis
  app:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
    ports:
      - "80:80"
    environment:
      - PGHOST=db
      - REDIS_HOST=redis
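For completeness, a rough sketch of the build-and-push workflow described above, assuming the AWS CLI v2 and reusing the placeholder account/region from the example:
# log in to the ECR registry (placeholder account ID and region)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# build and push the application image referenced in the compose file
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0 .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp/app:1.0
# then, on the EC2 instance, pull and start everything
docker-compose up -d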

How to use IP addresses instead of container names in docker compose networking

I'm using docker compose for a web application that I'm creating with asp.net core, postgres and redis. I have everything set up in compose to connect to postgres using the service name I've specified in the docker-compose.yml file. When trying to do the same with redis, I get an exception. After doing research it turns out this exception is a known issue and the workaround is using the IP address of the machine instead of a host name. However, I cannot figure out how to get the IP address of the redis service from the compose file. Is there a way to do that?
Edit
Here is the compose file
version: "3"
services:
postgres:
image: 'postgres:9.5'
env_file:
- '.env'
volumes:
- 'postgres:/var/lib/postgresql/data'
ports:
- '5433:5432'
redis:
image: 'redis:3.0-alpine'
command: redis-server --requirepass devpassword
volumes:
- 'redis:/var/lib/redis/data'
ports:
- '6378:6379'
web:
build: .
env_file:
- '.env'
ports:
- "8000:80"
volumes:
- './src/edb/Controllers:/app/Controllers'
- './src/edb/Views:/app/Views'
- './src/edb/wwwroot:/app/wwwroot'
- './src/edb/Lib:/app/Lib'
volumes:
postgres:
redis:
Ok, I found the answer. It was something I was trying, but I didn't realize the address may change every time you restart the containers.
Run docker ps to get a list of running containers, then copy the ID of your container and run docker inspect {container_id}; that will output the IP address you can use to reach it from within the other running containers.
The reason I was confused was that the address may change when the containers are started, so I had to guess what the IP address was going to be before I started the containers. Luckily, after 5 tries I guessed correctly.
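For reference, the lookup described above can be done in one step with a format string; a sketch ({container_id} is the same placeholder as above):
# list running containers to find the ID of the redis container
docker ps
# print just that container's IP address on its attached networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' {container_id}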

Restart Docker Containers when they Crash Automatically

I want to automatically restart a container if it crashes. I am not sure how to go about doing this. I have a compose file, docker-compose-deps.yml, that has elasticsearch, redis, nats, and mongo. I run this in the terminal to set it up: docker-compose -f docker-compose-deps.yml up -d. After this I set up my containers by running docker-compose up -d. Is there a way to make these containers restart if they crash? I noticed that docker has a built-in restart, but I don't know how to implement this.
After some feedback I added restart: always to my docker-compose file and my docker-compose-deps.yml file. Does this look correct? Or is this how you would implement restart: always?
docker-compose sample
myproject-server:
  build: "../myproject-server"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5880:5880
    - 6971:6971
  volumes:
    - "../myproject-server/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
myproject-associate:
  build: "../myproject-associate"
  dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 5870:5870
  volumes:
    - "../myproject-associate/src:/src"
  working_dir: "/src"
  external_links:
    - nats
    - mongo
    - elasticsearch
    - redis
docker-compose-deps.yml sample
nats:
  image: nats
  container_name: nats
  restart: always
  ports:
    - 4222:4222
mongo:
  image: mongo
  container_name: mongo
  restart: always
  volumes:
    - "./data:/data"
  ports:
    - 27017:27017
If you're using compose, it has a restart flag which is analogous to the one in the docker run command, so you can use that. Here is a link to the documentation about this part:
https://docs.docker.com/compose/compose-file/
When you deploy out, it depends where you deploy to. Most container clusters like kubernetes, mesos or ECS would have some configuration you can use to auto-restart your containers. If you don't use any of these tools you are probably starting your containers manually and can then just use the restart flag just as you would locally.
Looks good to me. What you want to understand when working with Docker restart policies is what each one means. The always policy means that if the container crashes for any reason, it is automatically restarted.
So if it stops for any reason, go ahead and restart it.
So why would you ever want to use always as opposed to, say, on-failure?
In some cases, you might have a container that you always want to ensure is running, such as a web server. If you are running a public web application, chances are you want that server to be available 100% of the time.
So for a web application I expect you want to use always. On the other hand, if you are running a worker process that operates on a file and then naturally exits, that is a good use case for the on-failure policy: the worker container has finished processing the file and you probably want to let it close out and not have it restart.
That's where I would expect to use the on-failure policy. It's not just about knowing the syntax, but when to apply which policy and what each one means.
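To make the contrast concrete, here is a minimal sketch of the two policies in compose syntax (service names and images are illustrative, not taken from the question):
web:
  image: nginx              # long-running server: restart no matter why it stopped
  restart: always
worker:
  image: myproject-worker   # hypothetical one-shot job: restart only on a non-zero exit
  restart: on-failure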
