I am trying to find the SQLite database URL so that I can run automation tests.
There is one SQLite database at a local path on my machine.
I did not see any SQL server container; the snippet below is the only place where I saw the database being attached, as a volume.
Is it common practice that all services run as Docker containers while the database sits locally and is attached to a container as a volume?
In that case, how do I know which port to use so that I can connect with DatabaseLibrary?
api:
  image: test/api/server:${TEST_VERSION}
  container_name: test_api_server
  privileged: true
  shm_size: 1gb
  ports:
    - ${API_SERVER_PORT}:${API_SERVER_PORT}
  environment:
    - DB_WATCHER_ADDR=http://watcher:${DB_WATCHER_API_PORT}
  volumes:
    - ${DB_DIR}:/home/test/api_server/db
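For what it's worth, SQLite is an embedded, file-based engine: there is no server process and therefore no port to connect to, so DatabaseLibrary's sqlite3 connection takes a file path rather than a host and port. One way a test container could reach the same data is to mount the same host directory; this is only a sketch, and the service name, image, and file name are assumptions:

tests:
  image: test/automation:${TEST_VERSION}   # hypothetical test-runner image
  volumes:
    # reuse the host directory the API container mounts
    - ${DB_DIR}:/home/test/api_server/db:ro
  environment:
    # sqlite3 connects by file path, not host/port
    - SQLITE_DB_PATH=/home/test/api_server/db/test.db   # file name is an assumption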
I'm trying to set up Apache Superset.
I want to access the Postgres and Redis services running on the host from a Superset instance running in Docker.
(I don't want to run containers for the DB and Redis, as I have already installed these services for my application.)
I see the following in the Superset documentation:
Note: Users often want to connect to other databases from Superset. Currently, the easiest way to do this is to modify the docker-compose-non-dev.yml file and add your database as a service that the other services depend on (via x-superset-depends-on).
Here I'm clueless about how to configure the YAML file for this. I'm pretty much new to Docker. Could anyone please guide me? Here is the services section of docker-compose-non-dev.yml.
redis:
  image: redis:7
  container_name: superset_cache
  restart: unless-stopped
  volumes:
    - redis:/data

db:
  env_file: docker/.env-non-dev
  image: postgres:14
  container_name: superset_db
  restart: unless-stopped
  volumes:
    - db_home:/var/lib/postgresql/data
If I understand correctly, you want to do two things:

1. Use existing services for those components
Just make sure they're accessible from your Superset docker container and then specify the paths to those resources in superset_config.py or superset_config_docker.py, which is presumably where you're setting other config options. E.g., to point to your Postgres service as the backend database, specify:
SQLALCHEMY_DATABASE_URI = 'postgresql://username:password@host/dbname'
where host is the hostname or IP address of your Postgres instance. For Redis, it's not as clear, but I think you'd set that location with REDIS_HOST= in your config. And you can use a .env file to store these strings instead of putting them directly in your config file.
2. Not run the unnecessary containers
You should be able to just delete the container specs you have in your post and remove both of them from the x-superset-depends-on: &superset-depends-on list at the top of the docker-compose file.
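For example, if the top of docker-compose-non-dev.yml defines the anchor as a plain list of service names (an assumption about your copy of the file), the trimmed version could look something like this:

# before
x-superset-depends-on: &superset-depends-on
  - db
  - redis

# after: db and redis now run on the host, so nothing is left to depend on
x-superset-depends-on: &superset-depends-on []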
I'm using Windows with Linux containers. I have a docker-compose file for an API and an MS SQL database. I'm trying to use volumes with the database so that my data will persist even if my container is deleted. My docker-compose file looks like this:
version: '3'
services:
  api:
    image: myimage/myimagename:myimagetag
    environment:
      - SQL_CONNECTION=myserverconnection
    ports:
      - 44384:80
    depends_on:
      - mydatabase
  mydatabase:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=mypassword
    volumes:
      - ./data:/data
    ports:
      - 1433:1433
volumes:
  sssvolume:
Everything spins up fine when I do docker-compose up. I enter data into the database and my API is able to access it. The issue I'm having is that when I stop everything, delete my database container, and then do docker-compose up again, the data is no longer there. I've tried creating an external volume first and adding
external: true
to the volumes section, but that hasn't worked. I've also messed around with the path of the volume: instead of ./data:/data I've had
sssvolume:/var/lib/docker/volumes/sssvolume/_data
but the same thing still happens. It was my understanding that if you name a volume and then reference it by name in a different container, it will use that volume.
I'm not sure if my config is wrong, or if I'm misunderstanding the use case for volumes and they aren't able to do what I want them to do.
MSSQL stores data under /var/opt/mssql, so you should change your volume definition in your docker-compose file to
volumes:
  - ./data:/var/opt/mssql
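With that change, the data will survive deleting the container. If you would rather use the sssvolume you already declared at the top level instead of a bind mount, the same idea applies; a minimal sketch of the database service (the rest of the file stays as it is):

mydatabase:
  image: mcr.microsoft.com/mssql/server:2019-latest
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=mypassword
  volumes:
    # named volume mounted at the directory MSSQL actually writes to
    - sssvolume:/var/opt/mssql
  ports:
    - 1433:1433

volumes:
  sssvolume: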
I have this docker-compose.yml with a Postgres database and Grafana running on top of it to run queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's of no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when the containers are recreated? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres, and some other kind of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting volumes to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual data folder.
My question is: must every image follow a particular pattern like the one above? Is the path where the data lives (i.e. the directory the volume should be mounted at) documented in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:

services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data

  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume, with the added convenience of having the files easily accessible, and not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). And you do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names usually isn't necessary.
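Note that named volumes declared like this already survive docker-compose down; only docker-compose down -v (or docker volume rm) deletes them. If you want the volume's lifecycle managed entirely outside the compose project, for example a volume created beforehand with docker volume create, you can mark it as external; a sketch, where the volume name is an assumption:

volumes:
  postgres:
    external: true
    # created beforehand with: docker volume create my_postgres_data
    name: my_postgres_data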
I am new to Docker and I am trying to dockerize this application I have written in Go. It is a simple web server that interacts with RabbitMQ and MongoDB.
It takes the credentials from a TOML file and loads them into a config struct before starting the application server on port 3000. These are the credentials:
mongo_server = "localhost"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@localhost:5672/"
If it can't connect to these URLs, it fails with an error. The following is my docker-compose.yml:
version: '3'
services:
  rabbitmq:
    image: rabbitmq
    ports:
      - 5672:5672
  mongodb:
    image: mongo
    ports:
      - 27017:27017
  web:
    build: .
    image: palash2504/collect
    container_name: collect_service
    ports:
      - 3000:3000
    depends_on:
      - rabbitmq
      - mongodb
    links: [rabbitmq, mongodb]
But it fails to connect to rabbitmq at the URL used for local development, i.e. amqp://guest:guest@localhost:5672/.
I realise that the rabbitmq container might be running at an address other than the one provided in the config file.
I would like to know the correct way to set env credentials so that my app can connect to rabbitmq.
Also, what approach would be best for changing my application code so that it initializes connections to external services? I was thinking about ditching the config.toml file and using os.Getenv and os.Setenv to get the URLs for the connections.
Localhost addresses are resolved, well, locally. They thus will not work inside containers, since they will look for a local address (i.e. inside the container).
Services can access each other by using their service names as addresses. So from the web container you can target mongodb, for example.
You might give this a shot:
mongo_server = "mongodb"
database = "collect_db"
rabbitmq_server = "amqp://guest:guest@rabbitmq/"
It is advisable to set service target environment variables in the compose file itself:
# docker-compose.yml
# ...other stuff...
web:
  # ...other stuff...
  environment:
    RABBITMQ_SERVER: rabbitmq
    MONGO_SERVER: mongodb
  depends_on:
    - rabbitmq
    - mongodb
This gives you a single place to make adjustments to the configuration.
As a side note, it seems to me that links: [rabbitmq, mongodb] can be removed. I would also advise against altering the container name (remove container_name: collect_service unless it is necessary).
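If the application expects full connection URLs rather than bare hostnames, as the current config.toml does, the same approach works; the variable names below are assumptions rather than ones your app already reads:

web:
  environment:
    # service names resolve on the compose network; the images' default ports are shown
    MONGO_URL: mongodb://mongodb:27017/collect_db
    RABBITMQ_URL: amqp://guest:guest@rabbitmq:5672/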
I have a Docker volume defined in my docker-compose.yml:
version: "2"
services:
postgres:
image: my_image/postgresql:9.3
volumes:
- test_volume:/var/lib/postgresql/data
ports:
- "5432:5432"
volumes:
test_volume:
I want to know: what is the standard way of backing up this data from the server?
Ideally I would like to just move the Docker volume between servers, for example from my production server to my sandbox server.
Or do people usually just dump a SQL backup file and move it somewhere else with an automation tool?