I have a local project early in development which uses Nestjs and TypeORM to connect to a Docker postgres instance (called 'my_database_server'). Things were working on my old computer, an older Macbook Pro.
I've just migrated everything onto a new Macbook Pro with the new M2 chip (Apple silicon). I've downloaded the version of Docker Desktop that's appropriate for Apple silicon. It runs fine, it still shows 'my_database_server', it can launch that fine, and I can even use the Terminal to go into its Postgres db and see the data that existed in my old computer.
But I can't figure out how to adjust my project's config to get it to connect to this database. I've read in other articles that because Docker is now running on Apple silicon and uses emulation, the host should be different.
This is what my .env used to look like:
POSTGRES_HOST=127.0.0.1
POSTGRES_PORT=5432
POSTGRES_USER=postgres
On my new computer, the above doesn't connect. I have tried these other values for POSTGRES_HOST, many inspired by other SO posts, but they all yield Error: getaddrinfo ENOTFOUND _____:
my_database_server (the container name)
docker (since I didn't use a docker-compose.yaml file - see below - I don't know what the 'service name' is in this case)
192.168.65.0/24 (the "Docker subnet" value in Docker Desktop > Preferences > Resources > Network)
Next, for some other values I tried, the code spends longer trying to connect but gets stuck somewhere later in the process. With these, I eventually get Error: connect ETIMEDOUT ______:
192.168.65.0
172.17.0.2 (from another SO post, I tried the terminal command docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 78f6e532b324 - the last part being the container ID of my_database_server)
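Before changing hosts further, a quick sanity check worth running (my suggestion, not from the original post; the container name is taken from the question):
# Show which host ports the container actually publishes
docker ps --filter "name=my_database_server" --format '{{.Names}}\t{{.Ports}}'
# Check whether anything answers on the host side
nc -vz 127.0.0.1 5432
# Try a direct connection with psql, if installed; the password comes from the setup script below
psql -h 127.0.0.1 -p 5432 -U postgres -c '\l'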
In case it helps, I originally set up this docker container using the script I found here, not using a docker-compose.yaml file. Namely, I ran this script once at the beginning:
#!/bin/bash
set -e
SERVER="my_database_server";
PW="mysecretpassword";
DB="my_database";
echo "echo stop & remove old docker [$SERVER] and starting new fresh instance of [$SERVER]"
(docker kill $SERVER || :) && \
(docker rm $SERVER || :) && \
docker run --name $SERVER -e POSTGRES_PASSWORD=$PW \
-e PGPASSWORD=$PW \
-p 5432:5432 \
-d postgres
# wait for pg to start
echo "sleep wait for pg-server [$SERVER] to start";
sleep 3;
# create the db
echo "CREATE DATABASE $DB ENCODING 'UTF-8';" | docker exec -i $SERVER psql -U postgres
echo "\l" | docker exec -i $SERVER psql -U postgres
What should be my new db config settings?
I never figured the above problem out, but it was blocking me, so I found a different way around it.
Per other SO questions, I decided to go with the more typical route of using a docker-compose.yml file to create the Docker container. In case it helps others with this problem, this is what the main part of my docker-compose.yml looks like:
version: '3'
services:
  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=${DATABASE_USER}
      - POSTGRES_PASSWORD=${DATABASE_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
    container_name: postgres-db
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - "54320:5432"
I then always run this with docker-compose up -d, not starting the container through the Docker Desktop app (though after that command, you should see the new container light up in the app).
Then in .env, I have this critical part:
POSTGRES_HOST=localhost
POSTGRES_PORT=54320
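As a quick sanity check (my addition, not part of the original answer; the user and database names come from the compose file's variables), you can confirm the published port is reachable from the host:
nc -vz localhost 54320
psql -h localhost -p 54320 -U <your DATABASE_USER> <your DB_NAME>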
I mapped Docker's internal 5432 to the localhost-accessible 54320 (a suggestion I found here). Doing "5432:5432" as other articles suggest was not working for me, for reasons I don't entirely understand (possibly something else on the host was already bound to 5432).
Other articles suggest changing the host to whatever the service name is in your docker-compose.yml (for the example above, it would be db) - this also did not work for me, which makes sense in hindsight: the service name only resolves from inside other containers on the same Compose network, not from a process running directly on the host. The "54320:5432" part publishes the port to the host, so the host can remain localhost.
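For reference, a minimal sketch of how these settings might be wired up on the NestJS side. This is my illustration, not the original poster's code; it uses standard @nestjs/typeorm options, and the DATABASE_USER/DATABASE_PASSWORD/DB_NAME variable names are assumptions carried over from the compose file:
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: process.env.POSTGRES_HOST, // 'localhost'
      port: parseInt(process.env.POSTGRES_PORT, 10), // 54320, the published host port
      username: process.env.DATABASE_USER,
      password: process.env.DATABASE_PASSWORD,
      database: process.env.DB_NAME,
      autoLoadEntities: true, // pick up entities registered via forFeature()
      synchronize: true, // auto-sync schema; acceptable in early development only
    }),
  ],
})
export class AppModule {}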
Hope this helps others!
I have a Dockerized Solr and I have to update a configset. The file I modified is under this path: solrhome/server/solr/configsets/myconfig/conf and is called DIH_ContentIndex.xml. After that, I deleted my Docker images and containers with these commands:
docker container stop <solr_container_id>
docker container rm <solr_container_id>
docker image rm <solr_img_id>
docker volume rm <solr_volume>
I rebuilt everything, but Solr is not picking up the changes, as I can see when I go into the Files section. So I decided to add a configset (call it newconfig) with my changes, at the same level as the other one, then redid everything and restarted. But nothing. So I entered the container with docker exec -it --user root <solr_container_id> /bin/bash and changed the files manually (just to test); I stopped and restarted the container, but still nothing. Even after deleting everything about Docker again, I can still see my changes from inside the container. At this point, I think either I'm not deleting everything or I'm not placing my new config in the right directory. What else do I need to do for a clean build?
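For reference, a fully clean rebuild with Compose would look something like this (a sketch, using standard Compose flags; the service and volume names are taken from the fragment below):
docker-compose build --no-cache solr
docker-compose up -d --force-recreate solr
# the named volume holding /opt/solr survives container removal; remove it as well
docker volume rm <project>_vol_solr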
Here is the fragment of the docker-compose file I'm trying to launch, just in case this is at fault.
solr:
  container_name: "MySolr"
  build: ./solr
  restart: always
  hostname: solr
  ports:
    - "8983:8983"
  networks:
    - my-network
  volumes:
    - vol_solr:/opt/solr
  depends_on:
    - configdb
    - zookeeper
  environment:
    ZK_HOST: zookeeper:2181
Of course, everything else is running fine, so there is no error with the dependencies.
It is not a problem of browser cache. I already tried cleaning the cache and using a different browser.
Some updates: it actually does copy my config into the freshly built image, but I still can't select it from the frontend. Clearly, I'm placing my config files in the wrong path.
Thank you in advance!
Solved! All I had to do was:
Enter the Solr container (as root, to be able to install nano):
docker exec -it --user root <solr_container_id> /bin/bash
Copy my pre-existing config somewhere (next to bin for convenience) and modify the file DIH_ContentIndex.xml:
apt update
apt install nano
nano DIH_ContentIndex.xml
Go to solr/bin and upload the config to ZooKeeper, using this command:
solr zk upconfig -n config_name -d /path/to/config_folder
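The reason this is necessary (my explanation, based on the ZK_HOST setting in the compose fragment above): with ZK_HOST set, Solr runs in SolrCloud mode and serves configsets from ZooKeeper, not from the local filesystem, so rebuilding the image alone never changes the active config. A fully spelled-out variant of the upload, with the container ID, config name, and paths assumed from the question:
docker exec -it <solr_container_id> bin/solr zk upconfig \
  -n myconfig \
  -d server/solr/configsets/myconfig \
  -z zookeeper:2181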
I have a Keycloak installation running as a docker container in a docker-compose environment. Every night, my backup stops the relevant containers, performs a DB and volume backup, and restarts the containers again. For most containers this works, but Keycloak seems to have a problem with it and does not come up again afterwards. Looking at the logs, the error message is:
The batch failed with the following error: :
keycloak | WFLYCTL0062: Composite operation failed and was rolled back. Steps that failed:
keycloak | Step: step-9
keycloak | Operation: /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql.jdbc, driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
keycloak | Failure: WFLYCTL0212: Duplicate resource [
keycloak | ("subsystem" => "datasources"),
keycloak | ("jdbc-driver" => "postgresql")
keycloak | ]
The docker-compose.yml entry for Keycloak looks as follows (important data obviously removed):
keycloak:
  image: jboss/keycloak:8.0.1
  container_name: keycloak
  environment:
    - PROXY_ADDRESS_FORWARDING=true
    - DB_VENDOR=postgres
    - DB_ADDR=db
    - DB_DATABASE=keycloak
    - DB_USER=keycloak
    - DB_PASSWORD=<password>
    - VIRTUAL_HOST=<url>
    - VIRTUAL_PORT=8080
    - LETSENCRYPT_HOST=<url>
  volumes:
    - /opt/docker/keycloak-startup:/opt/jboss/startup-scripts
The volume I'm mapping is there to make some changes to WildFly to make sure it behaves well with the reverse proxy:
embed-server --std-out=echo
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=proxy-address-forwarding, \
value=true)
# Create new socket binding with proxy https port
/socket-binding-group=standard-sockets/ \
socket-binding=proxy-https \
:add(port=443)
# Enable https listener for the new security realm
/subsystem=undertow/ \
server=default-server/ \
http-listener=default \
:write-attribute(name=redirect-socket, \
value="proxy-https")
After stopping the container, it's not starting anymore, producing the messages shown above. Removing the container and re-creating it works fine, however. I tried removing the volume after the initial start; this doesn't really make a difference either. I already learned that I have to remove the KEYCLOAK_USER=admin and KEYCLOAK_PASSWORD environment variables after the initial boot, as otherwise the container complains that the user already exists and doesn't start anymore. Any idea how to fix that?
Update on 23rd of May 2021:
The issue has been resolved on Red Hat's Jira; it seems to be fixed in version 12. The related GitHub pull request can be found here: https://github.com/keycloak/keycloak-containers/pull/286
According to Red Hat support, this is a known "issue" and not supposed to be fixed. They want to concentrate on a workflow where a container is removed and recreated, not started and stopped. They agreed with the general problem, but stated that currently there are no resources available. Stopping and starting the container is an operation which is currently not supported.
See for example https://issues.redhat.com/browse/KEYCLOAK-13094?jql=project%20%3D%20KEYCLOAK%20AND%20text%20~%20%22docker%20restart%22 for reference
A legitimate use case for restarting is to add debug logging, for example to debug authentication with an external identity provider.
I ended up creating a shell script that does:
docker stop [container]
docker rm [container]
recreate the image I want, with changes to the logging configuration
docker run [options] [container]
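A concrete sketch of such a wrapper script (my illustration; the image name, container name, and run options are placeholders, not from the original answer):
#!/bin/bash
set -e
CONTAINER=keycloak
IMAGE=my-keycloak:debug

docker stop "$CONTAINER" || true
docker rm "$CONTAINER" || true
# rebuild the image so the changed logging configuration is baked in
docker build -t "$IMAGE" ./keycloak
docker run -d --name "$CONTAINER" -p 8080:8080 "$IMAGE"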
However, a nice feature of Docker is the ability to restart a stopped container automatically, decreasing downtime. This Keycloak bug takes that feature away.
I had the same problem here, and my solution was:
1 - Export the docker container to a .tar file:
docker export CONTAINER_NAME > latest.tar
2 - Create a new docker volume:
docker volume create VOLUME_NAME
3 - Start a new docker container mapping the created volume to the container's db path, something like this:
docker run --name keycloak2 -v keycloak_db:/opt/jboss/keycloak/standalone/data/ -p 8080:8080 -e PROXY_ADDRESS_FORWARDING=true -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=root jboss/keycloak
4 - Stop the container
5 - Unpack the tar file and find the database path, something like this:
tar unpack path: /opt/jboss/keycloak/standalone/data
6 - Move that path's contents into the docker volume; if you don't know where the physical path is, use docker volume inspect VOLUME_NAME to find it
7 - Start the stopped container
This worked for me; I hope it helps the next person with this problem.
I'm trying to configure docker-compose to use GreenPlum db in Ubuntu 16.04. Here is my docker-compose.yml:
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
The issue is that when I run it with sudo docker-compose up, the GreenPlum db shuts down immediately after starting. It looks like this:
greenplum_1 | 20170602:09:01:01:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Starting Master instance 72ba20be3774 directory /gpdata/master/gpseg-1
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Command pg_ctl reports Master 72ba20be3774 instance active
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-No standby master configured. skipping...
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Database successfully started
greenplum_1 | ALTER ROLE
dockergreenplumn_greenplum_1 exited with code 0 <<----- Here
Actually, when I start it with just sudo docker run pivotaldata/gpdb-base it's ok.
What's wrong with the docker compose?
First of all, be cautious running this image: it looks to be badly maintained, and the information on Docker Hub indicates it's neither "official" nor "supported" in any way:
2017-01-09: Toolsmiths reviewed this image; it is not one we create. We make no promises about whether this is up to date or if it works. Feel free to email pa-toolsmiths@pivotal.io if you are the owner and are interested in collaborating with us on this image.
When using images from Docker Hub, it's recommended to either use official images or, when not available, prefer automated builds (in which case the source code of the image can be verified to see what's used to build the image).
I think the image is built from this GitHub repository, which means it has not been updated for over a year and uses an outdated (CentOS 6.7) base image with a huge number of critical vulnerabilities.
Back to your question;
I tried starting the image, both with docker-compose and with docker run, and both gave the same result for me.
Looking at that image, it is designed to be run interactively, or to be used as a base image (with the command overridden).
I inspected the image to find out what the container's command is;
docker inspect --format='{{json .Config.Cmd}}' pivotaldata/gpdb-base
["/bin/sh","-c","echo \"127.0.0.1 $(cat ~/orig_hostname)\" >> /etc/hosts && service sshd start && su gpadmin -l -c \"/usr/local/bin/run.sh\" && /bin/bash"]
So, this is what's executed when the container is started;
echo "127.0.0.1 $(cat ~/orig_hostname)" >> /etc/hosts \
  && service sshd start \
  && su gpadmin -l -c "/usr/local/bin/run.sh" \
  && /bin/bash
Based on the above, there is no "foreground" process in the container, so the moment /usr/local/bin/run.sh finishes, a bash shell is started. A bash shell without a tty attached exits immediately, at which point the container exits.
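You can reproduce this behaviour with any image whose command ends in a bare shell (my example, not from the original answer):
docker run --rm debian bash        # no TTY: bash hits end-of-file on stdin and exits immediately
docker run --rm -it debian bash    # with stdin and a TTY attached, the shell stays open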
To run this image
(Again; be cautious running this image)
Either run the image interactively, by passing it stdin and a tty (-i -t, or -it as a shorthand);
docker run -it pivotaldata/gpdb-base
Or you can run it "detached", as long as a tty is passed as well (add the -d and -t flags, or -dt as a shorthand); doing so keeps the container running in the background;
docker run -dit pivotaldata/gpdb-base
To do the same in docker-compose, add a tty to your service;
tty: true
Your compose file will then look like this;
version: '2'
services:
greenplum:
image: "pivotaldata/gpdb-base"
ports:
- "5432:5432"
tty: true
volumes:
- gp_data:/tmp/gp
volumes:
gp_data:
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
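As a side note (my addition): on Compose versions that ship the exec subcommand (1.7 and later, I believe), window 1 can be reduced to:
docker-compose --file docker-compose-devel.yml up -d server
docker-compose --file docker-compose-devel.yml exec server bash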
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever:
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
Edit: a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added a few different kinds of configuration for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick, by reaching the server app through a port exposed on the host machine.
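A minimal sketch of that variant (my illustration; the address is an assumption and must be the host machine's IP, where the server's port is published):
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.10" # hypothetical host IP; the client reaches the server via its published port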
With this solution, each docker-compose.yml file could be committed to the repository of the related app.
First thing to mention: for a development environment, you want to use volumes in docker-compose to mount your app into the container when it's started (at runtime). Apologies if you're already doing this, but it's not clear from your docker-compose.yml definition.
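For example, a minimal sketch (the ./server path is an assumption about your project layout):
server:
  image: node:0.10
  volumes:
    - ./server:/src # mounts your working copy into the container at runtime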
To answer your specific question: start your containers normally; then, when running docker-compose ps, you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory of your docker-compose.yml file, or the name of the project).
Once you have the name of the container you want to connect to, you can run this command to launch bash exactly in the container that's running your server:
docker exec -it web_server bash
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
What I'm trying to do
I want to run a yesod web application in one docker container, linked to a postgres database in another docker container.
What I've tried
I have the following file hierarchy:
/
  api/
    Dockerfile
  database/
    Dockerfile
  docker-compose.yml
The docker-compose.yml looks like this:
database:
  build: database
api:
  build: api
  command: .cabal/bin/yesod devel # dev setting
  environment:
    - HOST=0.0.0.0
    - PGHOST=database
    - PGPORT=5432
    - PGUSER=postgres
    - PGPASS
    - PGDATABASE=postgres
  links:
    - database
  volumes:
    - api:/home/haskell/
  ports:
    - "3000:3000"
Running sudo docker-compose up either fails to start the api container at all or, just as often, fails with the following error:
api_1 | Yesod devel server. Press ENTER to quit
api_1 | yesod: <stdin>: hGetLine: end of file
personal_api_1 exited with code 1
If, however, I run sudo docker-compose up database & and then start up the api container without using compose, instead using
sudo docker run -p 3000:3000 -itv /home/me/projects/personal/api/:/home/haskell --link personal_database_1:database personal_api /bin/bash
I can export the environment variables set up in the docker-compose.yml file, then manually run yesod devel and visit my site successfully on localhost.
Finally, I obtain a third different behaviour if I run sudo docker-compose run api on its own. This seems to start successfully but I can't access the page in my browser. By running sudo docker-compose run api /bin/bash I've been able to explore this container and I can confirm the environment variables being set in docker-compose.yml are all set correctly.
Desired behaviour
I would like to get the result I achieve by running the database in the background and then manually setting the environment in the api container's shell, simply by running sudo docker-compose up.
Question
Clearly the three different approaches I'm trying do slightly different things. But from my understanding of docker and docker-compose I would expect them to be essentially equivalent. Please could someone explain how and why they differ and, if possible, how I might achieve my desired result?
The error message suggests the api container is expecting input from the command line, which requires a TTY to be present in your container.
In your "manual" start, you tell docker to create a TTY in the container via the -t flag (-itv is shorthand for -i -t -v), so the API container runs successfully.
To achieve the same in docker-compose, you'll have to add a tty key to the API service in your docker-compose.yml and set it to true;
database:
  build: database
api:
  build: api
  tty: true # <--- enable TTY for this service
  command: .cabal/bin/yesod devel # dev setting