Google Cloud SQL Docker Proxy - docker

I am trying to connect to my Cloud SQL (Postgres) instance from my local environment (OSX) using the Google Cloud SQL Docker proxy as documented here. When running the proxy I get:
google: could not find default credentials.
Note that I am running gcloud in my local environment within the right project and have authenticated through application-default login. I understand that in similar questions this is what solved the issue; however, that is not the case for me.

Specifying the absolute path of the key file solved my problem.

Here is a good resource for this issue: https://cloud.google.com/sql/docs/mysql/connect-admin-proxy
Look at where you specify PATH_TO_KEY_FILE here:
docker run -d \
-v PATH_TO_KEY_FILE:/config \
-p 127.0.0.1:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \
-instances=INSTANCE_CONNECTION_NAME=tcp:0.0.0.0:3306 \
-credential_file=/config
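For a Postgres instance like the one in the question, a concrete invocation might look something like this (the key path, project, region and instance name below are placeholders, not values from the question):
docker run -d \
-v /Users/yourname/keys/sql-proxy-key.json:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \
-instances=my-project:europe-west1:my-postgres=tcp:0.0.0.0:5432 \
-credential_file=/config
The point is that the left-hand side of the -v flag must be the absolute path to the key file on your host: Docker treats a non-absolute value there as a named volume name rather than a host path, so the key never ends up inside the container and the proxy falls back to looking for default credentials.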

Related

boto3: config profile could not be found

I'm testing my lambda function wrapped in a docker image and provided the environment variable AWS_PROFILE=my-profile for the lambda function. However, I got the error "The config profile (my-profile) could not be found", even though this information is in my ~/.aws/credentials and ~/.aws/config files. Below are my commands:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 <image>:latest lambda_func.handler
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body":{"x":5, "y":6}}'
The thing is that if I just run the lambda function as a separate Python script, it works.
Can someone show me what went wrong here?
Thanks
When AWS shows how to use their containers, such as for local AWS Glue, they share ~/.aws/ with the container in read-only mode using the volume option:
-v ~/.aws:/root/.aws:ro
Thus, if you wish to follow the AWS example, your docker command could be:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 -v ~/.aws:/root/.aws:ro <image>:latest lambda_func.handler
The other way is to pass the AWS credentials using docker environment variables, which you already are trying.
You need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Your home directory (~) is not copied into the Docker container, so AWS_PROFILE will not work.
See here for an example: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
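For example, you could read the keys out of your existing profile on the host and pass them through (a sketch; aws configure get reads from your ~/.aws files, and the image and handler names are the ones from the question):
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id --profile my-profile)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key --profile my-profile)
docker run -e BUCKET_NAME=my-bucket \
-e AWS_ACCESS_KEY_ID \
-e AWS_SECRET_ACCESS_KEY \
-p 9000:8080 <image>:latest lambda_func.handler
Passing -e VAR with no value tells docker to copy that variable's current value from your shell into the container.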

Can't log in to app after deploying on Cloud Run

I have deployed my doccano app on Cloud Run but can't log in. It keeps complaining "Incorrect username or password".
I think this is because I haven't provided the auth arguments; locally I usually start the container with this kind of command:
docker container create --name doccano \
-e "ADMIN_USERNAME=admin" \
-e "ADMIN_EMAIL=admin#example.com" \
-e "ADMIN_PASSWORD=password" \
-p 8000:8000 chakkiworks/doccano
But I don't know where on GCP Cloud Run I can add such information.
Can someone help? Thanks!
You can add your environment variables on the VARIABLES tab (right next to the CONTAINER tab) under Advanced settings; set them as name/value pairs.
Or update your service via command line:
gcloud run services update SERVICE --update-env-vars KEY1=VALUE1,KEY2=VALUE2
If you have multiple env variables, separate them with a comma ','.
However, in production, please avoid storing sensitive info in environment variables, because it can easily be accessed in plaintext. I recommend that you take the time to learn how to use Secret Manager.
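As a sketch of that approach (this assumes a secret named doccano-admin-password already exists in Secret Manager and your Cloud Run service account has the Secret Manager Secret Accessor role), recent gcloud versions let you expose a secret to the service as an environment variable:
gcloud run services update SERVICE \
--update-secrets=ADMIN_PASSWORD=doccano-admin-password:latest
The value itself then lives in Secret Manager rather than in the revision's plain environment variable list.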

How to launch a rails console in a Fargate container

I would like to open a Rails console in a Fargate container to interact with my production installation.
However, after searching the web and posting in the AWS forum, I could not find an answer to this question.
Does anyone know how I can do this? This seems like a mandatory thing to have in any production environment, and having no easy way to do it is kind of surprising coming from such a respected cloud provider as AWS.
Thanks
[Update 2021]: It is now possible to run a command in interactive mode with AWS Fargate!
News: https://aws.amazon.com/fr/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
The command to run is (the --container flag is optional when the task runs only one container):
aws ecs execute-command \
--cluster cluster-name \
--task task-id \
--container container-name \
--interactive \
--command "rails c"
Troubleshooting:
Check the AWS doc for IAM permissions: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites
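One prerequisite that is easy to miss (not an exhaustive checklist, just the one that bites most often): ECS Exec has to be switched on for the service before execute-command will work, for example:
aws ecs update-service \
--cluster cluster-name \
--service service-name \
--enable-execute-command \
--force-new-deployment
Tasks started before the flag was enabled won't accept exec sessions, hence the forced new deployment.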
After trying lots of things, I found a way to open a Rails console pointing to my production environment, so I will post it here in case somebody comes across the same issue.
To summarise, I had a Rails application deployed on Fargate connected to an RDS Postgres database.
What I did was create a VPN Client Endpoint into the VPC hosting my Rails app and my RDS database.
Then, after connecting to this VPN, I simply ran my production Rails container locally (with the same environment variables), overriding the container command to start the console (bundle exec rails c production).
Since the container runs on my local machine, I can attach a TTY to it as usual and access my production console.
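As a rough sketch of that local command (the image name and env file below are placeholders for whatever your own deployment uses, not values from a real setup):
docker run -it --rm \
--env-file production.env \
your-registry/your-rails-app:latest \
bundle exec rails c production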
I think this solution is good because it allows any developer working on the project to open a console without incurring extra costs, and a well-thought-out security policy on the AWS side ensures that console access stays secure; plus, you don't have to expose your database outside of your VPC.
Hope this helped someone
Doing any sort of docker exec is a nightmare with ECS and Fargate, which makes things like shells or migrations very difficult.
Thankfully, a Fargate task on ECS is really just an AWS server running a few super-customized docker run commands. So if you have docker, jq, and the AWS CLI on either EC2 or your local machine, you can fake some of those docker run commands yourself and enter a bash shell. I do this for Django so I can run migrations and enter a Python shell, but I'd assume it's the same for Rails (or any other container that you need bash in).
Note that this only works if you only care about one container spelled out in your task definition running at a time, although I'd imagine you could jerry-rig something more complex easily enough.
For this, the AWS CLI needs to be logged in with the same IAM permissions as your Fargate task. You can do this locally by using aws configure and providing credentials for a user with the correct IAM permissions, or by launching an EC2 instance that has a role with either identical permissions or (to keep things really simple) the role that your Fargate task runs as, plus a security group with identical access (and a rule that lets you SSH into the bastion host). I like the EC2 route, because funneling everything through the public internet and a VPN is... slow. Plus you're always guaranteed to have the same IAM access as your tasks do.
You'll also need to be on the same subnet as your Fargate tasks, which can usually be done via a VPN, or by running this code on a bastion EC2 host inside your private subnet.
In my case I store my configuration parameters as SecureStrings within the AWS Systems Manager Parameter Store and pass them in using the ECS task definition. Those can be pretty easily acquired and set to a local environment variable using
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name parameter.name.database_url \
| jq '.Parameter["Value"]' -r)
I store my containers on ECR, so I then need to log my local docker client in to ECR:
eval $(aws ecr get-login --no-include-email --region $REGION)
Then it's just a case of running an interactive docker container that passes in the DATABASE_URL, pulls the correct image from ECR, and enters bash. I also expose port 8000 so I can run a webserver inside the shell if I want, but that's optional.
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG \
/bin/bash
Once you run that, you should see your copy of docker download the image from your container repository and then launch you into bash (assuming bash is installed inside your container). Docker has a pretty solid cache, so this will take a bit of time to download and launch the first time, but after that it should be pretty speedy.
Here's my full script
#!/bin/bash
# Defaults; override any of these by exporting the variable before running the script
REGION=${REGION:-us-west-2}
ENVIRONMENT=${ENVIRONMENT:-staging}
DOCKER_REPO_NAME=${DOCKER_REPO_NAME:-reponame}
TAG=${TAG:-latest}
# Account ID of the currently authenticated credentials, used to build the ECR registry URL
ACCOUNT_ID=$(aws sts get-caller-identity | jq -r ".Account")
# Pull the database connection string out of SSM Parameter Store
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name projectname.$ENVIRONMENT.database_url \
| jq '.Parameter["Value"]' -r)
# Log the local docker client in to ECR
eval $(aws ecr get-login --no-include-email --region $REGION)
IMAGE=$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG
# Run the image interactively with the production DATABASE_URL and drop into bash
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$IMAGE \
/bin/bash
You cannot SSH to the underlying host when you are using the Fargate launch type for ECS. This means that you cannot docker exec into a running container.
I haven't tried this on Fargate, but you should be able to create a Fargate task in which the command is rails console.
Then if you configure the task as interactive, you should be able to launch the interactive container and have access to the console via stdin.
OK, so I ended up doing things a bit differently. Instead of trying to run the console on Fargate, I just run a console on my localhost, but configure it to use RAILS_ENV='production' and point it at my RDS instance.
Of course, to make this work you have to expose your RDS instance through an ingress rule in its security group. It's wise to configure the rule so that it only allows your local IP, to keep things a bit more secure.
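If you prefer to script that rule rather than click through the console, here is a sketch with the AWS CLI (the security group ID is a placeholder for the one attached to your RDS instance):
aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 5432 \
--cidr $(curl -s https://checkip.amazonaws.com)/32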
The docker-compose.yml then looks something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build: .
    volumes:
      - ./rails/.:/your-app
    ports:
      - "3000:3000"
    environment: &env_vars
      RAILS_ENV: 'production'
      PORT: '8080'
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      DATABASE_URL: 'postgresql://username:password@your-aws-rds-instance:5432/your-db'
When you then run docker-compose run web rails c, it uses your local Rails codebase but makes live changes to your RDS DB (the prime reason you'd want access to rails console anyway).

How to start Bitbucket server as a container in detached mode?

Many questions exists about how to run containers in detached mode.
My question, though, is specific to running Atlassian Bitbucket Server in detached-mode containers.
I tried the below as the last layer in my Dockerfile, and when I run the container with -d the process is not started:
RUN /opt/atlassian-bitbucket/bin/start-bitbucket.sh
I tried using ENTRYPOINT like below
ENTRYPOINT ["/opt/atlassian-bitbucket/bin/start-bitbucket.sh"]
but the container always exits after the start script completes.
I'm not sure if anyone has set up Bitbucket Data Center in containers, but I am curious to see how they would run multiple containers of the same image and have them join a single cluster.
Full disclosure: I work for Atlassian Premier Support, work closely with our Bitbucket Server team, and have been the primary maintainer of the atlassian/bitbucket-server Docker image for the past couple of years.
Short version
First: use our official image. There are a host of problems we've solved over the years, so rather than trying to start from scratch, use ours as a base.
Second: you can indeed run a Data Center cluster in Docker. My personal test environment consists of 3 cluster nodes and a couple of Smart Mirrors, all using the official image, with HAProxy in front acting as a load balancer and an external Elasticsearch instance managing search. Check out the README above for a list of common configuration options - the ones you'll likely need can be set by passing environment variables.
Long Version
AKA "How can I spin up a full DC cluster in a test environment?"
Here's a simple tutorial I put together for our own internal support teams a long time ago. It uses a custom HAProxy Docker container to give you an out-of-the-box load balancer. It's intended for testing on a single host, so if you want to do something different or closer to a production deployment, this won't cover that.
There's a lot to cover here, so let's start with the basics.
Networking
There are a few ways to connect up individual Docker containers so they can find each other and communicate (e.g. the --link parameter), but a Docker Network is by far the most flexible. With a dedicated network, we get the following:
Inter-container communication: Containers on the same network can communicate with each other and access services from other containers without the need to publish specific ports to the host.
Automatic DNS: Containers can find each other via their container name (defined by the --name parameter). Unlike real DNS however, when a container is down its DNS resolution ceases to exist. This can cause some issues for services like HAProxy - but we'll get to that later. Also worth noting is that this does not set the machine's hostname, which needs to be set separately if required.
Static IP assignment: For certain use cases it's useful to give Docker containers static IP addresses within their network
Multicast: Docker networks support multicast by default, which is perfect for Data Center nodes communicating over Hazelcast
One thing a Docker network doesn't do is attach the host to its network, so you, the user, can't connect to containers by container name, and you still need to publish ports to the local machine. However, there are situations where doing this is both useful and necessary. The simplest workaround is to add entries to your hosts file that point each container name you wish to access to the loopback address, 127.0.0.1
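For example, if you want to add them up front rather than one at a time as we go, the three container names this tutorial uses could be added in a single line (macOS/Linux; adjust for your OS):
echo "127.0.0.1 postgres elasticsearch bitbucketdc" | sudo tee -a /etc/hosts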
To create a Docker network, run the following command. In my example we're going to name our network atlasNetwork. If you want to use another name, remember to change the network name in all subsequent docker commands.
docker network create --driver bridge \
--subnet=10.255.0.0/16 \
atlasNetwork
Here, we're creating a network using the bridge driver - this is the simplest type of network. More complex network types allow the network to span multiple hosts. We're also manually specifying the subnet - if we leave this out Docker will choose one at random, and it could conflict with an existing network subnet, so it's safest to choose our own. We're also specifying a /16 mask to allow us to use IP address ranges within the last two octets - this will come up later!
Storage
Persistent data such as $BITBUCKET_HOME, or your database files, need to be stored somewhere outside of the container itself. For our test environment, we can simply store these directly on the host, our local OS. This means we can edit config files using our favourite text editor, which is pretty handy!
In the examples below, we're going to store our data files in the folder ~/dockerdata. There's no need to create this folder or any subfolders, as Docker will do this automatically. If you want to use a different folder, make sure to update the examples below.
You may wonder why we're not using Docker's named volumes instead of mounting folders on the host. Named volumes are an easier to manage abstraction and are generally recommended; however for the purposes of a test environment (particularly on Docker for Mac, where you don't have direct access to the virtualised file system) there's a huge practical benefit to being able to examine each container's persistent data directly. You may want to edit a number of configuration files in Bitbucket, or Postgres, or HAProxy, and this can be difficult when using a named volume, as it requires you to open a shell into the container - and many containers don't contain basic text editor utilities (not even vi!). However, if you prefer to use volumes, you can do so simply by replacing the host folder with the named volume in all of the below examples.
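If you do go the named-volume route, the swap is mechanical; here is a sketch using the Bitbucket shared home from later in this tutorial:
docker volume create bitbucket-shared
# then, in the docker run command, replace the host path with the volume name:
#   -v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared
# becomes
#   -v bitbucket-shared:/var/atlassian/application-data/bitbucket/shared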
Database
The first service we need on our network is a database. Let's create a Postgres instance:
docker run -d \
--name postgres \
--restart=unless-stopped \
-e POSTGRES_PASSWORD=mysecretpassword \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v ~/dockerdata/postgres:/var/lib/postgresql/data/pgdata \
--network=atlasNetwork \
-p 5432:5432 \
postgres:latest
Let's examine what we're doing here:
-d
Run the container and detach from it (return to the prompt). Without this option, when the container starts we'll be attached directly to its stdout, and cancelling out would stop the container.
--name postgres
Set the name of the container to postgres, which also acts as its DNS record on our network.
--restart=unless-stopped
Sets the container to automatically start when Docker starts, unless you have explicitly stopped the container. This way, when you restart your computer, Postgres comes back up automatically
-e POSTGRES_PASSWORD=mysecretpassword
Sets password for the default postgres user to mysecretpassword
-e PGDATA=/var/lib/postgresql/data/pgdata
The official Postgres docker image recommends specifying this custom location when mounting the data folder to an external volume
-v ~/dockerdata/postgres:/var/lib/postgresql/data/pgdata
Mounts the folder /var/lib/postgresql/data/pgdata inside the container to an external volume, located on the host at ~/dockerdata/postgres. This folder will be created automatically
--network=atlasNetwork
Joins the container to our custom Docker network
-p 5432:5432
Publishes the Postgres port to the host machine, so we can access Postgres on localhost:5432. This isn't necessary for other containers to access the service, but it is necessary for us to get to it
postgres:latest
The latest version of the official Postgres docker image
Run the command, and hey presto, you can now access a fully functioning Postgres instance. For the sake of consistency, you may want to add your very first hosts entry here:
127.0.0.1 postgres
Now you, and any running containers, can access the instance at postgres:5432
Before you move on, you should connect to your database using your DB admin tool of choice. Connect to the hostname postgres with the username postgres, the default database postgres and the password mysecretpassword, and create a Bitbucket database ready to go:
CREATE USER bitbucket WITH PASSWORD 'bitbucket';
CREATE DATABASE bitbucket WITH OWNER bitbucket ENCODING 'UTF8';
If you don't have a DB admin tool handy, you can create a DB by using docker exec to run psql directly in the container:
# We need to run two commands because psql won't let
# you run CREATE DATABASE from a multi-command string
docker exec -it postgres psql -U postgres -c \
"CREATE USER bitbucket WITH PASSWORD 'bitbucket';"
docker exec -it postgres psql -U postgres -c \
"CREATE DATABASE bitbucket WITH OWNER bitbucket ENCODING 'UTF8';"
Elasticsearch
The next service we'll set up is Elasticsearch. We need a dedicated instance that all of our Data Center nodes can access. We have a great set of instructions on how to install a compatible version, configure it for use with Bitbucket, and install Atlassian's buckler security plugin: Install and configure a remote Elasticsearch instance
So how do we set this up in Docker? Well, it's easy:
docker pull dchevell/bitbucket-elasticsearch:latest
docker run -d \
--name elasticsearch \
-e AUTH_BASIC_USERNAME=bitbucket \
-e AUTH_BASIC_PASSWORD=mysecretpassword \
-v ~/dockerdata/elastic:/usr/share/elasticsearch/data \
--network=atlasNetwork \
-p 9200:9200 \
dchevell/bitbucket-elasticsearch:latest
Simply put, dchevell/bitbucket-elasticsearch is a pre-configured Docker image that is set up according to the instructions on Atlassian's Install and configure a remote Elasticsearch instance KB article. Atlassian's Buckler security plugin is installed for you, and you can configure the username and password with the environment variables seen above. Again, we're mounting a data volume to our host machine, joining it to our Docker network, and publishing a port so we can access it directly. This is solely for troubleshooting purposes, so if you want to poke around in your local Elasticsearch instance without going through Bitbucket, you can.
Now that's done, you can add your second hosts entry:
127.0.0.1 elasticsearch
HAProxy
Next, we'll set up HAProxy. Installing Bitbucket Data Center provides some example configuration, and again, we have a pre-configured Docker image that does all the hard work for us. But first, there are a few things we need to figure out.
HAProxy doesn't play well with a Docker network's DNS system. In the real world, if a system is down, the DNS record still exists and connections will simply time out. HAProxy handles this scenario just fine. But in a Docker network, when a container is stopped, its DNS record ceases to exist, and connections to it fail with an "Unknown host" error. HAProxy won't start when this happens, which means we can't configure it to proxy connections to our nodes by container name. Instead, we will need to give each node a static IP address, and configure HAProxy to use the IP address instead.
Even though we have yet to create our nodes, we can decide on the IP addresses for them now. Our Docker network's subnet is 10.255.0.0/16, and Docker will dynamically assign containers addresses on the last octet (e.g. 10.255.0.1, 10.255.0.2 and so on). Since we know this, we can safely assign our Bitbucket nodes static IP addresses using the second-last octet:
10.255.1.1
10.255.1.2
10.255.1.3
With that out of the way, there's one more thing. HAProxy is going to be the face of our instance, so its container name is going to represent the URL we use to access the instance. In this example, we'll call it bitbucketdc. We're also going to set the host name of the machine to be the same.
docker run -d \
--name bitbucketdc \
--hostname bitbucketdc \
-v ~/dockerdata/haproxy:/usr/local/etc/haproxy \
--network=atlasNetwork \
-e HTTP_NODES="10.255.1.1:7990,10.255.1.2:7990,10.255.1.3:7990" \
-e SSH_NODES="10.255.1.1:7999,10.255.1.2:7999,10.255.1.3:7999" \
-p 80:80 \
-p 443:443 \
-p 7999:7999 \
-p 8001:8001 \
dchevell/bitbucket-haproxy:latest
In the above example, we're specifying the HTTP endpoints of our future Bitbucket nodes, as well as the SSH endpoints, as a comma separated list. The container will turn this into valid HAProxy configuration. The proxied services will be available on port 80 and port 443, so we're publishing them both. This container is configured to automatically generate a self-signed SSL certificate based on the hostname of the machine, so we have HTTPS access available out of the box.
Since we're proxying SSH as well, we're also publishing port 7999, Bitbucket Server's default SSH port. You'll notice we're also publishing port 8001. This is to access HAProxy's Admin interface, so we can monitor which nodes are detected as up or down at any given time.
Lastly, we're mounting HAProxy's config folder to a data volume. This isn't really necessary, but it will let you directly access haproxy.cfg so you can get a feel for the configuration options there.
Now it's time for our third hosts entry. This one, since it impacts things like Base URL access, is absolutely required:
127.0.0.1 bitbucketdc
Bitbucket nodes
Finally we're ready to create our Bitbucket nodes. Since these are all going to be accessed via the load balancer, we don't have to publish any ports. However, for troubleshooting and testing purposes there are times when you'll want to hit a particular node directly, so we're going to publish each node to a different local port so we can access it directly when needed.
docker run -d \
--name=bitbucket_1 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.1 \
-p 7001:7990 \
-p 7991:7999 \
atlassian/bitbucket-server:latest
docker run -d \
--name=bitbucket_2 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.2 \
-p 7002:7990 \
-p 7992:7999 \
atlassian/bitbucket-server:latest
docker run -d \
--name=bitbucket_3 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.3 \
-p 7003:7990 \
-p 7993:7999 \
atlassian/bitbucket-server:latest
You can see that we're specifying the static IP addresses we decided on when we set up HAProxy. It's up to you whether you add hosts entries for these nodes, or simply access their ports via localhost. Since no other containers need to access our nodes via host name, it's not really necessary, and I personally haven't bothered.
The official Docker image adds the ability to set a Docker-only variable, ELASTICSEARCH_ENABLED=false, to prevent Elasticsearch from starting in the container. The remaining Hazelcast properties are natively supported in the official Docker image, because Bitbucket 5+ is based on Spring Boot and can automatically translate environment variables to their equivalent dot-separated properties for us.
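To make that translation concrete, here is roughly what the environment variables above correspond to in bitbucket.properties (my reading of the relaxed binding rules, so treat it as illustrative rather than authoritative):
server.proxy-name=bitbucketdc
server.proxy-port=443
server.scheme=https
server.secure=true
hazelcast.network.multicast=true
hazelcast.group.name=bitbucket-docker
hazelcast.group.password=bitbucket-docker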
Turn it all on
Now we're ready to go!
Access your instance at https://bitbucketdc (or whatever name you chose). Add a Data Center evaluation license (you can generate a 30-day one at https://my.atlassian.com) and connect it to your Postgres database. Log in, then go to Server Admin and connect your Elasticsearch instance (remember, it's running on port 9200, so set the Elasticsearch URL to http://elasticsearch:9200 and use the username and password we configured when we created the Elasticsearch container).
Visit the Clustering section in Server Admin, and you should see all of the nodes there, demonstrating that Multicast is working and the nodes have found each other.
That's it! Your Data Center instance is fully operational. You can use it as your daily instance by shutting down all but one node and simply using it as a single-node test instance - then, whenever you need, turn the additional nodes back on.
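For example, using the container names from above:
docker stop bitbucket_2 bitbucket_3
# ...and when you want the full cluster back:
docker start bitbucket_2 bitbucket_3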
See the official Docker image: https://hub.docker.com/r/atlassian/bitbucket-server/
Just run:
docker run -v /data/bitbucket:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
you can also take a look at the official dockerfile: https://hub.docker.com/r/atlassian/bitbucket-server/dockerfile
If you use this command to spin up the Bitbucket container on Docker Desktop, you may get the message below:
The path /data/bitbucket is not shared from the host and is not known to Docker. You can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.

neo4j-browser change bolt port for docker container

I'm trying to start up a neo4j container for test data and use a separate Bolt port.
docker run --env=NEO4J_AUTH=none \
--env=NEO4J_dbms_security_procedures_unrestricted=apoc.\\\* \
--publish=7475:7474 --publish=7688:7687 \
--volume=$HOME/neo4j/conf-test:/conf \
--volume=$HOME/neo4j/test-data:/data \
--volume=$HOME/neo4j/plugins:/plugins \
--name=neo4j-test neo4j
In $HOME/neo4j/conf-test/neo4j.conf file I have tried:
dbms.connector.bolt.listen_address=:7688 # doesn't do anything
dbms.connector.bolt=:7688 # error also error with =7688
dbms.connector.bolt.address=0.0.0.0:7688 # does nothing
When I open my browser at http://localhost:7475/browser/ it tries to connect to 7687.
I use the :server connect command to connect, but it doesn't save the settings, though it connects fine. Every time I refresh I have to enter them again.
Any thoughts?
I couldn't get this working with a config file, since the docker container kept overwriting the file with its own settings.
The trick for me was to note that the listen_address and advertised_address variables require a double underscore:
docker run \
-e NEO4J_dbms_connector_bolt_listen__address=:7688 \
-e NEO4J_dbms_connector_bolt_advertised__address=:7688 \
--rm \
--name neo4j \
--publish=7575:7474 \
--publish=7688:7687 \
neo4j
2018-02-07 11:33:34.593+0000 INFO Bolt enabled on 0.0.0.0:7688.
This got me running on the correct port!
Got it.
So I was missing advertised_address.
Leaving my docker run command alone, I just added the following lines (or modified them) in my $HOME/neo4j/conf-test/neo4j.conf file:
dbms.connector.bolt.listen_address=:7688
dbms.connector.bolt.advertised_address=:7688
Works for me.
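To sanity-check the new port you could run a trivial query through cypher-shell inside the container (assuming cypher-shell ships in your image and auth is disabled as in the question):
echo "RETURN 1;" | docker exec -i neo4j-test cypher-shell -a bolt://localhost:7688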
