How to start Bitbucket server as a container in detached mode? - docker
Many questions exist about how to run containers in detached mode.
My question, though, is specific to running Atlassian Bitbucket Server in detached-mode containers.
I tried the below as the last layer in my Dockerfile, and when I run the container with -d the process is not started:
RUN /opt/atlassian-bitbucket/bin/start-bitbucket.sh
I also tried using ENTRYPOINT, like below:
ENTRYPOINT ["/opt/atlassian-bitbucket/bin/start-bitbucket.sh"]
but the container always exits after the start script completes.
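Aside: the container exits because Docker keeps a container running only as long as its main (PID 1) process is alive, and start-bitbucket.sh launches Bitbucket in the background and then returns. A hedged sketch of a workaround, assuming your version's start script supports the foreground flag that Atlassian's own image relies on:

ENTRYPOINT ["/opt/atlassian-bitbucket/bin/start-bitbucket.sh", "-fg"]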
Not sure if anyone has set up a Bitbucket Data Center in containers, but I am curious to see how they would have run multiple containers of the same image and made them join a single cluster.
Full disclosure: I work for Atlassian Premier Support, work closely with our Bitbucket Server team, and have been the primary maintainer of the atlassian/bitbucket-server Docker image for the past couple of years.
Short version
First: Use our official image. There are a host of problems we've solved over the years, so rather than trying to start from scratch, use ours as a base.
Second: You can indeed run a Data Center cluster in Docker. My personal test environment consists of 3 cluster nodes and a couple of Smart Mirrors, all using the official image, with HAProxy in front acting as a load balancer and an external Elasticsearch instance managing search. Check out the image's README for a list of common configuration options - the ones you'll likely need can be set by passing environment variables.
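For example, a minimal single-node run might look like the sketch below (not a full Data Center setup; the hostname and host folder are placeholders, and the environment variables are the same ones used in the cluster commands further down):

docker run -d \
--name bitbucket \
-p 7990:7990 -p 7999:7999 \
-v ~/dockerdata/bitbucket:/var/atlassian/application-data/bitbucket \
-e SERVER_PROXY_NAME=bitbucket.example.com \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
atlassian/bitbucket-server:latest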
Long Version
AKA "How can I spin up a full DC cluster in a test environment?"
Here's a simple tutorial I put together for our own internal support teams a long time ago. It uses a custom HAProxy Docker container to give you an out-of-the-box load balancer. It's intended for testing on a single host, so if you want to do something different or closer to a production deployment, this won't cover that.
There's a lot to cover here, so let's start with the basics.
Networking
There are a few ways to connect up individual Docker containers so they can find each other and communicate (e.g. the --link parameter), but a Docker Network is by far the most flexible. With a dedicated network, we get the following:
Inter-container communication: Containers on the same network can communicate with each other and access services from other containers without the need to publish specific ports to the host.
Automatic DNS: Containers can find each other via their container name (defined by the --name parameter). Unlike real DNS however, when a container is down its DNS resolution ceases to exist. This can cause some issues for services like HAProxy - but we'll get to that later. Also worth noting is that this does not set the machine's hostname, which needs to be set separately if required.
Static IP assignment: For certain use cases it's useful to give Docker containers static IP addresses within their network
Multicast: Docker networks support multicast by default, which is perfect for Data Center nodes communicating over Hazelcast
One thing a Docker network doesn't do is attach the host to its network, so you, the user, can't connect to containers by container name, and you still need to publish ports to the local machine. However, there are situations where doing this is both useful and necessary. The simplest workaround is to add entries to your hosts file that point each container name you wish to access to the loopback address, 127.0.0.1
To create a Docker network, run the following command. In my example we're going to name our network atlasNetwork. If you want to use another name, remember to change the network name in all subsequent docker commands.
docker network create --driver bridge \
--subnet=10.255.0.0/16 \
atlasNetwork
Here, we're creating a network using the bridge driver - this is the simplest type of network. More complex network types allow the network to span multiple hosts. We're also manually specifying the subnet - if we leave this out Docker will choose one at random, and it could conflict with an existing network subnet, so it's safest to choose our own. We're also specifying a /16 mask to allow us to use IP address ranges within the last two octets - this will come up later!
Storage
Persistent data such as $BITBUCKET_HOME, or your database files, need to be stored somewhere outside of the container itself. For our test environment, we can simply store these directly on the host, our local OS. This means we can edit config files using our favourite text editor, which is pretty handy!
In the examples below, we're going to store our data files in the folder ~/dockerdata. There's no need to create this folder or any subfolders, as Docker will do this automatically. If you want to use a different folder, make sure to update the examples below.
You may wonder why we're not using Docker's named volumes instead of mounting folders on the host. Named volumes are an easier to manage abstraction and are generally recommended; however for the purposes of a test environment (particularly on Docker for Mac, where you don't have direct access to the virtualised file system) there's a huge practical benefit to being able to examine each container's persistent data directly. You may want to edit a number of configuration files in Bitbucket, or Postgres, or HAProxy, and this can be difficult when using a named volume, as it requires you to open a shell into the container - and many containers don't contain basic text editor utilities (not even vi!). However, if you prefer to use volumes, you can do so simply by replacing the host folder with the named volume in all of the below examples.
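If you do go down the named-volume route, the swap is mechanical - a quick sketch (the volume name here is just an example):

docker volume create bitbucket-shared
# then use the volume name in place of the host path in the run commands below, e.g.
# -v bitbucket-shared:/var/atlassian/application-data/bitbucket/shared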
Database
The first service we need on our network is a database. Let's create a Postgres instance:
docker run -d \
--name postgres \
--restart=unless-stopped \
-e POSTGRES_PASSWORD=mysecretpassword \
-e PGDATA=/var/lib/postgresql/data/pgdata \
-v ~/dockerdata/postgres:/var/lib/postgresql/data/pgdata \
--network=atlasNetwork \
-p 5432:5432 \
postgres:latest
Let's examine what we're doing here:
-d
Run the container and detach from it (return to the prompt). Without this option, when the container starts we'll be attached directly to its stdout, and cancelling out would stop the container.
--name postgres
Set the name of the container to postgres, which also acts as its DNS record on our network.
--restart=unless-stopped
Sets the container to automatically start when Docker starts, unless you have explicitly stopped the container. This way, when you restart your computer, Postgres comes back up automatically
-e POSTGRES_PASSWORD=mysecretpassword
Sets password for the default postgres user to mysecretpassword
-e PGDATA=/var/lib/postgresql/data/pgdata
The official Postgres docker image recommends specifying this custom location when mounting the data folder to an external volume
-v ~/dockerdata/postgres:/var/lib/postgresql/data/pgdata
Mounts the folder /var/lib/postgresql/data/pgdata inside the container to an external volume, located on the host at ~/dockerdata/postgres. This folder will be created automatically
--network=atlasNetwork
Joins the container to our custom Docker network
-p 5432:5432
Publishes the Postgres port to the host machine, so we can access Postgres on localhost:5432. This isn't necessary for other containers to access the service, but it is necessary for us to get to it
postgres:latest
The latest version of the official Postgres docker image
Run the command, and hey presto, you can now access a fully functioning Postgres instance. For the sake of consistency, you may want to add your very first hosts entry here:
127.0.0.1 postgres
Now you, and any running containers, can access the instance at postgres:5432
Before you move on, you should connect to your database using your DB admin tool of choice. Connect to the hostname postgres with the username postgres, the default database postgres and the password mysecretpassword, and create a Bitbucket database ready to go:
CREATE USER bitbucket WITH PASSWORD 'bitbucket';
CREATE DATABASE bitbucket WITH OWNER bitbucket ENCODING 'UTF8';
If you don't have a DB admin tool handy, you can create a DB by using docker exec to run psql directly in the container:
# We need to run two commands because psql won't let
# you run CREATE DATABASE from a multi-command string
docker exec -it postgres psql -U postgres -c \
"CREATE USER bitbucket WITH PASSWORD 'bitbucket';"
docker exec -it postgres psql -U postgres -c \
"CREATE DATABASE bitbucket WITH OWNER bitbucket ENCODING 'UTF8';"
Elasticsearch
The next service we'll set up is Elasticsearch. We need a dedicated instance that all of our Data Center nodes can access. We have a great set of instructions on how to install a compatible version, configure it for use with Bitbucket, and install Atlassian's buckler security plugin: Install and configure a remote Elasticsearch instance
So how do we set this up in Docker? Well, it's easy:
docker pull dchevell/bitbucket-elasticsearch:latest
docker run -d \
--name elasticsearch \
-e AUTH_BASIC_USERNAME=bitbucket \
-e AUTH_BASIC_PASSWORD=mysecretpassword \
-v ~/dockerdata/elastic:/usr/share/elasticsearch/data \
--network=atlasNetwork \
-p 9200:9200 \
dchevell/bitbucket-elasticsearch:latest
Simply put, dchevell/bitbucket-elasticsearch is a pre-configured Docker image that is set up according to the instructions on Atlassian's Install and configure a remote Elasticsearch instance KB article. Atlassian's Buckler security plugin is installed for you, and you can configure the username and password with the environment variables seen above. Again, we're mounting a data volume to our host machine, joining it to our Docker network, and publishing a port so we can access it directly. This is solely for troubleshooting purposes, so if you want to poke around in your local Elasticsearch instance without going through Bitbucket, you can.
Now we're done, you can add your second hosts entry:
127.0.0.1 elasticsearch
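To sanity-check the instance, you can query it directly with the Basic Auth credentials you set when creating the container (using either the hosts entry above or localhost, since we published port 9200):

curl -u bitbucket:mysecretpassword http://elasticsearch:9200

You should get back a small JSON document describing the Elasticsearch node.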
HAProxy
Next, we'll set up HAProxy. Installing Bitbucket Data Center provides some example configuration, and again, we have a pre-configured Docker image that does all the hard work for us. But first, there are a few things we need to figure out.
HAProxy doesn't play well with a Docker network's DNS system. In the real world, if a system is down, the DNS record still exists and connections will simply time out. HAProxy handles this scenario just fine. But in a Docker network, when a container is stopped, its DNS record ceases to exist, and connections to it fail with an "Unknown host" error. HAProxy won't start when this happens, which means we can't configure it to proxy connections to our nodes by container name. Instead, we will need to give each node a static IP address, and configure HAProxy to use the IP address instead.
Even though we have yet to create our nodes, we can decide on the IP addresses for them now. Our Docker network's subnet is 10.255.0.0/16, and Docker will dynamically assign containers addresses on the last octet (e.g. 10.255.0.1, 10.255.0.2 and so on). Since we know this, we can safely assign our Bitbucket nodes static IP addresses using the second-last octet:
10.255.1.1
10.255.1.2
10.255.1.3
With that out of the way, there's one more thing. HAProxy is going to be the face of our instance, so its container name is going to represent the URL we use to access the instance. In this example, we'll call it bitbucketdc. We're also going to set the host name of the machine to be the same.
docker run -d \
--name bitbucketdc \
--hostname bitbucketdc \
-v ~/dockerdata/haproxy:/usr/local/etc/haproxy \
--network=atlasNetwork \
-e HTTP_NODES="10.255.1.1:7990,10.255.1.2:7990,10.255.1.3:7990" \
-e SSH_NODES="10.255.1.1:7999,10.255.1.2:7999,10.255.1.3:7999" \
-p 80:80 \
-p 443:443 \
-p 7999:7999 \
-p 8001:8001 \
dchevell/bitbucket-haproxy:latest
In the above example, we're specifying the HTTP endpoints of our future Bitbucket nodes, as well as the SSH endpoints, as a comma separated list. The container will turn this into valid HAProxy configuration. The proxied services will be available on port 80 and port 443, so we're publishing them both. This container is configured to automatically generate a self-signed SSL certificate based on the hostname of the machine, so we have HTTPS access available out of the box.
Since we're proxying SSH as well, we're also publishing port 7999, Bitbucket Server's default SSH port. You'll notice we're also publishing port 8001. This is to access HAProxy's Admin interface, so we can monitor which nodes are detected as up or down at any given time.
Lastly, we're mounting HAProxy's config folder to a data volume. This isn't really necessary, but it will let you directly access haproxy.cfg so you can get a feel for the configuration options there.
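For example, once the container is up you can inspect the generated configuration straight from the host:

cat ~/dockerdata/haproxy/haproxy.cfg

You can also open http://localhost:8001 in a browser to reach the admin interface we published (the exact stats path depends on how the image configures HAProxy) and see which nodes are currently up or down.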
Now it's time for our third hosts entry. This one, since it impacts things like Base URL access, is absolutely required:
127.0.0.1 bitbucketdc
Bitbucket nodes
Finally we're ready to create our Bitbucket nodes. Since these are all going to be accessed via the load balancer, we don't have to publish any ports. However, for troubleshooting and testing purposes there are times when you'll want to hit a particular node directly, so we're going to publish each node to a different local port so we can access it directly when needed.
docker run -d \
--name=bitbucket_1 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.1 \
-p 7001:7990 \
-p 7991:7999 \
atlassian/bitbucket-server:latest
docker run -d \
--name=bitbucket_2 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.2 \
-p 7002:7990 \
-p 7992:7999 \
atlassian/bitbucket-server:latest
docker run -d \
--name=bitbucket_3 \
-e ELASTICSEARCH_ENABLED=false \
-e HAZELCAST_NETWORK_MULTICAST=true \
-e HAZELCAST_GROUP_NAME=bitbucket-docker \
-e HAZELCAST_GROUP_PASSWORD=bitbucket-docker \
-e SERVER_PROXY_NAME=bitbucketdc \
-e SERVER_PROXY_PORT=443 \
-e SERVER_SCHEME=https \
-e SERVER_SECURE=true \
-v ~/dockerdata/bitbucket-shared:/var/atlassian/application-data/bitbucket/shared \
--network=atlasNetwork \
--ip=10.255.1.3 \
-p 7003:7990 \
-p 7993:7999 \
atlassian/bitbucket-server:latest
You can see that we're specifying the static IP addresses we decided on when we set up HAProxy. It's up to you whether you add hosts entries for these nodes, or simply access their ports via localhost. Since no other containers need to access our nodes via host name, it's not really necessary, and I personally haven't bothered.
The official Docker image adds the ability to set a Docker-only variable, ELASTICSEARCH_ENABLED=false, to prevent Elasticsearch from starting in the container. The remaining Hazelcast properties are natively supported in the official Docker image, because Bitbucket 5 is based on Spring Boot and can automatically translate environment variables to their equivalent dot properties for us.
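For reference, those environment variables translate (as far as I understand the relaxed binding rules) into dot properties you could equally place in the shared bitbucket.properties - roughly:

hazelcast.network.multicast=true
hazelcast.group.name=bitbucket-docker
hazelcast.group.password=bitbucket-docker
server.proxy-name=bitbucketdc
server.proxy-port=443
server.scheme=https
server.secure=true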
Turn it all on
Now we're ready to go!
Access your instance at https://bitbucketdc (or whatever name you chose). Add a Data Center evaluation license (you can generate a 30-day one at https://my.atlassian.com) and connect it to your Postgres database. Log in, then go to Server Admin and connect your Elasticsearch instance (remember, it's running on port 9200, so set the Elasticsearch URL to http://elasticsearch:9200 and use the username and password we configured when we created the Elasticsearch container).
Visit the Clustering section in Server Admin, and you should see all of the nodes there, demonstrating that Multicast is working and the nodes have found each other.
That's it! Your Data Center instance is fully operational. You can even use it as your daily instance by shutting down all but one node and running it as a single-node test instance - then, whenever you need to, turn the additional nodes back on.
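For example:

docker stop bitbucket_2 bitbucket_3
# ...and when you want the full cluster back:
docker start bitbucket_2 bitbucket_3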
See the official Docker image: https://hub.docker.com/r/atlassian/bitbucket-server/
Just run:
docker run -v /data/bitbucket:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server
You can also take a look at the official Dockerfile: https://hub.docker.com/r/atlassian/bitbucket-server/dockerfile
If you use that command to spin up the Bitbucket container, you'll get the message below:
The path /data/bitbucket is not shared from the host and is not known to Docker. You can configure shared paths from Docker -> Preferences... -> Resources -> File Sharing.
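That message comes from Docker Desktop's file sharing settings. Either add /data/bitbucket to the shared paths as the message suggests, or point the volume at a folder under your home directory instead, for example:

docker run -v ~/bitbucket:/var/atlassian/application-data/bitbucket --name="bitbucket" -d -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server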
Related
How to set up a pull-through local registry that persists images on the local machine? (for slow internet access)
How to create a local-registry container that mounts a volume from the host machine and persists locally all the images that get pulled? I don't want to download images more than once if it's not necessary, even after the registry (or the whole Docker VM) is thrown away and recreated. This is useful when you have a slow connection or no connectivity at all. It would also allow mounting a backup of pre-downloaded images as a docker volume, skipping altogether the need for an internet connection. The latter is already possible, but it would be more convenient than having to manually docker push/docker pull onto the local registry, or to docker save/docker load each image that needs to be available there. It's a rephrasing of this one, which wasn't reopened because of lack of feedback. The main purpose is to make the answer available for search, but feel free to propose better solutions.
Here are the step-by-step instructions. Hopefully this will save time and make life easier for somebody else travelling or living in disadvantaged areas of the world where internet connections can't reach the Docker world, because they are too limited or sometimes absent altogether!

Instructions are for macOS and Minikube but can be adapted for a VM running on Windows or via Docker Desktop. (Note: you will need to check whether your virtualization technology provides automount of the system user directory.)

Configuration

First define your environment variables with the desired values. See the env vars in the code below (PROXIED_REGISTRY, REGISTRY_USER, REGISTRY_PASSWORD, PATH_WHERE_TO_PERSIST_IMAGES, etc.)

On the host machine

Minikube

If using minikube, first bind to the Docker daemon on its VM:

eval $(minikube docker-env)

or run the commands directly from inside the VM, via minikube ssh.

Create the local registry (note: some envs might be unnecessary; check the Docker docs to see what you need). The -v option mounts onto the local registry the path where you want to persist the registry data (repository folders and image layers). When you use Minikube, it will automatically mount the home folder from the host (/Users/, on macOS) onto the virtual machine where Docker is run.

docker run -d -p 5000:5000 \
-e STANDALONE=false \
-e "REGISTRY_LOG_LEVEL=debug" \
-e "REGISTRY_REDIRECT_DISABLE=true" \
-e MIRROR_SOURCE="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_REMOTEURL="https://${PROXIED_REGISTRY}" \
-e REGISTRY_PROXY_USERNAME="${REGISTRY_USER}" \
-e REGISTRY_PROXY_PASSWORD="${REGISTRY_PASSWORD}" \
-v /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry:/var/lib/registry \
--restart=always \
--name local-registry \
registry:2

Log in to your local registry:

echo -n "${REGISTRY_PASSWORD}" | docker login -u "${REGISTRY_USER}" --password-stdin "localhost:5000"

(Optional) Verify that the persisted directories are present:

docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry

Try to pull one image from your private registry (to see it proxied through the repository localhost:5000):

docker pull localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}

(Optional) Verify the image data has been synced onto the local host, where desired:

docker exec local-registry ls -la /var/lib/registry/docker/registry
ls -la /Users/${MACOS_USERNAME}/${PATH_WHERE_TO_PERSIST_IMAGES}/docker/registry/docker/registry

If using Kubernetes, change the deployment spec's container image to:

localhost:5000/${REPOSITORY}/${IMAGE}:${IMAGE_TAG}

Et voila! You can now keep the images downloaded from your repository stored on your host machine! If internet is available, the local registry will make sure it has the most recent version of your pulled images by requesting it from the proxied registry (private, or the Docker Hub). And you will have a last-resort backup to run your containers when your internet connection is too slow to re-download everything you need, or is unavailable altogether! (Really useful with Minikube, when you need to destroy your Docker virtual machine.)

References:
https://docs.docker.com/registry/recipes/mirror/#run-a-registry-as-a-pull-through-cache
https://minikube.sigs.k8s.io/docs/handbook/mount/#driver-mounts
Pointing multiple domains to one virtual host with nginx and jwilder/docker-gen
I am new to using nginx and am unsure which solution I should tackle this problem with. I would like to have multiple domains pointing to one single virtual host in nginx. So, for example: the domains example.one.com and example.two.com would point to the virtual host hub.site.com.

I am running nginx and nginx-gen (jwilder/docker-gen) as Docker containers. Config settings for nginx are therefore defined based on the arguments specified in the run command of other Docker containers. For example:

docker run -d \
--restart=unless-stopped \
--name hub \
--network local \
-e VIRTUAL_HOST=hub.site.com \
-e VIRTUAL_PORT=3000 \
-e LETSENCRYPT_HOST=hub.site.com \
-e LETSENCRYPT_EMAIL=info@site.com \
-e FRONTEND='https://hub.site.com/' \
site/hub

Here, everything with the -e flag gets recognized and implemented into the nginx config. If there's a way to specify multiple domains pointing to hub.site.com from this run command, it would solve my problem whilst still using the current flow for defining settings for the nginx config. The only other solution I know of would be directly editing the nginx config and using the method from this thread, although that would go against my current flow for defining nginx config settings.
jwilder/docker-gen is the template generator behind jwilder/nginx-proxy - it generates the nginx configuration from the environment variables of your running containers. The nginx-proxy documentation says:

Multiple Hosts

If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, foo.bar.com,baz.bar.com,bar.com and each host will be set up the same.

So you can add multiple domains to a single host by separating them with commas:

... -e VIRTUAL_HOST=example.one.com,example.two.com ...
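Applied to the run command from the question, that would look roughly like this (a sketch only - I'm assuming the Let's Encrypt companion accepts the same comma-separated list for LETSENCRYPT_HOST):

docker run -d \
--restart=unless-stopped \
--name hub \
--network local \
-e VIRTUAL_HOST=example.one.com,example.two.com \
-e VIRTUAL_PORT=3000 \
-e LETSENCRYPT_HOST=example.one.com,example.two.com \
-e LETSENCRYPT_EMAIL=info@site.com \
-e FRONTEND='https://hub.site.com/' \
site/hub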
Multiple versions of neo4j server on the same machine
I downloaded 2 versions of neo4j on Ubuntu 18.04: "neo4j-community-3.5.12" and "neo4j-community-3.5.8". I run 3.5.8 with default settings and I can see it from the web at http://localhost:7474/. For 3.5.12 I changed the conf/neo4j.conf file and set some other port numbers so as not to conflict with the default ones. The 3.5.8 version runs fine on :7474. When I start 3.5.12, the logs say it is running, but when I check from the browser it is not running. I tried 2 different port settings; none worked. Below is the log file. Why is it not running?

I see that many people recommend using docker. I also tried that. I set up a docker container with the command

sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j

Here I have an existing /db1/data/databases/graph.db folder. When I go to localhost:7474 it is fine; it shows me the existing database.

I set up another docker container with the command

sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j

Here I expect to see an EMPTY database, but I see the already existing database again. When I go to the data folder inside db2, I see that it has created some files there. WHY do I see the same database? Also note that when I look at the databases, the headers of the web pages show they are using the same bolt port. Can I copy the neo4j image and use different images to generate containers? Would that help? I recognized that multiple databases are running and active, but somehow I'm not able to reach the second one through a browser.
Considering the docker commands:

cmd1: sudo docker run --name db1 -p7474:7474 -p7687:7687 -d -v /db1/data:/data -v /db1/logs:/logs -v /db1/conf:/conf --env NEO4J_AUTH=none neo4j

cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:7687 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j

The container ports are exposed as the same default host ports for the db1 instance, whereas for the db2 instance the 3xxx series has been used. To browse the UI provided by neo4j, you can use either host port 7474 or 3001, both of which are mapped to container port 7474.

The Neo4j browser uses defaults (from neo4j.conf) to connect to the neo4j server. The default setting is bolt://<machineip>:7687, and the db1 instance has already exposed that container port on host port 7687. A running instance is therefore found on port 7687, which initiates the WebSocket connection for both db1 and db2 - so both browsers end up talking to the same instance.

How to connect to the appropriate instance?

Use :server disconnect and :server connect with the appropriate bolt://<machineip>:port connection string

Map the db1 instance's bolt container port to a different host port (i.e. other than 7687) so no default is available (preferred), or set the same hostport:containerport combination, e.g.

cmd2: sudo docker run --name db2 -p3001:7474 -p3002:7473 -p3003:3003 -d -v /db2/data:/data -v /db2/logs:/logs -v /db2/conf:/conf --env NEO4J_AUTH=none neo4j

In this case, a volume has to be mapped to provide a neo4j.conf with updated values such as dbms.connector.bolt.listen_address=:3003
In case anybody still needs it: here is how to run two neo4j databases neo4j_01 and neo4j_02 in two different docker containers, saving the data in different directories and accessing them on different ports.

docker container 1: neo4j_01

docker run \
--name neo4j_01 \
-p1474:7474 -p1687:7687 \
-d \
-v $HOME/neo4j_01/neo4j/data:/data \
-v $HOME/neo4j_01/neo4j/logs:/logs \
-v $HOME/neo4j_01/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_01/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest

docker container 2: neo4j_02

docker run \
--name neo4j_02 \
-p2474:7474 -p2687:7687 \
-d \
-v $HOME/neo4j_02/neo4j/data:/data \
-v $HOME/neo4j_02/neo4j/logs:/logs \
-v $HOME/neo4j_02/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j_02/neo4j/plugins:/plugins \
--env NEO4J_AUTH=username/enterpasswordhere \
neo4j:latest

After executing the code above, e.g. neo4j_01 can be reached on port 1474 (when logging in you need to change the bolt port to 1687 in the first line, then enter username and password in the second and third lines).

You can stop a container with docker kill neo4j_01 and restart it with docker start neo4j_01. Data will still be there; it is saved in $HOME/neo4j_01/neo4j/data.

Doing it like this, I did not encounter any problems with ports / accessing the wrong database etc.
After a lot of effort, my solution is not to use docker. Download a community server from here: https://neo4j.com/download-center/#community. It will give you a compressed file. Extract it. You will have a folder named something like neo4j-community-3.5.14. Make a copy of THAT FOLDER - one copy for each server instance.

Inside the folder there is a conf folder which has a file named neo4j.conf. Open that file. By changing some settings in this file, you can run many neo4j servers. Change the settings below.

To accept non-local connections, uncomment this line:

dbms.connectors.default_listen_address=0.0.0.0

Change some port numbers so that they won't clash with the ones already in use:

dbms.connector.bolt.listen_address=:3003
dbms.connector.https.listen_address=:3002
dbms.connector.http.listen_address=:3001
How to persist configuration & analytics across container invocations in Sonarqube docker image
The official Sonarqube docker image does not persist any configuration changes such as creating users, changing the root password, or even installing new plugins. Once the container is restarted, all the configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart. How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine which is not recommended for production and doesn't persist across container restarts. We need to set up a database of our own and point Sonarqube to it when starting the container.

The Sonarqube docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile. Since we want to persist the data across invocations, we need to make sure that a production-grade database is set up and linked to Sonarqube, and that the extensions directory is created and mounted as a volume on the host machine, so that all the downloaded plugins are available across container invocations and can be used by multiple containers (if required).

Database setup:

create database sonar;
grant all on sonar.* to `sonar`@`%` identified by "SOME_PASSWORD";
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the sonarqube host IP

It is not necessary to create tables; Sonarqube creates them if it doesn't find them.

Starting up the Sonarqube container:

# create directories on the host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time

# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
Hi @VanagaS and others landing here. I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.

Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable. When using docker run, simply do:

docker run -d \
... \
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data \
...

This will make Sonarqube create the conf, data and so forth folders and store data therein, as needed.

Or with Kubernetes, in your deployment YAML file, do:

...
env:
- name: SONARQUBE_HOME
  value: /sonarqube-data
...
volumeMounts:
- name: app-volume
  mountPath: /sonarqube-data
...

The name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file. This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf and so forth folders, then save data therein.

And voila, your Sonarqube data is thereby persisted.

I hope this will help others.

N.B. Note that the YAML and docker run examples are not exhaustive; they focus on the issue of persisting Sonarqube data.
Since Sonarqube v7.9, MySQL is not supported; one needs to use PostgreSQL. Install PostgreSQL and configure it to listen on the host IP rather than localhost (a private IP is preferred). Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04

postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;

Create directories on the host:

mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time

Start the container:

docker run -d --name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=mypass \
-e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube

You may face this error when you run "docker logs container_id":

ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This is the fix; run it on your host:

sysctl -w vm.max_map_count=262144

In order to add a hostname, edit /etc/postgresql/10/main/postgresql.conf. In order to add docker as a client for postgres, edit /etc/postgresql/10/main/pg_hba.conf ("10" is the postgres version used here).
Consul not deregistering zombie services
I am deploying a simple hello world nginx container with marathon, and everything seems to work well, except that I have 6 containers that will not deregister from consul. docker ps shows none of the containers are running.

I tried using the /v1/catalog/deregister endpoint to deregister the services, but they keep coming back. I then killed the registrator container and tried deregistering again. They came back.

I am running registrator with

docker run -d --name agent-registrator -v /var/run/docker.sock:/tmp/docker.sock --net=host gliderlabs/registrator consul://127.0.0.1:8500 -deregister-on-success -cleanup

There is 1 consul agent running. Restarting the machine (this is a single-node installation on a local vm) does not make the services go away. How do I make these containers go away?
Using the HTTP API to remove services is another, much nicer solution. I had only figured out how to manually remove services before I figured out how to use the HTTP API. To delete a service with the HTTP API use the following command:

curl -v -X PUT http://<consul_ip_address>:8500/v1/agent/service/deregister/<ServiceID>

Note that your ServiceID is a combination of three things: the IP address of the host machine the container is running on, the name of the container, and the inner port of the container (i.e. 80 for apache, 3000 for node js, 8000 for django, etc.), all separated by colons.

Here's an example of what that would actually look like:

curl -v -X PUT http://1.2.3.4:8500/v1/agent/service/deregister/192.168.1.1:sharp_apple:80

If you want an easy way to get the ServiceID, just curl the service that contains a zombie:

curl -s http://<consul_ip_address>:8500/v1/catalog/service/<your_services_name>

Here's a real example for a service called someapp that will return all the services under it:

curl -s http://1.2.3.4:8500/v1/catalog/service/someapp
Don't use the catalog; use the agent instead. The reason is that the catalog is maintained by the agents, so a service will be re-synced back by its agent even if you remove it from the catalog. A shell script to remove zombie services:

leader="$(curl http://ONE-OF-YOUR-CLUSTER:8500/v1/status/leader | sed 's/:8300//' | sed 's/"//g')"

while :
do
    serviceID="$(curl http://$leader:8500/v1/health/state/critical | ./jq '.[0].ServiceID' | sed 's/"//g')"
    node="$(curl http://$leader:8500/v1/health/state/critical | ./jq '.[0].Node' | sed 's/"//g')"
    echo "serviceID=$serviceID, node=$node"

    size=${#serviceID}
    echo "size=$size"

    if [ $size -ge 7 ]; then
        curl --request PUT http://$node:8500/v1/agent/service/deregister/$serviceID
    else
        break
    fi
done

curl http://$leader:8500/v1/health/state/critical

The JSON parser jq is used for field retrieval.
Here is how you can absolutely delete all the zombie services: go into your consul server, find the location of the json files containing the zombies, and delete them.

For example, I am running consul in a container:

docker run --restart=unless-stopped -d -h consul0 --name consul0 -v /mnt:/data \
-p $(hostname -i):8300:8300 \
-p $(hostname -i):8301:8301 \
-p $(hostname -i):8301:8301/udp \
-p $(hostname -i):8302:8302 \
-p $(hostname -i):8302:8302/udp \
-p $(hostname -i):8400:8400 \
-p $(hostname -i):8500:8500 \
-p $(ifconfig docker0 | awk '/\<inet\>/ { print $2}' | cut -d: -f2):53:53/udp \
progrium/consul -server -advertise $(hostname -i) -bootstrap-expect 3

Notice the flag -v /mnt:/data: this is where all the data consul is storing is located. For me it was located in /mnt. Under this directory you will find several other directories:

config
raft
serf
services
tmp

Go into services and you will see the files that contain the json info of your services. Find any that contain the info of zombies and delete them, then restart consul. Repeat for each server in your cluster that has zombies on it.
In a Consul cluster the agents are considered authoritative. If you use the HTTP API /v1/catalog/deregister endpoint to deregister services, they will keep coming back as long as other agents have known about them - that's the way the gossip protocol works. If you want services to go away immediately, you need to deregister the host agent properly by issuing a consul leave before killing the service on the node.
This is one of the problems with Consul and registrator: if the service doesn't have a check associated with it, the service will stick around as "active" until it's de-registered. So it's good practice to have services register a health check as well. That way they will at least go critical if registrator messes up and forgets to de-register the service (which I see happen a lot). Alex's answer - erasing the files in consul's data/services directory (then consul reload) - definitely works to erase the services, but registrator will re-add them if the containers are still around and running. Apparently the newer registrator versions are better at cleanup, but I've had mixed success. Nowadays I don't use registrator at all, since it doesn't add health checks. I use nomad to run my containers (also from hashicorp); it creates the service AND the health check, and does a great job of cleaning up after itself.
Try switching to v5:

docker run -d --name agent-registrator -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:v5 -internal consul://172.16.0.4:8500