Changing neo4j conf environment variable has no effect - neo4j

I use neo4j on Ubuntu. I want to have two graph databases: one for regular use and one for tests.
I read an article about how to switch between two graph databases.
I followed the steps from the article:
cp /etc/neo4j/neo4j.conf /etc/neo4j/neo4j_test/neo4j.conf
# change dbms.active_database=graph.db to dbms.active_database=graph_test.db
sudo vim /etc/neo4j/neo4j_test/neo4j.conf
export NEO4J_CONF="/etc/neo4j/neo4j_test"
sudo systemctl restart neo4j
But when I check logs:
sudo journalctl -f -u neo4j
The config is still the default one and hasn't changed:
Sep 17 11:18:33 pc2 neo4j[32657]: config: /etc/neo4j
What am I doing wrong? And is there another way to switch between two graph databases?

I think you could not save it properly using vim. neo4j.conf is a read-only file, so you can use this command to save a read-only file: :w !sudo tee %
You can also see this question: E212: Can't open file for writing
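As a quick sanity check after saving, you can verify that the test config actually contains the new value and that the variable is visible in your shell (paths follow the question; this is only a verification sketch):
grep dbms.active_database /etc/neo4j/neo4j_test/neo4j.conf
echo "$NEO4J_CONF"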

Related

Do I need to set up the NEO4J_IMPORT environment variable?

cp -f data/*.csv ${NEO4J_IMPORT}/
cd "${NEO4J_HOME}"
time bin/cypher-shell < $tmp_dir/create_graph.cypher
I am following a script to create a neo4j database, but I am running into a problem:
cp: /person.csv: Read-only file system
Connection refused
I am on a Mac and can echo the NEO4J_HOME variable, but not NEO4J_IMPORT. Should I set my own NEO4J_IMPORT environment variable when using cypher-shell to create a graph? Where should I set the NEO4J_IMPORT environment variable, if it is a must?
NEO4J_IMPORT does not need to be set in the environment.
Probably the neo4j instance is down or running on a non-standard port. Try this:
Run it from the neo4j home path and make sure the file has sufficient permissions.
cat query.cypher | bin/cypher-shell -u neo4j -p neo4j -a localhost:7687 --format plain
https://neo4j.com/docs/operations-manual/3.5/tools/cypher-shell/#cypher-shell-syntax
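Regarding the cp error in the question: if NEO4J_IMPORT is unset, ${NEO4J_IMPORT}/ expands to just /, which is why cp complains about a read-only file system. A hedged sketch, assuming the default import directory under NEO4J_HOME (adjust the path to your installation):
export NEO4J_IMPORT="${NEO4J_HOME}/import"
cp -f data/*.csv "${NEO4J_IMPORT}/"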

sh: psql: command not found in Zabbix template

There is a template for monitoring PostgreSQL
Zabbix v3.2
Zabbix_agent is installed on another server and monitors PostgreSQL 9.6 there. The host is active in the Zabbix server's web interface, and there are no errors there.
However, some of the values from the database come through to the template on the Zabbix server as zeros, and there are errors:
sh: psql: command not found
You are attempting to execute /usr/pgsql-9.6/bin/psql.
If that postgres directory is not in the $PATH env var, then trying just psql won't work and will produce a "command not found" diagnostic.
Either tell Zabbix to execute the full pathname, or put the postgres directory in Zabbix's PATH, or choose a directory already in Zabbix's PATH and add a symlink to it.
For example:
$ cd /usr/bin
$ sudo ln -s /usr/pgsql-9.6/bin/psql
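After adding the symlink, a quick check that psql now resolves from a directory on the PATH (just a verification sketch):
command -v psql
psql --version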

Clickhouse in Docker option experimental_allow_extended_storage_definition_syntax

I'm trying to set the following option flag to 1: experimental_allow_extended_storage_definition_syntax, to be able to test the new partition features.
But I can't find where this should be set. Is it in a config file, or when opening a session?
I'm using:
Clickhouse in Docker under Ubuntu 16.04 LTS
Tabix in docker
If you have the exact command line to pass to set that up with Docker, that would be great.
It is a user setting, which can be set for a particular session or globally using users.xml.
Let's set it for the default user (all users inherit their settings from the default user's settings).
We will not modify /etc/clickhouse-server/users.xml directly; instead, we add a special file experimental_allow_extended_storage_definition_syntax.xml in the users.d subdirectory. It will be merged into the main users config file.
So, the Dockerfile commands:
RUN mkdir -p /etc/clickhouse-server/users.d/
RUN chown -R clickhouse /etc/clickhouse-server/users.d/
RUN echo '<yandex><profiles><default><experimental_allow_extended_storage_definition_syntax>1</experimental_allow_extended_storage_definition_syntax></default></profiles></yandex>' > /etc/clickhouse-server/users.d/experimental_allow_extended_storage_definition_syntax.xml
You can see an example of such a Dockerfile here
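For a quick one-off test without rebuilding the image, clickhouse-client generally accepts settings as command-line flags, so a hedged sketch (assuming your server container is named clickhouse-server) would be:
docker exec -it clickhouse-server clickhouse-client --experimental_allow_extended_storage_definition_syntax=1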

How to mount local volumes in docker machine

I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:
web:
build: .
command: ./run_web.sh
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
When running docker-compose up -d all goes well until trying to execute the command and an error is produced:
Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./run_web.sh": stat ./run_web.sh: no such file or directory
Local volumes are not mounted to the remote machine. What's the recommended strategy for mounting the local volumes with the web app's code?
Docker-machine automounts the user's directory... but sometimes that just isn't enough.
I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine.
Add Virtual Machine Mount Point (part 1)
CLI: (Only works when machine is stopped)
VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount
So an example in Windows would be:
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount
GUI: (does NOT require the machine be stopped)
Start "Oracle VM VirtualBox Manager"
Right-Click <machine name> (default)
Settings...
Shared Folders
The Folder+ Icon on the Right (Add Share)
Folder Path: <host dir> (e:)
Folder Name: <mount name> (e)
Check on "Auto-mount" and "Make Permanent" (Read only if you want...) (The auto-mount is sort of pointless currently...)
Mounting in boot2docker (part 2)
Manually mount in boot2docker:
There are various ways to log in, use "Show" in "Oracle VM VirtualBox Manager", or ssh/putty into docker by IP address docker-machine ip default, etc...
sudo mkdir -p <local_dir>
sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
But this is only good until you restart the machine, and then the mount is lost...
Adding an automount to boot2docker:
While logged into the machine
Edit/create (as root) /mnt/sda1/var/lib/boot2docker/bootlocal.sh, sda1 may be different for you...
Add
mkdir -p <local_dir>
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.
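For reference, a hedged sketch of what the resulting bootlocal.sh might look like, reusing the example mount name e from part 1 (adjust the names to your share):
#!/bin/sh
# /mnt/sda1/var/lib/boot2docker/bootlocal.sh - runs when boot2docker starts
mkdir -p /e
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` e /e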
Old method: Less recommended, but left as an alternative
Edit (as root) /mnt/sda1/var/lib/boot2docker/profile, sda1 may be different for you...
Add
add_mount() {
if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
fi
}
add_mount <local dir> <mount name>
As a last resort, you can take the slightly more tedious alternative and just modify the boot image.
git -c core.autocrlf=false clone https://github.com/boot2docker/boot2docker.git
cd boot2docker
git -c core.autocrlf=false checkout v1.8.1 #or your appropriate version
Edit rootfs/etc/rc.d/automount-shares
Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example:
try_mount_share /e e
Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc.
docker build -t boot2docker . #This will take about an hour the first time :(
docker run --rm boot2docker > boot2docker.iso
Backup the old boot2docker.iso and copy your new one in its place, in ~/.docker/machine/machines/
This does work; it's just long and complicated.
docker version 1.8.1, docker-machine version 0.4.0
I also ran into this issue and it looks like local volumes are not mounted when using docker-machine. A hacky solution is to:
get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd
use a command line tool like rsync to copy the folder to the remote system:
rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>
The default pwd is /root, so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root
NB: you will need to supply the password for the remote system. You can quickly create one by ssh-ing into the remote system and creating a password.
change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app
run docker-compose up -d
NB: when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
It's not perfect, but it works. There is an ongoing issue: https://github.com/docker/machine/issues/179
Other projects that attempt to solve this include docker-rsync.
At the moment I can't really see any way to mount volumes on machines, so the approach for now would be to somehow copy or sync the files you need into the machine.
There are conversations about how to solve this issue on docker-machine's GitHub repo. Someone made a pull request implementing scp in docker-machine and it's already merged into master, so it's very likely that the next release will include it.
Since it's not yet released, for now I would recommend that, if you have your code hosted on GitHub, you just clone your repo before you run the app:
web:
build: .
command: git clone https://github.com/my/repo.git; ./repo/run_web.sh
volumes:
- .:/app
ports:
- "8000:8000"
links:
- db:db
- rabbitmq:rabbit
- redis:redis
Update: Looking further, I found that the feature is already available in the latest binaries; once you get them, you'll be able to copy your local project by running a command like this:
docker-machine scp -r . dev:/home/docker/project
This being the general form:
docker-machine scp [machine:][path] [machine:][path]
So you can copy files from, to and between machines.
Cheers!
Since October 2017 there is a new command for docker-machine that does the trick, but make sure there is nothing in the directory before executing it, otherwise its contents might get lost:
docker-machine mount <machine-name>:<guest-path> <host-path>
Check the docs for more information: https://docs.docker.com/machine/reference/mount/
PR with the change: https://github.com/docker/machine/pull/4018
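A hedged usage sketch (machine name and paths are illustrative; -u unmounts again when you are done):
mkdir -p ./remote-project
docker-machine mount dev:/home/docker/project ./remote-project
# edit files in ./remote-project, the changes land on the machine
docker-machine mount -u dev:/home/docker/project ./remote-project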
If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename> command like this:
rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_to> :<host_directory_to_sync_to>
It uses this command format of rsync, leaving HOST blank:
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
(http://linuxcommand.org/man_pages/rsync1.html)
I finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working, by adding a shared folder in Oracle VM VirtualBox Manager and disabling path conversion. If you have Windows 10+, you're best off using the newer Docker for Windows.
First, the upgrade pain:
Uninstall VirtualBox first.
Yep, that may break stuff in other tools like Android Studio. Thanks Docker :(
Install the new version of Docker Toolbox.
Redis Database Example:
redis:
image: redis:alpine
container_name: redis
ports:
- "6379"
volumes:
- "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
run docker-machine stop default - ensure the VM is halted
In Oracle VM VirtualBox Manager ...
Add a shared folder in the default VM via the GUI or the command line (a hedged CLI sketch follows these steps):
D:\Projects\MyProject\db => /var/db
In docker-compose.yml...
Mapped redis volume as: "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
run docker-machine start default to restart the VM.
cd D:\Projects\MyProject\
docker-compose up should work now.
Redis now creates its database in D:\Projects\MyProject\db\redis\dump.rdb
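For the shared-folder step above, a hedged CLI equivalent using the VBoxManage syntax shown earlier in this thread (run while the VM is stopped; whether the share auto-mounts at /var/db depends on your boot2docker setup, so verify inside the VM):
docker-machine stop default
"/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name var/db --hostpath 'D:\Projects\MyProject\db' --automount
docker-machine start default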
Why avoid relative host paths?
I avoided relative host paths for Windows Toolbox as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml but at least my fellow developers can easily do it even if their project folder is elsewhere without having to hack the docker-compose.yml file (bad for SCM).
Original Issue
FYI ... Here is the original error I got when I used nice clean relative paths that used to work just fine for older versions. My volume mapping used to be just "./db/redis:/data:rw"
ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data
This breaks for two reasons:
It can't access the D: drive
Volume paths can't include \ characters
docker-compose adds them and then blames you for it!
Use COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.
I recommend documenting your additional VM shared folder mapping in your docker-compose.yml file, as you may need to uninstall VirtualBox again and reset the shared folder, and in any case your fellow devs will love you for it.
All the other answers were good for their time, but now (Docker Toolbox v18.09.3) everything works out of the box. You just need to add a shared folder to the VirtualBox VM.
Docker Toolbox automatically adds C:\Users as the shared folder /c/Users in the virtual Linux machine (using the VirtualBox shared folders feature), so if your docker-compose.yml file is located somewhere under this path and you mount host machine directories only under this path, everything should work out of the box.
For example:
C:\Users\username\my-project\docker-compose.yml:
...
volumes:
- .:/app
...
The . path will be automatically converted to the absolute path C:\Users\username\my-project and then to /c/Users/username/my-project. And this is exactly how this path is seen from the point of view of the Linux virtual machine (you can check it: docker-machine ssh and then ls /c/Users/username/my-project). So the final mount will be /c/Users/username/my-project:/app.
All works transparently for you.
But this doesn't work if your host mount path is not under C:\Users path. For example, if you put the same docker-compose.yml under D:\dev\my-project.
This can be fixed easily though.
Stop the virtual machine (docker-machine stop).
Open Virtual Box GUI, open Settings of Virtual Machine named default, open Shared Folders section and add the new shared folder:
Folder Path: D:\dev
Folder Name: d/dev
Press OK twice and close Virtual Box GUI.
Start the virtual machine (docker-machine start).
That's all. All paths of host machine under D:\dev should work now in docker-compose.yml mounts.
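To verify the new share from inside the VM before relying on it in docker-compose.yml (names follow the example above; this is only a sanity check):
docker-machine ssh default ls /d/dev
docker-machine ssh default mount | grep vboxsf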
It can be done with a combination of three tools:
docker-machine mount, rsync, inotifywait
TL;DR
A script based on everything below is here.
Let's say you have your docker-compose.yml and run_web.sh in /home/jdcaballerov/web
Mount the directory from the docker machine (which has the same path there as on your host) to a temporary directory on your host: docker-machine mount machine:/home/jdcaballerov/web /tmp/some_random_dir
Synchronize the mounted directory with the directory on your host: rsync -r /home/jdcaballerov/web /tmp/some_random_dir
Synchronize on every change of files in your directory:
inotifywait -r -m -e close_write --format '%w%f' /home/jdcaballerov/web | while read CHANGED_FILE
do
rsync -r /home/jdcaballerov/web /tmp/some_random_dir
done
BE AWARE - there are two directories with the same path: one is on your local (host) machine, the second is on the docker machine.
I assume the run_web.sh file is in the same directory as your docker-compose.yml file. Then the command should be command: /app/run_web.sh.
Unless the Dockerfile (that you are not disclosing) takes care of putting the run_web.sh file into the Docker image.
To summarize the posts here, attached is an updated script to create an additional host mount point and automount it when VirtualBox restarts. The working environment, in brief:
- Windows 7
- docker-machine.exe version 0.7.0
- VirtualBox 5.0.22
#!/usr/bin/env bash
: ${NAME:=default}
: ${SHARE:=c/Proj}
: ${MOUNT:=/c/Proj}
: ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
SCRIPT=/mnt/sda1/var/lib/boot2docker/bootlocal.sh
## set -x
docker-machine stop $NAME
"$VBOXMGR" sharedfolder add $NAME --name c/Proj --hostpath 'c:\' --automount 2>/dev/null || :
docker-machine start $NAME
docker-machine env $NAME
docker-machine ssh $NAME 'echo "mkdir -p $MOUNT" | sudo tee $SCRIPT'
docker-machine ssh $NAME 'echo "sudo mount -t vboxsf -o rw,user $SHARE $MOUNT" | sudo tee -a $SCRIPT'
docker-machine ssh $NAME 'sudo chmod +x /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
docker-machine ssh $NAME 'sudo /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
#docker-machine ssh $NAME 'ls $MOUNT'
I am using docker-machine 0.12.2 with the virtualbox driver on my local machine. I found that there is a directory /hosthome/$(user name) from which you have access to local files.
Just thought I'd mention I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, once you change your password Docker will start mounting the volumes within containers as blank.
It gives no indication that what is actually happening is that it is now failing to access the share with the old cached credentials.
The solution in this scenario is to reset the credentials either through the UI (Settings -> Shared drives) or to disable and then re-enable drive sharing and enter the new password.
It would be useful if docker-compose gave an error in these situations.

How to change the docker image installation directory?

From what I can tell, docker images are installed to /var/lib/docker as they are pulled. Is there a way to change this location, such as to a mounted volume like /mnt?
With recent versions of Docker, you would set the value of the data-root parameter to your custom path, in /etc/docker/daemon.json
(according to https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file).
With older versions, you can change Docker's storage base directory (where containers and images go) using the -g option when starting the Docker daemon (check docker --help).
You can have this setting applied automatically when Docker starts by adding it to /etc/default/docker.
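For the recent-versions route mentioned above (data-root in daemon.json), a minimal sketch writing the file and restarting (the target path /mnt/docker-data is just an example):
sudo mkdir -p /mnt/docker-data
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/mnt/docker-data"
}
EOF
sudo systemctl restart docker
docker info | grep "Docker Root Dir"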
Following advice from the comments, I used the Docker systemd documentation to improve this answer.
The procedure below doesn't require a reboot and is much cleaner.
First create directory and file for custom configuration:
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo $EDITOR /etc/systemd/system/docker.service.d/docker-storage.conf
For docker version before 17.06-ce paste:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --graph="/mnt"
For docker after 17.06-ce paste:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --data-root="/mnt"
Alternative method through daemon.json
I recently tried the above procedure with 17.09-ce on Fedora 25 and it seemed not to work. Instead, a simple modification in /etc/docker/daemon.json does the trick:
{
"graph": "/mnt",
"storage-driver": "overlay"
}
Whichever method you use, you have to reload the configuration and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
To confirm that Docker was reconfigured:
docker info|grep "loop file"
In recent versions (17.03) a different command is required:
docker info|grep "Docker Root Dir"
Output should look like this:
Data loop file: /mnt/devicemapper/devicemapper/data
Metadata loop file: /mnt/devicemapper/devicemapper/metadata
Or:
Docker Root Dir: /mnt
Then you can safely remove old Docker storage:
rm -rf /var/lib/docker
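If you want to keep existing images and containers, a hedged sketch: stop Docker, copy the old data into the new root, and start it again before deleting anything (the rsync flags mirror a later answer in this thread):
sudo systemctl stop docker
sudo rsync -axPS /var/lib/docker/ /mnt/
sudo systemctl start docker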
For new docker versions we need to use data-root, as graph is deprecated in v17.05.0: official deprecation docs
Edit /etc/docker/daemon.json (if it doesn’t exist, create it) and include:
{
"data-root": "/new/path/to/docker-data"
}
Then restart Docker with:
sudo systemctl daemon-reload
sudo systemctl restart docker
A more detailed step-by-step explanation (including moving data) using Docker storage with data-root can be found in: Blog post
In the case of Windows, there is a similar post: Windows specific
A much easier way to do so:
Stop docker service
sudo systemctl stop docker
Move existing docker directory to new location
sudo mv /var/lib/docker/ /path/to/new/docker/
Create symbolic link
sudo ln -s /path/to/new/docker/ /var/lib/docker
Start docker service
sudo systemctl start docker
Since I haven't found the correct instructions for doing this on Fedora (EDIT: people pointed out in the comments that this should also work on CentOS and Suse) (/etc/default/docker isn't used there), I'm adding my answer here:
You have to edit /etc/sysconfig/docker, and add the -g option in the OPTIONS variable. If there's more than one option, make sure you enclose them in "". In my case, that file contained:
OPTIONS=--selinux-enabled
so it would become
OPTIONS="--selinux-enabled -g /mnt"
After a restart (systemctl restart docker), Docker should use the new directory.
Don't use a symbolic link to move the docker folder to /mnt (for example).
This may cause trouble with the docker rm command.
Better to use the -g option for docker.
On Ubuntu you can set it permanently in /etc/default/docker.io. Enhance or replace the DOCKER_OPTS line.
Here is an example:
DOCKER_OPTS="-g /mnt/somewhere/else/docker/"
This solution works on Red Hat 7.2 & Docker 1.12.0
Edit the file /lib/systemd/system/docker.service in your text editor.
Add -g /path/to/docker/ at the end of the ExecStart directive. The complete line should look like this:
ExecStart=/usr/bin/dockerd -g /path/to/docker/
Execute the commands below:
systemctl daemon-reload
systemctl restart docker
Execute this command to check the docker directory:
docker info | grep "loop file\|Dir"
If you have the /etc/sysconfig/docker file on Red Hat or docker 1.7.1, check this answer.
In CentOS 6.5
service docker stop
mkdir /data/docker (new directory)
vi /etc/sysconfig/docker
add the following line:
other_args=" -g /data/docker -p /var/run/docker.pid"
then save the file and start docker again:
service docker start
and it will create the repository files in /data/docker.
Copy-and-paste version of the winning answer :)
Create this file with only this content:
$ sudo vi /etc/docker/daemon.json
{
"graph": "/my-docker-images"
}
Tested on Ubuntu 16.04.2 LTS in docker 1.12.6
The official way of doing this, based on the Post-installation steps for Linux guide and what I found while web-crawling, is as follows:
Override the docker service conf:
sudo systemctl edit docker.service
Add or modify the following lines, substituting your own values.
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --graph="/mnt/docker"
Save the file. (It creates: /etc/systemd/system/docker.service.d/override.conf)
Reload the systemctl configuration.
sudo systemctl daemon-reload
Restart Docker.
sudo systemctl restart docker.service
After this, you can nuke the /var/lib/docker folder if you do not have any images there that you care to back up.
For Debian/Ubuntu or Fedora, you can probably use the other answers. But if you don't have files under /etc/default/docker or /etc/sysconfig/docker, and your system is running systemd, you may want to follow this answer by h3nrik. I am using Arch, and this works for me.
Basically, you need to configure systemd to read the new docker image location as an environment variable, and pass that environment variable into the Docker daemon execution script.
For completeness, here is h3nrick's answer:
Do you have a /lib/systemd/system/docker.service file?
If so, edit it so that the Docker service uses the usual /etc/default/docker as an environment file: EnvironmentFile=-/etc/default/docker.
In the /etc/default/docker file then add DOCKER_OPTS="-g /home/rseixas/Programs/Docker/images".
At the end just do a systemctl daemon-reload && systemctl restart docker.
For further information please also have a look at the documentation.
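Compressed into commands, a hedged sketch of those steps (the image directory is the answer's example; this assumes the unit's ExecStart line actually references $DOCKER_OPTS, otherwise the variable has no effect):
# 1. in /lib/systemd/system/docker.service, under [Service]:
#      EnvironmentFile=-/etc/default/docker
# 2. in /etc/default/docker:
#      DOCKER_OPTS="-g /home/rseixas/Programs/Docker/images"
# 3. reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker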
As recommended by @mbarthelemy, this can be done via the -g option when starting the docker daemon directly.
However, if docker is being started as a system service, it is not recommended to modify the /etc/default/docker file. There is a guideline about this located here.
The correct approach is to create an /etc/docker/daemon.json file on Linux (or Mac) systems or %programdata%\docker\config\daemon.json on Windows. If this file is not being used for anything else, the following fields should suffice:
{
"graph": "/docker/daemon_files"
}
This assumes that the new location where you want docker to persist its data is /docker/daemon_files.
A much simpler solution is to create a soft link pointing to wherever you want, for example:
ln -s /mnt/whatever /var/lib/docker
It works for me on my CentOS 6.5 server.
I was on docker version 19.03.14. The link below helped me.
Check this link.
In the /etc/docker/daemon.json file I added the section below:
{
"data-root": "/hdd2/docker",
"storage-driver": "overlay2"
}
On openSUSE Leap 42.1
$ cat /etc/sysconfig/docker
## Path : System/Management
## Description : Extra cli switches for docker daemon
## Type : string
## Default : ""
## ServiceRestart : docker
#
DOCKER_OPTS="-g /media/data/installed/docker"
Note that DOCKER_OPTS was initially empty and all I did was add the argument to make docker use my new directory.
On Fedora 26 and probably many other versions, you may encounter an error after moving your base folder location as described above. This is particularly true if you are moving it to somewhere under /home. This is because SELinux kicks in and prevents the docker container from running many of its programs from under this location.
The short solution is to remove the --enable-selinux option when you add the -g parameter.
On an AWS Ubuntu 16.04 server I put the Docker images on a separate EBS volume, mounted on /home/ubuntu/kaggle/, under the docker dir.
This snippet of my initialization script worked correctly:
# where are the images initially stored?
sudo docker info | grep "Root Dir"
# ... not where I want them
# modify the configuration files to change to image location
# NOTE this generates an error
# WARNING: Usage of loopback devices is strongly discouraged for production use.
# Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
# see https://stackoverflow.com/questions/31620825/
# warning-of-usage-of-loopback-devices-is-strongly-discouraged-for-production-use
sudo sed -i ' s##DOCKER_OPTS=.*#DOCKER_OPTS="-g /home/ubuntu/kaggle/docker"# ' /etc/default/docker
sudo chmod -R ugo+rw /lib/systemd/system/docker.service
sudo cp /lib/systemd/system/docker.service /etc/systemd/system/
sudo chmod -R ugo+rw /etc/systemd/system/
sudo sed -i ' s#ExecStart.*#ExecStart=/usr/bin/dockerd $DOCKER_OPTS -H fd://# ' /etc/systemd/system/docker.service
sudo sed -i '/ExecStart/a EnvironmentFile=-/etc/default/docker' /etc/systemd/system/docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep "Root Dir"
# now they're where I want them
For Mac users on version 17.06.0-ce-mac19, you can simply move the disk image location from the user interface in the Preferences option. Just change the location of the disk image (by clicking Move Disk Image), restart Docker, and it will work. Using this approach I was able to use my external hard disk for storing docker images.
For those looking in 2020, the following is for a Windows 10 machine:
In the global Actions pane of Hyper-V Manager, click Hyper-V Settings…
Under Virtual Hard Disks, change the location from the default to your desired location.
Under Virtual Machines, change the location from the default to your desired location, and click Apply.
Click OK to close the Hyper-V Settings page.
This blog post helped me.
Here are the steps to change the directory even after you've created Docker containers, etc.
Note: you don't need to edit docker.service or init.d files, as the daemon will read the change from the .json file mentioned below.
Edit /etc/docker/daemon.json (if it doesn't exist, create it)
Add the following
{
"data-root": "/new/path/to/docker-data"
}
Stop docker
sudo systemctl stop docker
Check docker has been stopped
ps aux | grep -i docker | grep -v grep
Copy the files to the new location
sudo rsync -axPS /var/lib/docker/ /new/path/to/docker-data
Start Docker back up
sudo systemctl start docker
Check Docker has started up using the new location
docker info | grep 'Docker Root Dir'
Check everything has started up that should be running
docker ps
Leave both copies on the server for a few days to make sure no issues arise, then feel free to delete the old one:
sudo rm -r /var/lib/docker
