cp -f data/*.csv ${NEO4J_IMPORT}/
cd "${NEO4J_HOME}"
time bin/cypher-shell < $tmp_dir/create_graph.cypher
I am using a script to create a Neo4j database, but I am running into a problem:
cp: /person.csv: Read-only file system
Connection refused
I am on Mac and can echo the NEO4J_HOME variable, but not NEO4J_IMPORT. Should I set my own NEO4J_IMPORT environment variable when using cypher-shell to create a graph? And where should I set the NEO4J_IMPORT environment variable, if it is a must?
NEO4J_IMPORT does not need to be an environment variable; it is just a variable the script expects. Because it is unset, ${NEO4J_IMPORT}/ expands to /, which is why cp tries to copy person.csv to the root of the filesystem and fails with "Read-only file system".
The "Connection refused" suggests the Neo4j instance is down or running on a non-standard port. Try the following from the Neo4j home path, and make sure the query file has sufficient permissions:
cat query.cypher | bin/cypher-shell -u neo4j -p neo4j -a localhost:7687 --format plain
https://neo4j.com/docs/operations-manual/3.5/tools/cypher-shell/#cypher-shell-syntax
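If you want to keep the original script as-is, a minimal sketch would be to set NEO4J_IMPORT yourself as a plain shell variable before running it, assuming a tarball install where the import directory is ${NEO4J_HOME}/import (package installs typically use /var/lib/neo4j/import instead):
# NEO4J_IMPORT is only a shell variable the script reads; Neo4j itself does not require it.
# The path below assumes a tarball install; adjust it for your layout.
export NEO4J_IMPORT="${NEO4J_HOME}/import"
cp -f data/*.csv "${NEO4J_IMPORT}/"
cd "${NEO4J_HOME}"
bin/cypher-shell -u neo4j -p neo4j -a localhost:7687 < "$tmp_dir/create_graph.cypher"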
I have a MySQL database container running on a CentOS server. How do I automate backing up the database outside the container?
As mentioned in the comments, make sure you use volumes for the data folder.
For backing up:
Create a bash script on the host machine and make it executable:
#!/bin/bash
DATE=$(date '+%Y-%m-%d_%H-%M-%S')
docker exec <container name> /usr/bin/mysqldump -u <putdatabaseusername here> -p<PutyourpasswordHere> --all-databases > /<path to desired backup location>/$DATE.sql
if [[ $? == 0 ]]; then
    find /<path to desired backup location>/ -mtime +10 -exec rm {} \;
fi
Change the following:
<container name> to the actual DB container name
<putdatabaseusername here> to the DB user
<PutyourpasswordHere> to the DB password
Create a directory for backup files and replace /<path to desired backup location>/ with the actual path
Create a cronjob on the host machine that executes the script at the desired time/period (see the example entry below).
Note that this script will retain backups for 10 days; change the number to reflect your needs.
Important note: this script stores the password in the file; use a secure way to supply it in production.
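As a hedged example of the cronjob mentioned above, assuming the script was saved as /opt/scripts/mysql_backup.sh (a hypothetical path), the host crontab entry could look like:
# added via "crontab -e" on the host; runs the backup every day at 02:00
# /opt/scripts/mysql_backup.sh and the log path are hypothetical, adjust to your setup
0 2 * * * /opt/scripts/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1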
There is a template for monitoring PostgreSQL
Zabbix v3.2
The Zabbix agent is installed on another server and monitors PostgreSQL 9.6 there. The host is active in the Zabbix server web interface, and there are no errors.
However, some of the values coming back from the database into the template on the Zabbix server are zero, and there are errors like:
sh: psql: command not found
You are attempting to execute /usr/pgsql-9.6/bin/psql.
If that postgres directory is not in the $PATH env var, then trying just psql
won't work and will produce a "command not found" diagnostic.
Either tell Zabbix to execute the full pathname,
or put the postgres directory in Zabbix's PATH,
or choose a directory already in Zabbix's PATH
and add a symlink to it.
For example:
$ cd /usr/bin
$ sudo ln -s /usr/pgsql-9.6/bin/psql
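If you prefer the PATH route instead, one hedged sketch, assuming the agent runs as a systemd service named zabbix-agent (the unit name and the PATH value are assumptions; adjust them to your host):
$ sudo systemctl edit zabbix-agent
# in the editor that opens, add:
# [Service]
# Environment="PATH=/usr/pgsql-9.6/bin:/usr/local/bin:/usr/bin:/bin"
$ sudo systemctl restart zabbix-agent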
I use Neo4j on Ubuntu. I want to have two graph databases, one for regular use and one for tests.
I read an article about how to switch between two graph databases.
I followed the steps from the article:
cp /etc/neo4j/neo4j.conf /etc/neo4j/neo4j_test/neo4j.conf
# change dbms.active_database=graph.db to dbms.active_database=graph_test.db
sudo vim /etc/neo4j/neo4j_test/neo4j.conf
export NEO4J_CONF="/etc/neo4j/neo4j_test"
sudo systemctl restart neo4j
But when I check logs:
sudo journalctl -f -u neo4j
the config is still the default one and hasn't changed:
Sep 17 11:18:33 pc2 neo4j[32657]: config: /etc/neo4j
What is my mistake? And is there another way to switch between two graph databases?
I think you could not save it properly using vim. neo4j.conf is a read-only file, so you can use this command to save a read-only file: :w !sudo tee %
You can also see this question: E212: Can't open file for writing
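After saving with that command, it is worth verifying that the change actually reached the test config, for example:
grep dbms.active_database /etc/neo4j/neo4j_test/neo4j.conf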
While working behind corporate proxies:
Why can't Docker pick up the proxy-specific values from the environment variables
(http_proxy, https_proxy, ...)?
Usually you get a timeout while pulling an image, even if the proxy URL is set in an environment variable.
I have to set the value (hard-code the same value again) by creating config files in the /etc/systemd/system/docker.service.d folder.
If we change the proxy URL, we have to make the change in several places. Is there any way to reference the value from an environment variable?
I have tried docker run -e env_proxy_variable=proxy_url but got the same timeout issue.
Consider using the below instead:
export HTTP_PROXY=http://xxx:port/
export HTTPS_PROXY=http://xxx:port/
export FTP_PROXY=http://xxx:port/
You can hardcode these variables in the /etc/default/docker file so that they are exported whenever docker is started.
You can check whether the environment variable has been exported by echoing it. For example, after running
docker run --env HTTP_PROXY="123.45.21.32" -it ubuntu_latest /bin/bash
type
echo $HTTP_PROXY
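Note that image pulls are performed by the Docker daemon, not by your shell, so the daemon needs the proxy too. A minimal sketch of the systemd drop-in the question already mentions (the proxy URL is a placeholder):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf > /dev/null <<'EOF'
[Service]
Environment="HTTP_PROXY=http://xxx:port/"
Environment="HTTPS_PROXY=http://xxx:port/"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker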
It is likely that your DNS server isn't configured. Try
cat /etc/resolv.conf
if you see something like:
nameserver 8.8.8.8
then it's likely that the DNS server is inaccessible behind firewalls. You can pass dns server address along with docker run command like so:
docker run --env HTTP_PROXY="123.45.21.32" --dns=172.10.18.0 -it ubuntu_latest /bin/bash
The official Sonarqube Docker image does not persist any configuration changes, such as creating users, changing the root password, or installing new plugins.
Once the container is restarted, all configuration changes disappear and the installed plugins are lost. Even the projects' keys and their previous QA analytics data are unavailable after a restart.
How can we persist the data when using Sonarqube's official docker image?
The Sonarqube image comes with an embedded H2 database engine, which is not recommended for production and does not persist across container restarts.
We need to set up a database of our own and point Sonarqube to it when starting the container.
The Sonarqube Docker image exposes two volumes, "$SONARQUBE_HOME/data" and "$SONARQUBE_HOME/extensions", as seen in the Sonarqube Dockerfile.
Since we want to persist data across invocations, we need to make sure a production-grade database is set up and linked to Sonarqube. The extensions directory should also be created and mounted as a volume on the host machine, so that all downloaded plugins are available across container invocations and can be used by multiple containers (if required).
Database Setup:
create database sonar;
grant all on sonar.* to 'sonar'@'%' identified by "SOME_PASSWORD";
flush privileges;
# since we do not know the container's IP beforehand, we use '%' for the sonarqube host IP.
It is not necessary to create tables; Sonarqube creates them if it doesn't find them.
Starting up Sonarqube container:
# create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
# Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=SOME_PASSWORD \
-e SONARQUBE_JDBC_URL="jdbc:mysql://HOST_IP_OF_DB_SERVER:PORT/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance" \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
Hi @VanagaS and others landing here.
I just wanted to provide an alternative to the above. Maybe some would even consider it an easier one.
Notice this line SONARQUBE_HOME in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
...
...
-e SONARQUBE_HOME=/sonarqube-data \
-v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and other folders under that path and store its data there, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
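For completeness, a hedged sketch of such a volumes section (the persistentVolumeClaim name is an assumption):
volumes:
  - name: app-volume
    persistentVolumeClaim:
      claimName: sonarqube-data   # hypothetical PVC created beforehand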
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf, and other folders, and save its data there.
And voilà, your Sonarqube data is persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Since SonarQube v7.9, MySQL is not supported; one needs to use PostgreSQL. Install PostgreSQL and configure it to listen on the host IP rather than localhost; a private IP is preferred.
Reference: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04
postgres=# create database sonar;
postgres=# create user sonar with encrypted password 'mypass';
postgres=# grant all privileges on database sonar to sonar;
create a directory on host
mkdir /server_data/sonarqube/extensions
mkdir /server_data/sonarqube/data # this will be useful in saving startup time
Start the container
docker run -d \
--name sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME=sonar \
-e SONARQUBE_JDBC_PASSWORD=mypass \
-e SONARQUBE_JDBC_URL=jdbc:postgresql://{host/private ip only}:5432/sonar \
-v /server_data/sonarqube/data:/opt/sonarqube/data \
-v /server_data/sonarqube/extensions:/opt/sonarqube/extensions \
sonarqube
You may face this error when you do "docker logs container_id"
ERROR: [1] bootstrap checks failed [1]: max virtual memory areas
vm.max_map_count [65530] is too low, increase to at least [262144]
This is the fix; run it on your host:
sysctl -w vm.max_map_count=262144
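That setting does not survive a reboot; to persist it you can, for example, drop it into the sysctl configuration (the file name below is an assumption):
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-sonarqube.conf
sudo sysctl --system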
In order to make PostgreSQL listen on the host IP, edit /etc/postgresql/10/main/postgresql.conf.
In order to allow the Docker container as a client for PostgreSQL, edit /etc/postgresql/10/main/pg_hba.conf.
(10 is the PostgreSQL version used.)
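As a hedged example of those two edits (PRIVATE_IP is a placeholder for your host's private IP, and 172.17.0.0/16 is only the typical default Docker bridge subnet; adjust both to your setup):
# /etc/postgresql/10/main/postgresql.conf -- listen on the private IP as well as localhost
listen_addresses = 'localhost,PRIVATE_IP'

# /etc/postgresql/10/main/pg_hba.conf -- allow the sonar user to connect from the Docker bridge network
host    sonar    sonar    172.17.0.0/16    md5

# then restart PostgreSQL
sudo systemctl restart postgresql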