I'm currently installing Nominatim using the Docker image that can be found at https://github.com/bringnow/docker-nominatim . However, when I send a query I get the following error:
Bad Request
Nominatim has encountered an error with your request.
Details: Illegal query string (not an UTF-8 string): paderborn
When I have a look at the console, I get the following error:
ERROR: relation "query_log" does not exist at character 13
STATEMENT: insert into query_log values ('2018-05-23 15:25:03.9961','paderborn','172.18.0.1')
ERROR: relation "new_query_log" does not exist at character 13
STATEMENT: insert into new_query_log (type,starttime,query,ipaddress,useragent,language,format) values ('search','2018-05-23 15:25:03.9961','q=paderborn&polygon=1&viewbox=','172.18.0.1','Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0','short_name:de,short_name:en-US,short_name:en,name:de,name:en-US,name:en,place_name:de,place_name:en-US,place_name:en,official_name:de,official_name:en-US,official_name:en,short_name,name,place_name,official_name,ref,type','')
ERROR: function make_standard_name(unknown) does not exist at character 8
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
STATEMENT: select make_standard_name('paderborn') as string
I already found an answer that proposes the following solution:
./utils/setup.php --create-functions --enable-diff-updates
However, this results in an error:
Functions
CREATE FUNCTION
ERROR: could not access file "/app/module/nominatim.so": No such file or directory
When I look at the file system, the file nominatim.so exists, so this error is confusing.
Does anyone know a solution for that?
I found the mistake: we have two Docker images, one for nominatim and one for postgis. The file /app/module/nominatim.so is created inside the nominatim image but is also needed within the postgis image. The solution is to create a volume that shares the file between the two containers.
Within docker-compose.yaml add the following lines to the nominatim service:
volumes:
  - ./volumes/module:/mnt/module
and the following lines within postgis service:
volumes:
  - ./volumes/module:/app/module
Further extend the entrypoint.sh:
log_info "==> Copy nominatim.so"
cp /app/module/nominatim.so /mnt/module/nominatim.so
Note that you have to rebuild the nominatim Docker image.
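Putting it together, the relevant part of docker-compose.yaml might look like the sketch below (the service names nominatim and postgis are assumptions; adjust them to your compose file):
services:
  nominatim:
    volumes:
      # entrypoint.sh copies the built nominatim.so into this shared volume
      - ./volumes/module:/mnt/module
  postgis:
    volumes:
      # PostgreSQL loads the module from /app/module, which is now the shared volume
      - ./volumes/module:/app/module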
I have a docker-compose file that uses variable substitution for some secrets, and I want to get an error if they are not supplied or are empty. For this purpose I have tried this:
environment:
  - >-
    JAVA_OPTS=
    -DMYSQL_USER=${MYSQL_USER:?MYSQL_USER_NOT_SET}
    -DMYSQL_PASSWORD=${MYSQL_PASSWORD:?MYSQL_PASSWORD_NOT_SET}
    -DMYSQL_URL=db:3306/${MYSQL_DATABASE:?MYSQL_DATABASE_NOT_SET}
However, it gives me the error:
ERROR: Invalid interpolation format for "environment" option in service "myservice": "JAVA_OPTS= -DMYSQL_USER=${MYSQL_USER:?MYSQL_USER_NOT_SET}...
According to https://docs.docker.com/compose/compose-file/#variable-substitution this should work, since the documentation contains this snippet:
Similarly, the following syntax allows you to specify mandatory variables:
${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
I also have version: "3.4" in my docker-compose so that shouldn't be the issue.
I already tried it with just ${MY_VAR?MY_ERROR}, but it didn't work either.
I have even gone as far as to look at the source code but found nothing helpful.
EDIT:
I tried to make a minimal reproduction:
docker-compose.yml
version: "3.4"
services:
  hello:
    image: hello-world
    environment:
      - TEST=${TEST?err}
docker-compose up
ERROR: Invalid interpolation format for "environment" option in service "hello": "TEST=${TEST?err}
This depends on your docker-compose version.
With docker-compose 1.17.1 you will get
ERROR: Invalid interpolation format for "environment" option in service "my-service": ...
if you use ${TEST?"My error message"}, but with e.g. docker-compose 1.29.2 it works as expected:
ERROR: Missing mandatory value for "environment" option interpolating ... in service "my-service": "My error message"
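For reference, a minimal sketch of a compose file that uses the mandatory-variable syntax (same shape as the reproduction above; err is just a placeholder message):
version: "3.4"
services:
  hello:
    image: hello-world
    environment:
      # with a recent docker-compose this aborts with
      # "Missing mandatory value ... err" when TEST is unset or empty
      - TEST=${TEST:?err}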
I am trying to run hadoop using docker provided here:
https://github.com/big-data-europe/docker-hadoop
I use the following command:
docker-compose up -d
to bring up the service, and I am able to access it and browse the file system using localhost:9870. The problem arises whenever I try to use pyhdfs to put a file on HDFS. Here is my sample code:
from pyhdfs import HdfsClient

hdfs_client = HdfsClient(hosts = 'localhost:9870')
# Determine the output_hdfs_path
output_hdfs_path = 'path/to/test/dir'
# Does the output path exist? If not then create it
if not hdfs_client.exists(output_hdfs_path):
hdfs_client.mkdirs(output_hdfs_path)
hdfs_client.create(output_hdfs_path + 'data.json', data = 'This is test.', overwrite = True)
If the test directory does not exist on HDFS, the code successfully creates it, but when it gets to the .create part it throws the following exception:
pyhdfs.HdfsIOException: Failed to find datanode, suggest to check cluster health. excludeDatanodes=null
What surprises me is that my code is able to create the empty directory but fails to put the file on HDFS. My docker-compose.yml file is exactly the same as the one provided in the GitHub repo. The only change I've made is in the hadoop.env file, where I changed:
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
to
CORE_CONF_fs_defaultFS=hdfs://localhost:9000
I have seen another post on Stack Overflow and tried the following command:
hdfs dfs -mkdir hdfs:///demofolder
which works fine in my case. Any help is much appreciated.
I would keep the default CORE_CONF_fs_defaultFS=hdfs://namenode:9000 setting.
It works fine for me after adding a forward slash to the paths:
import pyhdfs
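# note: "namenode" must be resolvable from wherever this client runs
# (e.g. run the client inside the same Docker network, or add a matching /etc/hosts entry)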
fs = pyhdfs.HdfsClient(hosts="namenode")
output_hdfs_path = '/path/to/test/dir'
if not fs.exists(output_hdfs_path):
fs.mkdirs(output_hdfs_path)
fs.create(output_hdfs_path + '/data.json', data = 'This is test.')
# check that it's present
list(fs.walk(output_hdfs_path))
[('/path/to/test/dir', [], ['data.json'])]
I have recreated a simple example in this tiny GitHub repo. I am attempting to use symfony/dependency-injection to configure monolog/monolog to write logs to php://stderr. I am using a YAML file called services.yml to configure dependency injection.
This all works fine if my yml file looks like this:
parameters:
  log.file: 'php://stderr'
  log.level: 'DEBUG'

services:
  stream_handler:
    class: \Monolog\Handler\StreamHandler
    arguments:
      - '%log.file%'
      - '%log.level%'

  log:
    class: \Monolog\Logger
    arguments: [ 'default', ['@stream_handler'] ]
However, my goal is to read the path of the log file and the log level from environment variables, $APP_LOG and $LOG_LEVEL respectively. According to the Symfony documentation on external parameters, the correct way to do that in the services.yml file is like this:
parameters:
  log.file: '%env(APP_LOG)%'
  log.level: '%env(LOGGING_LEVEL)%'
In my sample app I verified PHP can read these environment variables with the following:
echo "Hello World!\n\n";
echo 'APP_LOG=' . (getenv('APP_LOG') ?? '__NULL__') . "\n";
echo 'LOG_LEVEL=' . (getenv('LOG_LEVEL') ?? '__NULL__') . "\n";
Which writes the following to the browser when I use my original services.yml with hard-coded values:
Hello World!
APP_LOG=php://stderr
LOG_LEVEL=debug
However, if I use the %env(VAR_NAME)% syntax in services.yml, I get the following error:
Fatal error: Uncaught UnexpectedValueException: The stream or file "env_PATH_a61e1e48db268605210ee2286597d6fb" could not be opened: failed to open stream: Permission denied in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php:107 Stack trace: #0 /var/www/vendor/monolog/monolog/src/Monolog/Handler/AbstractProcessingHandler.php(37): Monolog\Handler\StreamHandler->write(Array) #1 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(337): Monolog\Handler\AbstractProcessingHandler->handle(Array) #2 /var/www/vendor/monolog/monolog/src/Monolog/Logger.php(532): Monolog\Logger->addRecord(100, 'Initialized dep...', Array) #3 /var/www/html/index.php(17): Monolog\Logger->debug('Initialized dep...') #4 {main} thrown in /var/www/vendor/monolog/monolog/src/Monolog/Handler/StreamHandler.php on line 107
What am I doing wrong?
OK, you need a few things here. First of all, you need version 3.3 of Symfony, which is still in beta (3.2 was the released version when I encountered this). Second, you need to "compile" the environment variables.
Edit your composer.json with the following values and run composer update. You might need to update other dependencies. You can substitute ^3.3 with dev-master.
"symfony/config": "^3.3",
"symfony/console": "^3.3",
"symfony/dependency-injection": "^3.3",
"symfony/yaml": "^3.3",
You will likely have to do this for symfony/__WHATEVER__ if you have other symfony components.
Now, in your code, after you load your YAML configuration into your dependency container, you compile it.
So after your lines here (perhaps in bin/console):
$container = new ContainerBuilder();
$loader = new YamlFileLoader($container, new FileLocator(__DIR__ . DIRECTORY_SEPARATOR . '..'));
$loader->load('services.yml');
Do this:
$container->compile(true);
Your IDE's IntelliSense might tell you compile takes no parameters. That's OK: compile() grabs its args indirectly via func_get_arg().
public function compile(/*$resolveEnvPlaceholders = false*/)
{
    if (1 <= func_num_args()) {
        $resolveEnvPlaceholders = func_get_arg(0);
    } else {
        . . .
    }
References
Github issue where this was discussed
Pull request to add compile(true)
Using this command after loading your services.yaml file should help.
$containerBuilder->compile(true);
As a bonus, your file also gets validated by the configuration checks that this method performs. The parameter is $resolveEnvPlaceholders, which makes environment variables accessible to the YAML services configuration.
I am running two instances of RedHat. I have SaltMaster installed on one machine and SaltMinion installed on another. I am using a free version of Salt. I want to test SaltStack to do a basic configuration management task. If it can transfer a file from SaltMaster to SaltMinion, that would be great. If it can install Apache web server on SaltMinion, that would be great. Either task will help me learn. My learning goal is semi-flexible.
I can use salt '*' test.ping. The response is True. I tried this command: salt '*' state.apply
I got this error:
hostname.fqdn:
Data failed to compile:
----------
No matching salt environment for environment 'qa' found
----------
No matching sls found for 'qa1' in env 'qa'
----------
No matching sls found for 'base1' in env 'base'
----------
No matching salt environment for environment 'dev' found
----------
Specified SLS base1 in saltenv dev is not available on the salt master or through a configured fileserver
I modified the /etc/salt/master file. I uncommented these lines:
fileserver_backend:
  - git
  - roots
I tried this command again: salt '*' state.apply
I received this error:
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1:
fileserver_backend:
^
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1:
fileserver_backend:
^
I have been following these directions here:
https://docs.saltstack.com/en/latest/topics/tutorials/states_pt1.html
I created a webserver.sls file.
I inserted these lines as the content:
apache:            # ID declaration
  pkg:             # state declaration
    - installed    # function declaration
I do not see how the three lines in the directions above would be enough to configure SaltStack to work. Where would the Apache installation media need to be? Where would the transfer happen from? Am I supposed to download the media to SaltMaster? I would assume so, but where would I put it? I have a Satellite server, so yum commands work.
Alternatively, how do I get SaltStack to transfer a file from SaltMaster to SaltMinion?
The first error ([...]No matching sls found for 'qa1' in env 'qa'[...]) indicates that you have configured a lot of different environments (file_roots) that are not present on your master's filesystem. Your approach to solving this goes in the right direction, but leads to this error:
[ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^ [ERROR ] Error parsing configuration file: /etc/salt/master - expected '<document start>', but found '<block mapping start>' in "<string>", line 547, column 1: fileserver_backend: ^
You should no longer be able to test.ping your minion, as the salt master should not be running anymore, should it? To solve this, just read the error message: it tells you which part of your salt master configuration file salt is unhappy with.
The fileserver_backend setting configures which types of backends should be available. You should check the file_roots configuration to actually define which roots are available. Roots refer to salt state folders in your filesystem.
A very simple config might look like this:
file_roots:
  base:
    - /srv/salt
It assumes that /srv/salt is the root of your state tree, which effectively means that your webserver.sls should be located in this folder.
Your webserver.sls looks promising: it should install apache on a minion when you apply it.
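If you run salt '*' state.apply without arguments, the master also needs a top file that maps states to minions. A minimal sketch of /srv/salt/top.sls, assuming the webserver.sls from the question sits directly in /srv/salt, could look like this:
base:
  '*':
    - webserver
With that in place, state.apply renders only the base environment and applies the webserver state to every minion.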
Managing configuration files on the master and transferring them to the minions is something salt can easily achieve. A simple state might look like:
/etc/myawesomeconfigurationfile.conf:
  file.managed:
    - source: salt://myawesomefile    # refers to /srv/salt/myawesomefile
    - user: root
    - group: root
    - mode: 640
You also asked about media files that you want to manage. If you are talking about application-related data, it is not a good idea to use salt to move it around. IMO, other approaches like NFS, GlusterFS, or anything else that decouples user content from your application would be better.
I am trying to use the export-graphml function in Neo4j 2.2. I have downloaded the neo4j shell tools and extracted them into the lib directory. I am able to export the entire database as a GraphML file. However, if I try to export a subset using a query, I receive the following error:
Error occurred in server thread; nested exception is:
java.lang.NoSuchMethodError: org.neo4j.cypher.export.CypherResultSubGraph.from(Lorg/neo4j/cypher/javacompat/ExecutionResult;Lorg/neo4j/graphdb/GraphDatabaseService;Z)Lorg/neo4j/cypher/export/SubGraph;
The statement I used is:
export-graphml -o /path/to/file/out.graphml match (n:Person)-[r:RELATIONSHIP]-() WHERE n.id = 12345 return n, r
I have tried different variations with the different options (-r, -t) and none of them work.