I am trying to follow this documentation: https://neo4j.com/labs/apoc/4.2/operational/init-script/
This is my docker command:
docker run --rm \
--env NEO4J_AUTH="neo4j/test" \
--env NEO4JLABS_PLUGINS='["apoc"]' \
--env NEO4J_apoc_export_file_enabled="true" \
--env NEO4J_apoc_import_file_enabled="true" \
--env NEO4J_apoc_import_file_use__neo4j__config="true" \
--env NEO4J_dbms_directories_import="/" \
--env NEO4J_apoc_initializer_neo4j_1='CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher");' \
--env NEO4J_apoc_initializer_neo4j_2='CALL apoc.cypher.runFile("file:////var/lib/neo4j/db_init_scripts/db_schema.cypher");' \
--name neo4j \
neo4j:cus
where neo4j:cus is my custom image, built from neo4j:4.2 with the required Cypher files copied in.
My db_ddls.cypher creates constraints/indexes:
CREATE CONSTRAINT idx_person_unq IF NOT EXISTS ON (p:Person) ASSERT p.name IS UNIQUE;
My db_schema.cypher seeds the initial data:
MERGE (p:Person {name: "Inital Person"});
When I try to start the container, I get the messages below:
Unrecognized setting. No declared setting with name: apoc.initializer.cypher.1
Unrecognized setting. No declared setting with name: apoc.initializer.cypher.2
When I try to use the older style of environment variables:
--env NEO4J_apoc_initializer_cypher_1='CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_schema.cypher");' \
--env NEO4J_apoc_initializer_cypher_2='CALL apoc.cypher.runFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher");' \
I get this:
Unrecognized setting. No declared setting with name: apoc.initializer.cypher.1
Unrecognized setting. No declared setting with name: apoc.initializer.cypher.2
But when I run only a single Cypher initializer:
--env NEO4J_apoc_initializer_cypher='CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher");' \
Then it works.
This is how my conf/neo4j.conf looks:
root#4f8955759b52:/var/lib/neo4j# grep apoc conf/neo4j.conf
#dbms.security.procedures.allowlist=apoc.coll.*,apoc.load.*,gds.*
apoc.initializer.neo4j.1=CALL apoc.cypher.runFile("file:////var/lib/neo4j/db_init_scripts/db_schema.cypher");
apoc.initializer.neo4j.0=CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher");
apoc.import.file.use_neo4j_config=true
apoc.import.file.enabled=true
apoc.export.file.enabled=true
dbms.security.procedures.unrestricted=apoc.*
Can anyone point out what I am missing, so that I can both create the indexes and initialize the schema?
Thanks
Well, it turns out this only works when you create a separate apoc.conf file and place it beside neo4j.conf.
My apoc.conf now looks like this:
apoc.initializer.neo4j.1=CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher")
apoc.initializer.neo4j.2=CALL apoc.cypher.runFile("file:////var/lib/neo4j/db_init_scripts/db_schema.cypher");
And the log files show:
2021-05-26 18:51:06.068+0000 INFO [neo4j/c0fb7489] successfully initialized: CALL apoc.cypher.runSchemaFile("file:////var/lib/neo4j/db_init_scripts/db_ddls.cypher")
2021-05-26 18:51:06.882+0000 INFO Remote interface available at http://localhost:7474/
2021-05-26 18:51:06.892+0000 INFO Started.
2021-05-26 18:51:08.091+0000 INFO [neo4j/c0fb7489] successfully initialized: CALL apoc.cypher.runFile("file:////var/lib/neo4j/db_init_scripts/db_schema.cypher");
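For reference, here is an untested sketch of how the same setup could be wired up without rebuilding the image, by bind-mounting the apoc.conf into the container's conf directory (the host-side $PWD/apoc.conf path is an assumption; the container path matches the conf/neo4j.conf location shown above, and the Cypher files are assumed to already be in the image):
docker run --rm \
--env NEO4J_AUTH="neo4j/test" \
--env NEO4JLABS_PLUGINS='["apoc"]' \
-v $PWD/apoc.conf:/var/lib/neo4j/conf/apoc.conf \
--name neo4j \
neo4j:cus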
I can run Keycloak with the following command
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
Works as expected
On the same computer, I try to run it using Docker:
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem comes from Let's Encrypt's symlinked directory structure on Linux and the permissions required to access those files.
Let's Encrypt links the live directory to the archive directory like this:
/etc/letsencrypt/live/<your-domain>/*.pem -> /etc/letsencrypt/archive/<your-domain>/*.pem
The problem is this link from the live folder to the archive folder/files:
the permissions on them are usually not sufficient.
A quick fix is to create a cert mirror: copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem
to a new cert folder such as /opt/certs,
change the permissions on /opt/certs to 777: chmod 777 -R /opt/certs,
and create a job in /etc/cron.monthly that copies the files to /opt/certs and fixes the permissions every month, so your cert mirror stays up to date (see the sketch below).
This will make your example work. Keep in mind that permissions like 777 let everyone access these files; use proper permissions in a production environment.
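A minimal sketch of that quick fix as a script you could drop into /etc/cron.monthly (the <your-domain> placeholder needs replacing; cp -L is used so the live/ symlinks are dereferenced into real files; the 777 mode mirrors the quick fix above and should be tightened for production):
#!/bin/sh
# mirror the Let's Encrypt certs into a plain directory the container can read
mkdir -p /opt/certs
# -L follows the live/ symlinks into archive/ so real files land in /opt/certs
cp -L /etc/letsencrypt/live/<your-domain>/*.pem /opt/certs/
# permissive mode as in the quick fix above; tighten this in production
chmod -R 777 /opt/certs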
I discovered the answer.
Let's Encrypt certificates in the "live" folder are symlinks into the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container with that image.
Following
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build a custom image and change the cert permissions
Dockerfile
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
COPY kc-export.json /opt/keycloak/kc-export.json
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container
docker run -p 8443:8443 \
-v /etc/letsencrypt:/opt/keycloak/certs:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
I have a Docker container running with a graph created. I am following this guide to install APOC. I have copied the JAR file from /var/lib/neo4j/labs to /var/lib/neo4j/plugins and restarted the container.
A screenshot of the instructions:
I also set dbms.security.procedures.unrestricted=apoc.*, but the APOC calls still do not work after restarting the container.
It always says,
"There is no procedure with the name apoc.help registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed."
Is there anything I am missing?
My Neo4J version: 4.4.11
APOC versions I have tried are: apoc-4.4.0.8-core, apoc-4.4.0.6-core and apoc-4.4.0.9-core
Update 1
The script that produces the above error message:
MATCH (n:FEATURE{name:'Update_Profile'})
CALL apoc.path.spanningTree(n,{maxLevel:15}) YIELD path
RETURN path
Second one:
CALL apoc.export.cypher.all("all-plain.cypher", {
format: "plain",
useOptimizations: {type: "UNWIND_BATCH", unwindBatchSize: 20}
})
YIELD file, batches, source, format, nodes, relationships, properties, time, rows, batchSize
RETURN file, batches, source, format, nodes, relationships, properties, time, rows, batchSize;
Update 2
I ran the following command inside the Docker container (with /var/lib/neo4j as the current directory) to copy the JAR file:
cp labs/apoc-4.4.0.8-core.jar /var/lib/neo4j/plugins/
After this, I restarted the container using:
sudo docker container restart cybersage-neo4j
Why don't you use the NEO4JLABS_PLUGINS environment variable to simply install APOC automatically, as in the documentation?
docker run \
-p 7474:7474 -p 7687:7687 \
-v $PWD/data:/data -v $PWD/plugins:/plugins \
--name neo4j-apoc \
-e NEO4J_apoc_export_file_enabled=true \
-e NEO4J_apoc_import_file_enabled=true \
-e NEO4J_apoc_import_file_use__neo4j__config=true \
-e NEO4JLABS_PLUGINS=\[\"apoc\"\] \
-e NEO4J_dbms_security_procedures_unrestricted=apoc.\\\* \
neo4j:4.0
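Once the container is up, one way to confirm the plugin was registered is to call an APOC procedure through cypher-shell inside the container. This is only a sketch: the credentials are placeholders, since the run command above does not set NEO4J_AUTH.
docker exec -it neo4j-apoc cypher-shell -u neo4j -p <password> \
  "CALL apoc.help('apoc') YIELD name RETURN name LIMIT 5;"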
Hi, I use the Kafka Connect Docker container image confluentinc/cp-kafka-connect 5.5.3, and everything was running fine with the following parameters:
...
-e "CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
...
Now we have introduced Schema Registry and decided to go with JsonSchemaConverter for now rather than Avro. I changed the following (INTERNAL stays as it is for now):
...
-e "CONNECT_KEY_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_VALUE_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
-e "CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
...
The following error appeared:
[2021-02-04 09:24:14,637] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.json.JsonSchemaConverter for configuration key.converter: Class io.confluent.connect.json.JsonSchemaConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:727)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:473)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:374)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:316)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
It seems the converter is not available here by default. Do I have to install JsonSchemaConverter? I thought it came with the image by default.
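For context, a sketch of how the converter might be added if it turns out not to be on the worker's plugin path; the Confluent Hub component name and version are assumptions and should be checked against Confluent Hub before use (confluent-hub is the CLI shipped in the cp-kafka-connect image):
# run inside the image, e.g. in a derived Dockerfile RUN step or before starting the worker
# component name/version below are assumptions -- verify on Confluent Hub
confluent-hub install --no-prompt confluentinc/kafka-connect-json-schema-converter:5.5.3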
I'm trying to get a bunch of very big files into a Docker container while using CWL. When using the default method of file inputs via
job.yml:
input_file:
  class: File
  path: /home/ubuntu/data/bigfile.zip
the CWL runner apparently copies the file and gets stuck. Is there an easy way to just mount a directory directly into the Docker container?
task.cwl:
cwlVersion: cwl:draft-3
class: CommandLineTool
baseCommand: run.sh
hints:
  - class: DockerRequirement
    dockerImageId: name123
inputs:
  - id: input_file
    type: File
    inputBinding:
      position: 1
outputs: []
Thanks in advance!
The CWL user guide has an example for how to do this: https://www.commonwl.org/user_guide/15-staging/index.html
You use InitialWorkDirRequirement and add the input file to the list of files to be staged in the working directory, like so:
cwlVersion: v1.0
class: CommandLineTool
baseCommand: cat
hints:
  DockerRequirement:
    dockerPull: alpine
inputs:
  in1:
    type: File
    inputBinding:
      position: 1
      valueFrom: $(self.basename)
requirements:
  InitialWorkDirRequirement:
    listing:
      - $(inputs.in1)
outputs:
  out1: stdout
When you run this, say with the CWL reference runner (cwltool), you can see that the input file is mounted directly in the working directory (but safely, in read-only mode):
[job step-staging.cwl] /private/tmp/docker_tmpIaCJQ8$ docker \
run \
-i \
--volume=/private/tmp/docker_tmpIaCJQ8:/XMOiku:rw \
--volume=/private/tmp/docker_tmpW2RR3v:/tmp:rw \
--volume=/Users/kghose/Work/code/conditional/runif-examples/wf1.cwl:/XMOiku/wf1.cwl:ro \
--workdir=/XMOiku \
--read-only=true \
--log-driver=none \
--user=501:20 \
--rm \
--env=TMPDIR=/tmp \
--env=HOME=/XMOiku \
--cidfile=/private/tmp/docker_tmpdV6afe/20190502114327-207989.cid \
alpine \
cat \
wf1.cwl > /private/tmp/docker_tmpIaCJQ8/f3c708b20abf7fbf7f089060ec071c0956eb0cfd
However, as @TheDudeAbides says, the behavior of CWL 1.0 is to mount the files rather than copy them. So even if you did not stage them, they would still be mounted to make them available to the container, just in a different directory. This is how cwltool, Toil, and the SBG platform work.
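For completeness, a job file and invocation that would exercise the staging example above (a sketch: the job file name is made up, step-staging.cwl matches the name appearing in the log output, and the input path is reused from the question):
cat > staging-job.yml <<'EOF'
in1:
  class: File
  path: /home/ubuntu/data/bigfile.zip
EOF
cwltool step-staging.cwl staging-job.yml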
I am trying to run Kubernetes on CoreOS. I am using fleet, setup-network-environment, and kube-register to register nodes. However, in the cloud-init file where I write my systemd unit files, the kubelet's unit file won't run this properly:
ExecStart=/opt/bin/kubelet \
--address=0.0.0.0 --port=10250 \
--hostname_override=${DEFAULT_IPV4} \
--allow_privileged=true \
--logtostderr=true \
--healthz_bind_address=0.0.0.0
Instead of my public IP, ${DEFAULT_IPV4} comes out as $default_ipv4, which also doesn't resolve to the IP. I know --hostname_override should just take a string, and it works when I run this line from the command line. There are other unit files where ${ENV_VAR} works fine. Why does it break only for the kubelet's unit file?
EDIT 1
/etc/network-environment
LO_IPV4=127.0.0.1
ENS33_IPV4=192.168.195.242
DEFAULT_IPV4=192.168.195.242
ENS34_IPV4=172.22.22.238
EDIT 2
kubelet unit file
- name: kube-kubelet.service
  command: start
  content: |
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    Requires=setup-network-environment.service
    After=setup-network-environment.service
    [Service]
    EnvironmentFile=/etc/network-environment
    ExecStartPre=/usr/bin/curl -L -o /opt/bin/kubelet -z /opt/bin/kubelet https://storage.googleapis.com/kubernetes-release/release/v0.18.2/bin/linux/amd64/kubelet
    ExecStartPre=/usr/bin/chmod +x /opt/bin/kubelet
    # wait for kubernetes master to be up and ready
    ExecStartPre=/opt/bin/wupiao 172.22.22.10 8080
    ExecStart=/opt/bin/kubelet \
      --address=0.0.0.0 \
      --port=10250 \
      --hostname_override=172.22.22.21 \
      --api_servers=172.22.22.10:8080 \
      --allow_privileged=true \
      --logtostderr=true \
      --healthz_bind_address=0.0.0.0 \
      --healthz_port=10248
    Restart=always
    RestartSec=10
The Exec*= command is not run through a shell. In my experimenting, it was not very good at figuring out where the variable was unless the variable stood by itself. I went and looked at some examples online, and they always show the environment variable by itself. So, given a file like /tmp/myfile:
ENV=1.2.3.4
These [Service] definitions won't do what you think:
EnvironmentFile=/tmp/myfile
ExecStart=echo M$ENV
ExecStart=echo $ENV:8080
but, this will work on a line by itself:
EnvironmentFile=/tmp/myfile
ExecStart=echo $ENV
That doesn't help much when trying to pass an argument, like:
EnvironmentFile=/tmp/myfile
ExecStart=echo --myarg=http://$ENV:8080/v2
To pass the argument, I had to put the entire --myarg value into a string in /tmp/myfile:
ENV="--myarg=http://1.2.3.4:8080/v2"
Finally I could get my argument passed:
EnvironmentFile=/tmp/myfile
ExecStart=echo $ENV
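Applied to the kubelet unit from the question, the same workaround would look roughly like this (a sketch only: the KUBELET_HOSTNAME_FLAG variable name is made up, and the IP comes from the /etc/network-environment shown in EDIT 1):
# /etc/network-environment (put the whole flag into one variable)
KUBELET_HOSTNAME_FLAG="--hostname_override=192.168.195.242"
# kubelet [Service] section
EnvironmentFile=/etc/network-environment
ExecStart=/opt/bin/kubelet --address=0.0.0.0 --port=10250 $KUBELET_HOSTNAME_FLAG --allow_privileged=true --logtostderr=true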
It would seem the issue was with the version of CoreOS in the Vagrant box. After updating the Vagrant box, the environment variable resolved to the proper value.