I want to compile solr from the main trunk and run it.
I did the following:
git clone https://github.com/apache/lucene-solr.git
cd lucene-solr/solr
ant dist
bin/solr -e cloud
This creates the relevant solr nodes but fails to create a collection with the following error:
$ bin/solr -e cloud
Welcome to the SolrCloud example!
This interactive session will help you launch a SolrCloud cluster on your local workstation.
To begin, how many Solr nodes would you like to run in your local cluster? (specify 1-4 nodes) [2]
Ok, let's start up 2 Solr nodes for your example SolrCloud cluster.
Please enter the port for node1 [8983]
8983
Please enter the port for node2 [7574]
7574
Starting up SolrCloud node1 on port 8983 using command:
solr start -cloud -s example/cloud/node1/solr -p 8983
Waiting to see Solr listening on port 8983 [|]
Started Solr server on port 8983 (pid=94888). Happy searching!
Starting node2 on port 7574 using command:
solr start -cloud -s example/cloud/node2/solr -p 7574 -z localhost:9983
Waiting to see Solr listening on port 7574 [|]
Started Solr server on port 7574 (pid=94979). Happy searching!
Now let's create a new collection for indexing documents in your 2-node cluster.
Please provide a name for your new collection: [gettingstarted]
gettingstarted
How many shards would you like to split gettingstarted into? [2]
2
How many replicas per shard would you like to create? [2]
2
Please choose a configuration for the gettingstarted collection, available options are:
basic_configs, data_driven_schema_configs, or sample_techproducts_configs [data_driven_schema_configs]
Error: Could not find or load main class org.apache.solr.util.SolrCLI
I am sure this used to work before, but I am not able to figure out what's wrong.
Any help would be appreciated.
ant server needs to be run to solve the classpath issue (or ant example for older versions).
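For reference, the full sequence that should then work (a sketch based on the steps in the question, with ant dist replaced by ant server):
git clone https://github.com/apache/lucene-solr.git
cd lucene-solr/solr
ant server
bin/solr -e cloud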
I'm using docker-desktop on macOS and trying to run a Chainlink node following the instructions on this page.
The database name is node, which is the same as the username: node. The user has access to the database and can log in using the psql client.
Connection strings I've tried in the .env file:
postgresql://node@localhost/node
postgresql://node:password@localhost/node
postgresql://node:password@localhost:5432/node
postgresql://node:password@127.0.0.1:5432/node
postgresql://node:password@127.0.0.1/node
When I run the start command cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env smartcontract/chainlink local n, I get the following stack trace:
2020-09-15T14:24:41Z [INFO] Starting Chainlink Node 0.8.15 at commit a904730bd62c7174b80a2c4ccf885de3e78e3971 cmd/local_client.go:50
2020-09-15T14:24:41Z [INFO] SGX enclave *NOT* loaded cmd/enclave.go:11
2020-09-15T14:24:41Z [INFO] This version of chainlink was not built with support for SGX tasks cmd/enclave.go:12
2020-09-15T14:24:41Z [INFO] Locking postgres for exclusive access with 500ms timeout orm/orm.go:69
2020-09-15T14:24:41Z [ERROR] unable to lock ORM: dial tcp 127.0.0.1:5432: connect: connection refused logger/default.go:139 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Error
/chainlink/core/logger/default.go:117
...
Does anyone know how I can resolve this?
The problem is probably caused by the fact that your chainlink database was locked with an exclusive lock, and that lock was never removed before the node stopped.
What works for me in this situation is to use the pgAdmin UI (or a similar tool) to list all locks, find the exclusive lock(s) held on the chainlink database, and note down the process id(s).
Then log in to your pg client and run SELECT pg_terminate_backend(<pid>) or SELECT pg_cancel_backend(<pid>) with the pid of each lock holder, refreshing in pgAdmin to check whether those processes have stopped. Once they have, rerun your chainlink node.
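A sketch of how that can look from the pg client (the pid 12345 is a placeholder, and I'm assuming the database is named node as in the question):
-- list pids holding locks on the database
SELECT l.pid, l.mode FROM pg_locks l JOIN pg_database d ON l.database = d.oid WHERE d.datname = 'node';
-- terminate one of them by pid
SELECT pg_terminate_backend(12345);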
The problem is with docker networking.
Add --network host to the docker run command, before the image name, so that it is:
cd ~/.chainlink-kovan && docker run -p 6688:6688 -v ~/.chainlink-kovan:/chainlink -it --env-file=.env --network host smartcontract/chainlink local n
This fixes the issue.
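Alternatively, on Docker Desktop for Mac the container can reach the host through the special hostname host.docker.internal, so a connection string along these lines (password is a placeholder) should also reach Postgres without host networking:
postgresql://node:password@host.docker.internal:5432/node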
So I have an issue with docker-compose and rabbitmq.
I run docker-compose up and everything spins up. My docker-compose file:
services:
  rabbitmq3:
    image: "rabbitmq:3-management"
    hostname: "localhost"
    command: rabbitmq-server
    ports:
      - 5672:5672
      - 15672:15672
Then I run sudo rabbitmqctl status to check the connection with the node. I get this error:
Error: unable to perform an operation on node 'rabbit@localhost'. Please see diagnostics information and suggestions below.
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit@localhost
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: [rabbit@localhost]
rabbit@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports: node 'rabbit' not running at all
no other nodes on localhost
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-25456-rabbit@localhost'
* effective user's home directory: /Users/olof.grund
* Erlang cookie hash: d1oONiVA/qogGxkf6vs9Rw==
When I run it in the container, docker-compose exec -T rabbitmq3 rabbitmqctl status, it works.
Do I need to expose something from docker somehow? Some rabbitmq client or node maybe?
I used all the tips I found in other sources (adding the IP to /etc/hosts, restarting containers and services). It took me a day to finally get this to work, and it boils down to this:
<wait for 60 secs after the rabbit container has started>
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl force_boot
rabbitmqctl start_app
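One way to run that sequence from the host, using the rabbitmq3 service name from the compose file in the question:
docker-compose exec rabbitmq3 rabbitmqctl stop_app
docker-compose exec rabbitmq3 rabbitmqctl reset
docker-compose exec rabbitmq3 rabbitmqctl force_boot
docker-compose exec rabbitmq3 rabbitmqctl start_app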
RabbitMQ uses Erlang's distribution protocol, which requires port 4369 to be open for EPMD (the Erlang Port Mapper Daemon). Expose it in the docker-compose file and stop the EPMD instance running on your host.
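A sketch of the compose file with only the EPMD mapping added:
services:
  rabbitmq3:
    image: "rabbitmq:3-management"
    hostname: "localhost"
    command: rabbitmq-server
    ports:
      - 5672:5672
      - 15672:15672
      - 4369:4369
On the host, epmd -kill stops the local EPMD instance so the two don't clash.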
Installed Docker on Mac and trying to run Vespa on Docker following steps specified in following link
https://docs.vespa.ai/documentation/vespa-quick-start.html
I didn't have any issues through step 4: the vespa container was running after step 2, and step 3 returned a 200 OK response.
But step 5 failed to return a 200 OK response. Below is the command I ran on my terminal:
curl -s --head http://localhost:8080/ApplicationStatus
I keep getting
curl: (52) Empty reply from server
whenever I run it without the -s option.
So I checked the listening ports inside my vespa container: I don't see anything for 8080, but I do for 19071 (used in step 3).
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 8080'
➜ ~ docker exec vespa bash -c 'netstat -vatn| grep 19071'
tcp 0 0 0.0.0.0:19071 0.0.0.0:* LISTEN
Below doc has info related to vespa ports
https://docs.vespa.ai/documentation/reference/files-processes-and-ports.html
I'm assuming port 8080 should be active after docker run (step 2 of the quick start) and accessible outside the container, since port mapping is done.
But I don't see port 8080 active inside the container in the first place.
Am I missing something? Do I need to perform any additional steps beyond the quick start? FYI, I installed Jenkins inside Docker and was able to access it outside the container via port mapping, but I'm not sure why that isn't working with Vespa. I have been trying for quite some time with no progress. Please advise if I'm missing something here.
You have too little memory for your Docker container: "Minimum 6GB memory dedicated to Docker (the default is 2GB on Macs)." See https://docs.vespa.ai/documentation/vespa-quick-start.html
The deadlock detector warnings and the failure to get configuration from the configuration server (which was likely OOM-killed) indicate that you are too low on memory.
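A quick way to check what the container is actually getting (the container name vespa comes from the quick start):
docker stats --no-stream vespa
If the memory limit shown is well below 6GB, raise it under Docker Desktop's Preferences > Resources and restart the container.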
My guess is that your jdisc container had not finished initializing, or did not initialize properly. Did you check the log?
docker exec vespa bash -c '/opt/vespa/bin/vespa-logfmt /opt/vespa/logs/vespa/vespa.log'
This should tell you if something went wrong. When it is ready to receive requests, you should see something like this:
[2018-12-10 06:30:37.854] INFO : container Container.org.eclipse.jetty.server.AbstractConnector Started SearchServer#79afa369{HTTP/1.1,[http/1.1]}{0.0.0.0:8080}
[2018-12-10 06:30:37.857] INFO : container Container.org.eclipse.jetty.server.Server Started #10280ms
[2018-12-10 06:30:37.857] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Switching to the latest deployed set of configurations and components. Application switch number: 0
[2018-12-10 06:30:37.859] INFO : container Container.com.yahoo.container.jdisc.ConfiguredApplication Initializing new set of configurations and components. Application switch number: 1
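Once that Jetty line appears, step 5 from the quick start should return 200 OK:
curl -s --head http://localhost:8080/ApplicationStatus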
I have tried to set up a Redis cluster running in Docker, but it hangs when I try to join the nodes. My docker ps gives me this:
Notice the port mapping.
All containers have this basic redis.conf file
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
cluster-announce-ip 127.0.0.1
cluster-announce-port [7001, 7002, 7003, 7004, 7005 or 7006]
cluster-announce-bus-port [7101, 7102, 7103, 7104, 7105 or 7106]
Where the only change is the cluster-announce-port and cluster-announce-bus-port for each docker container. I hope you get the point.
I try to join the nodes with ./redis-trib.rb create --replicas 1 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006
It discovers them perfectly and asks if the config should be accepted:
But then redis-trib hangs indefinitely with "Waiting for the cluster to join". I can see through docker logs r_1 to r_6 that the epoch is getting set:
1:M 15 Jul 10:38:08.493 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
So redis-trib does call the different nodes.
I can't really find anything about the cluster-announce variables anywhere. Does anyone know how to do this? I think my problem lies in this part.
The Redis version I am using is 4.0.10.
OK, so I figured it out. I needed to:
set my cluster-announce-ip to the address of the Ethernet adapter that was created when Docker was installed (open a terminal and run ipconfig)
update redis-trib.rb to reflect this IP
map the 16379 cluster bus port when the docker container is created (see the sketch below)
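Putting it together for node 1 (a sketch; 10.0.75.1 is a stand-in for whatever address ipconfig shows for the Docker adapter):
# redis.conf for node 1
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
cluster-announce-ip 10.0.75.1
cluster-announce-port 7001
cluster-announce-bus-port 7101
# run the container, mapping the client port and the 16379 cluster bus port
docker run -d -v $(pwd)/redis.conf:/usr/local/etc/redis/redis.conf -p 7001:6379 -p 7101:16379 redis:4.0.10 redis-server /usr/local/etc/redis/redis.conf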
I am trying to run Solr on my machine and have made everything available for it.
For example, the Java and Ruby versions are the same as those required by the tutorials.
This is how I am doing it:
solr_wrapper -d solr/config/ --collection_name hydra-development --version 6.3.0
This throws the following error:
`exec': Failed to execute solr start: (RuntimeError)
Port 8983 is already being used by another process (pid: 1814)
Please choose a different port using the -p option.
The error message clearly indicates that some other process is using port 8983.
You need to find out which process that is and kill it.
First, run:
$ lsof -i :8983
This will list the applications running on port 8983. Let's say the pid of the process is 1814.
Then run:
$ sudo kill 1814
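Or as a one-liner (lsof -t prints only the pid):
sudo kill $(lsof -t -i :8983)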
If you run into Error CREATEing SolrCore, it is most likely because of permission issues caused by installing as root.
First, clean up the broken core:
bin/solr delete -c mycore
and recreate the core as the solr user:
su - solr -c "/opt/solr/bin/solr create_core -c mycore"