Can't connect Java client to MarkLogic database

I've just installed a MarkLogic NoSQL database out of the box on a Windows machine.
I wrote a simple Java client to put data into the database, but I get this error:
org.apache.http.conn.HttpHostConnectException: Connection to http://my.caci.local:8003 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:158)
The MarkLogic database is started. This is the code:
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8003, "admin", "admin", Authentication.DIGEST);
XMLDocumentManager docMgr = client.newXMLDocumentManager();
BinaryDocumentManager binMgr = client.newBinaryDocumentManager();
DOMHandle handle = new DOMHandle();
for (int i = 0; i < AANT_PERSONEN; i++) {
    Document document = createDocument(i);
    String docId = "/zaak/" + 20;
    handle.set(document);
    docMgr.write(docId, handle);
}
....
The MarkLogic console reports the following ports to be active on my.caci.local:
Default :: Admin : 8001 [HTTP]
Default :: App-Services : 8000 [HTTP]
Default :: HealthCheck : 7997 [HTTP]
Default :: Manage : 8002 [HTTP]
I'm new to MarkLogic, and this is my question:
- what port should I use to connect from my Java client?

In agreement with MystyxMac, I notice the console does not report a REST server on 8003.
Here's the documentation for setting up a REST server:
http://docs.marklogic.com/guide/rest-dev/intro#id_97899
You should also add users for the rest-reader, rest-writer, and rest-admin roles.
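If it helps, a REST API instance on 8003 can also be created through the Management API on port 8002; a sketch (the instance name and content database are placeholders):
curl --digest --user admin:admin -X POST \
  -H "Content-Type: application/json" \
  -d '{"rest-api": {"name": "my-rest-api", "port": "8003", "database": "Documents"}}' \
  http://localhost:8002/v1/rest-apis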
Hoping that helps,
Erik Hennum

For testing purposes you can simply switch the port you are using to 8000.
From the documentation:
When you install MarkLogic Server, a pre-configured REST API instance
is available on port 8000. This instance uses the Documents database
as the content database and the Modules database as the modules
database.
The instance on port 8000 is convenient for getting started, but you
will usually create a dedicated instance for production purposes.
http://docs.marklogic.com/guide/rest-dev/service#id_15309
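Concretely, the only change needed in the code from the question is the port; a minimal sketch reusing the same credentials:
// Point the client at the pre-configured REST instance on port 8000.
DatabaseClient client = DatabaseClientFactory.newClient(
        "localhost", 8000, "admin", "admin", Authentication.DIGEST);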

Related

The neo4j cypher shell and browser connections are working but the Go client connection is not

I have disabled authentication on my neo4j server, so I can connect with the cypher shell without credentials, as follows, and it works:
$ ./bin/cypher-shell -a 192.168.0.89
This is how I'm declaring my driver and session; I also tried using neo4j://* instead of bolt://*:
driver, err := neo4j.NewDriver("bolt://192.168.0.89:7687", neo4j.NoAuth())
if err != nil {
    return "", err
}
defer driver.Close()

session, _ := driver.NewSession(neo4j.SessionConfig{AccessMode: neo4j.AccessModeWrite})
defer session.Close()
But that doesn't work either. I'm getting this error when running the hello world from the neo4j Go driver page https://neo4j.com/developer/go/:
TLS error: Remote end closed the connection, check that TLS is enabled on the server
These are the logs of the server when it starts:
2021-03-07 23:17:23.227+0000 INFO ======== Neo4j 4.2.3 ========
2021-03-07 23:17:24.119+0000 INFO Performing postInitialization step for component 'security-users' with version 2 and status CURRENT
2021-03-07 23:17:24.119+0000 INFO Updating the initial password in component 'security-users'
2021-03-07 23:17:24.243+0000 INFO Bolt enabled on 192.168.0.89:7687.
2021-03-07 23:17:25.139+0000 INFO Remote interface available at http://192.168.0.89:7474/
2021-03-07 23:17:25.140+0000 INFO Started.
These are all my config settings:
dbms.connector.bolt.advertised_address=192.168.0.89:7687
dbms.connector.bolt.enabled=true
dbms.connector.bolt.listen_address=192.168.0.89:7687
dbms.connector.bolt.tls_level=DISABLED
dbms.connector.http.advertised_address=192.168.0.89:7474
dbms.connector.http.enabled=true
dbms.connector.http.listen_address=192.168.0.89:7474
dbms.connector.https.enabled=false
dbms.default_advertised_address=192.168.0.89
dbms.default_database=neo4j
dbms.default_listen_address=192.168.0.89
dbms.directories.import=/home/eduardo/NEO4J/import
dbms.directories.neo4j_home=/home/eduardo/NEO4J
dbms.jvm.additional=-Dlog4j2.disable.jmx=true
dbms.security.auth_enabled=false
dbms.tx_log.rotation.retention_policy=1 days
dbms.tx_state.memory_allocation=ON_HEAP
dbms.windows_service_name=neo4j
Again, I can connect to the same host, and the browser is also working fine.
Thanks in advance for any help :)
Adding to your answer: it is likely you're using v1.x of the Go driver. If you switch to the v4.x driver instead, you will not have to specify this config value.
You can upgrade by simply adding v4 to your import path like so:
import "github.com/neo4j/neo4j-go-driver/v4/neo4j"
More info: https://github.com/neo4j/neo4j-go-driver/blob/4.2/MIGRATIONGUIDE.md
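A minimal v4 sketch under the same assumptions as the question (no auth, the address from the question); in v4 the URI scheme selects encryption, so no Encrypted flag is needed:
package main

import "github.com/neo4j/neo4j-go-driver/v4/neo4j"

func main() {
    // "bolt://" connects without TLS; "bolt+s://" would enable it.
    driver, err := neo4j.NewDriver("bolt://192.168.0.89:7687", neo4j.NoAuth())
    if err != nil {
        panic(err)
    }
    defer driver.Close()

    // In v4, NewSession no longer returns an error.
    session := driver.NewSession(neo4j.SessionConfig{AccessMode: neo4j.AccessModeWrite})
    defer session.Close()
}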
For anyone looking for the answer: the Bolt driver will try to use TLS by default, and since in my case TLS is not configured, encryption needs to be disabled in the driver constructor call.
driver, err := neo4j.NewDriver("bolt://192.168.0.89:7687", neo4j.NoAuth(), func(c *neo4j.Config) { c.Encrypted = false })
Hope this helps other people experiencing the same issue :)

Deploying a smart contract using Truffle on a private blockchain node on Docker

I am facing problems deploying a smart contract on my private blockchain network. I created the network on three VMs (miners) using puppeth on a fourth VM (controller), following the steps in this blog: https://medium.com/@collin.cusce/using-puppeth-to-manually-create-an-ethereum-proof-of-authority-clique-network-on-aws-ae0d7c906cce
Afterwards, I installed Truffle on one of the miner VMs and initialized it using the command:
truffle init
Then I wrote a simple hello-world smart contract, compiled it, and deployed it on the Truffle development blockchain, where it worked. However, when I try to deploy it on my private blockchain, I can't connect to the network.
The admin.nodeInfo command in the geth console returns the following output:
docker exec -it 954cd3955065 geth attach ipc:/root/.ethereum/geth.ipc
Welcome to the Geth JavaScript console!
instance: Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5
coinbase: 0xe8cc4bea2cfdfd14cddefe1141bedd109576b9a9
at block: 78558 (Tue Dec 01 2020 22:01:02 GMT+0000 (UTC))
datadir: /root/.ethereum
modules: admin:1.0 clique:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0
To exit, press ctrl-d
> admin.nodeInfo
{
  enode: "enode://7206ca3c62f6db47e1230dcf14a765d4c9b4870a66470dbb21fcc5ed2fab2167d6bcc47eec8044c42037b3e6e0017aeb8ddfc3580471da54a6c7274a0c1fe46b@10.100.2.32:30303",
  enr: "enr:-Je4QGXlVAESp8r2s1uHBJxoDLWQo8IvZsbe5sX2YRBb0un9Gdlt8nfDKQBR_j0lDPtaoCCuis4cJJlqtEHfa4tLO2EIg2V0aMfGhG5b-B6AgmlkgnY0gmlwhApkAiCJc2VjcDI1NmsxoQNyBso8YvbbR-EjDc8Up2XUybSHCmZHDbsh_MXtL6shZ4N0Y3CCdl-DdWRwgnZf",
  id: "027a351994ac1b127df56180b6210310cc0164f17f1b12c167cb167c4ffaa122",
  ip: "10.100.2.32",
  listenAddr: "[::]:30303",
  name: "Geth/v1.9.25-unstable-ead81461-20201123/linux-amd64/go1.15.5",
  ports: {
    discovery: 30303,
    listener: 30303
  },
  protocols: {
    eth: {
      config: {
        byzantiumBlock: 0,
        chainId: 1515,
        clique: {...},
        constantinopleBlock: 0,
        eip150Block: 0,
        eip150Hash: "0x0000000000000000000000000000000000000000000000000000000000000000",
        eip155Block: 0,
        eip158Block: 0,
        homesteadBlock: 0,
        istanbulBlock: 0,
        petersburgBlock: 0
      },
      difficulty: 98201,
      genesis: "0x17f752387c901db617cf0594ecd2cb9811dfcd666318c2e0e7cb0239471da979",
      head: "0xf8a37d0390558746901faa55463c127c553f02cf2d23ce0cb469fcd470c810f9",
      network: 1515
    }
  }
}
I tried adding the network configuration in truffle-config.js like this:
devnet2: {
    host: "localhost",
    port: "30303", // port where the node is
    network_id: "*",
    from: "0x91cd7b879fefff34259d577a56d290b3315bf9b3"
}
Then, when deploying with the command truffle deploy --network devnet2, I always get this error:
Compiling your contracts...
===========================
> Everything is up to date, there is nothing to compile.
/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56
throw new Error(errorMessage);
^
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout.setTimeout (/usr/local/lib/node_modules/truffle/build/webpack:/packages/provider/index.js:56:1)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
I tried extending the timeout limit, but it didn't work. I also tried using Web3 providers (HttpProvider and IpcProvider) but without any luck (I can give more details if needed).
Any help is much appreciated, because I've spent a lot of time on this without getting anywhere. Unfortunately, I couldn't find anything about deploying smart contracts to a node that is running on Docker. If needed, I can gladly give more details about what I did.
I managed to run smart contracts on a private network, though not using Docker. Some things come to mind: did you run a miner on your network? You need a miner running so that the contract gets migrated. Did you make sure the gas limit is met when running the contract? The miners wait for the gas limit to be reached before processing a request.
Did you already deploy the contract? In that case, either create a new migration script by bumping the version, or use the reset flag to run all migration scripts again, as shown below.
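For example, re-running all migrations with the reset flag, using the network name from the question:
truffle deploy --network devnet2 --reset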

Spring Data Elasticsearch: Not a valid protocol version: This is not an HTTP port

I have the following Elasticsearch container configuration in my test case:
@Container
public static GenericContainer container = new GenericContainer<>("elasticsearch:7.7.0")
        .withExposedPorts(9200, 9300)
        .withEnv("discovery.type", "single-node")
        .withNetwork(Network.newNetwork())
        .withNetworkAliases("someNetwork");
In a @BeforeAll-annotated method I set the Elasticsearch URL property like this:
System.setProperty("spring.data.elasticsearch.cluster-nodes", container.getContainerIpAddress() + ":" + container.getMappedPort(9300));
From PowerShell, when I check the running containers (while the test case is paused in the debugger), I see something like this in the ports column: 0.0.0.0:32844->9200/tcp, 0.0.0.0:32843->9300/tcp
When I print container.getContainerIpAddress() + ":" + container.getMappedPort(9300), I get the same port that is mapped to 9300 in the container's ports column, in this case localhost:32843; the port is random and changes on every run.
When the code `conf = repo.save(conf);` runs, I get the following exception:
Caused by: org.apache.http.ProtocolException: Not a valid protocol version: This is not an HTTP port
at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:209)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:245)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.http.ParseException: Not a valid protocol version: This is not an HTTP port
at org.apache.http.message.BasicLineParser.parseProtocolVersion(BasicLineParser.java:148)
at org.apache.http.message.BasicLineParser.parseStatusLine(BasicLineParser.java:366)
at org.apache.http.impl.nio.codecs.DefaultHttpResponseParser.createMessage(DefaultHttpResponseParser.java:112)
at org.apache.http.impl.nio.codecs.DefaultHttpResponseParser.createMessage(DefaultHttpResponseParser.java:50)
at org.apache.http.impl.nio.codecs.AbstractMessageParser.parseHeadLine(AbstractMessageParser.java:156)
at org.apache.http.impl.nio.codecs.AbstractMessageParser.parse(AbstractMessageParser.java:207)
... 11 more
You are using a REST client to access Elasticsearch on port 9300, which is the port for the TransportClient. With a REST client you need to target port 9200.
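As a sketch, the test property could be pointed at the REST port instead; note that the exact property name depends on your Spring Boot version (spring.elasticsearch.rest.uris is assumed here for the high-level REST client):
// Target the mapped REST port (9200), not the transport port (9300).
System.setProperty("spring.elasticsearch.rest.uris",
        "http://" + container.getContainerIpAddress() + ":" + container.getMappedPort(9200));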

Erlang :ssh authentication error. How to connect to SSH using an identity file

I'm getting an authentication error when trying to connect to an SSH host.
The goal is to connect to the host using local forwarding. The command below is an example of using the Dropbear SSH client to connect to the host with local forwarding:
dbclient -N -i /opt/private-key-rsa.dropbear -L 2002:1.2.3.4:2006 -p 2002 -l test_user 11.22.33.44
I have this code so far, which returns an empty connection:
ip = "11.22.33.44"
user = "test_user"
port = 2002
ssh_config = [
user_interaction: false,
silently_accept_hosts: true,
user: String.to_charlist(user),
user_dir: String.to_charlist("/opt/")
]
# returns aunthentication error
{:ok, conn} = :ssh.connect(String.to_charlist(ip), port, ssh_config)
This is the error I'm seeing:
Server: 'SSH-2.0-OpenSSH_5.2'
Disconnects with code = 14 [RFC4253 11.1]: Unable to connect using the available authentication methods
State = {userauth,client}
Module = ssh_connection_handler, Line = 893.
Details:
User auth failed for: "test_user"
I'm a newbie to Elixir and have been reading the Erlang ssh documentation for two days. I did not find any examples in the documentation, which makes it difficult to understand.
You are using a non-default key name, private-key-rsa.dropbear. By default, Erlang looks for this set of names, from the ssh module docs:
Optional: one or more of the user's private key(s) in case of publickey authorization. The default files are:
id_dsa and id_dsa.pub
id_rsa and id_rsa.pub
id_ecdsa and id_ecdsa.pub
To verify this is the reason, try renaming private-key-rsa.dropbear to id_rsa. If this works, the next step would be to add a key_cb callback to the ssh_config that returns the correct key file name.
One example implementation of a similar feature is labzero/ssh_client_key_api.
The solution was to convert the Dropbear key to an OpenSSH key. I used this link as a reference.
Here is the command to convert a Dropbear key to an OpenSSH key:
/usr/lib/dropbear/dropbearconvert dropbear openssh /opt/private-key-rsa.dropbear /opt/id_rsa
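After the conversion, the configuration from the question should pick the key up through the default id_rsa lookup; a sketch reusing the values from the question:
# /opt/ now contains id_rsa, which :ssh finds by default.
ssh_config = [
  user_interaction: false,
  silently_accept_hosts: true,
  user: String.to_charlist("test_user"),
  user_dir: String.to_charlist("/opt/")
]

{:ok, conn} = :ssh.connect(String.to_charlist("11.22.33.44"), 2002, ssh_config)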

I installed FreeRADIUS and MySQL inside a Docker container

I installed FreeRADIUS and MySQL inside a Docker container.
I exposed ports 1812, 1813, and 3306.
I imported the database schema into MySQL and inserted these rows:
INSERT INTO nas VALUES (NULL , '0.0.0.0/0', 'myNAS', 'other', NULL , 'mysecret', NULL , NULL , 'RADIUS Client');
INSERT INTO radcheck (username, attribute, op, value) VALUES ('thisuser', 'User-Password', ':=', 'thispassword');
INSERT INTO radusergroup (username, groupname, priority) VALUES ('thisuser', 'thisgroup', '1');
INSERT INTO radgroupreply (groupname, attribute, op, value) VALUES ('thisgroup', 'Service-Type', ':=', 'Framed-User'), ('thisgroup', 'Framed-Protocol', ':=', 'PPP'), ('thisgroup', 'Framed-Compression', ':=', 'Van-Jacobsen-TCP-IP');
I stopped the FreeRADIUS service (service freeradius stop) and am running the server in debug mode (freeradius -X).
When I use this command in another terminal for the same container:
radtest thisuser thispassword 127.0.0.1 0 mysecret
the server accepts the request.
But when I run the same command from another machine, the server does not see the request, and the output on the other machine is "No response".
These are the listen sections in /etc/freeradius/radiusd.conf:
listen {
    type = auth
    ipaddr = *
    port = 0
}
listen {
    ipaddr = *
    port = 0
    type = acct
}
How can I fix it?
Adding the rows to the sql database is insufficient. You need to configure your sql instance in mods-available/sql to match your local database, uncomment read_clients in mods-available/sql, and list the sql module in the instantiate section in radiusd.conf to ensure it's loaded if it's not referenced elsewhere in one of the virtual servers.
After making these changes, restart the server. The SQL module should then read the clients list in on startup. Check the debug output (freeradius -X) to ensure the SQL module can connect to your database and reads the NAS entries in successfully.
The reason why your local connections work is because there's a client entry included for localhost in the clients.conf file that ships with the server.
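For reference, a sketch of the relevant settings in mods-available/sql (server, credentials, and database name are placeholders for your local setup):
sql {
    dialect = "mysql"
    driver = "rlm_sql_${dialect}"
    server = "localhost"
    port = 3306
    login = "radius"
    password = "radiuspassword"
    radius_db = "radius"

    # Read RADIUS clients (the nas table) from the database on startup.
    read_clients = yes
}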
I fixed this issue by exposing the ports over UDP: -p 1812:1812/udp -p 1813:1813/udp
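For reference, the corresponding docker run port mappings would look something like this (the image name is a placeholder):
docker run -d -p 1812:1812/udp -p 1813:1813/udp -p 3306:3306 my-freeradius-image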
