I have a new local ASP.NET Core 6 application that uses Docker and is connected to a local Docker Postgres container (individually run containers, not Docker Compose). I added the Npgsql package and managed to connect successfully and create tables (checked with pgAdmin). The issue is that whenever I try to run 'Drop-Database' from the Package Manager Console, I get the error below:
Performing the operation "Drop-Database" on target "database 'Failed creating connection: Couldn't set trusted_connection (Parameter 'trusted_connection')' on server 'Failed creating connection: Couldn't set trusted_connection (Parameter 'trusted_connection')'".
To me this doesn't make sense because I am not using any trusted connection parameter in my connection string:
"ConnectionStrings": {
"DefaultConnection": "Host=host.docker.internal;Database=local;Username=postgres;Password=password;Include Error Detail=true"
}
I have two other solutions that use Docker and successfully run PMC commands against a local Docker Postgres container, so I am not sure what is different this time. I have not found any similar resources for troubleshooting this, and specifying 'Trusted_Connection=true' or 'Integrated Security=True' does not work.
My Program.cs file is unchanged from the starter project except for the below for Npgsql:
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ApplicationDbContext>(options => options
    .UseNpgsql(connectionString)
    .EnableDetailedErrors());
Please let me know if there is anything that could be causing this issue.
So the issue was that, on a different project, I had previously set the environment manually in the Package Manager Console using:
$env:ASPNETCORE_ENVIRONMENT='Local'
So when trying to drop the database, it was searching for a Local appsettings connection string, which did not exist in my new project. Setting the environment back to Development solved the issue for me.
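For anyone who hits the same thing, you can check and reset the environment in the Package Manager Console like so (Development is the environment the stock appsettings.Development.json targets):

$env:ASPNETCORE_ENVIRONMENT
$env:ASPNETCORE_ENVIRONMENT='Development'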
I was trying to follow this tutorial:
https://docs.opensea.io/docs/1-structuring-your-smart-contract
And even found this extremely helpful YouTube video to guide me:
https://www.youtube.com/watch?v=lbXcvRx0o3Y&ab_channel=DanViau
But I've encountered a problem after installing and setting up everything I needed. The problem occurred when I tried to deploy the contracts using this bash command:
truffle deploy --network rinkeby
The error message I got is:
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout._onTimeout (C:\Users\alonb\.nvm\versions\node\v12.22.5\bin\node_modules\truffle\build\webpack:\packages\provider\index.js:56:1)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
It's not caused by a slow internet connection - I know that because I have tried executing this command on 3 different Wi-Fi connections, one at a 200 Mb/s rate.
I have tried changing the truffle-config.js file to add a longer timeout threshold (as suggested here), but the only thing that changed was that the error message took much longer to appear.
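For reference, the timeout change follows the networks[networkName].networkCheckTimeout hint from the error message; here is a sketch of the relevant truffle-config.js block (the network name, provider setup, and value are illustrative, not my exact config):

networks: {
  rinkeby: {
    provider: () =>
      new HDWalletProvider(mnemonic, `https://rinkeby.infura.io/v3/${projectId}`),
    network_id: 4,
    networkCheckTimeout: 100000 // milliseconds; raise this for slow connections
  }
}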
Technical info - I'm using Git Bash, npm version 6.14.14, nvm version 0.38.0, and node version 12.22.5.
Any suggestions? I'm lost.
Alon
I also ran into this error when following the same tutorial.
I use Alchemy (not Infura), and the issue was my API_KEY.
In other tutorials I've followed, the scripts require the full Alchemy API URL (e.g. "https://eth-rinkeby.alchemyapi.io/v2/<random-key>").
So, when I was following this tutorial, that is what I supplied - and I ran into the error you reported.
But when I reviewed the truffle.js script provided by the tutorial authors, I found this:
const rinkebyNodeUrl = isInfura
  ? "https://rinkeby.infura.io/v3/" + NODE_API_KEY // Infura: key is appended to the base URL
  : "https://eth-rinkeby.alchemyapi.io/v2/" + NODE_API_KEY; // Alchemy: same pattern
Thus, the script was producing:
rinkebyNodeUrl = https://eth-rinkeby.alchemyapi.io/v2/https://eth-rinkeby.alchemyapi.io/v2/<random-key>
...which is clearly wrong.
Thus, once I ensured my API_KEY environment variable was set only to random-key and not to https://eth-rinkeby.alchemyapi.io/v2/<random-key>, my contract deployed successfully.
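In shell terms, the difference is just this (NODE_API_KEY is the variable name read by the tutorial's truffle.js above; the key itself stays a placeholder):

export NODE_API_KEY=<random-key> # works: the script prepends the base URL
export NODE_API_KEY=https://eth-rinkeby.alchemyapi.io/v2/<random-key> # breaks: the URL gets doubled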
Also, make sure you have enough ETH in your wallet on the Rinkeby network. Faucets always seem to work for a little while then stop working, so do some Google searches to find one that is currently functional.
The solution is incredibly easy -
Instead of using just the relevant part of the Alchemy key:
40Oo3XScVabXXXX8sePUEp9tb90gXXXX
I used the whole URL:
https://eth-rinkeby.alchemyapi.io/v2/40Oo3XScVabXXXX8sePUEp9tb90gXXXX
I had the same experience, but when using Hardhat, not Truffle. My internet connection was OK. Try switching from Git Bash to the regular terminal (CMD): use a completely new terminal and avoid Git Bash and PowerShell.
Remove the function wrapper from the provider inside the network configuration:
ropsten_infura: {
  provider: new HDWalletProvider({ // plain instance - no () => function wrapper
    mnemonic: {
      phrase: mnemonic
    },
    providerOrUrl: `https://ropsten.infura.io/v3/${project_id}`,
    addressIndex
  }),
  network_id: 3
}
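With that block in place, deployment is invoked with the network name from the config, mirroring the command from the question:

truffle deploy --network ropsten_infura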
The Rinkeby network has been decommissioned. Use the Goerli or Sepolia network instead. Update your Truffle config and add a section for goerli under networks, e.g.:
goerli: {
  provider: () =>
    new HDWalletProvider(
      mnemonic,
      `https://goerli.infura.io/v3/${INFURAKEY}`
    ),
  network_id: 5, // Goerli's id
  gas: 4500000, // gas limit
  gasPrice: 10000000000
}
Then run the command:
truffle deploy --network goerli
I am working on the example from the SymmetricDS tutorial. I am using the configuration files corp-000.properties and store-001.properties found in the samples directory of the download zip. I have placed them in the engine directory and edited them so that corp-000 uses a PostgreSQL DB as master-000 and store-001 uses a MySQL DB as slave-001, each on a separate machine.
Here is the config from corp-000.properties:
engine.name=master-000
db.driver=org.postgresql.Driver
db.url=jdbc:postgresql://127.0.0.1/master?stringtype=unspecified
I've also enabled the firewall (8080/tcp and 5432/tcp) and changed the port from 31415 to 8080. However, the same error still came out, and the URL returns this result:
This site can’t be reached
<Master-node-IP> refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
What should I do to solve this problem?
Add to the corp configuration:
auto.registration=true
It can't hurt to also add:
auto.reload=true
The solution by @swm is:
You need to set the bind IP in SymmetricDS.
Below are some example configs.
What is happening here is that the main or master cannot see the sync/registration URLs and ports, not the database.
Make sure the following are set up correctly.
MAIN
registration.url=
sync.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
CHILD
registration.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
sync.url=http://<IP>:<PORT>/sync/<SDS_CHILD>
FULL EXAMPLE CONFIGS BELOW
MAIN
engine.name=<SDS_MAIN>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<IP>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=
sync.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
group.id=<GID>
external.id=000
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
#start.initial.load.extract.job=false
compression.level=-1
compression.strategy=0
CHILD
engine.name=<SDS_CHILD>
db.driver=net.sourceforge.jtds.jdbc.Driver
db.url=jdbc:jtds:sqlserver://<IP>:1433/<DB>;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880
db.user=***********
db.password=***********
registration.url=http://<IP>:<PORT>/sync/<SDS_MAIN>
sync.url=http://<IP>:<PORT>/sync/<SDS_CHILD>
group.id=<GID>
external.id=100
auto.registration=true
initial.load.create.first=true
sync.table.prefix=sym
start.initial.load.extract.job=false
compression.level=-1
compression.strategy=0
Tried Gerrit 2.15 and 2.16.6.
I'm trying to migrate an existing Gerrit instance to new hardware.
There is an AOSP mirror with old changes and a database.
I've moved everything to the new hardware and can see a list of changes and a list of projects, but I can't access any changes themselves. When I click to access some change, I receive 404 not found and a message "Server error: Not found: root-dir".
When I try to create a new project with the name "root-dir/project_path/project_name", I receive the same message.
Cgroups configurations are correct.
Reindexing doesn't help, and neither does reinitializing.
Using ssh I can create a project and pull any change I want.
The only difference between the configurations is that we now use nginx in front of the Gerrit instance.
Why is this happening?
The issue was actually in an nginx configuration.
server {
    ...
    location / {
        proxy_pass http://[ipv6_address]:8443/;
        ...
    }
}
The slash at the end caused the issue; it should be proxy_pass http://[ipv6_address]:8443;
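So the working location block is the same as above, minus the trailing slash:

location / {
    proxy_pass http://[ipv6_address]:8443;
    ...
}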
Issue 1999: creating a project with a / in the name will cause an error (404 not found).
Does Laravel 5.1 work without an internet connection?
I'd like to create a new Laravel application.
When I execute laravel new test (with an internet connection) it works well,
but when I execute a similar command in the same directory (laravel new anotherName) without an internet connection, it doesn't work and the error message below is shown:
[GuzzleHttp\Exception\RequestException]
Error creating resource. [url] http://cabinet.laravel.com/latest.zip [type] 2 [message] fopen(http://cabinet.laravel.com/latest.zip): failed to open stream: php_network_getaddresses: getaddrinfo failed: Name or service not known [file] /home/<Myname>/.composer/vendor/guzzlehttp/guzzle/src/Adapter/StreamAdapter.php [line] 367
Is there a solution? I can't always work online.
When you use the Laravel installer it fetches the latest version from the server. One solution would be to initialise a Laravel project, add it to git version control, and then, when offline, check out the project to a new folder. You'd have to manually choose a new app key (I think). You will also not be able to composer require or npm install any new packages while offline.
Once you have created it though it should run offline (unless your views are sourcing assets from, say, bootstrap or jQuery CDNs).
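A minimal sketch of that workflow (the app names and paths are examples; php artisan key:generate picks the new app key):

laravel new base-app # run once while online
cd base-app
git init && git add . && git commit -m "fresh Laravel app"
# later, offline:
git clone /path/to/base-app new-app
cd new-app
php artisan key:generate # choose a new app key for the copy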
Composer 2+:
COMPOSER_DISABLE_NETWORK=1 laravel new myapp
Troubleshooting:
Check your composer version: composer --version - you may have to update to the latest version with composer self-update;
Check you have a global cache: echo $COMPOSER_HOME - you may have to create a ~/.composer directory and add export COMPOSER_HOME="${HOME}/.composer" to your ~/.bashrc or ~/.zshrc (see the sketch after this list) - don't forget to close and reopen your terminal to apply the changes;
If you get the error https://repo.packagist.org could not be fully loaded (Network disabled, request canceled: https://repo.packagist.org/packages.json), package information was loaded from the local cache and may be out of date - the Laravel packages are not in the global cache. Run the command with internet enabled to download the files.
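A minimal sketch of that cache setup, using the default paths from the second item above:

mkdir -p ~/.composer
echo 'export COMPOSER_HOME="${HOME}/.composer"' >> ~/.bashrc # or ~/.zshrc
# close and reopen the terminal so the new variable is picked up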
We recently were able to get a small Red Hat server to experiment with shiny-server. Our IT department got shiny-server running and installed the Oracle client, but I can't get ROracle to work in shiny-server. They (IT) have decided that it is an application issue and are starting to give up...
Initially ROracle didn't work on the server at all but we got it working from my user account by setting the LD_LIBRARY_PATH in my .bashrc file. With that done I can log into the server, and query the database from R. I can even use runApp() to run my shiny app from R.
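For reference, the .bashrc change was a single export along these lines (the exact Oracle client path depends on your install; this one is illustrative):

export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH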
When I try to access that same app through shiny-server I get the following error:
Listening on port 40679
Loading required package: DBI
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/usr/lib64/R/library/ROracle/libs/ROracle.so':
libclntsh.so.11.1: cannot open shared object file: No such file or directory
Error : package or namespace load failed for 'ROracle'
Error : package or namespace load failed for 'ROracle'
which is the same error I was getting on my account before I set the LD_LIBRARY_PATH variable. The server runs as the user shiny, but apparently it won't run any startup scripts, so what fixed it for my user won't fix it for the shiny user. This is all very far outside my area of knowledge and, as I said, our IT department says they are out of ideas.
I don't have sudo access to the server so the things I can try are limited. I tried setting the LD_LIBRARY_PATH from my server.R script before loading ROracle with Sys.setenv() and by using system() but those didn't work. Our DBA that has been trying to help me tried setting the LD_LIBRARY_PATH in /etc/init/shiny-server.conf but that doesn't seem to work either.
I am really hoping that someone here has some ideas.
Thanks
After a couple of frustrating days I found the solution. You need to set the LD_LIBRARY_PATH variable in the upstart script located at /etc/init/shiny-server.conf, but per the upstart documentation, you need to define it with the env keyword. So adding:
env LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH
to the beginning of the shiny-server.conf script seems to have fixed the problem.
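So the top of the file would look something like this (only the env line is new; the rest of the stock shiny-server.conf stays unchanged):

# /etc/init/shiny-server.conf (upstart)
env LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib:$LD_LIBRARY_PATH
# ... remainder of the original upstart configuration ...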
Here's my blog post (about a year old) with a detailed description of how to get RStudio Server going with Oracle. LD_LIBRARY_PATH, the OCI lib and all the other pieces are covered. Maybe this helps someone else: http://learnfrominfo.tumblr.com/post/38382388429/connect-r-studio-server-to-an-oracle-database-with
I had the same problem with PHP and Apache.
Please refer to the Setting the Oracle Environment section in the PHP documentation.
Also, see the comment above: which syntax did you use in /etc/init/shiny-server.conf?
server.R:
library(shiny)   # needed for shinyServer(); the original had a bare `library` call
library(RMySQL)
library(caTools)
library(rpart)
library(RJDBC)

shinyServer(function(input, output) {
  # Connect to Oracle through JDBC (driver jar, URL, and credentials left blank as in the original)
  dvr <- JDBC("oracle.jdbc.OracleDriver", classPath = "D:/ojdbc6.jar")
  url <- ""
  user <- ""
  password <- ""
  jd <- dbConnect(dvr, url, user, password)

  # Run the query only when the "predict" button is clicked
  a1 <- eventReactive(input$predict, {
    a <- input$ref
    table2 <- data.frame(dbGetQuery(jd, paste0(
      "select colnames from Tablename where REFNO=", a, " and ROWNUM<15"
    )))
    print(table2)
    print(bs <- table2)
  })

  output$dis <- renderTable({
    a1()
  })
})