I have a network_id error when migrating to the Rinkeby test net

I am following this tutorial, but when it comes time to migrate my smart contract to the Rinkeby test net (truffle migrate --network rinkeby) I am getting the following error:
You must specify a network_id in your 'rinkeby' configuration in order to use this network.
What is confusing is that I have definitely specified a network id in my truffle-config.js file.
This is where I create the 'rinkeby' network:
require("dotenv").config();
const HDWalletProvider = require('#truffle/hdwallet-provider');
module.exports = {
networks: {
development: {
host: "127.0.0.1", // Localhost (default: none)
port: 7545, // Standard Ethereum port (default: none)
network_id: "*" // Any network (default: none)
},
rinkeby: {
provider: () => new HDWalletProvider(process.env.MNEMONIC,
`https://rinkeby.infura.io/v3/${process.env.INFURA_API_KEY}`),
network_id: 4, // Ropsten's id
gas: 5500000, // Ropsten has a lower block limit than mainnet
confirmations: 2, // # of confs to wait between deployments. (default: 0)
timeoutBlock: 200, // # of block before a deployment times out (minimum/default: 50)
skipDryRun: true // Skip dry run before migrations? (default: false for public nets)
},
},
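For completeness, my .env file just defines the two values that config reads (the values shown here are placeholders, obviously):
MNEMONIC=twelve word seed phrase goes here
INFURA_API_KEY=your-infura-project-id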
By the way, I am using an Infura endpoint and HD Wallet Provider. Any help would be greatly appreciated, because I haven't been able to find anywhere where this particular situation is addressed.

So, the issue was that I needed to run the command 'truffle migrate --network rinkeby' from my project's root directory (in my case, the folder eth-hello-world). Then, after deleting the build/contracts folder I already had, the contracts compiled successfully and were deployed. It turns out it was a pretty simple solution! :)

I had this same issue when I was migrating with the command 'truffle migrate --network rinkeby'. Replacing it with 'truffle deploy --network rinkeby' solved the problem for me.

First off, I would recommend that you use @truffle/hdwallet-provider and not the library you used above; that is a very old and deprecated version of that library (that's what it was previously named).
What happens when you use "*" for the network_id property? Also, what version of Truffle are you using? You can find this out by running truffle version.
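To be concrete about the first suggestion, something like this, just as a test (a sketch of the change I mean, with everything else left exactly as in your config):
rinkeby: {
  provider: () => new HDWalletProvider(process.env.MNEMONIC,
    `https://rinkeby.infura.io/v3/${process.env.INFURA_API_KEY}`),
  network_id: "*" // temporarily match any network id, just to see whether the error changes
},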

I tried your method of deleting the build/contracts folder, but it didn't work. So I backed up my project folder and ran truffle init, then restored the Solidity contract files from the backup and ran truffle migrate --network rinkeby again, and it worked.

I had the same problem; it seems like a bug (maybe), but I just removed the network_id line once, ran the migration, and then added it back, and the problem was gone!
Hope your problem gets solved by this simple solution.

Related

Drop-Database Failed creating connection: Couldn't set trusted_connection

I have a new local ASP.NET Core 6 application that uses Docker and is connected to a local Docker Postgres container (individually run containers, not Docker Compose). I added the Npgsql package and managed to successfully connect to it and create tables (checked with pgAdmin). The issue is that whenever I try to run 'Drop-Database' from the Package Manager Console, I get the below error:
Performing the operation "Drop-Database" on target "database 'Failed creating connection: Couldn't set trusted_connection (Parameter 'trusted_connection')' on server 'Failed creating connection: Couldn't set trusted_connection (Parameter 'trusted_connection')'".
To me this doesn't make sense because I am not using any trusted connection parameter in my connection string:
"ConnectionStrings": {
"DefaultConnection": "Host=host.docker.internal;Database=local;Username=postgres;Password=password;Include Error Detail=true"
}
I have two other solutions that use Docker and successfully run PMC commands against a local Docker Postgres container, so I am not sure what is different this time. I have not found any similar resources for troubleshooting this, and specifying 'Trusted_Connection=true' or 'Integrated Security=True' does not work.
My Program.cs file is unchanged from the starter project except for the below for Npgsql:
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");
builder.Services.AddDbContext<ApplicationDbContext>(options => options
    .UseNpgsql(connectionString)
    .EnableDetailedErrors());
Please let me know if there is anything that could be causing this issue.
So the issue was that, on a different project, I had previously set the environment manually in the Package Manager Console using:
$env:ASPNETCORE_ENVIRONMENT='Local'
So when trying to drop the database, it was searching for a Local appsettings connection string which did not exist in my new project. Setting the environment back to Development solved the issue for me.
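In other words, the fix was simply resetting it in the Package Manager Console before running Drop-Database again:
$env:ASPNETCORE_ENVIRONMENT='Development'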

How to force .env variables update in a nuxt project?

hi!
I wonder if anyone knows if there is any way to force the update of the .env file.
At first, every time I modified my .env variables the changes took effect right away, but then I started using the following build configuration:
build: {
  hardSource: true,
  cache: true,
  parallel: true
}
And ever since I started using those experimental features, the .env variables do not seem to get updated after I update one value in my .env file.
In my project, I develop the API on one machine and the front end on another machine (just for convenience), so sometimes the API machine has the IP address 192.168.100.100 and sometimes 192.168.100.101, etc.
My project uses these environment variables (in the .env file):
API_URL=http://192.168.100.100:4100
BASE_URL=http://localhost:4200
So, when the local IP address of the first machine changes, I have to update the .env file.
The problem now is that even after killing the app, deleting the .nuxt folder, and running npm run dev, I still see the API requests going to the previous IP address.
Solutions?
I have thought of disabling the cache and hardSource options, but they are really helpful to me and the IP changes are not that frequent; still, once in a while I have to update some other variable, so that's not a solution for me.
I have also thought of disabling DHCP on the other machine and assigning it a fixed local IP address. That is not ideal for me, although I think I will do that for now, hoping that in the future I learn a better way of updating the environment variables (because sooner or later I will need to update another variable that has nothing to do with the IP address).
I'd like to know if there is a way to force the .env variables to be updated in a nuxt project with hardSource, cache and parallel set to true.
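For reference, this is roughly how those values reach the app in my setup (a minimal sketch, assuming Nuxt 2; whether .env is loaded into process.env at this point depends on your dotenv setup, and the property names are just illustrative). I suspect this is also why a cached build keeps the old IP: the values get baked into the bundle at build time.
// nuxt.config.js (sketch)
export default {
  env: {
    // substituted into the client bundle at build time, so a cached
    // webpack build keeps whatever value was current when it was built
    apiUrl: process.env.API_URL,
    baseUrl: process.env.BASE_URL
  },
  build: {
    hardSource: true,
    cache: true,
    parallel: true
  }
}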

Rinkeby contracts deployment Error: Timeout

I was trying to follow this tutorial:
https://docs.opensea.io/docs/1-structuring-your-smart-contract
And even found this extremely helpful YouTube video to guide me:
https://www.youtube.com/watch?v=lbXcvRx0o3Y&ab_channel=DanViau
But I've encountered a problem after installing and setting up everything I needed. The problem occurred when I tried to deploy the contracts using this bash command:
truffle deploy --network rinkeby
The error message I got is:
Error: There was a timeout while attempting to connect to the network.
Check to see that your provider is valid.
If you have a slow internet connection, try configuring a longer timeout in your Truffle config. Use the networks[networkName].networkCheckTimeout property to do this.
at Timeout._onTimeout (C:\Users\alonb\.nvm\versions\node\v12.22.5\bin\node_modules\truffle\build\webpack:\packages\provider\index.js:56:1)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
It's not caused by a slow internet connection; I know that because I have tried executing this command on 3 different Wi-Fi connections, one at 200 Mb/s.
I have tried to change the truffle-config.js file and add a longer timeout threshold (as suggested here), but the only thing that changed was that the error message took much longer to appear.
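For reference, this is roughly the change I tried (a minimal sketch; the variable names and exact values here are just illustrative):
rinkeby: {
  provider: () => new HDWalletProvider(MNEMONIC, `https://rinkeby.infura.io/v3/${INFURA_KEY}`),
  network_id: 4,
  networkCheckTimeout: 1000000, // much longer than the default, as the error message suggests
  timeoutBlocks: 200
},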
Technical info - I'm using Git Bash, npm version 6.14.14, nvm version 0.38.0, node version 12.22.5.
Any suggestions? I'm lost.
Alon
I also ran into this error when following the same tutorial.
I use Alchemy (not Infura), and the issue was my API_KEY.
In other tutorials I've followed, the scripts required the full Alchemy API key (e.g. "https://eth-rinkeby.alchemyapi.io/v2/<random-key>").
So, when I was following this tutorial, that is what I supplied, and I ran into the error you reported.
But when I reviewed the truffle.js script provided by the tutorial authors, I found this:
const rinkebyNodeUrl = isInfura
? "https://rinkeby.infura.io/v3/" + NODE_API_KEY
: "https://eth-rinkeby.alchemyapi.io/v2/" + NODE_API_KEY;
Thus, the script was producing:
rinkebyNodeUrl = https://eth-rinkeby.alchemyapi.io/v2/https://eth-rinkeby.alchemyapi.io/v2/<random-key>
...which is clearly wrong.
Thus, once I set my API_KEY environment variable to only random-key and not https://eth-rinkeby.alchemyapi.io/v2/<random-key>, my contract deployed successfully.
Also, make sure you have enough ETH in your wallet on the Rinkeby network. Faucets always seem to work for a little while then stop working, so do some Google searches to find one that is currently functional.
The solution is incredibly easy -
Instead of using just the relevant part of the Alchemy key:
40Oo3XScVabXXXX8sePUEp9tb90gXXXX
I used the whole URL:
https://eth-rinkeby.alchemyapi.io/v2/40Oo3XScVabXXXX8sePUEp9tb90gXXXX
I had the same experience, but when using Hardhat, not Truffle. My internet connection was OK. Try switching from Git Bash to the terminal (CMD); use a completely new terminal and avoid Git Bash and PowerShell.
Remove the function wrapper from provider inside the network configuration.
ropsten_infura: {
  provider: new HDWalletProvider({
    mnemonic: {
      phrase: mnemonic
    },
    providerOrUrl: `https://ropsten.infura.io/v3/${project_id}`,
    addressIndex
  }),
  network_id: 3
}
The Rinkeby network has been decommissioned. Use the Goerli or Sepolia network instead. Update your Truffle config and add a section for goerli under networks, e.g.:
goerli: {
  provider: () =>
    new HDWalletProvider(
      mnemonic,
      `https://goerli.infura.io/v3/${INFURAKEY}`
    ),
  network_id: 5,          // Goerli's id
  gas: 4500000,
  gasPrice: 10000000000,
}
Then run the command:
truffle deploy --network goerli

Docker failing to see updated fixtures CSV in rspec test directory

This one is quite strange.
I am running a very typical Docker container that holds a Rails API. Inside this API, I have an endpoint which takes an upload of a CSV and does some things and stuff.
Here is the exact flow:
vim spec/fixtures/bid_update.csv
# fill it with some data
# now we call the spec that uses this fixture
docker-compose run --rm web bundle exec rspec spec/requests/bids_spec.rb
# and now the csv is loaded and I can see it as plaintext
However, after creating this, I decided to change the content of the CSV, adding a column and a corresponding value to each row.
Now, however, when we run our spec again after saving, it has the old version of the CSV: the one originally used at the breakpoint in the spec.
cat'ing the CSV shows that it clearly has the new content.
Restarting the VM does nothing. The only solution I've found is to docker-machine rm dev and build a new machine (my main one for this is called dev).
I am entirely perplexed as to what could cause this or a simple means to fix it (building with all those images takes a while).
Ideas? Inform me I'm an idiot and I just had to press 0 for an operator and they would have fixed it?
Any help appreciated :)
I think it could be an issue with how VirtualBox shares folders with your environment. More information here: https://github.com/mitchellh/vagrant/issues/351#issuecomment-1339640

Receiving this odd error with Vagrant, wondering if someone could lend a hand

I am trying to set up Vagrant. I am following the guide on the website and currently have trouble with the Provisioning part of the guide (http://vagrantup.com/docs/getting-started/provisioning.html). I have followed it exactly as it is on the site, but I am receiving this error. I am on Mac OS X, if that's of any importance.
evan@superduper ~/vagrant_guide $ vagrant up
There was a problem with the configuration of Vagrant. The error message(s)
are printed below:
chef:
* Run list must not be empty.
Here is my Vagrantfile as well, if that also helps:
Vagrant::Config.run do |config|
  config.vm.box = "lucid32"

  # Enable the chef solo provisioner
  config.vm.provisioner = :chef_solo

  # Grab the cookbooks from the Vagrant files
  config.chef.recipe_url = "http://files.vagrantup.com/getting_started/cookbooks.tar.gz"
end
Does anyone know what this is from and how I can fix it?
Thanks
J
You need to add this line to your Vagrantfile:
config.chef.add_recipe("vagrant_main")
In Vagrant 0.6.0 and above you must add at least one recipe to your Vagrantfile, because the default recipe list is empty. Old Vagrantfiles have to be updated to work with 0.6.0+.
Here is the change.
I recommend reading the changelog before updating Vagrant. The project is in beta, so the API is changing rapidly.
The author promises that the API will be stable at version 1.0 :-)
