How to make a minting function that takes 0.1 ETH in Solidity?

Can someone please explain how I can make a function that mints a token for 0.1 ETH in Solidity, and how to verify it in Hardhat? I have done this so far:
HardHat:
[owner] = await ethers.getSigners();
const Nft = await ethers.getContractFactory("contract");
const nft = await Nft.deploy(owner.address);
prov = ethers.getDefaultProvider();
let balance = await prov.getBalance(owner.address);
console.log(balance); <-- evaluates to 10000000000000
await hoodie.mint({ value: ethers.utils.parseEther("0.1") });
console.log(balance); <-- still evaluates to 10000000000000
Solidity:
function mint() payable public returns (uint256) {
require(msg.value == 0.1 ether || msg.value == 100000000000000000 wei, "Transaction amount has to be 0.1 eth");
_safeMint(msg.sender, token_id);
return token_id;
}
Thanks in advance!

In that case, you could add approve and transferFrom calls, but note that these are ERC-20 functions: they only apply if you charge the price in a token such as WETH rather than in native ETH (which has no contract address).
approve lets the buyer authorise your contract to pull the funds; it has to be called by the buyer on the token contract before mint is called.
transferFrom then performs the actual pull inside mint.
the contract (paymentToken would hold the token's address, e.g. WETH):
function mint() public returns (uint256) {
    // Pull the 0.1-token price (e.g. WETH) from the buyer. This reverts unless
    // the buyer has already approved this contract for at least that amount.
    IERC20(paymentToken).transferFrom(msg.sender, address(this), 0.1 ether);
    _safeMint(msg.sender, token_id);
    return token_id;
}
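On the client side the buyer has to grant the allowance before minting. A rough ethers.js sketch, where weth and nft are assumed contract instances and buyer is an assumed signer:
// Hypothetical buyer-side flow for an ERC-20 (e.g. WETH) priced mint.
const price = ethers.utils.parseEther("0.1");

// 1. The buyer approves the NFT contract to pull the price from their wallet.
await weth.connect(buyer).approve(nft.address, price);

// 2. Only after the approval can the transferFrom inside mint() succeed.
await nft.connect(buyer).mint();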

You need to mark the function payable to receive a native token such as ETH or BNB in a smart contract.
Then you can call the internal _safeMint() function with the token ID.
You also need to increment the token ID after minting.
function mint() public payable returns (uint256) {
    require(msg.value == 0.1 ether, "Transaction amount has to be 0.1 eth");
    // msg.value already belongs to the contract; no extra transfer is needed.
    uint256 newId = token_id;
    _safeMint(msg.sender, newId);
    token_id += 1; // increment the ID for the next mint
    return newId;
}
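To verify the payment in Hardhat, re-read the balance after the transaction (the balance variable holds a snapshot, not a live value) and use Hardhat's own provider rather than ethers.getDefaultProvider(), which connects to a public network instead of the local Hardhat one. A minimal sketch in ethers v5 syntax, with the contract name assumed:
// Minimal Hardhat test sketch (contract name "MyNft" is an assumption).
const { ethers } = require("hardhat");

it("mints for exactly 0.1 ETH", async () => {
  const [owner, buyer] = await ethers.getSigners();
  const Nft = await ethers.getContractFactory("MyNft");
  const nft = await Nft.deploy(owner.address);
  await nft.deployed();

  const before = await ethers.provider.getBalance(buyer.address);
  const tx = await nft.connect(buyer).mint({ value: ethers.utils.parseEther("0.1") });
  await tx.wait();
  const after = await ethers.provider.getBalance(buyer.address);

  // The buyer paid 0.1 ETH plus gas, so the drop is at least 0.1 ETH.
  console.log(before.sub(after).toString());
});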

Related

Stream of millions of objects takes too much memory

I'm generating a load of coordinates (each made of 3 numbers) within a geographical area. However, using Streams (which should be much more memory-efficient than Lists) fills up the app's memory very quickly, as can be seen in this screenshot from Observatory.
I need a structure where events can go in, be read out one by one, and, when this happens, be removed from the structure. As far as I understand, that is what a Stream is: when you add a value, the old one is removed.
Unfortunately, this doesn't appear to be happening. Instead, the stream just grows larger and larger - or at least something reading it does - even though I only run the .length method on the returned stream, and that's it.
Here's the function that starts the Isolate that returns the stream of coordinate tiles. I'll omit the actual generator, as it's not important: it just sends a Coord to the SendPort.
static Stream<Coords<num>> _generateTilesComputer(
DownloadableRegion region,
) async* {
List<List<double>> serialiseOutline(l) => (l as List)
.cast<LatLng>()
.map((e) => [e.latitude, e.latitude])
.toList();
final port = ReceivePort();
final tilesCalc = await Isolate.spawn(
region.type == RegionType.rectangle
? rectangleTiles
: region.type == RegionType.circle
? circleTiles
: lineTiles,
{
'port': port.sendPort,
'rectOutline': region.type != RegionType.rectangle
? null
: serialiseOutline(region.points),
'circleOutline': region.type != RegionType.circle
? null
: serialiseOutline(region.points),
'lineOutline': region.type != RegionType.line
? null
: (region.points as List<List<LatLng>>)
.chunked(4)
.map((e) => e.map(serialiseOutline)),
'minZoom': region.minZoom,
'maxZoom': region.maxZoom,
'crs': region.crs,
'tileSize': region.options.tileSize,
},
);
await for (final Coords<num>? coord in port
.skip(region.start)
.take(((region.end ?? double.maxFinite) - region.start).toInt())
.cast()) {
if (coord == null) {
port.close();
tilesCalc.kill();
return;
}
yield coord;
}
}
}
How can I prevent this memory leak? Happy to add more info if needed, but the full source code can be found at https://github.com/JaffaKetchup/flutter_map_tile_caching.
Does this help a bit? It replaces your bottom bit. A batching operator reads values in groups of 500 (the original used a .batch extension; RxDart's bufferCount does the same and the size can be changed if needed). The count variable keeps track of the number of values processed, and when it reaches the limit, the port is closed and the isolate is killed.
int count = 0;
final limit = ((region.end ?? double.maxFinite) - region.start).toInt();
// bufferCount comes from package:rxdart; any equivalent batching extension works.
await for (final List<Coords<num>> batch in port
    .skip(region.start)
    .cast<Coords<num>>()
    .bufferCount(500)) {
  if (count >= limit) {
    port.close();
    tilesCalc.kill();
    return;
  }
  count += batch.length;
  yield* Stream.fromIterable(batch);
}
}
}
To keep values from piling up, you can put a buffer between the isolate and the consumer using a StreamController and propagate back-pressure: while the consumer is paused, pause the subscription on the ReceivePort so nothing new gets buffered. This ensures that the memory usage stays under control.
Here's an example implementation:
static Stream<Coords<num>> _generateTilesComputer(
DownloadableRegion region,
) async* {
List<List<double>> serialiseOutline(l) => (l as List)
.cast<LatLng>()
.map((e) => [e.latitude, e.latitude])
.toList();
final port = ReceivePort();
final controller = StreamController<Coords<num>>();
final tilesCalc = await Isolate.spawn(
region.type == RegionType.rectangle
? rectangleTiles
: region.type == RegionType.circle
? circleTiles
: lineTiles,
{
'port': port.sendPort,
'rectOutline': region.type != RegionType.rectangle
? null
: serialiseOutline(region.points),
'circleOutline': region.type != RegionType.circle
? null
: serialiseOutline(region.points),
'lineOutline': region.type != RegionType.line
? null
: (region.points as List<List<LatLng>>)
.chunked(4)
.map((e) => e.map(serialiseOutline)),
'minZoom': region.minZoom,
'maxZoom': region.maxZoom,
'crs': region.crs,
'tileSize': region.options.tileSize,
},
);
// Bridge the port into the controller and propagate back-pressure so the
// controller's internal buffer cannot grow without bound.
late final StreamSubscription<dynamic> sub;
sub = port
    .skip(region.start)
    .take(((region.end ?? double.maxFinite) - region.start).toInt())
    .listen((dynamic coord) {
  if (coord == null) {
    controller.close();
    port.close();
    tilesCalc.kill();
    return;
  }
  controller.add(coord as Coords<num>);
});
controller.onPause = sub.pause;
controller.onResume = sub.resume;
controller.onCancel = () {
  sub.cancel();
  port.close();
  tilesCalc.kill();
};
yield* controller.stream;
}

USDC smart contract on polygon revoked all my approve request

Simple. I'm trying to interact with USDC in my Dapp.
async function approveUSDC() {
try {
const USDCContractAdress = "0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174";
const USDCContractAbi = [
"function approve(address spender, uint256 amount) public",
];
const address = "0x644F303448a1d14Fa24Bf18200Dd41790f6B9920"; //BGGExchange Contract
const amount = 50000000000000000000n ;
const USDCContract = new ethers.Contract(USDCContractAdress, USDCContractAbi, provider);
const result = await USDCContract.connect(signer).approve(address, amount);
console.log(result);
} catch (error) {
console.log(error);
}
}
Every time I call the approve function on the USDC smart contract on Polygon, the transaction comes back as revoked. I'm baffled, please help.
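One thing worth checking (an assumption on my part, since no revert reason is shown): USDC uses 6 decimals, not 18, so 50000000000000000000n is an astronomically large USDC amount. The approval itself may still go through, but any later transferFrom for that amount will fail; sizing the value from the token's decimals avoids the mismatch:
// Hypothetical fix: derive the amount from USDC's 6 decimals instead of 18.
const usdcDecimals = 6;
const amount = ethers.utils.parseUnits("50", usdcDecimals); // 50 USDC = 50_000_000
const result = await USDCContract.connect(signer).approve(address, amount);
console.log(result);
It is also worth confirming that the signer comes from a provider that is actually connected to Polygon.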

Implementing sellToken in Solidity

I am writing a token smart contract in Remix, but sellToken never works and I get the error below:
[error screenshot]
Here is my code:
function sellToken(uint256 amount) public returns (bool){
require(amount > 0, "You should sell some tokens");
require(balance[msg.sender] >= amount, "You should have enough amount of tokens to sell");
balance[msg.sender] -= amount;
uint256 refund = amount * tokenPrice;
payable(msg.sender).transfer(refund);
emit Sell(msg.sender, amount);
return true;
}
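The revert reason in the screenshot is missing, but two common causes for transfer failing here are that the contract simply does not hold enough ETH to pay the refund, and, if the token uses 18 decimals, amount * tokenPrice being far larger than intended. A quick Hardhat-style check, where the contract instance, amount and the public tokenPrice getter are assumptions:
// Hypothetical sanity check before calling sellToken: can the contract
// actually pay the refund it will try to send?
const amount = ethers.utils.parseUnits("10", 18);  // tokens being sold (decimals assumed)
const tokenPrice = await token.tokenPrice();       // assumes a public getter
const refund = amount.mul(tokenPrice);             // mirrors the contract's math
const contractBalance = await ethers.provider.getBalance(token.address);

console.log("refund needed:", refund.toString());
console.log("contract holds:", contractBalance.toString());
// If contractBalance < refund, sellToken will revert on transfer().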

How to do multiple updates for the same asset in a transaction

Here I want to issue multiple update commands for the same asset in a single transaction, based on conditions.
This is my sample CTO File:
asset SampleAsset identified by id{
o String id
o Integer value
o Integer value2
o Integer value3
}
transaction SampleTransaction {
o Integer value
}
This is my sample JS file:
async function sampleTransaction(tx) {
var value = tx.value;
await updateValue(value);
if(value < MAX){ //MAX=10000
const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
var data1 = await assetRegistry.get("1");
data1.value2 = max;
await assetRegistry1.update(data1); //updateNo2
}
else{
const assetRegistry1 = await getAssetRegistry('org.example.basic.SampleAsset');
var data1 = await assetRegistry.get("1");
data1.value3 = value;
await assetRegistry1.update(data1); //UpdateNo2
}
}
async function updateValue(value){
const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
var data = await assetRegistry.get("1");
data.value = value;
await assetRegistry.update(data); //UpdateNo1
}
With the above code, only the latest update (UpdateNo2) makes changes to the asset. What about the first update?
In Hyperledger Fabric, during proposal simulation any writes made to keys cannot be read back. Hyperledger Composer is subject to that same limitation, both when it is used with a real Fabric implementation and when it is used in simulation mode (for example when using the web connection in composer-playground).
This is the problem you are seeing in your TP function. Every time you perform
let data = await assetRegistry.get("1");
in the same transaction, you get the original asset; you don't get a version of the asset that was updated earlier in the transaction. So what is finally put into the world state when the transaction is committed is only the last change you made, which is why only UpdateNo2 is seen.
Try something like this (note: I've not tested it):
async function sampleTransaction(tx) {
    const assetRegistry = await getAssetRegistry('org.example.basic.SampleAsset');
    const data = await assetRegistry.get("1");
    const value = tx.value;
    updateValue(data, value);
    if (value < MAX) { // MAX = 10000
        data.value2 = MAX;
    } else {
        data.value3 = value;
    }
    await assetRegistry.update(data);
}
function updateValue(data, value){
data.value = value;
}
(Note I have left the function structure in just to show the equivalent but updateValue can easily be removed)

Safari dropping Web Socket connection due to idle/inactivity when page not in focus

We are facing this issue with our app, only in the Safari browser, and especially on iOS devices.
Current behavior
Not sure if this is a known issue (I tried searching but found nothing). Safari for Mac appears to be silently dropping WebSocket connections due to inactivity/idle when the page/tab is not in focus.
The biggest issue is that on mobile iOS the behaviour is very persistent.
Steps to reproduce
Open Safari > the website loads > put Safari in the background by opening any other application, or lock the device.
On wake-up, Safari has closed the connection and the data is no longer displayed; we get infinite loading of the modules where we request the data.
Expected behavior
WebSockets should be kept alive via the heartbeat functionality. We are not seeing this behavior in other browsers, so it is unlikely to be the code.
Is this possibly some sort of power-saving feature that is overriding/ignoring the heartbeats?
import 'whatwg-fetch';
import Config from "../config/main";
import WS from "./websocket";
import Helpers from "./helperFunctions";
var Zergling = (function (WS, Config) {
'use strict';
var Zergling = {};
var subscriptions = {}, useWebSocket = false, sessionRequestIsInProgress = false, loginInProgress = false,
uiLogggedIn = false, // uiLogggedIn is the login state displayed in UI (sometimes it differs from real one, see delayedLogoutIfNotRestored func)
authData, session, connectionAvailable, isLoggedIn, longPollUrl;
Zergling.loginStates = {
LOGGED_OUT: 0,
LOGGED_IN: 1,
IN_PROGRESS: 2
};
Zergling.codes = { // Swarm response codes
OK: 0,
SESSION_LOST: 5,
NEED_TO_LOGIN: 12
};
function getLanguageCode (lng) {
if (Config.swarm.languageMap && Config.swarm.languageMap[lng]) {
return Config.swarm.languageMap[lng];
}
return lng;
}
//helper func for fetch
function checkStatus (response) {
if (response.status >= 200 && response.status < 300) {
return response;
} else {
var error = new Error(response.statusText);
error.response = response;
throw error;
}
}
//helper func for fetch
function parseJSON (response) {
return response.json();
}
/**
* @description returns a randomly selected (taking weight into consideration) long poll url
* @returns {String} long polling URL
*/
function getLongPollUrl () {
if (!longPollUrl) {
longPollUrl = Helpers.getWeightedRandom(Config.swarm.url).url;
console.debug('long Polling URL selected:', longPollUrl);
}
return longPollUrl;
}
/**
* @description
* Applies the diff on object
* properties having null values in diff are removed from object, others' values are replaced.
*
* Also checks the 'price' field for changes and adds new field 'price_change' as sibling
* which indicates the change direction (1 - up, -1 down, null - unchanged)
*
* @param {Object} current current object
* @param {Object} diff received diff
*/
function destructivelyUpdateObject (current, diff) {
if (current === undefined || !(current instanceof Object)) {
throw new Error('wrong call');
}
for (var key in diff) {
if (!diff.hasOwnProperty(key)) continue;
var val = diff[key];
if (val === null) {
delete current[key];
} else if (typeof val !== 'object') {
current[key] = val;
} else { // diff[key] is Object
if (typeof current[key] !== 'object' || current[key] === null) {
current[key] = val;
} else {
var hasPrice = (current[key].price !== undefined);
var oldPrice;
if (hasPrice) {
oldPrice = current[key].price;
}
destructivelyUpdateObject(current[key], val);
if (hasPrice) {
current[key].price_change = (val.price === oldPrice) ? null : (oldPrice < val.price) * 2 - 1;
}
}
}
}
}
This is an iOS feature that protects users against code draining their battery...
Push notifications for background applications should be performed using iOS's push notification system rather than by keeping an open connection alive.
There are hacks around this limitation, but the truth is that the limitation is good for the users and shouldn't be circumvented.
Read the technical note in the link for more details.
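If the practical problem is the infinite loading after wake-up, one common pattern (not from the answer above, just a sketch) is to accept that Safari will suspend the socket and re-establish it when the page becomes visible again:
// Sketch: reconnect when the page becomes visible instead of trying to keep
// the socket alive while Safari has the tab suspended. The URL is an assumption.
let socket;

function connect() {
  socket = new WebSocket("wss://example.com/swarm");
  socket.onclose = () => {
    // If the tab is visible, the drop was not due to suspension; retry shortly.
    if (document.visibilityState === "visible") {
      setTimeout(connect, 1000);
    }
  };
}

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "visible" &&
      (!socket || socket.readyState === WebSocket.CLOSED)) {
    connect();
  }
});

connect();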
