Metaplex Candy Machine mainnet NFT deploy issue - upload

I have made Solana NFTs using the Metaplex Candy Machine.
I have uploaded 1000 NFTs, but the candy machine UI shows an available count of 985.
I have lost 15 NFTs.
Also, when I click the Mint button, the count drops by 3 at once, and I can't see the NFT in my Phantom wallet.
It worked perfectly on devnet, but after deploying to mainnet the errors above occurred.
Please help me with this issue. How can I fix it?
Can I retrieve the lost NFTs?

I have not seen this exact error, but it could be because you did not run the verify_upload command after you uploaded.
Source:
https://docs.metaplex.com/candy-machine-v2/verify-upload
This is always recommended, as network issues can cause some transactions to fail in large uploads and the CLI won't retry them on its own. The only way to confirm they are all uploaded is:
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts verify_upload -e devnet -k ~/.config/solana/devnet.json -c example
If this fails with:
Error: not all NFTs checked out. check out logs above for details
then just rerun the upload command and verify again until it outputs:
Ready to deploy!

You can run the upload multiple times until everything checks out.
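For the mainnet case, a minimal sketch of that rerun-and-verify loop, assuming your mainnet keypair is at ~/.config/solana/mainnet.json, your cache name is example, and your assets are in ./assets (adjust these to your setup):
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts upload -e mainnet-beta -k ~/.config/solana/mainnet.json -cp config.json -c example ./assets
ts-node ~/metaplex/js/packages/cli/src/candy-machine-v2-cli.ts verify_upload -e mainnet-beta -k ~/.config/solana/mainnet.json -c example
Repeat both commands until verify_upload prints Ready to deploy!; the upload step works from the cache file, so it should only retry the items that are still missing rather than re-uploading everything.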

Related

Gitea Docker Registry - Unauthorized on first login?

I have a Jenkins pipeline where a Docker (Podman) image is built and pushed to a private Gitea Docker registry. This basically works. But I have the problem that the first build after several hours, that is, on the next day, crashes because pushing to the Gitea registry leads to:
Error: writing blob: uploading layer to https://192.168.0.5:4000/v2/myorg/myproject/blobs/uploads/ptuh7yizsrqvx5wlg9uctlzdv?digest=sha256%3A7ca0dabc572c112e5141bac7e5f29a0c1b1f727ce939ac1e7da342d3adf324a: received unexpected HTTP status: 500 Internal Server Error
When I click on the link, it shows me:
{"errors": [{"code": "UNAUTHORIZED", "message": ""}]}
Since I trigger the login from a remote host via the Jenkins pipeline, I do that with a script, but I'm pretty sure that this does not matter. The content of the script is:
#!/bin/bash
podman login -u builderuser -p builderpassword 192.168.0.5:4000
I see Login Succeeded! in the logs, but in the next step I get the error above when it tries to copy the blob to the registry.
I also tried adding a logout before the login in the script via
podman logout 192.168.0.5:4000
But this does not help either.
When I trigger the same build again, the process works without problems. Maybe a caching problem somewhere? The problem appears on the first build of the next day, so I guess there is some session timeout somewhere after several hours. Any ideas?
[UPDATE]
I think this is a bug in Gitea. In the log I see this:
Nov 11 08:50:40 server gitea[34985]: 2022/11/11 08:50:40 ...es/container/blob.go:66:func1() [E] [636dfed0-7] Error inserting package: pq: duplicate key value violates unique constraint »UQE_package_version_s«
And in Gitea's code, in auths.go, I see a comment leading me to assume that they are aware of this problem:
// FIXME: if the name conflicts, it will result in 500: Error 1062: Duplicate entry 'aa' for key 'login_source.UQE_login_source_name'
What I still don't understand is why this happens only once, at the beginning of the day, and not always.
Did you check the logs on the registry?
If you get something like
<path>/registry/docker: permission denied
it means the error happens because your user does not have permission to write.
If you're OK on the permission side, then the error shown is quite random and has the same root cause as gitlab-org/gitlab#215715.
The error has been fixed for GitLab. You'd need to check whether Jenkins has any open issues similar to this.
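If you want to rule out the permission theory, a quick check along these lines can help. The path, user, and service name below are assumptions; adjust them to where your Gitea instance stores packages and to the account it runs as:
# check that the Gitea user can write to the package storage (hypothetical path/user)
sudo -u git ls -ld /var/lib/gitea/data/packages
sudo -u git touch /var/lib/gitea/data/packages/.write-test && sudo -u git rm /var/lib/gitea/data/packages/.write-test
# watch the Gitea log while a failing push runs (hypothetical unit name)
sudo journalctl -u gitea -f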

GoogleCloudPlatform/gcr-cleaner cannot delete the stale images

I am trying to use the gcr-cleaner recommended by Google to clean up my stale images. However, it cannot delete anything as expected, even though it reports that it executed successfully.
I have granted the Browser, Cloud Run Admin, Service Account User, and Storage Admin roles to the service account. The Docker configuration is successful as well. I have tried both GitHub Actions and Cloud Run; neither of them works.
Even if I give it a wrong repo name, it shows:
Deleting refs older than 2022-10-25T20:08:16Z on 1 repo(s)...
gcr.io/project-id/my-repo
✗ no refs were deleted
But there are a bunch of images older than that timestamp.
Has anyone had the same issue before? How should I solve it?
I just had this issue right now. My problem was that without the flag -tag-filter-any it does not delete tagged images, and all the images I wanted to delete were tagged. What solved it for me was setting this flag with the regex for my tags:
docker run -v "${HOME}/.config/gcloud:/.config/gcloud" -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli -grace 720h -keep 5 -repo gcr.io/[MY-PROJECT]/[MY-REPO] -tag-filter-any "^(\d+).(\d+).(\d+)$"
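If you are unsure what a filter will match, a trial run first may be worth it; a minimal sketch, assuming your version of gcr-cleaner-cli supports the -dry-run flag (check its README), is the same command with that flag added:
docker run -v "${HOME}/.config/gcloud:/.config/gcloud" -it us-docker.pkg.dev/gcr-cleaner/gcr-cleaner/gcr-cleaner-cli -dry-run -grace 720h -keep 5 -repo gcr.io/[MY-PROJECT]/[MY-REPO] -tag-filter-any "^(\d+).(\d+).(\d+)$"
It should log the refs it would delete without actually removing anything.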

Maxmind geoipupdate gets http 403 on docker run

I am using the MaxMind GeoLite2 binary database for geolocation services and I want to update it periodically.
Updating works fine through the geoipupdate program installed via brew.
However, MaxMind also provides a Docker image to update the DB periodically.
When I try to run the docker command below,
docker run --env-file IdeaProjects/ip-geolocation-service/src/main/resources/application.properties -v /Users/me/GeoIp maxmindinc/geoipupdate
with the environment file referring to application.properties,
GEOIPUPDATE_ACCOUNT_ID=12345
GEOIPUPDATE_LICENSE_KEY=aaaaaaaaaa
GEOIPUPDATE_EDITION_IDS=GeoIP2-Country
I get the following error:
# STATE: Creating configuration file at /etc/GeoIP.conf
# STATE: Running geoipupdate
error retrieving updates: error while getting database for GeoIP2-Country: unexpected HTTP status code: received HTTP status code: 403: Invalid product ID or subscription expired for GeoIP2-Country
Since my credentials work when triggered manually, I wonder why they are not working on docker run. Any idea how to spot the problem, or has anyone else faced this?
You write that you want to use the free GeoLite2 database, but the edition ID you use looks like the commercial/paid one. Try the following instead:
GEOIPUPDATE_EDITION_IDS=GeoLite2-Country
Source: https://github.com/maxmind/geoipupdate/blob/main/doc/docker.md
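Putting it together, a minimal sketch of the corrected run, assuming the GeoLite2 edition ID above and that the official image keeps its databases in /usr/share/GeoIP (check the linked docs if that default has changed in your version):
# in application.properties, only the edition ID changes:
GEOIPUPDATE_EDITION_IDS=GeoLite2-Country
# then mount the host directory over the container's database directory:
docker run --env-file IdeaProjects/ip-geolocation-service/src/main/resources/application.properties -v /Users/me/GeoIp:/usr/share/GeoIP maxmindinc/geoipupdate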

Docker mkimage_yum.sh for centos 7 fails

A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The nice Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried to set several repos to enabled. Yet, this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: the mkimage_yum.sh script does not contain the proper code for the case where your repos are stored in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. This is really not correct, and it causes one of the later yum clean operations to fail. I fixed it, but I cannot upload the change as the server has no internet access.
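The patch itself is not included above, but a minimal sketch of the kind of change described, assuming the host's repo definitions live in /etc/yum.repos.d and using $target to stand for the temporary install root the script creates (e.g. /tmp/mkimage_yum.sh.gnagTv), would be to make those definitions visible to the yum calls before they run:
# copy the host's repo definitions into the temporary install root (illustrative, untested)
mkdir -p "$target/etc/yum.repos.d"
cp /etc/yum.repos.d/*.repo "$target/etc/yum.repos.d/"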

Docker. How to resume downloading image when interrupted?

How can I resume a pull when disconnected? The pull process always starts from the beginning every time I run docker pull some-image again after a disconnect. My connection is so unstable that even downloading just a 100MB image takes very long and almost always fails. So it is almost impossible for me to pull a bigger image. How can I resume the pull process?
Update:
The pull process will now automatically resume based on which layers have already been downloaded. This was implemented with https://github.com/moby/moby/pull/18353.
Old:
There is no resume feature yet. However there are discussions around this feature being implemented with docker's download manager.
Docker's code isn't as up to date as the in-development moby repository on GitHub. People have been having issues relating to this for several years. I tried to manually apply several patches which aren't upstream yet, and none worked decently.
The GitHub repository for moby (Docker's development repo) has a script called download-frozen-image-v2.sh. This script uses bash, curl, and other command-line tools such as a JSON interpreter. It retrieves a Docker token and then downloads all of the layers to a local directory. You can then use docker load to insert them into your local Docker installation.
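For reference, the stock script is invoked roughly like this (the directory and image name here are just examples):
bash download-frozen-image-v2.sh ./ubuntu-tmp ubuntu:16.04
tar -cC ./ubuntu-tmp . | docker load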
It does not handle resuming well, though. There was a comment in the script about 'curl -C' not working. I tracked that down and fixed the problem. I made a modification which first retrieves a ".headers" file (this initial request has always returned a 302 while I've been monitoring it) and then retrieves the final URL with curl (plus resume support) into the layer tar file. It also has to loop in the calling function, which retrieves a valid token that unfortunately only lasts about 30 minutes.
It loops this process until it receives a 416 stating that no resume is possible because the requested ranges have already been fulfilled. It also verifies the size against a curl header retrieval. I have been able to retrieve all the images I needed using this modified script. Docker has many more layers of retrieval logic, plus remote control processes (the Docker client) that make it harder to control, and they viewed this issue as only affecting some people on bad connections.
I hope this script can help you as much as it has helped me:
Changes:
The fetch_blob function uses a temporary file for its first connection. It then extracts the 30x HTTP redirect from it, attempts a header retrieval on the final URL, and checks whether the local copy already has the full file. Otherwise, it begins a resuming curl operation. The calling function, which passes it a valid token, has a loop around retrieving a token and calling fetch_blob, which ensures the full file is obtained.
The only other variation is a bandwidth-limit variable which can be set at the top of the script, or via a "BW:10" command-line parameter. I needed this to keep my connection usable for other operations.
Click here for the modified script.
In the future it would be nice if Docker's internal client performed resuming properly. Increasing the token's validity period would also help tremendously.
Brief view of the changed code:
# loop until FULL_FILE is set in fetch_blob; this is for bad/slow connections
while [ "$FULL_FILE" != "1" ]; do
    local token="$(curl -fsSL "$authBase/token?service=$authService&scope=repository:$image:pull" | jq --raw-output '.token')"
    fetch_blob "$token" "$image" "$layerDigest" "$dir/$layerTar" --progress
    sleep 1
done
Another section from fetch_blob:
while :; do
    # if the file already exists, we will be resuming
    if [ -f "$targetFile" ]; then
        # get the current size of the file we are resuming
        CUR=$(stat --printf="%s" "$targetFile")
        # use curl to fetch headers and find the content-length of the full file
        LEN=$(curl -I -fL "${curlArgs[@]}" "$blobRedirect" | grep content-length | cut -d" " -f2)
        # if we already have the entire file, stop curl from erroring with 416
        if [ "$CUR" == "${LEN//[!0-9]/}" ]; then
            FULL_FILE=1
            break
        fi
    fi
    HTTP_CODE=$(curl -w "%{http_code}" -C - --tr-encoding --compressed --progress-bar -fL "${curlArgs[@]}" "$blobRedirect" -o "$targetFile")
    if [ "$HTTP_CODE" == "403" ]; then
        # token expired, so the server stopped allowing us to resume; return without
        # setting FULL_FILE and the caller will restart this function with a new token
        FULL_FILE=0
        break
    fi
    if [ "$HTTP_CODE" == "416" ]; then
        FULL_FILE=1
        break
    fi
    sleep 1
done
Try this
ps -ef | grep docker
Get the PIDs of all the docker pull commands and do a kill -9 on them. Once killed, re-issue the docker pull <image>:<tag> command.
This worked for me!
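For completeness, the sequence looks roughly like this (<PID> stands for the process IDs found in the first step):
ps -ef | grep "docker pull"
kill -9 <PID>
docker pull <image>:<tag>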
