Remix automatically reverting token deployment to address only

I copied a contract I found that uses pragma experimental ABI, to play around with its features on testnet.
Contract copied:
https://bscscan.com/address/0x68590a47578e5060a29fd99654f4556dbfa05d10#code and here is my testnet contract deployed to the BSC: https://testnet.bscscan.com/address/0xb7030b205dfec92df0a9eacc1b418c39df77c3a0
When compiling, I've tried with optimization enabled and disabled, and with auto-compile enabled and disabled. Same issue either way: only the address gets compiled.
The compiler defaults to just the address, so I used the drop-down in the compiler menu and selected the part of the contract with the token name. As soon as I hit "Compile," the contract drop-down automatically reverts back to just the address.
I tried selecting the part of the contract with the token name on the deployment screen, to see if it would let me deploy it even though the compiler appears to only want the address.
That doesn't work either.
It also only compiles the address while my wallet is connected to mainnet. I'm not sure what causes this.

Remix compiles all contracts in the file when you hit the Compile button.
"As soon as I hit 'Compile,' the contract drop-down automatically reverts back to just the address."
I agree that this is a bit annoying.
But you can still choose the specific contract to deploy - in the "Deploy & run transactions" tab.
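
To see why the drop-down lists more than one entry, here is a minimal sketch of the same behavior outside Remix, using the solc-js compiler instead (the file name and contracts are invented for the example): one compile of one source file returns every contract defined in it, and you pick which one to deploy afterwards.

import solc from "solc"; // npm package "solc" (solc-js)

const input = {
  language: "Solidity",
  sources: {
    "Token.sol": {
      // One file, two contracts - like a token file with its helpers.
      content: 'contract Helper { } contract MyToken { string public name = "MyToken"; }',
    },
  },
  settings: { outputSelection: { "*": { "*": ["abi", "evm.bytecode"] } } },
};

// A single compile yields ALL contracts in the file, not just one.
const output = JSON.parse(solc.compile(JSON.stringify(input)));
console.log(Object.keys(output.contracts["Token.sol"])); // [ "Helper", "MyToken" ]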

Related

MSIX packaged application will always ask for firewall rule when it updates

I have an MSIX application that is built in the pipeline, but each time the application updates and starts, it asks for a firewall rule to be added as soon as it makes a call to some HTTP API.
If I go to the firewall and check all previous rules referencing this application, it becomes clear why this is happening. For instance, I now have around 20 rules just from today's testing; they are all tied to the program path and look something like this:
C:\program files\windowsapps\{guid*}_{app_version**}_{another_uniqueIdentifier***}\MyApplication.exe
(*) This is drawn from manifest -> identity -> name
(**) There is a powershell script that updates package.appxmanifest version with build number from the pipeline
(***) I am not sure where this comes from
So obviously, every time I build a new version of the application, it receives this value, and when I update and run it, the firewall thinks it is a new application. I doubt this is by design, as the firewall would get clogged with rules very quickly in cases where applications receive a lot of updates.
What am I missing here?
My application (a *.wapproj targeting the actual application) is built from the Azure DevOps pipeline, and these are the msbuild parameters I use:
/p:ApplicationVersion=$(Build.BuildNumber)
/p:Version=$(Build.BuildNumber)
/p:AllowUnsafeBlocks=true
/p:SelfContained=false
/p:AppxPackageSigningEnabled=true
/p:PackageCertificateThumbprint="$(Thumbprint)"
/p:AppxPackageSigningTimestampServerUrl="http://timestamp.digicert.com"
/p:AppxPackageSigningTimestampDigestAlgorithm="SHA256"
/p:AppInstallerUri="https://ourlocal.server.com/Clients/MyApplication/"
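
No answer to the "why" here, but as a stopgap the rules tied to superseded install folders can be scripted away after each update. A minimal sketch, assuming Node.js is available and running elevated, that netsh's verbose rule listing prints a "Program:" line per rule, and that MyApplication.exe and the current install path are placeholders the app would fill in at startup:

import { execFileSync } from "node:child_process";

// Hypothetical: the path of the currently installed exe, as the app would
// determine it at startup (e.g. from its own install location).
const currentExePath = "C:\\Program Files\\WindowsApps\\<current package folder>\\MyApplication.exe";

// List every firewall rule verbosely; verbose output includes a "Program:" line.
const rules = execFileSync(
  "netsh", ["advfirewall", "firewall", "show", "rule", "name=all", "verbose"],
  { encoding: "utf8" });

for (const line of rules.split(/\r?\n/)) {
  const match = line.match(/^\s*Program:\s+(.*\\WindowsApps\\.*\\MyApplication\.exe)\s*$/i);
  if (match && match[1].toLowerCase() !== currentExePath.toLowerCase()) {
    // Delete all rules whose program path points at an old package folder.
    execFileSync("netsh", ["advfirewall", "firewall", "delete", "rule",
      "name=all", `program=${match[1]}`]);
  }
}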

TFS server workspace with detection of local changes

I would like to leverage the benefits of a server workspace (seeing who has checked out which file) together with the ability of Team Explorer's Pending Changes page to detect local changes (which, with my current configuration, works only when using a local workspace).
Is there a way to configure such behavior? I do not understand which technical limitation makes my server workspace incapable of detecting that I added, removed, or changed files without checking them out first. It should at least be able to show these changes and then prompt me to check the files out before I can include them in a check-in.
Is there a possibility to configure such behavior?
Nothing is built in. If you take locks on the server, you need to explicitly notify the server, but that would mean having something running all the time to check for file changes and see if a lock could be taken (and how would arbitrary tools handle that failing?)
You could create something yourself to do this (the TFS-VC API is available).
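As a rough sketch of what such a helper could look like (this one shells out to the tf.exe command-line client rather than the TFS-VC API; the watched folder is hypothetical, and a failed checkout is merely logged, which is exactly the weak spot noted above):

import { watch } from "node:fs";
import { execFile } from "node:child_process";

const workspaceRoot = "C:\\src\\MyProject"; // hypothetical server-workspace folder

// Watch the mapped folder and pend a checkout on the server for every file
// that changes, so the server workspace "sees" local edits as they happen.
// Assumes tf.exe is on PATH and the folder is mapped in a server workspace.
watch(workspaceRoot, { recursive: true }, (_event, filename) => {
  if (!filename) return;
  execFile("tf", ["checkout", `${workspaceRoot}\\${filename}`], (err, _stdout, stderr) => {
    // If a lock could not be taken, there is no good way to tell the editing
    // tool - the change has already been written to disk.
    if (err) console.error(`tf checkout failed for ${filename}: ${stderr}`);
  });
});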
Meanwhile, most developers find the local model works better (it doesn't require exclusive access, and in the cases where there is a conflict, it is resolved at check-in).

OAuth Error - script deleted or disabled

"The OAuth identity of this script has been deleted or disabled. This may be due to a Terms of Service violation."
I run a Google script on a Google Sheet which notifies other users at my organization that I have uploaded the Google Sheet information to our database. This script suddenly stopped working and threw the error quoted at the top of this post. As far as I know, there were no recent changes to the ownership of the relevant files/folders. Please help! I need this script in my everyday work.
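
For reference, the script in question is roughly of this shape (a hypothetical reconstruction; the recipient address and wording are invented). The relevant point is that it calls OAuth-scoped services such as MailApp, which is what a disabled OAuth identity breaks:

// Runs in the Apps Script project bound to the Google Sheet.
function notifyUploadComplete(): void {
  const sheetName = SpreadsheetApp.getActiveSpreadsheet().getName();
  // MailApp needs OAuth authorization; this call is what fails once the
  // script's OAuth identity has been disabled.
  MailApp.sendEmail({
    to: "team@example.org", // hypothetical recipients
    subject: "Uploaded to database: " + sheetName,
    body: "The data from this sheet has been uploaded to our database.",
  });
}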
This seems to fix the problem, but in fact, it does not.
The problem is related to the Cloud project bound to the script. The reason turned out to be that the Terms of Service (TOS) for Cloud projects changed and the user has to acknowledge this. If that does not happen, then after some time the whole shebang is disabled and you get the error message.
(This is why a copy seems to work: it works until some Google bot notices that it is bound to a project without TOS acknowledgment, and then it disables it.)
The solution:
open the script
click Resources > Cloud Platform project
click on the bound project
This will open up the Cloud console and also show the popup for you to acknowledge the new TOS. If you agree to this, you're set, and your script works again.
Note: it seems that you need to do this only once for the Cloud environment. So if you have several scripts, then you need to do this for one script only. Or access the Cloud environment directly and acknowledge the new TOS.
Note: even if you thought your script is not bound to a Cloud project, trust me... it is. If you do not bind it yourself, then it is bound to a default project specific to that script.
Hope this helps.
Had the same issue; the fix is:
Copy your script locally to some temporary file.
Delete the old project in Google script editor.
Reload script editor and create a new project.
Create new script and copy from the local backup.
You should be all set. I changed the function and script names before executing; I'm not sure if that was needed.
I faced the same issue recently and solved it as follows:
Go to https://console.cloud.google.com.
Create a new project (or select an existing one) and note the "Project number".
Open the Script > Click Resources > Cloud Platform project
Change Project using the "Project number" noted above.

How can I bind my VS 2003 / XP Mode Project to the appropriate Server folders location with TFS?

Somehow my project got its source control bindings mixed up, and I'm trying to bind the local files to the correct place on the server. I am trying first to unbind the project, but when I then try to set up the binding anew and "Add Solution to Source Control", I get, "A project PDAClient.csdproj that you are attempting to add to source control cannot be added because the item AppSettings.cs is already under source control at the selected location"
It apparently only chose AppSettings.cs as the problem file to complain about because it is the first one in alphabetical order. I surmise this because I temporarily removed it from the project, tried again, and it complained about the next file in alpha order in the same way.
To try to outfox TFS, I renamed "MSSCCPRJ.SCC" to "MSSCCPRJ.SCCHide" and also renamed "PDAClient.vssscc" to "PDAClient.vsssccHide", but it simply created a fresh "PDAClient.vssscc".
(PDAClient is the name of the solution and the project)
If I try from VS 2003 File > Source Control > Change Source Control, I see this:
If I then select Bind for the solution, and then the eponymous project, I see:
If I hit "Browse" or the ellipsis button in the Server Binding column, it just "flashes" but opens no dialog for me to make the connection.
So the solution's binding is "invalid" but the project's binding is supposedly valid...
If I then select "OK" I get this:
...which looks promising ("Yes! Fix the bindings!") but selecting the "Fix" button simply takes me back to the Change Source Control dialog without having done anything. So I finally, reluctantly, select the other option, to "continue with the existing bindings" and see:
Okay...it tells me I have to check in a project for that to work, and I try to proceed, but see:
Note that it is trying to connect me to Handheld/Development/Development/HHS, but that's not what I want or need. DEV is a different branch; this is the Release branch. You can see that in the screenshot above, in the solution's Path property (set to C:\Project\sscs\Handheld\Release (etc.), not ...Development... (etc.)). I compared the two using the built-in tool and saw that, indeed, the server version was from the Dev branch (not the desired Release branch), so I took the local version. But then I got:
As I then saw that some of the project's files were checked out, I was hoping against hope that perhaps it was now going to work. I tested it by making a change to a method name, but ended up seeing this, "An error or user cancellation occurred during checkout. Some files may not have been checked out. (File was not checked out.)" and then that was followed up with, "Could not perform refactoring because some of affected files could not be made writeable."...and so my change was backed out for me automatically.
Obviously, this isn't going to work, because I do need to make changes to this project.
Flailing about with what's left to me on the File > Source Control menu, I selected "Add Project From Source Control..." to see what it might offer. It first gives me a dialog where I connect to a TFS; I did. I navigated to the right spot on the server, and this looks good and ready to go:
Selecting OK invokes a dialog that tells me, "The local folder you chose to store your solution contains one or more solution files that have the same name as those in the source control server folder." with Overwrite, Cancel, and Help buttons.
I select Overwrite. I am then presented with a dialog:
I select PDAClient.sln (HHS was the former name of the solution/project)
However, when I subsequently select the Open button, I get, "The folder 'C:\Project\sscs\Handheld\Releases\6-4-0\HHS' cannot be used for the solution or project because it is already in use to store part of another solution or project."
I have no choice but to select "OK" which negates the whole process.
As a final head-first, possible-collar-bone-breaking feat of Any-Port-in-a-Stormism Syndrome, I select File > Source Control > Team Foundation Server MSSCCI Provider. This invokes the Kafka-esque Visual Studio 2010 Shell inside of VS 2003 inside of XP Mode. According to what I see there, my setup is correct: the server's copies of the Release project are bound to the local Release folders:
But \Releases\HHS is grayed out, indicating there is no connection between the server folders and the local folders. And note that most (not all, but most) of the files in the Releases setup are actually stored locally in the Development folders! There are some key files that are bound correctly:
All the (dozens of) unseen files (only the first and last are seen in the last two screenshots) are tied to Development, too.
Although I don't have a "bind" type of context menu item for \Releases\HHS, there is a "map local"; although it is already ostensibly mapped correctly, I try it out, but get "The local folder could not be set to C:\Project\sscs\Handheld\Releases\6-4-0\HHS because it is already the local folder for another server folder."
So I go up to \Development\HHS, which does have a "valid" binding; note, again, that it is bound to the wrong local path (Releases instead of Dev).
So for it I first select the contextual "Remove Mapping" menu item. This affords me the opportunity to "Edit or remove a workspace mapping." I change the local folder from Releases to Dev. It looks good; Dev is now bound to Dev, and the binding is still seen as valid; this time it really is (I hope, anyway).
I now turn my attention back to Releases, but the context item "map local" is no longer there...and, although it shows the right connection between Server location and local, it is still grayed out...???
Note: The "Pending Changes" list of files is identical with both \Development\HHS and \Releases\HHS highlighted: the same three files in both cases are shown as being in the local Releases folder, and all the others in the local Dev folder.
Back in VS 2003 (out of the VS 2010 Shell running the TFS MSSCCI Provider), I go to "Change Source Control" and see that both the solution and the project have a Status of "Valid" now...when I select "OK" though, it tells me many of the files do not match and to either contact the administrator or perhaps a Get All will solve it. I tentatively look into a Select All, but see that it still says my project is bound to Development. ARGHHHH!!!!
Can anybody make sense out of this madness? How can I get the Release server folders pointed to the Release local folders, and Dev Server folders to the Dev local folders, without any bleedover and mismatching?
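
(An aside: the mapping half of this, pointing each server folder at the matching local folder, can be scripted against the tf.exe client instead of fought through the dialogs. A minimal sketch; the workspace name is invented, the server paths are abbreviated guesses from the paths above, and tf.exe is assumed to be on PATH:)

import { execFileSync } from "node:child_process";

const workspace = "MyWorkspace"; // hypothetical workspace name

// Server-to-local pairs: Dev maps to Dev, Release maps to Release.
const mappings: Array<[string, string]> = [
  ["$/Handheld/Development/HHS", "C:\\Project\\sscs\\Handheld\\Dev\\HHS"],
  ["$/Handheld/Releases/6-4-0/HHS", "C:\\Project\\sscs\\Handheld\\Releases\\6-4-0\\HHS"],
];

for (const [serverFolder, localFolder] of mappings) {
  try {
    // Remove whatever stale mapping exists for the server folder (ignore if none).
    execFileSync("tf", ["workfold", "/unmap", serverFolder, `/workspace:${workspace}`]);
  } catch { /* no existing mapping */ }
  // Map the server folder to the correct local folder.
  execFileSync("tf", ["workfold", "/map", serverFolder, localFolder, `/workspace:${workspace}`]);
}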
UPDATE
I looked in Source Control Explorer (TFS MSSCCI) again this morning, and my Dev\HHS had again gone back to being set to the wrong local path (Releases) and is connected (I guess that's what the glyph of the facing-each-other vertical arrows to the left of the folder indicates).
As to Releases\HHS, it was not connected (no glyph), but I was able to right click and map to a new folder I set up.
Here's what I see now (after changing the mapping of DEV from the local Releases folder back to the local DEV folder AGAIN!).
Properties for Dev HHS:
Properties for Release HHS:
I don't know if this makes sense to you, but it looks fishy to me.
UPDATE 2
The madness continues unabated today. My solution claims to have two pending checkins:
When I select "Check In," I get a confirmation dialog; I continue with the "Check In" button there. Then I get the "Check In - Source Files" dialog. I select the "Check In" button there, too. But then I see, "Files not checked out"
If I repeat the operations above, the last message is:
"No Changes to Check In. All of the changes were either unmodified files or locks. The changes have been undone by the server."
???
IMO, I would have saved a lot of time by just zipping up files when I wanted to save the latest changes, rather than use this irksome beast; I spend more time fiddling with "productivity" tools than just using a more straightforward approach. Give me zip files and a good diff util over this cauldron of dashed hopes and clever-clever dirty tricks!
UPDATE 3
And if I close the project and re-open it, I see the following three times in a row:
So who in blue blazes told you to find such a server?!?!
Then I get:
And finally this again:
Argggggghhhhhhhhhhhhhhhhh!!!!!!!!!!!!!!!!!!!
UPDATE 4
Even though the path for the solution and project are right (Releases), this is what the files in the project show:
The branches tab, as shown in the Update above, shows Dev going down to Release; I don't know whether that's right or not, because Release was a branch of Dev, or...???
Anyway, I see the above from File > Source Control > Team Foundation Properties
HOWEVER, when I choose File > Source Control > Team Foundation Server MSSCCI Provider, the binding seems to be correct - the HHS Dev project has Dev as its local folder location, and the HHS Release project has the Release folders as its local location.
I don't know who is more confused: me, anybody who happens to read this, or TFS/MSSCCI itself. This kind of thing is, ironically, a real productivity killer.

Only create a TFS work item on a new failed build

I've seen the post about disabling work item creation on all failed builds, but I'd like to have TFS only create a work item on the first failure. We have a very complicated legacy system that involves VB6 COM components and frequently have build failures on the build server that track back to some funkiness VB6 does with binary files (frx, ctl, etc. -- if you haven't had to deal with that in a while, you don't want to). The only way to resolve those issues is to try to make updates on a developer machine, then check in the files and run the build again (since the build doesn't fail on the dev machine). So we may have three or four (or more) failed builds before we get a success, which means we'll have three or four work items to close out.
Ideally, I'd like to have the following:
1. Joe checks in a change that causes the build to fail
2. A work item gets created and assigned to Joe
3. Joe checks in another change and the build still fails
4. No additional work item is created
5. Joe checks in a change and the build succeeds
6. The work item assigned to Joe in step 2 above gets marked as Closed
But I'd be happy with just steps 1 through 4.
How would you determine that the second failed build was related to the first one, since there's an additional check-in involved? What happens if the next check-in is actually additional code committed by another developer - you'd want them to know their code broke the build, or that it's still broken, even though according to your steps, nothing would be triggered.
You'd need to find a way to link the builds - for example, track who the auto-created work item is assigned to and then not create another work item for check-ins from that developer until there's a successful build - or maybe you could somehow queue up the builds for the other developers. I'm not really sure how you'd do it.
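For the detection half, here is a minimal sketch of the "new failure only" check, assuming a TFS version that exposes the build REST API; the collection URL, project, and definition id are placeholders, and authentication is omitted:

// Decide whether this failure is "new" by looking at the previous completed build.
async function isNewFailure(collectionUrl: string, project: string, definitionId: number): Promise<boolean> {
  const url = `${collectionUrl}/${project}/_apis/build/builds` +
    `?definitions=${definitionId}&statusFilter=completed&$top=2&api-version=2.0`;
  const response = await fetch(url); // authentication omitted for brevity
  const { value: builds } = await response.json();
  const [current, previous] = builds; // newest build first
  // Create a work item only when the previous build succeeded, so steps 3-4
  // above produce no extra work items while the build stays broken.
  return current?.result === "failed" &&
    (previous === undefined || previous.result === "succeeded");
}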
Does this move you in the right direction?
