Testing Google Assistant using the gactions CLI - google-assistant-sdk

To test my Assistant app, I want to use the gactions CLI (https://developers.google.com/actions/tools/gactions-cli).
But I'm stuck at this step:
$ gactions test --action_package PACKAGE_NAME --project PROJECT_ID
For example:
$ gactions test --action_package mypackage.json --project my-project-1234567
I can't find in the linked documentation (https://developers.google.com/actions/tools/gactions-cli) what PACKAGE_NAME (mypackage.json) is supposed to be.
How can I get my project's mypackage.json? Where is that file?

The PACKAGE_NAME field in this case is your actions.json file, or whatever you have named your action package file. This is the file where you define the entry point to your Action and describe how to reach your fulfillment server.
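For reference, a minimal action package looks roughly like this (a sketch only; the invocation phrase, conversation name, and fulfillment URL below are placeholders, not values from your project):
{
  "actions": [
    {
      "description": "Default welcome intent",
      "name": "MAIN",
      "fulfillment": { "conversationName": "welcome" },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": { "queryPatterns": ["talk to my test app"] }
      }
    }
  ],
  "conversations": {
    "welcome": {
      "name": "welcome",
      "url": "https://example.com/fulfillment"
    }
  }
}
You then pass this file's name as the --action_package argument.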

Related

Fastlane failing with error "Cannot obtain the content provider public id. Please specify a provider short name using the -asc_provider option."

I created an iOS TestFlight build using Fastlane and got this strange error. I'm not sure why, because it was working fine yesterday, and now, without any change in the Fastlane configuration, it fails while uploading the build to the App Store.
The error output is as below:
[21:50:01]: Transporter transfer failed.
[21:50:01]:
[21:50:01]: Cannot obtain the content provider public id. Please specify a provider short name using the -asc_provider option.
[21:50:02]: Cannot obtain the content provider public id. Please specify a provider short name using the -asc_provider option.
Return status of iTunes Transporter was 1: Cannot obtain the content provider public id. Please specify a provider short name using the -asc_provider option.
The call to the iTMSTransporter completed with a non-zero exit status: 1. This indicates a failure.
[21:50:02]: Error uploading ipa file:
[21:50:02]: fastlane finished with errors
[!] Error uploading ipa file:
For those who are suffering from this on Azure DevOps's AppStoreRelease task: using @user20291554's solution, it can be fixed as follows:
- job: ios
  pool:
    vmImage: macOS-latest
  variables:
    DELIVER_ITMSTRANSPORTER_ADDITIONAL_UPLOAD_PARAMETERS: "-asc_provider <your team ID or short name if different>"
  steps:
  ...
  - task: AppStoreRelease@1
    inputs:
    ...
Please add itc_provider along with apple_id in the call below:
upload_to_testflight(
  skip_waiting_for_build_processing: true,
  apple_id: "APPLE_ID",
  itc_provider: "ID" # example: W4A0P2BYMN
)
If you are on multiple App Store Connect teams, deliver needs a provider short name to know where to upload your binary. deliver will try to use the long name of the selected team to detect the provider short name. To override the detected value with an explicit one, use the itc_provider option.
Fastlane reference doc for itc_provider
Stack Overflow: how to get the provider short name (itc_provider)
Fastlane GitHub: related ticket
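If you'd rather persist this instead of passing it on every run, the same option can also live in your Deliverfile (a sketch; the short name below is a placeholder):
# Deliverfile
itc_provider "ABCDE12345" # your provider short name / team ID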
I had the same issue.
This comment from GitHub helped me:
Add an ENV variable to your deployment (or local machine 🥇, or Fastfile directly). With DELIVER_ITMSTRANSPORTER_ADDITIONAL_UPLOAD_PARAMETERS we can add the "missing" -asc_provider variable:
ENV["DELIVER_ITMSTRANSPORTER_ADDITIONAL_UPLOAD_PARAMETERS"] = "-asc_provider YourShortName"
Just deployed and it works, for those who can't wait.
For me, adding the environment variable worked perfectly:
ITMSTRANSPORTER_FORCE_ITMS_PACKAGE_UPLOAD: true
For my case, here is an example for Azure DevOps pipelines:
- task: AppStoreRelease@1
  env:
    ITMSTRANSPORTER_FORCE_ITMS_PACKAGE_UPLOAD: true
  ...
Source: Fastlane GitHub issue
This is how I solved it!
deliver(
  app_identifier: '{{YOUR_APP_ID}}',
  submit_for_review: false,
  skip_screenshots: true,
  force: true,
  itc_provider: "{{YOUR_TEAM_ID}}" # <- added!
)
To get the itc_provider value, run this command:
/Applications/Xcode.app/Contents/SharedFrameworks/ContentDeliveryServices.framework/Versions/A/itms/bin/iTMSTransporter -m provider -u 'appleid@xxx.xx' -p 'xxxx-xxxx-xxxx-xxxx' -account_type itunes_connect -v off
where
appleid@xxx.xx is your Apple ID
xxxx-xxxx-xxxx-xxxx is an app-specific password
How to generate an app-specific password:
Sign in to appleid.apple.com.
In the Sign-In and Security section, select App-Specific Passwords.
Select Generate an app-specific password or select the Add button (blue plus sign icon), then follow the steps on your screen.
Enter or paste the app-specific password into the password field of the app.
I'm using fastlane deliver to upload my apps.
The solution for me was to add a new flag to the fastlane deliver command.
Example: fastlane deliver --username xxx@xxx.com ...
The new flag added was --itc_provider my_team_id
You can find your team_id here: page
So the command in the end was:
fastlane deliver --verbose --ipa xxx --username xxx --app_identifier xxx --itc_provider team_id
xxx => corresponds to your project
team_id => your Team ID, which you can get from the page linked above

Argo artifact passing can't save output

I am trying to run the artifact passing example on Argoproj. However, I am getting the following error:
failed to save outputs: verify serviceaccount platform:default has necessary privileges
This error appears in the very first step (generate-artifact) itself.
Selecting the generate-artifact component and clicking YAML shows the corresponding line highlighted.
Nothing appears on clicking LOGS.
I need to understand the correct sequence of steps for running the YAML file so that this error does not appear and the artifacts are passed. I could not find many resources on this issue other than this page, where it is discussed on the argo repository.
All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace.
Here the default service account of that namespace doesn't seem to be given any roles by default.
Try granting a role to the “default” service account in a namespace:
kubectl create rolebinding argo-default-binding \
--clusterrole=cluster-admin \
--serviceaccount=platform:default \
--namespace=platform
Since the default service account now has full access via the cluster-admin role, the example should work.
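Alternatively, if you would rather not grant roles to the namespace default, a workflow can name its service account explicitly via workflow.spec.serviceAccountName. A minimal sketch (argo-workflow is a hypothetical service account that you would still need to create and bind to adequate roles):
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  serviceAccountName: argo-workflow  # hypothetical SA; bind it to the roles it needs
  # ... entrypoint and templates as in the artifact passing example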

Connecting to different ARN/Role/Amazon Account when trying to deploy

I previously had Serverless installed on a server, and then, when I tried to edit the function and package it back up to edit the zip file, I broke it, so I have to start all over. To begin this issue: I had Serverless running and was using it with this package - https://github.com/adieuadieu/serverless-chrome/tree/master/examples/serverless-framework/aws
When I run sudo npm run deploy, I get this ServerlessError:
ServerlessError: User: arn:aws:sts::XXX:assumed-role/EC2CodeDeploy/i-268b1acf is not authorized to perform: cloudformation:DescribeStackResources on resource: arn:aws:cloudformation:us-east-1:YYY:stack/aws-dev/*
I'm not sure why it is trying to connect via a role and not an IAM user. So I check the role, and it is in an entirely different AWS account than the account I've configured. Let's call this Account B.
When it comes to configuration, I've installed the AWS CLI and entered the key, ID, and region for my Account A in AWS, not touching Account B whatsoever. When I run aws s3 ls I see the correct S3 buckets for the account with that key/id/region, so I know the CLI is working with the correct account. Sounds good. I check the ~/.aws/credentials file and it just has one profile, [default], which seems normal. No other profiles are in there. I copied this over to the ~/.aws/config file, so now both files are the same. Works great.
I then go into my SSH session where I've installed Serverless and run npm run deploy, and it gives me the same message as above. I think maybe it is somehow not using the correct account for whatever reason, so I manually set the access key and secret with the following command:
serverless config credentials --provider aws --key XXX --secret YYY
It tells me there already is a profile in the aws credentials file, so I add --o to the end to overwrite. I run sudo npm run deploy and get the same error.
I then run this command to manually set a profile in the creds for serverless, with the profile name matching the IAM user name:
serverless config credentials --provider aws --key XXX --secret YYY --profile serverless-agent
Where "serverless-agent" is the name of my IAM user I've been trying to use to deploy. I run this, it tells me there already is an existing profile in the aws creds file so I run it with --o and it tells me the aws file is now updated. In bash I go to Vim the file and I only see the single "[default]" settings, as if nothing has changed. I run sudo npm run deploy and it gives me the same Error.
I then go and manually set the access and secret:
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
I run sudo npm run deploy and it gives me the same Error.
I even removed the AWS CLI and the directory that holds the credentials and config files - and when I manually set my account creds via serverless config, it still tells me there already is a profile set up in my aws file, prompting me to use the overwrite command. How is this possible when the file is literally not on my computer?
So I then think that Serverless itself has a cache or something, calling the wrong file or whatever for creds, so I uninstall Serverless via sudo npm uninstall -g serverless so that I can start from zero again. I then do all of the above steps and more all over again, and nothing has changed. Same error message.
I do have Apex.run set up, but that should be using my AWS CLI config file, so I'm not sure if that is causing any problems. Then again, I have no deep knowledge of this subject, and I can't find any way to remove Apex itself in their docs.
In the package I am trying to deploy, I do not have a profile: XXX set in the serverless.yml file, because I've read that if you do not, it just defaults to the [default] profile set in the aws credentials file on your computer. Just to check, I go into the serverless.yml file, set profile: default, and the error I now get when I run npm run deploy is:
Profile default does not exist
How is that possible when I have the "default" profile set in my credentials file? Then I remember that I previously ran the serverless config credentials command and added the profile name serverless-agent (which didn't get saved in the aws credentials file, as I mentioned above), so I add that profile name to the serverless.yml file just to see if it works, and get the same "Profile default does not exist" error.
So, back to the error message: the role belongs to an account not even related to the IAM user in my aws credentials. Without knowing a lot about this, it's as if the Serverless config over SSH isn't correct or something. Is it using old creds I had set up in Apex.run? Why is the aws credentials file not updated with the profile when I manually set it via the serverless config command? I am using the same user account (but with a new key and secret) that I used a few weeks ago when I deployed correctly and my Lambda and API were set up for me on AWS. Boy, do I miss those times; I wish I hadn't messed up my existing Lambda functions without setting a version number first, forcing me to start all over.
I am so confused. Any help would be greatly appreciated.
If you are using an IAM role, then you have to use that IAM role through assume-role, for example via PowerShell.
I was facing the same issue earlier, when we moved from a user to a role.
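As an illustration of assume-role with the AWS CLI instead (a minimal sketch; the role ARN and profile name below are placeholders, not values from the question):
# ~/.aws/config - a profile that assumes the role using your [default] credentials
[profile deploy-role]
role_arn = arn:aws:iam::123456789012:role/DeployRole
source_profile = default
region = us-east-1
Then point the deployment at that profile; the AWS SDKs (and therefore Serverless) honor the AWS_PROFILE environment variable:
AWS_PROFILE=deploy-role npm run deploy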

How to update your surge.sh project?

I deployed a surge.sh project and it was published successfully. However, I want to make some updates to the project; the updates have been applied locally and can be seen working, but when I publish to the same domain again, it doesn't appear updated. I also tried to tear down the project and re-upload it, but it still shows the old project; I cleared the cache too, and it still didn't update. Any idea why?
To update a project you just need to publish it again to the same domain.
You can do this quickly by creating a file called CNAME in the project root directory to set the default domain name, like this:
echo site-name.surge.sh >CNAME
Then each time to update:
surge .
Alternatively, without a CNAME file, specify the domain in the surge command like this:
surge --domain site-name.surge.sh .
See https://surge.sh/help/remembering-a-domain .
This will not solve the need to tear down and republish the site described in the original post, but if you're not having that problem, it will make it quick to update your surge.sh project, as per the title question, without needing to edit the domain in the usual prompt.
Open Git Bash.
Switch to the project directory.
Type surge and press Enter.
Press Enter at the "project" line.
Enter the URL of your project at the "domain" line and press Enter.
That's it!
Use the following deploy script to update your deployed Surge project:
"deploy": "surge --project ./path_to_build_folder --domain custom-domain.surge.sh"
Just go to your project folder, use cd to change to your project directory, then run npm run build, then change to the build directory with cd build, and then run surge.
Edit:
Example:
You need to go to your project directory; in my case it's jamming:
$ cd ../
King@DESKTOP-5ERNS17 MINGW64 ~/Documents/Projects/jamming (main)
$ pwd
/c/Users/King/Documents/Projects/jamming
King@DESKTOP-5ERNS17 MINGW64 ~/Documents/Projects/jamming (main)
Then you run npm run build; I guess you need to update the build:
King@DESKTOP-5ERNS17 MINGW64 ~/Documents/Projects/jamming (main)
$ npm run build
Once the build is compiled, go into the build directory using cd build and then run surge:
King@DESKTOP-5ERNS17 MINGW64 ~/Documents/Projects/jamming (main)
$ cd build
King@DESKTOP-5ERNS17 MINGW64 ~/Documents/Projects/jamming/build
$ surge
That's how it worked for me. If I don't do npm run build first, surge just won't update it... I don't know if this is the right way...
To update your Surge project, simply run surge in the project directory and input the domain you're already using at the domain: prompt.
Some users are describing a problem where they have to do a hard refresh to see their updates. This is to be expected, and it happens with any static file host, since static files are cached by your browser to make page loads faster.
So to see the latest version of your Surge site, press Ctrl+Shift+J or F12 to open the developer tools, then right-click the reload button next to the address bar and click Empty Cache and Hard Reload.
Open Cmd.
Switch to the project directory.
Type surge . and press Enter.
Replace the random domain with the domain of the site you want to update and press Enter.
That's it <3
Go to the project directory and use Git Bash or cmd with this sequence:
type surge and press Enter
press Enter at the project line
replace the random domain with your web URL
press Enter

How to make deliver (fastlane) download metadata for multiple targets?

I have an Xcode project with six targets, each target is made to build a separate app. I'm trying to setup fastlane to assist me in publication of these apps.
Fastlane docs suggest using .env files in order to handle multiple targets (you can specify app_identifier, team_name, etc. in different .env files, and then, for instance, call fastlane appstore --env ENV_NAME_HERE). However I can't figure out how to set up deliver properly.
deliver init downloads metadata for only one target by default. I need to download metadata for all my targets into different directories (and then use those directories to upload the data, obviously).
deliver download_metadata doesn't accept the --env parameter (my Deliverfile depends on the env files). I've tried fastlane deliver --env, but that seems to be just a shorthand for deliver, so it doesn't work either.
I guess I could manually run deliver with different --metadata_path parameters (and all the other parameters, since my Deliverfile is invalid because it depends on env files), and then later specify the directories using the Deliverfile + .env files. But since I already have the Deliverfile and .env files set up (right now I use deliver to upload the binary only), I hoped there is a better way. Is there?
P.S. This is a large legacy project, so splitting it into six different projects would be great, but it's not an option, unfortunately.
I've been struggling with this as well; setting up the submit is easy using the .env files, but retrieving the initial data is difficult, though not impossible.
To grab the metadata, I ran this command:
fastlane deliver download_metadata -m "./Targets/Release/Metadata" -u "itunes@username" -a "com.example.ios"
And for the screenshots:
fastlane deliver download_screenshots -w "./Targets/Release/Screenshots" -u "itunes@username" -a "com.example.ios"
Adding to @rckoenes's answer:
1) Create an .env.yourEnvName file with this info (as an example):
DLV_METADATA_PATH="../Targets/Your_Target/Metadata"
DLV_ITUNESCONNECT_USERNAME="yourItunesUser#something.com"
DLV_BUNDLE_ID="com.yourCompany.yourTarget"
2) Create a lane like this:
desc "Download metadata"
lane :metadata do
sh('fastlane deliver download_metadata -m "$DLV_METADATA_PATH" -u $DLV_ITUNESCONNECT_USERNAME -a $DLV_BUNDLE_ID')
end
3) Call fastlane like this:
fastlane metadata --env yourEnvName
That way it's a little bit cleaner, and you keep the vars in the .env file.
For automating this call for multiple targets, please refer to: https://docs.fastlane.tools/faqs/#multiple-targets-of-the-same-underlying-app
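For example, a simple shell loop over your env files can run the lane once per target (a sketch; the env names below are placeholders for your actual .env.* files):
# runs the metadata lane for each target's .env file
for target in targetA targetB targetC; do
  fastlane metadata --env "$target"
done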
This is a combination of @rckoenes's answer, @Riddick's answer, and this fastlane GitHub issue submission.
I was trying @Riddick's answer to get a cleaner workflow, but I couldn't make it download the metadata: for some reason it only created the metadata path folder, and no metadata was downloaded from iTunes Connect. I did some trial and error and found this line of code in the issue linked above:
ENV["DELIVER_FORCE_OVERWRITE"] = "1"
I added it to the lane and it worked!
1) Create an .env.yourEnvName file with this info (as an example):
METADATA_PATH="../Targets/Your_Target/Metadata"
APP_IDENTIFIER="com.yourCompany.yourTarget"
2) Create a lane like this:
desc "Download metadata"
lane :metadata do
ENV["DELIVER_FORCE_OVERWRITE"] = "1" # This is the additional line from Riddick's code
sh "fastlane deliver download_metadata --app_identifier #{ENV['APP_IDENTIFIER'] --metadata_path #{ENV['METADATA_PATH']}"
end
3) Call fastlane like this:
fastlane metadata --env yourEnvName
*** I did not use the username parameter because I had it in my Deliverfile.
