Automate Twitter Bot

I have written a Twitter bot in Python that posts a tweet with the weather info for a specific city. I test it by running python file.py and then checking on my Twitter account that it works.
But how can I execute it periodically? Where can I upload my source code? Is there any free server that will run my file.py?

Assuming you're running GNU/Linux and your machine is online most of the time, you can configure your own crontab to run your script periodically.
See: https://www.freebsd.org/doc/handbook/configtuning-cron.html
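For example, a crontab entry that runs the bot at the top of every hour could look like this (a sketch; the interpreter path, script path, and log file are hypothetical, so adjust them to your setup):

# run the weather bot at minute 0 of every hour, logging output
0 * * * * /usr/bin/python /home/you/bot/file.py >> /home/you/bot/bot.log 2>&1

Edit your crontab with crontab -e and add a line like the one above.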
If that is not the case, check out https://wiki.python.org/moin/FreeHosts; the first entry on the list (https://www.pythonanywhere.com/) should do the job.

You can host your code in a GitHub repository, then run your .py file through a GitHub Action that runs on a schedule you set up in a .yml file in the .github/workflows folder.
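A minimal workflow sketch, assuming the script is file.py at the repository root, its dependencies are listed in requirements.txt, and any API keys are stored as repository secrets (all of these names are placeholders):

# .github/workflows/bot.yml
name: weather-bot
on:
  schedule:
    - cron: "0 * * * *"   # top of every hour, in UTC
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: pip install -r requirements.txt
      - run: python file.py
        env:
          TWITTER_API_KEY: ${{ secrets.TWITTER_API_KEY }}

Note that scheduled workflows run on GitHub's clock (UTC) and can start a few minutes late, which is usually fine for a bot like this.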


How do I use this config.yml file to run a web scraper that someone else built?

My end goal: I want to fetch data from a retail site on an hourly schedule to see if a specific product is back in stock or not.
I tried using XPath in Python to scrape the site myself, but I'm not too familiar with it, and why reinvent the wheel if someone has already built a scraper? In this case, Diggernaut has a GitHub repo:
https://github.com/Diggernaut/configs/tree/master/bananarepublic.gap.com
I'm using the above GitHub repo to try to run a pre-existing web scraper on the Banana Republic retail site. All that's included in the folder is a config.yml file. I don't even know where to start to try and run it... I am not familiar with .yml files at all and barely know my way around a terminal (I can do basic ls, cd, and brew install; otherwise, no idea).
Help! I have Docker and Git installed (not that I know how to use Docker). I have a Mac running version 10.13.6 (High Sierra).
I'm not sure why you're looking at using Docker for this, as the config.yml is designed for use on Diggernaut.com and not as part of a Docker container deployment. In fact, no Docker container for Diggernaut exists, as far as I can see.
On the main GitHub config page for Diggernaut, they list the following instructions:
All configs can be used with the Diggernaut service to retrieve product information.
1) Create a free account at Diggernaut.
2) Log in to your account.
3) Create a project with any name and description you want.
4) Get into your new project by clicking it, and create a new digger with any name.
5) You will then see 3 options suggested to you; use the one where you work with the meta-language.
6) The config editor will open; simply copy and paste the config code and click the save button.
7) Switch the digger's mode from Debug to Active, then run your digger.
8) Wait for completion.
9) Download the data.
10) Schedule your runs if required.

Milo - Run configuration settings

I am interested in creating a client interface for a simulation software, so I downloaded Milo and went through the examples. I was able to build successfully when I executed mvn clean package. I would like to know how to set up the run configuration for executing the ClientExample.java.
In the client-examples project, look into the ClientExample.java source file. In this file, update the getEndpointUrl method to point at your server's endpoint. By default this method returns a loopback endpoint, i.e. "opc.tcp://localhost:12686/milo".
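As a sketch, the edit inside ClientExample.java would look something like this (the replacement host, port, and path are hypothetical; use whatever endpoint your OPC UA server actually exposes):

default String getEndpointUrl() {
    // was: return "opc.tcp://localhost:12686/milo";
    return "opc.tcp://my-server-host:4840/my-server";
}

After rebuilding, you can run the chosen example as a plain Java application from your IDE; the examples come with main methods, so no special run configuration is needed beyond the classpath Maven already set up.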

Google Cloud Storage: Output path does not exist or is not writeable

I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations, select BlockingDataflowPipelineRunner on the Pipeline Arguments tab, create a bucket, and set my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm whether this is a known issue with using the Dataflow pipeline and Google buckets?
Thanks!
I had a similar issue with GCS bucket permissions, though I certainly had write permissions and I could upload files into the bucket.
What solved the problem for me was acquiring the roles/dataflow.admin role for the project I was submitting the pipeline to.
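If you prefer to grant that from the command line, a sketch (the project ID and account email are placeholders):

gcloud projects add-iam-policy-binding my-project-id \
    --member=user:you@example.com \
    --role=roles/dataflow.admin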
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue via the gsutil command-line tool. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to attempt to list the contents of the bucket. Then, try to upload via the gsutil cp command. This will perhaps produce enough information to root-cause the issue you are facing.
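Concretely, that debugging session could look like this (the local test file is just an example):

gsutil ls gs://my-cloud-dataflow-bucket
gsutil cp /tmp/test.txt gs://my-cloud-dataflow-bucket/

If the ls fails, the bucket doesn't exist or you can't read it; if the cp fails, you can't write to it, which is exactly what the Dataflow error is complaining about.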
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app:
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
Got the same error. Fixed it by setting GOOGLE_APPLICATION_CREDENTIALS in ~/.bash_profile on Mac, pointing it at a key file with write permissions.
I realised I needed to use a specific ACL command via gsutil. Setting my account to have owner permissions did not do the job. Instead, using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!

Use Google Spreadsheet connector with WSO2

I would like to use the Google Spreadsheet connector at https://github.com/wso2/esb-connectors/tree/048e223c037b447c3f77c2b7e72338dc26ea5c46/googlespreadsheet, but it is not found in the WSO2 store. I would like to know how I can compile it from GitHub and use the connector. Please help.
Generally, git won't let you fetch just a single folder, so you need to go with the svn approach. Follow the instructions below as they are. (This assumes you are working in a Linux/Mac environment; if not, adapt the commands so they work on Windows.)
Create a new directory wherever you want and navigate inside it, then:
wget https://raw.githubusercontent.com/wso2/esb-connectors/048e223c037b447c3f77c2b7e72338dc26ea5c46/pom.xml
mkdir wso2
cd wso2
mkdir esbconnector
cd esbconnector/
mkdir googlespreadsheet
cd googlespreadsheet/
svn checkout https://github.com/wso2/esb-connectors/trunk/googlespreadsheet/googlespreadsheet-connector/googlespreadsheet-connector-2.0.0/org.wso2.carbon.connector
cd org.wso2.carbon.connector/
mvn clean install
It may take a little time, as it needs to download a few artifacts. If it ends with an error that the integration test base was not found, get the integration test base from the same repo and build that first, then rebuild the connector.
Recently, a Google Spreadsheet version 2 connector was created using the REST API and added to the WSO2 store. The connector zip file can be downloaded from here: go to the link, click the 'Download Connector' button, and follow the documentation for the configuration.
You can check out the connector source from https://github.com/wso2/esb-connectors/blob/master/googlespreadsheet and build it.
Then add the connector to the ESB from the UI, according to https://docs.wso2.com/display/ESB481/Managing+Connectors+in+Your+ESB+Instance

Grails application that copies and unzips files from one remote server to another using SSH

I'm new to Java/Grails/Groovy and have just begun to create simple apps.
I've been given a task to create a Grails app that:
1) shows a list of source zip files on a remote server that is available by FTP and SSH
2) shows a list of destination remote servers with predefined target folders, available only by SSH
3) after the source zip and destination server are chosen, copies the zip to the target server/folder and unzips it; a progress bar must be shown
4) performs some additional commands, such as ls
All configuration must be either in config files or in the database.
No information should be hardcoded in the app.
Please help me choose an approach, plugin, or framework.
Any help would be appreciated.
I've used JSch a lot for SCP file transfer and remote exec over SSH, and it works very well. You could use it directly, like you would in a Java app, by adding a dependency for the jar in BuildConfig.groovy:
compile 'com.jcraft:jsch:0.1.51'
but the most trivial Google search I could manage that included "Grails" and "SSH" tells me that there's this plugin which looks great, and this plugin which also looks great, and this blog post which looks great, and also this plugin which uses a different library but also looks great.
Those options cover the SSH and SCP/SFTP parts, and you can use the JDK's support for zip files (java.util.zip.ZipFile and the other related classes in that package) to unzip the files. The rest is pretty straightforward, but if you need more help, ask more questions (one question per question).
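For the remote exec part, a minimal JSch sketch (host, user, key path, and target folder are all hypothetical; this assumes key-based auth and the jsch jar on the classpath):

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;
import java.io.InputStream;

public class SshLs {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/home/you/.ssh/id_rsa");         // private key for auth
        Session session = jsch.getSession("user", "dest-server", 22);
        session.setConfig("StrictHostKeyChecking", "no");  // demo only; verify hosts in production
        session.connect();

        // run a remote command (step 4 of the question) and print its output
        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        channel.setCommand("ls /opt/target-folder");
        InputStream out = channel.getInputStream();
        channel.connect();

        byte[] buf = new byte[1024];
        int n;
        while ((n = out.read(buf)) > 0) {
            System.out.print(new String(buf, 0, n));
        }

        channel.disconnect();
        session.disconnect();
    }
}

The same Session can drive an sftp channel for the copy step, and you can track bytes transferred against the zip's size to feed the progress bar.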
