Currently I am using a saved_model file stored on my local disk to read an inference graph and use it in servers. Unfortunately, passing a GCS path does not work with the SavedModelBundle.load API.
I tried providing a GCS path for the file, but it did not work.
Is this even supported? If not, how can I achieve this with the SavedModelBundle API? I have some production servers running on Google Cloud from which I want to serve TensorFlow graphs.
A recent commit inadvertently broke the ability to load files from GCS. This has been fixed, and the fix is available on GitHub.
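Once you are on a build that includes the fix, the GCS path should be usable exactly like a local directory. A minimal sketch with the TensorFlow Java API, assuming a hypothetical bucket/model path and the usual "serve" tag:

    import org.tensorflow.SavedModelBundle;
    import org.tensorflow.Session;

    public class GcsSavedModelExample {
        public static void main(String[] args) {
            // Hypothetical GCS location of the exported SavedModel directory.
            String exportDir = "gs://my-bucket/models/my_model";

            // Requires a TensorFlow build whose filesystem layer includes GCS support.
            try (SavedModelBundle bundle = SavedModelBundle.load(exportDir, "serve")) {
                Session session = bundle.session();
                // Feed inputs and fetch outputs via session.runner() for inference.
            }
        }
    }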
When I attempt to load data into BigQuery from Google Cloud Storage, it asks for the Google Cloud Storage URI (gs://). I have reviewed all of the online support as well as Stack Overflow and cannot find a way to identify the URL for my uploaded data via the browser-based Google Developers Console. The only way I see to find the URL is via gsutil, and I have not been able to get gsutil to work on my machine.
Is there a way to determine the URL via the browser-based Google Developers Console?
The path should be gs://<bucket_name>/<file_path_inside_bucket>.
To answer this question, more information is needed: did you already load your data into GCS?
If not, the easiest way is to go to the project console, click on the project, and go to Storage -> Cloud Storage -> Storage browser.
You can create buckets there and upload files to the bucket.
Then the files will be found at gs://<bucket_name>/<file_path_inside_bucket> as #nmore says.
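Once you have that gs:// URI, you can hand it straight to a BigQuery load job. A rough sketch with the BigQuery Java client library; the dataset, table, and bucket names here are placeholders:

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.Job;
    import com.google.cloud.bigquery.JobInfo;
    import com.google.cloud.bigquery.LoadJobConfiguration;
    import com.google.cloud.bigquery.TableId;

    public class LoadFromGcs {
        public static void main(String[] args) throws InterruptedException {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

            // URI composed as gs://<bucket_name>/<file_path_inside_bucket> (placeholder).
            String sourceUri = "gs://my-bucket/data/myfile.csv";

            TableId tableId = TableId.of("my_dataset", "my_table");  // placeholder
            LoadJobConfiguration config =
                LoadJobConfiguration.of(tableId, sourceUri, FormatOptions.csv());

            // Start the load job and wait for it to finish.
            Job job = bigquery.create(JobInfo.of(config));
            job = job.waitFor();
            if (job == null || job.getStatus().getError() != null) {
                throw new RuntimeException("Load job failed");
            }
        }
    }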
I couldn't find a direct way to get the URL, but I found an indirect way; the steps are below:
1. Go to GCS
2. Go into the folder in which the file has been uploaded
3. Click on the three dots at the right end of your file's row
4. Click Rename
5. Click on the gsutil equivalent link
6. Copy only the URL portion
Follow these steps:
1. Go to GCS
2. Go into the folder in which the file has been uploaded
3. At the top you can see the Overview option
4. There you will see the Link URL and the gsutil link
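If you prefer not to click through the console, you can also list the objects programmatically and compose the gs:// URI yourself. A short sketch with the Cloud Storage Java client, using a placeholder bucket name:

    import com.google.cloud.storage.Blob;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    public class ListGcsUris {
        public static void main(String[] args) {
            Storage storage = StorageOptions.getDefaultInstance().getService();

            // Print the gs:// URI of every object in the bucket (placeholder bucket name).
            for (Blob blob : storage.list("my-bucket").iterateAll()) {
                System.out.println("gs://" + blob.getBucket() + "/" + blob.getName());
            }
        }
    }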
Retrieving the Google Cloud Storage URI
To create an external table using a Google Cloud Storage data source, you must provide the Cloud Storage URI.
The Cloud Storage URI comprises your bucket name and your object (filename). For example, if the Cloud Storage bucket is named mybucket and the data file is named myfile.csv, the bucket URI would be gs://mybucket/myfile.csv. If your data is separated into multiple files, you can use a wildcard in the URI. For more information, see Cloud Storage Request URIs.
BigQuery does not support source URIs that include multiple consecutive slashes after the initial double slash. Cloud Storage object names can contain multiple consecutive slash ("/") characters. However, BigQuery converts multiple consecutive slashes into a single slash. For example, the following source URI, though valid in Cloud Storage, does not work in BigQuery: gs://[BUCKET]/my//object//name.
To retrieve the Cloud Storage URI:
Open the Cloud Storage web UI.
Browse to the location of the object (file) that contains the source data.
At the top of the Cloud Storage web UI, note the path to the object. To compose the URI, replace gs://[BUCKET]/[FILE] with the appropriate path, for example, gs://mybucket/myfile.json. [BUCKET] is the Cloud Storage bucket name and [FILE] is the name of the object (file) containing the data.
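For completeness, here is a rough sketch of creating such an external table with the BigQuery Java client; the dataset, table, schema, and source URI are placeholders:

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.ExternalTableDefinition;
    import com.google.cloud.bigquery.Field;
    import com.google.cloud.bigquery.FormatOptions;
    import com.google.cloud.bigquery.LegacySQLTypeName;
    import com.google.cloud.bigquery.Schema;
    import com.google.cloud.bigquery.TableId;
    import com.google.cloud.bigquery.TableInfo;

    public class CreateExternalTable {
        public static void main(String[] args) {
            BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

            // Cloud Storage URI of the source object (wildcards are allowed); placeholder.
            String sourceUri = "gs://mybucket/myfile.csv";

            // Placeholder schema matching the CSV columns.
            Schema schema = Schema.of(
                Field.of("name", LegacySQLTypeName.STRING),
                Field.of("value", LegacySQLTypeName.INTEGER));

            ExternalTableDefinition definition =
                ExternalTableDefinition.of(sourceUri, schema, FormatOptions.csv());

            TableId tableId = TableId.of("my_dataset", "my_external_table");  // placeholder
            bigquery.create(TableInfo.of(tableId, definition));
        }
    }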
If you need help with subdirectories, check out https://cloud.google.com/storage/docs/gsutil/addlhelp/HowSubdirectoriesWork
And see https://cloud.google.com/storage/images/gsutil-subdirectories-thumb.png if you want to see how gsutil provides a hierarchical view of objects in a bucket.
I installed Apache Marmotta with Docker using docker pull apache/marmotta on an AWS server. I am able to see Core Services (http://34.229.180.217:8080/marmotta/core/admin/import) via the Import interface in my browser. However, I am not able to import RDF files through the interface.
The files (RDF and TTL) are on both my local machine and on the server. The files are very large (over 2 GB each) and so I'd like to use KiWi Loader to bring them into Marmotta so I can run SPARQL queries against them.
Is there a parameter I can adjust in Marmotta to allow for larger file imports? Otherwise, is it possible to use the KiWi Loader through the Docker installation? Any suggestions would be great.
You can import using the local directory. Just copy your RDF/TTL files to $MARMOTTA_HOME/import. You can define your context base in a file-like structure. For example, if you want to store your data in http://34.229.180.217:8080/marmotta/foo, just store your file in $MARMOTTA_HOME/import/foo; here you are using the default context base. However, if you want to store it in another context, create a folder with the URL-encoded context name. For more details on the options Apache Marmotta provides for importing files, check the documentation.
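As a small illustration of that layout, here is a sketch that stages a TTL file into the import directory for a specific context; the paths and the context URI are placeholders, and the URL-encoded folder name follows the convention described above:

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class StageMarmottaImport {
        public static void main(String[] args) throws Exception {
            Path marmottaHome = Paths.get("/var/lib/marmotta");  // placeholder $MARMOTTA_HOME
            Path source = Paths.get("/data/mydata.ttl");         // placeholder RDF/TTL file

            // Folder name is the URL-encoded context URI the triples should land in (placeholder).
            String context = "http://34.229.180.217:8080/marmotta/context/mydata";
            String encoded = URLEncoder.encode(context, StandardCharsets.UTF_8.name());

            Path target = marmottaHome.resolve("import").resolve(encoded);
            Files.createDirectories(target);
            Files.copy(source, target.resolve(source.getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }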
In my experience, I have had a lot of problems uploading big files. I think it is mostly because Apache Marmotta commits the data only after everything is in memory, which is how the KiWi implementation works. I don't know if you can upload in chunks, and using the importer.batchsize property hasn't worked well for me.
Is it possible to read/write local data without using DirectPipelineRunner?
Suppose I create a Dataflow template in the cloud and I want it to read some local data. Is this possible?
Thanks.
You will want to stage your input files to Google Cloud Storage first and read from there. Your code will look something like this:
p.apply(TextIO.read().from("gs://bucket/folder"))
where gs://bucket/folder is the path to your folder in GCS, and assuming you are using the latest Beam release (2.0.0). Afterwards, you can download the output from GCS to your local computer.
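Putting that together, a minimal end-to-end sketch for Beam 2.0.0; the bucket and paths are placeholders, and the pipeline both reads from and writes back to GCS so you can download the results afterwards:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.TextIO;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;

    public class GcsTextPipeline {
        public static void main(String[] args) {
            PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
            Pipeline p = Pipeline.create(options);

            // Read text files staged in GCS, then write results back to GCS (placeholder paths).
            p.apply("ReadFromGcs", TextIO.read().from("gs://bucket/folder/*"))
             .apply("WriteToGcs", TextIO.write().to("gs://bucket/output/result"));

            p.run().waitUntilFinish();
        }
    }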
I need help moving the images I have from Parse to S3 on AWS. I have viewed numerous supposed guides and GitHub projects, but everything stops short of giving you all the information. One even says you need a GCS bucket set up, but gives no details on how to set one up. Can someone please help me with this? I have the S3 File Adapter in my index.js all set up for the app, but none of the images are there; they are still hosted on Parse.
If you are referring to old images that were hosted with parse.com and that you want to move across to your own environment, then it can be done with this utility tool:
Get all files across all classes in a Parse database. Print file URLs
to console OR transfer to S3, GCS, or filesystem. Rename files so that
Parse Server no longer detects that they are hosted by Parse. Update
MongoDB with new file names.
https://github.com/parse-server-modules/parse-files-utils
Moving forward, if you have set up your S3 bucket correctly, all new images from your app will be stored there.
https://github.com/ParsePlatform/parse-server/wiki/Configuring-File-Adapters
I'm currently looking to move my Umbraco installation over to a load-balanced setup. In order to do this, I need to move the media library over to a CDN like Amazon's S3. I tested a few plugins that allow uploading to S3, but they all list media files from the local file directory. This flat out will not work.
I was thinking I would write the code to browse the CDN, but how can I override the built-in media library code so that it uses my version instead? I didn't see a clear way to do this in the docs.
I am using this plugin for Amazon S3: http://our.umbraco.org/projects/website-utilities/amazon-s3-media. The source code is here: https://bitbucket.org/gibedigital/umbraco-amazons3provider. The developer recently updated the plugin. The plugin does not use the local file system, and the developer was pretty responsive (and made a few updates for me when I asked).
However, I am adding to his project because his plugin did not allow saving within a predefined directory (Amazon's virtual directories). But his source code is a start.
Good luck,
Robin