MLflow - How can I run python code using a REST API - machine-learning

I'm a newbie in machine learning. Just a simple question: how can I run Python code using the REST API?
Here is the documentation
https://mlflow.org/docs/latest/rest-api.html
But there are no examples for the REST API.
I just created an experiment, but I can't figure out how to create a run.
Are there any examples like the one below? (This just creates an experiment.)
curl -X POST http://localhost:5000/api/2.0/preview/mlflow/experiments/create -d '{"name":"TEST"}'
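For reference, the same preview API also exposes run endpoints (runs/create and runs/log-metric in the REST docs linked above). Below is a minimal Python sketch, assuming a tracking server on localhost:5000 as in the experiments/create example; the helper names are mine, and the field names follow the REST docs (experiment_id as a string, times in epoch milliseconds).

```python
import json
import urllib.request

# Same server as the experiments/create example above.
BASE = "http://localhost:5000/api/2.0/preview/mlflow"

def create_run_request(experiment_id, start_time_ms):
    """Payload for POST /runs/create: start a new run in an existing
    experiment. start_time_ms is epoch time in milliseconds."""
    return "/runs/create", {"experiment_id": experiment_id,
                            "start_time": start_time_ms}

def log_metric_request(run_id, key, value, timestamp_ms):
    """Payload for POST /runs/log-metric: record one metric value for a run."""
    return "/runs/log-metric", {"run_id": run_id, "key": key,
                                "value": value, "timestamp": timestamp_ms}

def post(endpoint, payload):
    """POST a JSON payload to the tracking server, return the parsed response."""
    req = urllib.request.Request(
        BASE + endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The runs/create response carries the new run's ID (under run.info), which you then pass to log-metric. Note that the REST API only records runs and their data; it does not execute your Python code for you - you run the training script yourself and report results through these endpoints.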


Why can I not run a Kafka connector?

Background
Firstly, a bit of background: I am trying to learn a bit more about Kafka and Kafka Connect. In that vein, I'm following along with an early-release book, 'Kafka Connect' by Mickael Maison and Kate Stanley.
Run Connectors
Very early on (Chapter 2, 'Components in a Connect data pipeline') they give an example of how to run connectors. Note that the authors are not using Confluent. At this early stage, we are advised to create a file named sink-config.json and then create a topic called topic-to-export with the following command:
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
--create --replication-factor 1 --partitions 1 --topic topic-to-export
We are then instructed to "use the Connect REST API to start the connector with the configuration you created"
$ curl -X PUT -H "Content-Type: application/json" \
  http://localhost:8083/connectors/file-sink/config --data @sink-config.json
The Error
However, when I run this command it brings up the following error:
{"error_code":500,"message":"Cannot deserialize value of type `java.lang.String` from Object value (token `JsonToken.START_OBJECT`)\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 36] (through reference chain: java.util.LinkedHashMap[\"config\"])"}
Trying to fix the error
Keeping in mind that I'm still trying to learn Kafka and Kafka Connect, I did a fairly simple search, which brought me to a post on Stack Overflow suggesting that maybe this should have been a POST, not a PUT. However, changing the command to:
curl -d @sink-config.json -H "Content-Type: application/json" -X POST http://localhost:8083/connectors/file-sink/config
simply brings up another error:
{"error_code":405,"message":"HTTP 405 Method Not Allowed"}
I'm really not sure where to go from here, as this 'seems' to be the way you should be able to get a connector running. For example, this intro to connectors by Baeldung also seems to describe this way of doing things.
Does anyone have any ideas what is going on? I'm not sure where to start...
First, thanks for taking a look at the early-access version of our book.
You found a mistake in this example!
The recommended way to start a connector is the PUT /connectors/file-sink/config endpoint; however, the example JSON we provided is not correct.
The JSON file should be something like:
{
"name": "file-sink",
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"tasks.max": 1,
"topics": "topic-to-export",
"file": "/tmp/sink.out",
"value.converter": "org.apache.kafka.connect.storage.StringConverter"
}
The mistake crept in because there is another endpoint that can be used to start connectors, POST /connectors, and the JSON we provided is in the shape that endpoint expects.
We recommend you use PUT /connectors/file-sink/config as the same endpoint can also be used to reconfigure connectors. In addition, the same JSON file can also be used with the PUT /connector-plugins/{connector-type}/config/validate endpoint.
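To make the two payload shapes concrete, here is a small Python sketch (my own illustration, not from the book): the flat map is what PUT /connectors/file-sink/config expects, and a hypothetical helper wraps it into the nested shape that POST /connectors expects.

```python
import json

# Flat shape: what PUT /connectors/file-sink/config expects.
put_config = {
    "name": "file-sink",
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": 1,
    "topics": "topic-to-export",
    "file": "/tmp/sink.out",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
}

def to_post_payload(config):
    """Wrap a flat PUT-style config into the nested shape POST /connectors
    expects: {"name": ..., "config": {...}}. Hypothetical helper, for
    illustration only."""
    return {
        "name": config["name"],
        "config": {k: v for k, v in config.items() if k != "name"},
    }

# The 500 error earlier came from sending this nested shape to the PUT endpoint.
print(json.dumps(to_post_payload(put_config), indent=2))
```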
Thanks again for spotting the mistake and reporting it, we'll fix the example in the coming weeks. We'll also reply to your emails about the other questions shortly.

Google Endpoints YAML file update: Is there a simpler method

When using Google Endpoints with Cloud Run to provide the container service, one creates a YAML file (Swagger 2.0 format) to specify the paths with all configurations. For EVERY CHANGE, the following is what I do (based on the documentation: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions):
Step 1: Deploying the Endpoints configuration
gcloud endpoints services deploy openapi-functions.yaml \
--project ESP_PROJECT_ID
This gives me the following output:
Service Configuration [CONFIG_ID] uploaded for service [CLOUD_RUN_HOSTNAME]
Then,
Step 2: Download the script to the local machine and run it
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
-c CONFIG_ID -p ESP_PROJECT_ID
Then,
Step 3: Redeploy the Cloud Run service
gcloud run deploy CLOUD_RUN_SERVICE_NAME \
--image="gcr.io/ESP_PROJECT_ID/endpoints-runtime-serverless:CLOUD_RUN_HOSTNAME-CONFIG_ID" \
--allow-unauthenticated \
--platform managed \
--project=ESP_PROJECT_ID
Is this the process for every API path change? Or is there a simpler direct method of updating the YAML file and uploading it somewhere?
Thanks.
Based on the documentation, yes, this is the process for every API path change. However, this may change in the future, as this feature is currently in beta, as stated in the documentation you shared.
You may want to look over here to create a feature request for GCP so they can improve this feature in the future.
In the meantime, I would advise creating a script for this process: it is always the same steps, so a script that runs these commands would help you automate the task.
Hope you find this useful.
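As a sketch of that suggestion (Python rather than bash, purely for illustration; every name and ID below is a placeholder for your own values), the three steps can be wrapped in one function. In a real script you would also parse the new CONFIG_ID out of step 1's output before running steps 2 and 3.

```python
import subprocess

def redeploy_commands(yaml_file, project, hostname, config_id, service):
    """The three steps from the question as argv lists.
    All arguments are placeholders for your own project/service values."""
    image = f"gcr.io/{project}/endpoints-runtime-serverless:{hostname}-{config_id}"
    return [
        # Step 1: deploy the Endpoints configuration
        ["gcloud", "endpoints", "services", "deploy", yaml_file,
         "--project", project],
        # Step 2: rebuild the ESP image for the new config
        ["./gcloud_build_image", "-s", hostname, "-c", config_id, "-p", project],
        # Step 3: redeploy the Cloud Run service with that image
        ["gcloud", "run", "deploy", service, f"--image={image}",
         "--allow-unauthenticated", "--platform", "managed",
         f"--project={project}"],
    ]

def redeploy(*args):
    """Run the three steps in order, stopping on the first failure."""
    for cmd in redeploy_commands(*args):
        subprocess.run(cmd, check=True)
```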
When you use the default Cloud Endpoints image as described in the documentation, the parameter --rollout_strategy=managed is set automatically. You may have to wait up to one minute before the new configuration is used; at least, that is what I observe in my deployments. Give it a try!

Using REST API to deploy Liberty Docker containers

Has anyone been able to use the WebSphere Liberty REST API to deploy remote Docker containers? The docs describe all the steps, but I am not able to reproduce the results: I get an error when calling the REST deploy function (details are posted on the other forum).
If anyone has been able to do it, please let me know; it would be great if you could share how you did it.
Not long after posting this question I tried using the curl command with the JSON header spec and now it works. Here is the curl command I am using that works:
curl --verbose --insecure -X POST --data @docker_deploy.json --header "Content-Type: application/json" -u ${ADMIN_USER}:${ADMIN_PASSWORD} https://${CONTROLLER_HOST_NAME}:${CONTROLLER_PORT}/ibm/api/collective/v1/deployment/deploy
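For anyone who prefers to issue the same call without curl, here is a rough Python equivalent (my own sketch, not IBM's; the URL and headers mirror the curl command above, and disabling certificate verification matches --insecure, so use it in development only):

```python
import base64
import json
import ssl
import urllib.request

def deploy_request(host, port, user, password, payload):
    """Build the same request as the curl command above: POST the JSON
    deployment document to the collective deploy endpoint with basic auth."""
    url = f"https://{host}:{port}/ibm/api/collective/v1/deployment/deploy"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + token,
        },
        method="POST",
    )

# Equivalent of curl --insecure: skip certificate verification (dev only).
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE
# with urllib.request.urlopen(deploy_request(...), context=insecure) as resp:
#     print(resp.read())
```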

neo4j: load2neo not working with arrays

I'm trying to use load2neo to read in a graph in Geoff format (which I wrote out using load2neo!). The extension is installed properly and works with simple queries:
$ curl -X POST http://localhost:7474/load2neo/load/geoff -d '(alice)-[:KNOWS]->(bob)'
returns:
{"alice":70,"bob":69}
and
$ curl -X POST http://localhost:7474/load2neo/load/geoff -d '(bob)-[:KNOWS]->(carol)'
{"carol":72,"bob":71}
Both show up fine in the graph browser. But when I try to do both at once:
curl -X POST http://localhost:7474/load2neo/load/geoff -d '[(alice)<-[:KNOWS]->(bob),(bob)-[:KNOWS]->(carol)]'
it fails silently. It also fails silently with:
curl -X POST http://localhost:7474/load2neo/load/geoff -d @test.geoff
with the file contents:
[(alice)-[:KNOWS]->(bob),(bob)-[:KNOWS]->(carol)]
It's not an authentication problem, and I don't think I have the syntax wrong (I copied it directly from the files that load2neo itself output, and double-checked it against the spec), but I just can't figure out why it's not working. Any ideas?
This is with load2neo 0.6.0 downloaded from the website and Neo4J 2.3.1, community edition.

how to make facebook graph api curl request in ruby

From the Facebook Graph API docs, this is the curl request I need to make. It works from the console. Now I want to move this POST into a delayed job. How can I do this:
curl -F 'access_token=xxxxxxxxx' \
-F 'photo=http://xxxxxx.com/photos/13' \
'https://graph.facebook.com/me/my_app_namespace:upload'
PS: I have the token and everything else; I just want to write this curl request in Ruby.
For quick and dirty, just send the command to the shell with backticks or system.
For a more elegant and efficient (having native bindings) solution, use curb Ruby gem.
Curb (probably CUrl-RuBy or something) provides Ruby-language bindings for libcurl(3), a fully-featured client-side URL transfer library. cURL and libcurl live at http://curl.haxx.se/.
Net::HTTP should be all you need for a simple request like this.
