I want to use TensorFlow Serving for a custom model (No pre-trained starting point).
I've made it through the pre-Kubernetes part of the TensorFlow Serving tutorial for Inception, using Docker: http://tensorflow.github.io/serving/serving_inception
I understand (roughly) that the Bazel build is central to how everything works, but I am trying to understand how the generated predict_pb2 from tensorflow_serving.apis works, so that I can swap in my own custom model.
To be clear, this is what main() in inception_client.py currently looks like:
def main(_):
  host, port = FLAGS.server.split(':')
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  # Send request
  with open(FLAGS.image, 'rb') as f:
    # See prediction_service.proto for gRPC request/response details.
    data = f.read()
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'inception'
    request.model_spec.signature_name = 'predict_images'
    request.inputs['images'].CopyFrom(
        tf.contrib.util.make_tensor_proto(data, shape=[1]))
    result = stub.Predict(request, 10.0)  # 10 secs timeout
    print(result)
https://github.com/tensorflow/serving/blob/65f50621a192004ab5ae68e75818e94930a6778b/tensorflow_serving/example/inception_client.py#L38-L52
It's hard for me to unpack and debug what predict_pb2.PredictRequest() is doing since it's Bazel-generated. But I would like to re-point this to a totally different saved model, with its own .pb file, etc.
How can I refer to a different saved model?
PredictionService, defined here, is the gRPC API service definition which declares what RPC functions the server will respond to. From this proto, bazel/protoc can generate code that will be linked in the server and in the client (predict_pb2 that you mentioned).
The server extends the autogenerated service here and provides an implementation for each function.
Python clients use the generated predict_pb2 code to build a request and send an RPC with the right API.
predict_pb2.PredictRequest() is a PredictRequest proto defined here, which is the request type for the Predict() API call (see the PredictionService proto definition linked above). That part of the code simply builds a request, and result = stub.Predict(request, 10.0) is where the request is actually sent.
In order to use a different model, you'd just need to change the ModelSpec's model name to your model. In the example above, the server loaded the inception model with the name "inception", so the client queries it with request.model_spec.name = 'inception'. To use your model instead, you'd just need to change the name to your model name. Note that you'll probably also need to change the signature_name to your custom name, or remove it entirely to use the default signature (assuming it's defined).
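For concreteness, here is a minimal sketch of the same client pointed at a different saved model, reusing exactly the calls from inception_client.py above. The model name 'my_model', the signature name 'serving_default' and the input key 'inputs' are placeholders; they have to match what you exported and the --model_name the server was started with.

# Sketch only: the placeholders must match your exported SavedModel and server flags.
from grpc.beta import implementations
import tensorflow as tf

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2


def query_my_model(host, port, raw_bytes):
  channel = implementations.insecure_channel(host, int(port))
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

  request = predict_pb2.PredictRequest()
  request.model_spec.name = 'my_model'                    # must match the served model name
  request.model_spec.signature_name = 'serving_default'   # or omit to use the default signature
  request.inputs['inputs'].CopyFrom(
      tf.contrib.util.make_tensor_proto(raw_bytes, shape=[1]))

  return stub.Predict(request, 10.0)  # 10 secs timeout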
Because I am rewriting a legacy app, I cannot change what the clients send or accept. I have to accept and return JSON, HTML, and an in-house XML-like serialization.
They do, fortunately, set headers that describe what they are sending and what they accept.
So right now, what I do is have a decoder module and an encoder module with methods that are basically if/elif/else chains. When a route is ready to process or return something, I call the decoder/encoder module with the Python object and the header field, which returns the formatted object as a string, and the route processes the result or returns Response().
I am wondering if there is a more Quart-native way of doing this.
I'm also trying to figure out how to make this work with Quart-Schema. I see from the docs that one can do app.json_encoder = <class>, and I suppose I could sub in a different processor there, but that seems application-global; there's no way to set it based on what the client sends. Ideally, it would be great if I could just pass the results of a dynamically chosen parser to Quart-Schema and let it do its thing on Python objects.
Thoughts and suggestions welcome. Thanks!
You can write your own decorator, much like Quart-Schema's @validate_headers(). Inside the decorator, check the Content-Type header, parse the body accordingly, and pass the parsed object to the wrapped func(...).
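As a minimal sketch (the decorator name, the chosen formats and the decode_inhouse helper are assumptions for illustration, not Quart or Quart-Schema APIs), a content-negotiating decorator could look like this:

import functools
import json

from quart import Quart, request

app = Quart(__name__)


def decode_inhouse(raw):
    # Placeholder for the in-house XML-like parser (assumption).
    return raw.decode()


def parse_by_content_type(func):
    """Decode the request body based on Content-Type and pass the resulting
    Python object to the wrapped route as `payload`."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        raw = await request.get_data()
        content_type = request.headers.get("Content-Type", "application/json")
        if content_type.startswith("application/json"):
            payload = json.loads(raw)
        elif content_type.startswith("text/html"):
            payload = raw.decode()           # hand HTML through as a string
        else:
            payload = decode_inhouse(raw)    # the in-house XML-like format
        return await func(*args, payload=payload, **kwargs)
    return wrapper


@app.route("/orders", methods=["POST"])
@parse_by_content_type
async def create_order(payload):
    # `payload` is already a plain Python object here.
    return {"received": payload}

Encoding the response could mirror this with a second decorator that inspects the Accept header before building the Response.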
Context
I've been running an intranet admin panel in Symfony 3.x for several years. The users log in with Google OAuth and the system checks whether the email matches a validated one in a lookup list. The OAuth client handling is done with the "HWI OAuth Bundle".
In order to start a clean migration of this admin panel to SF4 and later to SF5, we've started breaking our monolith into microservices running in Docker.
Moving to docker behind a reverse proxy
Today we moved this admin panel into a Docker container. The public Apache 2 does a ProxyPass towards the container running the admin panel. Let's imagine the container runs at http://1.2.3.4:7540 and the public address is https://admin-europe.example.com.
What happens is that the Symfony application uses a relative URL, as in the route google_login configured in routing.yml and in the service configuration defined in security.yml:
# routing.yml
# Required by the HWI OAuth Bundle.
hwi_oauth_redirect:
    resource: "@HWIOAuthBundle/Resources/config/routing/redirect.xml"
    prefix: /connect

hwi_oauth_connect:
    resource: "@HWIOAuthBundle/Resources/config/routing/connect.xml"
    prefix: /connect

hwi_oauth_login:
    resource: "@HWIOAuthBundle/Resources/config/routing/login.xml"
    prefix: /login

# HWI OAuth Bundle route needed for each resource provider.
google_login:
    path: /login/check-google

logout:
    path: /logout

# security.yml
security:
    firewalls:
        # disables authentication for assets and the profiler, adapt it according to your needs
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false

        secured_area:
            anonymous: true
            logout:
                path: /logout
                target: /
                handlers: [ admin.security.logout.handler ]
            oauth:
                resource_owners:
                    google: "/login/check-google"
                login_path: /
                use_forward: false
                failure_path: /
                oauth_user_provider:
                    service: admin.user.provider
So when the application was not dockerized, it ran properly, because the route requested as the "redirect route" to Google was https://admin-europe.example.com/login/check-google.
Nevertheless, now that it's inside the container, when the HWI bundle builds the data block to send to Google, it asks for http://1.2.3.4:7540/login/check-google to be authorised as the "redirect URI", but of course it should not. The redirect URI should continue to be https://admin-europe.example.com/login/check-google.
I naturally get this error message:
The reverse proxy
We already have the ProxyPassReverse in the reverse proxy and, in fact, the very same configuration has been working hassle-free for over a month with another microservice we already successfully moved (but that service did not need auth; it was a public site).
This is expected: ProxyPassReverse rewrites HTTP headers, but the Google OAuth info block is, naturally, not something ProxyPassReverse handles.
The problem
The problem here is not getting this address validated (putting a domain alias on the private IP address, etc.).
The problem here is how to generate the "proper public URL" from inside the container without creating a hard dependency between the container contents and the environment it is going to run in. Doing so would be an anti-pattern.
Exploring solutions
Of course the "easy" solution would be to "hardcode" the "external route" inside the container.
But this has a flaw. If I also want the same container to be accessible from, say, https://admin-asia.example.com/ (note the -asia instead of the -europe), I'll run into problems, as the Asia users will be redirected to the Europe route. This is just an example; don't mind the specific europe-asia thing... the point is that the container should not be conscious of the surrounding architecture. Or at least, conscious enough to "interact", but definitely not with things that depend on the environment "hardcoded" inside the container.
I.e.: forget about the -europe and -asia thing. Imagine the access is admin-1111. It does not make sense that I have to "recompile" and "redeploy" the container if one day I want it to be accessible as admin-2222.
Temporary solution
I think it would solve the problem to point both the route in routing.yml and the config in security.yml to a "parameter" (in 3.x, in parameters.yml) and then move that into an environment variable when updating to SF4, but I'm unsure how Symfony's cache compiler would behave with a route that does not have a fixed value but "changes dynamically".
Then I would pass the value of the redirection when the container is started. This would solve the problem only partially: the whole container would be bound to a redirect route set at start time, and it still would not solve the case of the same container instance being accessed via different names, thus needing multiple redirect routes. When running non-dockerized this works, as Symfony just takes the "hostname" to build the absolute path from a relative-path definition.
Investigation so far
When accessing, the browser shows I'm going to
https://accounts.google.com/o/oauth2/auth
?response_type=code
&client_id=111111111111-aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa.apps.googleusercontent.com
&scope=email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.profile.emails.read+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.login
&redirect_uri=http%3A%2F%2Fmy.nice.domain.example.com%3A7040%2Fapp_dev.php%2Flogin%2Fcheck-google
Here we see that the redirect_uri parameter is the place where we'll return after passing the control to google momentarily.
So somebody needs to be building this URL.
I searched for "redirect_uri" within the source code and found that the involved classes are GoogleResourceOwner, which extends GenericOAuth2ResourceOwner.
Both classes seem to belong to the domain layer, as the tests pass $redirectUri as a string which needs to be already built by the caller.
The involved method is public function getAuthorizationUrl($redirectUri, array $extraParameters = array()). This receives the redirect URI and builds the auth URI with the redirect URI encoded as a parameter.
So, who are the consumers/clients of getAuthorizationUrl()?
I only found one single client usage in OAuthUtils, in a line that says return $resourceOwner->getAuthorizationUrl($redirectUrl, $extraParameters); within the function public function getAuthorizationUrl(Request $request, $name, $redirectUrl = null, array $extraParameters = array())
I see that this OAuthUtils mainly acts as an adapter between the Symfony Request and the OAuth domain model. Within this method we mostly find code to create the $redirectUri.
The cleanest solution for me would be to create a child class OAuthUtilsBehindProxy inheriting from OAuthUtils, overriding the method getAuthorizationUrl() and having it interpret the X-FORWARDED-* headers of the request, and then have dependency injection autowire my class everywhere OAuthUtils is used, with the hope that no one is doing a new OAuthUtils and every user of this class gets it passed in the constructor.
This would be clean and would work.
But frankly it seems like overkill to me. I'm pretty sure someone before me has put an app that needs Google OAuth with HWI behind a reverse proxy, and I wonder if there's a "config option" that I'm missing or whether I really have to re-code all this and inject it via DI.
So, question
How do I get the HWI OAuth bundle to behave properly when running in a Docker container behind a reverse proxy, with regard to how the "redirect route" for the Google OAuth service is built?
Is there any way to tell either the HWI bundle or Symfony to add a "full-host" prefix AS A FUNCTION of the X-FORWARDED-* headers, "if available"? This would leave the Docker image "fixed" and able to run in "any" environment.
The underlying reason is the way Symfony generates full addresses from a relative path or route name.
Here's the investigation:
The method HWI/OAuthUtils::getAuthorizationUrl() is the one that generates the OAuth auth URI, and it consumes the method Symfony/HttpUtils::generateUri() to get the absolute URI of the redirect_to callback that will be encoded inside the auth URI.
The method Symfony/HttpUtils::generateUri() generates an absolute URI (that in our case will be the callback) and to do so, the method handles 3 general cases:
The parameter is already an absolute URI (the return is the parameter without further processing)
The parameter is a relative URL (the function calls the Request class to build the proto + host + port + project-path prefix to prepend to the relative URI)
The parameter is a route name (the function calls the Router class to build the absolute URI)
In my example I was configuring a relative URL (google: "/login/check-google") in the security.yml so HttpUtils was delegating into the Request class.
Looking at the source of the Request class we observe:
The Request class is able to use proxy headers to build the absolute URL.
But for security, by default Symfony does not trust that a proxy exists merely because there are X-FORWARDED-* headers in the request.
Indeed, this is both more secure and more flexible.
There are 2 levels of security:
Somewhere we need to tell the Request class the list of trusted IPs that are proxies accessing the application.
Somewhere else we need to tell the Request class which specific proxy headers are trusted and which are not; it even supports different header standards (the RFC header, non-RFC Apache headers, etc.).
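Purely as an illustration of the mechanics (this is a conceptual sketch in Python, not Symfony code), the decision the Request class ends up making once a proxy is trusted looks roughly like this; the trusted networks are the same ones used in the front controller shown below:

from ipaddress import ip_address, ip_network

# Networks the application trusts to act as reverse proxies.
TRUSTED_PROXIES = [ip_network("192.168.104.0/24"), ip_network("10.0.0.0/8")]


def absolute_url(remote_addr, headers, local_host, local_port, path):
    """Rebuild the public absolute URL, preferring X-Forwarded-* headers
    only when the immediate peer is a trusted proxy."""
    from_trusted_proxy = any(ip_address(remote_addr) in net for net in TRUSTED_PROXIES)
    if from_trusted_proxy and "X-Forwarded-Host" in headers:
        scheme = headers.get("X-Forwarded-Proto", "http")
        host = headers["X-Forwarded-Host"]
        port = headers.get("X-Forwarded-Port", "443" if scheme == "https" else "80")
    else:
        scheme, host, port = "http", local_host, str(local_port)
    default_port = "443" if scheme == "https" else "80"
    netloc = host if port == default_port else host + ":" + port
    return scheme + "://" + netloc + path


# Request coming through the trusted Apache proxy: the public name wins.
print(absolute_url("10.0.0.5",
                   {"X-Forwarded-Proto": "https",
                    "X-Forwarded-Host": "admin-europe.example.com"},
                   "1.2.3.4", 7540, "/login/check-google"))
# -> https://admin-europe.example.com/login/check-google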
As stated at https://symfony.com/blog/fixing-the-trusted-proxies-configuration-for-symfony-3-3, you need to configure the trusted proxies in the front controller by calling the static method Request::setTrustedProxies().
So adding this couple of lines to the front controller, one killing the unneeded header and the other listing the IP ranges of the proxies, solved the problem:
# app.php
<?php
use Symfony\Component\HttpFoundation\Request;
$loader = require __DIR__.'/../app/autoload.php';
include_once __DIR__.'/../var/bootstrap.php.cache';
$kernel = new AppKernel('prod', false);
$kernel->loadClassCache();
Request::setTrustedHeaderName( Request::HEADER_FORWARDED, null ); # <-- Kill unneeded header.
Request::setTrustedProxies( [ '192.168.104.0/24', '10.0.0.0/8' ] ); # <-- Trust any proxy that lives in any of those two private nets.
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);
With this change:
The Symfony Request is able to build correct public absolute addresses from relative addresses when called through a proxy, by deriving the host from HTTP_X_FORWARDED_HOST and HTTP_X_FORWARDED_PORT instead of HTTP_HOST and SERVER_PORT.
So is Symfony HttpUtils, as it delegates to Request.
HWI is in turn able to build a correct absolute redirect_to callback.
HWI can set the proper callback encoded inside the auth URI.
The auth URI containing the proper absolute URI, taking the proxy effect into account, is sent to Google.
Google sees the "public URI" as the one registered in the Google configuration.
The workflow completes and the login process can end successfully.
I am using "gulp-connect" as a development server and I am trying to implement React Router 1.0.0-rc1.
Currently I am using "createHashHistory", which adds junk like ?_k=ckuvup to the URL; this is deliberate, as explained in the documentation. I am OK with it until I send query strings along with the URL, and my link looks something like this, with the junk appended just after the domain name rather than at the end:
http://localhost:8080/#/?_k=y754gg/jobs?latitude=27.686784000000003&longitude=85.2690875&query_location=Liverpool, United Kingdom&query=fjdkf
Expected URL (something like this) :
http://localhost:8080/#/jobs?latitude=27.686784000000003&longitude=85.2690875&query_location=Liverpool, United Kingdom&query=fjdkf/?_k=y754gg
I could have used "createBrowserHistory", which gives a much cleaner URL, but the problems are:
1) Server configuration. The example provided only shows how to do it in Express. I am planning to use nginx in production and am using gulp-connect in development. As I could not find any reference on how to do it on these servers, I had to choose "createHashHistory".
2) My backend is on Rails, and if I throw my front end into the "public" folder, a URL with # should separate client and server routes. But I keep thinking there must be a way to use createBrowserHistory with some configuration in nginx.
My priority in this question is the first part, about appending the key at the end. Any reference on how the configuration is done on different servers will be appreciated.
You should be able to disable the query key by setting queryKey: false when creating your history:
var history = History.createHashHistory({
  queryKey: false
});
I would like to know the application of Pre-Processors and Post-Processors in JMeter.
As the name suggests, these components are used to process something (the request, the response, or custom operations) before and after the sampler (request).
Pre-processors:
These components are used before the request to perform custom actions.
Ex:
Suppose I want to add something to the request before sending it to the server; then a pre-processor is added. For example, it could be fetching some information from a DB, or regex operations. After performing those operations we can pass their results to the request. Thus we can modify/update the request or its parameters before sending the request to the server.
Post-processors:
These components are used after the response to a request has arrived, to perform custom actions.
Ex:
Suppose I have asked for something on Google and I want to work with the response, for actions such as the ones below:
To validate response
Extract something and process it to pass data to the next request
Perform custom actions like DB operations, file operations etc.
Then post-processors can be used; a scripted example follows below.
See the snapshot above:
The components pointed to by the arrow will be executed before the request is sent to the server, and the components within the square will be executed after the response has arrived.
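For instance, a scripted post-processor that extracts a value from the response for the next request could look like the sketch below. This assumes a JSR223 PostProcessor with the Jython (Python) engine available on JMeter's classpath; prev, vars and log are JMeter's standard script bindings, and the "token" field is purely illustrative.

import re

# `prev` is the SampleResult of the sampler this post-processor is attached to.
body = prev.getResponseDataAsString()

# Extract a hypothetical token from the response body.
match = re.search(r'"token"\s*:\s*"([^"]+)"', body)
if match:
    # Store it as a JMeter variable so the next sampler can use ${auth_token}.
    vars.put("auth_token", match.group(1))
else:
    # Treat a missing token as a failed sample and log a warning.
    prev.setSuccessful(False)
    log.warn("auth_token not found in response")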
I hope this was helpful.
Pre-processors are designed to provide any setup actions required for the test sample, like generating some unique test data or amending the parent sampler dynamically.
Post-processors are designed to tear down the sampler or, most commonly, to extract "interesting" bits from the response for later re-use (this is called "correlation").
Pre- and post-processor execution time isn't included in test reports. If you want to change this behaviour, you need to use a Transaction Controller.
Shouldn't PUT be used to create and POST used to update, since PUT is idempotent?
That way, multiple PUTs for the same order would place only one order?
The difference is that a PUT is for a known resource, and is therefore used for updating, as stated here in RFC 2616.
The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource.
I do see where you are coming from based on the names themselves, however.
I usually look at POST as targeting the URI that will handle the content of my request (in most cases the params as form values), thus creating a new resource, and PUT as targeting the URI that is the subject of my request (/users/1234), a resource which already exists.
I believe the nomenclature goes back a long way; consider the early web. One might want to POST their message to a message board, and then PUT additional content into their message at a later date.
There's no strict correspondence between HTTP methods and CRUD. This is a convention adopted by some frameworks, but it has nothing to do with REST constraints.
A PUT request asks the server to replace whatever is at the given URI with the enclosed representation, completely ignoring the current contents. A good analogy is the mv command in a shell. It creates the new file at the destination if it doesn't exist, or replaces whatever exists. In either case, it completely ignores whatever is in there. You can use this to create, but also to update something, as long as you're sending a complete representation.
POST asks the target resource to process the payload according to predefined rules, so it's the method to use for any operation that isn't already standardized by the HTTP protocol. This means a POST can do anything you want, as long as you're not duplicating functionality from other methods -- for instance, using POST for retrieval when you should be using GET -- and you document it properly.
So, you can use both for create and update, depending on the exact circumstances, but with PUT you must have consistent semantics for everything in your API and you can't make partial updates, and with POST you can do anything you want, as long as you document how exactly it works.
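To make the contrast concrete, here is a hedged sketch using Python's requests library against a hypothetical API; the URLs, payload and IDs are illustrative only.

import requests

BASE = "https://api.example.com"  # hypothetical service
order = {"item": "widget", "quantity": 2}

# PUT: the client names the resource URI and sends a complete representation.
# Repeating the request is safe; the final state is the same (idempotent).
requests.put(BASE + "/orders/1234", json=order)
requests.put(BASE + "/orders/1234", json=order)   # still exactly one order 1234

# POST: the client asks the collection to process the payload; the server
# usually assigns the new URI. Repeating it may well create a second order.
resp = requests.post(BASE + "/orders", json=order)
print(resp.status_code, resp.headers.get("Location"))  # e.g. 201 /orders/5678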
PUT should be used for creates if and only if the possible URI of the new resource is known to the client. The new URI may be advertised by the service in a resource representation. For example, the service may provide some kind of submit form and specify an action URI on it, which can be a pre-populated URI of the new resource. In that case, yes: if the initial PUT request successfully creates the resource, subsequent PUT requests will only replace it.
It's OK to use POST for updates; it was never said that POST is for "create" operations only.
You are trying to correlate CRUD to HTTP, and that doesn't work. The philosophy of HTTP is different, and does not natively correspond to CRUD. The confusion arises because of REST, which does correspond to CRUD. REST uses HTTP, but with additional constraints upon what is allowed. I've prepared this Q&A to explain the HTTP approach to things:
What's being requested?
A POST requests an action upon a collection.
A PUT requests the placement of a resource into a collection.
What kind of object is named in the URI?
The URI of a POST identifies a collection.
The URI of a PUT identifies a resource (within a collection).
How is the object specified in the URI, for POST and PUT respectively?
/collectionId
/collectionId/resourceId
How much freedom does the HTTP protocol grant the collection?
With a POST, the collection is in control.
With a PUT, the requestor is in control (unless request fails).
What guarantees does the HTTP protocol make?
With a POST, the HTTP protocol does not define what is supposed to happen with the collection; the rfc states that the server should "process ... the request according to the [collection's] own specific semantics." (FYI: The rfc uses the confusing phrase "target resource" to mean "collection".) It is up to the server to decide upon a contract that defines what a POST will do.
With a PUT, the HTTP protocol requires that a response of "success" must guarantee that the collection now contains a resource with the ID and content specified by the request.
Can the operation result in the creation of a new resource within the collection?
Yes, or no, depending upon the contract. If the contract is a REST protocol, then insertion is required. When a POST creates a new resource, the response will be 201.
Yes, but that means the requestor is specifying the new ID. This is fine for bulletin boards, but problematic with databases. (Hence, for database applications, PUT will generally not insert, but only update.) When a PUT creates a new resource, the response will be 201.
Is the operation idempotent?
A POST is generally not idempotent. (The server can offer any contract it wishes, but idempotency is generally not part of that contract).
A PUT is required to be idempotent. (The state of the identified resource is idempotent. Side effects outside of that resource are allowed.)
Here is the rfc:
https://www.rfc-editor.org/rfc/rfc7231#section-4.3.3
It depends...
You can create/update sites/records with both.
When the client is specifying the URI, then PUT is the way to go.
E.g., for any code editor like Dreamweaver, PUT is the right method to use.
Also have a look at this thread: PUT vs POST in REST