So I am trying to write a simple function here, but every time I run Swagger I get the above-mentioned error.
Here's my function:
def authenticate_user(username: str, password: str, db: Session = Depends(bd.get_db)):
    user = db.query(bd.User).filter(bd.User.username == username).first()
    if not user:
        return False
    if not verify_password(password, user.password_hash):
        return False
    return user
And here's my get_db function; it's pretty standard:
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
I've noticed that Depends(bd.get_db) works perfectly fine within endpoint functions (the ones with @app.post/@app.get decorators), but somehow doesn't work within plain functions.
Apparently I don't quite understand the concept of dependency injection yet.
Any help would be appreciated.
Thank you kindly!
This page helped me a lot: https://github.com/tiangolo/fastapi/issues/1693#issuecomment-665833384
You can't use Depends in your own functions; it has to be in FastAPI
functions, mainly routes. You can, however, use Depends in your own
functions when that function is also a dependency, so you can have a
chain of functions.
E.g., a route uses Depends to resolve a get_current_user dependency, which also
uses Depends to resolve get_db, and the whole chain will be resolved.
But if you then call get_current_user without using Depends, it won't
be able to resolve get_db.
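A minimal sketch of such a chain (the route path and user lookup are made up for illustration; get_db and User come from the question's code):

from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session

app = FastAPI()

def get_current_user(db: Session = Depends(get_db)):
    # resolved only when this function is itself used as a dependency
    return db.query(User).first()

@app.get("/me")
def read_me(user=Depends(get_current_user)):
    # FastAPI resolves get_current_user, which in turn resolves get_db
    return {"username": user.username}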
What I do is get the DB session from the route and then pass it down
through every layer and function. I believe this is also better
design.
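Applied to the question's code, the pass-it-down pattern looks roughly like this (bd and verify_password are the question's own names, and the /login route is made up, not a FastAPI API):

from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy.orm import Session

app = FastAPI()

def authenticate_user(username: str, password: str, db: Session):
    # plain helper: the session is passed in explicitly, no Depends here
    user = db.query(bd.User).filter(bd.User.username == username).first()
    if not user or not verify_password(password, user.password_hash):
        return False
    return user

@app.post("/login")
def login(username: str, password: str, db: Session = Depends(bd.get_db)):
    # the route resolves the session via Depends, then hands it down
    user = authenticate_user(username, password, db)
    if not user:
        raise HTTPException(status_code=401, detail="Invalid credentials")
    return {"username": user.username}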
I had a similar problem trying to run a background task which was also defined as an endpoint. What I did to circumvent the issue was to create a wrapper around it. I'm not sure what scenario you're facing, but I think my example can help.
Here is my code:
DB Function for dependency injection
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
Function definition without endpoint decorator
async def function(args, db: Session):
    # ... do stuff ...
    # ... interact with db ...
    return "success!"
Endpoint function
#app.post("/function_endpoint/")
async def function_endpoint(args, db:Session=Depends(get_db)):
return await function(args, db)
Now I can call function without losing endpoint functionality or duplicating code.
Complex routine in need of background task
#app.post("/complex_routine/")
async def complex_routine_endpoint(
args, background_tasks:BackgroundTasks, db:Session=Depends(get_db)
):
... do complex stuff ...
... call other functions and pass db ...
background_tasks.add_task(function, args, db)
return "all good!"
client = ...
fut = client.submit(fun, args)
Is there some accessor to get the args from the submit call or should we wrap with something?
UPDATE:
So basically, when debugging the response (especially on error), it is often useful to get the full task spec. I think of this as the (fun, args, ...) stuff.
Previously, I have created my own wrappers for errors and args and returned the result in a dict like dict(status=..., args=..., result=..., ...).
Just wondering if there is a built-in pattern that I should be using or pushing folks towards.
Calling client.recreate_... doesn't necessarily give you the args that went into the submitted function.
The short answer is "no", these are not stored anywhere convenient today.
The args and kwargs are stored in the scheduler, but not in a particularly accessible form. Typically in cases like these I record these things somewhere else as you seem to already be doing.
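For example, a minimal sketch of that kind of bookkeeping (submit_tracked and task_specs are made-up names, not a dask API):

from dask.distributed import Client

task_specs = {}  # future key -> (fun, args, kwargs), kept on the client side

def submit_tracked(client: Client, fun, *args, **kwargs):
    # record the full task spec ourselves before handing it to dask
    fut = client.submit(fun, *args, **kwargs)
    task_specs[fut.key] = (fun, args, kwargs)
    return fut

On error you can then look up task_specs[fut.key] to recover the (fun, args, ...) spec alongside whatever the future itself reports.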
I'm trying to test my Rails application, which uses the Stripe API. I started with the models, using RSpec. The model I want to test is called bank_account.rb, and inside it there is a function called create_bank_account with an argument bank_token. Its pseudocode is something like this:
def create_bank_account(bank_token)
  # make a Stripe request and save it in a local variable
  # save needed data in my bank_account table in my DB
end
When I started to test this function, I found that there is an API call inside it, which is not good; I need my tests not to depend on the Internet. After searching I found the StripeMock gem. It is useful and I started to use it with RSpec, but I found myself writing a test like this:
it 'with valid bank_token' do
  # create a double for bank_account
  # use StripeMock to get a faked response for creating
  # a new bank_account
  # expect the double to receive the create_bank_account
  # function and respond by saving the data inside the DB
end
But after writing this I noticed that I didn't actually run the create_bank_account function; I faked it. So my questions are:
1. How can I test a function that includes an API request, but actually run the function itself rather than faking it?
2. I read a lot about when to use doubles and stubs, and what I understood is that they're for when a function is not yet implemented. But if the function is already implemented, should I still use doubles to avoid things like functions that call APIs?
First and foremost:
Do not create a double for bank_account.
Do not mock/stub bank_account.create_bank_account.
If you do either of these things, in a test that is supposed to be testing behaviour of BankAccount#create_bank_account, then your test is worthless.
(To prove this point, try writing broken code in the method. Your tests should obviously fail. But if you're mocking the method, everything will keep passing!)
One way or another, you should only be mocking the stripe request, i.e. the behaviour at the boundary between your application and the internet.
I cannot provide a working code sample without a little more information, but broadly speaking you could refactor your code from this:
def create_bank_account(bank_token)
  # make a Stripe request and save it in a local variable
  # save needed data in my bank_account table in my DB
end
To this:
def create_bank_account(bank_token)
  stripe_request = make_stripe_request(bank_token)
  # save needed data in my bank_account table in my DB
end

private

def make_stripe_request(bank_token)
  # ...
end
...And then in your test, you can use StripeMock to only fake the response of BankAccount#make_stripe_request.
If the code is not so easy to refactor (?!), then stubbing the Stripe library directly like this might not be practical. An alternative approach you can always take is to use a library like webmock to simply intercept all HTTP calls.
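For example, a minimal webmock sketch (the stubbed URL pattern and response body are placeholders, not Stripe's real API shape):

require 'webmock/rspec'

it 'creates a bank account' do
  stub_request(:post, %r{api\.stripe\.com})
    .to_return(status: 200,
               body: { id: 'ba_123' }.to_json,
               headers: { 'Content-Type' => 'application/json' })

  bank_account.create_bank_account('btok_test_token')
  # ...then assert that the expected row was written to the DB
end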
I am new to Rails and I have a task to write a common method that will update a specific database field with a given value, and I should be able to invoke the method from anywhere in the app. (I understand the security flaws and so on, but I was asked to do it anyway.) In my application controller I tried:
def update_my_model_status(model, id, field, value)
  @model = model.find(id)
  @model.update(field: value)
end
Of course this doesn't work. How do I achieve this? What is the right way to do it? And, if it is possible, how do I pass a model as an argument to a method?
If you're using Rails, why not use Rails?
Compare update_all:
MyModel.where(id: 1).update_all(banned: true)
or maybe update_attribute:
my_model.update_attribute(:banned, true)
to:
update_my_model_status(MyModel, 1, :banned, true)
Notice how, despite being shorter, the first two approaches are significantly more expressive than the last - it is much more obvious what is happening. Not only that, but they are immediately more familiar to any Rails developer off the street, while the custom one has a learning curve. This, combined with the added code from the unnecessary method, adds to the maintenance cost of the application. Additionally, the Rails methods are well tested and documented - are you planning to write that, too?

Finally, the Rails methods are better thought out - for example, your prototype naively runs attribute validations but does not check them (which could result in unexpected behavior), and makes more SQL queries than it needs to. It's fine to write custom methods, but let's not write arbitrary wrappers around perfectly fine Rails methods...
Try this:
def update_my_model_status(model, id, field, value)
  @model_var = model.capitalize.constantize.find(id)
  @model_var.update_attributes(field => value)
end
Instead of just using update you should use update_attributes:
def update_my_model_status(model, id, field, value)
  @model_var = model.find(id)
  @model_var.update_attributes(field => value)
end
http://api.rubyonrails.org/classes/ActiveRecord/Persistence.html#method-i-update
With delayed_job, I was able to do simple operations like this:
@foo.delay.increment!(:myfield)
Is it possible to do the same with Rails' new ActiveJob? (without creating a whole bunch of job classes that do these small operations)
ActiveJob is merely an abstraction on top of various background job processors, so many capabilities depend on which provider you're actually using. But I'll try not to depend on any particular backend.
Typically, a job provider consists of a persistence mechanism and runners. When offloading a job, you write it into the persistence mechanism in some way, and later one of the runners retrieves it and runs it. So the question is: can you express your job data in a format compatible with any action you need?
That will be tricky.
Let's define what a job definition is, then. For instance, it could be a single method call. Assuming this syntax:
Model.find(42).delay.foo(1, 2)
We can use the following format:
{
  class: 'Model',
  id: '42', # whatever
  method: 'foo',
  args: [1, 2]
}
Now how do we build such a hash from a given call and enqueue it to a job queue?
First of all, as it appears, we'll need to define a class that has a method_missing to catch the called method name:
class JobMacro
  attr_accessor :data

  def initialize(record = nil)
    self.data = {}
    if record.present?
      self.data[:class] = record.class.to_s
      self.data[:id] = record.id
    end
  end

  def method_missing(action, *args)
    self.data[:method] = action.to_s
    self.data[:args] = args
    GenericJob.perform_later(data)
  end
end
The job itself will have to reconstruct that expression like so:
data[:class].constantize.find(data[:id]).public_send(data[:method], *data[:args])
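For completeness, here is a sketch of the GenericJob the macro enqueues (the answer leaves it implicit; this assumes a plain ActiveJob class):

class GenericJob < ApplicationJob
  queue_as :default

  def perform(data)
    # rebuild the receiver and replay the recorded call
    data[:class].constantize.find(data[:id]).public_send(data[:method], *data[:args])
  end
end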
Of course, you'll have to define the delay macro on your model. It may be best to factor it out into a module, since the definition is quite generic:
def delay
  JobMacro.new(self)
end
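For instance (a sketch; the module name is made up):

module Delayable
  def delay
    JobMacro.new(self)
  end
end

class Foo < ApplicationRecord
  include Delayable # now @foo.delay.increment!(:myfield) enqueues a GenericJob
end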
It does have some limitations:
Only supports running jobs on persisted ActiveRecord models. A job needs a way to reconstruct the callee in order to call the method; I've picked the most probable one. You could also use marshalling if you want, but I consider that unreliable: the unmarshalled object may be invalid by the time the job gets to execute. The same goes for GlobalID.
It uses Ruby's reflection. It's a tempting solution to many problems, but it isn't fast and is a bit risky in terms of security. So use this approach cautiously.
Only one method call. No procs (you could probably do that with the ruby2ruby gem). It relies on the job provider to serialize arguments properly; if it fails to, help it with your own code. For instance, que uses JSON internally, so whatever works in JSON works in que. Symbols, for instance, don't.
Things will break in spectacular ways at first.
So make sure to set up your debugging tools before starting off.
An example of this is Sidekiq's backward (Delayed::Job) compatibility extension for ActiveRecord.
As far as I know, this is currently not supported. You can easily simulate this feature using a custom-defined proxy-job that accepts a model or instance, a method to be performed and a list of arguments.
However, for the sake of code testing and maintainability, this shortcut is not a good approach. It's more effective (even if you need to write a little more code) to have a specific job for everything you want to enqueue. It forces you to think more about the design of your app.
I wrote a gem that can help you with that: https://github.com/cristianbica/activejob-perform_later. But be aware that having methods all around your code that might be executed in workers is the perfect recipe for disaster if not handled carefully :)
I have a CommentList class with a class method fetch. The problem is that it is not an ActiveRecord model; it makes HTTP calls to fetch data.
class CommentList
  def self.fetch
    # http-foo here
    return ['some', 'data']
  end
end
Now I want another model to use this fetch method, and I want to mock away CommentList.fetch in my specs so it returns a given dataset.
I could only find mocking gems that work together with a DB.
Am I totally overlooking something?
If you're using RSpec, it should be easy to do something like this:
CommentList.stub(:fetch => ['some', 'data'])
or to make it more of an expectation:
CommentList.should_receive(:fetch).and_return(['some', 'data'])
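Note that stub and should_receive are the old rspec-mocks syntax; with the current allow/expect syntax the equivalents would be:

allow(CommentList).to receive(:fetch).and_return(['some', 'data'])
expect(CommentList).to receive(:fetch).and_return(['some', 'data'])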
Another more elaborate solution would be to set up VCR. Basically what it does in this situation is the first time you run the test, CommentList would really hit the external http service and get back data. VCR then saves that response and from then on, it returns the cached response.
The good thing is that if you ever want to retest the external API call (maybe their API changed?), you just delete the VCR saved data, run your tests, and your tests will again run against the external service and cache fresh data.
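A minimal sketch of that setup (the cassette name and spec body are illustrative):

# spec_helper.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock
end

# in a spec
it 'fetches comments' do
  VCR.use_cassette('comment_list_fetch') do
    expect(CommentList.fetch).to eq(['some', 'data'])
  end
end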