Error handling #902
This is possible! You need to wrap the resolver like such:

```python
# assumes Django + graphene; imports added for completeness
from django.db import transaction
from graphene import Field, Mutation, String
from graphene.utils.get_unbound_function import get_unbound_function


def wrap_resolver(cls, resolver):
    def wrapped_resolver(root, info, **args):
        try:
            with transaction.atomic():
                return resolver(root, info, **args)
        except Exception as e:
            return cls(errors=str(e))
    return wrapped_resolver


class CustomMutation(Mutation):
    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(cls, resolver=None, **options):
        if not resolver:
            mutate = getattr(cls, 'mutate', None)
            assert mutate, 'All mutations must define a mutate method in it'
            resolver = get_unbound_function(mutate)
        resolver = wrap_resolver(cls, resolver)
        super(CustomMutation, cls).__init_subclass_with_meta__(resolver, **options)
        cls._meta.fields['errors'] = Field(String, name='errors')
```

Then you use `CustomMutation` as the class that all your Mutations will subclass. This wraps the resolver and catches any exceptions. In my own implementation I have a custom Exception class that I'm looking for, but that's up to you to decide.
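To make the shape of this pattern concrete without pulling in Django or graphene, here is a self-contained toy version (the `Result` class is illustrative, not a graphene type): the wrapper converts any exception raised by a resolver into a result object carrying an `errors` string instead of letting it propagate.

```python
# Framework-free sketch of the wrapping pattern; names are illustrative.
class Result:
    def __init__(self, value=None, errors=None):
        self.value = value
        self.errors = errors


def wrap_resolver(result_cls, resolver):
    def wrapped_resolver(root, info, **args):
        try:
            return result_cls(value=resolver(root, info, **args))
        except Exception as e:
            # the exception becomes data on the result, not a crash
            return result_cls(errors=str(e))
    return wrapped_resolver


def failing_resolver(root, info, **args):
    raise ValueError("boom")


wrapped = wrap_resolver(Result, failing_resolver)
print(wrapped(None, None).errors)  # -> boom
```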
@BossGrand yes, for mutations it works, and I solved it in the same way, but for queries it doesn't work. That's the main problem: having to check errors differently for mutations and queries is not a good user experience. To keep them consistent we need an errors field for queries as well; if the root response cannot be extended, then we can subclass ObjectType, add an errors field, and use that custom ObjectType as the base type for every query result.

Queries may fail, but also the resolvers of each field in the query response may fail, so you need a way of catching any of those errors and somehow populating the errors field with them. The problem is that you don't control when fields are resolved, so you cannot force the errors field to be resolved at the very end to make sure all the other resolvers were executed and their errors caught. It's a bit confusing, but hopefully you get the idea.
What type of errors do you expect to get via queries?
Well, queries are resolved in many different ways, from different microservices, so errors can happen.
I'm actually struggling trying to do the same thing with queries. I created a query that I can pass a Django form to perform validation, but the problem is that the returned value is considered a normal response.
if the value is an invalid CC, like
I know we could just use an enum, but
So, just for the record, @bclermont I went with the decorator approach, catching every possible error on both queries and mutations (using a base class for both that wraps every resolver). In the error handler decorator I just catch, format and re-raise errors, and finally in the global error handler I decide what data to output in the
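A sketch of that catch/format/re-raise flow (the `PublicError` name and the split between safe and unsafe errors are my own illustration, not the actual code from this comment):

```python
from functools import wraps


class PublicError(Exception):
    # an error whose message is deliberately safe to show to clients
    def __init__(self, message, code="INTERNAL"):
        super().__init__(message)
        self.code = code


def error_handler(resolver):
    @wraps(resolver)
    def wrapped(*args, **kwargs):
        try:
            return resolver(*args, **kwargs)
        except PublicError:
            raise  # already formatted; the global handler will output it
        except Exception:
            # re-raise a generic error so internal details never leak
            raise PublicError("internal error")
    return wrapped


@error_handler
def resolve_account(root, info):
    raise KeyError("user row for account 12345")


try:
    resolve_account(None, None)
except PublicError as e:
    print(e, e.code)  # -> internal error INTERNAL
```

The global handler then only ever sees `PublicError` instances, so deciding what to send to the client becomes a single formatting step.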
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Thanks for clarifying, @sebastiandev. I'm still interested how other people do this: is anybody taking a more middleware-esque approach at a higher graphene level? I'm having quite a few resolvers and don't want to be adding decorators to all of them, if avoidable.
@fwirtz if you just subclass the `BaseObjectType` below, every resolver gets wrapped automatically, so you don't need to decorate each one by hand:

```python
import inspect
from functools import wraps

import graphql
from graphene import ObjectType


def catch_errors(func):
    f_data = inspect.signature(func)
    if 'info' not in f_data.parameters.keys():
        raise ValueError(f"Resolver {func.__name__} is missing the info parameter")

    @wraps(func)
    def wrapper(*args, **kwargs):
        # locate the ResolveInfo argument so error details can be
        # attached to info.context
        info = None
        for arg in list(args) + list(kwargs.values()):
            if isinstance(arg, graphql.execution.base.ResolveInfo):
                info = arg
        try:
            return func(*args, **kwargs)
        except Exception as e:
            # your code for error handling here...
            raise

    return wrapper


class BaseObjectType(ObjectType):
    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(
        cls,
        interfaces=(),
        possible_types=(),
        default_resolver=None,
        _meta=None,
        **options
    ):
        super().__init_subclass_with_meta__(
            interfaces, possible_types, default_resolver, _meta, **options)

        for f in cls._meta.fields:
            resolver_name = "resolve_{}".format(f)
            if hasattr(cls, resolver_name):
                setattr(cls, resolver_name, catch_errors(getattr(cls, resolver_name)))
```

For mutations you can apply a similar approach. I have different code since we have built a layer on top of graphene to have mutations as plain classes (to group mutation methods in one class, and other goodies...).
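As a framework-free illustration of the same trick (hypothetical names, using plain `__init_subclass__` rather than graphene's `__init_subclass_with_meta__` hook), the wrapping can be applied automatically to every `resolve_*` method of a subclass at class-creation time:

```python
from functools import wraps


def wrap_errors(func):
    # collect errors on a (mock) context dict instead of letting them propagate
    @wraps(func)
    def wrapper(self, info, **kwargs):
        try:
            return func(self, info, **kwargs)
        except Exception as e:
            info.setdefault('errors', []).append(str(e))
            return None
    return wrapper


class AutoWrapped:
    # every resolve_* method defined on a subclass gets wrapped once,
    # when the subclass is created
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, attr in list(vars(cls).items()):
            if name.startswith('resolve_') and callable(attr):
                setattr(cls, name, wrap_errors(attr))


class Query(AutoWrapped):
    def resolve_user(self, info, **kwargs):
        raise RuntimeError("db unavailable")


info = {}
print(Query().resolve_user(info), info)  # -> None {'errors': ['db unavailable']}
```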
Bump on this issue. Error handling is a big deal when implementing any API, and graphene's middleware doesn't offer any obvious way to do this. A very common use case is catching all exceptions and stripping any internal error details. This is crucial for security purposes: it's not uncommon for stringified exceptions to contain PII and other sensitive information. The only way to do this in graphene is to decorate every resolver with error handling functionality or to use the metaclass approach described above. Ariadne, by contrast, treats this as a first-class use case: https://ariadnegraphql.org/docs/error-messaging

Is proper error handling somewhere on the roadmap?
Does anyone have a workaround for this in the meantime? All exceptions raised are printed to the console during testing/development. Some packages such as …
@clin88 I agree that error handling is not handled (pun not intended!) well at the moment in graphene. There are no plans as far as I'm aware to improve it, though, because it feels like something that could devolve into configuration soup trying to cover everyone's particular use case. I am very open to hearing suggestions on how it could work, though. Do you have any ideas? @dspacejs You should be able to set up your Python logging config to silence the errors in testing and development if you wish.
It is actually possible to catch and handle resolver errors without needing to decorate or use a custom subclass for each and every query / mutation. You do still need to use a custom metaclass similar to others' suggestions, but it only needs to be used when you build your schema. E.g. …

Then use this class to build your query and mutation objects. E.g. …

This clearly isn't an ideal solution, but it does obviate the need to replace every ObjectType and Mutation individually.
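The original snippets from this comment were lost in extraction; here is a hedged, framework-free sketch of the described idea (all names hypothetical): a helper applied once to the root Query/Mutation class, wrapping only its top-level `resolve_*` methods.

```python
from functools import wraps


def safe(resolver):
    # hypothetical wrapper: turn anything the call raises into error data
    @wraps(resolver)
    def wrapped(*args, **kwargs):
        try:
            return resolver(*args, **kwargs)
        except Exception as e:
            return {"errors": [str(e)]}
    return wrapped


def wrap_top_level(root_cls, wrapper=safe):
    # wrap only the resolve_* methods defined on the root type itself;
    # nested resolvers stay untouched
    for name, attr in list(vars(root_cls).items()):
        if name.startswith("resolve_") and callable(attr):
            setattr(root_cls, name, wrapper(attr))
    return root_cls


class Query:
    def resolve_books(self, info):
        raise LookupError("missing shelf")


wrap_top_level(Query)
print(Query().resolve_books(None))  # -> {'errors': ['missing shelf']}
```

Applied once while building the schema, this avoids touching every type definition, at the cost of only covering the top level.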
@WillySchu is there something I'm missing about your approach, or does it only work for top level resolvers? If I have nested fields, I'm not seeing how this approach would recurse to modify the resolvers of those as well.
@beezee You are right that this only wraps the resolvers for the top level objects. However, this will catch exceptions raised in the nested resolvers as they are called from the top level resolvers. This was sufficient for my use case, as it wasn't terribly important to catch the exceptions at precisely the level they were raised, so long as I got them all. If such functionality is necessary for you, it should be fairly straightforward to modify my example to recursively look for lower level fields and wrap their resolvers as well. But as you correctly point out, it does not do so presently.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Still an issue
I gave up on GraphQL in Python; error handling is a basic feature.
I posted this in another thread, but you guys might be interested as well. It's a small validation library on top of Graphene that follows Django Rest Framework's structure (it does not require Django, of course :) ): https://github.com/chpmrc/graphene-validator. Probably not ready for production use, but I think it only needs some polishing. It does exactly what some of you have been trying to do (field level validation, even with nested inputs). Let me know what you think!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
no |
Indeed, coming from …
I know error handling is a bit complicated in graphql in general right now. For the purposes of this discussion, I think it's useful to differentiate between expected error conditions and unexpected error conditions. While it might make sense to swallow and format expected error conditions, not differentiating between something like an explicitly raised …
One common approach for error handling in graphql is to make expected errors part of the schema design. Even doing that, though, there's a handful of "expected" errors which are (correctly) raised by …

While not as verbose, a very similar issue was sort of already raised in … on …

One way …

With exceptions not being swallowed at the lower level, it becomes much more obvious how to catch and handle them at the higher levels (which expose views), etc. For example, stripping internal errors can then be done the same way it's done for other views served by whatever web framework exposes the graphql api.

The way all these libraries interact is a bit complicated, so it's hard to tell if I'm on the right track here. If this does seem like the right approach I'd be happy to file a PR with this change. @jkimbo What do you think?
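The proposal here (stop swallowing exceptions during execution so outer layers can handle them) can be modeled with a toy executor whose error handler is pluggable. This is an illustration of the idea only, not graphql-core's actual API; all names are made up:

```python
def execute_fields(resolvers, on_error):
    # call each field resolver; delegate failures to the pluggable handler
    data, errors = {}, []
    for name, resolver in resolvers.items():
        try:
            data[name] = resolver()
        except Exception as exc:
            data[name] = None
            on_error(name, exc, errors)
    return data, errors


def forgiving(name, exc, errors):
    # graphql's default behaviour: record the error and keep executing
    errors.append(f"{name}: {exc}")


def unforgiving(name, exc, errors):
    # the proposed behaviour: re-raise so the web framework's error
    # handling (logging, PII stripping) sees the original exception
    raise exc


def bad():
    raise KeyError("missing row")


data, errs = execute_fields({"ok": lambda: 1, "bad": bad}, forgiving)
print(data, errs)
```

Swapping `forgiving` for `unforgiving` is the essence of what a custom ExecutionContext would do: the exception reaches the caller intact instead of being folded into the errors list.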
Thanks for the detailed analysis @AlecRosenbaum, and I think you're on the right track. If you could make a PR to implement a custom ExecutionContext, that would be very helpful. @syrusakbary any thoughts?
@jkimbo, are you able to review the pull request so we can merge it?
Is there any way to get this on 2.1.8?
@datavistics you can create a PR against the v2 branch. Btw, v2 uses graphql-core-legacy, which has a different API.
I've been looking at this issue for my current predicament. Would it make sense, instead of dropping the catch-all Exception handler like UnforgivingExecutionContext does, to allow passing in a custom exception handler? That would let a developer at least intercept the exception for logging purposes, or just set a breakpoint in their own code.
If you still want to use the solution proposed by @WillySchu for queries, you'll need to make sure you're using the … In other words, this won't work: …

But this will: …
We have decided to approach error handling as suggested in many blogs: by having errors defined as fields rather than raising exceptions.
The thing with this is that we need to make sure we catch every exception that we want to handle, and somehow keep track of them so we can report them.
So we came up with a `catch_errors` decorator that just populates `info.context.errors` for each operation.

While implementing this approach I also found an issue when resolving the error field. I have a base class that extends ObjectType by iterating over the resolvers and wrapping them to catch any errors and set an errors dict in the info.context.

The problem is that when resolving a type field that throws an exception, I do catch it and set the error, but the errors field was already processed/resolved. Is there a way to force a particular field to be processed at the end, or to force it to update? I need to make sure every resolver runs before the errors field is resolved, so that all errors are caught.
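One way around this ordering problem, sketched here as a workaround under the assumption that errors may be attached outside the schema's own resolvers: read the collected context errors only after execution has finished, at the view layer, so no errors field ever has to resolve last.

```python
def run_query(execute, context):
    # execute() runs every resolver (each wrapped by the catch_errors
    # decorator); only afterwards do we look at context["errors"],
    # so resolver ordering no longer matters
    data = execute(context)
    payload = {"data": data}
    if context.get("errors"):
        payload["errors"] = context["errors"]
    return payload


def fake_execute(context):
    # stand-in for schema execution: one resolver fails and records it
    context.setdefault("errors", []).append("resolve_user failed")
    return {"user": None}


print(run_query(fake_execute, {}))
```

The trade-off is that errors live next to the schema payload rather than inside it, which only works if clients are willing to read them from the response envelope.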