Error handling #902

Closed
sebastiandev opened this issue Feb 6, 2019 · 31 comments · Fixed by #1255

@sebastiandev

We have decided to approach error handling as suggested in many blog posts: by defining errors as fields rather than raising exceptions.

The catch is that we need to make sure we handle every exception we care about and somehow keep track of them so we can report them.

So we came up with:

import graphene


class BaseObjectType(graphene.ObjectType):

    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(
        cls,
        interfaces=(),
        possible_types=(),
        default_resolver=None,
        _meta=None,
        **options
    ):
        super().__init_subclass_with_meta__(
            interfaces, possible_types, default_resolver, _meta, **options)

        for f in cls._meta.fields:
            if f in ['errors']:
                continue

            resolver_name = "resolve_{}".format(f)
            if hasattr(cls, resolver_name):
                setattr(cls, resolver_name, catch_errors(getattr(cls, resolver_name)))


class BaseResponse(BaseObjectType):

    class Meta:
        abstract = True

    errors = graphene.String()

    @staticmethod
    def resolve_errors(root, info, **kwargs):
        operation_name = info.path[0]
        error_key = f"{operation_name}_errors"
   
        if not root.errors and error_key in info.context.errors:
            root.errors = ",".join(info.context.errors[error_key])

        return root.errors

catch_errors just populates info.context.errors for each operation
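For readers following along, a minimal sketch of what such a decorator could look like (this is not the author's actual code; the layout of the errors dict and the fallback to None are assumptions):

from functools import wraps


def catch_errors(resolver):
    # Hypothetical sketch: record resolver exceptions on info.context.errors.
    @wraps(resolver)
    def wrapper(root, info, **kwargs):
        try:
            return resolver(root, info, **kwargs)
        except Exception as e:
            operation_name = info.path[0]
            error_key = f"{operation_name}_errors"
            if not hasattr(info.context, "errors"):
                info.context.errors = {}
            info.context.errors.setdefault(error_key, []).append(str(e))
            return None  # the field resolves to null; the error surfaces via the errors field

    return wrapper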

While implementing this approach to error handling I have also found an issue when resolving the errors field. I have a base class that extends ObjectType by iterating over the resolvers and wrapping them to catch any errors and set an errors dict on info.context.

The problem is that when resolving a type field that throws an exception, I do catch it and set the error, but the errors field has already been processed/resolved. Is there a way to force a particular field to be processed at the end, or to force it to update? I need to make sure every resolver runs before the errors field is resolved, so that all errors are caught.

@BossGrand
Member

BossGrand commented Feb 18, 2019

This is possible! You need to wrap the resolver like so:

from django.db import transaction  # the snippet assumes a Django backend


def wrap_resolver(cls, resolver):

    def wrapped_resolver(root, info, **args):
        try:
            with transaction.atomic():
                return resolver(root, info, **args)

        except Exception as e:
            return cls(errors=str(e))

    return wrapped_resolver


from graphene import Field, Mutation, String
from graphene.utils.get_unbound_function import get_unbound_function


class CustomMutation(Mutation):

    class Meta:
        abstract = True


    @classmethod
    def __init_subclass_with_meta__(cls, resolver=None, **options):

        if not resolver:
            mutate = getattr(cls, 'mutate', None)
            assert mutate, 'All mutations must define a mutate method in it'
            resolver = get_unbound_function(mutate)
            resolver = wrap_resolver(cls, resolver)

        super(CustomMutation, cls).__init_subclass_with_meta__(resolver, **options)

        cls._meta.fields['errors'] = (Field(String, name='errors'))

Then you use CustomMutation as the class that all your mutations will subclass. This wraps the resolver and catches any exceptions. In my own implementation I have a custom exception class that I'm looking for, but that's up to you to decide.
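A minimal sketch of that custom-exception variant (MutationError is a made-up name for illustration): only errors you deliberately raise become part of the payload, anything else keeps propagating.

class MutationError(Exception):
    """Errors that are safe to show to API clients."""


def wrap_resolver(cls, resolver):
    def wrapped_resolver(root, info, **args):
        try:
            return resolver(root, info, **args)
        except MutationError as e:
            # Expected, user-facing errors become part of the mutation payload.
            return cls(errors=str(e))
        # Any other exception propagates and can be logged or alerted on.

    return wrapped_resolver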

@sebastiandev
Author

sebastiandev commented Feb 18, 2019

@BossGrand yes, it works for mutations, and I solved it in the same way, but it doesn't work for queries. That's the main problem: having to check errors differently for mutations and queries is not a good user experience.

So to keep them consistent we need an errors field for queries as well. If the root response cannot be extended, that means subclassing ObjectType, adding an errors field, and using the custom ObjectType as the base type for every query result.

Queries may fail, but the resolvers of each field in the query response may also fail, so you need a way of catching any of those errors and somehow populating the errors field with them. The problem is that you don't control when fields are resolved, so you cannot force the errors field to be resolved at the very end, after all the other resolvers have executed and their errors (if any) have been caught.

It's a bit confusing, but hopefully you get the idea.

@BossGrand
Member

what type of errors do you expect to get via queries?

@sebastiandev
Author

sebastiandev commented Feb 19, 2019

Well, queries are resolved in many different ways, from different microservices, so errors can happen:
unauthorized access to a resource, bad search input (which is not strictly typed), and some complex queries that might return domain errors. I just don't think having to check two places for errors is convenient for the user. So you either use the errors payload or have an errors field on each query/mutation, but not both.

@jeanlucmongrain

I'm actually struggling to do the same thing with queries.

I created a query that I can pass a Django form to, to perform validation and return errors like the mutations in the Django integration,

but the problem is that the returned value is treated as a normal response.

@jeanlucmongrain

what type of errors do you expect to get via queries?

class Query(graphene.ObjectType):
    credit_card = graphene.String(
        required=True,
        description="Which CC to use"
    )

if the value is an invalid CC, like 1234, we need something like:

errors: [
  {"type":"credit_card", "messages":["invalid credit card number"]}
]

I know we could just use an enum, but...
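One way to model that shape in graphene would be something like this (a sketch; the type and field names are illustrative, not from the thread):

import graphene


class FieldError(graphene.ObjectType):
    type = graphene.String()
    messages = graphene.List(graphene.String)


class CreditCardPayload(graphene.ObjectType):
    credit_card = graphene.String(description="Which CC to use")
    errors = graphene.List(FieldError)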

@sebastiandev
Author

So, just for the record, @bclermont I went with the decorator approach, catching every possible error on both queries and mutations (using a base class for both that wraps every resolver). In the error handler decorator I just catch, format and re-raise errors, and finally in the global error handler I decide what data to output in the errors payload, depending on the type.
I gave up on the errors field approach as it is not possible to use it for queries.
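Roughly along these lines (a sketch of the idea rather than the actual code; DomainError and the formatting rules are placeholders):

from functools import wraps

from graphql import GraphQLError


class DomainError(Exception):
    """Placeholder for errors that are safe to expose to clients."""


def error_handler(resolver):
    @wraps(resolver)
    def wrapper(root, info, **kwargs):
        try:
            return resolver(root, info, **kwargs)
        except DomainError as e:
            # Re-raise with a client-friendly message; it ends up in the errors payload.
            raise GraphQLError(str(e))
        except Exception:
            # Unexpected errors get a generic message so internals never leak.
            raise GraphQLError("Internal server error")

    return wrapper

The "global error handler" part then inspects the resulting errors (e.g. the entries in result.errors) and decides how each one is serialized in the errors payload.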

@stale

stale bot commented Jul 29, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jul 29, 2019
@flowirtz

flowirtz commented Aug 5, 2019

Thanks for clarifying, @sebastiandev.

I'm still interested in how other people do this: is anybody taking a more middleware-esque approach at a higher graphene level? I have quite a few resolvers and don't want to add decorators to all of them, if avoidable.
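For what it's worth, graphene does accept a middleware list at execution time, which wraps every resolver without decorating them one by one; a sketch (how the errors should then be surfaced is up to you):

from graphql import GraphQLError


class ErrorHandlingMiddleware:
    # Sketch of a graphene middleware that intercepts resolver exceptions.

    def resolve(self, next, root, info, **args):
        try:
            return next(root, info, **args)
        except GraphQLError:
            raise  # already a client-facing error
        except Exception:
            # log / report to your error tracker here, then hide the internals
            raise GraphQLError("Internal server error")


# result = schema.execute(query_string, middleware=[ErrorHandlingMiddleware()])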

@stale stale bot removed the wontfix label Aug 5, 2019
@sebastiandev
Author

sebastiandev commented Aug 5, 2019

@fwirtz if you just subclass ObjectType and use your custom class as a base for all your queries/mutations, it will automatically wrap all your resolvers, and you won't need to worry about missing a resolver or anything. That's how I did it:

import inspect
from functools import wraps

import graphql
from graphene import ObjectType


def catch_errors(func):
    f_data = inspect.signature(func)

    if 'info' not in f_data.parameters:
        raise ValueError(f"Resolver {func.__name__} is missing the info parameter")

    @wraps(func)
    def wrapper(*args, **kwargs):
        info = None
        for arg in list(args) + list(kwargs.values()):
            if isinstance(arg, graphql.execution.base.ResolveInfo):
                info = arg

        try:
            return func(*args, **kwargs)
        except Exception as e:
            # your code for error handling here, e.g. record the error on info.context
            ...

    return wrapper


class BaseObjectType(ObjectType):

    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(
        cls,
        interfaces=(),
        possible_types=(),
        default_resolver=None,
        _meta=None,
        **options
    ):
        super().__init_subclass_with_meta__(
            interfaces, possible_types, default_resolver, _meta, **options)

        for f in cls._meta.fields:
            resolver_name = "resolve_{}".format(f)
            if hasattr(cls, resolver_name):
                setattr(cls, resolver_name, catch_errors(getattr(cls, resolver_name)))

For mutations you can apply a similar approach. My code is different because we have built a layer on top of graphene to have mutations as plain classes (to group mutation methods in one class, and other goodies...).

@clin88

clin88 commented Aug 13, 2019

Bump on this issue.

Error handling is a big deal when implementing any API and graphene's middleware doesn't offer any obvious way to do this.

A very common use case is catching all exceptions and stripping out any internal error details. This is crucial for security purposes: it's not uncommon for stringified exceptions to contain PII and other sensitive information.

The only way to do this in graphene is to decorate every resolver with error handling functionality or to use the metaclass approach described above.

Ariadne, by contrast, treats this as a first class use case:

https://ariadnegraphql.org/docs/error-messaging

Is proper error handling somewhere on the roadmap?

@demeralde

Does anyone have a workaround for this in the meantime? All exceptions raised are printed to the console during testing/development.

Some packages such as django-graphql-jwt raise exceptions for things like user authentication. Those kinds of exceptions are just for the API user to see, so it'd be nice to be able to raise exceptions without them polluting the console output.

@jkimbo
Member

jkimbo commented Sep 22, 2019

@clin88 I agree that error handling is not handled (pun not intended!) well at the moment in graphene. There are no plans as far as I'm aware to improve it though because it feels like something that could devolve into configuration soup trying to cover everyone's particular use case.

I am very open to hearing suggestions on how it could work though. Do you have any ideas?

@dspacejs You should be able to set up your Python logging config to silence the errors in testing and development if you wish.
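For example (logger names depend on the graphql-core version in use; these are the usual culprits for graphql-core 2, so verify against your own traceback output):

import logging

# Silence resolver-error tracebacks during tests/development.
logging.getLogger("graphql.execution.executor").setLevel(logging.CRITICAL)
logging.getLogger("graphql.execution.utils").setLevel(logging.CRITICAL)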

@WillySchu

It is actually possible to catch and handle resolver errors without needing to decorate or use a custom subclass for each and every query/mutation. You do still need a custom class similar to others' suggestions, but it only needs to be used when you build your schema. E.g.

import functools


def catch_errors(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            # Handle errors here (or catch only your own exception class).
            ...

    return wrapper


class SchemaObjectType(ObjectType):

    class Meta:
        abstract = True

    @classmethod
    def __init_subclass_with_meta__(
            cls,
            interfaces=(),
            possible_types=(),
            default_resolver=None,
            _meta=None,
            **options):

        super().__init_subclass_with_meta__(
            interfaces, possible_types, default_resolver, _meta, **options)

        for f in cls._meta.fields:
            field = getattr(cls, f)
            if hasattr(field, "resolver"):
                field.resolver = catch_errors(field.resolver)

Then use this class to build your query and mutation objects. E.g.

class Query(SchemaObjectType):
    example = example_query_field()


class Mutation(SchemaObjectType):
    example_mutation = ExampleMutation.Field()


schema = graphene.Schema(query=Query, mutation=Mutation)

This clearly isn't an ideal solution, but it does obviate the need to replace every ObjectType and Mutation individually.

@beezee

beezee commented Jan 8, 2020

@WillySchu is there something I'm missing about your approach or does it only work for top level resolvers? If I have nested fields, I'm not seeing how this approach would recurse to modify the resolvers of those as well.

@WillySchu

WillySchu commented Jan 13, 2020

@beezee You are right that this only wraps the resolvers for the top-level objects. However, it will catch exceptions raised in the nested resolvers as they are called from the top-level resolvers. This was sufficient for my use case, as it wasn't terribly important to catch the exceptions at precisely the level they were raised, so long as I got them all. If such functionality is necessary for you, it should be fairly straightforward to modify my example to recursively look for lower-level fields and wrap their resolvers as well. But as you correctly point out, it does not do so presently.
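A rough sketch of that recursive variant (it only touches explicit Field(..., resolver=...) resolvers, per the caveat @mcsimps2 raises later in this thread, and lazy/string type references may need extra handling):

def wrap_all_resolvers(obj_type, _seen=None):
    # Recursively wrap explicit resolvers on obj_type and any nested ObjectTypes.
    _seen = _seen if _seen is not None else set()
    if obj_type in _seen:
        return
    _seen.add(obj_type)

    for field in obj_type._meta.fields.values():
        if getattr(field, "resolver", None) is not None:
            field.resolver = catch_errors(field.resolver)

        # Unwrap NonNull/List wrappers to reach the inner type.
        inner = field.type
        while hasattr(inner, "of_type"):
            inner = inner.of_type

        if hasattr(inner, "_meta") and hasattr(inner._meta, "fields"):
            wrap_all_resolvers(inner, _seen)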

@stale

stale bot commented Apr 12, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Apr 12, 2020
@iamareebjamal

Still an issue

@stale stale bot removed the wontfix label Apr 15, 2020
@jeanlucmongrain

I gave up on graphql-python; error handling is a basic feature.

@chpmrc

chpmrc commented Apr 15, 2020

I posted this in another thread but you might be interested as well: it's a small validation library on top of Graphene that follows Django Rest Framework's structure (it does not require Django, of course :) ): https://github.com/chpmrc/graphene-validator. Probably not ready for production use, but I think it only needs some polishing. It does exactly what some of you have been trying to do (field-level validation, even with nested inputs).

Let me know what you think!

@stale

stale bot commented Jul 14, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jul 14, 2020
@iamareebjamal

no

@stale stale bot removed the wontfix label Jul 14, 2020
@amille44420

amille44420 commented Jul 18, 2020

Indeed, coming from graphql-js it's hard to believe there is no proper way to customize error handling.

@AlecRosenbaum
Contributor

I know error handling is a bit complicated in graphql in general right now. For the purposes of this discussion, I think it's useful to differentiate between expected error conditions and unexpected error conditions.

While it might make sense to swallow and format expected error conditions, not differentiating between something like an explicitly raised GraphQLError and an unexpected Exception/ValueError/KeyError/etc is unfavorable for a few reasons:

  • As @clin88 stated, not leaking raw exception messages is security critical.
  • Even if you're okay with API users seeing raw exception strings, this makes it really hard to proactively detect and catch issues. Tools like Rollbar/Sentry/etc. allow us to get proactive alerts about issues and automatically collect a whole lot of relevant data to make debugging easier. All of that goes out the window unless you hack something into the middle of it and try to differentiate between explicitly raised GraphQLErrors and GraphQLErrors that were coerced from a different underlying exception.
  • During development, pytest --pdb is a fantastic tool for understanding and debugging issues. When unexpected exceptions are swallowed, pdb becomes significantly less useful. While you still have a reference to the original_error buried in the execution result error list, there's a lot of friction in actually getting at any useful data from it. At a minimum you have to:
    from traceback import print_tb
    print_tb(result.errors[0].original_error.__traceback__)
    After the 50th time typing variations of that line it gets a little old. But even with this boilerplate, you don't get a nice pdb shell with frame navigation, variable inspection, etc.

One common approach for error handling in graphql is to make expected errors part of the schema design. Even doing that, though, there's a handful of "expected" errors which are (correctly) raised by graphene/graphql-core and don't neatly fall into that bucket.

While not as verbose, a very similar issue was already raised on graphql-core. The proposed resolution was optionally raising exceptions out of the library resolvers, but original_error was implemented instead. Given the constraint of graphql-core needing to closely align with graphql-js, I understand it might never make sense to make that change there. Instead, I do think it might be worth making that change here.

One way graphql-core deviates from graphql-js is that it allows for a custom execution context. From what I can tell, the default execution context is what's responsible for swallowing exceptions. graphene's Schema.execute calls graphql_sync, which accepts the optional execution_context_class kwarg. graphene could extend the default execution context to remove the exception swallowing code. This new execution context could either be used as the default execution_context_class (with the option of manually specifying the graphql-core error-swallowing context as a kwarg), or it could even just be provided as an optional class which can easily be added to Schema.execute calls and referenced in docs.
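As a rough illustration of the idea (this assumes the handle_field_error hook of graphql-core 3.1's ExecutionContext; the hook name and signature have changed between graphql-core versions, so treat it as a sketch rather than the eventual implementation):

from graphql.execution import ExecutionContext


class RaisingExecutionContext(ExecutionContext):
    # Sketch: re-raise resolver exceptions instead of collecting them.

    def handle_field_error(self, error, return_type):
        # error is a GraphQLError wrapping the original exception (if any).
        raise error.original_error or error


# result = schema.execute(query_string, execution_context_class=RaisingExecutionContext)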

With exceptions not being swallowed at the lower level, it becomes much more obvious how to catch and handle them at the higher levels (which expose views), etc. For example, stripping internal errors can then be done the same way it's done for other views served by whatever web framework exposes the graphql api.

The way all these libraries interact is a bit complicated so it's hard to tell if I'm on the right track here. If this does seem like the right approach I'd be happy to file a PR with this change.

@jkimbo What do you think?

@jkimbo
Member

jkimbo commented Aug 12, 2020

Thanks for the detailed analysis @AlecRosenbaum and I think you're on the right track. If you could make a PR to implement a custom ExecutionContext that would be very helpful.

@syrusakbary any thoughts?

@datavistics

@jkimbo, are you able to review the pull request so we can merge it?

@jkimbo
Member

jkimbo commented Oct 21, 2020

Sorry for the delay. #1255 has been released in v3.0.0b6
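For anyone landing here later, usage looks roughly like this (the import path follows #1255; double-check it against the graphene version you actually install):

from graphene.types.schema import UnforgivingExecutionContext

result = schema.execute(
    query_string,
    execution_context_class=UnforgivingExecutionContext,
)
# Resolver exceptions now propagate instead of being swallowed into result.errors.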

@datavistics

Is there any way to get this on 2.1.8?

@jkimbo
Member

jkimbo commented Oct 23, 2020

@datavistics you can create a PR against the v2 branch. Btw, v2 used graphql-core-legacy, which has a different API.

@paultop6

I've been looking at this issue for my current predicament. Instead of dropping the catch-all Exception handler entirely, as UnforgivingExecutionContext does, would it make sense to allow passing in a custom exception handler, so a developer could at least intercept the exception for logging purposes, or just breakpoint in their own code?
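Something close to that can already be approximated by subclassing the execution context yourself (again assuming graphql-core 3.1's handle_field_error hook; a hypothetical sketch, not an official API):

import logging

from graphql.execution import ExecutionContext

logger = logging.getLogger(__name__)


class LoggingExecutionContext(ExecutionContext):
    # Sketch: intercept field errors for logging, then keep the default behaviour.

    def handle_field_error(self, error, return_type):
        logger.error("Resolver error: %s", error, exc_info=error.original_error)
        return super().handle_field_error(error, return_type)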

@mcsimps2

mcsimps2 commented Mar 9, 2022

If you still want to use the solution proposed by @WillySchu for queries, you'll need to make sure you're declaring fields with Field and an explicit resolver keyword argument. It works out of the box for mutations right now, but queries that rely on an implicit declaration of a resolve_foo method on the class will not be routed through the catch_errors decorator.

In other words, this won't work

class MyQuery(SchemaObjectType):
    foo = graphene.String()
    
    @staticmethod
    def resolve_foo(root, info):
         return "foo"

But this will

def do_resolve(root, info):
    return "foo"

class MyQuery(SchemaObjectType):
    foo = Field(String, resolver=do_resolve)
