
Memory-efficient attention (without xformers) #1892

Open

Description

@Birch-san

I implemented sub-quadratic attention, as described in *Self-attention Does Not Need O(n²) Memory* (https://arxiv.org/abs/2112.05682v2):
https://twitter.com/Birchlabs/status/1607503573906063362
Birch-san#1
Birch-san/diffusers-play@a573e3d

Is this worth upstreaming? It enables the creation of larger images than is possible with attention slicing.
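
For context, here is a minimal sketch of the core idea from the paper, assuming PyTorch: keys and values are processed in chunks with a streaming (online) softmax, so the full `[q_len, k_len]` score matrix is never materialized. This is not the code from the linked branch; `chunked_attention` and `kv_chunk_size` are illustrative names, and the actual implementation differs in details.

```python
import math

import torch


def chunked_attention(q, k, v, kv_chunk_size=1024):
    """Compute softmax(q @ k^T * scale) @ v without materializing the full
    [q_len, k_len] attention matrix, by iterating over key/value chunks
    and maintaining a streaming softmax."""
    scale = 1.0 / math.sqrt(q.shape[-1])
    batch, q_len, _ = q.shape
    # Running accumulators: weighted value sum, softmax denominator, row max.
    acc = torch.zeros_like(q)
    denom = torch.zeros(batch, q_len, 1, dtype=q.dtype, device=q.device)
    row_max = torch.full((batch, q_len, 1), float("-inf"),
                         dtype=q.dtype, device=q.device)
    for start in range(0, k.shape[1], kv_chunk_size):
        k_chunk = k[:, start:start + kv_chunk_size]
        v_chunk = v[:, start:start + kv_chunk_size]
        scores = q @ k_chunk.transpose(-2, -1) * scale  # [batch, q_len, chunk]
        chunk_max = scores.amax(dim=-1, keepdim=True)
        new_max = torch.maximum(row_max, chunk_max)
        # Rescale previously accumulated values to the new running max,
        # so exponentials stay numerically stable.
        correction = torch.exp(row_max - new_max)
        probs = torch.exp(scores - new_max)
        acc = acc * correction + probs @ v_chunk
        denom = denom * correction + probs.sum(dim=-1, keepdim=True)
        row_max = new_max
    return acc / denom


# Sanity check against standard (quadratic-memory) attention.
q, k, v = (torch.randn(2, 512, 64) for _ in range(3))
expected = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(64), dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v, kv_chunk_size=128),
                      expected, atol=1e-5)
```

The trade-off: peak memory for the scores drops from O(q_len · k_len) to O(q_len · kv_chunk_size), at the cost of looping sequentially over key/value chunks, which is what allows attention over sequences (and therefore images) larger than attention slicing can handle.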
