Feature Request: tl.atomic_add for bfloat16 #1387

@peterbell10

Description

For additional context, see pytorch/pytorch#97016. torch.index_put(..., accumulate=True) currently fails for torch.bfloat16 under torch.compile because tl.atomic_add doesn't support BFloat16.

The PTX instruction atom.add.bf16 requires compute capability 9.0+; however, when atomicAdd is compiled in CUDA for compute capability 8.x, it generates a CAS loop instead. Would it be reasonable for Triton to do the same?
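
For illustration, the CAS-loop fallback would look roughly like the CUDA device function below. This is only a sketch; the helper name, alignment handling, and choice of intrinsics are assumptions on my part, not NVCC's actual lowering or a proposed Triton implementation.

```cuda
#include <cuda_bf16.h>

// Sketch of a CAS-loop emulation of bf16 atomicAdd: read the aligned 32-bit
// word containing the bf16 element, add into the relevant 16-bit half, and
// retry with atomicCAS until no other thread has modified the word.
__device__ __nv_bfloat16 bf16_atomic_add_cas(__nv_bfloat16 *addr, __nv_bfloat16 val) {
    // Aligned 32-bit word that contains *addr, and which half the element is in.
    unsigned int *word_addr =
        reinterpret_cast<unsigned int *>(reinterpret_cast<size_t>(addr) & ~size_t(3));
    bool is_high_half = (reinterpret_cast<size_t>(addr) & 2) != 0;

    unsigned int old_word = *word_addr;
    unsigned int assumed;
    __nv_bfloat16 old_val;
    do {
        assumed = old_word;
        // Extract the 16-bit payload for this element, add, and reinsert it.
        unsigned short old_bits = is_high_half ? (assumed >> 16) : (assumed & 0xffff);
        old_val = __ushort_as_bfloat16(old_bits);
        unsigned short new_bits = __bfloat16_as_ushort(__hadd(old_val, val));
        unsigned int new_word = is_high_half
            ? (assumed & 0x0000ffffu) | (static_cast<unsigned int>(new_bits) << 16)
            : (assumed & 0xffff0000u) | new_bits;
        old_word = atomicCAS(word_addr, assumed, new_word);
    } while (assumed != old_word);
    return old_val;  // value before the add, matching atomicAdd semantics
}
```

Triton could presumably emit an equivalent loop over the containing 32-bit word (e.g. via tl.atomic_cas) whenever the target lacks atom.add.bf16.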
