
Update nn.py #21250

Closed · wants to merge 0 commits

Conversation

pctablet505 (Collaborator)

Added support for flash attention with sharding and fixed an issue when using flash attention on TPU.
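
For context, a minimal sketch (not the PR's actual code) of the two signals such a gate needs on the JAX backend: whether the current device is a TPU and whether any attention input is sharded across devices. The helper names below are illustrative assumptions.

import jax


def _running_on_tpu():
    # Platform of the first visible device, e.g. "tpu", "gpu", or "cpu".
    return jax.devices()[0].platform == "tpu"


def _any_input_sharded(*tensors):
    # Illustrative heuristic: treat an operand as sharded when its sharding
    # is not fully replicated across the device mesh.
    return any(
        hasattr(t, "sharding") and not t.sharding.is_fully_replicated
        for t in tensors
    )

The PR then combines signals like these with _can_use_flash_attention to decide whether the fused kernel is safe to use, which is what the review below discusses.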

codecov-commenter commented on May 5, 2025

Codecov Report

Attention: Patch coverage is 15.78947% with 32 lines in your changes missing coverage. Please review.

Project coverage is 82.56%. Comparing base (4595239) to head (ace9536).

Files with missing lines        Patch %   Lines
keras/src/backend/jax/nn.py     15.78%    30 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21250      +/-   ##
==========================================
- Coverage   82.59%   82.56%   -0.04%     
==========================================
  Files         564      564              
  Lines       54556    54580      +24     
  Branches     8479     8486       +7     
==========================================
+ Hits        45062    45065       +3     
- Misses       7405     7426      +21     
  Partials     2089     2089              
Flag               Coverage Δ
keras              82.37% <15.78%> (-0.04%) ⬇️
keras-jax          63.63% <15.78%> (-0.03%) ⬇️
keras-numpy        58.76% <0.00%> (-0.03%) ⬇️
keras-openvino     32.97% <0.00%> (-0.02%) ⬇️
keras-tensorflow   64.05% <0.00%> (-0.03%) ⬇️
keras-torch        63.70% <0.00%> (-0.03%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@divyashreepathihalli (Collaborator) left a comment

Thanks for the PR!
The gating logic is a little confusing for me. I left some comments. Thanks!

)
is_tpu = jax.devices()[0].platform == "tpu"

# Determine flash attention compatibility
@divyashreepathihalli (Collaborator) commented on May 5, 2025

I am very confused about the logic here.

  • Why is FA disabled if the inputs are sharded?

flash_attention = (
    not inputs_sharded or is_tpu
) and _can_use_flash_attention(query, key, value, bias)
elif flash_attention and inputs_sharded and not is_tpu:
@divyashreepathihalli (Collaborator) commented on May 5, 2025

This condition is weird.
If FA is enabled, the inputs are sharded, and we are not running on TPU - you are disabling FA? Why? Can you please explain?
Following this you are checking if running on TPU and FA is enabled - this will never be true if the inputs are sharded - what's the point?
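
For reference, a standalone sketch of how the gating expression above resolves for each combination; gate_flash_attention is a hypothetical helper that mirrors the diff, with _can_use_flash_attention assumed to return True.

def gate_flash_attention(inputs_sharded, is_tpu, can_use_fa=True):
    # Mirrors `(not inputs_sharded or is_tpu) and _can_use_flash_attention(...)`.
    return (not inputs_sharded or is_tpu) and can_use_fa


for inputs_sharded in (False, True):
    for is_tpu in (False, True):
        print(f"sharded={inputs_sharded}, tpu={is_tpu} -> "
              f"flash_attention={gate_flash_attention(inputs_sharded, is_tpu)}")
# Only the sharded, non-TPU combination disables flash attention, which is
# the behaviour being questioned here.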


# `dot_product_attention` is only available in jax>=0.4.31
# Process mask for Splash Attention
custom_mask = None
Let's verify that numerics remain consistent with this updated masking code.
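
One possible check, sketched here rather than taken from the PR's tests: compare JAX's reference XLA attention against a fused implementation on identical causal-masked inputs. This assumes jax>=0.4.31 and, for the cudnn path, a GPU with cuDNN flash-attention support; on TPU the fused path would go through Splash Attention instead.

import numpy as np
import jax.numpy as jnp
from jax.nn import dot_product_attention  # available in jax>=0.4.31

rng = np.random.default_rng(0)
shape = (1, 128, 4, 64)  # (batch, seq_len, num_heads, head_dim)
q = jnp.asarray(rng.standard_normal(shape, dtype=np.float32))
k = jnp.asarray(rng.standard_normal(shape, dtype=np.float32))
v = jnp.asarray(rng.standard_normal(shape, dtype=np.float32))

# Reference (pure XLA) vs. fused implementation on the same masked inputs.
ref = dot_product_attention(q, k, v, is_causal=True, implementation="xla")
fused = dot_product_attention(q, k, v, is_causal=True, implementation="cudnn")
np.testing.assert_allclose(
    np.asarray(ref), np.asarray(fused), rtol=2e-2, atol=2e-2
)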

@pctablet505 (Collaborator, Author)

I've raised a new pull request, #21254, as I had to delete the repository. I have corrected the logic for when to enable flash attention there.
