Added LayoutLMv3 #2178
Conversation
@carrycooldude Thank you for the PR - the code structure does not match KerasHub style.
@@ -0,0 +1,152 @@
"""Tests for LayoutLMv3 backbone."""
Remove this docstring at the start of the file.
Adding general code-structuring comments.
Refer to any existing model implementation here: https://github.com/keras-team/keras-hub/tree/master/keras_hub/src/models The test cases should also follow the template we use in the models.
I have added a few comments; most of them are general practices we follow. Incorporate those suggested changes across all the files.
Also remove the files and directories that are not required, like the env directory.
@@ -0,0 +1 @@
Remove this directory and file
This still needs to be removed
@@ -0,0 +1,4 @@
"""LayoutLMv3 document classifier."""
This file needs to be empty; all imports are handled in the keras_hub/api directory and are generated automatically whenever you run git commit -m "<message>".
Make sure you run pre-commit install the first time.
pending
@@ -0,0 +1,15 @@
from keras_hub.src.models.layoutlmv3.layoutlmv3_backbone import LayoutLMv3Backbone
This file is mainly for registering presets; follow other models to understand the format we use.
pending
def __init__(
    self,
    vocab_size: int = 30522,
Remove type annotations from everywhere; we don't use type annotations in KerasHub.
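As a minimal before/after sketch of what this review comment asks for (class and parameter names are illustrative, not the actual PR code):

```python
# Hypothetical sketch; names are illustrative, not the actual PR code.
# Before, with type annotations (not KerasHub style):
#     def __init__(self, vocab_size: int = 30522, hidden_dim: int = 768):
# After, KerasHub style, with plain defaults and no annotations:
class LayoutLMv3Backbone:
    def __init__(self, vocab_size=30522, hidden_dim=768):
        self.vocab_size = vocab_size
        self.hidden_dim = hidden_dim
```

The defaults still document the expected values; the docstring, not the signature, carries the type information in KerasHub.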
The type annotations still need to be removed.
References:
- [LayoutLMv3 Paper](https://arxiv.org/abs/2204.08387)
- [LayoutLMv3 GitHub](https://github.com/microsoft/unilm/tree/master/layoutlmv3)
"""
This entire docstring needs to be inside the Backbone class.
"""
import os
from typing import Dict, List, Optional, Tuple, Union
Remove this once the type annotations are removed.
from .layoutlmv3_tokenizer import LayoutLMv3Tokenizer
from .layoutlmv3_presets import backbone_presets
from .layoutlmv3_transformer import LayoutLMv3TransformerLayer
Change from relative imports to absolute imports everywhere.
maintaining spatial relationships in documents.

Args:
    vocab_size: int, defaults to 30522. Size of the vocabulary.
The format we follow for Args is:
vocab_size: int. Size of the vocabulary. Defaults to 30522.
This format should be followed for all arguments; make sure each entry conveys the complete required information.
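A minimal sketch of that Args format inside a class docstring (class name and the second argument are illustrative, not the actual PR code):

```python
class LayoutLMv3Backbone:
    """LayoutLMv3 backbone model.

    Args:
        vocab_size: int. Size of the vocabulary. Defaults to 30522.
        hidden_dim: int. Dimensionality of the encoder layers. Defaults
            to 768.
    """

    def __init__(self, vocab_size=30522, hidden_dim=768):
        self.vocab_size = vocab_size
        self.hidden_dim = hidden_dim
```

Each entry reads: name, period-terminated type, description, then the default, so the rendered API docs stay uniform across models.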
```
"""

presets = backbone_presets
No need for this here.
You can keep the example, but we don't need `presets = backbone_presets`.
        self.use_rel_pos = use_rel_pos
        self.rel_pos_bins = rel_pos_bins
        self.max_rel_pos = max_rel_pos
        self.spatial_embedding_dim = spatial_embedding_dim
This should come last.
You can follow the order below:
# === Layers ===
# === Functional Model ===
# === Config ===
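A bare skeleton of that ordering (pure-Python sketch with illustrative names; real KerasHub backbones build actual Keras layers and call `super().__init__` in the functional-model section):

```python
class LayoutLMv3Backbone:
    # Hypothetical skeleton showing only the suggested __init__ ordering.
    def __init__(
        self,
        use_rel_pos=True,
        rel_pos_bins=32,
        max_rel_pos=128,
        spatial_embedding_dim=128,
    ):
        # === Layers ===
        # Instantiate embedding and transformer layers here.

        # === Functional Model ===
        # Wire the inputs through the layers and call super().__init__ here.

        # === Config ===
        # Config attributes are assigned last.
        self.use_rel_pos = use_rel_pos
        self.rel_pos_bins = rel_pos_bins
        self.max_rel_pos = max_rel_pos
        self.spatial_embedding_dim = spatial_embedding_dim
```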
@sachinprasadhs any updates on this one?
The review comments are still not addressed; could you please fix those before I suggest any more changes.
I think I fixed them; can you tell me which ones remain?
I've pointed out the comments where the previous reviews were not addressed.
Also, remove the layoutmv3_env directory.
# Copyright 2024 The Keras Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
remove this
"""LayoutLMv3 tokenizer implementation.

This tokenizer inherits from WordPieceTokenizer and adds LayoutLMv3-specific
functionality for document understanding tasks.

Example:
```python
# Initialize the tokenizer
tokenizer = LayoutLMv3Tokenizer.from_preset("layoutlmv3_base")

# Tokenize text
tokens = tokenizer("Hello world!")
```
"""
Remove this; move the example inside LayoutLMv3Tokenizer if necessary.
"""Tests for LayoutLMv3 tokenizer."""
Remove this
from ..layoutlmv3.layoutlmv3_tokenizer import LayoutLMv3Tokenizer
No relative imports
"""LayoutLMv3 transformer layer implementation.

This module implements the transformer layer used in the LayoutLMv3 model.
"""
Remove this
from typing import Dict, Optional
No need for this.
Description
This PR fixes the LayoutLMv3 checkpoint conversion script to properly handle different spatial embedding dimensions between the base and large models. The base model uses 128 dimensions for all spatial embeddings, while the large model uses 171 dimensions for x/y coordinates and 170 dimensions for height/width.
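The size difference can be sketched with a small helper (a hypothetical function for illustration, not part of the conversion script):

```python
def spatial_embedding_dims(model_size):
    # Hypothetical helper mirroring the dimensions described above:
    # base uses 128 for every spatial embedding, while large uses 171
    # for x/y coordinates and 170 for height/width.
    if model_size == "base":
        return {"x": 128, "y": 128, "h": 128, "w": 128}
    if model_size == "large":
        return {"x": 171, "y": 171, "h": 170, "w": 170}
    raise ValueError(f"Unknown model size: {model_size}")
```

Selecting the dimensions per model size this way avoids shape mismatches when copying the spatial embedding weights.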
Changes Made
Technical Details
The conversion script now:
Testing
Output Example