Merged
Changes from 1 commit
110 commits
b3bfa35
Add Ovis2 model and processor implementation
thisisiron Mar 28, 2025
51c9efd
Apply style fixes
thisisiron Mar 28, 2025
9891508
Add unit tests for Ovis2 image processing and processor
thisisiron Mar 29, 2025
fde1b2a
Refactor image processing functions for clarity and efficiency
thisisiron Mar 29, 2025
6b0e5d4
Add Ovis2 ImageProcessorFast
thisisiron Mar 30, 2025
6b8ae7e
Refactor Ovis2 code
thisisiron Mar 31, 2025
91f72b2
Refactor Ovis2 model components and update processor functionality
thisisiron Mar 31, 2025
aacbab3
Fix repo consistency issues for Ovis2: docstring, config cleanup
thisisiron Mar 31, 2025
7305a22
Update Ovis2 model integration tests
thisisiron Mar 31, 2025
355a91c
Update Ovis2 configuration and processing classes for improved docume…
thisisiron Mar 31, 2025
ac232e0
Remove duplicate entry for 'ovis2' in VLM_CLASS_NAMES
thisisiron Apr 1, 2025
16d71f8
Fix conflict
thisisiron Apr 1, 2025
a7b5094
Fix import order
thisisiron Apr 1, 2025
4d56043
Update image processor class names
thisisiron Apr 4, 2025
7f1cbc0
Update Ovis2 model structure
thisisiron May 5, 2025
a4e37e6
Refactor Ovis2 configuration
thisisiron May 5, 2025
11a2a09
Merge remote-tracking branch 'upstream/main' into add-ovis2
thisisiron May 5, 2025
5999659
Fix typos
thisisiron May 5, 2025
f66426c
Refactor Ovis2 model classes and remove unused code
thisisiron May 5, 2025
ae1ea0d
Fix typos
thisisiron May 5, 2025
4e540b5
Refactor Ovis2 model initialization
thisisiron May 5, 2025
83a7cca
Fix typos
thisisiron May 5, 2025
234edb2
Merge branch 'main' into add-ovis2
thisisiron May 29, 2025
db59777
Remove Ovis2 model mapping from MODEL_MAPPING_NAMES in modeling_auto.py
thisisiron May 29, 2025
b604a70
Add license and update type hints
thisisiron May 29, 2025
f26717d
Refactor token function and update docstring handling
thisisiron May 29, 2025
890abdc
Add license
thisisiron May 30, 2025
97e84a4
Merge branch 'main' into add-ovis2
thisisiron May 30, 2025
67a45ab
Merge branch 'main' into add-ovis2
thisisiron May 31, 2025
764e74f
Merge branch 'main' into add-ovis2
thisisiron Jun 2, 2025
178fc10
Add Ovis2 model support and update documentation
thisisiron Jun 23, 2025
2e278a4
Refactor Ovis2 model structure and enhance multimodal capabilities
thisisiron Jun 23, 2025
17afef9
Update Ovis2 weight mapping for consistency and clarity in key patterns
thisisiron Jun 23, 2025
1a87ab3
Remove unused 'grids' parameter from Ovis2 model and Update processin…
thisisiron Jun 23, 2025
f3c498e
Refactor Ovis2 model test structure to include Ovis2Model
thisisiron Jun 23, 2025
ec0ffd5
Merge branch 'main' into add-ovis2
thisisiron Jun 23, 2025
0f418e8
Add optional disable_grouping param to Ovis2ImageProcessorFast
thisisiron Jun 23, 2025
afd50aa
Refactor type hints in Ovis2 modules
thisisiron Jun 23, 2025
bdbcb22
Add licensing information in Ovis2 modules and tests
thisisiron Jun 25, 2025
cd369a6
Refactor Ovis2 model by removing unused methods
thisisiron Jun 25, 2025
b459f50
Refactor Ovis2 model tests by renaming test classes and removing skip…
thisisiron Jun 25, 2025
4ae2f70
Merge branch 'main' into add-ovis2
thisisiron Jun 25, 2025
57abe35
Refactor Ovis2 model output classes
thisisiron Jun 25, 2025
541dc7f
Refactor Ovis2 weight conversion and Update model embedding classes
thisisiron Jun 28, 2025
5e7846c
Merge branch 'main' into add-ovis2
thisisiron Jun 28, 2025
d13eaea
Refactor Ovis2 model imports and remove unused functions
thisisiron Jun 28, 2025
a10e3db
Enhance vision configuration extraction in Ovis2 weight conversion
thisisiron Jun 28, 2025
0501e0f
Refactor Ovis2 model's forward method to remove interpolation option
thisisiron Jun 28, 2025
c19231f
Update Ovis2 model documentation
thisisiron Jun 28, 2025
6083141
Merge branch 'main' into add-ovis2
thisisiron Jul 3, 2025
c27bf25
Refactor Ovis2 model input handling and tokenizer configuration
thisisiron Jul 4, 2025
58c0c0a
Merge branch 'main' into add-ovis2
thisisiron Jul 4, 2025
94fd529
Update return type hints in Ovis2 model
thisisiron Jul 4, 2025
8402244
Merge branch 'main' into add-ovis2
thisisiron Jul 4, 2025
2cd3837
Remove commented-out code
thisisiron Jul 4, 2025
1a5f6a9
fix config for tests and remove key mappings
Cyrilvallez Jul 8, 2025
e919722
Update tokenizer configuration to use add_special_tokens method
thisisiron Jul 8, 2025
2de5a94
Merge branch 'main' into add-ovis2
thisisiron Jul 8, 2025
e7e2464
Merge branch 'add-ovis2' of https://github.com/thisisiron/transformer…
thisisiron Jul 8, 2025
d9a8599
skip torchscript
Cyrilvallez Jul 8, 2025
94ba3aa
Fix image placeholder generation in Ovis2Processor
thisisiron Jul 8, 2025
8392223
Merge branch 'add-ovis2' of https://github.com/thisisiron/transformer…
thisisiron Jul 8, 2025
0f19c79
Merge branch 'main' into add-ovis2
thisisiron Jul 9, 2025
d335aaa
Refactor Ovis2 model to rename visual_table to visual_embeddings_table
thisisiron Jul 9, 2025
91e924c
Enhance Ovis2 model by adding vision_feature_select_strategy parameter
thisisiron Jul 9, 2025
3b02fe1
Refactor Ovis2 model weights conversion and architecture
thisisiron Jul 9, 2025
7376160
Refactor Ovis2 model by removing vision_feature_select_strategy param…
thisisiron Jul 9, 2025
683d3e9
Merge branch 'main' into add-ovis2
thisisiron Jul 9, 2025
a8ffbd4
Update Ovis2 model examples
thisisiron Jul 9, 2025
432a718
Refactor Ovis2 model
thisisiron Jul 12, 2025
1d4a1e9
Update Ovis2 model
thisisiron Jul 12, 2025
933cadd
Update Ovis2 model configuration
thisisiron Jul 12, 2025
9ecdd76
Merge branch 'main' into add-ovis2
thisisiron Jul 12, 2025
c024a10
Refactor Ovis2 model test setup
thisisiron Jul 12, 2025
5fb7870
Merge branch 'main' into add-ovis2
thisisiron Jul 16, 2025
3fcdb3a
Merge branch 'main' into add-ovis2
thisisiron Jul 27, 2025
a48468a
Refactor flash attention support
thisisiron Jul 28, 2025
5b02165
Merge branch 'main' into add-ovis2
thisisiron Jul 28, 2025
b5b2eb6
Refactor
thisisiron Jul 28, 2025
5e9c276
Fix typo
thisisiron Jul 28, 2025
0f3163a
Refactor
thisisiron Jul 28, 2025
0c13cfc
Refactor model classes
thisisiron Jul 29, 2025
8d495ee
Update expected output in Ovis2
thisisiron Jul 29, 2025
9d995c3
Refactor docstrings
thisisiron Jul 29, 2025
ccfdb43
Fix
thisisiron Jul 29, 2025
192cc10
Merge branch 'main' into add-ovis2
thisisiron Jul 29, 2025
cfe3a3b
Fix
thisisiron Jul 29, 2025
530aad0
Fix
thisisiron Jul 29, 2025
5d92825
Update input in tests
thisisiron Jul 29, 2025
7bb0e2b
Merge branch 'main' into add-ovis2
thisisiron Jul 29, 2025
c4a83b6
Fix
thisisiron Jul 29, 2025
ac31c2a
Merge branch 'main' into add-ovis2
thisisiron Jul 31, 2025
7b78029
Fix get_decoder method
thisisiron Jul 31, 2025
c230e72
Refactor
thisisiron Jul 31, 2025
3b0a94a
Refactor Ovis2
thisisiron Aug 8, 2025
7cff46b
Merge branch 'main' into add-ovis2
thisisiron Aug 8, 2025
9afdbad
Fix
thisisiron Aug 8, 2025
bd69fb5
Fix
thisisiron Aug 8, 2025
3ed0cb6
Fix test
thisisiron Aug 8, 2025
2b0621c
Add get_placeholder_mask
thisisiron Aug 8, 2025
11802a4
Merge branch 'main' into add-ovis2
thisisiron Aug 8, 2025
38b6f15
Merge branch 'main' into add-ovis2
thisisiron Aug 8, 2025
0e7d6ed
Refactor Ovis2 model tests
thisisiron Aug 14, 2025
7ce5c4e
Fix
thisisiron Aug 14, 2025
0c6571d
Refactor
thisisiron Aug 14, 2025
13010fa
Merge branch 'main' into add-ovis2
thisisiron Aug 14, 2025
2773182
Fix
thisisiron Aug 14, 2025
8642f7d
Fix
thisisiron Aug 14, 2025
62f2023
Fix Ovis2 test
thisisiron Aug 18, 2025
dd47f25
Merge branch 'main' into add-ovis2
thisisiron Aug 18, 2025
Add Ovis2 ImageProcessorFast
thisisiron committed Apr 1, 2025
commit 6b0e5d47eb97da6b4dcaee80b6a8fe31338cf82f
5 changes: 5 additions & 0 deletions src/transformers/__init__.py
@@ -1363,7 +1363,11 @@
_import_structure["models.llava"].append("LlavaImageProcessorFast")
_import_structure["models.llava_next"].append("LlavaNextImageProcessorFast")
_import_structure["models.llava_onevision"].append("LlavaOnevisionImageProcessorFast")
<<<<<<< HEAD
_import_structure["models.phi4_multimodal"].append("Phi4MultimodalImageProcessorFast")
=======
_import_structure["models.ovis2"].append("Ovis2ImageProcessorFast")
>>>>>>> Add Ovis2 ImageProcessorFast
_import_structure["models.pixtral"].append("PixtralImageProcessorFast")
_import_structure["models.qwen2_vl"].append("Qwen2VLImageProcessorFast")
_import_structure["models.rt_detr"].append("RTDetrImageProcessorFast")
@@ -6668,6 +6672,7 @@
from .models.llava_next import LlavaNextImageProcessorFast
from .models.llava_onevision import LlavaOnevisionImageProcessorFast
from .models.phi4_multimodal import Phi4MultimodalImageProcessorFast
from .models.ovis2 import Ovis2ImageProcessorFast
from .models.pixtral import PixtralImageProcessorFast
from .models.qwen2_vl import Qwen2VLImageProcessorFast
from .models.rt_detr import RTDetrImageProcessorFast
2 changes: 1 addition & 1 deletion src/transformers/models/auto/image_processing_auto.py
@@ -120,7 +120,7 @@
("nat", ("ViTImageProcessor", "ViTImageProcessorFast")),
("nougat", ("NougatImageProcessor",)),
("oneformer", ("OneFormerImageProcessor",)),
("ovis2", "Ovis2ImageProcessor"),
("ovis2", ("Ovis2ImageProcessor", "Ovis2ImageProcessorFast")),
("owlv2", ("Owlv2ImageProcessor",)),
("owlvit", ("OwlViTImageProcessor",)),
("paligemma", ("SiglipImageProcessor", "SiglipImageProcessorFast")),
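With both the slow and fast classes registered for "ovis2", the fast variant can be requested through the auto class. A minimal sketch, assuming torchvision is installed; the checkpoint id below is a placeholder, not one referenced in this PR:

from transformers import AutoImageProcessor

# Placeholder checkpoint id, used only for illustration.
processor = AutoImageProcessor.from_pretrained("your-org/ovis2-checkpoint", use_fast=True)
print(type(processor).__name__)  # expected to be Ovis2ImageProcessorFast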
1 change: 1 addition & 0 deletions src/transformers/models/ovis2/__init__.py
@@ -8,6 +8,7 @@
if TYPE_CHECKING:
from .configuration_ovis2 import *
from .image_processing_ovis2 import *
from .image_processing_ovis2_fast import *
from .modeling_ovis2 import *
from .processing_ovis2 import *
else:
21 changes: 16 additions & 5 deletions src/transformers/models/ovis2/image_processing_ovis2.py
@@ -103,7 +103,13 @@ def get_optimal_tiled_canvas(
return best_grid


def compute_patch_covering_area(left, upper, right, lower, side):
def compute_patch_covering_area(
left: int,
upper: int,
right: int,
lower: int,
side: int
) -> float:
w = right - left
h = lower - upper
w, h = max(w, h), min(w, h)
@@ -132,7 +138,7 @@ def split_image_into_grid(h: int, w: int, grid: Tuple[int, int]) -> List[Tuple[i
def get_min_tile_covering_grid(
image_size: Tuple[int, int],
target_patch_size: int,
max_image_tiles: int = 9,
max_image_tiles: int,
covering_threshold: float = 0.9,
) -> Tuple[int, int]:
image_height, image_width = image_size
@@ -221,6 +227,7 @@ def __init__(
image_mean: Optional[Union[float, List[float]]] = None,
image_std: Optional[Union[float, List[float]]] = None,
do_convert_rgb: bool = True,
use_covering_area_grid: bool = True,
**kwargs,
) -> None:
super().__init__(**kwargs)
@@ -363,6 +370,9 @@ def preprocess(
- `"channels_first"` or `ChannelDimension.FIRST`: image in (num_channels, height, width) format.
- `"channels_last"` or `ChannelDimension.LAST`: image in (height, width, num_channels) format.
- `"none"` or `ChannelDimension.NONE`: image in (height, width) format.
use_covering_area_grid (`bool`, *optional*, defaults to `True`):
Whether to use the covering area grid to determine the number of patches. Only has an effect if
`crop_to_patches` is set to `True`.
"""
do_resize = do_resize if do_resize is not None else self.do_resize
crop_to_patches = crop_to_patches if crop_to_patches is not None else self.crop_to_patches
@@ -459,6 +469,7 @@ def crop_image_to_patches(
use_covering_area_grid: bool = True,
patch_size: Union[Tuple, int, dict] = None,
data_format: ChannelDimension = None,
covering_threshold: float = 0.9,
):
"""
Crop the image to patches and return a list of cropped images.
@@ -489,13 +500,13 @@
patch_size_height, patch_size_width = patch_size["height"], patch_size["width"]
original_height, original_width = images.shape[-2:]

# calculate the number of patches from original ovis2
if use_covering_area_grid:
# Use the original OVIS2 approach: compute the minimal number of tiles that cover at least 90% of the image area
num_columns, num_rows = get_min_tile_covering_grid(
(original_height, original_width),
side=patch_size_height, # square patch size
target_patch_size=patch_size_height, # square patch size
max_image_tiles=max_patches,
covering_threshold=0.9,
covering_threshold=covering_threshold,
)
else:
# find the closest aspect ratio to the target
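For reference, a minimal sketch of calling the covering-area grid helper used above on its own, following the signature shown in this diff; the image size and patch side are illustrative values only:

from transformers.models.ovis2.image_processing_ovis2 import get_min_tile_covering_grid

# Pick the smallest tile grid (up to max_image_tiles) whose square tiles cover
# at least 90% of an 812x1456 image, as in the crop_image_to_patches hunk above.
num_columns, num_rows = get_min_tile_covering_grid(
    (812, 1456),            # (height, width) of the original image
    target_patch_size=384,  # square patch side, matching size["height"]
    max_image_tiles=12,     # plays the role of max_patches
    covering_threshold=0.9,
)
print(num_columns, num_rows)  # grid later used to resize and split the image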
261 changes: 261 additions & 0 deletions src/transformers/models/ovis2/image_processing_ovis2_fast.py
@@ -0,0 +1,261 @@
# coding=utf-8
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.



from typing import List, Optional, Tuple, Union

from ...image_processing_utils import BatchFeature
from ...image_processing_utils_fast import (
BASE_IMAGE_PROCESSOR_FAST_DOCSTRING,
BASE_IMAGE_PROCESSOR_FAST_DOCSTRING_PREPROCESS,
DefaultFastImageProcessorKwargs,
BaseImageProcessorFast,
reorder_images,
group_images_by_shape,
)
from ...image_utils import (
OPENAI_CLIP_MEAN,
OPENAI_CLIP_STD,
PILImageResampling,
SizeDict,
ImageInput,
)
from ...processing_utils import Unpack
from ...utils import (
TensorType,
add_start_docstrings,
is_torch_available,
is_torchvision_available,
is_torchvision_v2_available,
)

from .image_processing_ovis2 import (
get_optimal_tiled_canvas,
get_min_tile_covering_grid
)

if is_torch_available():
import torch

if is_torchvision_available():
if is_torchvision_v2_available():
from torchvision.transforms.v2 import functional as F
else:
from torchvision.transforms import functional as F


class Ovis2ImageProcessorKwargs(DefaultFastImageProcessorKwargs):
crop_to_patches: Optional[bool]
min_patches: Optional[int]
max_patches: Optional[int]
use_covering_area_grid: Optional[bool]


@add_start_docstrings(
"Constructs a fast Ovis2 image processor.",
BASE_IMAGE_PROCESSOR_FAST_DOCSTRING,
)
class Ovis2ImageProcessorFast(BaseImageProcessorFast):
resample = PILImageResampling.BICUBIC
image_mean = OPENAI_CLIP_MEAN
image_std = OPENAI_CLIP_STD
size = {"height": 384, "width": 384}
default_to_square = None
do_resize = True
do_rescale = True
do_normalize = True
do_convert_rgb = True
crop_to_patches = False
min_patches = 1
max_patches = 12
use_covering_area_grid = True
valid_kwargs = Ovis2ImageProcessorKwargs

@add_start_docstrings(
BASE_IMAGE_PROCESSOR_FAST_DOCSTRING_PREPROCESS,
"""
crop_to_patches (`bool`, *optional*, defaults to `False`):
Whether to crop the image to patches. Can be overridden by the `crop_to_patches` parameter in the
`preprocess` method.
min_patches (`int`, *optional*, defaults to 1):
The minimum number of patches to be extracted from the image. Only has an effect if `crop_to_patches` is
set to `True`. Can be overridden by the `min_patches` parameter in the `preprocess` method.
max_patches (`int`, *optional*, defaults to 12):
The maximum number of patches to be extracted from the image. Only has an effect if `crop_to_patches` is
set to `True`. Can be overridden by the `max_patches` parameter in the `preprocess` method.
use_covering_area_grid (`bool`, *optional*, defaults to `True`):
Whether to use the original OVIS2 covering-area heuristic to choose the tile grid. Only has an effect if
`crop_to_patches` is set to `True`. Can be overridden by the `use_covering_area_grid` parameter in the
`preprocess` method.
""",
)
def preprocess(self, images: ImageInput, **kwargs: Unpack[valid_kwargs]) -> BatchFeature:
return super().preprocess(images, **kwargs)

def crop_image_to_patches(
self,
images: "torch.Tensor",
min_patches: int,
max_patches: int,
use_covering_area_grid: bool = True,
covering_threshold: float = 0.9,
patch_size: Union[Tuple, int, dict] = None,
interpolation: Optional["F.InterpolationMode"] = None,
):
"""
Crop the images into patches and return the stacked patches together with the tile grid used for each image.
The number of patches and their grid arrangement are determined by the original image size,
the target patch size and the minimum and maximum number of patches. When `use_covering_area_grid` is
`False`, the grid aspect ratio is chosen to be the closest to the original image aspect ratio.

Args:
images (`torch.Tensor`):
The images to be cropped.
min_patches (`int`):
The minimum number of patches to be extracted from the image.
max_patches (`int`):
The maximum number of patches to be extracted from the image.
use_covering_area_grid (`bool`, *optional*, defaults to `True`):
Whether to choose the tile grid with the original OVIS2 covering-area heuristic instead of the
closest-aspect-ratio canvas.
covering_threshold (`float`, *optional*, defaults to 0.9):
Minimum fraction of the image area the selected tiles must cover. Only used when
`use_covering_area_grid` is `True`.
patch_size (`int`, `Tuple[int, int]`, `dict`, *optional*):
The size of the output patches.
interpolation (`InterpolationMode`, *optional*):
The interpolation mode to use when resizing the image to the patch grid.

Returns:
Tuple[`torch.Tensor`, List[List[int]]]: The cropped patches, with a thumbnail of the full image prepended
when more than one patch is produced, and a `[num_rows, num_columns]` grid entry per image.
"""
num_image = images.shape[0]
patch_size_height, patch_size_width = patch_size.height, patch_size.width
original_height, original_width = images.shape[-2:]

if use_covering_area_grid:
# Use the original OVIS2 approach: compute the minimal number of tiles that cover at least 90% of the image area
num_columns, num_rows = get_min_tile_covering_grid(
(original_height, original_width),
target_patch_size=patch_size_height, # square patch size
max_image_tiles=max_patches,
covering_threshold=covering_threshold,
)
else:
# find the closest aspect ratio to the target
num_columns, num_rows = get_optimal_tiled_canvas(
(original_height, original_width),
(patch_size_height, patch_size_width),
min_patches,
max_patches
)

# calculate the target width and height
target_width = patch_size_width * num_columns
target_height = patch_size_height * num_rows
num_blocks = num_columns * num_rows

# resize the image so that each patch is of patch_size
resized_image = self.resize(
images, SizeDict(height=target_height, width=target_width), interpolation=interpolation
)
# split the image into patches
processed_images = []
for i in range(num_blocks):
column = i % num_columns
row = i // num_columns
box = (
column * patch_size_width,
row * patch_size_height,
(column + 1) * patch_size_width,
(row + 1) * patch_size_height,
)
# split the image
patch_image = resized_image[..., box[1] : box[3], box[0] : box[2]]
processed_images.append(patch_image)

# prepend a full-image thumbnail whenever the image was split into multiple patches
if len(processed_images) != 1:
thumbnail_img = self.resize(
images, patch_size, interpolation=interpolation
)
processed_images.insert(0, thumbnail_img)

# stack patches per image: (num_patches, num_images, C, H, W) -> (num_images, num_patches, C, H, W)
processed_images = torch.stack(processed_images, dim=0).transpose(0, 1).contiguous()
grid = [[num_rows, num_columns] for _ in range(num_image)]

return processed_images, grid

def _preprocess(
self,
images: List["torch.Tensor"],
do_resize: bool,
size: SizeDict,
crop_to_patches: bool,
min_patches: int,
max_patches: int,
use_covering_area_grid: bool,
interpolation: Optional["F.InterpolationMode"],
do_center_crop: bool,
crop_size: SizeDict,
do_rescale: bool,
rescale_factor: float,
do_normalize: bool,
image_mean: Optional[Union[float, List[float]]],
image_std: Optional[Union[float, List[float]]],
return_tensors: Optional[Union[str, TensorType]],
) -> BatchFeature:
if crop_to_patches and max_patches > 1:
grouped_images, grouped_images_index = group_images_by_shape(images)
processed_images_grouped = {}
grids = {}
for shape, stacked_images in grouped_images.items():
stacked_images, grid = self.crop_image_to_patches(
stacked_images,
min_patches,
max_patches,
patch_size=size,
use_covering_area_grid=use_covering_area_grid,
interpolation=interpolation,
)
processed_images_grouped[shape] = stacked_images
grids[shape] = grid
images = reorder_images(processed_images_grouped, grouped_images_index)
images = [image for images_list in images for image in images_list]
grids = reorder_images(grids, grouped_images_index)
else:
grids = [[1, 1] for _ in range(len(images))]

# Group images by size for batched resizing
grouped_images, grouped_images_index = group_images_by_shape(images)
resized_images_grouped = {}
for shape, stacked_images in grouped_images.items():
if do_resize:
stacked_images = self.resize(image=stacked_images, size=size, interpolation=interpolation)
resized_images_grouped[shape] = stacked_images
resized_images = reorder_images(resized_images_grouped, grouped_images_index)

# Group images by size for further processing
# Needed in case do_resize is False, or resize returns images with different sizes
grouped_images, grouped_images_index = group_images_by_shape(resized_images)
processed_images_grouped = {}
for shape, stacked_images in grouped_images.items():
if do_center_crop:
stacked_images = self.center_crop(stacked_images, crop_size)
# Fused rescale and normalize
stacked_images = self.rescale_and_normalize(
stacked_images, do_rescale, rescale_factor, do_normalize, image_mean, image_std
)
processed_images_grouped[shape] = stacked_images

processed_images = reorder_images(processed_images_grouped, grouped_images_index)
processed_images = torch.stack(processed_images, dim=0) if return_tensors else processed_images
return BatchFeature(
data={"pixel_values": processed_images, "grids": grids}, tensor_type=return_tensors
)

__all__ = ["Ovis2ImageProcessorFast"]
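A minimal end-to-end sketch of the new fast processor, assuming torch and torchvision are available; the blank PIL image is a stand-in for real input, and the printed shapes are indicative rather than exact:

from PIL import Image
from transformers.models.ovis2.image_processing_ovis2_fast import Ovis2ImageProcessorFast

processor = Ovis2ImageProcessorFast()  # class defaults: 384x384 patches, CLIP mean/std, max_patches=12

image = Image.new("RGB", (1456, 812))  # stand-in for a real RGB image

outputs = processor(images=image, crop_to_patches=True, return_tensors="pt")
print(outputs["pixel_values"].shape)  # e.g. (1, num_patches + 1 thumbnail, 3, 384, 384)
print(outputs["grids"])               # one [num_rows, num_columns] entry per image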
7 changes: 7 additions & 0 deletions src/transformers/utils/dummy_torchvision_objects.py
@@ -94,6 +94,13 @@ def __init__(self, *args, **kwargs):


class Phi4MultimodalImageProcessorFast(metaclass=DummyObject):
_backends = ["torchvision"]

def __init__(self, *args, **kwargs):
requires_backends(self, ["torchvision"])


class Ovis2ImageProcessorFast(metaclass=DummyObject):
_backends = ["torchvision"]

def __init__(self, *args, **kwargs):