1.10 GIM
2025-05-12
Related issues:
https://github.com/xuelunshen/gim/issues/61
https://github.com/xuelunshen/gim/issues/50
System information:
❯ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.2 LTS
Release: 24.04
Codename: noble
❯ lspci | grep -i vga
01:00.0 VGA compatible controller: NVIDIA Corporation AD102 [GeForce RTX 4090] (rev a1)
05:00.0 VGA compatible controller: NVIDIA Corporation AD102 [GeForce RTX 4090] (rev a1)
Drawing on the experience in the issues above, I tried a number of approaches; the solution that finally worked:
mamba env create -f environment.yaml -n gim
mamba activate gim
pip install xformers==0.0.20
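In hindsight, a quick version check right after installing would have surfaced the mismatch before touching the demo. A minimal sketch (the expected version strings are just what this particular setup wants, not anything the GIM repo mandates):
import torch
import xformers

# the xformers 0.0.20 build in this environment was compiled against the
# torch 1.12.x that environment.yaml installs; anything newer means some
# later package install replaced it
print("torch   :", torch.__version__)
print("cuda ok :", torch.cuda.is_available())
print("xformers:", xformers.__version__)
assert torch.__version__.startswith("1.12"), "torch was upgraded by another package"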
Running the demo hit the following error:
/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.12.1 with CUDA 1106 (you have 2.7.0+cu126)
Python 3.9.16 (you have 3.9.21)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/triton/softmax.py:30: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@custom_fwd(cast_inputs=torch.float16 if _triton_softmax_fp16_enabled else None)
/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/triton/softmax.py:87: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
def backward(
/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/swiglu_op.py:107: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
def forward(cls, ctx, x, w1, b1, w2, b2, w3, b3):
/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/swiglu_op.py:128: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
def backward(cls, ctx, dx5):
/mnt/data/home/xxxx/matching/gim/networks/lightglue/models/matchers/lightglue.py:21: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Traceback (most recent call last):
File "/mnt/data/home/xxxx/matching/gim/demo.py", line 432, in <module>
dense_matches, dense_certainty = model.match(image0_, image1_)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/roma.py", line 836, in match
corresps = self.forward_symmetric(batch)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/roma.py", line 740, in forward_symmetric
feature_pyramid = self.extract_backbone_features(
File "/mnt/data/home/xxxx/matching/gim/networks/roma/roma.py", line 673, in extract_backbone_features
feature_pyramid = self.encoder(X, upsample=upsample)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/roma.py", line 625, in forward
dinov2_features_16 = self.dinov2_vitl14[0].forward_features(x)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/dino.py", line 532, in forward_features
x = blk(x)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/dino.py", line 166, in forward
x = x + attn_residual_func(x)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/dino.py", line 145, in attn_residual_func
return self.ls1(self.attn(self.norm1(x)))
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
File "/mnt/data/home/xxxx/matching/gim/networks/roma/dino.py", line 314, in forward
x = memory_efficient_attention(q, k, v, attn_bias=attn_bias)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 192, in memory_efficient_attention
return _memory_efficient_attention(
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 290, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 306, in _memory_efficient_attention_forward
op = _dispatch_fw(inp)
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 94, in _dispatch_fw
return _run_priority_list(
File "/mnt/data/home/xxxx/miniforge3/envs/gim/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 69, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(2, 2305, 16, 64) (torch.float32)
key : shape=(2, 2305, 16, 64) (torch.float32)
value : shape=(2, 2305, 16, 64) (torch.float32)
attn_bias : <class 'NoneType'>
p : 0.0
`flshattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
Operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
xFormers wasn't build with CUDA support
dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
requires A100 GPU
Only work on pre-MLIR triton for now
`cutlassF` is not supported because:
xFormers wasn't build with CUDA support
Operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
xFormers wasn't build with CUDA support
max(query.shape[-1] != value.shape[-1]) > 32
Operator wasn't built - see `python -m xformers.info` for more info
unsupported embed per head: 64
Key finding:
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.12.1 with CUDA 1106 (you have 2.7.0+cu126)
Python 3.9.16 (you have 3.9.21)
It turned out that the PyTorch installed earlier via environment.yaml had been replaced (by the 2.7.0+cu126 build seen in the warning) when pytorch-lightning was installed later.
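To see which package pulled in the newer torch, the installed metadata can be inspected directly; a small sketch using importlib.metadata (in this environment it would at least flag torchmetrics, which pins torch>=2.0.0, along with whatever pytorch-lightning resolved to):
import re
from importlib.metadata import distributions

# print every installed package that declares a dependency on torch,
# to find the install that replaced the pinned 1.12.1 build
for dist in distributions():
    for req in dist.requires or []:
        if re.match(r"torch\b", req):
            print(dist.metadata["Name"], "requires", req)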
Reinstall PyTorch:
pip uninstall torch
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchmetrics 1.7.0 requires torch>=2.0.0, but you have torch 1.12.1+cu113 which is incompatible.
Successfully installed torch-1.12.1+cu113 torchaudio-0.12.1+cu113 torchvision-0.13.1+cu113
pip reports an ERROR (the torchmetrics pin), but it does not matter; the demo runs now 🫡
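To double-check that the reinstalled torch/xformers pair actually restored the kernels the traceback complained about, the failing call can be reproduced in isolation with the same tensor shapes; a minimal sketch (with a working CUDA build of xformers, the cutlass path accepts float32, so this should no longer raise NotImplementedError):
import torch
from xformers.ops import memory_efficient_attention

# dummy tensors with the (batch, seq, heads, dim) shape from the error above
q = torch.randn(2, 2305, 16, 64, device="cuda", dtype=torch.float32)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = memory_efficient_attention(q, k, v)
print("memory_efficient_attention OK, output shape:", tuple(out.shape))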