Hi,

Is there any known issue with ONNX export when using timm models as encoders? I'm trying to export a model with timm-MobileNetV3 as the encoder and FPN as the decoder.
I'm running the script in a conda environment with python=3.7, pytorch=1.8, segmentation_models_pytorch=0.2.1.
Building network:
import torch
import segmentation_models_pytorch as smp

def build_fpn_mobv3(input_shape):
    # FPN decoder on top of a timm MobileNetV3-Large encoder
    model = smp.FPN(encoder_name="timm-mobilenetv3_large_100",
                    encoder_weights=None,
                    in_channels=3,
                    classes=1,
                    activation="sigmoid")
    # Sanity-check with a dummy forward pass on CPU
    shape = (1, 3) + input_shape
    x = torch.zeros(shape, dtype=torch.float32, device=torch.device("cpu"))
    model.eval()
    model(x)
    return model
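I call it like this (the input resolution here is just an example, not the one I actually train with):

model = build_fpn_mobv3((224, 224))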
Exporting trained model:
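Roughly like this (a minimal sketch of the export call; the output path, input resolution, and tensor names are placeholders):

import torch

model = build_fpn_mobv3((224, 224))
dummy = torch.zeros((1, 3, 224, 224), dtype=torch.float32)
torch.onnx.export(
    model,
    dummy,
    "fpn_mobilenetv3.onnx",  # placeholder output path
    opset_version=10,        # the error below is raised with opset <= 10
    input_names=["input"],   # placeholder tensor names
    output_names=["mask"],
)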
The error is:
RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11.
Is there any workaround to export using opset version 10? I'm able to export with other encoders, just not the timm ones.
Thanks for your attention and time.