
123859 - rpi4b aidge

Summary: The pipeline failed, and error.log is filled.

Model Details

  • Model name: efficientnet-lite4-11-int8.onnx
  • Model URL: Download here
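For context on the recurring warning in the log below: ONNX `DequantizeLinear` can carry either a single scale for the whole weight tensor (per-tensor, i.e. layerwise) or one scale per output channel (per-axis, i.e. channelwise), and Aidge currently only handles the layerwise case for `QLinearConv` weights. A minimal pure-Python sketch of the difference between the two dequantization modes (illustrative only, not Aidge or ONNX Runtime code):

```python
def dequantize_per_tensor(q, scale, zero_point=0):
    """Layerwise: a single scale/zero-point pair applies to every value."""
    return [[(v - zero_point) * scale for v in row] for row in q]

def dequantize_per_channel(q, scales, zero_points=None):
    """Channelwise: one scale/zero-point per output channel (here, per row)."""
    if zero_points is None:
        zero_points = [0] * len(scales)
    return [[(v - zp) * s for v in row]
            for row, s, zp in zip(q, scales, zero_points)]

# int8 weights for two output channels of a toy conv kernel
q_weights = [[10, -20],
             [30, -40]]

layerwise   = dequantize_per_tensor(q_weights, 0.5)
channelwise = dequantize_per_channel(q_weights, [0.5, 0.25])
```

The model's `QLinearConv` nodes use the channelwise form, so the importer falls back to a `GenericOperator` for each of them instead of a native convolution.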

Logs Details

user.log
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_stem_conv2d_Conv2D_quant] of type [QLinearConv] as
[NOTICE]   a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [0, 0, 1, 1]
[NOTICE]   	* strides : [2, 2]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 32
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_0_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_1_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 144
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [0, 0, 1, 1]
[NOTICE]   	* strides : [2, 2]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_2_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 192
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_2_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_3_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 192
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_3_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_4_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 192
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_4_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_5_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 192
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [1, 1, 2, 2]
[NOTICE]   	* strides : [2, 2]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_6_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 336
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_6_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_7_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 336
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_7_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_8_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 336
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_8_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_9_conv2d_Conv2D_quant] of type [QLinearConv]
[NOTICE]   as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 336
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [0, 0, 1, 1]
[NOTICE]   	* strides : [2, 2]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_10_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_10_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_11_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_11_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_12_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_12_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_13_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_13_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_14_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_14_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_15_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 672
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_16_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_16_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_17_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_17_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_18_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_18_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_19_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_19_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_20_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_20_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_21_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 960
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [1, 1, 2, 2]
[NOTICE]   	* strides : [2, 2]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_22_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_22_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_23_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_23_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_24_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_24_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_25_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_25_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_26_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_26_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_27_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_27_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_28_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [5, 5]
[NOTICE]   	* pads : [2, 2, 2, 2]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_28_Add_quant] of type [QLinearAdd] as a
[NOTICE]   GenericOperator.

error.log
MODEL : efficientnet-lite4-11-int8.onnx

===============
ONNX Graph
===============

images:0 [1, 224, 224, 3]

===============
Aidge Graph
===============

  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_head_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5', optype='Transpose', parents: [0], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_head_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear', optype='Dequantizer', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_Squeeze_0_quantized', optype='Squeeze', parents: [1, 0], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear', optype='Quantizer', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='gemm_MatMul_quant', optype='QLinearMatMul', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='gemm_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_quant', optype='QLinearAveragePool', parents: [1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_22_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='Softmax', optype='Softmax', parents: [1], children: [[]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_16_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_6_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_1_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_2_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_10_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])

===============
Supported nodes
===============

Native operators: 611 (6 types)
- Dequantizer: 1
- Producer: 606
- Quantizer: 1
- Softmax: 1
- Squeeze: 1
- Transpose: 1
Generic operators: 117 (4 types)
- QLinearAdd: 24
- QLinearAveragePool: 1
- QLinearConv: 91
- QLinearMatMul: 1
Native types coverage: 60.0% (6/10)
Native operators coverage: 83.9% (611/728)
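The two coverage percentages above follow directly from the per-type counts in the summary. A minimal sketch recomputing them (the dictionaries mirror the "Supported nodes" counts from the log; the `coverage` helper is ours, not part of the Aidge API):

```python
# Per-type node counts as reported in the "Supported nodes" summary.
native = {"Dequantizer": 1, "Producer": 606, "Quantizer": 1,
          "Softmax": 1, "Squeeze": 1, "Transpose": 1}
generic = {"QLinearAdd": 24, "QLinearAveragePool": 1,
           "QLinearConv": 91, "QLinearMatMul": 1}

def coverage(native_counts, generic_counts):
    """Return (type coverage, operator coverage) as percentages."""
    n_native_ops = sum(native_counts.values())                  # 611
    n_total_ops = n_native_ops + sum(generic_counts.values())   # 728
    n_native_types = len(native_counts)                         # 6
    n_total_types = n_native_types + len(generic_counts)        # 10
    return (100 * n_native_types / n_total_types,
            100 * n_native_ops / n_total_ops)

type_cov, op_cov = coverage(native, generic)
print(f"Native types coverage: {type_cov:.1f}%")      # 60.0%
print(f"Native operators coverage: {op_cov:.1f}%")    # 83.9%
```

Note that `Producer` nodes (scales, zero-points, fused weights) dominate the native count, so the 83.9% operator coverage understates how much of the compute graph falls back to generic `QLinear*` operators.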

==================
Graph manipulation
==================

Remove flatten
Fuse batchnorm
Expand metaop
Fuse to metaop

===============
New Aidge Graph
===============

  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='gemm_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_CastOut', optype='Cast', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_CastIn', optype='Cast', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_RoundQuant', optype='Round', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_ClipQuantMax', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_ClipQuantMin', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_ClipQuant', optype='Clip', parents: [1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ZeroPoint_1', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_AddQuant', optype='Add', parents: [1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ScalingFactor_1', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_MulQuant', optype='Mul', parents: [1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='Softmax', optype='Softmax', parents: [1], children: [[]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5', optype='Transpose', parents: [0], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_head_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_head_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ZeroPoint', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ScalingFactor', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear_MulDeQuant', optype='Mul', parents: [1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear_SubDeQuant', optype='Sub', parents: [1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_Squeeze_0_quantized', optype='Squeeze', parents: [1, 0], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_29_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='gemm_MatMul_quant', optype='QLinearMatMul', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_head_AvgPool_quant', optype='QLinearAveragePool', parents: [1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_27_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_28_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_27_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_22_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_26_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_22_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_24_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_22_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_21_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_23_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear_CastIn', optype='Cast', parents: [1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_6_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_2_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_16_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_1_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_10_tpu_batch_normalization_ReadVariableOp_1_0_quantized', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_Add_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_4_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_Add_0_zero_point', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_bias_fused_bn_quantized', optype='Producer', children: [[1]])
[NOTICE]   GenericOperator.
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_29_conv2d_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Trying to load node named
[WARNING]   [efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_quant] of type
[WARNING]   [QLinearConv].
[WARNING]   Loading node using a [GenericOperator].
[WARNING]   Please report this issue at https://gitlab.eclipse.org/eclipse/aidge/aidge_onnx by
[WARNING]   providing your ONNX model and the following error:
[WARNING]   "ONNX_NODE_CONVERTER_ returned: list index out of range"
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1632
[NOTICE]   	* kernel_shape : [3, 3]
[NOTICE]   	* pads : [1, 1, 1, 1]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_quant] of type
[NOTICE]   [QLinearConv] as a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearConv[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [efficientnet-lite4_model_head_conv2d_Conv2D_quant] of type [QLinearConv] as
[NOTICE]   a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* dilations : [1, 1]
[NOTICE]   	* group : 1
[NOTICE]   	* kernel_shape : [1, 1]
[NOTICE]   	* pads : [0, 0, 0, 0]
[NOTICE]   	* strides : [1, 1]
[NOTICE] - Loaded node [efficientnet-lite4_model_head_AvgPool_quant] of type [QLinearAveragePool] as
[NOTICE]   a GenericOperator.
[NOTICE]   	* auto_pad : b'NOTSET'
[NOTICE]   	* ceil_mode : 0
[NOTICE]   	* count_include_pad : 0
[NOTICE]   	* kernel_shape : [7, 7]
[NOTICE]   	* strides : [1, 1]
[WARNING] - Aidge currently only supports layerwise scaling and not channelwise for
[WARNING]   QLinearMatMul[Weight DequantizeLinear] node. This node will be filled by a
[WARNING]   GenericOperator.
[NOTICE] - Loaded node [gemm_MatMul_quant] of type [QLinearMatMul] as a GenericOperator.
[NOTICE] - Loaded node [gemm_Add_quant] of type [QLinearAdd] as a GenericOperator.
[NOTICE] - Node name "ScalingFactor" is a duplicate, renaming it to ScalingFactor_1.
[NOTICE] - Node name "ZeroPoint" is a duplicate, renaming it to ZeroPoint_1.
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING]   (last message repeated 116 more times)
[ERROR] - Unable to forward data type for node efficientnet-lite4_model_stem_conv2d_Conv2D_quant (of
[ERROR]   type QLinearConv)
[ERROR]   (last message repeated 2 more times)
[WARNING] - Unable to forward data type (circular dependency and/or wrong dimensions and/or data
[WARNING]   dependent dimension?). Unable to compute output data type for nodes
[WARNING]   ["efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_23_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_21_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_2_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_4_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_25_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_5_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_7_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_8_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_6_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_10_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_7_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_9_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_8_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_11_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_13_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_12_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_10_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_14_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_11_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_13_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_12_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_16_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_15_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_14_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_17_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "gemm_Add_quant (QLinearAdd)", "efficientnet-lite4_model_blocks_17_conv2d_Conv2D_quant
[WARNING]   (QLinearConv)", "efficientnet-lite4_model_blocks_18_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_19_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_16_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_18_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_23_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_20_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_19_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_22_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_20_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_24_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_22_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_26_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_24_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_25_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_27_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_28_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_26_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_27_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_28_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_head_AvgPool_quant (QLinearAveragePool)",
[WARNING]   "efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "gemm_MatMul_quant (QLinearMatMul)",
[WARNING]   "efficientnet-lite4_model_blocks_29_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_head_Squeeze_0_quantized (Squeeze)",
[WARNING]   "efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear_SubDeQuant (Sub)",
[WARNING]   "efficientnet-lite4_model_head_dense_BiasAdd_0_DequantizeLinear_MulDeQuant (Mul)",
[WARNING]   "efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_head_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_1_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_stem_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_6_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_2_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_0_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_4_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_quant (QLinearConv)"].
[WARNING] - GenericOperator: cannot compute output dims, no ComputeDimsFunc function provided.
[WARNING]   (last message repeated 2 more times)
[WARNING] - Unable to forward dimensions (circular dependency and/or wrong dimensions and/or data
[WARNING]   dependent dimension?). Unable to compute output dims for nodes
[WARNING]   (node list identical to the "Unable to forward data type" warning above)
[WARNING]   "efficientnet-lite4_model_head_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_1_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_stem_conv2d_Conv2D_quant (QLinearConv)",
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_2_Add_quant', optype='QLinearAdd', parents: [1, 1, 1, 1, 1, 1, 1, 1], children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_bias_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_scale', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_zero_point', optype='Producer', children: [[1, 1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_zero_point', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_scale', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_quantized', optype='Producer', children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant', optype='QLinearConv', parents: [1, 1, 1, 1, 1, 1, 1, 1, 1], children: [[1]])
  Node(name='efficientnet-lite4_model_blocks_6_Add_0_scale', optype='Producer', children: [[1, 1, 1]])
  Node(name='efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_scale', optype='Producer', children: [[1]])

===============
Supported nodes 2
===============

Native operators: 624 (10 types)
- Add: 1
- Cast: 3
- Clip: 1
- Mul: 2
- Producer: 612
- Round: 1
- Softmax: 1
- Squeeze: 1
- Sub: 1
- Transpose: 1
Generic operators: 117 (4 types)
- QLinearAdd: 24
- QLinearAveragePool: 1
- QLinearConv: 91
- QLinearMatMul: 1
Native types coverage: 71.4% (10/14)
Native operators coverage: 84.2% (624/741)
  (defaultdict(<class 'int'>, {'Producer': 612, 'Cast': 3, 'Round': 1, 'Clip': 1, 'Add': 1, 'Mul': 2, 'Softmax': 1, 'Transpose': 1, 'Sub': 1, 'Squeeze': 1}), defaultdict(<class 'int'>, {'QLinearConv': 91, 'QLinearAdd': 24, 'QLinearMatMul': 1, 'QLinearAveragePool': 1}))
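For reference, the two coverage percentages in the summary above follow directly from the reported operator tallies; a minimal sketch in plain Python (independent of Aidge, using the counts exactly as logged):

```python
from collections import Counter

# Operator tallies as reported in the "Supported nodes" summary
native = Counter({"Producer": 612, "Cast": 3, "Round": 1, "Clip": 1, "Add": 1,
                  "Mul": 2, "Softmax": 1, "Transpose": 1, "Sub": 1, "Squeeze": 1})
generic = Counter({"QLinearConv": 91, "QLinearAdd": 24,
                   "QLinearMatMul": 1, "QLinearAveragePool": 1})

n_native, n_generic = sum(native.values()), sum(generic.values())  # 624, 117
type_cov = len(native) / (len(native) + len(generic))              # 10 / 14
op_cov = n_native / (n_native + n_generic)                         # 624 / 741

print(f"Native types coverage: {type_cov:.1%} ({len(native)}/{len(native) + len(generic)})")
print(f"Native operators coverage: {op_cov:.1%} ({n_native}/{n_native + n_generic})")
# → Native types coverage: 71.4% (10/14)
# → Native operators coverage: 84.2% (624/741)
```

The 117 generic operators (the four QLinear* types plus the QLinearAveragePool) are exactly the nodes the importer fell back to GenericOperator for, which is why the type coverage (10/14) and operator coverage (624/741) land below 100%.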

===============
Compile
===============

OK

===============
Create Scheduler
===============

OK

===============
Name nodes
===============

efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_3_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_3_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_3_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_4_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_5_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_5_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_5_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_5_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_6_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_2_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_2_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_2_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
efficientnet-lite4_model_blocks_3_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_3_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_3_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_3_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_stem_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_scale (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_zero_point (Producer)
efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_0_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_0_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_0_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_0_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_2_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_2_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_0_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_1_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_1_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_1_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_4_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_4_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_4_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_6_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_6_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_6_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_6_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_6_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_6_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_7_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_7_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_7_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_7_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_7_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_7_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_8_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_8_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_8_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_8_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_9_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_9_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_9_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_9_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_10_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_10_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_9_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_10_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_10_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_10_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_10_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_10_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_11_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_11_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_11_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_11_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_11_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_11_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_12_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_12_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_12_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_12_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_12_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_12_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_13_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_13_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_13_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_13_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_13_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_13_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_14_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_14_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_14_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_14_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_14_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_14_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_15_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_15_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_15_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_16_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_16_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_16_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_16_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_16_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_16_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_16_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_17_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_17_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_17_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_17_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_17_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_18_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_18_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_18_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_18_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_18_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_18_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_18_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_19_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_19_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_19_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_19_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_19_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_19_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_20_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_20_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_20_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_20_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_20_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_20_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_20_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_21_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_21_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_21_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_21_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_21_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_blocks_22_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_blocks_22_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_22_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_22_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_22_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_22_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_22_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_23_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_23_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_23_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_23_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_23_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_23_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_23_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_24_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_24_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_24_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_24_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_24_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_25_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_25_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_25_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_25_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_25_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_25_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_26_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_26_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_26_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_26_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_26_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_26_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_27_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_27_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_27_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_28_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_27_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_27_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_27_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_28_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_28_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_28_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_28_Add_0_zero_point (Producer)
efficientnet-lite4_model_blocks_28_Add_0_scale (Producer)
efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_29_conv2d_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_29_conv2d_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_bias_fused_bn_quantized (Producer)
earConv)",
[WARNING]   "efficientnet-lite4_model_blocks_6_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_2_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_2_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_1_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_17_depthwise_conv2d_depthwise_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_0_conv2d_Conv2D_quant (QLinearConv)",
[WARNING]   "efficientnet-lite4_model_blocks_4_Add_quant (QLinearAdd)",
[WARNING]   "efficientnet-lite4_model_blocks_1_conv2d_1_Conv2D_quant (QLinearConv)"].
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING]   (message repeated once per remaining GenericOperator node; duplicates elided)
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - Generiefficientnet-lite4_model_blocks_29_depthwise_conv2d_depthwise_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_1_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_scale (Producer)
efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_weights_fused_bn_zero_point (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_blocks_29_conv2d_1_Conv2D_bias_fused_bn_quantized (Producer)
efficientnet-lite4_model_blocks_29_tpu_batch_normalization_2_FusedBatchNormV3_0_scale (Producer)
ScalingFactor (Producer)
ZeroPoint (Producer)
ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_quantized (Producer)
ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_scale (Producer)
ConvBnFusion_W_efficientnet-lite4_model_head_conv2d_Conv2D_ReadVariableOp_0_zero_point (Producer)
ConvBnFusion_BN_B_efficientnet-lite4_model_head_tpu_batch_normalization_ReadVariableOp_1_0_quantized (Producer)
efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_zero_point (Producer)
efficientnet-lite4_model_head_tpu_batch_normalization_FusedBatchNormV3_0_scale (Producer)
efficientnet-lite4_model_head_AvgPool_0_zero_point (Producer)
efficientnet-lite4_model_head_AvgPool_0_scale (Producer)
efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_quantized (Producer)
efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_scale (Producer)
efficientnet-lite4_model_head_dense_MatMul_ReadVariableOp_0_zero_point (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_zero_point (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_0_MatMul_scale (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_0_zero_point (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_0_scale (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_scale (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_quantized (Producer)
efficientnet-lite4_model_head_dense_BiasAdd_ReadVariableOp_0_zero_point (Producer)
ScalingFactor_1 (Producer)
ZeroPoint_1 (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_ClipQuantMin (Producer)
efficientnet-lite4_model_stem_conv2d_Conv2D__5_0_QuantizeLinear_ClipQuantMax (Producer)
_Transpose_0 (Transpose)
_Cast_0 (Cast)
_Mul_0 (Mul)
_Round_0 (Round)
_Add_0 (Add)
_Clip_0 (Clip)
_Cast_1 (Cast)
_QLinearConv_0 (QLinearConv)
_QLinearConv_1 (QLinearConv)
_QLinearConv_2 (QLinearConv)
_QLinearConv_3 (QLinearConv)
_QLinearConv_4 (QLinearConv)
_QLinearConv_5 (QLinearConv)
_QLinearConv_6 (QLinearConv)
_QLinearConv_7 (QLinearConv)
_QLinearConv_8 (QLinearConv)
_QLinearAdd_0 (QLinearAdd)
_QLinearConv_9 (QLinearConv)
_QLinearConv_10 (QLinearConv)
_QLinearConv_11 (QLinearConv)
_QLinearAdd_1 (QLinearAdd)
_QLinearConv_12 (QLinearConv)
_QLinearConv_13 (QLinearConv)
_QLinearConv_14 (QLinearConv)
_QLinearAdd_2 (QLinearAdd)
_QLinearConv_15 (QLinearConv)
_QLinearConv_16 (QLinearConv)
_QLinearConv_17 (QLinearConv)
_QLinearConv_18 (QLinearConv)
_QLinearConv_19 (QLinearConv)
_QLinearConv_20 (QLinearConv)
_QLinearAdd_3 (QLinearAdd)
_QLinearConv_21 (QLinearConv)
_QLinearConv_22 (QLinearConv)
_QLinearConv_23 (QLinearConv)
_QLinearAdd_4 (QLinearAdd)
_QLinearConv_24 (QLinearConv)
_QLinearConv_25 (QLinearConv)
_QLinearConv_26 (QLinearConv)
_QLinearAdd_5 (QLinearAdd)
_QLinearConv_27 (QLinearConv)
_QLinearConv_28 (QLinearConv)
_QLinearConv_29 (QLinearConv)
_QLinearConv_30 (QLinearConv)
_QLinearConv_31 (QLinearConv)
_QLinearConv_32 (QLinearConv)
_QLinearAdd_6 (QLinearAdd)
_QLinearConv_33 (QLinearConv)
_QLinearConv_34 (QLinearConv)
_QLinearConv_35 (QLinearConv)
_QLinearAdd_7 (QLinearAdd)
_QLinearConv_36 (QLinearConv)
_QLinearConv_37 (QLinearConv)
_QLinearConv_38 (QLinearConv)
_QLinearAdd_8 (QLinearAdd)
_QLinearConv_39 (QLinearConv)
_QLinearConv_40 (QLinearConv)
_QLinearConv_41 (QLinearConv)
_QLinearAdd_9 (QLinearAdd)
_QLinearConv_42 (QLinearConv)
_QLinearConv_43 (QLinearConv)
_QLinearConv_44 (QLinearConv)
_QLinearAdd_10 (QLinearAdd)
_QLinearConv_45 (QLinearConv)
_QLinearConv_46 (QLinearConv)
_QLinearConv_47 (QLinearConv)
_QLinearConv_48 (QLinearConv)
_QLinearConv_49 (QLinearConv)
_QLinearConv_50 (QLinearConv)
_QLinearAdd_11 (QLinearAdd)
_QLinearConv_51 (QLinearConv)
_QLinearConv_52 (QLinearConv)
_QLinearConv_53 (QLinearConv)
_QLinearAdd_12 (QLinearAdd)
_QLinearConv_54 (QLinearConv)
_QLinearConv_55 (QLinearConv)
_QLinearConv_56 (QLinearConv)
_QLinearAdd_13 (QLinearAdd)
_QLinearConv_57 (QLinearConv)
_QLinearConv_58 (QLinearConv)
_QLinearConv_59 (QLinearConv)
_QLinearAdd_14 (QLinearAdd)
_QLinearConv_60 (QLinearConv)
_QLinearConv_61 (QLinearConv)
_QLinearConv_62 (QLinearConv)
_QLinearAdd_15 (QLinearAdd)
_QLinearConv_63 (QLinearConv)
_QLinearConv_64 (QLinearConv)
_QLinearConv_65 (QLinearConv)
_QLinearConv_66 (QLinearConv)
_QLinearConv_67 (QLinearConv)
_QLinearConv_68 (QLinearConv)
_QLinearAdd_16 (QLinearAdd)
_QLinearConv_69 (QLinearConv)
_QLinearConv_70 (QLinearConv)
_QLinearConv_71 (QLinearConv)
_QLinearAdd_17 (QLinearAdd)
_QLinearConv_72 (QLinearConv)
_QLinearConv_73 (QLinearConv)
_QLinearConv_74 (QLinearConv)
_QLinearAdd_18 (QLinearAdd)
_QLinearConv_75 (QLinearConv)
_QLinearConv_76 (QLinearConv)
_QLinearConv_77 (QLinearConv)
_QLinearAdd_19 (QLinearAdd)
_QLinearConv_78 (QLinearConv)
_QLinearConv_79 (QLinearConv)
_QLinearConv_80 (QLinearConv)
_QLinearAdd_20 (QLinearAdd)
_QLinearConv_81 (QLinearConv)
_QLinearConv_82 (QLinearConv)
_QLinearConv_83 (QLinearConv)
_QLinearAdd_21 (QLinearAdd)
_QLinearConv_84 (QLinearConv)
_QLinearConv_85 (QLinearConv)
_QLinearConv_86 (QLinearConv)
_QLinearAdd_22 (QLinearAdd)
_QLinearConv_87 (QLinearConv)
_QLinearConv_88 (QLinearConv)
_QLinearConv_89 (QLinearConv)
_QLinearConv_90 (QLinearConv)
_QLinearAveragePool_0 (QLinearAveragePool)
_Squeeze_0 (Squeeze)
_QLinearMatMul_0 (QLinearMatMul)
_QLinearAdd_23 (QLinearAdd)
_Cast_2 (Cast)
_Sub_0 (Sub)
_Mul_1 (Mul)
_Softmax_0 (Softmax)

===============
Set backend
===============

cOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[WARNING] - GenericOperator::setBackend(): cannot set backend for a generic operator, as no
[WARNING]   implementation has been provided!
[ERROR] - Assertion failed: exists(key) in /opt/aidge/aidge/aidge_core/include/aidge/utils/Registrar.hpp:87
[FATAL] - missing or invalid registrar key: "export_cpp" for registrable object N5Aidge8Round_OpE
[FATAL]   Did you include/import the corresponding module?
[FATAL]   If so, it is possible that the object is not yet supported.
[FATAL]   
[FATAL]   Available registrar keys are:
[FATAL]       cpu
Traceback (most recent call last):
  File "/app/AI_Project/01_generate_cpp.py", line 115, in <module>
    model_aidge.set_backend(aidge_export_cpp.ExportLibCpp._name)
RuntimeError: missing or invalid registrar key: "export_cpp" for registrable object N5Aidge8Round_OpE
Did you include/import the corresponding module?
If so, it is possible that the object is not yet supported.

Available registrar keys are:
    cpu
Error: export_model/data directory does not exist.
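The fatal error above is a registrar lookup miss: Aidge's `Registrar` maps an (operator type, backend key) pair to an implementation, and `N5Aidge8Round_OpE` (the mangled name of Aidge's `Round_Op`, produced here when loading the quantized model) is only registered for the `cpu` key, not for `export_cpp`. A minimal sketch of that lookup pattern, and of a guard that lists unsupported operators before calling `set_backend`, could look like this. The names `Registrar` (as a plain dict-backed class) and `unsupported_ops` are illustrative, not the actual Aidge Python API:

```python
# Illustrative sketch of a registrar-style (op_type, backend) -> impl lookup,
# mirroring the failure in the log above. Not the real Aidge API.

class Registrar:
    """Maps (op_type, backend_key) pairs to implementations."""
    _impls = {}

    @classmethod
    def register(cls, op_type, backend, impl):
        cls._impls[(op_type, backend)] = impl

    @classmethod
    def keys_for(cls, op_type):
        """Backend keys registered for a given operator type."""
        return sorted(b for (t, b) in cls._impls if t == op_type)

    @classmethod
    def get(cls, op_type, backend):
        if (op_type, backend) not in cls._impls:
            # Same failure mode as "missing or invalid registrar key" above.
            raise RuntimeError(
                f'missing or invalid registrar key: "{backend}" for {op_type}\n'
                f"Available registrar keys are: {cls.keys_for(op_type)}"
            )
        return cls._impls[(op_type, backend)]


def unsupported_ops(op_types, backend):
    """Operators that would make a set_backend(backend) call fail."""
    return [t for t in op_types if (t, backend) not in Registrar._impls]


# Round_Op only has a "cpu" implementation, as in the log; Conv has both.
Registrar.register("Round_Op", "cpu", "cpu_round_kernel")
Registrar.register("Conv_Op", "cpu", "cpu_conv_kernel")
Registrar.register("Conv_Op", "export_cpp", "cpp_conv_export")

# Guarding before export would surface the problem without a hard failure:
print(unsupported_ops(["Conv_Op", "Round_Op"], "export_cpp"))  # ['Round_Op']
```

Such a pre-check in `01_generate_cpp.py` would let the pipeline report every unsupported operator at once (here the `Round`/`Cast`/`QLinear*` nodes loaded as `GenericOperator`) instead of aborting on the first registrar miss.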
error.log
sed: can't read export_model/Makefile: No such file or directory
sed: can't read export_model/Makefile: No such file or directory
sed: can't read export_model/Makefile: No such file or directory
sed: can't read export_model/Makefile: No such file or directory
./AI_Project/03_build.sh: line 17: cd: export_model: No such file or directory
AI_Build  AI_Deploy  AI_Manager  AI_Project  AI_Support  ConvNet.onnx  MLP_MNIST.onnx  MobileNet-v2.onnx  README.md  __pycache__  config.json  docker  efficientnetlite411int8.onnx  examples  exit_functions.sh  mnist.onnx  mnist_test_input_type2.bin  model_1D_classifier.onnx  model_type2.onnx  print_raw_output.py  type1_test.sh  type2_test.sh  type3_test.sh
make: *** No targets specified and no makefile found.  Stop.

Report Details

report.json
null