This repository has been archived by the owner on May 12, 2024. It is now read-only.
Under TF 2.5.0, I converted my pre-trained model from saved_model to tflite.
Afterwards, in a Docker container, when I converted this tflite model to pb format using tflite2tensorflow, the following error occurred:
ERROR: The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.
(In this experiment I did not perform quantization/optimization, but I do plan to use tflite later to quantize my model and save it as .tflite, which is why I did not convert saved_model to pb directly.)
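For reference, the saved_model → tflite step described above can be sketched as follows. This is only a sketch: the tiny Dense model and all file names are placeholders, since the original pre-trained model is not public.

```python
import tempfile
import tensorflow as tf

# Tiny stand-in model; the original pre-trained model is not public.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

with tempfile.TemporaryDirectory() as saved_model_dir:
    model.save(saved_model_dir)  # SavedModel format
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # No quantization/optimization in this experiment; for the planned
    # quantization step one would set:
    #   converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```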
In fact, I tried to implement that operation a month ago, but I did not have enough sample models to create a good conversion program. To the extent possible, could you provide the following resources? The minimum amount of information you are willing to disclose is fine.
Source code for building the LSTM model.
saved_model
tflite file converted from saved_model
I'm having trouble with TFLite's UNIDIRECTIONAL_SEQUENCE_LSTM because it is very difficult to connect it to TensorFlow's standard operations.
Hi, sorry for the late reply. I have attached a zip file of my models (only initialized, without training) and source code, let me know if there's a problem with it!
By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?
Thank you! I'm very busy with my day job, so I'll examine it carefully when I have time.
> By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?
I am aware of this point as well. You do not need to provide resources, as I have a large number of samples and I know that I can technically handle it. If you are in a hurry to convert your Quantize layer, you can try the following tool: https://github.com/onnx/tensorflow-onnx
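For reference, a typical tensorflow-onnx invocation looks like the following (the SavedModel directory and output file name are placeholders, not paths from this thread):

```shell
# Convert a SavedModel to ONNX with tf2onnx (pip install tf2onnx).
python -m tf2onnx.convert \
  --saved-model saved_model_dir \
  --output model.onnx \
  --opset 13
```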
OS you are using: macOS 11.4
Version of TensorFlow: v2.5.0
Environment: Docker