(Replying to PARENT post)

Would that be a viable option for deploying TensorFlow models in serverless environments (Lambda, Functions)?
👤 barbolo 🕑 8y 🔼 0 🗨️ 0

(Replying to PARENT post)

You can deploy TensorFlow model binaries as serverless APIs on Google Cloud ML Engine [1]. But I would also be interested in seeing a TensorFlow Lite implementation.

[1] https://cloud.google.com/ml-engine/docs/deploying-models

Disclaimer: I work for Google Cloud.
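
For example, once a model version is deployed, an online prediction from Python is a small REST call. A minimal sketch (the project name, model name, and input format below are placeholders; adapt them to your own deployment):

    # Online prediction against a deployed Cloud ML Engine model.
    # Requires the google-api-python-client package and application
    # default credentials.
    from googleapiclient import discovery

    def predict(project, model, instances):
        # Build a client for the Cloud ML Engine v1 REST API.
        service = discovery.build('ml', 'v1')
        name = 'projects/{}/models/{}'.format(project, model)
        response = service.projects().predict(
            name=name,
            body={'instances': instances},
        ).execute()
        if 'error' in response:
            raise RuntimeError(response['error'])
        return response['predictions']

    # One instance per request element, shaped to the graph's inputs.
    print(predict('my-project', 'my_model', [{'x': [1.0, 2.0]}]))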

👤 rasmi 🕑 8y 🔼 0 🗨️ 0

(Replying to PARENT post)

The main TensorFlow interpreter provides a lot of functionality aimed at larger machines like servers (e.g. desktop GPU support and distributed support). That said, TensorFlow Lite does run on standard PCs and servers, so using it on non-mobile/small devices is possible. If you wanted to create a very small microservice, TensorFlow Lite would likely work, and we'd love to hear about your experience if you try this.
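
In case it's useful as a starting point, here is a minimal sketch of TensorFlow Lite inference in a plain Python process, the kind of thing a small microservice could wrap. It assumes a model already converted to a model.tflite file (hypothetical name) and uses the tf.lite.Interpreter API (tf.contrib.lite in older releases):

    import numpy as np
    import tensorflow as tf

    # Load the converted flatbuffer model and allocate its tensors.
    interpreter = tf.lite.Interpreter(model_path='model.tflite')
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input shaped and typed to what the model expects.
    input_data = np.zeros(input_details[0]['shape'],
                          dtype=input_details[0]['dtype'])
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    print(interpreter.get_tensor(output_details[0]['index']))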
👤 infnorm 🕑 8y 🔼 0 🗨️ 0