Friday, October 18

TensorFlow 2.0 strengthens integration with Keras and TensorRT

With a development experience close to Python conventions, TensorFlow 2.0 integrates tightly with Keras and enables eager execution by default. The popular machine learning framework has also standardized a file format for deploying trained models and improved performance with multi-GPU support.

In a video, Laurence Moroney, Developer Relations lead at Google, presents what's new in TensorFlow 2.0.

Released in alpha in March and in beta in June, TensorFlow 2.0 is now available in its final version. Alongside Scikit-learn, it has been one of the most used machine learning frameworks over the last five years, together with Keras for deep learning, as surveys on Kaggle and Medium show (followed by Random Forest, XGBoost and PyTorch). With version 2.0, the project team says it has listened to a community asking for flexibility of use combined with the ability to deploy on any platform. The framework brings together tools to build and train machine learning models and to develop applications that scale. Python developers should find a familiar experience: the Keras library is tightly integrated with TensorFlow 2.0, and eager execution is enabled by default, so Keras models can be trained without constructing a graph, with functions executing according to ordinary Python conventions. The framework also targets JavaScript developers with TensorFlow.js.
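To illustrate what eager execution by default means in practice, here is a minimal sketch (layer sizes and names are illustrative, not taken from the article): operations return concrete values immediately, and a Keras model behaves like an ordinary Python callable, with no session or explicit graph construction.

```python
import tensorflow as tf

# In TF 2.0, eager execution is on by default: operations return
# concrete values immediately, with no Session or graph building.
x = tf.constant([[1.0, 2.0]])
print(tf.matmul(x, tf.transpose(x)))  # prints a tf.Tensor with value [[5.]]

# A Keras model is an ordinary Python callable; it is built and
# executed eagerly on its first call.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
y = model(x)
```

Under TensorFlow 1.x, the same computation would have required building a graph and evaluating it inside a `tf.Session`.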

A major effort has gone into the low-level APIs of the open source ML framework developed by Google. The development team explains, in a Medium post, that it now exports all internally used operations and provides inheritable interfaces for concepts such as variables and checkpoints, so that third parties can build on these elements without having to rebuild TensorFlow. A file format, SavedModel, has been standardized to run models on a variety of runtimes and deploy them in the cloud, on the web in browsers or with Node.js, on mobile or in embedded systems. The Distribution Strategy API helps distribute high-performance training with minimal code changes.
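The SavedModel workflow can be sketched as follows; the `Doubler` module is a made-up toy example, but the `tf.saved_model.save`/`load` calls are the standard API. SavedModel serializes both the weights and the traced computation, which is what lets the same artifact be served by TensorFlow Serving, TensorFlow.js, or TensorFlow Lite.

```python
import tempfile
import tensorflow as tf

# A minimal module with a traced function; SavedModel stores the
# variables plus the computation graph captured by tf.function.
class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        return 2.0 * x

export_dir = tempfile.mkdtemp()
tf.saved_model.save(Doubler(), export_dir)

# Reload from disk and run inference on the restored object.
restored = tf.saved_model.load(export_dir)
result = restored(tf.constant([1.0, 3.0]))
print(result.numpy())  # [2. 6.]
```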

Support for multi-GPU acceleration

Multi-GPU acceleration is supported for training; support for Cloud TPUs (Tensor Processing Units) will arrive later. The project team highlights the tight integration of TensorFlow 2.0 with Nvidia's TensorRT inference library, as well as the addition of many GPU acceleration features, and points to significant performance gains achieved (with a revised API) on Google Cloud with T4 GPUs.
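Multi-GPU training goes through the Distribution Strategy API mentioned above. A minimal sketch with `MirroredStrategy` (the toy model and data are illustrative): the only change versus single-device code is creating the model inside `strategy.scope()`.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model's variables across all visible
# GPUs and averages gradients between them; on a machine without GPUs
# it falls back to a single CPU replica, so the same code runs anywhere.
strategy = tf.distribute.MirroredStrategy()
print("replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")

# Toy regression data; each batch is split across the replicas.
X = np.random.rand(64, 3).astype("float32")
y = X.sum(axis=1, keepdims=True)
history = model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```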

On the data side, the arrival of TensorFlow Datasets provides a standard interface for accessing a variety of data types (text, images, video …). The new features of version 2.0 will be covered in depth at the upcoming TensorFlow World conference, held in Santa Clara from October 28 to 31. In the meantime, a migration guide is available for TensorFlow 1.x users who want to move their code to version 2.0.
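The standard interface in question is `tf.data.Dataset`: TensorFlow Datasets serves its catalog (for example `tfds.load("mnist")`) as objects of that type. A minimal sketch of the pipeline idioms, using in-memory data to avoid a download:

```python
import tensorflow as tf

# The same shuffle/batch/map pipeline applies whether the dataset comes
# from TensorFlow Datasets or, as here, from in-memory tensors.
ds = tf.data.Dataset.from_tensor_slices(tf.range(10))
ds = ds.shuffle(buffer_size=10).batch(4).map(lambda b: b * 2)

for batch in ds:
    print(batch.numpy())
```

Such a pipeline can be passed directly to `model.fit`, which is what makes the shared interface convenient across data types.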