With a development experience close to the Python universe, TensorFlow 2.0 integrates tightly with Keras and enables eager execution by default. The popular machine learning framework also standardizes a file format for serving trained models and accelerates performance with multi-GPU support.
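As a minimal sketch of what eager execution by default means (assuming TensorFlow 2.x is installed): operations execute immediately and return concrete values, with no session or graph-building step.

```python
import tensorflow as tf

# With eager execution on by default, ops run as soon as they are called
# and return concrete tensors, like ordinary Python/NumPy code.
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)

# The result is available right away as a NumPy array -- no Session needed.
print(y.numpy())
```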
A major effort has been made on the low-level APIs of the open source ML framework developed by Google. The development team explains, in a Medium post, that it now exports all internally used operations and provides interfaces for variables and checkpoints, allowing developers to rely on these elements without having to rebuild TensorFlow. A file format, SavedModel, has been standardized to run models on a variety of runtimes and to deploy them in the cloud, on the web (in the browser or with Node.js), on mobile, or in embedded systems. The Distribution Strategy API will help distribute training across hardware with high performance and minimal code changes.
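A minimal sketch of the SavedModel workflow (assuming TensorFlow 2.x; the `Doubler` module and paths are illustrative): a model is exported once and can then be reloaded by any compatible runtime.

```python
import os
import tempfile

import tensorflow as tf

class Doubler(tf.Module):
    """Toy model: doubles its input. Stands in for a real trained model."""

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return 2.0 * x

# Export to the standardized SavedModel format.
export_dir = os.path.join(tempfile.mkdtemp(), "doubler")
tf.saved_model.save(Doubler(), export_dir)

# Reload it -- the same directory could instead be served in the cloud,
# converted for TensorFlow.js, or deployed to mobile via TensorFlow Lite.
restored = tf.saved_model.load(export_dir)
result = restored(tf.constant([1.0, 2.0])).numpy()
```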
Support for multi-GPU acceleration
Support for multi-GPU acceleration of training is available; support for Cloud TPUs (Tensor Processing Units) will arrive later. The project team highlights the tight integration of TensorFlow 2.0 with Nvidia's TensorRT, as well as the addition of many GPU acceleration features, and points to the significant performance improvements achieved (with a revised API) on Google Cloud with T4 GPUs.
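A sketch of how multi-GPU training looks with the Distribution Strategy API (assuming TensorFlow 2.x; the model is a placeholder): the only change to ordinary Keras code is building the model inside `strategy.scope()`.

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs;
# on a machine without GPUs it falls back to a single CPU replica.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model construction and compilation move inside the scope;
    # the training loop itself is unchanged.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")
```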
Regarding the data used for ML models, the arrival of TensorFlow Datasets provides a standard interface for accessing a variety of data types (text, images, video ...). The new features of version 2.0 will be detailed at the next TensorFlow World, held in Santa Clara from October 28th to 31st. In the meantime, a migration guide is available for users of TensorFlow 1.x who wish to move their code to version 2.0.
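These datasets plug into the core `tf.data` pipeline interface; a minimal sketch of that common interface (assuming TensorFlow 2.x, with toy in-memory data standing in for text, images, or video):

```python
import tensorflow as tf

# Whatever the underlying data type, elements flow through the same
# uniform pipeline of transformations: map, batch, prefetch, ...
ds = tf.data.Dataset.from_tensor_slices([1.0, 2.0, 3.0, 4.0])
ds = ds.map(lambda x: x * x).batch(2)

# With eager execution, a dataset can be iterated like any Python iterable.
for batch in ds:
    print(batch.numpy())
```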