Meta has announced the release of PyTorch Live, a set of tools meant to make AI-powered experiences more accessible and accelerate the path from research prototyping to production deployment.
Setting up the build environment for a new AI project can be tedious and difficult. The PyTorch Live command-line interface (CLI) offers a simple setup routine that automates the installation of commonly required dependencies for apps that leverage AI.
The libraries and tools it installs include Homebrew, OpenJDK, Yarn, Watchman, and CocoaPods, as well as various Android command-line tools.
“If you want to showcase a mobile app model that runs on Android and iOS, it would have taken days to configure the project and build the user interface. With PyTorch Live, it cuts the cost in half, and you don’t need to have Android and iOS developer experience,” explained Roman Radle, an AI software engineer at Meta AI.
Under the hood, PyTorch Live offers a data processing API for preparing and integrating custom models, which mobile apps can then access through the PyTorch Live API.
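Before a mobile app can consume a custom model, the model is typically exported from Python. As a minimal sketch of the standard PyTorch mobile export workflow (the tiny `Linear` model is a placeholder for a real trained model, and PyTorch Live's own packaging steps may add further requirements):

```python
# Sketch of the usual PyTorch mobile export path a custom model goes
# through before on-device use. The Linear model here is a stand-in
# for a real trained model.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.nn.Linear(4, 2)  # placeholder for a real model
model.eval()                   # switch to inference mode before export

scripted = torch.jit.script(model)      # compile to TorchScript
mobile = optimize_for_mobile(scripted)  # apply mobile-targeted optimizations

# Save a bundle for PyTorch's mobile lite interpreter.
mobile._save_for_lite_interpreter("model.ptl")
```

The resulting `.ptl` file is the artifact a mobile runtime loads; how it is wired into a PyTorch Live project is covered by the project's own documentation.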
Meta plans to enable the sharing of PyTorch models and demos through PyTorch Live and to provide a more customizable data processing API. In the future, it will also let developers work with audio and video.
The beta release currently supports macOS with Windows and Linux support touted on the project’s site as “coming soon”. While the PyTorch Live CLI does not currently support the Apple M1 chipset, the availability of a manual workaround suggests that this omission will be rectified soon.
A tutorial to get started with PyTorch Live is available here.
Image credit: iStockphoto/monstArrr_