Deploying deep learning (DL) inference at the edge requires a flexibly scalable solution that is power-efficient and offers low latency. Edge deployments rely mainly on compact, passively cooled systems that make decisions quickly without uploading data to the cloud.
The new Mustang-V100 AI accelerator ...