"ML in a Minute" is our conversational series on answering machine learning questions. Have questions you want answered? Tweet at us.
What is TensorRT (in 60 Seconds or Fewer)?
TensorRT is a machine learning framework published by NVIDIA for running machine learning inference on their hardware. TensorRT is highly optimized to run on NVIDIA GPUs, and it's likely the fastest way to run a model at the moment.
If You Want to Convert Your Model to TensorRT, How Do You Do That?
To get to TensorRT, you're usually starting in a framework like PyTorch or TensorFlow, and then you need to move from that framework into TensorRT. The nice thing is that Roboflow makes it easy to do all these things: https://docs.roboflow.com/inference/nvidia-jetson
Liked this? Be sure to also check out the computer vision glossary.