Roboflow has extensive deployment options for getting your model into production. But sometimes you just want to get something simple running on your development machine.
If you don't want to futz with installing CUDA and Docker, we've got you covered. The Roboflow inference server now runs via `npx`. On any machine that supports tfjs-node (including any modern 64-bit Intel, AMD, or Arm CPU, such as the M1 MacBook or the Raspberry Pi 4), simply run:
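A minimal sketch of the command (the `@roboflow/inference-server` package name is an assumption here; check Roboflow's current documentation if `npx` cannot resolve it):

```shell
# Downloads the inference server on first run and starts it locally.
# NOTE: the package name below is an assumption; consult Roboflow's
# docs for the canonical one.
npx @roboflow/inference-server
```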
If this doesn't work, make sure you have `node` installed on your machine (installation instructions).
This will download and install the inference server and run it locally on port 9001.
Now you can use any of our sample code or client SDKs to get predictions from your model.
The simplest way to test it is with `curl` from your command line:

```shell
# Save a picture of a playing card to card.jpg
# and put your Roboflow API key in a file called .roboflow_key,
# then run:
base64 card.jpg | curl -d @- \
    "http://localhost:9001/playing-cards-ow27d/2?api_key=`cat .roboflow_key`"
```
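The same request can be built from any language. Here is a hedged Python sketch that mirrors the `curl` call above: it base64-encodes an image and constructs the inference URL. The `build_inference_request` helper is hypothetical (not part of any Roboflow SDK), and `localhost:9001` assumes the local server started as described above.

```python
import base64
import urllib.parse


def build_inference_request(image_bytes: bytes, model: str, version: int,
                            api_key: str,
                            host: str = "http://localhost:9001"):
    """Mirror the curl example: base64-encode the image body and build
    the inference URL with the API key as a query parameter.

    This helper is illustrative, not an official Roboflow API.
    """
    url = f"{host}/{model}/{version}?" + urllib.parse.urlencode(
        {"api_key": api_key})
    payload = base64.b64encode(image_bytes)
    return url, payload


# Model and version taken from the curl example above; the key is a dummy.
url, payload = build_inference_request(
    b"<raw jpeg bytes here>", "playing-cards-ow27d", 2, "MY_KEY")
# POST `payload` to `url` (e.g. with urllib.request or the requests
# library) to get predictions back as JSON.
```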
Finding Models to Use
Build Computer Vision Applications
Now you can build a computer vision-powered application! For an example, check out our live stream of creating a blackjack strategy app.