Deploying Your Model With DeepStack

Deploying your model to DeepStack is the simplest part: all you need is the best.pt file downloaded from your training run.

Create a directory on your system to store your models; here we assume your folder is named `my-models`. Put your best.pt file in the folder and rename it to whatever you want the model to be called; here we assume you named it catsanddogs.pt.
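
The setup step above can be sketched as shell commands. The paths and filenames below are placeholders for this walkthrough; substitute your own locations and your real downloaded weights file.

```shell
# Create the model store directory (example path; use any location you like).
mkdir -p /tmp/my-models

# Stand-in for the downloaded weights file; in practice you would copy
# your real best.pt here instead of creating an empty file.
touch /tmp/best.pt

# DeepStack names the endpoint after the file, so renaming the weights to
# catsanddogs.pt exposes the model at /v1/vision/custom/catsanddogs.
mv /tmp/best.pt /tmp/my-models/catsanddogs.pt
```

Note that the filename (minus the extension) becomes the endpoint name, so choose it deliberately.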

Starting DeepStack

Run the command below, adjusted for the version of DeepStack you have installed:

bash
sudo docker run -v /path-to/my-models:/modelstore/detection -p 80:5000 deepquestai/deepstack

Basic Parameters

-v /path-to/my-models:/modelstore/detection This mounts the local directory where you stored your custom models into the container at /modelstore/detection, the path where DeepStack looks for custom detection models.

-p 80:5000 This maps DeepStack's internal port 5000 to port 80 of the machine, so the API is reachable on port 80.

Run Inference

python
import requests

# Read the test image as raw bytes.
with open("test-image.jpg", "rb") as image_file:
    image_data = image_file.read()

# The endpoint name matches the model filename (catsanddogs.pt -> catsanddogs).
response = requests.post(
    "http://localhost:80/v1/vision/custom/catsanddogs",
    files={"image": image_data},
).json()

for prediction in response["predictions"]:
    print(prediction["label"])

print(response)
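
Each prediction in the response carries more than the label: DeepStack's detection endpoints also return a confidence score and bounding-box coordinates. The snippet below parses those fields from a hand-written sample response; the field names follow DeepStack's detection response format, but the values are made up for illustration.

```python
# Illustrative response in the shape DeepStack's detection endpoints return;
# the values here are invented for the example.
sample_response = {
    "success": True,
    "predictions": [
        {"label": "cat", "confidence": 0.92,
         "x_min": 34, "y_min": 50, "x_max": 210, "y_max": 320},
    ],
}

for prediction in sample_response["predictions"]:
    # Bounding box as (x_min, y_min, x_max, y_max) pixel coordinates.
    box = (prediction["x_min"], prediction["y_min"],
           prediction["x_max"], prediction["y_max"])
    print(f'{prediction["label"]}: {prediction["confidence"]:.2f} at {box}')
    # prints: cat: 0.92 at (34, 50, 210, 320)
```

You can filter on the confidence value (for example, keep only predictions above 0.5) before acting on the results.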