From 2f27983ae59b4ee4f1e5a98ba5c98afa9209083c Mon Sep 17 00:00:00 2001
From: Thomas Zerbs <71798125+thomaszerbs@users.noreply.github.com>
Date: Mon, 5 Jan 2026 16:00:42 -0500
Subject: [PATCH] Update README.md

Updated image path for Figure 1
---
 .../vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Tutorials/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/README.md b/Tutorials/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/README.md
index df482d7b..420dce81 100644
--- a/Tutorials/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/README.md
+++ b/Tutorials/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/vitis-ai-5.1-multi-tenancy-tutorial-vek280-main/README.md
@@ -205,7 +205,7 @@ ifconfig end0 192.168.1.2
 
 With temporal execution, multiple models are run sequentially per batch of images on the same NPU IP. The same hardware resources (AI engines) are shared over time. This approach may be effective when your hardware design is resource constrained to a single NPU IP, but near real-time inference across multiple models is still required. While it doesn’t offer true parallelism, it simplifies deployment by avoiding the need to manage multiple IPs, snapshots, and resource constraints across AIE, PL and DDR memory.
 
-![Figure 1](/images/temporal.png)
+![Figure 1](./images/temporal.png)
 
 ***Figure 1:** Images are fed into the first model, ResNet50, which occupies 10 AIE columns in the NPU IP. Once inference is complete, the AIE resources are freed up for the second model, ResNet18, to receive images and begin inference.*
 