We recommend this deployment method first: it is simple to operate and requires no command-line experience.
- Extract the aipc-ds.zip archive and run the included setup to install the main application;
- The installation location can be chosen freely;
- Before running, be sure to place all model weight folders into the AIPC-DS\ds-amd\_internal\models folder under the installation directory (see the check sketched after this list);
- Double-click the desktop icon or run AIPC.exe to start the program.
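If you want to double-check that the weights landed in the right place before launching, here is a minimal Python sketch; the install location is a placeholder and should be replaced with the folder chosen during setup:

```python
# Sketch: confirm the model weight folders are in place before launching AIPC.exe.
# The install location below is an assumption; replace it with the folder you chose during setup.
from pathlib import Path

install_dir = Path(r"C:\path\chosen\during\setup")
models_dir = install_dir / r"AIPC-DS\ds-amd\_internal\models"

if not models_dir.is_dir():
    raise SystemExit(f"models folder not found: {models_dir}")

model_folders = sorted(p.name for p in models_dir.iterdir() if p.is_dir())
if model_folders:
    print("Model folders found:", ", ".join(model_folders))
else:
    print("No model folders found; copy the model weights into", models_dir)
```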
📦 Installation package download link (shared via Baidu Netdisk):
Click to download aipc-ds.zip (extraction code: ecxk)
Deploy DeepSeek models on AMD Ryzen series processors. The supported models (as configured in server.py below) are:
- DeepSeek-R1-Distill-Qwen-1.5B
- DeepSeek-R1-Distill-Qwen-7B
- DeepSeek-R1-Distill-Llama-8B
Only Strix (STX) and Krackan Point (KRK) processors running Windows 11 are supported.
- Create a conda environment:
  ```
  conda create --name <env name> python=3.10
  ```
- Activate the conda environment:
  ```
  conda activate <env name>
  ```
- Install the wheel files (see the optional import check after these setup steps):
  ```
  cd wheel
  pip install onnxruntime_genai-0.4.0.dev0-cp310-cp310-win_amd64.whl
  pip install onnxruntime_directml-1.20.1-cp310-cp310-win_amd64.whl
  ```
- Install the requirements:
  ```
  pip install -r requirements.txt
  ```
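Before moving on, it can help to confirm that the wheels import cleanly inside the activated environment. A minimal sketch; the provider check is informational only:

```python
# Optional sanity check: run inside the activated conda environment after installing the wheels.
import onnxruntime as ort
import onnxruntime_genai as og

# Confirm both packages import and that the DirectML execution provider is available.
print("onnxruntime", ort.__version__, "providers:", ort.get_available_providers())
print("onnxruntime-genai imported from:", og.__file__)
print("DmlExecutionProvider available:",
      "DmlExecutionProvider" in ort.get_available_providers())
```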
- Download the required models from Hugging Face and copy them into the models directory.
- Open the genai_config.json file located in the downloaded model folder and update the value of custom_ops_library with the full path to onnx_custom_ops.dll in the wheel folder (a scripted alternative is sketched after these steps):
"session_options": {
...
"custom_ops_library":"wheel\\onnx_custom_ops.dll",
...
}python run_model.py --model_dir path_to\your\model修改server.py中的models_paths值为你的模型路径。
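If you prefer not to edit the file by hand, the genai_config.json update from the step above can be scripted. A minimal sketch, with placeholder paths for the model folder and the wheel folder:

```python
# Sketch: set custom_ops_library in genai_config.json from a script instead of editing by hand.
# The model and wheel paths below are placeholders; point them at your own folders.
import json
from pathlib import Path

model_dir = Path(r"path_to\your\model")                        # downloaded model folder
dll_path = str(Path(r"wheel\onnx_custom_ops.dll").resolve())   # full path to the DLL in the wheel folder

def set_custom_ops_library(node, dll):
    # Recursively update every custom_ops_library entry, wherever session_options sits in the config.
    if isinstance(node, dict):
        if "custom_ops_library" in node:
            node["custom_ops_library"] = dll
        for value in node.values():
            set_custom_ops_library(value, dll)
    elif isinstance(node, list):
        for item in node:
            set_custom_ops_library(item, dll)

config_path = model_dir / "genai_config.json"
config = json.loads(config_path.read_text(encoding="utf-8"))
set_custom_ops_library(config, dll_path)
config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")
print("custom_ops_library ->", dll_path)
```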
models_paths = {"DeepSeek-R1-Distill-Qwen-1.5B":"path\\to\\DeepSeek-R1-Distill-Qwen-1.5B-awq-asym-uint4-g128-lmhead-onnx-hybrid ",
"DeepSeek-R1-Distill-Llama-8B":"path\\to\\DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead-onnx-hybrid",
"DeepSeek-R1-Distill-Qwen-7B":"path\\to\\DeepSeek-R1-Distill-Qwen-7B-awq-asym-uint4-g128-lmhead-onnx-hybrid"
}开启接口服务
  ```
  python server.py --port 9090
  ```
- Body: `{"status": "ok"}` means the model is successfully loaded and the server is ready (see the polling sketch below).
- Example:
  ```json
  {"input": "Please solve following problem and explain it to me. Then give me final answer at the end with a single number preceded by string '#### '. Question: Rory orders 2 subs for $7.50 each, 2 bags of chips for $1.50 each and 2 cookies for $1.00 each for delivery.\nAnswer:"}
  ```
- Example:
  ```json
  {
      "model_name": "DeepSeek-R1-Distill-Qwen-7B"
  }
  ```
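The two request bodies above can be exercised with a short Python client. The endpoint paths "/set_model" and "/generate" are assumptions used only for illustration; check server.py for the real routes:

```python
# Sketch: exercise the two request bodies documented above.
# Assumptions: the routes "/set_model" and "/generate" are illustrative only; see server.py for the real ones.
import requests

BASE_URL = "http://127.0.0.1:9090"

# Choose which model the server should serve (second example body above).
requests.post(f"{BASE_URL}/set_model",
              json={"model_name": "DeepSeek-R1-Distill-Qwen-7B"},
              timeout=600)

# Send a prompt (first example body above).
prompt = ("Please solve following problem and explain it to me. Then give me final answer "
          "at the end with a single number preceded by string '#### '. Question: Rory orders "
          "2 subs for $7.50 each, 2 bags of chips for $1.50 each and 2 cookies for $1.00 each "
          "for delivery.\nAnswer:")
response = requests.post(f"{BASE_URL}/generate", json={"input": prompt}, timeout=600)
print(response.json())
```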