Background
After installing stable-diffusion-webui, the images produced by the default model are usually not great, so you will want to load a specific model file that matches your needs. ModelScope downloads quickly from within China, so this guide uses that site as the example.
Steps
- Find a model
  On https://modelscope.cn/models, filter for models tagged text-to-image and open the detail page of one you like. For a first test, pick a model whose files are relatively small.
- Find the model file's download URL
  In the model's file list, locate the .safetensors file, for example flux1-dev.safetensors, right-click its download button and copy the link address.
- Download the model file
  On the server, change into /stable-diffusion-webui-docker/data/models/Stable-diffusion and run wget https://modelscope.cn/models/black-forest-labs/FLUX.1-dev/resolve/master/flux1-dev.safetensors (a Python alternative is sketched after this list).
- Restart stable-diffusion-webui
  Then switch to the new model in the checkpoint drop-down list at the top-left of the page.
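If wget is not available on the host, a small Python script can do the same job. This is a minimal sketch, assuming the ModelScope URL above and the default data path of the docker setup; adjust both to your environment.

```python
import urllib.request
from pathlib import Path

# Assumed values: the ModelScope download URL copied from the model page
# and the checkpoint directory used by stable-diffusion-webui-docker.
MODEL_URL = "https://modelscope.cn/models/black-forest-labs/FLUX.1-dev/resolve/master/flux1-dev.safetensors"
MODELS_DIR = Path("/stable-diffusion-webui-docker/data/models/Stable-diffusion")

def download(url: str, dest_dir: Path) -> Path:
    """Stream the file to disk so large checkpoints do not fill memory."""
    dest = dest_dir / url.rsplit("/", 1)[-1]
    dest_dir.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while chunk := resp.read(1 << 20):  # read in 1 MiB chunks
            out.write(chunk)
    return dest

if __name__ == "__main__":
    print("saved to", download(MODEL_URL, MODELS_DIR))
```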
Download a Lora model
On ModelScope, any model tagged LoRA is a Lora model. Open its detail page, find the Lora file's download link with the method from the previous section, change into /stable-diffusion-webui-docker/data/models/Lora on the server, and download it with wget. In the webui, open the Lora tab and click Refresh; to use a Lora, simply click its card. A programmatic alternative via the webui API is sketched below.
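Because the docker-compose.yml shown later already passes --api, generation with a Lora can also be driven over the REST API. This is a minimal sketch, assuming the webui is reachable at http://localhost:7860 and that a Lora named my_lora (a hypothetical file name) exists in the Lora directory; `<lora:name:weight>` is the standard AUTOMATIC1111 prompt tag.

```python
import base64
import json
import urllib.request

# Assumptions: webui listening on localhost:7860, Lora file "my_lora" present.
URL = "http://localhost:7860/sdapi/v1/txt2img"
payload = {
    # The <lora:name:weight> tag activates the Lora inside the prompt.
    "prompt": "a cat wearing a spacesuit <lora:my_lora:0.8>",
    "steps": 20,
    "width": 512,
    "height": 512,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The API returns generated images as base64-encoded strings.
with open("txt2img.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```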
Runtime errors
Expected all tensors to be on the same device, but found at least two devices
auto-1 | File "/stable-diffusion-webui/modules/sd_hijack.py", line 348, in forward
auto-1 | inputs_embeds = self.wrapped(input_ids)
auto-1 | File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
auto-1 | return self._call_impl(*args, **kwargs)
auto-1 | File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
auto-1 | return forward_call(*args, **kwargs)
auto-1 | File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 163, in forward
auto-1 | return F.embedding(
auto-1 | File "/opt/conda/lib/python3.10/site-packages/torch/nn/functional.py", line 2264, in embedding
auto-1 | return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
auto-1 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Following https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16263, my fix was to edit /stable-diffusion-webui-docker/docker-compose.yml and pin the container to a single GPU with --device-id 0, as follows:
services:
  download:
    build: ./services/download/
    profiles: ["download"]
    volumes:
      - *v1
  auto: &automatic
    <<: *base_service
    profiles: ["auto"]
    build: ./services/AUTOMATIC1111
    image: sd-auto:78
    environment:
      - CLI_ARGS=--allow-code --medvram --xformers --enable-insecure-extension-access --api --device-id 0
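For background, this error is PyTorch's generic complaint that a layer's weights and its inputs live on different devices. The toy sketch below is not webui code, just an illustration that reproduces the failure with a bare nn.Embedding, which is exactly the call that fails in the traceback; pinning the webui to one GPU avoids the mismatch in this setup.

```python
import torch
import torch.nn as nn

# Illustration only (requires a CUDA-capable GPU): an embedding table on the
# GPU fed with CPU input ids raises the same "Expected all tensors to be on
# the same device" RuntimeError seen above.
emb = nn.Embedding(num_embeddings=100, embedding_dim=16).to("cuda:0")
input_ids = torch.tensor([[1, 2, 3]])  # stays on the CPU

try:
    emb(input_ids)
except RuntimeError as e:
    print(e)  # ... found at least two devices, cpu and cuda:0!

# Keeping weights and inputs on one device is the fix.
emb(input_ids.to("cuda:0"))  # works
```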
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
auto-1 | warnings.warn(
auto-1 | creating model quickly: SafetensorError
auto-1 | Traceback (most recent call last):
auto-1 | File "/opt/conda/lib/python3.10/threading.py", line 973, in _bootstrap
auto-1 | self._bootstrap_inner()
auto-1 | File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
auto-1 | self.run()
auto-1 | File "/opt/conda/lib/python3.10/threading.py", line 953, in run
auto-1 | self._target(*self._args, **self._kwargs)
auto-1 | File "/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
auto-1 | shared.sd_model # noqa: B018
auto-1 | File "/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
auto-1 | return modules.sd_models.model_data.get_sd_model()
auto-1 | File "/stable-diffusion-webui/modules/sd_models.py", line 620, in get_sd_model
auto-1 | load_model()
auto-1 | File "/stable-diffusion-webui/modules/sd_models.py", line 723, in load_model
auto-1 | sd_model = instantiate_from_config(sd_config.model)
auto-1 | File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 | return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 | File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
auto-1 | self.instantiate_cond_stage(cond_stage_config)
auto-1 | File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
auto-1 | model = instantiate_from_config(config)
auto-1 | File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
auto-1 | return get_obj_from_str(config["target"])(**config.get("params", dict()))
auto-1 | File "/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 104, in __init__
auto-1 | self.transformer = CLIPTextModel.from_pretrained(version)
auto-1 | File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2604, in from_pretrained
auto-1 | state_dict = load_state_dict(resolved_archive_file)
auto-1 | File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 450, in load_state_dict
auto-1 | with safe_open(checkpoint_file, framework="pt") as f:
auto-1 | safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
See the following issues:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14267
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15568
There are roughly two possible causes: either the safetensors model file itself is corrupted, or its SHA checksum fails verification. A blunt workaround is to delete the hidden directory /stable-diffusion-webui-docker/data/.cache and then restart.
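Before wiping the cache, it can be worth checking whether a given .safetensors file is readable at all. This is a minimal sketch using the safetensors library's safe_open (the same call that fails in the traceback); a MetadataIncompleteBuffer here usually means a truncated or corrupted download that needs to be re-fetched.

```python
import sys
from safetensors import safe_open

def check(path: str) -> bool:
    """Try to parse the safetensors header; corrupt or truncated files raise here."""
    try:
        with safe_open(path, framework="pt", device="cpu") as f:
            n = len(list(f.keys()))
        print(f"OK: {path} ({n} tensors)")
        return True
    except Exception as e:  # e.g. safetensors_rust.SafetensorError
        print(f"BROKEN: {path}: {e}")
        return False

if __name__ == "__main__":
    # e.g. python check_safetensors.py /stable-diffusion-webui-docker/data/models/Stable-diffusion/*.safetensors
    results = [check(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```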