Posted on 2023-3-2 20:52:49
Last edited by wangyilin13 on 2023-3-2 21:20
load StableDiffusion checkpoint
Traceback (most recent call last):
  File "F:\AI\lora-scripts\sd-scripts\train_network.py", line 507, in <module>
    train(args)
  File "F:\AI\lora-scripts\sd-scripts\train_network.py", line 96, in train
    text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype)
  File "F:\AI\lora-scripts\sd-scripts\library\train_util.py", line 1860, in load_target_model
    text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path)
  File "F:\AI\lora-scripts\sd-scripts\library\model_util.py", line 869, in load_models_from_stable_diffusion_checkpoint
    _, state_dict = load_checkpoint_with_text_encoder_conversion(ckpt_path)
  File "F:\AI\lora-scripts\sd-scripts\library\model_util.py", line 846, in load_checkpoint_with_text_encoder_conversion
    checkpoint = torch.load(ckpt_path, map_location="cpu")
  File "F:\AI\lora-scripts\venv\lib\site-packages\torch\serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "F:\AI\lora-scripts\venv\lib\site-packages\torch\serialization.py", line 1049, in _load
    result = unpickler.load()
  File "F:\AI\lora-scripts\venv\lib\site-packages\torch\serialization.py", line 1019, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "F:\AI\lora-scripts\venv\lib\site-packages\torch\serialization.py", line 997, in load_tensor
    storage = zip_file.get_storage_from_record(name, numel, torch._UntypedStorage).storage()._untyped()
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:81] data. DefaultCPUAllocator: not enough memory: you tried to allocate 58982400 bytes.
Can anyone help? I keep getting this error, but I should have enough memory.
Solved: my virtual memory (page file) wasn't configured properly. Thanks for the tutorial!
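A note on why the fix works, for anyone hitting the same error: the `RuntimeError` only reports the one tensor allocation that failed, not the total memory `torch.load` needs. Loading a Stable Diffusion checkpoint to CPU allocates every tensor in the file, so Windows can run out of commit charge (physical RAM plus page file) long before Task Manager shows RAM as full; enlarging the page file raises that limit. A quick sketch of the arithmetic from the traceback (the 56.25 MiB figure is computed from the reported byte count, and the tensor shape shown is only a hypothetical example):

```python
# The single allocation that failed, as reported in the traceback.
failed_alloc_bytes = 58_982_400

# Convert to MiB to see how small the failing tensor actually was.
failed_alloc_mib = failed_alloc_bytes / 1024**2
print(f"failed allocation: {failed_alloc_mib} MiB")  # 56.25 MiB

# In float32 (4 bytes per element) that is:
numel = failed_alloc_bytes // 4
print(f"elements: {numel:,}")  # 14,745,600

# e.g. one hypothetical conv weight of shape (1280, 1280, 3, 3)
# would be 1280 * 1280 * 3 * 3 = 14,745,600 elements -- the same size.
assert 1280 * 1280 * 3 * 3 == numel
```

So a failure on a ~56 MiB tensor means the process had already committed nearly all available memory loading the rest of the checkpoint, which is why raising the page-file size (rather than freeing 56 MiB) resolves it.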