Posted on 2023-3-4 02:44:48
Could anyone tell me what this problem is?
steps: 20%|████████████▏ | 339/1664 [05:19<20:48, 1.06it/s, loss=0.126]
Traceback (most recent call last):
File "D:\13\lora训练\lora-scripts\lora-scripts\sd-scripts\train_network.py", line 510, in <module>
train(args)
File "D:\13\lora训练\lora-scripts\lora-scripts\sd-scripts\train_network.py", line 395, in train
accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
File "D:\13\lora训练\lora-scripts\lora-scripts\venv\lib\site-packages\accelerate\accelerator.py", line 1374, in clip_grad_norm_
return torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=norm_type)
File "D:\13\lora训练\lora-scripts\lora-scripts\venv\lib\site-packages\torch\nn\utils\clip_grad.py", line 42, in clip_grad_norm_
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
File "D:\13\lora训练\lora-scripts\lora-scripts\venv\lib\site-packages\torch\nn\utils\clip_grad.py", line 42, in <listcomp>
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
File "D:\13\lora训练\lora-scripts\lora-scripts\venv\lib\site-packages\torch\functional.py", line 1451, in norm
return _VF.norm(input, p, dim=_dim, keepdim=keepdim) # type: ignore[attr-defined]
RuntimeError: CUDA error: an illegal memory access was encountered
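Not part of the original post, but a common first diagnostic step for `CUDA error: an illegal memory access was encountered` in PyTorch: the error is often reported asynchronously, so the line in the traceback (here `clip_grad_norm_`) may not be where the fault actually happened. Re-running with synchronous kernel launches makes the traceback point at the real failing operation:

```shell
# Force synchronous CUDA kernel launches so the Python traceback
# identifies the kernel that actually faulted (documented PyTorch
# debugging env var), then re-run the training command as before.
export CUDA_LAUNCH_BLOCKING=1
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

With the variable set, launch the same `train_network.py` command again and compare the new traceback; out-of-memory conditions, a too-large batch size, or mismatched xformers/torch builds are frequent underlying causes of this error in LoRA training setups.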