Model training error

OP | Posted on 2022-11-9 10:07:00
I bought this model on the forum for 60 dan (forum credits). After it loads the data and starts training, it throws the error below. Could the more experienced members help me figure out what's wrong?


Starting. Press Enter to stop training and save progress.

Save time | Iterations | Iteration time | Src loss | Dst loss
Error: OOM when allocating tensor with shape[8,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Add_49 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_40 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:524) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'Add_49', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\mainscripts\Trainer.py", line 62, in trainerThread
    debug=debug)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\ModelBase.py", line 197, in __init__
    self.on_initialize()
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 389, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 156, in forward
    x = self.res1(x)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 68, in forward
    x = self.conv1(inp)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 105, in forward
    x = tf.add(x, bias)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 365, in add
    "Add", x=x, y=y, name=name)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[8,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Add_49 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_40 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:524) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Traceback (most recent call last):
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Add_49}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node concat_40}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\mainscripts\Trainer.py", line 140, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\ModelBase.py", line 478, in train_one_iter
    losses = self.onTrainOneIter()
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 725, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 545, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Add_49 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_40 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:524) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'Add_49', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\mainscripts\Trainer.py", line 62, in trainerThread
    debug=debug)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\ModelBase.py", line 197, in __init__
    self.on_initialize()
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 389, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 156, in forward
    x = self.res1(x)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 68, in forward
    x = self.conv1(inp)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 105, in forward
    x = tf.add(x, bias)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 365, in add
    "Add", x=x, y=y, name=name)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[8,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Add_49 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:105) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_40 (defined at D:\Program Files\DeepFaceLab_NVIDIA_up_to_RTX2080Ti_0602\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:524) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
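
In the OOM message the failing tensor has shape [8, 256, 56, 56]; in the NCHW layout (batch, channels, height, width) shown here, the leading 8 is most likely the batch size. One such tensor is small on its own, but every intermediate activation in the SAEHD graph scales linearly with the batch, so lowering the batch size (or resolution/dims) when starting training is usually the first thing to try. A rough back-of-envelope sketch, assuming float32 activations:

    # Back-of-envelope check, assuming float32 activations and NCHW layout
    # (batch, channels, height, width) as in the OOM message above.
    def tensor_mib(shape, bytes_per_elem=4):
        """Memory of one dense tensor in MiB."""
        n = 1
        for d in shape:
            n *= d
        return n * bytes_per_elem / 2**20

    print(f"batch 8: {tensor_mib((8, 256, 56, 56)):.1f} MiB per activation")  # ~24.5 MiB
    print(f"batch 4: {tensor_mib((4, 256, 56, 56)):.1f} MiB per activation")  # ~12.3 MiB

Halving the batch roughly halves every activation buffer at once, which is why dropping the batch size (or disabling memory-hungry options such as GAN) is normally tried before swapping the card.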


wwe456999 | Posted on 2022-11-9 10:27:06
Just set the virtual memory (page file) on your D drive to 100 GB and it will work.

OP | Posted on 2022-11-9 14:28:32
wwe456999 posted on 2022-11-9 10:27:
Just set the virtual memory (page file) on your D drive to 100 GB and it will work.

It still doesn't work. I have already set the virtual memory on both the C and D drives to 100 GB, and I get the same error.

wwe456999 | Posted on 2022-11-9 16:02:41
nba577 posted on 2022-11-9 14:28:
It still doesn't work. I have already set the virtual memory on both the C and D drives to 100 GB, and I get the same error.

Then use the ME build instead. Normally a 2080 Ti is enough for a model that isn't too large; are your model's parameters set very high?

Posted on 2022-11-9 19:07:18
Not enough VRAM.

OP | Posted on 2022-11-10 13:04:21

My card has 11 GB of VRAM; is it time to replace the graphics card?
