Help needed! Training error


OP | Posted on 2022-5-19 10:30:48
Hoping one of the experts here can clear this up. I used to train on a 20-series card and everything worked fine.
Recently I moved to a new machine with a 3060. At first it complained about running out of memory (16 GB of physical RAM); setting the virtual memory to system-managed did nothing, but setting it to 80000-100000 MB fixed that.
But then it started throwing OOM errors, and dropping bs to 4 or even 2 still errors out. What could be the cause?

--------------------------- Model options ----------------------------


            resolution: 224
             face_type: f
     models_opt_on_gpu: True
                 archi: df-d
               ae_dims: 192
                e_dims: 48
                d_dims: 48
           d_mask_dims: 20
       masked_training: True
       eyes_mouth_prio: False
           uniform_yaw: False
             adabelief: True
            lr_dropout: n
           random_warp: True
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: rct
              clipgrad: True
              pretrain: False
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
           random_flip: True
            batch_size: 4
             gan_power: 0.0
        gan_patch_size: 28
              gan_dims: 16
         blur_out_mask: False
                  猫の汉化 (localization): http://t.hk.uy/4ks
                  Business: pretrained models ("仙丹") for sale
                  Contact: QQ/WeChat: 564646676
       random_src_flip: False
       random_dst_flip: True


--------------------------- Run info ----------------------------


                  Device index: 0
                  Device name: NVIDIA GeForce RTX 3060 Laptop GPU
                  VRAM size: 4.62GB


Starting. Press Enter to stop training and save progress.


Save time|Iterations|Iter time|SRC loss|DST loss
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Square_2 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py:299) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


         [[concat_4/concat/_1131]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


  (1) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Square_2 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py:299) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


0 successful operations.
0 derived errors ignored.


Errors may have originated from an input operation.
Input Source operations connected to node Square_2:
mul_9 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:445)


Input Source operations connected to node Square_2:
mul_9 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:445)


Original stack trace for 'Square_2':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\mainscripts\Trainer.py", line 63, in trainerThread
    debug=debug)
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in on_initialize
    gpu_src_loss =  tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 299, in dssim
    den1 = reducer(tf.square(img1) + tf.square(img2))
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 10174, in square
    "Square", x=x, name=name)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
    self._traceback = tf_stack.extract_stack()


Traceback (most recent call last):
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Square_2}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


         [[concat_4/concat/_1131]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


  (1) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Square_2}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


0 successful operations.
0 derived errors ignored.


During handling of the above exception, another exception occurred:


Traceback (most recent call last):
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\mainscripts\Trainer.py", line 141, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 580, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Square_2 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py:299) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


         [[concat_4/concat/_1131]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


  (1) Resource exhausted: OOM when allocating tensor with shape[4,3,224,224] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Square_2 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py:299) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


0 successful operations.
0 derived errors ignored.


Errors may have originated from an input operation.
Input Source operations connected to node Square_2:
mul_9 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:445)


Input Source operations connected to node Square_2:
mul_9 (defined at E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:445)


Original stack trace for 'Square_2':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\mainscripts\Trainer.py", line 63, in trainerThread
    debug=debug)
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in on_initialize
    gpu_src_loss =  tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
  File "E:\Deepfake\0906-RTX30\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 299, in dssim
    den1 = reducer(tf.square(img1) + tf.square(img2))
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 10174, in square
    "Square", x=x, name=name)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
    op_def=op_def)
  File "E:\Deepfake\0906-RTX30\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
    self._traceback = tf_stack.extract_stack()
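
For reference, the allocation that actually fails above is tiny. Below is a quick back-of-the-envelope in plain Python (nothing DFL-specific), plus an illustration of the RunOptions flag the hint mentions; note that DeepFaceLab's trainer would need a small code change to actually pass such options into its session.run call, so this is only a sketch of what the hint means:

# Size of the tensor the allocator failed on: shape [4, 3, 224, 224], float32.
batch, channels, height, width = 4, 3, 224, 224
bytes_per_float = 4
size_mb = batch * channels * height * width * bytes_per_float / 1024 ** 2
print(f"failing allocation: {size_mb:.2f} MB")  # ~2.30 MB

# A couple of MB failing to allocate means the card was already nearly full:
# the OOM comes from the whole graph (resolution 224, batch_size 4), not this one tensor.

# The hint in the log refers to TF1-style RunOptions (illustration only):
import tensorflow as tf
run_options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom=True)
# sess.run(fetches, feed_dict=..., options=run_options)   # hypothetical usage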


Posted on 2022-5-19 10:41:40
Did you switch to the 30-series card build for training?

OP | Posted on 2022-5-19 10:48:19
Yes, I switched. The 20-series one worked fine; with the 30-series one, even face extraction and the error-check step throw errors.

OP | Posted on 2022-5-19 10:55:28
After a restart and dropping bs to 2, it runs again. The 20-series card could handle bs=4, so why does the 30-series do worse than the 20?
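
Possibly relevant: the run info above shows the 3060 Laptop exposing only 4.62 GB to the trainer even though it is nominally a 6 GB card, so anything else holding VRAM (the desktop, a browser, preview windows) comes straight out of the budget for bs. A minimal check of free VRAM before launching training, assuming the pynvml package (pip install nvidia-ml-py3), which is not bundled with DeepFaceLab:

# Check how much VRAM is actually free before starting the trainer.
# Assumes `pip install nvidia-ml-py3` (pynvml); not part of DeepFaceLab itself.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # GPU 0, the RTX 3060
info = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"total: {info.total / 1024 ** 3:.2f} GB")
print(f"used : {info.used / 1024 ** 3:.2f} GB")      # held by other processes / the desktop
print(f"free : {info.free / 1024 ** 3:.2f} GB")      # what the trainer can realistically get
pynvml.nvmlShutdown()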

Posted on 2022-5-19 11:04:27
Try disabling the integrated GPU and switching the laptop to dedicated-GPU-only (direct) mode, or try the RG build from the forum. Laptop GPUs really do have all sorts of hard-to-fix issues.
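
Building on that suggestion, it may be worth confirming which device TensorFlow actually grabs and how much memory it assigns to it. A minimal sketch using TF's device listing, run with the Python bundled in DFL's _internal folder:

# List the devices TensorFlow sees and the memory limit assigned to each.
from tensorflow.python.client import device_lib

for dev in device_lib.list_local_devices():
    limit_gb = dev.memory_limit / 1024 ** 3
    print(f"{dev.device_type:4s} {dev.name:20s} {limit_gb:.2f} GB  {dev.physical_device_desc}")

# If the only GPU listed is the integrated one, or the RTX 3060 shows far less
# memory than expected, the dedicated-GPU / direct-mode advice above is worth trying.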


Posted on 2022-5-19 11:07:01
Last edited by 该账户已被注销 on 2022-5-19 15:21

Probably a software-version issue? As I recall, 20-series and 30-series cards use different builds of the software; it's split by card generation.
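
If it really is a build mismatch, the usual symptom on 30-series (Ampere) cards is the CUDA runtime JIT-compiling kernels at startup, which costs both time and VRAM, since as far as I know Ampere needs a CUDA 11.x build. A quick, generic sanity check of the bundled TensorFlow (not DFL-specific):

# Check the TensorFlow build shipped inside the DFL package.
# 30-series cards (compute capability 8.6) want a CUDA 11.x build of TF;
# an older build may fall back to slow PTX JIT compilation or fail outright.
import tensorflow as tf

print("TF version       :", tf.__version__)
print("built with CUDA  :", tf.test.is_built_with_cuda())
print("GPU device found :", tf.test.gpu_device_name() or "none")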

OP | Posted on 2022-5-19 11:16:55
Quoting scuuuuu0410 (2022-5-19 11:04):
Try disabling the integrated GPU and switching the laptop to dedicated-GPU-only mode, or try the RG build from the forum; laptop GPUs really do have all sorts of hard-to-fix issues ...

Got it, got it. Thanks a lot for the help, bro.

OP | Posted on 2022-5-19 11:17:08
Quoting 该账户已被注销 (2022-5-19 11:07):
Probably a software-version issue? As I recall, 20-series and 30-series cards use different builds of the software; it's split by card ...

Thanks for the help, bro.

Posted on 2022-5-19 11:34:07
Here to learn.

Posted on 2022-5-19 14:02:48
Isn't your bs a bit high, brother?
