deepfacelab中文网

Newbie asking for help! Why is there no preview during training?

OP | Posted on 2024-3-6 01:15:23
Model name: WF 320 DF-UD_SAEHD

                  Current iteration: 237912

---------------------- Model options ----------------------

            resolution: 320
             face_type: wf
     models_opt_on_gpu: True
                 archi: df-ud
               ae_dims: 256
                e_dims: 64
                d_dims: 64
           d_mask_dims: 22
       masked_training: True
           uniform_yaw: True
            lr_dropout: y
           random_warp: False
             gan_power: 0.1
       true_face_power: 0.1
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: none
              clipgrad: True
              pretrain: False
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
           random_flip: True
            batch_size: 4
       eyes_mouth_prio: True
             adabelief: True
       random_src_flip: True
       random_dst_flip: True
        gan_patch_size: 80
              gan_dims: 32
         blur_out_mask: False
      random_hsv_power: 0.0

---------------------- Run info ----------------------

                  Device index: 0
                  Device name: NVIDIA GeForce RTX 3070
                  VRAM size: 5.32GB

Error: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
         [[node sub_19 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:487) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1715]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
         [[node sub_19 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:487) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node sub_19:
mul_5 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:448)
mul_10 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:456)

Input Source operations connected to node sub_19:
mul_5 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:448)
mul_10 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:456)

Original stack trace for 'sub_19':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 487, in on_initialize
    gpu_dst_loss += tf.reduce_mean ( 10*tf.square(  gpu_target_dst_masked_opt- gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 1367, in binary_op_wrapper
    return func(x, y, name=name)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 548, in subtract
    return gen_math_ops.sub(x, y, name)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 10653, in sub
    "Sub", x=x, y=y, name=name)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
         [[{{node sub_19}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1715]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
         [[{{node sub_19}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "E:\AI\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: failed to allocate memory
         [[node sub_19 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:487) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1715]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: failed to allocate memory
         [[node sub_19 (defined at E:\AI\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:487) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.


What does this mean — memory? It worked fine with the original version; this started after I switched to the Chinese-localized DFL.
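For a rough sense of scale (back-of-envelope arithmetic only, not DFL's actual allocator): a single input batch at these settings is tiny, but SAEHD keeps many full-resolution intermediate feature maps alive for the backward pass, plus the GAN discriminator and AdaBelief optimizer state, which is what pushes past the 5.32GB the trainer sees:

```python
def tensor_mib(batch, channels, height, width, bytes_per_elem=4):
    """Size in MiB of one float32 tensor of shape (batch, channels, height, width)."""
    return batch * channels * height * width * bytes_per_elem / 2**20

# One input batch at the thread's settings: batch_size 4, 3 channels, 320x320
print(tensor_mib(4, 3, 320, 320))    # 4.6875 MiB
# A single encoder feature map with e_dims=64 channels at full resolution
print(tensor_mib(4, 64, 320, 320))   # 100.0 MiB
```

Even a few dozen activations of the second kind, retained for backprop, already amount to gigabytes; enabling the GAN path (gan_power 0.1) very loosely adds another partial copy of that graph.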

Posted on 2024-3-6 01:23:07
This post was last edited by wtxx8888 at 2024-3-6 01:28.

It reported OOM, didn't it? You ran out of VRAM. Your GPU can't handle this model at its current parameter settings!!
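To make that concrete (my own suggestion, not a recipe from this thread): the heaviest options in the OP's config are resolution 320, the GAN path (gan_power 0.1, gan_patch_size 80), true_face_power, and batch_size. A lower-VRAM variant for an 8GB card might look like:

```
           resolution: 320      (fixed at model creation; a new 256 model is far safer on 8GB)
           batch_size: 2
            gan_power: 0.0      (re-enable only near the end of training)
      true_face_power: 0.0
    models_opt_on_gpu: False    (keeps optimizer state in system RAM instead of VRAM)
```

Note that in DFL, resolution and archi are fixed when the model is created, so lowering resolution means starting from a different model; the other options can be changed at the startup prompts.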

Posted on 2024-3-6 08:36:03
What GPU do you have? Switch to a lower-parameter model.

Posted on 2024-3-6 09:35:44
Eighty percent of the errors on this forum are OOM.

OP | Posted on 2024-3-6 15:25:07
MirrorStudio posted on 2024-3-6 08:36:
What GPU do you have? Switch to a lower-parameter model.

3070

Posted on 2024-3-6 18:52:26
OOM — your GPU can't handle it. Switch models or lower the parameters.
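Before lowering parameters it's also worth checking how much VRAM is actually free; browsers and Windows itself hold some, which is part of why the trainer above only saw 5.32GB of the 3070's 8GB. A small sketch (assumes the CSV shape of `nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv,noheader`; the sample values are made up):

```python
def parse_nvidia_smi(line):
    """Split one CSV line from nvidia-smi into (gpu_name, used_mib, total_mib)."""
    name, used, total = (field.strip() for field in line.split(","))
    return name, int(used.split()[0]), int(total.split()[0])

# Example line in the format nvidia-smi emits (values here are hypothetical)
sample = "NVIDIA GeForce RTX 3070, 2731 MiB, 8192 MiB"
name, used, total = parse_nvidia_smi(sample)
print(f"{name}: {total - used} MiB free of {total}")  # 5461 MiB free of 8192
```

If a lot of VRAM is already in use before the trainer starts, closing other GPU-accelerated apps is cheaper than retuning the model.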

OP | Posted on 2024-3-7 12:41:42
deepface6666 posted on 2024-3-7 12:37:
With your parameters, a 3070 blows its VRAM even with GAN turned off. A 3060 can train it, but if a 3070 keeps OOMing you'd be better off buying a used P40 — a few hundred yuan for 24GB of VRAM ...

I regret it now — if I'd known, I'd have paid a bit more for a 3070 Ti.

Powered by Discuz! X3.4. Copyright © 2001-2020, Tencent Cloud.