deepfacelab中文网

Views: 1481 | Replies: 14

[Help] Training error with the LIAE-architecture Dilraba (迪丽热巴) universal model

OP | Posted on 2023-7-18 01:25:31
Help needed: I downloaded the LIAE-architecture Dilraba universal model from the forum. With the default parameters, starting training throws the error below; after changing batch_size to 2 it still fails. I've searched the forum but haven't found a fix. A pretrained model from 猫哥 (his "神丹") trains fine on the same machine. Could someone check whether the parameters below look right? Thanks!


--------------------------- Model options ----------------------------


            resolution: 224
             face_type: wf
     models_opt_on_gpu: True
                 archi: liae-udt
               ae_dims: 512
                e_dims: 64
                d_dims: 64
           d_mask_dims: 32
       masked_training: True
       eyes_mouth_prio: True
           uniform_yaw: True
         blur_out_mask: True
             adabelief: True
            lr_dropout: y
           random_warp: False
      random_hsv_power: 0.1
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: none
              clipgrad: False
              pretrain: False
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
       random_src_flip: False
       random_dst_flip: True
            batch_size: 2
             gan_power: 0.1
        gan_patch_size: 28
              gan_dims: 32


--------------------------- Run info ----------------------------


                  Device index: 0
                  Device name: NVIDIA GeForce RTX 4060 Laptop GPU
                  VRAM: 5.33GB


===========================================================
猫之汉化




The error output is as follows:



Starting. Press Enter to stop training and save progress.


Save time|Iterations|Iter time|SRC loss|DST loss
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_28 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


  (1) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_28 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


0 successful operations.
0 derived errors ignored.


Errors may have originated from an input operation.
Input Source operations connected to node Conv2D_28:
Pad_32 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/res3/conv1/weight/read (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)


Input Source operations connected to node Conv2D_28:
Pad_32 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/res3/conv1/weight/read (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)


Original stack trace for 'Conv2D_28':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DFLab\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DFLab\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\DFLab\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 424, in on_initialize
    gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 230, in forward
    x = self.res3(x)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 82, in forward
    x = self.conv1(inp)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 101, in forward
    x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2397, in conv2d
    name=name)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 972, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)


Traceback (most recent call last):
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Conv2D_28}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


  (1) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Conv2D_28}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


0 successful operations.
0 derived errors ignored.


During handling of the above exception, another exception occurred:


Traceback (most recent call last):
  File "E:\DFLab\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\DFLab\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "E:\DFLab\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "E:\DFLab\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_28 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


  (1) Resource exhausted: OOM when allocating tensor with shape[2,128,112,112] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_28 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


0 successful operations.
0 derived errors ignored.


Errors may have originated from an input operation.
Input Source operations connected to node Conv2D_28:
Pad_32 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/res3/conv1/weight/read (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)


Input Source operations connected to node Conv2D_28:
Pad_32 (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/res3/conv1/weight/read (defined at E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)


Original stack trace for 'Conv2D_28':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DFLab\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DFLab\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\DFLab\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 424, in on_initialize
    gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 230, in forward
    x = self.res3(x)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 82, in forward
    x = self.conv1(inp)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFLab\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 101, in forward
    x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2397, in conv2d
    name=name)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 972, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DFLab\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

dfllearner | Posted on 2023-7-18 01:40:47 (last edited by dfllearner on 2023-7-18 04:07)

The error message says it: Resource exhausted: OOM, meaning not enough VRAM.
The resolution is only 224, but ae_dims at 512 is high.
The real fix is a better GPU. Before that, try turning off the option that places the model and optimizer on the GPU (models_opt_on_gpu) and see whether the freed VRAM is enough. Come back with an update; I'm curious how much it helps.
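For context on the numbers in the log: the tensor that failed to allocate (shape [2, 128, 112, 112], float32) is itself small; it is the sum of many such activations plus the weights and AdaBelief optimizer state that overruns the ~5.33 GB of VRAM the trainer reports. A minimal Python back-of-the-envelope sketch, purely illustrative and not how TensorFlow's BFC allocator actually accounts for memory:

# Rough size of the single activation that triggered the OOM
# (from the log: shape [2, 128, 112, 112], dtype float32).
batch, channels, height, width = 2, 128, 112, 112
bytes_per_float32 = 4

one_activation = batch * channels * height * width * bytes_per_float32
print(f"one activation: {one_activation / 2**20:.2f} MiB")  # ~12.25 MiB

# The failure is cumulative: the forward and backward passes keep many such
# activations alive at once, on top of the model weights and (with
# adabelief=True) two extra moment tensors per weight. That is why lowering
# batch_size to 2 was not enough on its own.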
seancai110 | Posted on 2023-7-18 02:37:46
Try turning off the option that places the models on the GPU. You can also lower the GAN dims a bit; they generally don't need to be that large.
ccctttccct | Posted on 2023-7-18 13:18:00
Upgrade the GPU; 8 GB is not enough. Or switch to a different model.
ccctttccct | Posted on 2023-7-18 13:19:23
Or turn GAN off for now and enable it later, at the stage when you turn off random warp.
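The staged approach described in this reply (keep random warp on and the GAN off while the model generalizes, then disable the warp and enable a mild GAN for fine detail) can be summarized as two option sets. The values below are a sketch based on the advice in this thread, not official recommendations; gan_dims = 16 is simply an example of "smaller than the model's 32".

# Illustrative SAEHD option values for the two stages suggested in the thread.
stage1_generalize = {
    "random_warp": True,   # keep warping on while the faces are still forming
    "gan_power": 0.0,      # GAN off: saves the discriminator's VRAM and compute
}

stage2_polish = {
    "random_warp": False,  # warp off once the result is stable
    "gan_power": 0.1,      # enable a mild GAN for fine detail
    "gan_patch_size": 28,
    "gan_dims": 16,        # example of a smaller value than the model's 32
}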
Posted on 2023-7-18 14:27:13
Lower the model settings (use a lighter configuration).
OP | Posted on 2023-7-18 23:44:44
dfllearner posted on 2023-7-18 01:40:
The error message says it: Resource exhausted: OOM, meaning not enough VRAM.
The resolution is only 224, but ae_dims at 512 is high.
The real fix is to upgrade ...

Thanks! After turning off models_opt_on_gpu and leaving the other parameters at their defaults, training runs. Task Manager shows only 6.8 GB of VRAM in use. One more thing: I couldn't find where that 512 dimension can be changed.
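On why disabling models_opt_on_gpu freed so much: as I understand the option, turning it off keeps the model weights and optimizer state in system RAM rather than VRAM, trading some training speed for memory. A purely illustrative estimate follows; the parameter count is a hypothetical round number, not a measured figure for this model:

# Hypothetical estimate of what weights + AdaBelief optimizer state can occupy.
params = 300_000_000        # hypothetical trainable parameter count, NOT measured
bytes_per_param = 4         # float32
adabelief_moments = 2       # AdaBelief keeps two moment tensors per weight

weights_gib = params * bytes_per_param / 2**30
optimizer_gib = params * bytes_per_param * adabelief_moments / 2**30
print(f"weights ~{weights_gib:.1f} GiB, optimizer state ~{optimizer_gib:.1f} GiB")
# Even a rough estimate lands in the gigabyte range, consistent with the same
# settings fitting once models_opt_on_gpu is turned off.

As for the 512 value: ae_dims (like resolution, archi, and the other dims) is set when a model file is first created, so, as far as I know, it cannot be changed on a downloaded, already-created model and is not offered again at startup.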
OP | Posted on 2023-7-18 23:46:04
seancai110 posted on 2023-7-18 02:37:
Try turning off the option that places the models on the GPU. You can also lower the GAN dims a bit; they generally don't need to be that large ...

Thanks! Turning off that GPU placement option got it running. What is a good value for the GAN dims, though?
OP | Posted on 2023-7-18 23:47:19
ccctttccct posted on 2023-7-18 13:19:
Or turn GAN off for now and enable it later when you turn off random warp.

Thanks! Following that advice, it's running now.
OP | Posted on 2023-7-18 23:48:37

Thanks for the guidance.