Why can't I get 五彩's 288 pretrained model (神丹) to run? Asking for help

OP | Posted on 2024-10-19 12:37:50
            Model name: new_SAEHD

     Current iteration: 100000

----------------------- Model options ------------------------

            resolution: 288
             face_type: wf
     models_opt_on_gpu: True
                 archi: df
               ae_dims: 384
                e_dims: 92
                d_dims: 72
           d_mask_dims: 22
       masked_training: True
            lr_dropout: n
           random_warp: True
             gan_power: 0.0
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: none
              clipgrad: False
              pretrain: False
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
           random_flip: True
            batch_size: 8
       eyes_mouth_prio: False
           uniform_yaw: False
             adabelief: True
       random_src_flip: False
       random_dst_flip: True
        gan_patch_size: 36
              gan_dims: 16
      Model trained by: 五彩艺术
    Resale strictly prohibited: violators will be pursued
            Contact QQ: 83365298
         blur_out_mask: False
      random_hsv_power: 0.0

----------------------- Run info ------------------------

          Device index: 0
           Device name: NVIDIA GeForce RTX 4060 Ti
             VRAM size: 13.25GB

===================================================
猫之汉化 (Mao's Chinese localization)
Models for sale, commercial face-swap program development, custom commercial face-swap videos
QQ\WeChat: 564646676
Taobao store: http://t.hk.uy/4ks
=============================================

Starting. Press Enter to stop training and save progress.

Save time | Iterations | Iter time | SRC loss | DST loss
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_32 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat/concat/_969]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_32 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_32:
LeakyRelu_29 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_32:
LeakyRelu_29 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_32':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 410, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 226, in forward
    x = self.res2(x)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 84, in forward
    x = self.conv2(x)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_32}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat/concat/_969]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_32}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_32 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat/concat/_969]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1152,290,290] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_32 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_32:
LeakyRelu_29 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_32:
LeakyRelu_29 (defined at C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_32':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 410, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 226, in forward
    x = self.res2(x)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 84, in forward
    x = self.conv2(x)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\Users\Administrator\Desktop\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Posted on 2024-10-19 13:09:11
The moment you see OOM, you know the GPU has run out of VRAM.

Posted on 2024-10-19 13:11:34
Last edited by wtxx8888 on 2024-10-19 13:13

If you're new, start by reading more of the guides, tutorials, and notes.
OOM means the VRAM blew up: your card can't handle batch size 8, so either lower the batch size or switch to a different model.
This model's architecture is plain df, with no extra suffixes after it.
That means, first, it uses roughly twice the VRAM of a -d variant; second, without -u it's really an ordinary old-style SAE model rather than a true SAEHD high-definition setup. This model is quite dated...
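For a sense of scale, here is a quick back-of-the-envelope sketch in Python (my own illustration, not DFL code). It assumes float32 activations (4 bytes each) and that the leading 1152 dimension of the tensor named in the OOM message already folds in the batch size of 8; both are assumptions on my part.

def tensor_mib(shape, bytes_per_elem=4):
    # Size in MiB of a dense tensor with the given shape.
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20

# Shape taken from the OOM message above.
print(f"Pad_32 output at batch_size 8: {tensor_mib((1152, 290, 290)):.0f} MiB")  # ~370 MiB
print(f"same tensor at batch_size 4:   {tensor_mib((576, 290, 290)):.0f} MiB")   # ~185 MiB

And that ~370 MiB tensor is only one of the many activation and gradient buffers the graph keeps resident at once, which is why a 288-resolution model at batch size 8 can overrun the 13.25GB the trainer reports.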
OP | Posted on 2024-10-19 14:10:03
wtxx8888 posted on 2024-10-19 13:11:
If you're new, start by reading more of the guides, tutorials, and notes.
OOM means the VRAM blew up: your card can't handle batch size 8, so either lower the batch size or switch to a different model.
...

OK, thanks.

Posted on 2024-10-19 15:31:36
This 288 model is quite VRAM-hungry.

Posted on 2024-10-19 16:44:51
You've run out of VRAM.
Posted on 2024-10-19 20:31:32
Is your GPU the 16GB 4060 Ti?
Posted on 2024-10-30 18:01:05
Change this: batch_size: 8 (try lowering it to 4).
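The general pattern behind that advice, sketched in Python as a generic illustration (not DFL's actual trainer code; train_step is a hypothetical stand-in for one training iteration): keep halving the batch size until a step stops raising ResourceExhaustedError.

import tensorflow as tf

def find_fitting_batch_size(train_step, start=8, minimum=1):
    # Halve the batch size until a single training step fits in VRAM.
    bs = start
    while bs >= minimum:
        try:
            train_step(bs)      # run one iteration at this batch size
            return bs           # no OOM, so this batch size fits
        except tf.errors.ResourceExhaustedError:
            bs //= 2            # OOM: try half the batch size
    raise RuntimeError("even the minimum batch size does not fit in VRAM")

In DFL itself you don't need any code for this: as far as I know, batch_size is one of the options you can re-enter when the trainer starts, so set it to 4 there and drop it further if it still OOMs.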