Could an expert take a look at this error?

OP | Posted 2024-12-7 02:07:18
========================= Model Summary ==========================

                  Model name: 9_SAEHD

           Current iteration: 1540312

----------------------- Model Options ------------------------

            resolution: 288
             face_type: wf
     models_opt_on_gpu: True
                 archi: liae-ud
               ae_dims: 512
                e_dims: 96
                d_dims: 96
           d_mask_dims: 22
       masked_training: True
    retraining_samples: False
        high_loss_auto: True
       high_loss_power: 15
      number_of_cycles: 10
             eyes_prio: False
            mouth_prio: False
           uniform_yaw: False
         blur_out_mask: False
             adabelief: True
            lr_dropout: y
           random_warp: False
      random_hsv_power: 0.05
     random_downsample: False
          random_noise: False
           random_blur: False
           random_jpeg: False
         random_shadow: none
      background_power: 0.0
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: lct
          random_color: False
              clipgrad: False
              pretrain: False
       preview_samples: 2
    force_full_preview: False
                    lr: 5e-05
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
       random_src_flip: False
       random_dst_flip: False
            batch_size: 8
             gan_power: 0.0
        gan_patch_size: 40
              gan_dims: 16
         gan_smoothing: 0.1
             gan_noise: 0.0

----------------------- Running Info ------------------------

                Device index: 0
                 Device name: NVIDIA GeForce RTX 4060 Ti
                        VRAM: 13.25GB

===================================================
Starting. Press Enter to stop training and save progress.

Save time | Iterations | Iter time | SRC loss | DST loss
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node LeakyRelu_22 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_793]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node LeakyRelu_22 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node LeakyRelu_22:
Add_37 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107)

Input Source operations connected to node LeakyRelu_22:
Add_37 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107)

Original stack trace for 'LeakyRelu_22':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 59, in trainerThread
    debug=debug)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 510, in on_initialize
    gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 243, in forward
    m = self.upscalem2(m)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 72, in forward
    x = act(x, 0.1)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 29, in act
    return tf.nn.leaky_relu(x, alpha)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3621, in leaky_relu
    return gen_nn_ops.leaky_relu(features, alpha=alpha, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 5118, in leaky_relu
    "LeakyRelu", features=features, alpha=alpha, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node LeakyRelu_22}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_793]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node LeakyRelu_22}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 571, in train_one_iter
    losses, iter_time = self.onTrainOneIter()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 1008, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 727, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node LeakyRelu_22 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_7/concat/_793]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[8,176,72,72] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node LeakyRelu_22 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node LeakyRelu_22:
Add_37 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107)

Input Source operations connected to node LeakyRelu_22:
Add_37 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107)

Original stack trace for 'LeakyRelu_22':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 59, in trainerThread
    debug=debug)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 510, in on_initialize
    gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 243, in forward
    m = self.upscalem2(m)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 72, in forward
    x = act(x, 0.1)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 29, in act
    return tf.nn.leaky_relu(x, alpha)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 3621, in leaky_relu
    return gen_nn_ops.leaky_relu(features, alpha=alpha, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 5118, in leaky_relu
    "LeakyRelu", features=features, alpha=alpha, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Terea | Posted 2024-12-7 02:28:28
Try dropping the batch size to 2 and see whether it will run?
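For a rough sense of why a smaller batch helps, here is a minimal back-of-the-envelope sketch (not DeepFaceLab code): the tensor named in the OOM message is an activation whose first dimension is the batch size, so its footprint, like that of every other activation held in the training graph, shrinks in proportion.

```python
# Minimal sketch: size of the activation tensor reported in the OOM message,
# assuming float32 (4 bytes per element). Activation memory scales linearly
# with batch size, and the graph keeps many such tensors alive at once.

def tensor_mb(shape, bytes_per_elem=4):
    """Size of a dense tensor in megabytes."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 1024 ** 2

print(tensor_mb([8, 176, 72, 72]))  # ~27.8 MB at batch_size=8
print(tensor_mb([2, 176, 72, 72]))  # ~7.0 MB at batch_size=2
```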

wtxx8888 | Posted 2024-12-7 02:41:45
This post was last edited by wtxx8888 on 2024-12-7 02:45

OOM means you ran out of VRAM: your GPU does not have enough memory to run the model with its current parameters.

The forum has a combined thread on errors; have you tried searching?
https://dfldata.cc/forum.php?mod=viewthread&tid=1729

Also, this LIAE architecture has no T option, so you might as well throw it away.
The T option makes a huge difference for the LIAE architecture; only LIAE-UDT is worth using.

OP | Posted 2024-12-7 02:51:15
Terea posted on 2024-12-7 02:28:
Try dropping the batch size to 2 and see whether it will run?

========================= Model Summary ==========================

                  Model name: 9_SAEHD

           Current iteration: 1540312

----------------------- Model Options ------------------------

            resolution: 288
             face_type: wf
     models_opt_on_gpu: True
                 archi: liae-ud
               ae_dims: 512
                e_dims: 96
                d_dims: 96
           d_mask_dims: 22
       masked_training: True
    retraining_samples: False
        high_loss_auto: True
       high_loss_power: 15
      number_of_cycles: 10
             eyes_prio: False
            mouth_prio: False
           uniform_yaw: False
         blur_out_mask: False
             adabelief: True
            lr_dropout: y
           random_warp: False
      random_hsv_power: 0.05
     random_downsample: False
          random_noise: False
           random_blur: False
           random_jpeg: False
         random_shadow: none
      background_power: 0.0
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: lct
          random_color: False
              clipgrad: False
              pretrain: False
       preview_samples: 4
    force_full_preview: False
                    lr: 5e-05
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
       random_src_flip: False
       random_dst_flip: False
            batch_size: 2
             gan_power: 0.0
        gan_patch_size: 40
              gan_dims: 16
         gan_smoothing: 0.1
             gan_noise: 0.0

----------------------- Running Info ------------------------

                Device index: 0
                 Device name: NVIDIA GeForce RTX 4060 Ti
                        VRAM: 13.25GB

===================================================
Starting. Press Enter to stop training and save progress.

Save time | Iterations | Iter time | SRC loss | DST loss
Error: OOM when allocating tensor with shape[248832,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients/MatMul_2_grad/MatMul_1 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


Errors may have originated from an input operation.
Input Source operations connected to node gradients/MatMul_2_grad/MatMul_1:
mul_1 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py:400)

Original stack trace for 'gradients/MatMul_2_grad/MatMul_1':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 59, in trainerThread
    debug=debug)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 692, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights )]
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
    unconnected_gradients)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 682, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 338, in _MaybeCompile
    return grad_fn()  # Exit early
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 682, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1745, in _MatMulGrad
    grad_b = gen_math_ops.mat_mul(a, grad, transpose_a=True)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5713, in mat_mul
    name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

...which was originally created as op 'MatMul_2', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 505, in on_initialize
    gpu_dst_inter_B_code = self.inter_B (gpu_dst_code)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 151, in forward
    x = self.dense1(x)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Dense.py", line 66, in forward
    x = tf.matmul(x, weight)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 3655, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5713, in mat_mul
    name=name)

Traceback (most recent call last):
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[248832,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node gradients/MatMul_2_grad/MatMul_1}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 571, in train_one_iter
    losses, iter_time = self.onTrainOneIter()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 1008, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 727, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[248832,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients/MatMul_2_grad/MatMul_1 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.


Errors may have originated from an input operation.
Input Source operations connected to node gradients/MatMul_2_grad/MatMul_1:
mul_1 (defined at C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py:400)

Original stack trace for 'gradients/MatMul_2_grad/MatMul_1':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\mainscripts\Trainer.py", line 59, in trainerThread
    debug=debug)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 692, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights )]
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
    unconnected_gradients)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 682, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 338, in _MaybeCompile
    return grad_fn()  # Exit early
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 682, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 1745, in _MatMulGrad
    grad_b = gen_math_ops.mat_mul(a, grad, transpose_a=True)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5713, in mat_mul
    name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

...which was originally created as op 'MatMul_2', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\ModelBase.py", line 206, in __init__
    self.on_initialize()
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 505, in on_initialize
    gpu_dst_inter_B_code = self.inter_B (gpu_dst_code)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 151, in forward
    x = self.dense1(x)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\DeepFaceLab\core\leras\layers\Dense.py", line 66, in forward
    x = tf.matmul(x, weight)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 3655, in matmul
    a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
  File "C:\DFL1120_RTX30XX\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5713, in mat_mul
    name=name)
Terea | Posted 2024-12-7 02:55:06
adsllk posted on 2024-12-7 02:51:
========================= Model Summary ==========================

                  Model name: 9_SAEHD

Then better take wtxx8888's advice above and switch to a LIAE-UDT model~

pasanonic | Posted 2024-12-7 20:59:56
This post was last edited by pasanonic on 2024-12-7 21:02

A LIAE model without T is a dud; stop wasting time on it.

lr_dropout uses a lot of VRAM. Lower the batch size, and you will have to lower it again later when you enable GAN.
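As a rough, non-authoritative reading of the second log: even at batch_size=2 the OOM hits a [248832, 512] tensor, and the stack trace (inter_B, Dense.py, _MatMulGrad) suggests it is the gradient of a dense-layer weight. A weight gradient has the same shape as the weight itself, so it does not shrink with batch size; failing on a roughly half-gigabyte allocation mainly indicates the VRAM was already nearly exhausted by that point.

```python
# Minimal sketch, assuming float32 (4 bytes per element): the allocation that
# failed in the second log is a [248832, 512] weight gradient, whose size is
# fixed by the model dimensions (resolution / ae_dims), not by batch_size.

rows, cols, bytes_per_elem = 248832, 512, 4
print(rows * cols * bytes_per_elem / 1024 ** 2)  # ~486 MB for this one tensor
```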