Error when resuming training of a universal (pretrained) model

Posted on 2021-10-17 19:22:04
New user here, asking for advice.
I bought 滚石's universal model and added my own SRT footage to continue training it.
After roughly a few thousand iterations, training fails with the following error:
Error: OOM when allocating tensor with shape[4,176,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients_2/Conv2D_45_grad/Conv2DBackpropInput (defined at S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'gradients_2/Conv2D_45_grad/Conv2DBackpropInput', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\mainscripts\Trainer.py", line 57, in trainerThread
    debug=debug,
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 471, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights ) ]
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 420, in _MaybeCompile
    return grad_fn()  # Exit early
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_grad.py", line 532, in _Conv2DGrad
    data_format=data_format),
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1307, in conv2d_backprop_input
    name=name)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

...which was originally created as op 'Conv2D_45', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 338, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 158, in forward
    x3 = tf.nn.sigmoid(self.out_conv3(x))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 99, in forward
    x = tf.nn.conv2d(x, weight, self.strides, 'VALID', dilations=self.dilations, data_format=nn.data_format)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1026, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4,176,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients_2/Conv2D_45_grad/Conv2DBackpropInput (defined at S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Traceback (most recent call last):
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,176,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node gradients_2/Conv2D_45_grad/Conv2DBackpropInput}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\mainscripts\Trainer.py", line 123, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
    losses = self.onTrainOneIter()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 636, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm_all, warped_dst, target_dst, target_dstm_all)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 503, in src_dst_train
    self.target_dstm_all:target_dstm_all,
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4,176,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients_2/Conv2D_45_grad/Conv2DBackpropInput (defined at S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'gradients_2/Conv2D_45_grad/Conv2DBackpropInput', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\mainscripts\Trainer.py", line 57, in trainerThread
    debug=debug,
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 471, in on_initialize
    gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights ) ]
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
    grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 664, in gradients
    unconnected_gradients)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 420, in _MaybeCompile
    return grad_fn()  # Exit early
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 965, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_grad.py", line 532, in _Conv2DGrad
    data_format=data_format),
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1307, in conv2d_backprop_input
    name=name)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

...which was originally created as op 'Conv2D_45', defined at:
  File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
    self.on_initialize()
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 338, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 158, in forward
    x3 = tf.nn.sigmoid(self.out_conv3(x))
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 99, in forward
    x = tf.nn.conv2d(x, weight, self.strides, 'VALID', dilations=self.dilations, data_format=nn.data_format)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1026, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "S:\DeepFaceLab_NVIDIA802_汉化版\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4,176,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node gradients_2/Conv2D_45_grad/Conv2DBackpropInput (defined at S:\DeepFaceLab_NVIDIA802_汉化版\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


The model configuration is as follows:

==================== Model Summary =====================
==                                                    ==
==            Model name: DF-UD256_SAEHD              ==
==                                                    ==
==     Current iteration: 1158091                     ==
==                                                    ==
==------------------ Model Options -------------------==
==                                                    ==
==            resolution: 256                         ==
==             face_type: f                           ==
==     models_opt_on_gpu: True                        ==
==                 archi: df-ud                       ==
==               ae_dims: 352                         ==
==                e_dims: 88                          ==
==                d_dims: 88                          ==
==           d_mask_dims: 28                          ==
==       masked_training: True                        ==
==             eyes_prio: False                       ==
==           uniform_yaw: False                       ==
==            lr_dropout: n                           ==
==           random_warp: True                        ==
==             gan_power: 0.1                         ==
==       true_face_power: 0.01                        ==
==      face_style_power: 0.01                        ==
==        bg_style_power: 0.0                         ==
==               ct_mode: rct                         ==
==              clipgrad: True                        ==
==              pretrain: False                       ==
==       autobackup_hour: 3                           ==
== write_preview_history: True                        ==
==           target_iter: 0                           ==
==           random_flip: True                        ==
==            batch_size: 4                           ==
==                                                    ==
==-------------------- Running On --------------------==
==                                                    ==
==          Device index: 0                           ==
==                  Name: NVIDIA GeForce GTX 1060 6GB ==
==                  VRAM: 6.00GB


Could someone explain why this happens?
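The hint repeated in the traceback refers to TensorFlow 1.x RunOptions. Below is a minimal, self-contained sketch of enabling that flag; the tiny graph is only a stand-in, since DeepFaceLab's real session.run() call is wrapped inside its own training code (Model_SAEHD's src_dst_train, per the traceback), so the exact patch point is not shown here:

    import numpy as np
    import tensorflow as tf  # TF 1.x, as bundled with this DeepFaceLab build

    # A tiny stand-in graph; DeepFaceLab's real fetches live in Model_SAEHD.
    x = tf.placeholder(tf.float32, shape=[None, 4])
    y = tf.reduce_mean(tf.square(x))

    # Ask TensorFlow to list the live tensors when an OOM happens,
    # exactly as the hint in the error message suggests.
    run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

    with tf.Session() as sess:
        loss = sess.run(y, feed_dict={x: np.ones((2, 4), np.float32)},
                        options=run_options)
        print(loss)
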
qoungyoung posted on 2021-10-17 19:56:23
Turn the GAN off; 6 GB of VRAM can't handle GAN training at 256 resolution.
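For a sense of scale, the single activation the allocator failed on, shape [4,176,130,130] in float32, is already about 45 MiB, and enabling GAN adds a discriminator whose activations and gradients all need space on top of the main network. A quick back-of-the-envelope check (assuming 4 bytes per float32 element):

    # Size of the tensor named in the OOM message: shape [4, 176, 130, 130], float32.
    batch, channels, height, width = 4, 176, 130, 130
    bytes_per_element = 4  # float32
    size_bytes = batch * channels * height * width * bytes_per_element
    print(f"{size_bytes / 2**20:.1f} MiB")  # ~45.4 MiB for this one activation alone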

OP replied on 2021-10-17 20:18:24
Quoting qoungyoung (2021-10-17 19:56): Turn the GAN off; 6 GB can't handle GAN at 256.

Is it still a hardware problem, then?
By the way, what does this GAN setting actually relate to...?

qoungyoung posted on 2021-10-17 22:00:25
Quoting 新手剪辑师 (2021-10-17 20:18): Is it still a hardware problem? What does the GAN setting actually relate to?

It improves fine detail. You can check 滚石's tutorial.

fw1019 posted on 2021-10-17 23:21:32
Your VRAM is too small; try adjusting the training parameters.
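Concretely, the VRAM-hungry options in the Model Summary above are the usual ones to lower. The values below are only an illustration of that kind of change, not settings taken from this thread:

                gan_power: 0.1  -> 0.0   (disable GAN, as suggested above)
               batch_size: 4    -> 2
        models_opt_on_gpu: True -> False (keep optimizer state off the GPU)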

Posted on 2021-10-18 09:17:39
You don't need to understand anything else in the error output; as soon as you see OOM, look for the cause in VRAM or system RAM.

OP replied on 2021-10-18 10:37:20
Quoting fw1019 (2021-10-17 23:21): Your VRAM is too small; try adjusting the parameters.

Thank you very much.

OP replied on 2021-10-18 10:37:33
Quoting qoungyoung (2021-10-17 22:00): It improves fine detail; you can check 滚石's tutorial.

Thank you very much.

Posted on 2021-10-18 10:47:30
Your hardware probably isn't powerful enough?

Posted on 2021-12-9 10:36:16
Check whether your build is the 20211120 version; 滚石's latest universal model only runs on that build.
