deepfacelab中文网

Thread starter: aaa2002911

DFL common error Q&A collection thread (updated from time to time)

Posted on 2022-9-18 21:44:42
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_85 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:242) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_15/concat/_1557]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_85 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:242) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_85:
mul_25 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:472)

Input Source operations connected to node Pad_85:
mul_25 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:472)

Original stack trace for 'Pad_85':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 472, in on_initialize
    gpu_src_loss += nn.style_loss(gpu_pred_src_dst_no_code_grad*tf.stop_gradient(gpu_pred_src_dstm), tf.stop_gradient(gpu_pred_dst_dst*gpu_pred_dst_dstm), gaussian_blur_radius=resolution//8, loss_weight=10000*face_style_power)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 261, in style_loss
    target = gaussian_blur(target, gaussian_blur_radius)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 242, in gaussian_blur
    x = tf.pad(x, padding )
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_85}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_15/concat/_1557]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_85}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_85 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:242) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_15/concat/_1557]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[12,480,480] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_85 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py:242) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_85:
mul_25 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:472)

Input Source operations connected to node Pad_85:
mul_25 (defined at D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:472)

Original stack trace for 'Pad_85':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 472, in on_initialize
    gpu_src_loss += nn.style_loss(gpu_pred_src_dst_no_code_grad*tf.stop_gradient(gpu_pred_src_dstm), tf.stop_gradient(gpu_pred_dst_dst*gpu_pred_dst_dstm), gaussian_blur_radius=resolution//8, loss_weight=10000*face_style_power)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 261, in style_loss
    target = gaussian_blur(target, gaussian_blur_radius)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 242, in gaussian_blur
    x = tf.pad(x, padding )
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "D:\dfl\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)
Could someone more experienced take a look at this for me?
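
This is a plain TensorFlow out-of-memory (OOM) error: the style-loss Gaussian blur at resolution 480 could not allocate a [12, 480, 480] float tensor on the GPU. As a minimal sketch of the hint printed in the log (not DFL's own code; the shape and padding below are only illustrative), the report_tensor_allocations_upon_oom flag is passed through RunOptions in graph mode like this:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # as the hint notes, the flag is not available in eager mode

# Stand-in graph with the same tensor shape that ran out of memory above.
x = tf.compat.v1.placeholder(tf.float32, shape=[None, 480, 480])
y = tf.pad(x, [[0, 0], [30, 30], [30, 30]])   # illustrative padding, like the gaussian_blur Pad op

# The RunOptions flag named in the hint: on OOM, TensorFlow also dumps the allocated tensors.
run_options = tf.compat.v1.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.compat.v1.Session() as sess:
    out = sess.run(y,
                   feed_dict={x: np.zeros((12, 480, 480), np.float32)},
                   options=run_options)
    print(out.shape)

In practice the more direct remedy for this particular trace is usually to lower batch_size or resolution, or to turn face_style_power off, since the style loss at Model.py:472 adds large extra tensors on the GPU.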

Posted on 2022-9-20 11:05:17
Running 5.XSeg) data_dst mask for XSeg trainer - edit.bat does not open the editor. What could be the problem?
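
No definite answer was given in the thread, but a useful first step is to launch the .bat from an already-open command prompt so the console does not close, and to check that the bundled interpreter can bring up the Qt GUI at all. A minimal diagnostic sketch (a hypothetical helper, not part of DFL: it assumes the build's own python.exe under _internal and that the XSeg editor is a PyQt5 application, as in stock DeepFaceLab):

# Hypothetical check script: run it with the DFL build's bundled python.exe.
import sys
import traceback

print("Python:", sys.version)

try:
    from PyQt5.QtCore import QT_VERSION_STR
    from PyQt5 import QtWidgets                # stock DeepFaceLab's XSeg editor is built on PyQt5
    app = QtWidgets.QApplication([])           # creating the application object surfaces display/driver problems
    print("PyQt5 loaded, Qt", QT_VERSION_STR)
except Exception:
    traceback.print_exc()                      # this is the error an auto-closing console window would hide

A workspace path containing Chinese characters or spaces is another common culprit for the editor scripts, so a short English-only path is also worth checking.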

Posted on 2022-9-25 14:17:12
Great work, very helpful.

Posted on 2022-9-26 09:23:43
Hextech-level magic, wild stuff.

Posted on 2022-9-26 10:28:55
Replying out of respect for the author's work.

Posted on 2022-9-27 22:05:52
Replying first, will read later.

Posted on 2022-10-3 14:52:26
Thanks to the author; respect for the work put into this.

Posted on 2022-10-7 20:07:04
Here to learn.

Posted on 2022-10-11 16:59:53
Nice post, very useful.

Posted on 2022-10-20 13:26:59
I have just started using it and haven't run into these errors yet. Bookmarked!