deepfacelab中文网

Got a model, and it throws an error

OP | Posted 2022-11-8 12:20:12
I got a model with about 3,000,000 iterations and wanted to continue training with it, but it throws an error. Could any of the experts here tell me where the problem is? I've only been at this for one day and I'm quite lost.
================== Model Summary ===================
==                                                ==
==            Model name: DF-UD256_SAEHD          ==
==                                                ==
==     Current iteration: 3490014                 ==
==                                                ==
==---------------- Model Options -----------------==
==                                                ==
==            resolution: 256                     ==
==             face_type: f                       ==
==     models_opt_on_gpu: True                    ==
==                 archi: df-ud                   ==
==               ae_dims: 352                     ==
==                e_dims: 88                      ==
==                d_dims: 88                      ==
==           d_mask_dims: 28                      ==
==       masked_training: True                    ==
==           uniform_yaw: False                   ==
==            lr_dropout: y                       ==
==           random_warp: True                    ==
==             gan_power: 0.0                     ==
==       true_face_power: 0.1                     ==
==      face_style_power: 0.0                     ==
==        bg_style_power: 0.0                     ==
==               ct_mode: rct                     ==
==              clipgrad: True                    ==
==              pretrain: False                   ==
==       autobackup_hour: 12                      ==
== write_preview_history: False                   ==
==           target_iter: 0                       ==
==           random_flip: False                   ==
==            batch_size: 8                       ==
==       eyes_mouth_prio: True                    ==
==             adabelief: False                   ==
==        gan_patch_size: 32                      ==
==              gan_dims: 16                      ==
==       random_src_flip: False                   ==
==       random_dst_flip: True                    ==
==         blur_out_mask: False                   ==
==      random_hsv_power: 0.0                     ==
==                                                ==
==------------------ Running On ------------------==
==                                                ==
==          Device index: 0                       ==
==                  Name: NVIDIA GeForce RTX 2080 ==
==                  VRAM: 6.13GB                  ==
==                                                ==
====================================================





Error: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_18 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_5 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py:563) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Traceback (most recent call last):
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
    return fn(*args)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_18}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[{{node concat_5}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\mainscripts\Trainer.py", line 129, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\ModelBase.py", line 474, in train_one_iter
    losses = self.onTrainOneIter()
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
    run_metadata_ptr)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
    run_metadata)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_18 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_5 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py:563) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Caused by op 'Pad_18', defined at:
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 409, in on_initialize
    gpu_pred_src_src, gpu_pred_src_srcm = self.decoder_src(gpu_src_code)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\archis\DeepFakeArchi.py", line 226, in forward
    x = self.res2(x)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\archis\DeepFakeArchi.py", line 84, in forward
    x = self.conv2(x)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2299, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5539, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
    op_def=op_def)
  File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
    self._traceback = tf_stack.extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_18 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[node concat_5 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py:563) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
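The repeated Hint lines above refer to a stock TensorFlow 1.x diagnostic, not a DFL setting. For a rough idea of what the hint means, here is a minimal TF 1.x sketch; the session and fetch names are placeholders, not DFL's actual variables, and in DFL you would have to patch the sess.run(...) call in Model_SAEHD/Model.py yourself:

import tensorflow as tf

# RunOptions with report_tensor_allocations_upon_oom makes TF dump the
# live tensor allocations when an OOM happens, so you can see which
# tensors are filling VRAM. This is standard TF 1.x API, not a DFL option.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# Placeholder call: sess, fetches and feed_dict stand in for the
# trainer's own session.run arguments.
# results = sess.run(fetches, feed_dict=feed_dict, options=run_options)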

Posted 2022-11-8 15:24:49
Not enough VRAM. Set batch_size smaller.
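For a sense of scale: the single tensor in the error message is already close to 100 MB, and activation memory grows roughly linearly with batch_size. A quick back-of-envelope check in plain Python (illustrative only):

# The allocation that failed: shape [1408, 130, 130], float32 = 4 bytes/elem.
n_bytes = 1408 * 130 * 130 * 4
print(round(n_bytes / 1024**2, 1), "MiB")  # ~90.8 MiB for this one activation

# Training holds many activations of this size, plus gradients and
# optimizer state, so a 256-resolution SAEHD at batch_size 8 can easily
# exceed the 6.13 GB this RTX 2080 reports. Halving batch_size roughly
# halves the activation memory.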

Posted 2022-11-8 18:09:32
OOM means the VRAM is too small. There is a thread here about common errors, with detailed fixes inside.
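Putting the two replies together: of the options in the model summary, the ones you can still change on an existing model and that directly trade VRAM for speed are batch_size and models_opt_on_gpu (resolution, archi, and the *_dims are baked into a trained model). Illustrative values only, in the summary's own format:

==     models_opt_on_gpu: True  ->  False   (optimizer weights move to system RAM)
==            batch_size: 8     ->  4       (activation memory scales ~linearly)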