Begging the experts for help: please take a look at what this problem is!


OP | Posted on 2022-11-25 17:59:35
Choose one of saved models, or enter a name to create a new model.
[r] : rename
[d] : delete

[0] : new - latest
:
0
Loading new_SAEHD model...

Choose one or several GPU idxs (separated by comma).

[CPU] : CPU
  [0] : NVIDIA GeForce RTX 3070

[0] Which GPU indexes to choose? :
0

Initializing models: 100%|###############################################################| 7/7 [00:09<00:00,  1.43s/it]
Loading samples...: 100%|############################################################| 12800/12800 [00:25<00:00, 499.10it/s]
Sort by yaw: 100%|##################################################################| 128/128 [00:00<00:00, 137.19it/s]
Loading samples...: 100%|############################################################| 18565/18565 [00:40<00:00, 463.12it/s]
Sort by yaw: 100%|##################################################################| 128/128 [00:01<00:00, 120.64it/s]

======================== Model Summary ========================

                  Model name: new_SAEHD

           Current iteration: 4714856

---------------------- Model Options ----------------------

            resolution: 224
             face_type: wf
     models_opt_on_gpu: True
                 archi: liae-udt
               ae_dims: 512
                e_dims: 64
                d_dims: 64
           d_mask_dims: 32
       masked_training: True
       eyes_mouth_prio: True
           uniform_yaw: True
         blur_out_mask: True
             adabelief: True
            lr_dropout: y
           random_warp: False
      random_hsv_power: 0.1
       true_face_power: 0.0
      face_style_power: 0.0
        bg_style_power: 0.0
               ct_mode: none
              clipgrad: False
              pretrain: False
       autobackup_hour: 0
write_preview_history: False
           target_iter: 0
       random_src_flip: False
       random_dst_flip: True
            batch_size: 8
             gan_power: 0.1
        gan_patch_size: 28
              gan_dims: 32

---------------------- Running On ----------------------

                  Device index: 0
                   Device name: NVIDIA GeForce RTX 3070
                VRAM available: 5.33GB

================================================
猫之汉化 (Mao's Chinese localization)
Selling models, commercial face-swap program development, custom commercial face-swap video services
QQ / WeChat: 564646676
Taobao shop: http://t.hk.uy/4ks
=============================================

Starting. Press Enter to stop training and save progress.

Save time|Iters|Iter time|SRC loss|DST loss
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_56 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_56 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Conv2D_56:
Pad_60 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/upscalem0/conv1/weight/read (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)

Input Source operations connected to node Conv2D_56:
Pad_60 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/upscalem0/conv1/weight/read (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)

Original stack trace for 'Conv2D_56':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 425, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 241, in forward
    m = self.upscalem0(z)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 71, in forward
    x = self.conv1(x)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 101, in forward
    x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2397, in conv2d
    name=name)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 972, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Conv2D_56}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Conv2D_56}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
    losses = self.onTrainOneIter()
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_56 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_17/concat/_1335]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[1024,2048,3,3] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Conv2D_56 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:101) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Conv2D_56:
Pad_60 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/upscalem0/conv1/weight/read (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)

Input Source operations connected to node Conv2D_56:
Pad_60 (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87)
decoder/upscalem0/conv1/weight/read (defined at E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:61)

Original stack trace for 'Conv2D_56':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
    self.on_initialize()
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 425, in on_initialize
    gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 241, in forward
    m = self.upscalem0(z)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 71, in forward
    x = self.conv1(x)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 101, in forward
    x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2397, in conv2d
    name=name)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 972, in conv2d
    data_format=data_format, dilations=dilations, name=name)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DFL_maozhihanhua_RTX3000\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

The training just stops there and doesn't move. I think my virtual memory is large enough, but it still doesn't work. My virtual memory settings are shown below!!
[Attachment: QQ截图20221125175400.png]
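For reference, the hint the log keeps printing about report_tensor_allocations_upon_oom refers to TensorFlow's RunOptions. DeepFaceLab does not expose a switch for it, so actually getting that allocation dump would mean editing the session.run call in the training code yourself. A minimal sketch of the TF1-style plumbing, with sess / fetches / feeds as placeholder names rather than DFL's actual variables:

    import tensorflow as tf

    tf1 = tf.compat.v1  # DeepFaceLab drives TF in graph (non-eager) mode

    # Ask TF to dump the list of live tensor allocations when an OOM happens,
    # as the hint in the error message above suggests.
    run_opts = tf1.RunOptions(report_tensor_allocations_upon_oom=True)

    # Hypothetical call site; in DFL this would be the session.run inside the
    # training step (the src_dst_train feed shown in the traceback).
    # sess.run(fetches, feed_dict=feeds, options=run_opts)

The dump only shows what was resident when the allocation failed; the actual fix is still to reduce the memory demand, as the replies below suggest.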

Posted on 2022-11-25 18:42:45
It's an OOM. The 3070, a fine card crippled by its small VRAM, refuses to train this model at BS=8.

Posted on 2022-11-25 19:11:41
Let me translate that for you: not enough VRAM, so the program crashed.
Reducing the BS value lowers VRAM usage.
Try BS=6 first; if that still fails, try BS=4.
If 4 doesn't work either, switch to a model with lower-parameter settings.
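A quick back-of-envelope sketch of why this works (the per-sample figure below is a made-up placeholder, not measured from this model): the weights are a fixed cost, but activation memory grows roughly linearly with batch size, so going from BS=8 to BS=4 roughly halves that part of the VRAM bill.

    # Back-of-envelope only; elems_per_sample is an illustrative placeholder.
    BYTES_FP32 = 4

    def activation_gb(batch_size, elems_per_sample=150_000_000):
        # rough total of activation elements one sample produces across all layers
        return batch_size * elems_per_sample * BYTES_FP32 / 1024 ** 3

    for bs in (8, 6, 4):
        print(f"BS={bs}: ~{activation_gb(bs):.1f} GB of activations")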

Posted on 2022-11-25 19:36:15
You only have a bit over 5 GB of VRAM available, so there is no way to run the dims at 512. Your dims configuration is unreasonable; 384 96 96 would be more workable.

OP | Posted on 2022-11-25 20:17:49
ken2099 posted on 2022-11-25 19:11
Let me translate that for you: not enough VRAM, so the program crashed.
Reducing the BS value lowers VRAM usage.
Try BS=6 first; if that still fails, try BS=4.

Does BS just mean batch size?

OP | Posted on 2022-11-25 20:27:51
ken2099 posted on 2022-11-25 19:11
Let me translate that for you: not enough VRAM, so the program crashed.
Reducing the BS value lowers VRAM usage.
Try BS=6 first; if that still fails, try BS=4.

Is the 3070 really that weak? It still crashed when I ran it at 4.

OP | Posted on 2022-11-25 20:29:44
anazyz posted on 2022-11-25 19:36
You only have a bit over 5 GB of VRAM available, so there is no way to run the dims at 512. Your dims configuration is unreasonable; 384 96 96 would be more workable.

Where do I change these settings?

Posted on 2022-11-26 07:17:43
Take a look at 滚石's post in the featured tutorials section; it explains this in detail.
ae_dims: 512
e_dims: 64
d_dims: 64
d_mask_dims: 32
These four parameters correspond to the following options (see the sketch after this list):

11. Width of the model's bottleneck layer
[256] AutoEncoder dimensions ( 32-1024 ?:help ) :

Description: the number of neurons in the middle-most layer of the model. Roughly speaking, the larger it is, the more capable the model, but the more VRAM it needs. Like a brain: a human brain has more neurons than a pig brain and is more capable, but it also needs a bigger skull.
Recommended: 256 and above

12. Width of the model's encoder layers
[64] Encoder dimensions ( 16-256 ?:help ) :

Description: the number of neurons in the first half of the model. Larger means a more capable model but more VRAM, same analogy as above.
Recommended: 64 and above

13. Width of the model's decoder layers
[64] Decoder dimensions ( 16-256 ?:help ) :

Description: the number of neurons in the second half of the model. Larger means a more capable model but more VRAM, same analogy as above.
Recommended: 64 and above

14. Width of the decoder's mask layers
[16] Decoder mask dimensions ( 16-256 ?:help ) : Typical mask dimensions = decoder dimensions / 3. If you manually cut out obstacles from the dst mask, you can increase this parameter to achieve better quality.
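As a hedged illustration of why these dims dominate VRAM (the shapes below are made-up placeholders, not DeepFaceLab's actual LIAE-UDT layer list), weight memory grows with the product of the connected layer widths, which is why lowering ae_dims is one of the first things to try on a small card:

    # Hypothetical shapes, only to show how the dims drive weight memory;
    # this is not DeepFaceLab's real layer list.
    BYTES_FP32 = 4

    def mb(n_params):
        return n_params * BYTES_FP32 / 1024 ** 2

    def dense_bottleneck_mb(ae_dims, flat_in=8 * 8 * 512):
        # flattened encoder output -> ae_dims -> expanded back out again
        return mb(flat_in * ae_dims + ae_dims * flat_in)

    def conv3x3_mb(in_ch, out_ch):
        return mb(in_ch * out_ch * 3 * 3)

    print(f"bottleneck dense pair, ae_dims=512: ~{dense_bottleneck_mb(512):.0f} MB")
    print(f"bottleneck dense pair, ae_dims=384: ~{dense_bottleneck_mb(384):.0f} MB")
    # shape[1024,2048,3,3] is the weight the log failed to allocate:
    print(f"one 3x3 conv, 1024 -> 2048 channels: ~{conv3x3_mb(1024, 2048):.0f} MB")

These figures are just raw float32 weights; gradients, optimizer state, and the per-batch activations come on top of them, which is how the reported 5.33 GB gets exhausted.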

Posted on 2022-11-26 16:08:08
Set your virtual memory larger: at least 60 GB, and 100 GB is not too much. Turn "batch_size" down; if it still won't run, disable "Place models and optimizer on GPU". If it still won't run after that, try the RG build.
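A rough sketch of why both suggestions help (the parameter count below is a placeholder, not this model's real size): AdaBelief, like Adam, keeps two extra float tensors per trainable weight, so with "Place models and optimizer on GPU" disabled that state sits in system RAM and the page file instead of VRAM, which is also why a large virtual memory setting matters.

    # Rough sizing sketch; n_params is a made-up placeholder.
    BYTES_FP32 = 4

    def optimizer_state_gb(n_params, moments_per_weight=2):
        # AdaBelief (like Adam) tracks two running moments per weight
        return n_params * moments_per_weight * BYTES_FP32 / 1024 ** 3

    n_params = 400_000_000  # placeholder; depends on resolution and the dims above
    print(f"model weights:         ~{n_params * BYTES_FP32 / 1024 ** 3:.1f} GB")
    print(f"extra optimizer state: ~{optimizer_state_gb(n_params):.1f} GB kept off the GPU")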

Posted on 2022-11-26 22:06:33
ahszlele posted on 2022-11-25 20:27
Is the 3070 really that weak? It still crashed when I ran it at 4.

The 3070's performance is decent; its VRAM is just too small.
Try the ME build instead. Search the forum for it yourself.
