deepfacelab中文网

Thread starter: a931386

《384 pretrained base model, 4.30M iterations》 wf/df-udt/448/96/96/32


Posted 2023-6-5 09:55:53

OK, I've downloaded it!
But it won't run on my 8 GB of VRAM.
Can I lower the parameters to make it run?

Thread starter | Posted 2023-6-5 10:48:39
perryfans posted 2023-6-5 09:55:
OK, I've downloaded it!
But it won't run on my 8 GB of VRAM.
Can I lower the parameters to make it run?

The dims can't be lowered any further; you'd probably need 16 GB to run it.

Thread starter | Posted 2023-6-5 10:49:50

I trained it to 4M iterations and stopped once the loss stopped moving.


Posted 2023-6-5 22:13:53
a931386 posted 2023-6-5 10:49:
I trained it to 4M iterations and stopped once the loss stopped moving.

The rate of decrease has just slowed; it can't have truly stopped moving. After all, averaged out, each image has only been trained a few dozen times.
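The arithmetic behind this claim can be sketched directly. The iteration count and batch size below come from the model summary pasted later in this thread; the faceset size is a hypothetical example value, since the thread never states it.

```python
# Estimate how many times each faceset image has been trained on average.
# iterations and batch_size are taken from the model summary in this thread;
# num_images = 400_000 is a hypothetical faceset size (not stated in the thread).
def passes_per_image(iterations: int, batch_size: int, num_images: int) -> float:
    return iterations * batch_size / num_images

print(round(passes_per_image(4_313_541, 8, 400_000)))  # -> 86
```

With a faceset of a few hundred thousand images, roughly 4.3M iterations at batch 8 works out to under a hundred passes per image, which is consistent with "a few dozen times" and leaves room for the loss to keep creeping down.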

Posted 2023-6-6 02:31:47
Passing by to show support.

Posted 2023-6-6 17:31:03
It won't even start for me. What could be the reason?
Initializing models: 100%|###############################################################| 5/5 [00:04<00:00,  1.20it/s]
Loading samples: 100%|##############################################################| 165/165 [00:00<00:00, 433.80it/s]
Loading samples: 100%|############################################################| 1665/1665 [00:03<00:00, 498.34it/s]
======================== Model Summary ========================
==                                                           ==
==            Model name: 384_SAEHD                          ==
==                                                           ==
==     Current iteration: 4313541                            ==
==                                                           ==
==---------------------- Model Options ----------------------==
==                                                           ==
==            resolution: 384                                ==
==             face_type: wf                                 ==
==     models_opt_on_gpu: True                               ==
==                 archi: df-udt                             ==
==               ae_dims: 448                                ==
==                e_dims: 96                                 ==
==                d_dims: 96                                 ==
==           d_mask_dims: 32                                 ==
==       masked_training: True                               ==
==       eyes_mouth_prio: False                              ==
==           uniform_yaw: False                              ==
==         blur_out_mask: False                              ==
==             adabelief: True                               ==
==            lr_dropout: y                                  ==
==           random_warp: False                              ==
==      random_hsv_power: 0.0                                ==
==       true_face_power: 0.0                                ==
==      face_style_power: 0.0                                ==
==        bg_style_power: 0.0                                ==
==               ct_mode: none                               ==
==              clipgrad: True                               ==
==              pretrain: False                              ==
==       autobackup_hour: 6                                  ==
== write_preview_history: False                              ==
==           target_iter: 0                                  ==
==       random_src_flip: False                              ==
==       random_dst_flip: False                              ==
==            batch_size: 8                                  ==
==             gan_power: 0.0                                ==
==        gan_patch_size: 48                                 ==
==              gan_dims: 16                                 ==
==                                                           ==
==----------------------- Running On ------------------------==
==                                                           ==
==          Device index: 0                                  ==
==                  Name: NVIDIA GeForce RTX 3060 Laptop GPU ==
==                  VRAM: 9.37GB                             ==
==                                                           ==
===============================================================
Starting. Press "Enter" to stop training and save model.
Error: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_12 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_5/concat/_1459]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_12 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_12:
LeakyRelu_11 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_12:
LeakyRelu_11 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_12':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 408, in on_initialize
    gpu_dst_code     = self.inter(self.encoder(gpu_warped_dst))
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 113, in forward
    x = self.down2(x)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 43, in forward
    x = self.conv1(x)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)

Traceback (most recent call last):
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
    return fn(*args)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
    target_list, run_metadata)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_12}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_5/concat/_1459]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node Pad_12}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 129, in trainerThread
    iter, iter_time = model.train_one_iter()
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 474, in train_one_iter
    losses = self.onTrainOneIter()
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
    src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
    self.target_dstm_em:target_dstm_em,
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
    run_metadata_ptr)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
    feed_dict_tensor, options, run_metadata)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
    run_metadata)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
    raise type(e)(node_def, op, message)  # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_12 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

         [[concat_5/concat/_1459]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

  (1) Resource exhausted: OOM when allocating tensor with shape[768,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node Pad_12 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node Pad_12:
LeakyRelu_11 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Input Source operations connected to node Pad_12:
LeakyRelu_11 (defined at E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)

Original stack trace for 'Pad_12':
  File "threading.py", line 884, in _bootstrap
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
    debug=debug)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
    self.on_initialize()
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 408, in on_initialize
    gpu_dst_code     = self.inter(self.encoder(gpu_warped_dst))
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 113, in forward
    x = self.down2(x)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 43, in forward
    x = self.conv1(x)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
    return self.forward(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
    x = tf.pad (x, padding, mode='CONSTANT')
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
    op_def=op_def)
  File "E:\DeepFaceLab\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
    self._traceback = tf_stack.extract_stack_for_node(self._c_op)
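For a sense of scale, the single activation tensor named in the OOM message above can be sized directly from its logged shape. This is plain arithmetic on the shape reported in the error, assuming "float" means 4-byte float32:

```python
# Size of the tensor that failed to allocate, per the OOM message:
# shape [768, 196, 196], dtype float (assumed to be float32, 4 bytes/element).
def tensor_bytes(shape, bytes_per_elem=4):
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem

size = tensor_bytes([768, 196, 196])
print(f"{size / 2**20:.1f} MiB")  # -> 112.5 MiB for this one activation alone
```

Training keeps many such forward and backward activations live at once, so a card reporting 9.37 GB of usable VRAM can exhaust the allocator even though any single tensor looks modest.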


Thread starter | Posted 2023-6-6 21:07:04
canyue posted 2023-6-6 17:31:
It won't even start for me. What could be the reason?
Initializing models: 100%|##################################### ...

You're using my 4090 configuration with batch_size 8; of course it won't run.
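A rough way to pick a workable batch size: activation memory grows approximately linearly with batch_size, so if batch 8 needs around 16 GB (as stated earlier in the thread), one can scale down for a smaller card. This is a back-of-the-envelope sketch, not a DeepFaceLab feature; the fixed overhead figure for weights, optimizer state, and the CUDA context is an assumption.

```python
# Heuristic estimate of the largest batch size for a given VRAM budget,
# assuming activation memory scales linearly with batch size.
# ref_batch/ref_vram_gb come from the "batch 8 needs ~16 GB" claim above;
# overhead_gb is an assumed fixed cost (model weights, optimizer, CUDA context).
def max_batch(vram_gb, ref_batch=8, ref_vram_gb=16.0, overhead_gb=2.0):
    usable = vram_gb - overhead_gb
    ref_usable = ref_vram_gb - overhead_gb
    return max(1, int(ref_batch * usable / ref_usable))

print(max_batch(9.37))  # the 3060 Laptop's reported VRAM -> roughly 4
```

In practice the only reliable check is to lower batch_size in the trainer prompt and retry until the OOM disappears; the estimate just gives a starting point.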

Posted 2023-6-7 09:46:49
Thanks for sharing.


Posted 2023-6-8 21:52:14
Thanks for sharing; planning to buy.

Posted 2023-6-8 22:11:28
Thanks for sharing.
