My GPU is a GTX 1050 Ti, and the training model I'm using is the free pretrained "super elixir" DFL2.0_WF_320_DF-UD_1kk. I downloaded it and extracted it directly into the model folder. After running 6) train SAEHD.bat, the error below appeared. Could someone take a look and tell me what's going on?
Running trainer.
Choose one of saved models, or enter a name to create a new model.
[r] : rename
[d] : delete
[0] : WF 320 DF-UD - latest
:
0
Loading WF 320 DF-UD_SAEHD model...
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1050 Ti
[0] Which GPU indexes to choose? :
0
Initializing models: 100%|###############################################################| 5/5 [00:06<00:00, 1.37s/it]
Loaded 15843 packed faces from E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\pretrain_faces
Sort by yaw: 100%|#######################################################################################################################################################################################| 128/128 [00:00<00:00, 212.39it/s]
Sort by yaw: 100%|#######################################################################################################################################################################################| 128/128 [00:00<00:00, 214.51it/s]
================ Model Summary =================
== ==
== Model name: WF 320 DF-UD_SAEHD ==
== ==
== Current iteration: 1001991 ==
== ==
==-------------- Model Options ---------------==
== ==
== resolution: 320 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: df-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== uniform_yaw: True ==
== lr_dropout: n ==
== random_warp: False ==
== gan_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: True ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_flip: True ==
== batch_size: 8 ==
== eyes_mouth_prio: False ==
== adabelief: True ==
== gan_patch_size: 40 ==
== gan_dims: 16 ==
== ==
==---------------- Running On ----------------==
== ==
== Device index: 0 ==
== Name: GeForce GTX 1050 Ti ==
== VRAM: 2.93GB ==
== ==
================================================
Starting. Press "Enter" to stop training and save model.
Error: OOM when allocating tensor with shape[8,64,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Add (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_3 (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:527) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'Add', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 376, in on_initialize
gpu_src_code = self.inter(self.encoder(gpu_warped_src))
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 89, in forward
x = nn.flatten(self.down1(x))
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 52, in forward
x = down(x)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 32, in forward
x = self.conv1(x)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 107, in forward
x = tf.add(x, bias)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 365, in add
"Add", x=x, y=y, name=name)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[8,64,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Add (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_3 (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:527) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Traceback (most recent call last):
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8,64,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Add}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[{{node concat_3}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 129, in trainerThread
iter, iter_time = model.train_one_iter()
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 474, in train_one_iter
losses = self.onTrainOneIter()
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 744, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 549, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[8,64,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Add (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_3 (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:527) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'Add', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 376, in on_initialize
gpu_src_code = self.inter(self.encoder(gpu_warped_src))
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 89, in forward
x = nn.flatten(self.down1(x))
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 52, in forward
x = down(x)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 32, in forward
x = self.conv1(x)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 107, in forward
x = tf.add(x, bias)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 365, in add
"Add", x=x, y=y, name=name)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[8,64,160,160] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Add (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:107) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_3 (defined at E:\DEEP\DeepFaceLab\DeepFaceLab_NVIDIA_up_to_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:527) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Looking forward to an expert's answer.
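
For context, a minimal back-of-the-envelope sketch of the memory pressure behind this error, assuming float32 activations (4 bytes per value); the tensor shape [8,64,160,160] and the 2.93GB VRAM figure are taken from the log above, and the script is purely illustrative rather than part of DeepFaceLab:

# Size of the single activation tensor that failed to allocate.
# Shape [8, 64, 160, 160] and the ~2.93 GB usable VRAM come from the log above;
# float32 (4 bytes per value) is an assumed training dtype.
batch, channels, height, width = 8, 64, 160, 160
bytes_per_value = 4

tensor_mb = batch * channels * height * width * bytes_per_value / 1024**2
print(f"Single activation tensor: {tensor_mb:.0f} MB")  # prints ~50 MB

# A 320-resolution SAEHD model at batch size 8 keeps many activations of this
# size (plus weights, gradients and optimizer state) resident at once, so the
# total quickly exceeds the usable VRAM the trainer reports for the 1050 Ti.
usable_vram_gb = 2.93
print(f"Usable VRAM reported in the Model Summary: {usable_vram_gb} GB")

In other words, a single layer's batch of activations already costs roughly 50 MB on its own, which suggests the card is simply running out of memory for this resolution and batch size.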