Posted on 2023-12-5 17:27:20
Training the heavyweight SAEHD model fails with both the default settings and the settings from the forum tutorial. What is going on? The error output:

Error: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
 [[node Pad_22 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 [[node concat_12 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:562) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 
 Caused by op 'Pad_22', defined at:
 File "threading.py", line 884, in _bootstrap
 File "threading.py", line 916, in _bootstrap_inner
 File "threading.py", line 864, in run
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
 debug=debug)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 199, in __init__
 self.on_initialize()
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 409, in on_initialize
 gpu_pred_src_src, gpu_pred_src_srcm = self.decoder_src(gpu_src_code)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
 return self.forward(*args, **kwargs)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 236, in forward
 self.out_conv3(x)), nn.conv2d_ch_axis), 2) )
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
 return self.forward(*args, **kwargs)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
 x = tf.pad (x, padding, mode='CONSTANT')
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2299, in pad
 result = gen_array_ops.pad(tensor, paddings, name=name)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5539, in pad
 "Pad", input=input, paddings=paddings, name=name)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
 op_def=op_def)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
 return func(*args, **kwargs)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
 op_def=op_def)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
 self._traceback = tf_stack.extract_stack()
 
 ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
 [[node Pad_22 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 [[node concat_12 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:562) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 
 Traceback (most recent call last):
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
 return fn(*args)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
 options, feed_dict, fetch_list, target_list, run_metadata)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
 run_metadata)
 tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
 [[{{node Pad_22}}]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 [[{{node concat_12}}]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 
 During handling of the above exception, another exception occurred:
 
 Traceback (most recent call last):
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\mainscripts\Trainer.py", line 131, in trainerThread
 iter, iter_time = model.train_one_iter()
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\ModelBase.py", line 480, in train_one_iter
 losses = self.onTrainOneIter()
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
 src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
 self.target_dstm_em:target_dstm_em,
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
 run_metadata_ptr)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
 feed_dict_tensor, options, run_metadata)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
 run_metadata)
 File "D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
 raise type(e)(node_def, op, message)
 tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
 [[node Pad_22 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 [[node concat_12 (defined at D:\BaiduNetdiskDownload\DFL_maozhihanhua_RTX2080Ti\_internal\DeepFaceLab\models\Model_SAEHD\Model.py:562) ]]
 Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 
 
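For what it's worth, the "Hint:" lines in the log describe a diagnostic, not a fix: passing `report_tensor_allocations_upon_oom` in a `RunOptions` makes TensorFlow dump the list of live tensors when the OOM happens, which helps confirm which part of the graph is eating VRAM. A minimal sketch of what the hint is asking for, assuming the TF 1.x API that DeepFaceLab bundles (the `sess.run` call shown in the comment is illustrative, not DeepFaceLab's actual code):

```python
import tensorflow as tf

# DeepFaceLab ships TF 1.x; under TF 2.x the same class lives in tf.compat.v1.
tf1 = getattr(getattr(tf, "compat", tf), "v1", tf)

# Ask the runtime to report allocated tensors when an OOM occurs,
# as the "Hint:" lines in the log suggest.
run_options = tf1.RunOptions(report_tensor_allocations_upon_oom=True)

# The options object would then be passed to each session.run() call, e.g.:
# sess.run(fetches, feed_dict=feed, options=run_options)
```

The actual out-of-memory condition itself is usually resolved by reducing VRAM pressure: a smaller batch size or resolution, or training the optimizer on CPU instead of GPU.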
 