Error: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Pad_18}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[{{node concat_5}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\mainscripts\Trainer.py", line 129, in trainerThread
iter, iter_time = model.train_one_iter()
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\ModelBase.py", line 474, in train_one_iter
losses = self.onTrainOneIter()
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 584, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_18 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_5 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py:563) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
Caused by op 'Pad_18', defined at:
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py", line 409, in on_initialize
gpu_pred_src_src, gpu_pred_src_srcm = self.decoder_src(gpu_src_code)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\archis\DeepFakeArchi.py", line 226, in forward
x = self.res2(x)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\archis\DeepFakeArchi.py", line 84, in forward
x = self.conv2(x)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py", line 87, in forward
x = tf.pad (x, padding, mode='CONSTANT')
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2299, in pad
result = gen_array_ops.pad(tensor, paddings, name=name)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 5539, in pad
"Pad", input=input, paddings=paddings, name=name)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "F:\DFL_UPTO_RTX2080Ti\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1408,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_18 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[node concat_5 (defined at F:\DFL_UPTO_RTX2080Ti\_internal\DFL\models\Model_SAEHD\Model.py:563) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
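For reference, the repeated "Hint" lines refer to the TF 1.x RunOptions flag report_tensor_allocations_upon_oom. The snippet below is only a minimal sketch of how that flag is passed to Session.run, using a toy graph (the names x and y are stand-ins, not DFL code); in DFL the run() call that raises here is the one reached from src_dst_train (Model_SAEHD\Model.py, line 584 in the traceback), so the options would have to be threaded into that call.

    import numpy as np
    import tensorflow as tf  # TF 1.x, as bundled in _internal\python-3.6.8

    # Ask TF to list the currently allocated tensors if an OOM occurs
    # during a run() call that receives these options (what the Hint suggests).
    run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

    # Toy graph standing in for the SAEHD training graph.
    x = tf.placeholder(tf.float32, shape=[None, 4], name='x')
    y = tf.reduce_sum(tf.square(x), name='y')

    with tf.Session() as sess:
        out = sess.run(y,
                       feed_dict={x: np.ones((2, 4), dtype=np.float32)},
                       options=run_options)  # pass the options to every run() you want covered
        print(out)

Note that this flag only makes the OOM report more detailed; it does not prevent the out-of-memory condition itself.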