deepfacelab中文网

Thread starter: 滚石

The most powerful image-to-video tool: FramePack Chinese all-in-one package, runs on 6 GB of VRAM


Posted on 2025-7-10 06:39:14
白雾青森 posted on 2025-7-10 06:37:
Boss, my GPU is a 1660. After installing the CUDA from the package, running FramePack blue-screened immediately.

After rebooting, running it again no longer blue-screens.

Posted on 2025-7-10 08:55:48
滚石 posted on 2025-4-22 12:52:
Try increasing the virtual memory.

How do I increase the virtual memory?

Posted on 2025-7-12 11:28:03
Can a 1660 Ti run it?

Posted on 2025-7-13 11:56:06
It cannot generate output normally and reports that the hardware is insufficient.
The log is as follows:
Currently enabled native sdp backends: ['flash', 'math', 'mem_efficient', 'cudnn']
Xformers is not installed!
Flash Attn is not installed!
Sage Attn is not installed!
Namespace(share=False, server='127.0.0.1', port=7869, inbrowser=True)
Free VRAM 7.0478515625 GB
High-VRAM Mode: False
Downloading shards: 100%|███████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 441.70it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  7.37it/s]
Fetching 3 files: 100%|██████████████████████████████████████████████████████████████████████████| 3/3 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 10.84it/s]
transformer.high_quality_fp32_output_for_inference = True
* Running on local URL:  http://127.0.0.1:7869

To create a public link, set `share=True` in `launch()`.
Unloaded DynamicSwap_LlamaModel as complete.
Unloaded CLIPTextModel as complete.
Unloaded SiglipVisionModel as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Unloaded DynamicSwap_HunyuanVideoTransformer3DModelPacked as complete.
Loaded CLIPTextModel to cuda:0 as complete.
Unloaded CLIPTextModel as complete.
Loaded AutoencoderKLHunyuanVideo to cuda:0 as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Loaded SiglipVisionModel to cuda:0 as complete.
latent_padding_size = 27, is_last_section = False
Unloaded SiglipVisionModel as complete.
Moving DynamicSwap_HunyuanVideoTransformer3DModelPacked to cuda:0 with preserved memory: 6 GB
  0%|                                                                                           | 0/25 [00:03<?, ?it/s]
Traceback (most recent call last):
  File "D:\framepack_cu126_torch26\webui\demo_gradio.py", line 241, in worker
    generated_latents = sample_hunyuan(
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\pipelines\k_diffusion_hunyuan.py", line 116, in sample_hunyuan
    results = sample_unipc(k_model, latents, sigmas, extra_args=sampler_kwargs, disable=False, callback=callback)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 141, in sample_unipc
    return FlowMatchUniPC(model, extra_args=extra_args, variant=variant).sample(noise, sigmas=sigmas, callback=callback, disable_pbar=disable)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 118, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 23, in model_fn
    return self.model(x, t, **self.extra_args)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\wrapper.py", line 37, in k_model
    pred_positive = transformer(hidden_states=hidden_states, timestep=timestep, return_dict=False, **extra_args['positive'])[0].float()
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 995, in forward
    hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 832, in gradient_checkpointing_method
    result = block(*args)
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 652, in forward
    attn_output, context_attn_output = self.attn(
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\framepack_cu126_torch26\system\python\lib\site-packages\diffusers\models\attention_processor.py", line 605, in forward
    return self.processor(
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 172, in __call__
    hidden_states = attn_varlen_func(query, key, value, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
  File "D:\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 122, in attn_varlen_func
    x = torch.nn.functional.scaled_dot_product_attention(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)).transpose(1, 2)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 31.25 GiB. GPU 0 has a total capacity of 8.00 GiB of which 3.92 GiB is free. Of the allocated memory 2.69 GiB is allocated by PyTorch, and 419.94 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/ ... vironment-variables)
Unloaded DynamicSwap_LlamaModel as complete.
Unloaded CLIPTextModel as complete.
Unloaded SiglipVisionModel as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Unloaded DynamicSwap_HunyuanVideoTransformer3DModelPacked as complete.
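A minimal sketch of the workaround that the error message itself suggests, assuming you can edit the top of demo_gradio.py (or set the variable in the console before launching). It only reduces fragmentation in the caching allocator; it cannot help if a step genuinely needs more VRAM than the card has:

import os

# Must run before torch makes its first CUDA allocation, e.g. at the very top
# of demo_gradio.py. Equivalent to running
#   set PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
# in the console before launching. It lets the caching allocator grow segments
# instead of fragmenting VRAM.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

Beyond that, the usual levers are lowering the resolution and raising the GPU memory preservation value in the WebUI, if your build exposes that setting.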

Posted on 2025-7-13 14:23:54
The software does work well, but the face drifts badly: 1-2 seconds in, it becomes a different person (face)...

Posted on 2025-7-17 14:02:46
Thanks for sharing.

Posted 5 days ago
I installed CUDA 12.9; can that version be used?
The cuDNN I installed also matches that CUDA version.
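Not an authoritative answer, but since the package in the logs above runs from framepack_cu126_torch26 with its own bundled python, it ships its own torch build, so the system-wide CUDA 12.9 install should largely be irrelevant as long as the NVIDIA driver is recent enough. A quick check, run with the package's bundled python:

import torch

# Bundled torch version and the CUDA toolkit it was compiled against;
# both come from the package itself, not from the CUDA installed system-wide.
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())  # False usually points to a driver issue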


Posted the day before yesterday at 09:11
This post was last edited by yersamdy on 2025-7-25 09:12.
fzddn1 posted on 2025-7-10 08:55:
How do I increase the virtual memory?


[Screenshot attached: 1.png]
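In case the screenshot does not display: the usual route on Windows 10/11 is to run sysdm.cpl, then Advanced > Performance Settings > Advanced > Virtual memory > Change, untick automatic management, set a sufficiently large custom size on a drive with plenty of free space, confirm, and reboot.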

Posted the day before yesterday at 19:51
I already typed the actions and expressions I want into the prompt, but the generated video still only shows simple hand and foot movements, nothing like what I described. Why is that?
