deepfacelab中文网 (deepfacelab Chinese forum)

Thread starter: 滚石

Best image-to-video tool: FramePack Chinese all-in-one package, runs on 6 GB VRAM

Posted 2025-5-13 01:32:07
Boss, could you set this up on a cloud server later? Last time I rented one, it took three hours just to finish downloading.

Posted 2025-5-13 15:13:41
This build has quite a few bugs: it runs out of memory, and it won't open a second time.


Posted 2025-5-13 20:20:16
Quoting ky8000 (2025-4-27 22:16): Could someone help me figure out what these two errors mean?

Has this been solved? I ran into it too.

Posted 2025-5-14 00:25:05
Can this swap in NSFW models?

Posted 2025-5-15 10:48:43
Quoting Gailgamesh (2025-4-23 00:41): What is this problem? After clicking continue, nothing happens at all.

Set virtual memory to 60 GB or more and it will work.
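The 60 GB virtual-memory advice above refers to the Windows pagefile. A minimal, hedged sketch (Windows-only; `commit_limit_gb` is a hypothetical helper name, and the structure layout follows the Win32 `GlobalMemoryStatusEx` API) to check whether your current commit limit (RAM + pagefile) is already at that level:

```python
import ctypes
import sys

class MEMORYSTATUSEX(ctypes.Structure):
    # Field layout per the Win32 MEMORYSTATUSEX structure.
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def commit_limit_gb():
    """Return total commit limit (physical RAM + pagefile) in GiB, or None off-Windows."""
    if sys.platform != "win32":
        return None
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(stat)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    return stat.ullTotalPageFile / 2**30

limit = commit_limit_gb()
if limit is not None and limit < 60:
    print(f"Commit limit is {limit:.1f} GiB; consider raising the pagefile to 60 GB+.")
```

The pagefile size itself is changed in System Properties → Advanced → Performance → Virtual memory; this snippet only reports whether the limit already meets the suggested threshold.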

Posted 2025-5-15 10:51:21 (last edited by platexyxy, 2025-5-15 10:52)
Quoting 小蛋筒 (2025-4-23 23:54): Mod, it seems a 2080 Ti 22G card can't run this?

There is a patch for 20-series cards. Apply it over the original release or 滚石's build and it will work, but keep 滚石's localized .py files:
https://github.com/freely-boss/FramePack-nv20


Posted 2025-5-16 13:12:53 (last edited by 可是雪, 2025-5-16 13:16)
Nice, really great.

Posted 2025-5-16 15:54:35
Quoting msshbah (2025-4-21 19:12): Boss, how do I fix this?

Same problem here, bumping this.


Posted 2025-5-17 10:02:44
Boss, I got an error, please help me take a look:

Currently enabled native sdp backends: ['flash', 'math', 'mem_efficient', 'cudnn']
Xformers is not installed!
Flash Attn is not installed!
Sage Attn is not installed!
Namespace(share=False, server='127.0.0.1', port=7869, inbrowser=True)
Free VRAM 5.013671875 GB
High-VRAM Mode: False
Downloading shards: 100%|██████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 2005.17it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:00<00:00,  6.43it/s]
Fetching 3 files: 100%|████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3010.99it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 18.24it/s]
transformer.high_quality_fp32_output_for_inference = True
* Running on local URL:  http://127.0.0.1:7869

To create a public link, set `share=True` in `launch()`.
Unloaded DynamicSwap_LlamaModel as complete.
Unloaded CLIPTextModel as complete.
Unloaded SiglipVisionModel as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Unloaded DynamicSwap_HunyuanVideoTransformer3DModelPacked as complete.
Loaded CLIPTextModel to cuda:0 as complete.
Unloaded CLIPTextModel as complete.
Loaded AutoencoderKLHunyuanVideo to cuda:0 as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Loaded SiglipVisionModel to cuda:0 as complete.
latent_padding_size = 27, is_last_section = False
Unloaded SiglipVisionModel as complete.
Moving DynamicSwap_HunyuanVideoTransformer3DModelPacked to cuda:0 with preserved memory: 6 GB
  0%|                                                                                           | 0/25 [00:10<?, ?it/s]
Traceback (most recent call last):
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\demo_gradio.py", line 241, in worker
    generated_latents = sample_hunyuan(
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\pipelines\k_diffusion_hunyuan.py", line 116, in sample_hunyuan
    results = sample_unipc(k_model, latents, sigmas, extra_args=sampler_kwargs, disable=False, callback=callback)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 141, in sample_unipc
    return FlowMatchUniPC(model, extra_args=extra_args, variant=variant).sample(noise, sigmas=sigmas, callback=callback, disable_pbar=disable)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 118, in sample
    model_prev_list = [self.model_fn(x, vec_t)]
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\uni_pc_fm.py", line 23, in model_fn
    return self.model(x, t, **self.extra_args)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\k_diffusion\wrapper.py", line 37, in k_model
    pred_positive = transformer(hidden_states=hidden_states, timestep=timestep, return_dict=False, **extra_args['positive'])[0].float()
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 973, in forward
    hidden_states, encoder_hidden_states = self.gradient_checkpointing_method(
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 832, in gradient_checkpointing_method
    result = block(*args)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 652, in forward
    attn_output, context_attn_output = self.attn(
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\system\python\lib\site-packages\diffusers\models\attention_processor.py", line 605, in forward
    return self.processor(
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 172, in __call__
    hidden_states = attn_varlen_func(query, key, value, cu_seqlens_q, cu_seqlens_kv, max_seqlen_q, max_seqlen_kv)
  File "G:\FramePack\framepack_cu126_torch26\framepack_cu126_torch26\webui\diffusers_helper\models\hunyuan_video_packed.py", line 122, in attn_varlen_func
    x = torch.nn.functional.scaled_dot_product_attention(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)).transpose(1, 2)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 31.32 GiB. GPU 0 has a total capacity of 6.00 GiB of which 1.72 GiB is free. Of the allocated memory 2.86 GiB is allocated by PyTorch, and 422.93 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/ ... vironment-variables)
Unloaded DynamicSwap_LlamaModel as complete.
Unloaded CLIPTextModel as complete.
Unloaded SiglipVisionModel as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Unloaded DynamicSwap_HunyuanVideoTransformer3DModelPacked as complete.
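The traceback above ends in a CUDA OOM inside `scaled_dot_product_attention`, and the error text itself suggests an allocator setting. A hedged sketch of applying it (the variable name and value come from the error message; it must be set before torch initializes CUDA, and it will not by itself make a 31 GiB allocation fit on a 6 GB card — lowering resolution/frame count is still needed):

```python
import os

# Enable expandable segments in PyTorch's CUDA caching allocator, as suggested
# by the OOM message, to reduce fragmentation-related failures. This must run
# before torch allocates any CUDA memory (i.e., before launching the webui).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# ...then import torch / start demo_gradio.py from this same process.
```

Alternatively, set the same variable in the launcher .bat before the Python call, so `demo_gradio.py` inherits it.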

Posted 2025-5-18 14:02:25
Quoting 4dashuaige (2025-4-24 08:40): 2080 Ti 22G generates a PNG image, but no video.

Did you ever solve it? I hit this problem too.

Retrieved GMT+8, 2025-6-7 15:51. Powered by Discuz! X3.4. Copyright © 2001-2020, Tencent Cloud.