Add Start and End Frame control, works great! #167
Conversation
nice work!
    
NICE
    
Amazing work! I have implemented it into my app and it's working amazingly. My app tutorial: https://youtu.be/HwMngohRmHg?si=ImYvFey-R030fbiM
250420_194342_833_7054_seed269740392_37.mp4
    
Interpolating between the start and end conditionings depending on the frame number could be cool as well (actually, such interpolation is a unique feature of next-frame-prediction models). It will require some caching, though, but maybe that's for the future :) Anyway, amazing work, works like a charm! 🚀
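The interpolation suggested above could be as simple as a linear blend whose weight depends on the frame index. A minimal sketch, assuming the conditionings can be blended elementwise (plain lists here; the real embeddings would be tensors, and `blend_conditionings` is a hypothetical helper, not FramePack's API):

```python
def blend_conditionings(start_cond, end_cond, frame_idx, total_frames):
    """Linearly interpolate between start- and end-image conditionings.

    Illustrative only: real code would operate on embedding tensors.
    """
    # blend weight: 0.0 at the first frame, 1.0 at the last
    t = frame_idx / max(total_frames - 1, 1)
    return [(1.0 - t) * s + t * e for s, e in zip(start_cond, end_cond)]
```

For example, halfway through a 5-frame clip (`frame_idx=2`), the result is the elementwise average of the two conditionings.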
    
I tried doing the same, and if the images are different enough the effect is bad. All segments but the first would animate the last frame, and the first segment would quickly turn from the first frame to the last. Maybe I did something wrong; can you try with a longer video, like 10-15 seconds?
    
It could have been great, but for now this only works for the first second (all the transformation happens in the first second and all the other seconds are just static).
    
I think maybe I should create a repo like
    
          
@lllyasviel - All issues and PRs here seem related to the software/studio. Maybe create a new repo for the research?
    
          
 That was my experience as well, I described my findings here: #32 (comment)  | 
    
          
I have seen you in many issues, and you are always trying to copy others' open-source ideas to make money. Don't you feel ashamed?
    
          
For long videos, you will need more middle frames or to change the schedule method. Still under study...
    
Nice work!! If we could also use ControlNet, this would make a difference in guiding from an existing video.
    
```python
total_generated_latent_frames = 0

latent_paddings = reversed(range(total_latent_sections))
# 将迭代器转换为列表 (convert the iterator to a list)
```
English comment would be great
中文评论才是精华 (The Chinese comments are the real essence)
Impressive 😁
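For context on the reviewed line: `reversed(range(...))` returns a one-shot iterator, so converting it to a list is what makes the padding schedule inspectable and reusable. A small self-contained illustration (the value `4` for `total_latent_sections` is just an example):

```python
total_latent_sections = 4  # example value

# reversed(...) yields a one-shot iterator: no len(), exhausted after one pass.
one_shot = reversed(range(total_latent_sections))

# Converting to a list allows len(), indexing, and repeated iteration,
# e.g. to inspect or edit the schedule before sampling.
latent_paddings = list(one_shot)
print(latent_paddings)  # [3, 2, 1, 0]
```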
it would be cool to set keyframes for any frame, not just start/end.
    
The results are excellent!
    
          
 If we have start and end frames when we can just split video generation in sections... IMHO the next step will be batch generation and joining.  | 
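The section-splitting idea above can be sketched as chaining runs, where each section's end frame becomes the next section's start frame. `generate_section(start, end)` is a hypothetical stand-in for one FramePack start/end-frame run returning a list of frames:

```python
def chain_sections(keyframes, generate_section):
    """Generate a long video as consecutive start/end-frame sections
    and join them. `generate_section` is a hypothetical callable
    standing in for one FramePack run."""
    frames = []
    for start, end in zip(keyframes, keyframes[1:]):
        clip = generate_section(start, end)
        if frames:
            # each section begins on the previous section's end frame,
            # so drop the duplicated boundary frame before joining
            clip = clip[1:]
        frames.extend(clip)
    return frames
```

With a dummy generator that returns only its endpoints, `chain_sections(["A", "B", "C"], lambda s, e: [s, e])` yields `["A", "B", "C"]` with no duplicated boundaries.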
    
          
 You are literally saying you implemented someone else's work and are essentially selling it. Have you no shame?  | 
    
Great job, @TTPlanetPig and hhy! It worked great for me. One issue I had was that the output videos were smaller than the i2v default output files with the same settings. I turned the MPEG compression down to zero, which made me feel a bit better. Thank you, @lllyasviel, for FramePack!
    
Sorry, I'm new here. How do I implement this first/last frame feature into my already existing FramePack?
    
My first-and-last-frame sampling stops after 2-3 runs, and no video is generated. Has anyone had the same problem? How can I solve it?
    
That's great! It would be perfect if it supported 20-series graphics cards.
    
How do I implement this? I do git pull, but it says already up to date.
    
          
 You have to pull this specific PR 167, something like this over your local pull:  | 
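The usual way to test an unmerged PR locally (a hedged sketch; `pr-167` is an arbitrary local branch name I chose, and `origin` is assumed to point at lllyasviel/FramePack):

```shell
# GitHub exposes every PR's head commit at refs/pull/<id>/head,
# so a plain `git pull` on main will not pick it up.
git fetch origin pull/167/head:pr-167   # fetch PR #167 into a local branch
git checkout pr-167                     # switch to that branch
# or, to apply it on top of your current branch instead:
# git merge pr-167
```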
    
Used some code from those PRs, thanks to their respective authors: lllyasviel#167 lllyasviel#178 lllyasviel#158
I don't know Python, but I can see this opens the door to slow motion and frame interpolation (in-between frames), converting 30 fps video to 60 (or higher).
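In its simplest form, the frame interpolation mentioned above just inserts a synthetic frame between each pair of neighbours. A purely illustrative sketch (real interpolators such as RIFE use learned motion estimation, not pixel averaging; frames here are flat lists of pixel values):

```python
def double_framerate(frames):
    """Naive 30->60 fps doubling by averaging neighbouring frames.

    Illustrative only: production frame interpolation uses learned
    motion models rather than a plain pixel average.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # synthetic in-between frame: elementwise midpoint of the pair
        out.append([(x + y) / 2.0 for x, y in zip(a, b)])
    out.append(frames[-1])
    return out
```

A clip of N frames becomes 2N-1 frames, with every original frame preserved in order.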
    
Has anyone succeeded in using video frames as additional start and end frames? If possible, FramePack could be used for video inpainting with motion trajectories intact.
    
* add transformer
* add pipeline
* fixes
* make fix-copies
* update
* add flux mu shift
* update example snippet
* debug
* cleanup
* batch_size=1 optimization
* add pipeline test
* fix for model cpu offloading
* add last_image support; credits: lllyasviel/FramePack#167
* update example with flf2v
* update penguin url
* fix test
* address review comment: #11428 (comment)
* address review comment: #11428 (comment)
* Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video_framepack.py

Co-authored-by: Linoy Tsaban <[email protected]>
Is there an F1 version?
    
Tried this out, and the video started immediately with the end frame?
    
Works fantastic! Thanks! I would have loved to integrate the resolution slider here, but I have no idea how.
    
          
 Maybe you can have a look at my code? I've integrated some of the most commonly used functions.  | 
    
          
Thanks, but my Chinese is just as bad as my programming skills. Do you have a step-by-step guide?
    
          
 I've already created a bilingual version of the README.  | 
    
Hello everyone, I need help please. I have used the install package of the original FramePack, which works great, as well as the F1 add-on.

"File "C:\framepack_cu126_torch26\webui\demo_gradio_f1_video.py", line 16, in"

I then used pip install decord within the main directory where FramePack was installed and ran into this error:

"PS C:\framepack_cu126_torch26> pip install decord"

Can someone please guide me out of this situation?
    
          
 Hi  | 
    






By simply replacing the last predicted image with the inserted end image, we can now easily control the video toward what we want. The pincer attack from TeneT is now complete.
Thanks to my friend "hhy", who helped me with the coding, as I only had the idea; he is much better at coding!
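The core trick described above can be sketched in a few lines. This is a hypothetical illustration, not FramePack's actual API: frames stand in for latents, and `inject_end_frame` is an invented name. Because FramePack samples sections back-to-front (anti-drifting), pinning the final frame to a real image makes the model generate backwards toward it:

```python
def inject_end_frame(predicted_frames, end_frame):
    """Replace the last predicted frame with the user-supplied end image.

    Illustrative sketch of the PR's idea; real code would swap latent
    tensors inside the sampling loop, not list elements.
    """
    predicted_frames = list(predicted_frames)  # avoid mutating the caller's list
    predicted_frames[-1] = end_frame
    return predicted_frames
```

All earlier frames are left untouched; only the anchor at the end changes, and the backward sampling order propagates its influence through the rest of the clip.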
4.20.-1.mp4
250420_204753_497_5473_10.mp4