Conversation

@yiyixuxu (Collaborator) commented Jul 5, 2024

Part of #8796.

You can use the snippet below to test once #8798 is fixed:

from diffusers import LavenderFlowPipeline, LavenderFlowTransformer2DModel
import torch

repo = "AuraDiffusion/auradiffusion-v0.1a0"
dtype = torch.float16

transformer = LavenderFlowTransformer2DModel.from_pretrained(
    repo,
    revision="refs/pr/1",
    subfolder="transformer",
    torch_dtype=dtype,
)

pipeline = LavenderFlowPipeline.from_pretrained(
    repo,
    revision="refs/pr/1",
    transformer=transformer,
    torch_dtype=dtype,
).to("cuda")

image = pipeline(
    prompt="a cute cat with tiger like looks",
    height=512,
    width=512,
    num_inference_steps=50,
    generator=torch.Generator().manual_seed(666),
    guidance_scale=3.5,
).images[0]
image.save("yiyi_test_4_out.png")

@yiyixuxu yiyixuxu requested a review from sayakpaul July 5, 2024 23:47
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@@ -428,13 +429,14 @@ def __call__(
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)

# 4. Prepare timesteps

sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)
@yiyixuxu (Collaborator, Author) commented Jul 5, 2024

I added this just to get 1:1 with the original implementation. I don't think it's needed here; the difference should be very small, but I didn't test with smaller numbers of steps.
cc @cloneofsimo

without this line: [image: yiyi_test_4_out]

with this line: [image: yiyi_test_4_out_yiyi]
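For reference, here is a standalone sketch (not part of the PR) of what the linear sigma schedule in the diff evaluates to, assuming the snippet's 50 inference steps:

```python
import numpy as np

num_inference_steps = 50

# Linear schedule from sigma = 1.0 down to sigma = 1 / num_inference_steps,
# mirroring the line added in the diff above.
sigmas = np.linspace(1.0, 1 / num_inference_steps, num_inference_steps)

print(sigmas[0])   # 1.0
print(sigmas[-1])  # 0.02
print(len(sigmas)) # 50
```

The schedule is strictly decreasing, which is why the visual difference against the default spacing is expected to be small at 50 steps but may grow for shorter runs.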

@sayakpaul (Member) left a comment:

Thanks. Just a single comment.

Comment on lines 459 to 461
torch.tensor([t / 1000])
.expand(latent_model_input.shape[0])
.to(latents.device, dtype=latents.dtype)
Member commented:

We could unpack this code a bit and perhaps add a comment on why we're dividing by 1000. This is mainly because we're moving away a bit from our normal pipeline implementations, which our users are used to reading and referring to.
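One possible unpacked form along the lines of this suggestion (a sketch only; `latents`, `latent_model_input`, and `t` are hypothetical stand-ins for the pipeline's local state, and the comment about the 1000 divisor is an assumption based on the scheduler's 0-1000 timestep range):

```python
import torch

# Hypothetical stand-ins for the pipeline's locals: a CFG-doubled batch of
# latents and the current scheduler timestep (not the real pipeline code).
latents = torch.randn(2, 4, 64, 64).half()
latent_model_input = latents
t = 500.0  # scheduler timesteps run on a 0-1000 scale

# Divide by 1000 to map the 0-1000 timestep onto the model's [0, 1] range.
timestep = torch.tensor([t / 1000])

# Broadcast the scalar timestep across the batch dimension.
timestep = timestep.expand(latent_model_input.shape[0])

# Match the latents' device and dtype before calling the transformer.
timestep = timestep.to(latents.device, dtype=latents.dtype)
```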

Collaborator (Author) replied:

done! feel free to merge into your PR now

@sayakpaul sayakpaul merged commit f23151b into lavender-flow Jul 7, 2024
@sayakpaul (Member) commented:

Thanks a lot for your help, Yiyi!

@sayakpaul sayakpaul deleted the lavender-flow-yiyi-test branch July 7, 2024 19:27
@sayakpaul sayakpaul mentioned this pull request Jul 10, 2024