Feature Idea

Currently we can use `<break>` to concatenate the prompt at a specific place. However, any large-scale finetune done on OneTrainer or SimpleTuner instead of Kohya will work much better with conditioning averaging than with conditioning concatenation (due to those trainers' strict enforcement of the token limit).
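To make the difference concrete, here is a minimal sketch of the two operations on already-encoded conditioning tensors. The shapes assume a CLIP-style text encoder (`[batch, tokens, dim]`), and all names are illustrative, not the actual internals:

```python
import torch

# Two prompt chunks encoded separately (placeholder values; shapes
# assume a CLIP-style encoder producing [batch, tokens, dim]).
cond_a = torch.randn(1, 77, 768)
cond_b = torch.randn(1, 77, 768)

# Concatenation (current <break> behavior): chunks are stacked along
# the token axis, so the model sees a sequence longer than the token
# limit that was enforced during training.
cond_concat = torch.cat([cond_a, cond_b], dim=1)  # [1, 154, 768]

# Averaging (proposed <break:average>): chunks are blended element-wise,
# so the sequence length stays within the trained limit.
strength = 0.5
cond_avg = strength * cond_a + (1.0 - strength) * cond_b  # [1, 77, 768]
```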
Feature Request:
Add optional conditioning modes to the `<break>` tag, such as `<break:average>` (with `<break:a>` as a shorthand), defaulting to 0.5 strength, while `<break:a:0.3>` lets you control the strength of the averaging right in the prompt. This is especially useful for models trained with quality tags, like Pony, Illustrious, and Animagine, where a non-0.5 averaging strength significantly increases prompt adherence when combined with any LoRA/finetune trained on OneTrainer or SimpleTuner. (A rough sketch of the parsing and blending follows below.)
For the sake of completeness, adding `<break:combine>` would make sense as well, though I'd advise against using it.
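Here is a rough sketch of how the tag could be parsed and applied. Everything in it is hypothetical (the regex, the function names, the tensor shapes); it is meant only to pin down the proposed semantics, not to reflect the existing codebase:

```python
import re
import torch

# Hypothetical grammar for the extended tag: "<break>" keeps today's
# concat behavior, "<break:average>" / "<break:a>" average at 0.5,
# and "<break:a:0.3>" averages with an explicit strength.
TAG = re.compile(
    r"<break(?::(average|a|combine|c))?(?::([0-9]*\.?[0-9]+))?>",
    re.IGNORECASE,
)

def split_prompt(prompt: str):
    """Split a prompt into text chunks plus the (mode, strength) between them."""
    parts = TAG.split(prompt)
    chunks = parts[0::3]
    modes = [
        ((mode or "concat").lower(), float(strength) if strength else 0.5)
        for mode, strength in zip(parts[1::3], parts[2::3])
    ]
    return chunks, modes

def join_conditionings(cond_a, cond_b, mode, strength):
    """Blend two already-encoded conditionings according to the tag."""
    if mode in ("average", "a"):
        # Weighted element-wise average; sequence length is unchanged.
        return strength * cond_a + (1.0 - strength) * cond_b
    # "combine" would need sampler-level support (both conditionings run
    # through the model separately), so this sketch only covers the
    # tensor-level modes and falls back to today's concat.
    return torch.cat([cond_a, cond_b], dim=1)

# Example: one averaged break at 0.3 strength.
chunks, modes = split_prompt("masterpiece, best quality <break:a:0.3> 1girl, red hair")
# chunks == ["masterpiece, best quality ", " 1girl, red hair"]
# modes  == [("a", 0.3)]
```

One design choice an implementation would need to pin down is which side of the break the strength applies to; the sketch above weights the chunk before the tag.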
PS: If implemented, I'll add a wiki page with examples detailing when to use which conditioning mode, showing results from different models, since this is hard to conceptualize without examples and there is no established "best practice".