Generated images may be sold or used for commercial purposes.
Resale of this model, or sale after merging it with other models, is permitted.
Model Parameters:
Base Model: SD 1.5
Epochs: 0
Steps: 0
Clip Skip: 0
Hitomi T - Version 1.10:
Trigger Word: 8hit8
Rename 8hit8.pt.zip to 8hit8.pt and place in the embeddings directory
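The rename-and-move install step can be sketched as a few lines of Python. This is illustrative only: a placeholder file stands in for the real download, and the embeddings path is an example; point it at your own webui install (typically stable-diffusion-webui/embeddings).

```python
from pathlib import Path

# Sketch of the install step, using a placeholder in place of the real download.
downloaded = Path("8hit8.pt.zip")
downloaded.write_bytes(b"placeholder")  # stands in for the downloaded file

embeddings_dir = Path("embeddings")  # example path; adjust to your install
embeddings_dir.mkdir(exist_ok=True)

# The "archive" is really the .pt file itself, so a rename is all that's needed.
downloaded.rename(embeddings_dir / "8hit8.pt")
print((embeddings_dir / "8hit8.pt").exists())  # → True
```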
The new version of my Hitomi T textual inversion embedding is finally out! It solves some of the issues with my last embedding, but does admittedly have some of its own, as noted below.
I'm still new at this and I'm always learning, so feel free to drop me some tips and tricks. I suspect that the issues I'm having with getting a consistent likeness during generations are related to my dataset somehow.
Note: Some of the preview images use a different trigger word for the embedding. This is not because it's a different version of the embedding, but because I was changing the embedding name as I was training and testing. In order to replicate the previews, you will need to change the trigger to 8hit8 in the prompt.
Improvements Over Version 1.0:
When the embedding produces good results, the likeness is much closer.
Results seem to default to a more photorealistic style; that is, they are more detailed regardless of other style prompting. So, if you prompt for a fantasy-style painting, it will still look like a painting in that style, but with finer detail than if you had prompted with just "woman."
Background elements from the dataset, such as desert plants and stone walls, no longer subtly influence generations.
It's much easier to change things like clothing, poses, and hair styles on the subject.
It seems to be easier to get the subject to keep their clothing on.
Challenges and Caveats:
For whatever reason, the embedding works worse for realistic generations and on the base SD1.5 model. Other models and artistic styles produce better results.
The subject's assets are less distorted than in Version 1.0, but their size isn't as true to life. You might need to prompt for those assets to get them to an appropriate size. This isn't a major issue, but it still crops up.
The likeness blends very easily if you prompt for other people or types of people, e.g., "basketball player." Facial features will blend and resemble the subject less.
As such, you may have to adjust the weight of the trigger word to maintain a good likeness, more so than in Version 1.0.
This embedding still favors lengthier and more complex prompts.
There seem to be more frequent issues with hands, specifically fingernails. I would recommend using negative prompts and embeddings to correct hand issues.
You still need to use negative prompts, such as "naked," "topless," and "nudity," to get the subject to keep their clothing on.
For further tips & tricks, see the PNG info in the sample images. Yes, my prompting style is weird and complicated, but it works, right?
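The weighting and negative-prompt advice above can be sketched as a small prompt-building snippet. This is illustrative only: the (token:weight) form is AUTOMATIC1111-style attention weighting, the weight value is a starting point to tune, and the hand-related negative terms are example additions beyond those named in the notes.

```python
# Build a prompt with a weighted trigger word and a negative prompt.
trigger = "8hit8"
weight = 1.2  # raise if the likeness drifts, lower if the face distorts

prompt = f"photo of ({trigger}:{weight}), detailed face, elegant dress"
negative = ", ".join(["naked", "topless", "nudity", "bad hands", "bad fingernails"])

print(prompt)    # → photo of (8hit8:1.2), detailed face, elegant dress
print(negative)  # → naked, topless, nudity, bad hands, bad fingernails
```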
Sample Generations:
These sample generations were done in Galaxy Time Machine Photo for You. I didn't use HiRes Fix, ControlNet, Img2Img, Lora, Lycoris, or Negative Embeddings on any of these generations. I did use Face Restoration on a few.
The Future:
I would like to improve the consistency of generations with a good likeness, but I've hit a bit of a wall. If anyone has any tips to resolve this issue, please let me know. I'm still a beginner and have lots of room for improvement!