Google Collab – Prompt Muse
https://promptmuse.com – A.I Tutorials, News, Reviews and Community
Fri, 07 Apr 2023 10:18:13 +0000

How to Create an AI clone of yourself or Pet
https://promptmuse.com/how-to-create-an-ai-clone-of-yourself-or-pet/
Tue, 20 Dec 2022 22:06:57 +0000

How to Train a Model in Openart AI

Welcome to this tutorial blog about Openart AI, an amazing app that allows you to create avatars of yourself and generate high-quality artwork. With Openart AI, you can quickly create stunning artwork without any prompting, or you can use the app's prompts feature to customize your results even further. What's more, once you finish training your model, you'll be given a .ckpt checkpoint file that can be used with Stable Diffusion and other AI platforms.

In this tutorial blog, we'll walk through the process of using Openart AI to train a model. We'll look at what you need – such as 15-30 photos of your subject – and how to set up your order. We'll also cover how to use the Photo Booth to upload and organize your images and how to create presets for your model. Finally, we'll explain how to use the Upscale and Enhance Face feature so that you get the best results from your trained models. So let's get started!

Step By Step Guide

Step 1: Gather 15-30 Photos of the Subject: To get the best results, you will need to gather 15-30 photos of the subject that you want to train. Ensure that you have permission to use these photos and that they are only of the subject, not any other people or pets.

Step 2: Head over to the Openart AI website and select what type of model you would like to train. Options include person, pet, character, art style, object or other.

Step 3: Select the preset images if you don't have any of your own. Otherwise, select My AI Models and upload 15 to 30 photos of the subject you want to train. Make sure you have permission for these images!

Step 4: Enter Model Information: After selecting the preset, you will enter the model information such as the name of your training model, identifier (name that represents your character/person/animal) and class (dog, cat, horse, etc.).

Step 5: Give your model a name and class (e.g. if training a pet, the class could be dog).

Step 6: Drag and drop your images into the training area, making sure your subject (pet, person etc.) is centred in each frame.

Step 7: Once training is finished, click on your model to create prompts or preview all your presets. You can also click on any image to upscale and enhance it before downloading it.

Step 8: Congrats! You now have a trained model that can be used for art projects or other purposes.

FAQ

Q: What is Openart AI?
A: Openart AI is an app that allows users to create avatars of themselves and generate high-quality artwork with no prompting required. The app also provides a trained model which can be used with prompts to customize the artwork. Additionally, the app offers a revenue share for presets that users create.

Q: How do I use Openart AI?
A: To use Openart AI, first select the type of artwork you would like to create (person, pet, character, art style, object or other). You will then need to upload 15-30 photos of the subject you would like to train. Once your payment has been processed, you will be taken to the Photo Booth where you can name your training model and enter an identifier for your character/subject. Finally, you can move the images around until the subject is centered in each frame and click Download once you are satisfied with the image quality.

Q: Are there any additional features in Openart AI?
A: Yes! Openart AI offers an Upscale and Enhance Face feature which can be used to improve the resolution of a face in an image or to upscale an image in general.

Q: What does the Openart AI platform include?
A: Openart AI provides a platform to develop models and create avatars with prompts. The platform includes a Photo Booth, a library of presets and prompts, and access to trained models.

Q: What are the benefits of using Openart AI?
A: Openart AI offers fast, high-quality artwork generation without any prompting. It also offers revenue sharing for presets created by users, as well as checkpoint files to take into Stable Diffusion. Additionally, it has a clean and easy-to-use interface that makes training models simple and efficient.

Q: How do I train a model using Openart AI?
A: To train a model, you will need 15-30 photos of the subject you wish to train. Then, go to the relevant section on the Openart AI Photo Booth page (the Pet section in this tutorial), select a preset (the first one is free), confirm your selection, and make payment. After that, enter your model name and identifier, and upload your photos into the training folder. Once training is complete, you can download images and create additional prompts if desired.

Q: How do I use Openart AI?
A: To use Openart AI, you first select the type of model you would like to create (e.g. person, pet, character, art style, object or other). Then you will need to upload 15-30 photos of the subject (with permission) for the model to train on. Finally, you can then create additional characters with presets or via prompting.

Q: Is there a cost associated with using Openart AI?
A: Yes, there is a one-off cost associated with using the program depending on your selection. There are also additional credits available for creating additional prompts.

Q: Can I make money from Openart AI?
A: Yes, if you create particularly great presets, you can make money from royalties by having your preset featured on the Openart AI website.

TRANSCRIPT

Now, I just want to tackle this head-on before we get into the video. Welcome back to another Prompt Muse video. In my comments section there is a vicious rumour: that I am, in fact, a Stability AI Stable Diffusion beta, version four. This is simply not true.

And I want to prove it to you by first showing my hands, and second, by showing you my Instagram. Just a normal person who takes normal pictures of myself and puts them on Instagram, because that's what normal people do. I like this one because it shows emotion. Complete lie. This is a fake Instagram account I created using a model.


Now, I don't suggest anybody does that at all, but I just wanted to use it as an example for you guys to see what you can do within minutes. Like and subscribe. You may have seen my previous videos where I trained models in DreamBooth; I'm actually going to be showing you how to train them easily without going into Google Colab, and without a superior computer with all the correct Nvidia cards or enough VRAM. Some people don't want to have to connect to remote GPUs and do all that. They just want to click a button and have their images.

I've been approached by quite a few companies that are AI generators to promote their product, and I have said no. But funnily enough, one of the programs that I tried and really liked actually approached me to do a review on their app, which is amazing, because they are actually very different from all the AI generators. They give you an AI-trained model at the end of it, a checkpoint file that you can then take into Stable Diffusion. So for those in the comments saying that it's too complicated for them or it's not working, this is an alternative path that you can take. So the app I'm showing you today is created by Openart AI.


So just head over to their website to check them out. Essentially, it's a program where you can create avatars of yourself like this and like this. But not only that, it provides a trained model which you can then use prompts with – prompts are simply just text to tell the model what you want it to do, look like or be stylised like. And what sets this app apart from all the others, and why I decided to do this video, is that it gives you a .ckpt checkpoint file at the end, which none of the others, as far as I'm aware, actually do. You have to understand there are people who do not have time to install and run locally on their machine, and this could be the app for them.

To train a model, all you need is 15 to 30 photos of the subject that you want to train. And I'm going to put a disclaimer here: you are not entitled to create a model from someone's photographs without their explicit permission. And the same goes for art styles. Use your own photographs that you have permission for, and your own art styles.

So the presets within Openart AI allow you to quickly generate high-quality artwork without any prompting, which means you can just buy the package that you want and it will generate those images in that style automatically, like this to this, which is pretty cool and very, very fast, with zero prompting. And once you buy a package, you can then create additional characters on top of that. What's more is that you still have the ability to prompt it yourself if you want to. And if you are very, very good at prompting, Openart AI offer you a revenue share of presets that you create. So if you create something that is spectacularly good, they'll put it on their website and offer it to others.


So you can also make money from this app, which is very cool. If you head over to the Openart AI website, you can see it's got a really clean and easy-to-use interface. On the front page, you've got the discovery community page, which pulls through from Stable Diffusion, DALL·E 2 and Midjourney. You can switch between tabs to see what people are creating and also see their prompts.

So if you want a bit of inspiration for your prompt engineering, they've got a Stable Diffusion Prompt Book. That is a really nice PDF that's been put together really nicely, and it's worth having a look at if you're new to prompting, because it gives you ideas and suggestions on how to structure your prompt. To get the best results, go to the Openart Photo Booth. This is where the training happens.

My AI Models is where your AI models will live. You can see that I trained five models prior to actually making this video, because I really wanted to test it out. Then you've got six options here: person, pet, character, art style, object or other. Today we are going to be training a pet.


Now, I don't actually have any pets, so I borrowed my friend's dog. If I click on Pets there, you can actually purchase preset images. You have an option to select a preset, and your first preset is absolutely free. This is perfect for somebody who just wants to go and create prints and designs of their pets and not have to worry about prompting. But you still have the ability to prompt, even if you buy a preset, on the top tab.

Because I am training a dog, I'm going to go to the Pet section and I'm going to select the Pets Movie/TV Character preset. Click on the Select button and it will show it selected, then click Confirm. And that's now added to my order. So the $10 is to train the model, and I got a free preset. I'm going to click Continue there, it's going to process my order for me, and then you're taken to your payment area. What I really like about this is that it isn't subscription based, because I forget about subscriptions and end up with a huge bill.

This is just a one-off payment for what you use. You don't have to pay to download the images afterwards, and you don't have to pay for the checkpoint model either. Afterwards you get 400 credits with this, in order to create additional prompts if you want to. Once your payment has successfully gone through, you're then taken to the Photo Booth, where you're going to put the name of your training model – I'm going to call it Ringo the Dog. And then you're going to put your identifier: the name that represents your character, your person or your animal.

And mine's going to be Ringo, because that's his name. And then the class: this is where you identify if it's a dog, a cat, a horse, an alligator, whatever you're doing. We're going to put dog as the class. The image guidelines are really helpful for understanding what you need to train your model with. You need at least 15 to 30 photos of your subject: five close-up portrait photos of your pet's face, five portrait photos of your pet's half body or front, and at least five photos of your pet's full body.

So the pet should be in the centre of the photo, there should be no other pets or people in your photo, and your model will be deleted in one month for privacy reasons. So for those who are worried about privacy in AI, that one's for you, and you can always read the privacy statement as well. We can then drag and drop our images into our training folder. This is my folder for Ringo, so I'm just going to grab his images and upload them in. It's so easy and simple, and I uploaded 21 images. Once you've uploaded the images, you can simply move them around until Ringo, or your pet, is in the centre of each frame. Once that's finished training, it will say Ready, and you can simply click into your model to create prompts or just preview all your presets. If you come down further, you can see the library of your images has been organised into their presets and prompts, so you can simply click on the image that you like and then click Download from this button here.


So if the face is slightly off or the resolution isn't as high as you would want, you can click on Upscale and Enhance Face. By selecting these options it will either upscale the image or upscale the face; particularly on humans, you might want to upscale the face just to improve the image quality. Once you're happy, you simply click on this icon and it will download the image to your computer. It's that easy to use. I just want to show you a couple of my favourite images that I got from Openart AI. I pinned the original photo of Ringo in the top left-hand corner so you can compare what he actually looks like to his model, and I think it got him perfectly. And they say AI is soulless.

I don't think that's the case anymore. And then the next lot is fan art, with Ringo in Superman poses, as Indiana Jones, and I think that is Wonder Woman on the end there. You can do all different art styles of your dog as well if you're looking to create a print. So whether you use Openart AI is up to you, and I hope this video gives you an idea of what is currently on the market. If you like these videos, let me know. I do think using models, going into the future, is going to be the way that artists, models, photographers and designers can monetise.


They create a model of their designs, or their photography, or themselves, and then they can license those models out to corporate companies. This product is primarily for people who do not want to run Stable Diffusion on their local computer, or can't because of the hardware, or don't want to have to connect to remote PCs. It's basically a WYSIWYG inside a browser, and you can keep all your models contained. Obviously, with that you're going to have to pay $10 to train your model.

So I'd love to know what you guys think in the comments. I do bring you this sort of resource as well, so I make it fair and easy for everybody to use AI. In my next video I'm going to be showing you how to create a 3D character from a trained model, so that's exciting stuff. You will need a model for that tutorial; whether you make it in DreamBooth or Openart AI or anything else doesn't matter – as long as you've got a .ckpt file, we can go ahead with that tutorial. I hope you have a joyful Christmas and a happy New Year from me. That will do it.

Goodbye.

TL;DR

Openart AI is an app that allows users to quickly create avatars of themselves or other subjects with high-quality artwork. It also provides a trained model which can be used to prompt the creation of images. Openart AI offers a revenue share for users who create presets that are especially good, and users can purchase pre-made presets for their own use. The app has a clean and easy to use interface, and the Stable Diffusion Prompt Book provides ideas and suggestions on how to create prompts. After selecting a subject and uploading photos, users can click on their model to create prompts or preview presets. The image guidelines recommend at least 15-30 photos of the subject, with five close up portrait photos of their face, five portrait photos of their half body or front, and five full body images. Users have the option to upscale or enhance faces if needed, and can download images once they are satisfied. Openart AI is a great tool for creating avatars and artwork quickly and easily.

Consistent AI Characters in any pose: Tutorial
https://promptmuse.com/consistent-ai-characters-in-any-pose-tutorial/
Tue, 29 Nov 2022 16:50:49 +0000

In this article, I'm going to show you how to make a consistent character, not from photos but from scratch. I will show you how to create a character in Midjourney and go from one render to animation, and finally a trained model, which can then be posed and placed in any environment.

Workflow for how we will create a consistent character

Creating your character

Step one is creating the face of our character. So the first thing I’m going to do is head over to Midjourney. Here we will just create a front perspective of our character, showing their features clearly.

Like any good design, it's good to have a plan and an idea of what you would like the overall style of the character to be, to suit your narrative. I want to start with the front perspective of my character; this is important as we will be animating her, so we will need a good clear image of her features.

My prompt is:

/imagine Head and shoulders shot of Instagram model, with orange long hair, hyper-detailed.


Midjourney will give me a choice of 4 images to select from. I really like variation 3 (V3). I select U3, which will upscale my chosen image. Now that I have the front perspective of my character, I click on the image, right-click and save it onto my PC. We are now done with Midjourney.

Consistent facial perspectives

I will now concentrate on getting multiple frames of my character, which will enable me to build a model and in turn prompt her into any situation, environment and pose. An AI model is a collection of images of a subject that is then trained by AI.

I create an MP4 simply by recording myself on my phone, making an array of emotions and ensuring I keep my face steady. The dimensions of this video are 410×412 px. I also take my Midjourney image and save that at 410×412 px.

It's important that I create multiple images of my character with an array of emotions to feed into the training dataset. To do this I will take the MP4 I created and essentially skin it with the source.png within the Thin-Plate Spline Motion Model.

I name the video driving.mp4 and the image source.png and upload them to my Google Drive (top level, not in a folder), so I can easily load them into the Thin-Plate Spline Motion Model Google Colab notebook.
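If you want to match the source image to the driving video's dimensions programmatically rather than in an image editor, a minimal sketch with Pillow might look like this (the 410×412 size is the one used in this tutorial; the file names are assumptions):

```python
from PIL import Image

# The tutorial uses 410x412 px for both the driving video and the source image;
# the exact size matters less than both files matching.
TARGET_SIZE = (410, 412)  # (width, height)

def prepare_source(path_in: str, path_out: str) -> Image.Image:
    """Resize the Midjourney render so it matches the driving video's frame size."""
    img = Image.open(path_in).convert("RGB")
    img = img.resize(TARGET_SIZE, Image.LANCZOS)
    img.save(path_out)
    return img
```

For example, `prepare_source("midjourney_render.png", "source.png")` writes the correctly sized source.png ready for upload to Google Drive.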

Saving the MP4 and PNG into my Google Drive

Thin-Plate Spline Motion Model

I open the Thin-Plate Spline Motion Model Google Colab notebook:

https://colab.research.google.com/drive/11pf0SkMIhz-d5Lo-m7XakXrgVHhycWg6?usp=sharing

I run each cell in the notebook, ensuring I have a green tick before I move on to the next cell.

It is vital that you mount your Google Drive (this is just a fancy way of saying connecting), as this is where we are pulling our MP4 and source images from. To do this, simply select the folder icon and then click on the Google Drive folder to add the mounted drive to your notebook. Run this cell and log into your Google Drive.

Step 1: Setup


Step 2: Settings: Edit your source path and driving video path to match the correct paths on your Google Drive.

You do this by locating your Google Drive in the left-hand folder hierarchy: content > drive > MyDrive. Find your driving.mp4 video, click on the 3 dots next to it and select Copy path. Now simply paste that path into the correct field within Step 2. Once completed, run this cell (click play!).
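Mounting the drive and the resulting paths can be sketched as follows. This is a hedged sketch: the mount call only works inside a Colab runtime, and the file names assume you kept driving.mp4 and source.png at the top level of My Drive as recommended above.

```python
from pathlib import PurePosixPath

# In a Colab runtime you would first mount the drive (Colab-only import):
#   from google.colab import drive
#   drive.mount('/content/drive')

def drive_paths(drive_root: str = "/content/drive/MyDrive"):
    """Return (source_image_path, driving_video_path) for files kept at the
    top level of My Drive, as the tutorial recommends."""
    root = PurePosixPath(drive_root)
    return str(root / "source.png"), str(root / "driving.mp4")

source_image_path, driving_video_path = drive_paths()
```

These are the two strings you paste into the Step 2 settings cell.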


Okay, so here's the magic. You can now see the source image, the driving video and your final result. And you can see the face goes a bit weird when you turn too fast, so don't do that in your video! You can turn to a certain degree, but it starts screwing up when the head goes to the side.


Step 3: Run Thin-Plate-Spline-Motion-Model

Once the previous step has given you a green tick, proceed to run the Step 3 cell (no additional work required here!). This will automatically create another folder in our folder structure on the right-hand side. Once this video has finished being upscaled, we're going to run it through something called GFPGAN, which you might have heard of before. It's a facial restorer.

So we’re going to then split it into frames and then run it through there. And it should make the face a lot nicer, but it sounds like hard work. It isn’t. We’re just going to press Play on this and it’s going to do it for us. So we’re onto step five.

Step five is now going to divide your video up into frames, which will allow us to run each frame through the GFPGAN facial restorer. So I'm just going to hit play on step five, and you know what to do by now. We're just going to wait for that to split the files. You can see here, in real time, it's creating those files in the frames folder, and you can see all those files being saved there.

If you want to save an image or a video, you just go to these three little dots here and click on Download, and that will download it onto your computer. So that has now finished while I've been talking. If we go down, we're now going to run it through the facial restorer, which will basically increase the resolution of the image and make the facial features more appealing. So I'm just going to hit play on that.

And then we're nearly onto step seven, which is our last step, I promise. But wasn't that easy? It wasn't too hard, I hope. And if you did have any red errors, any problems, just put them down in the comments. We have now completed all seven steps, and your final video is over here, called out.mp4.

You've got all your frames as well, which you can download, and your fixed frames too. I'm just going to click the three dots and then click Download, and that will download this to my local computer. So I'm going to show you the results of this video. Mine's not going to be very good, because I know I moved my head away too much in the video, but you can see our output. I'm now going to take these frames, train them in Stable Diffusion and create a model that I can use to prompt this character into any place.

So these are the final frames that I've saved from my Colab notebook, stored locally on my machine. What I have done is delete all the frames that are basically the same and keep some distinct frames of facial expressions that I can use to train Stable Diffusion with. This is how we're going to get a consistent character. You might be thinking, I don't have any body shots – but we're going to fix that in the next tutorial. We're going to be using outpainting to add a body onto this character.

Now, if you're not too interested in doing the body, you can go ahead and skip the outpainting tutorial and go straight into training Stable Diffusion with me. So now what we are going to do is a bit of outpainting. We've got plenty of headshots of our character, and what we want now is more of her physique, her body shots. To do this, I'm going to be using outpainting, which reimagines the areas around your character using a prompt. There are many ways of doing this, but again, my computer is not powerful enough at the moment to run it locally.

So I'm going to be using Google Colab. If you go to the link below, you will see the Stable Diffusion Infinity Colab link. If you click on that, this screen is what you will see. I've already run all my cells here, so I've pressed play on each one. But just start with the setup and click play, and that will install all the files remotely into the file area on the left-hand side over here.

And then once that's complete, go to the setup of Stable Diffusion Infinity; step three will continuously run in the background. What you're looking for, when it's loaded, is "Running on public URL". We're going to take that one, not the local URL, because the local URL means we're using our own machine's GPU and I want to use the remote GPU. So I'm going to copy the "Running on public URL" address and paste it into the browser.

When you've copied and pasted that link into your browser, you will get this screen here, and all you need to do is get your Hugging Face token. A Hugging Face token is a unique key identifying you. Just go to the Hugging Face website, and if you don't know how to get to this page, simply go to Settings and then, on the left, click on Access Tokens. Super simple.

And click on New Token. This will create you a new token, or a key, or whatever you want to call it. Give it a name that has some reference to whatever you're working on – I'm just going to call mine Infinity, because I can't spell infinity. Then I'm going to click on Copy, go back to Stable Diffusion Infinity and paste my token into that section. Now, I'm just going to select the Stable Diffusion inpainting model.
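One safety note on tokens: rather than pasting an access token directly into a shared or screen-recorded notebook, it is safer to keep it in an environment variable and read it when a cell needs it. A minimal sketch (the variable name HF_TOKEN is an assumption, not something the notebook requires):

```python
import os

# A Hugging Face access token is created under Settings -> Access Tokens.
# Keeping it in an environment variable avoids hard-coding it into a notebook
# that you might share or record.
def get_hf_token(var: str = "HF_TOKEN") -> str:
    """Read the Hugging Face token from the environment, failing loudly if unset."""
    token = os.environ.get(var, "")
    if not token:
        raise RuntimeError(f"Set {var} to your Hugging Face access token first.")
    return token
```

You would then pass `get_hf_token()` wherever the notebook asks for the token string.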

You can have the inpainting or the image-to-image model, but I'm just going to use inpainting for now, and that's it. Just click Setup, and this will load your interface. If you're successful, you will now see a screen like this, and it is really, really simple to use.

So if you go to Upload Image, I'm just going to upload an image of my woman's face and scale it down slightly. I'm going to make sure Enable Safety Checker is off – that's the not-safe-for-work checker. Basically, if you're showing any nudity, it will render a black box, and we don't want that. For some reason, when you're doing a woman's body, even if she's not nude, it will flag it up. So uncheck that, and you can generate what it regards as not-safe-for-work images.

Once you've placed her in a position on the canvas you're happy with, just click Confirm, and that sets her into place. And I've got a prompt of "a woman in green top". I don't know if that's a very good prompt, probably not. You can actually interrogate the image and it will give you what it thinks the image is. You can then adjust that prompt to be a bit stronger.

But for now, I'm just going to set it as that. My sample size is six, and my strength I'm going to put at 0.7. My mode is cv2_ns. Everything looks good, and I'm going to just click the Output button.

And this starts rendering the output. You can see the processing bar down here, and you can also switch back to Google Colab; if you go to the bottom, you can see the percentage bar there as well. That's why that cell is continuously executing in the background – it's basically running this interface that you can see here. So as you can see, it's given me the woman in a green top, with some sort of weird white thing there, but you can just paint that out.

I'm going to click Accept on that. You can also cancel or run a retry. Remember, this is absolutely free, so you can retry as many times as you like. You can move this generation frame around the screen to create more outpaintings, if you wanted to put her shoulder in, or hair on top of the head. And once you've finished, you can just go to Export Image here and export your image as whatever you want.

And that saves it locally to your Downloads folder. That's a really cool way of getting the body. So once you're happy with the set of images you've got for training – this is mine. It's not quite a good collection, but I'm trying to film this tutorial at the same time as doing this, so hopefully yours will fare better than mine.

We are now going to head over to a new notebook. This is DreamBooth Stable Diffusion, and the link is down below. We're going to be using this notebook to train a model on the images that we've created. So everything has led up to this point. You should be used to this environment by now.

Now we're just going to check the type of GPU and VRAM, and yep, I've got a Tesla T4 that I'm running remotely at the moment. Then I'm just going to run the install. We need to log back into our Hugging Face account, go to Settings and then Access Tokens, and create a new token. I'll call this one Dream, then generate the token and copy it – you've done this before, so it should be easy. Then paste that token into your Hugging Face token field and hit run on that cell.

Okay. And then we're going to just install xformers here. I would save this to my Google Drive, but I don't actually have enough space at the moment. Next is the model you are running: we're actually on version two now – overnight, or yesterday, Stable Diffusion went to version two – but I'm going to keep it on version one here.

You can change that path if you want to use another version. And the output directory – that will be up here; that will create the directory here. I'm just going to keep it as zwx, but you could call it Redhead Lady or whatever the name of the project is. I'm going to leave mine as the default for this demonstration.

Okay, so there are some sections here we want to change. Our lady is not a dog, so I'm going to name her "a person" and "photo of a person".

OK, so we're just going to run the cell here, and that will create the folders that we're going to drag our images into. If we go to the file directory, we should have a new folder – oops, into data, sorry, and zwx. That's where I'm going to drag my training data, which is all the images we've created. I'm just dragging these off my machine, throwing them in and uploading them. There's a message saying that once the runtime is finished, it will delete the images, which is fine. If you read this one, you can upload from your computer, but as we dragged them into the folder, we don't need to do that.

So we can then just run the next cell, but I need to change that to person, not dog. I'm going to keep all the other settings the same and then just run that cell. That last cell took a rather long time, so I hope you made yourself comfortable while it was running. We're now moving on to the weights cell — I'm not going to change that, so I'll keep it as it is.

I'm not going to run the grid. This next section is what we'll do at the end: it converts the weights to a CKPT file — a model you can use in web UIs like AUTOMATIC1111. That's going to be our output, which we'll load into our local Stable Diffusion installation to write our prompts. You can prompt inside this notebook as well, but doing it locally on your machine is a lot easier and a lot better than doing it in here.

But that is going to be our output; we're not going to run that cell just yet — we can do that at the end. The next section is the inference, so I'm going to run that.

These cells from now on are a lot quicker than any of the ones above. Okay, so after inference, we come to the seed cell — you can set a random seed here for reproducibility. I'm just going to press play.
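Why does a fixed seed give reproducibility? The same seed always produces the same sequence of pseudo-random draws, so the same prompt plus the same seed regenerates the same image. The notebook does this with a torch generator; the idea can be shown with Python's own RNG:

```python
import random

# Same seed -> identical pseudo-random sequence -> identical generations.
# The notebook's equivalent is torch.Generator(...).manual_seed(seed).
def draws(seed, n=3):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

print(draws(52362) == draws(52362))  # True: same seed, same sequence
print(draws(52362) == draws(12345))  # False: different seed, different sequence
```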

Okay, so the default prompt is "photo of zwx dog in a bucket" — that's not what we're creating. Ours is the zwx person, since my instance prompt was "photo of zwx person". The negative prompt works like this: if you want the person to have long hair, you'd put "short hair" in the negative prompt, so it avoids short hair in any of the images. I'm not going to use one at the moment.

Number of samples — I'm going to keep it at four. Guidance scale — you can lower that, but I'm going to keep mine at 10.5, and the height as it is. I'm happy with all those settings, so I'm going to run it and see what this gives me.
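The settings above can be gathered as keyword arguments in the shape a diffusers `StableDiffusionPipeline` call accepts. This is only a sketch: the `num_inference_steps` default is an assumption, and the pipeline call itself is commented out because it needs the trained weights and a GPU:

```python
# Generation settings mirroring the walkthrough. The pipeline call is
# commented out: it requires the trained DreamBooth weights and a GPU.
gen_kwargs = dict(
    prompt="photo of zwx person",
    negative_prompt="",            # e.g. "short hair" to steer away from it
    num_images_per_prompt=4,       # number of samples
    guidance_scale=10.5,           # how strongly to follow the prompt
    num_inference_steps=50,        # sampling steps (assumed default)
)
# pipe = StableDiffusionPipeline.from_pretrained(weights_dir).to("cuda")
# images = pipe(**gen_kwargs).images
print(gen_kwargs["prompt"])
```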

Hopefully this works after all that. This is pretty speedy — it should give me somebody who looks like the redhead model I created in Midjourney. There you go. There she is.

She really looks like my training images, which is great. If I go to my data folder, into zwx, and open a redhead image, we can compare against our training data. Oh, that's not a good one — let's get a better one. There we go.

We can see our training data compared to what we're getting, which is really good — a really strong likeness — and now we can go up and save the model. If we go back up to "convert weights to CKPT to use in the web UIs", we can save this as a model to load into our local Stable Diffusion installation. If you haven't got that on your machine and have more than 4 GB of VRAM, you can install and run it; if you have less than that, I wouldn't even try.

I'm doing a new video on installing Stable Diffusion 2.0, because that has just come out overnight, so I'm very excited to do that. The video will appear here once it's done — if it's not there, please badger me in the comments, because I need to update mine and see the difference. So that has now saved my file, and it tells you here the path it saved it to.

So let's find that. Here we go: up one level, into content, then stable_diffusion_weights, zwx, 800. And there she is — model.ckpt. I am literally losing the ability to talk, so I'm just going to download that, and I'll show you how to save it into your local Stable Diffusion.

Sorry, I've gone completely nuts. So, yeah, we have now got our trained character. Here are a few more prompts that I put in — you can see sometimes it gives you a bit of a crazy image. And there you go.

She's holding a glass of wine, and compared to the training image you can see it's got a really good likeness. And now I've done "photo of zwx person, cyberpunk style", and you can see it against the training images there. So the CKPT file is going to take a while to download.
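Once the download finishes, the file needs to go into your local web UI's models folder. A minimal sketch of that step — the function name is illustrative, and the `models/Stable-diffusion` layout assumes an AUTOMATIC1111-style install, so point `webui_root` at wherever your installation actually lives:

```python
import shutil
from pathlib import Path

def install_checkpoint(ckpt: Path, webui_root: Path) -> Path:
    """Copy a downloaded .ckpt into the web UI's models/Stable-diffusion folder."""
    dest_dir = webui_root / "models" / "Stable-diffusion"
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(ckpt, dest_dir / ckpt.name))

# Example with hypothetical paths:
# install_checkpoint(Path("~/Downloads/model.ckpt").expanduser(),
#                    Path("C:/stable-diffusion-webui"))
```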

It's a couple of gigabytes, but once it's downloaded and you have Stable Diffusion locally installed on your PC, go to the folder where your Stable Diffusion lives, then into models, then Stable Diffusion, and put the CKPT file in there along with the others. Then go back up and run webui-user to launch Stable Diffusion. With your CKPT file in the models folder, you should be able to see it in the dropdown list — this one is mine. Now, here's something very important: when you are prompting your character, you need to write the name you gave in the instance prompt when you were training your character in DreamBooth.

For mine, I trained her as a person right at the beginning — scroll back up the tutorial. The name was just left as the default, so her name is zwx and she is a person. I can change the beginning and end of the sentence, but I always have to make sure I write "zwx person" to get the strongest likeness to my trained character.
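The rule above — the instance token and class must appear in every prompt — can be captured in a tiny helper. The function name and defaults are illustrative, not from the notebook:

```python
# Every prompt for the trained character must contain "zwx person";
# the text before and after it is free to vary.
def character_prompt(before="photo of", after="", token="zwx", cls="person"):
    parts = [before, f"{token} {cls}"]
    if after:
        parts.append(after)
    return " ".join(parts).strip()

print(character_prompt(after="holding a glass of wine"))
# -> photo of zwx person holding a glass of wine
print(character_prompt(after="cyberpunk style"))
# -> photo of zwx person cyberpunk style
```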

So now you can make any image you like of your character — a comic strip, a painting, an Instagram model — whatever you want to do, as long as it's legal and completely above board. And that's it. You've done it.

And as always, that will do it. Hit that notification button, subscribe and like. Thank you. Bye.

<p>The post Consistent AI Characters in any pose: Tutorial first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/consistent-ai-characters-in-any-pose-tutorial/feed/ 1