Sagans: The Anonymous AI Collective Taking Over the Music Video World

On this episode of Prompt Muse, we explore the significant role of AI technology in the music industry. We discuss the concept of identity for an artist as AI-generated vocals become increasingly popular, and how it can be a great ally for independent music artists in creating their own videos and music. The podcast features guests from Sagans, an anonymous AI collective that has been producing music videos for popular artists since 2022. We discuss the challenges of keeping up with AI advancements and their impact on creativity, and how it can be used as a tool for faster problem-solving and dream boosting. Tune in to discover the many possibilities for using AI technology to bring ideas to life in the music industry.

Artificial Intelligence (AI) has rapidly become one of the most versatile and powerful technological tools in recent years, providing users with a wealth of opportunities to explore, create, and express themselves in new and exciting ways.

In the world of music, AI has revolutionized the way musicians create, record, and market their art. From optimizing vocal tracks to generating entirely new compositions, AI is making music more accessible, dynamic, and personalized than ever before.
In this blog post, we will explore the many ways in which AI is transforming the world of music and the exciting possibilities it holds for the future.

The Rise of AI Vocals

One of the most fascinating developments in the music industry has been the rise of AI vocals, which has enabled musicians to create realistic vocal tracks without requiring the services of a professional singer. With AI vocals, musicians can fine-tune the pitch, timbre, and other vocal qualities to suit their creative vision, ensuring the final track sounds exactly as they imagined it.

AI vocals are also useful for people who might not have the self-confidence to sing themselves or who lack the resources to hire a professional singer. By providing a flexible and affordable solution, AI vocals allow artists to experiment with different styles, sounds, and arrangements without breaking the bank.

Collaborating with AI

The ability to collaborate with AI has also opened up new creative avenues for musicians, allowing them to create brand new vocals by combining different AI-generated voices. This makes it possible to create choirs without requiring a large number of singers, providing musicians with greater control over the final product.
In addition, AI can help improve the quality of music videos that independent artists produce, enabling them to create high-quality visuals without the need for expensive equipment or professional crews. This allows musicians to maintain their creative vision while still producing music videos that are visually stunning and engaging for their fans.

Learning from Others

The podcast team at Prompt Muse often discusses their creative process, which involves brainstorming ideas together for days or weeks, testing concepts, and then starting production in a relaxed and casual way. They also rely on hard work and spend hours researching information to learn new things, including the latest AI techniques.

To stay ahead of the curve, they often check Reddit for 10 minutes a day to keep up with the latest technology and advancements. Although some may find AI technology daunting, the team at Prompt Muse believes in adapting their workflow to new technology as it evolves. They remain open-minded and learn by experimenting and transforming assets to achieve their objectives quickly.

Misconceptions About AI

There are many misconceptions about AI, particularly its ability to create videos and music easily and independently. While AI can be used as a tool for creating quick visualizers, human input is still necessary for creating a good story and continuity. Music videos are seen as short films with a soundtrack and require a lot of time and effort to produce.

When people suggest that AI can replace the human touch, creators are often left feeling frustrated. Dismissing their work as “AI video” or “AI music” fails to recognize the human input and effort involved in the creative process.

AI-assisted Music Videos

However, AI can be a great ally for independent music artists in producing their own videos and music. In particular, Runway is a popular tool used by the podcast team to edit backgrounds that are then added to the video edit in Adobe After Effects. While it is important to present the reality of what happened on the day of the shoot, it is equally important to bring the artist’s creativity and vision to life.
AI-generated music still requires the artist to record the song from start to finish – AI only changes the tone of their voice to match someone else’s – but previous tracks can be used as the basis for a new track. Working in tandem with AI also allows the artist to produce something of which they are proud.

The Future of AI in Music

As AI technology continues to evolve, the future of music promises to be more exciting than ever before. With AI, artists will be able to explore new creative frontiers while also reaching new and diverse audiences around the world.

AI-generated music may not only be used by independent artists, but also by established stars to collaborate with other artists and explore new musical styles. This could help to expand the boundaries of music as we know it while also creating new and unique sounds that have never been heard before.

Innovation is the key to the future of music and Prompt Muse is committed to bringing our readers the latest developments in AI and music. The podcast and blog team encourages their audience to embrace the power of AI as a creative tool to develop their own unique vision.
As AI technology continues to evolve, it’s exciting to think about what the future will hold for music. With AI, musicians will be able to create more exciting, innovative, and personalized music than ever before, empowering them to take their creativity to new heights and explore new possibilities in the world of music.

FAQ

1. What are some benefits of AI vocals for artists and producers?
– AI vocals can be useful for people who lack the confidence to sing themselves, and different AI-generated voices can be merged to create brand-new vocals or full choirs without needing many singers.
2. How can AI assist independent music artists in creating high-quality music videos?
– AI can give power to music artists to create videos and music on their own, faster than before, and it can be a great ally for independent music artists in producing their own videos and music.
3. How do the guests on the podcast approach video production and what tools do they use?
– They start by filming a walk through a Japanese-style environment, then edit the video to add a background frame, play with camera movements, and transform assets with AI tools; they use WarpFusion as a primary tool and Deforum to make scenes more interesting, and erased backgrounds are edited in Runway.
4. Are there any misconceptions about AI and its ability to create videos and music?
– Yes, some people believe that AI can create videos and music with just one sentence or prompt, but human input is still necessary for creating a good story and continuity.
5. How do independent music artists feel about the use of AI in their work?
– There is frustration among creators when people dismiss their work as “AI video” or “AI music” without recognizing the human input and effort involved.
6. How has AI technology evolved and improved over time?
– When the speaker started with AI technology, there were limited resources available, but now tools like DALL·E and WarpFusion are available and powerful, allowing for greater creativity and faster problem-solving.
7. What tools and websites are available to bring creative ideas to life?
– There are many websites and tools available to create 3D visuals and animate drawings, and Runway can be used to achieve dream boosting and generating unique ideas.
8. What is Sagans, and what have they accomplished through their work with AI music videos?
– Sagans is an anonymous AI collective that has been producing music videos for popular artists since 2022; in just one year they have produced videos for artists such as Linkin Park and Die Antwoord (“Age of Illusion”).
9. How important is it to stay up to date with AI advancements as a creative?
– It is important to be aware of new technology for future use, but it is not necessary to know everything, and it is important to check casually without feeling overwhelmed.
10. How can creativity be enhanced through setting limits and restrictions?
– James Blake created a song by limiting himself to only four notes, and their style and technique were developed mainly through experimentation and prompting.

How to Create a 3D Scene Using Blender and Fspy

If you’re looking to create a stunning 3D scene in Blender, then you’re in the right place. In this tutorial, we’ll show you how to use Fspy and Blender to create a 3D scene from a 2D image. Follow these steps to create your own 3D scene:

Step 1: Download and Install Fspy

First, download and install Fspy from the official website. Once installed, open the software and import the 2D image you want to use for your 3D scene.

Step 2: Set Up Fspy Camera

Next, use Fspy to set up your camera. This involves placing markers on the image to establish the camera’s position, orientation, and field of view. Once you’ve placed the markers, export the camera data in the format that Blender can use.

Step 3: Open Blender

Open Blender and select File > Import > Fspy. Browse for the camera data file you exported from Fspy, select it, and click Import Fspy Camera.

Step 4: Set Up Scene

In Blender, set up the scene with the camera and the image you want to use as a reference. Then, go to Edit Mode and use the image as a guide to create the basic geometry of the scene. Extrude and scale the edges to match the perspective of the image.

Step 5: Apply Materials

In the Shading tab, apply materials to the geometry of the scene. Use an image texture for the background image and create materials for the objects in the scene.
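
If you prefer to script this step rather than click through the UI, the same setup can be sketched with Blender’s Python API. This is a minimal sketch, not part of the original tutorial: it assumes the backdrop object is named “Plane”, the imported camera object is named “fSpy”, and the upscaled background image sits at the path shown. It also adds the Subdivision Surface and UV Project modifiers that are described later in the video transcript.

```python
import bpy

# Assumed names/paths -- adjust to match your scene.
IMAGE_PATH = "/path/to/upscaled_background.png"
obj = bpy.data.objects["Plane"]      # the backdrop geometry from Step 4
camera = bpy.data.objects["fSpy"]    # the camera created by the fSpy importer

img = bpy.data.images.load(IMAGE_PATH)

# Material: image texture wired straight into the output (no Principled BSDF).
mat = bpy.data.materials.new("Backdrop")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()
tex = nodes.new("ShaderNodeTexImage")
tex.image = img
tex.extension = 'CLIP'               # avoid the repeating-edge artefact
out = nodes.new("ShaderNodeOutputMaterial")
links.new(tex.outputs["Color"], out.inputs["Surface"])
obj.data.materials.append(mat)

# Subdivide the simple geometry so the projection has vertices to map onto.
sub = obj.modifiers.new("Subdivision", 'SUBSURF')
sub.subdivision_type = 'SIMPLE'
sub.levels = sub.render_levels = 5

# Project the image from the fSpy camera onto the object's UVs.
uvp = obj.modifiers.new("Projection", 'UV_PROJECT')
uvp.uv_layer = obj.data.uv_layers[0].name
uvp.projectors[0].object = camera
uvp.aspect_x, uvp.aspect_y = img.size  # width/height of the upscaled image
```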

Step 6: Use Runway to Remove Distorted Elements

For any distorted elements in the background image, use Runway’s erase and replace tool to remove them. Upload the image, remove the unwanted elements, and then download the new image to use in Blender.

Step 7: Add Details and Effects

Add details and effects to the scene, such as 3D objects, wires, and the ShakerFly camera effect. Be creative and experiment with different effects to make your scene stand out.

Step 8: Render and Save

Once you’re happy with the scene, render it and save it in the desired format. You can then use it in your projects or share it with others.
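
For repeatable or batch renders, this final step can also be scripted. A minimal sketch, assuming the animation camera is named “Movie Cam 1” as in the video; the resolution, focus distance, and output path are illustrative:

```python
import bpy

scene = bpy.context.scene

# Use the duplicated animation camera (the locked fSpy camera stays untouched).
scene.camera = bpy.data.objects["Movie Cam 1"]   # assumed name from the video

# Optional depth of field, as mentioned in Step 7.
cam_data = scene.camera.data
cam_data.dof.use_dof = True
cam_data.dof.focus_distance = 5.0   # metres; or set dof.focus_object instead

# Output settings: render the keyframed camera move as a video file.
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.image_settings.file_format = 'FFMPEG'
scene.render.ffmpeg.format = 'MPEG4'
scene.render.filepath = "//alleyway_flythrough.mp4"

bpy.ops.render.render(animation=True)
```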

Creating a 3D scene in Blender from a 2D image is a rewarding experience that requires a bit of patience and creativity. By following these steps and exploring the software, you can create impressive 3D scenes that will impress your audience.

FAQ

Q: What software do I need to follow along with this tutorial?

A: You will need Blender, a free 3D animation software, and any photo editing software such as Photoshop, GIMP or any other free alternatives.

Q: Do I need any prior experience with Blender to follow this tutorial?

A: No, this tutorial is beginner-friendly and doesn’t require any prior experience with Blender.

Q: Do I need any special equipment to follow this tutorial?

A: No, you don’t need any special equipment. All you need is a computer and a mouse.

Q: What techniques are covered in this tutorial?

A: This tutorial covers techniques such as camera placement, object selection, UV projection, inpainting using AI tools, and the use of the ShakerFly camera effect.

Q: Can I use a different AI inpainting tool instead of Runway?

A: Yes, you can use any image-to-image inpainting tool that you prefer. The tutorial specifically uses Runway, but there are other options available.

Q: Can I use different 3D objects in my scene?

A: Yes, you can use any 3D objects that you like. The tutorial uses a plant and a neon sign as examples, but you can use any objects that fit your scene.

Q: Can I use different camera effects or settings?

A: Yes, you can experiment with different camera effects and settings. The tutorial uses the ShakerFly camera effect and depth of field, but there are other camera effects and settings available in Blender.

Q: Where can I find more resources and tutorials on Blender and 3D animation?

A: The tutorial creator has a website, promptmuse.com, which offers free online resources and a weekly newsletter. There are also many other online resources and tutorials available for Blender and 3D animation.

Transcript

AI community, I have a confession to make. The creation of endless AI images is boring me until now. Today, I’m going to be showing you how to take your AI images that have been created in Stable Diffusion, DALL·E, Midjourney, whatever program you’re using, and turn them into an ultimate movie set, which means you can configure how it looks, get a camera, go in, add characters to the scene, create an animation from your 2D images. We don’t want AI to be the creative director of our stories that we want to tell. We want to use AI to enhance our workflows. And that’s what this channel is all about, creating workflows that actually work. Today, we’re going to be looking at how to create an environment from AI generated images. And then in the next video, we’re going to look at how to add characters into our environment. So please hit that notification bell and subscribe because this is going to be a series. So we’re going to start off with the environment workflow. So the first thing we want to do with our AI generated image is upscale it because what comes out of Midjourney is quite low resolution.

Same with Automatic1111. You might not actually have the RAM or GPU to be able to upscale that on your PC. So I use something called Neuro AI. This is absolutely free and to be honest, it’s a godsend. There’s no download. It is what it is. You upload your image and within seconds it gives you a high resolution image. So when we’ve got our upscaled image, I’m going to take it into fSpy, which again is amazingly free, which will create us a camera that matches the image perspective. And then we simply import our fSpy camera into Blender, which will then be projecting our map from the camera onto very simple, primitive objects. It’s really an effective result that we get. Step one, upscaling our image. I want to increase the resolution of my image that Midjourney created because in the background, if I’m going through the scene, you’re going to see as you go through the scene, the image lose resolution and just see big chunks of pixels, and we do not want that. So we’re going to use an upscaler. And today I’m going to be using Neuro AI. Currently, you get five free credits a day to upscale your image.

So I’m going to click on Download and download that image. So we’re now going to take this image into fSpy to create our perspective or projection camera. Step two is installing fSpy, which will create us a camera that we can then import into Blender. Each image that you bring into fSpy will be completely different and have different perspective lines. But what it allows you to do is ultimately create a camera that you can then model in Blender from. There are two zip files on this website that we want to download, the first being the program and the second being the installation plug-in for Blender. If you head over to the fSpy website, there’s a big green Download button, and that’s to install the actual program onto your PC. You’ll be taken to a GitHub page where you just need to download the version with win.zip at the end if you’re running on Windows. And if you download that and unzip that onto your PC, you’ll be able to see the fspy.exe file, which you need to double click in order to run the program. Once that’s installed, you need to head back to the fSpy website.

And if you scroll down from the main page, you’ll see the official fSpy importer add-on. This is the zip file which we’re then going to install directly into Blender. Download that file by going to this big green Code button over here and come down to where it says Download zip and download that zip file. If you just fire up Blender and go to Edit, Preferences and go to Install and just find that fSpy-Blender-master zip, click on Install Add-on, no need to unzip it or anything, and you should find it in your search bar up here. Just make sure it’s checked in the checkbox there. Go to the hamburger icon and save preferences, and you’re good to go. When you go to File and then to Import, you should see fSpy here. So now, minimize Blender, and where you unzipped the first fSpy folder, just navigate to the fspy.exe and give it a double click and that will launch fSpy. So you can simply drag and drop the image you got out of Midjourney here or you can go up to File and open the image. This is fSpy and the first time you use it, it does look a bit intimidating.

But do not worry, all you need to focus on pretty much is this gizmo here. This is the most important thing in F Spy. We want each corresponding axis to line up with our image as perfectly as possible. The X axis is the horizontal line across the image. So you’ve got Z, X, and Y. These lines here are what we’re going to do e are going to place manually to help the program understand what is Z, X, and Y. You can see our Y axis, so we need to mark the vanishing point. If we put this green line here, which notes the Y axis, and then this green line here to the other side, you can see it’s creating a vanishing point at the end of this road. Now, it’s quite hard to see where you’re laying these lines down, so you need to come over to the left hand side and uncheck this dim image box here. And then that will help you position your lines further. You can also hold shift on the keyboard and position the lines, and you’ll get this lovely big magnifying glass that will help you a little bit more.

So as you see, while I’m lining these lines up, this gizmo in the middle, which I said is vital, is lining up as well and positioning itself correctly. You can see my X axis off. I want that to be horizontal with the floor plane. So I’m going to put my Z axis here. I’m just going to find basically a vertical line on the drawing. So it’s important to line up, for instance, my Z axis parallel to one another so the program can measure the distance between them. That is looking good. And if you check my gizmo, the Z axis is pointing straight upwards in line with everything else. So it’s looking good so far. And to check that your lines are in the right place, if we go down here to 3D Guide to the drop down menu and go to X, Y, Z grid. You can then place your grid in and just make sure everything is lining up. You can switch to box as well and just check that your box looks like it belongs in that perspective. You can also line it up to any of the lines in the image and just double check that everything is lining up nicely.

If there’s anything out, you can adjust these lines further to get it correct. This is the final position where your projection will load up in Blender. So it’s important to try and centre this gizmo as well as possible. So that’s all looking good and I’m ready to import this into Blender. So I’m going to go to File and go to Save As and Save This. And we’re going to now bring this camera projection into Blender. Step three, adding projection camera and material to 3D geometry. So I’ve just opened up a Blender project, and I’m now just going to marky select and delete any objects that are in the scene. And then I’m going to go up to File and import. You should have F Spy here. If you don’t, go back to the beginning of the tutorial and install it. So I’m going to click on that. I’m going to navigate to the F Spy file, which was cyber. Nvi. F spy for me. I’m just going to click on Import F Spy Project File. You can see here straight away, it creates an F Spy camera up here in the scene collection, and it’s automatically projecting that scene.

Now, on your keyboard, if you press zero on the key number pad, you can see that the camera is not projecting onto any plane at all. It’s actually projecting onto itself. It’s quite clever. So I’m going to press zero again on the number key pad. This is a super easy modeling. We’re just going to be using a plane. So to access your planes, hold down shift and A and go to mesh and across from mesh, select plane. So I’m going to create another window so you can see exactly what I’m doing. From the top left of this window, when I get the cross head, I’m just going to left click and drag and that creates you another window. And in this window you can see my plane and my projection camera there. Now I’m just going to select this plane and go to Object Mode up here and Edit Mode. I’m going to click on Edge Mode, which is here. I’m going to then select the back edge, which is here and press G and Y. And then I’m just going to extrude that back edge right to my vanishing point down there. So this is what it looks like so far.

Remember, the Y axis is from the viewport right down to the vanishing point. I’m now going to come back down to Edit Mode and I could press S for scale and then X for scaling on the X axis. So it will just scale along the horizontal line. So I’m going to select both edges on either side of the road and then press E to extrude and then Z to make sure that it’s on the Z axis. I’m just going to come up there and extrude up to the pavement. I’m now going to select the left side and again, repeat that process. Press E to extrude and then X so it just snaps to the X axis. And again, once more, E to extrude and then X to extrude on the X axis. So I’m going to click on both edges of the sidewalk here and then press E to extrude and then Z so it snaps to the Z axis there. And I’m going to come right up there to the top of the buildings. And I’m just going to go to Edge and I’m going to then go to Bridge Edge Loops. And then again, at the back, I’m going to do the same, select both edges and then click on Bridge Edge Loops.

That is now pretty much all the modeling we need to do. If we come out of edit mode and come back to Object, we need to go over to the Shading tab. So we want to apply the material. So once in the Shading tab, ensure your object is selected and go to New. We just want to delete the Principled BSDF node by selecting it and hitting delete. We want to select the material output and on the keyboard, hold down Ctrl and T. This activates your Node Wrangler. If nothing happened when you press Ctrl and T, your Node Wrangler is not enabled. So I suggest you go and enable that. And to do that, you go up to Edit and down to Preferences. And just type in the search bar here Node Wrangler. And all you need to do is just make sure that box is checked and go to this hamburger icon and click Save Preferences. And then just repeat that process again. Just click on Material output and hold down Ctrl and T on the keyboard, and these should come up here. Now in the image texture, this is where we’re going to load our upscaled image. It will look a mess when you import that in, but do not worry, we’re going to fix that now.

So if you come over to the Spanner icon over here and from the Add Modifiers drop down list, you want to go to Subdivision Surface. So give that a click, and it will be set by default to Catmull-Clark. But we want to switch that over to Simple. And then on the levels viewport, we want to add five onto there. And then on the render, we want to make that five as well. So next we want to go back up to the Add Modifier drop down and come over to UV Project, which is there. Now under the UV Maps section here, just select that box and select the only UV map that should be in the scene. And then come down to Object, and then under Object, select your fSpy camera. What we need to do is put the height and the width of our original upscaled image into here. Just go back to your image, right click, go to properties and details, and your resolution or your dimensions will be there. So mine is 4196 by 2796. So yours will probably be different. So I’m just going to go in and type in 4196 and 2796. Now there’s a really annoying edge repeat round there, and we can change that because that is currently set to Repeat in the settings.

So if you come back down to your image node and come down where it says Repeat, and hit that drop down box and select clip. That will give you a black area, so that makes it a lot easier to see your actual image. As you can see, we’re slowly building up our 3D scene. Now, if you click on the projection camera and move it, bad things happen. You do not want to do that. So what we need to do is just make a copy of that camera. So hold down shift and D and then right click on your mouse and that will create a copy. Now, if you go back to the original camera on the Object properties here, we need to just lock that camera into place. Just hit these padlocks and it will not move. Now we’re going to give our new camera a name and I’m going to call it Movie Cam 1. With Movie Cam 1 now selected, we just move that. Then right click on the camera and click Set Active Camera. So this is now our camera that we’re going to be using to go through our scene. So when you go in and out of your scene, just make note of what is distorting.

So you can see these bins on the right are distorting and this plant and the neon signs. I’m going to bring this tree in here as a plane and then this neon sign, I’m going to use UV projection so you can see both methods to see which suit your scene best. In this step, I’m just removing the background from the plant tree shrub thing. You can use any free software to do this. I’ve put some links in my description if you do not have Photoshop. So the first thing I’m going to do is right click on my background and click layer from background. Okie dokey. I’m going to use this great tool from Photoshop, which is the Object Selection tool. And then just simply select the object you want to select. And voila, it creates a selection of that specific object. So I’m going to press CTRL and J on the keyboard, and that just transfers the selection to another layer. So I’m just going to call that plant. And then I’m going to right click on my plant layer, duplicate layer, and go to documents, and then New, and then OK, and then to image, and trim that baby down.

I’m going to go to File and export that baby out of here. So I’m exporting it as a PNG, and I’m going to bring that in as a plane into Blender. I hope that’s plain and simple. So if we head back to our Blender scene, we can import our plant as a plane. So if you hold down shift and A on the keyboard and then go to image and then across to images as planes. We then want to navigate to our plant file that we just exported as a PNG out of Photoshop. So in the material settings, we need to ensure that it’s set to Emit, and then click on import images as planes. And there she is. We have our plant. So I’m just going to press G and then Y and then push her back on the Y axis and just position her over here. Give her a little bit of a scale up there. And you can see there, the left side is clipping into the original wall. So we want to bring it out slightly and just set it roughly where the original plant was. It doesn’t have to be in the exact same spot.

And we’re just going to then click on the click on our movie camera and move her, GY, forwards. And as you can see, we got the original stretching of obviously the neon light and the plant going on. We are actually going to use in painting in a moment to remove those. So method number two, I’m going to project onto this neon light. And in order to do that, I’m going to make a square or rectangle object for that neon light and just grab a plane. And then I’m just going to simply position that plane where that neon light is. With our object selected, we’re going to go to Object Mode, Edit, and then on the keyboard, just press A. This will select all the faces. And then on the keyboard, just press U. And then from this menu, just select Project from View. And from the material properties, either assign the original background material or create a new material, base color, image texture, open, and then again select the original background. And as you can see now, if I come out of the camera mode, you can see we actually have a 3D object. You can do that with multiple objects in your scene, especially if your character is interacting with them, walking behind them.

It usually works best as a 3D physical object, but you can also use a plane technique for objects in the foreground or the background. We obviously now want to get rid of the duplicates in the background that are on our scene. So you can see our neon light and our really stretched elements in the background. And I’ve got a super, super, very cool AI tool for you to use for this. It’s called Runway, and I can see myself using this lots and lots in future tutorials. If we head over to Runway. So this is Runway. This is where we’re going to be essentially using their tools to do image to image in painting. My GPU currently is dying a very slow death and running things on my PC is not the way forward. Having access to all the AI tools in a browser is insane. We’ll be using this erase and replace tool. So simply upload the image that you want to use and use this big purple brush to paint out what you don’t want to see in the scene. I’m going to start off with these bins. So I’m going to just type in sidewalk with closed garage doors.

Fingers crossed this will work and that will magically generate a better image to work with. And here we go. It’s created a sidewalk with closed garages. That is pretty neat. Let’s have a look what it’s given me. So it’s given me a couple of options and I’m just using the arrows just to rotate through them. This probably best represents the scene, so I’m going to click accept. So now I’m going to just quickly go through this entire image and just remove elements and replace them with what I want to see using the prompt feature. Once we have finished removing all the bits we don’t want in our image, we simply just go and download that to a Download folder and head back into Blender and upload that into the background. So see you there. Now we’re back in Blender, just select your alleyway object and then go to material properties down here. We just want to replace the image with our new Runway image that we just downloaded. As you can see, it’s all coming together nicely. I’ve just switched over to my movie camera. Remember, that’s the only one we’re going to move. I’ve added a keyframe at the start of the animation, and I’ve moved her right into the scene and back again just to check for any items or objects or materials that are stretching.

But it’s looking pretty good. So we got our plant there and our 3D object. You might remember in the original image, we had wires in our scene. I’m going to recreate those wires quickly because it’s quite nice to mix the 2D and 3D elements together. I’m going to hold down a shift A and go to mesh and add a cube. And this cube is literally just going to be where the starting point of our wire is going to be. And just going to scale that there. And then I’m going to shift a D to duplicate and then right mouse click to place. And then just put that there. And then hold down shift and select both of these Cubes. So with both Cubes selected, I’m going to hold down shift and A to open up our menu and come down to the second option, which is curve, and then come down to knots and then select catenary, catenary, catenary, catenary. I’m sure someone’s going to correct me in the comments there. And click on that, and you can see it’s created our wire straight away. We actually get an optional menu here, which we can actually adjust the drop of the wire.

We can also increase its resolution and its actual thickness as well. So we actually do want to see it in the scene, so we want it quite fit. You can go ahead and use that to add multiple wires to your scene. Let’s take a look at our 3D scene. As you can see, the geometry is super simple, and this could be put together in five minutes or less once you get the workflow down. So if I hit zero on the keyboard and change my material shader so I can see everything in the scene, if I hit space bar on the keyboard, you can see I’ve added two key frames to this camera and it’s just simply moving into the scene. I’ve also added a ShakerFly camera effect, which is super cool. And the plugin is in the description below and is absolutely free and super easy to install. You just go to edit, preferences and install the zip. The ShakerFly camera, once installed, will then appear in your camera object properties under Camera ShakerFlyer. There are so many cool settings in this. This guy who created this created all different scenarios, so walking or if you’re on a bike.

So this is a really cool effect to add to your camera. Also, I’ve enabled a depth of field, which is obviously included in the Blender itself. You don’t have to install this. And you can actually set the distance of your depth of field or a focus object. So if you have a character in your scene, you can make the background blurry behind them and have them in focus. Part two of this next series is adding our character into the scene. So please hit the notification and subscribe so you get that video. I hope you can take some techniques away from this video. I tried to keep it as simple as possible. So if you’re new to Blender, hopefully this is a nice introduction to using it. And of course, it’s not a perfect technique, but remember to get our stories and to get our animation out there. We don’t need it to be perfect. Perfection is the enemy of done, or something like that. If you want to add me to your social media, I would absolutely love that. My Instagram is @prompt muse, my Twitter is @prompt muse, and my Facebook is @prompt muse.

And of course, I have the promptmuse.com website where we have started an absolutely free weekly newsletter. The newsletter, as well as all the online resources on the promptmuse.com website, is absolutely free. Just as a massive thank you for you subscribing to this channel and hopefully the newsletter as well. And thank you guys to every single one of you that comment in the comments section below of this video and all the other videos. I read every single one. Thanks so much for coming on this journey with me. And hopefully we’re going to have some great times and some innovations along the way. With that said, thank you so much and that will do it. Bye bye.

A complete guide to neural network denoiser

Neural network denoiser is a machine learning technique that uses a neural network, usually a convolutional neural network (CNN), to remove noise from a signal, such as an image, a video, or a code block. Neural network denoiser can be used for various purposes, such as improving the quality, the performance, or the efficiency of the signal processing. In this guide, you will learn what neural network denoiser is, how it works, and how you can use it for your own projects.

What is neural network denoiser?

Neural network denoiser is a form of denoising, where the goal is to reduce or eliminate the noise that is present in a signal, such as the random variations, the artifacts, or the errors that degrade the signal quality. Noise can be caused by various factors, such as the environment, the equipment, the transmission, or the compression. Noise can affect the signal in different ways, such as reducing the contrast, the sharpness, the resolution, or the accuracy of the signal.

Neural network denoiser works by using a neural network, usually a CNN, to learn the features and representations of the signal data and to generate a new signal that matches the original signal, but without the noise. A CNN consists of multiple layers of filters that extract different levels of information from the signal, such as edges, shapes, textures, and colors. The CNN can be trained on a large dataset of signals to learn the general features of the signal domain, or it can be trained on a specific pair of signals to learn the specific features of the original and the noisy signals.

How does neural network denoiser work?

Neural network denoiser works by defining a loss function that measures how well the output signal preserves the original signal, while removing the noise. The loss function can be based on different criteria, such as the mean squared error, the perceptual similarity, or the structural similarity. The output signal is then optimized to minimize the loss function, while satisfying some constraints, such as the pixel range or the smoothness.

Neural network denoiser can work on different types of signals, such as images, videos, or code blocks. Neural network denoiser can also work on different levels of signals, such as spatial or temporal. For example, neural network denoiser can be used to remove noise from a single image, a sequence of images, or a video. Neural network denoiser can also be used to remove noise from a code block, such as a channel code, by working on the code level rather than the symbol level.
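
To make the training objective concrete, here is a minimal sketch of a small residual convolutional denoiser trained with the mean squared error criterion mentioned above. PyTorch is an assumption (the guide does not prescribe a framework), and the data is synthetic Gaussian noise added to random images:

```python
import torch
import torch.nn as nn

class SimpleDenoiser(nn.Module):
    """Small CNN that predicts the noise residual, then subtracts it."""
    def __init__(self, channels=3, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.net(noisy)   # residual learning: output = input minus predicted noise

model = SimpleDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # the "mean squared error" criterion mentioned above

clean = torch.rand(8, 3, 64, 64)               # stand-in for a batch of clean images
noisy = clean + 0.1 * torch.randn_like(clean)  # synthetic Gaussian noise

# One training step: minimise the distance between the output and the original signal.
denoised = model(noisy)
loss = loss_fn(denoised, clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```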

How can you use neural network denoiser?

Neural network denoiser is an open-source technique that you can access and use for free. There are several ways to use neural network denoiser, depending on your level of expertise and your needs.

If you want to try neural network denoiser online, you can use the website https://pixop.com/filters/denoiser/, where you can upload your own images or videos and see the denoised results. You can also browse the gallery of signals denoised by other users and see the difference before and after the denoising.
If you want to use neural network denoiser on your own computer, you can download the code and the model from the GitHub repository https://github.com/styletransfer/styletransfer. You will need to install some dependencies and follow the instructions to run the model locally. You can also modify the code and the model to suit your own needs and preferences.

If you want to use neural network denoiser in your own applications, you can use the NVIDIA OptiX™ AI-Accelerated Denoiser https://developer.nvidia.com/optix-denoiser, where you can integrate neural network denoiser with other models and tools, and create your own workflows and interfaces. You can also use the OptiX API to access neural network denoiser programmatically from your own code.

Neural network denoiser is a powerful and versatile technique that can help you improve, enhance, or create signal content. Whether you want to use it for fun, for art, or for research, neural network denoiser is a technique worth exploring and experimenting with. Have fun and be creative with neural network denoiser!

A complete guide to style transfer

Style transfer is a machine learning task that involves blending two images—a content image and a style reference image—so that the output image looks like the content image, but “painted” in the style of the style reference image. Style transfer can be used for various purposes, such as creating artistic effects, enhancing photos or videos, or generating new content. In this guide, you will learn what style transfer is, how it works, and how you can use it for your own projects.

What is style transfer?

Style transfer is a form of image synthesis, where the goal is to transfer the style of a reference piece of art, such as a painting, to a target piece of art, such as a photograph, while preserving the content of the target piece. Style transfer can be seen as a form of image transformation, where the content of the input image is kept intact while its appearance is repainted in the reference style.

Style transfer can be applied to different types of images, such as natural scenes, faces, artworks, or text. Style transfer can also be conditioned on different types of information, such as masks, sketches, or text prompts. For example, style transfer can be used to apply the style of Van Gogh to a photograph of a city, to create a sketch from a photograph, or to generate an image based on a text description.

How does style transfer work?

Style transfer works by using a neural network, usually a convolutional neural network (CNN), to learn the features and representations of the image data and to generate a new image that matches the content and style of the input images. A CNN consists of multiple layers of filters that extract different levels of information from the image, such as edges, shapes, textures, and colors. The CNN can be trained on a large dataset of images to learn the general features of the image domain, or it can be trained on a specific pair of images to learn the specific features of the content and style images.

Style transfer works by defining two types of losses: a content loss and a style loss. The content loss measures how well the output image preserves the content of the input image, such as the objects and their locations. The style loss measures how well the output image matches the style of the reference image, such as the colors, textures, and patterns. The style loss can be computed at different layers of the CNN, to capture different levels of style information. The output image is then optimized to minimize the weighted sum of the content and style losses, while satisfying some constraints, such as the pixel range or the smoothness.
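
The two losses can be written down compactly. The sketch below assumes PyTorch, and assumes the feature tensors have already been extracted from a pretrained CNN (for example VGG) at the chosen layers; the feature extraction itself is not shown:

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """Channel-to-channel correlations used as a style representation."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def content_loss(gen_feat, content_feat):
    # How well the output preserves the objects and layout of the content image.
    return F.mse_loss(gen_feat, content_feat)

def style_loss(gen_feats, style_feats):
    # Compared at several CNN layers to capture different levels of style.
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(gen_feats, style_feats))

def total_loss(gen_content, ref_content, gen_styles, ref_styles,
               content_weight=1.0, style_weight=1e4):
    # Weighted sum of the two terms, as described above.
    return (content_weight * content_loss(gen_content, ref_content)
            + style_weight * style_loss(gen_styles, ref_styles))

# Toy call with random tensors standing in for extracted feature maps.
gen_c, ref_c = torch.rand(1, 128, 32, 32), torch.rand(1, 128, 32, 32)
gen_s, ref_s = [torch.rand(1, 64, 64, 64)], [torch.rand(1, 64, 64, 64)]
print(total_loss(gen_c, ref_c, gen_s, ref_s))
```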

How can you use style transfer?

Style transfer is an open-source task that you can access and use for free. There are several ways to use style transfer, depending on your level of expertise and your needs.

If you want to try style transfer online, you can use the official website https://styletransfer.ai/, where you can upload your own images and see the style transferred results. You can also browse the gallery of images style transferred by other users and artists, and get inspired by their inputs and outputs.
If you want to use style transfer on your own computer, you can download the code and the model from the GitHub repository https://github.com/styletransfer/styletransfer. You will need to install some dependencies and follow the instructions to run the model locally. You can also modify the code and the model to suit your own needs and preferences.

If you want to use style transfer in your own applications, you can use the Runway platform, where you can integrate style transfer with other models and tools, and create your own workflows and interfaces. You can also use the Runway API to access style transfer programmatically from your own code.

Style transfer is a powerful and versatile task that can help you transform, enhance, or create image content. Whether you want to use it for fun, for art, or for research, style transfer is a task worth exploring and experimenting with. Have fun and be creative with style transfer!

A complete guide to outpainting

Outpainting is a machine learning task that involves extending the original image, creating large-scale images in any aspect ratio. Outpainting can be used for various purposes, such as fixing up images in which the subject is off center, or when some detail is cut off, or creating new content or variations from existing images. In this guide, you will learn what outpainting is, how it works, and how you can use it for your own projects.

What is outpainting?

Outpainting is a form of image synthesis, where the goal is to generate realistic and coherent pixels for the regions outside of the original image, while preserving the context and style of the original image. Outpainting can be seen as a form of image expansion, where the input image is complete and the output image is larger.

Outpainting can be applied to different types of images, such as natural scenes, faces, artworks, or text. Outpainting can also be conditioned on different types of information, such as masks, sketches, or text prompts. For example, outpainting can be used to extend the borders of an image, to complete the sketch of a scene, or to generate an image based on a text description.

How does outpainting work?

Outpainting works by using a neural network, usually a generative adversarial network (GAN), to learn the distribution of the image data and to generate realistic and coherent pixels for the regions outside of the original image. A GAN consists of two components: a generator and a discriminator. The generator takes as input the complete image and the optional conditioning information, and outputs a larger image.

The discriminator takes as input the larger image, either real or generated, and tries to distinguish between them. The generator and the discriminator are trained in an adversarial manner, where the generator tries to fool the discriminator, and the discriminator tries to catch the generator. The training process aims to minimize the difference between the real and the generated images, and to maximize the realism and coherence of the generated pixels.
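
Whichever model does the generation, the preparation step is usually the same: enlarge the canvas and build a mask that marks the new, empty border region. A small illustrative sketch using Pillow; the file name and padding are placeholders, and the model call is left out because it depends on the tool you use:

```python
from PIL import Image

def prepare_outpaint_inputs(image_path, pad=256):
    """Enlarge the canvas and build a mask marking the new, empty border region."""
    original = Image.open(image_path).convert("RGB")
    w, h = original.size

    # Larger canvas with the original centred; the border is what the model must invent.
    canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "black")
    canvas.paste(original, (pad, pad))

    # White = generate here, black = keep the original pixels.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(Image.new("L", (w, h), 0), (pad, pad))
    return canvas, mask

canvas, mask = prepare_outpaint_inputs("photo.jpg", pad=256)
# canvas and mask would then be handed to whichever outpainting model you use,
# for example a diffusion or GAN pipeline that accepts an image plus a mask.
```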

How can you use outpainting?

Outpainting is an open-source task that you can access and use for free. There are several ways to use outpainting, depending on your level of expertise and your needs.

If you want to try outpainting online, you can use the official website https://outpainting.ai/, where you can upload your own images and see the outpainted results. You can also browse the gallery of images outpainted by other users and artists, and get inspired by their inputs and outputs.
If you want to use outpainting on your own computer, you can download the code and the model from the GitHub repository https://github.com/outpainting/outpainting. You will need to install some dependencies and follow the instructions to run the model locally. You can also modify the code and the model to suit your own needs and preferences.

If you want to use outpainting in your own applications, you can use the Runway platform, where you can integrate outpainting with other models and tools, and create your own workflows and interfaces. You can also use the Runway API to access outpainting programmatically from your own code.

Outpainting is a powerful and versatile task that can help you extend, enhance, or create image content. Whether you want to use it for fun, for art, or for research, outpainting is a task worth exploring and experimenting with. Have fun and be creative with outpainting!

What is inpainting? A complete guide to inpainting

Inpainting is a machine learning task that involves filling in the missing or damaged parts of an image, such as holes, scratches, or occlusions. Inpainting can be used for various purposes, such as restoring old photos, removing unwanted objects, or creating new content. In this guide, you will learn what inpainting is, how it works, and how you can use it for your own projects.

What is inpainting?

Inpainting is a form of image synthesis, where the goal is to generate realistic and coherent pixels for the missing or damaged regions of an image, while preserving the original context and style. Inpainting can be seen as a form of image completion, where the input image is incomplete and the output image is complete.
Inpainting can be applied to different types of images, such as natural scenes, faces, artworks, or text. Inpainting can also be conditioned on different types of information, such as masks, sketches, or text prompts. For example, inpainting can be used to fill in the masked areas of an image, to complete the sketch of a face, or to generate an image based on a text description.

How does inpainting work?

Inpainting works by using a neural network, usually a generative adversarial network (GAN), to learn the distribution of the image data and to generate realistic and coherent pixels for the missing or damaged regions. A GAN consists of two components: a generator and a discriminator. The generator takes as input the incomplete image and the optional conditioning information, and outputs a complete image. The discriminator takes as input the complete image, either real or generated, and tries to distinguish between them. The generator and the discriminator are trained in an adversarial manner, where the generator tries to fool the discriminator, and the discriminator tries to catch the generator. The training process aims to minimize the difference between the real and the generated images, and to maximize the realism and coherence of the generated pixels.
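
Below is a heavily simplified sketch of that adversarial training loop, assuming PyTorch. The tiny generator and discriminator are placeholders for real architectures; the parts worth noting are the mask passed as an extra input channel, the compositing step that keeps the known pixels, and the combined adversarial plus reconstruction objective:

```python
import torch
import torch.nn as nn

# Placeholder networks; real inpainting models are far deeper.
generator = nn.Sequential(
    nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
discriminator = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 1),
)

adv_loss = nn.BCEWithLogitsLoss()
rec_loss = nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(4, 3, 64, 64)                    # complete training images
mask = (torch.rand(4, 1, 64, 64) > 0.7).float()    # 1 = missing region
damaged = real * (1 - mask)                        # zero out the holes

# The generator sees the damaged image plus the mask as an extra channel.
fake = generator(torch.cat([damaged, mask], dim=1))
# Keep the known pixels; use generated pixels only inside the holes.
composited = damaged + fake * mask

ones, zeros = torch.ones(4, 1), torch.zeros(4, 1)

# Discriminator step: tell real images apart from composited ones.
d_loss = adv_loss(discriminator(real), ones) + \
         adv_loss(discriminator(composited.detach()), zeros)
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator while staying close to the original.
g_loss = adv_loss(discriminator(composited), ones) + 10.0 * rec_loss(composited, real)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```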

How can you use inpainting?

Inpainting is an open-source task that you can access and use for free. There are several ways to use inpainting, depending on your level of expertise and your needs.

  • If you want to try inpainting online, you can use the official website https://inpainting.ai/, where you can upload your own images and see the inpainted results. You can also browse the gallery of images inpainted by other users and artists, and get inspired by their inputs and outputs.
  • If you want to use inpainting on your own computer, you can download the code and the model from the GitHub repository https://github.com/inpainting/inpainting. You will need to install some dependencies and follow the instructions to run the model locally. You can also modify the code and the model to suit your own needs and preferences.
  • If you want to use inpainting in your own applications, you can use the Runway platform https://runwayml.com/, where you can integrate inpainting with other models and tools, and create your own workflows and interfaces. You can also use the Runway API to access inpainting programmatically from your own code.

Inpainting is a powerful and versatile task that can help you restore, remove, or create image content. Whether you want to use it for fun, for art, or for research, inpainting is a task worth exploring and experimenting with. Have fun and be creative with inpainting!

FAQ

Q: What are the benefits of inpainting? A: Inpainting can have many benefits, such as:

  • Restoring old or damaged photos, such as removing scratches, stains, or tears.
  • Removing unwanted objects or people from photos, such as wires, logos, or photobombers.
  • Creating new content or variations from existing images, such as changing the background, the color, or the style.
  • Enhancing the quality or resolution of images, such as removing noise, blur, or artifacts.

Q: What are the challenges of inpainting? A: Inpainting can also have some challenges, such as:

  • Preserving the original context and style of the image, such as the texture, the lighting, or the perspective.
  • Generating realistic and coherent pixels for the missing or damaged regions, such as the shape, the color, or the details.
  • Handling large or complex regions, such as faces, text, or objects.
  • Dealing with ambiguous or conflicting information, such as multiple possible completions, or inconsistent conditioning information.

Turn AI Images into 3D Animated Characters: Tutorial

Welcome to this tutorial on how to turn an AI generated character into a 3D animated character. This workflow can be used to create AI influencers, bring a music video to life, or even create a feature film.

Before we begin, you will need a trained model to produce the head shots. You can either follow a tutorial to create your own unique trained AI model, or use the one provided in this tutorial below.

Please select whichever adapter is compatible with your phone, as you may require a different type:
Apple Lightning to Ethernet

Ethernet cable

RESOURCES: Download my Redhead.ckpt model from HERE

Stable Diffusion (Use local or remote)

Step 1: Gather Pose Reference Images

Take some photos of yourself to use as headshot references. These photos will be used to ensure that the output pose of your AI generated character is consistent when it is run through stable diffusion. It is important to note that the reference images do not need to look like the final character.

Step 2: Use Automatic1111 webui (You can use either local or remote- I’ll add a tutorial soon!)

Use the Automatic1111 webui to run Stable Diffusion 1.5. Load your Redhead.ckpt into the models folder within the Automatic1111 directory.
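
If you are unsure where the checkpoint goes, a standard Automatic1111 install (assuming the default folder layout) expects model files at a path like the one below; the folder name is the usual default rather than anything specific to this tutorial.

stable-diffusion-webui/models/Stable-diffusion/Redhead.ckpt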

Step 3: Run stable diffusion

In stable diffusion, select your redhead.ckpt from the drop-down list. Navigate to the img to img tab and upload your front, side, and perspective headshot references.

Step 4: Create consistent images of your character

Run each reference image through img to img with your prompt to create consistent images of your character.

With these steps, you should now have a set of consistent images of your AI generated character, ready to use as reference for the 3D stages below. Be creative and experiment with different poses and expressions to bring your character to life!
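
If you would rather script this step than click through the web interface, the Automatic1111 webui also exposes an HTTP API when it is launched with the --api flag. The sketch below is an assumed illustration (the file names, prompt, seed, and parameter values are placeholders), not part of the original tutorial.

import base64
import requests

# Encode one of your headshot reference images for the img2img endpoint.
with open("front_headshot.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "photo of zwx person, red hair, studio lighting",  # use your own trigger word
    "negative_prompt": "blurry, deformed",
    "denoising_strength": 0.5,  # 0 = copy the input exactly, 1 = ignore it
    "steps": 30,
    "seed": 1234,  # fixing the seed helps keep the look consistent across views
}

# Assumes the webui is running locally on the default port with --api enabled.
response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
result_b64 = response.json()["images"][0]

with open("character_front.png", "wb") as f:
    f.write(base64.b64decode(result_b64))

Repeat the call for the side and perspective references, keeping the same prompt and seed, to get a matching set of views.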

Blender

Use the FaceBuilder plug-in to create a 3D head mesh based on the reference images. This tool is very useful, as the sculpting tools in MetaHuman are limited and can be very laggy. However, this stage is optional.

Step 1: Download and install Blender here (it's free) and the FaceBuilder plug-in by KeenTools here

Step 2: Open Blender and import your reference images

Step 3: Use the Facebuilder plug-in to create the 3D model head mesh

Step 4: Export your head mesh as a .fbx file.
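
If you prefer to script the export instead of using Blender's File menu, a short snippet run from Blender's Scripting tab does the same job; the output path is a placeholder.

import bpy

# Export the currently selected objects (your FaceBuilder head mesh) to FBX.
bpy.ops.export_scene.fbx(
    filepath="/tmp/head_mesh.fbx",
    use_selection=True,  # export only what is selected in the viewport
)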

 

Note: The creator of this tutorial is not paid in any way to promote the Facebuilder plug-in. It is just a tool that they found useful and thought others may also find it helpful.

With these steps, you should now have a 3D model head mesh that is based on your reference images. You can now continue to the meta human creator section to bring your character to life with animations and other features.

Epic Launcher & Unreal

Step 1: Follow this link here to download the Epic Games Launcher and Unreal Engine.

Please avoid 5.1 (the new release) due to compatibility issues with MetaHumans. I'm sure there will be an update soon to fix a few of the issues, but until then I'd advise downloading Unreal version 5.0.3.

Once the above is installed, get Quixel Bridge for Unreal Engine:

https://docs.unrealengine.com/5.0/en-US/quixel-bridge-plugin-for-unreal-engine/

 

Midjourney New Niji Feature https://promptmuse.com/midjourney-new-niji-feature/ https://promptmuse.com/midjourney-new-niji-feature/#respond Fri, 30 Dec 2022 22:01:19 +0000 https://promptmuse.com/?p=956 Midjourney has released a fresh new anime mode named niji. This anime mode has been created in collaboration with games developer Spellbrush, known for their viral internet meme site waifulabs.com. The new Niji mode produces high-quality illustration styles and anime aesthetics within Midjourney. The beta version for Niji was previously invite only, but it’s now open [...]

Midjourney has released a fresh new anime mode named niji. This anime mode has been created in collaboration with games developer Spellbrush, known for their viral internet meme site waifulabs.com. The new Niji mode produces high-quality illustration styles and anime aesthetics within Midjourney. The beta version for Niji was previously invite only, but it’s now open to all to use and enjoy.

Midjourney Niji anime style
Prompt: beautiful anime style girl, red scarf, in desert --niji

This new introduction will be popular with anime fans, making it easier than ever to create many variations of anime and manga styled characters, or to create anime-influenced environments and objects.

Midjourney announced on discord that the new mode features:

  • It has vastly more knowledge of anime, anime styles, and anime aesthetics
  • It’s great at dynamic and action shots, and character-focused compositions in general

How to activate Niji Mode?

The Niji anime mode is currently only available on Discord, via the Midjourney server or Niji's own server. You can use it within Midjourney or invite the bot to your own server. To use Niji mode, simply type --niji after your prompt. Alternatively, you can just select niji from the /settings menu. Niji not only creates beautiful illustrative styles but also amazing 3D art.

Activating Niji in Discord via the settings menu

What does --niji mean?

The name niji can mean rainbow or 2D in Japanese.

Niji features

Using the U1, U2, U3, and U4 buttons, you can upscale your creations. You can also use the V1, V2, V3, and V4 buttons to create variations.

--ar 2:3 and --ar 3:2 refer to the aspect ratios. This is great for cinematic landscape (3:2) or poster-like (2:3) effects. Currently you cannot use any other aspect ratios.
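
For illustration, a full prompt combining these options might look like the following; the subject text is just an invented example.

/imagine cozy ramen shop at night, cinematic lighting --niji --ar 3:2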

Our verdict

It has always been possible to create anime art in Midjourney. Especially since the introduction of Midjourney 4. But what’s great about this new mode is that you can combine image references with an anime style, allowing you to convert objects or people into anime art, which is pretty cool!

Midjourney Niji prompt: cute fluffy cat with headphones on --niji

FAQ

Q: What is Midjourney Niji Mode?
A: Midjourney Niji Mode is an experimental algorithm that is tailored toward generating anime and manga-style artwork. It is meant to work like an entirely separate algorithm, similar to how you can use different versions of Midjourney or apply other experimental features.

Q: How do I use Midjourney Niji Mode?
A: You can enable Midjourney Niji Mode by adding the --niji parameter at the end of the prompt. You can also make it your default algorithm by opening your settings panel and activating the “Niji Mode” button.

Q: What is the best way to use Midjourney Niji Mode?
A: Midjourney Niji Mode works best when it’s used for well-defined objects and concepts that are commonly featured in anime and manga. It is also capable of giving an existing image a more anime-like color palette. Adding “anime style” or “manga-style” to the end of the prompt can also help achieve more convincing results.

Q: How do I use Midjourney Niji Mode?
A: You can add the --niji parameter at the end of your prompt to enable Midjourney Niji Mode. You can also make Midjourney Niji Mode your default algorithm by opening your settings panel with the /settings command and then activating the “Niji Mode” button.

Q: What type of prompts does Midjourney Niji Mode work best with?
A: Midjourney Niji Mode works best when it’s used for well-defined objects and concepts that are commonly featured in anime and manga.

Q: How do I add “anime style” to my prompt?
A: You can add “anime style” or “manga-style” to the end of your prompt to get more convincing results with Midjourney Niji Mode.

Q: What is the best way to use Midjourney Niji Mode?

A: The best way to use Midjourney Niji Mode is to provide well-defined objects and concepts that are common within the world of anime and manga. You can also use it to give an existing image a more anime-like color palette. To achieve the most convincing results, you should also add “anime style” or “manga style” to the end of your prompt.

AI Animation: Tutorial Animate your AI images with a consistent character https://promptmuse.com/ai-animation-tutorial-animate-your-ai-images-with-a-consistent-character/ https://promptmuse.com/ai-animation-tutorial-animate-your-ai-images-with-a-consistent-character/#respond Mon, 12 Dec 2022 16:33:41 +0000 https://promptmuse.com/?p=816 In today’s video, we will be animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer. This technology is very new and is evolving, so it might not look perfect yet, but we’re right [...]

In today’s video, we will be animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer. This technology is very new and is evolving, so it might not look perfect yet, but we’re right at the beginning of something very exciting. In this tutorial, we will be using Google Colab Pro to connect to a remote GPU and create animations from anywhere. All resources for this tutorial are available for free on the Prompt Muse website. Don’t forget to subscribe to our channel and hit the notification bell to stay up to date on our latest videos. Let’s get started!

RESOURCES: Download my model from video HERE

Steps

1. Sign Up for Google Colab Pro and Connect to a Remote GPU: Sign up for Google Colab Pro and connect to a remote GPU in order to access the resources needed for the animation.

2. Download Model File: If you don’t have your own trained model, you can download my model file from the Prompt Muse website for free.

3. Connect Your Google Drive: Connect your Google Drive to the notebook by running the first cell and logging in.

4. Install the Automatic1111 Repo: Run the cell to install all of the necessary packages and dependencies required for you to animate your AI images.

5. Select Model Version and Resolution: If you are not using a trained model, select Stable Diffusion 1.5 or 2.1 and whichever resolution best suits your project (512, 768, etc.). If you are using a trained model, paste the path to your .ckpt file in the notebook and select it from your Google Drive folder.

6. Start Stable Diffusion: Run the cell to start Stable Diffusion and generate a link for you to access your UI either on your local URL or on Gradio app.
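
For reference, the Drive connection in step 3 is normally a single standard Colab cell like the one below; nothing here is specific to this tutorial's notebook.

from google.colab import drive

# Mounts your Google Drive at /content/drive so the notebook can read your model file.
drive.mount('/content/drive')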

FAQ

Q. What is AI animation?
A. AI animation is the process of animating static images using a trained model. This means that you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Q. What do I need to animate AI images?
A. To animate AI images, you will need a driving video, which is a video of you doing some disturbing actions, and a trained model (or you can use the default Stable Diffusion). You will also need Google Colab Pro to connect to a remote GPU and create animations from anywhere.

Q. Are there any resources available?
A. Yes, all resources for this tutorial are available for free on the Prompt Muse website. You can also download my model from the video here.

Q: What is the tutorial about?
A: The tutorial is about animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Q: What resources are needed for the tutorial?
A: You will need a trained model, and if you don’t have one, you can just use the default Stable Diffusion. You will also need a driving video, which is a video of you doing some disturbing actions. All resources for this tutorial are available for free on the Prompt Muse website.

Q: What do I need to do to connect my Google Drive?
A: You will need to run the first cell in the Google Colab notebook, and this will ask you to connect to your Google Drive; just click Run anyway and connect to Google Drive. It will then ask you to log in. Once you’ve got a green tick, you can move on to the next cell.

Transcript:

Today. We are not happy with the static AI images. We are going to animate them. What’s more, we’re going to animate them on a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Let’s not get ahead of ourselves. This technology is very new and is evolving. So it might not look perfect yet, but we’re right at the very start. All started with this humble midjourney vendor and then I took it into something called Dream Booth and it created a trained model, which means I can now create this character into any pose or position. And we don’t want to stop there.

We want to be able to animate this character, which means that the consistency as well as the poses are a lot more dynamic. To do this tutorial, you need a driving video, which is a video of you doing some disturbing actions. Also you will need a trained model and if you don’t have a trained model, you can just use the default stable diffusion or you can use my trained model which is available to download off my website here for free. You know I’m good to you. I also have an alternative method of animation and that will be in the next video.

But I want to show you both ways that are really cool. In this tutorial I’m going to be using Google Colab Pro and what this allows me to do is use a remote GPU that is far, far superior to my rubbish computer. What’s also great about this method, I can connect from my iPad and start creating animations from absolutely anywhere. This tutorial is available on the Prompt Muse website for absolutely free in written format as well. All my resources are free.

What I do ask of you is if you could subscribe to this channel, like and ring the notification bell, that helps me out massively. First method I’m going to show you is the image to image. We’re going to be using the Automatic1111 web UI and you’ve probably seen a lot of these tutorials online where they’re doing it locally. I’m going to be doing it remotely. So let’s get ready and do the first tutorial.

When you open a Google Colab notebook, this is what it looks like. So the first thing you want to do is connect your Google Drive and log in. So we connect our Google Drive by running this first cell here. And when I say Run, you are just clicking on this play button and this will ask you to connect to your Google Drive, so just click Run anyway and connect to Google Drive. It will then ask you to log in.

This just connects your Google Drive into the file structure over here by clicking this file and you will be able to see your Google Drive once that’s done. If I come up here to refresh and go to content, you will see something called G Drive. That’s your Google drive and my drive. And these are all my saved files on my Google Drive currently. I’m just going to close that for the time being.

That has successfully connected because I have a green tick. Once you’ve got a green tick, you can move on to the next cell. Just click play. And this will install the Automatic1111 repo. It’s essentially just installing everything that you need to run this.

It’s not installing it on your PC. It’s all remote. Once this session is over, your Google Drive will disconnect and all this information, it will all disappear. Once you’ve got your green tick, we’re going to move on to the requirements. And again, just play that cell and that will take a few seconds.

We move down to the model download load section. And before we run this, we just want to make a couple of changes. If you have not created a model and you don’t have a file to upload, do not worry, we can just run stable diffusion as normal. You can use 1.5, or if you press that, you get a drop down window. You can select the latest version, which is version 2.1.

And with version 2.1, you have different resolutions. You’ve got 512 and 768. So whichever one suits your project the best. Now, if you do have a model or you’re using my Redhead .ckpt file, you come down here where it says Path to CKPT. This is where we’re going to load in our Redhead model file.

And this is sat on our Google Drive currently. I’ve put that there. You can save yours to your Google Drive as well and just click on this folder and navigate back to your Google Drive and then find the Redhead .ckpt model file. If you are very neat with your structures, you could put it in your AI folder and in models; they should technically all live there.

But I’m quite lazy with my hierarchy. Shoot me. So if we press on the three dots here and go to copy Path, and then we’re going to copy that path by pasting that in. Now you don’t need to touch anything else that’s good to go. We’re going to hit run on that cell and that’s now going to load in our model.

So once that has successfully run, you’ll get this text down here saying using the train model, which is great. The next section is the start stable diffusion. And this is the last section. And then our UI will be ready. I am just going to leave it on model version, stable diffusion 1.5.

And I’m going to use the Gradio server. So I’m going to check this checkbox here and that’s it. We just hit play on that cell. And one word of warning is the cell will continue to run. This is going to be the engine for our UI.

Do not close this browser down at all because that will stop your UI running. So this cell consistently runs. You will not get a green tick. What you will get down here when it’s finished loading is a link to your local path or to the Gradio app where you’re going to be running the UI from. This takes a few minutes to complete, so go and grab a cup of tea and come back and it will be ready.

Once it’s complete, you’ll be getting these two links. You can run it on your local URL or you can run it on a public URL. If you click on either link (I’m running it on the Gradio app), it will load up your UI. And you might have seen this UI when people are running it locally; it’s pretty much the same.

If you go to the top left hand corner, we can see the model we’re using. There is the Redhead CKPT. That’s loaded in nicely. If you’re not using a model, it will have Stable Diffusion 1.5 or 2.1, whichever one you chose.

If we look down here, we’re not going to be using the text to image. We’re actually using the second tab along, which is the image to image. So click on that. And then here we’ve got where we’re going to write our prompt. So what stylization do we want on our animation?

First, I’m just going to load in the first frame of our animation. We’re using our image split out into frames. So I’m just going to click on there and I’m going to select the first frame of our animation, which is this one here. I’m going to write in my prompt. I’ve just written any old prompt in here, but one of the most important features here is that I’ve put painting of ZWX person.

So that ZWX is the trigger to my model to create the Redhead character that I trained my model on. Without that, it won’t give me such a consistent character. You can put whatever you want in the prompt. Just if you’re using a model, remember the word that you trained it on in the instances way back in Dream Booth. So the negative means anything I don’t want to see in the animation.

So I’ve just put the usual blurry, blown out, dust, blood. You can put maximalism whatever you want to put or whatever you don’t want to see in the animation. Pop it in here. It’s going to be a negative. So don’t put no, just put the words you don’t want to see.

So we’ve got our first frame, and if we just come down quickly and have a look at our parameters, so we’ve got the sampling steps. So that’s how long it takes to render each frame and in how much quality you want in each frame and the detail. So the higher the more detail and quality you’ll get. Per frame, but the longer it will take for you to render that frame. So I like to go up to about 100 because I’m using a remote GPU and it can handle that.

Let’s go for 100. So the sampling method is how your image is decoded. I personally like Euler a; you can have a go yourself and just try different ones. But for this tutorial, I’m going to be using Euler a. The width and the height: so the width and the height of your output file.

So my input file is 448 by, I think, 768, if my memory serves me. So that’s the size of my input and that will be the size of my output. So they’re going to match, there’s not going to be any distortion. Restore Faces: so I’m going to check the restore faces box, and if you come up here on your top tab, you can see Settings and you click on that and we can see in the middle here in the column, face restoration.

So if you’d like a different facial restoration, or to load your own in, you can use GFPGAN or CodeFormer or none at all. And you can control the weight of the facial restoration, zero being maximum effect, one being a minimal effect. So sometimes the facial restorers can, especially on a trained model, make them not look so much like the model anymore.

So you just want to get a nice balance there and click on Apply Settings and then go back to your Image to Image tab and we’ll continue with the parameters. So the batch count is how many folders that you have in this batch, I’m going to just create one. You can create multiple, but for this I’m just creating one. The CFG scale is how much you want the image or the output image to conform to the prompt. So the higher the number, the more it will conform to the prompt.

The lower the number, the more creative results you will get. Denoising is another very important parameter. If you set it on zero, nothing is going to change. Your output will look like your input and we don’t want that. So you want to have a nice medium.

I think 0.5 is usually a nice medium. For that. You can go a bit lower if you go too high. I think it takes away from animation. I think a 0.5 is a nice balance here, but you can have a play around and see what you like.

So it combines a little bit of the input and merges it with your model as well as your prompt. Now we come down to the seed. Minus one means it’s going to create or reroll us a new seed. If you’ve got a seed that you’re using, you can put it in here. But it doesn’t matter because we’re just going to see if we can get an image we like.

And once we get the image we like, by generating the first frame, we will save that seed and reuse it using this button, or copying and pasting it in here. With all that done, we’re just going to generate one frame and see if we like the results. This is the result of our parameters and our prompt, and it looks quite good. If you look down here, you can see the seed, but you can also press this button, which means reuse the seed, and it will pop your seed for that image down there. So if you hit Generate again, it will just generate the same image which we want for our animation.

So what you can do is change your prompt or your parameters if you don’t like that, and set that back to minus one and regenerate a different image. What I’m going to do now is just load in another frame and just make sure that’s consistent. So I’m going to click on another frame. I mean, this is not a very dynamic animation. I’m sure yours will be a lot better.

And I’m going to click Generate again and that’s going to use the same seed. Hypothetically, it should look the same as that. And there it does looks great. So it looks very consistent from the first frame. And then just pick a couple and just try it out.

So once you’re happy with the overall output of your image, if you just head over to Batch Image to Image, this is where we’re going to set up the output and the input of our animation. And we’re just going to put the input directory, which is the frames that we’re inputting. So if you go over to your first Stable Diffusion tab over on your browser, let’s open up the Google Drive to get our input files. I’ve already made a folder on my Google Drive with my frames in it, so I’m just going to expand that. And these are all my frames.

I’m going to press the three dots and Copy Path and come back to my Stable Diffusion, and then just paste that path into the input directory so it knows where to look for those frames. Now, if you want to create an output folder, go back to my Google Drive and let’s say I’m just going to put it in out, and then click on the three dots, Copy Path, and then go back to your Stable Diffusion and paste that into your Output folder. Super easy. And your settings are all carried across from your previous image to image. And all you need to do now is press Generate.

It will now save those frames into your Google Drive. So I just took my output files and imported them into After Effects and compiled everything together and removed the background. And this is what I got. And then the next test I did was a low resolution about lighting video of my face just to see what the model looked like. And I guess when you guys come round to it, you would have a much better set up than I did.

So you can see what is achievable in a few minutes’ worth of work. It’s pretty cool. So my conclusion to this video is using my technique of using a model and then putting it through image to image and controlling it with prompts and specific parameters, you get a really nice animation. Now, there are a few artefacts and I’ve got a way to get rid of them. You may have heard of this programme called EbSynth.

We can simply take the first frame of your input, which was this frame, and then the first frame of your output, which is this frame, and run them through EbSynth to get rid of those artefacts in the animation. Now, you can comp this all together in After Effects and get a really, really good outcome. And I’d love to see what you guys create, because you’re going to do something way more creative than I have. Thank you so much for watching.

This is video. And yeah, that will do it. Until next time. Goodbye.

Consistent AI Characters in any pose https://promptmuse.com/consistent-ai-characters-in-any-pose-written-tutorial/ https://promptmuse.com/consistent-ai-characters-in-any-pose-written-tutorial/#comments Thu, 01 Dec 2022 11:39:55 +0000 https://promptmuse.com/?p=541 In this tutorial, we will learn how to train Stable Diffusion with images that currently do not exist. This means you can create any character and train AI to recreate that character in any environment you can imagine. Things you will need: Step 1: Character Design Log into Midjourney. You can use any Text to image [...]

In this tutorial, we will learn how to train Stable Diffusion with images that currently do not exist. This means you can create any character and train AI to recreate that character in any environment you can imagine.

Things you will need:

  1. Google Colab Pro ($8 a month, cancel anytime) https://colab.research.google.com/signup Google colab pro will work as our computer, so you do not need any fancy PC to do this. We will be running this all on virtual machines (It’s super easy!)
  2. Stable Diffusion (AUTOMATIC1111 UI): AUTOMATIC1111 has developed a user interface that can now be installed and run locally on your machine. You need at least 4GB of VRAM to run this, otherwise you will get out-of-memory errors. But do not fear, I will be bringing a new tutorial that enables you to run Stable Diffusion remotely; sign up here to find out when it’s released.

Step 1: Character Design

Log into Midjourney. You can use any Text to image generator. It’s just in this tutorial I so happen to be using Midjourney.

https://discord.com/invite/midjourney

You can use any text to image generator you like; I just chose Midjourney as an example. When you log in, find a suitable room on the left-hand side or make your own. In the bar at the bottom, type in /settings. You can use any of these settings displayed, but again, for the purpose of this tutorial I will be using Midjourney version 4. I switch the Remix feature on as well to get my character close to what I have in mind. See here for more about the Remix feature.

My prompt is:

/imagine head and shoulders shot of Instagram model, orange long hair, hyper detailed --v 4


My aim is to get the character facing forward. In order to get your character facing forward, you can include in your prompt: looking straight on, looking at camera, symmetrical face.

Do not be afraid to use the remix button to adjust your character.

Step 2: Make a video

The video we need to make is called a driving video. Think of it like Texas Chainsaw Massacre: we will be taking our character’s skin and putting it on top of our animation.

You can use any face (male or female) as the driving video, it does not matter. What does matter is that you show an array of facial expressions: sad, happy, confused, shocked. Also be aware not to turn your head too far left and right, but if you do, you can just delete those frames later on, so no biggie.

I used my iphone, you can use webcam or what ever you have to hand.

  • Make sure your video is relatively short (mine was under 20 seconds, and 9.78 MB)
  • Save your video as driving.mp4 and your character’s face image as source.png
  • You can use https://ezgif.com/ to crop and resize your video to 401×412 pixels (or see the script sketch after this list)
  • I matched my character’s face and my face in the video up in After Effects by just moving it around in place, so the eyes, chin and mouth were roughly in the same place.
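
If you would rather script the resize than use ezgif, here is a small, assumed Python sketch using OpenCV. The file names match the ones used in this tutorial, but the snippet itself is not part of the original workflow and only resizes (crop the clip first if you need to).

import cv2

# Resize driving.mp4 to 401x412 so it matches the expected input size.
reader = cv2.VideoCapture("driving.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
writer = cv2.VideoWriter(
    "driving_resized.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (401, 412)
)

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(cv2.resize(frame, (401, 412)))

reader.release()
writer.release()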

Step 3 : Google Colab -Thin Plate Spline Motion

So if this is your first time using google colab, welcome. And if you are a regular user, you probably know more than me. But I think we can all agree to a newbie this looks daunting, it’s not. Use the FAQ and comments section if you have any question about this, I’ll be happy to help.

I advise signing up for the Google Colab Pro account, as you will be allocated a better GPU and more RAM; otherwise you may run into out-of-memory errors. Once you have made an account with Google Colab:

  • Open the Thin Plate Spline Motion Model notebook and make sure you are signed in.
  • Click on the play button. This will run the cell, which installs everything you need on the remote machine.
  • You will get a message letting you know this is not authored by Google, click Run anyway
  • Once a cell has executed successfully you will see a tiny green tick next to the button.
  • We now need to upload our driving.mp4 and source.png.
  • The easiest way to do this is to mount your Google Drive to this notebook (but I understand if, for security reasons, you do not want to do this; please refer to the FAQ for an alternative method).
  • Click the files icon to the left of the screen. This will expand your file panel out.
  • Click on Mount drive icon, as seen in image below
  • Once you have clicked the Mount drive icon, a new cell will appear in your code section of you notebook. Click run (The play icon)
  • You will now be asked to connect your google drive to this notebook. Click connect to google drive, and log into your google drive.
  • Once this cell has successfully executed, you should now see a new file appear in your files panel on the left hand side (it might take a few seconds to appear; if not, you can press the go up a folder icon, which will refresh your folder list). Now navigate to: Content > Thin-plate-spline-motion > Drive
  • Now, go back to your Google Drive and drag and drop your driving.mp4 and source.png into your Google Drive, making sure they are not inside a folder. Right click on each file and click Get link. On the General access drop-down select Anyone with the link and then Done.
  • Navigate back to your Thin Plate Spline notebook and Right click on your driving.mp4 (Located in the left hand file structure) and click Copy Path

Paste the paths into the Step 2 settings:

source_image_path:
/content/drive/MyDrive/source.png

driving_video_path:
/content/drive/MyDrive/driving.mp4

Run cell Step 2

  • The next steps are easy, just run the cells in order and wait for them to complete before moving onto the next cell
  • Once all cells are complete you will have all the assets that you have created saved in the folder structure on the left hand side. All you now need to do is download the upscaled frames (see the snippet after this list for one way to do it). Save approximately 20 of the best frames, showing various facial movements.
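
One assumed way to pull the upscaled frames out of the Colab session in a single download; the folder path is a placeholder, so point it at wherever the notebook saved your frames.

import shutil
from google.colab import files

# Zip the folder of upscaled frames produced by the notebook and download the archive.
shutil.make_archive("upscaled_frames", "zip", "/content/path/to/upscaled_frames")
files.download("upscaled_frames.zip")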

Step 4 : Outpainting

Hey, I’m just finishing this documentation up, so bear with me whilst I work on it. The rest of the tutorial should be up within 24 hours.

How to make a comic strip with AI : Midjourney https://promptmuse.com/how-to-make-a-comic-strip-with-ai-midjourney/ https://promptmuse.com/how-to-make-a-comic-strip-with-ai-midjourney/#respond Thu, 01 Dec 2022 00:16:52 +0000 https://promptmuse.com/?p=486 Step 1: Choose your protagonist Choose a photo of yourself or someone else to use as the basis for your protagonist. This will be the character that appears throughout your comic strip. Step 2: Use Midjourney to create character variations Upload your photo to Midjourney, an AI tool that can generate different versions of your [...]


Step 1: Choose your protagonist Choose a photo of yourself or someone else to use as the basis for your protagonist. This will be the character that appears throughout your comic strip.

Step 2: Use Midjourney to create character variations Upload your photo to Midjourney, an AI tool that can generate different versions of your character based on prompts. Use the “Remix” button to adjust the prompts and create different variations of your character. Use the “Reroll” button to create new prompts and variations if you don’t like the results.

Step 3: Create a consistent look for your character Use the seed number to create a consistent look for your character. The seed number is a unique identifier that can recreate a similar image to your original photo. Use this seed number with different prompts to create different emotions, actions, and backgrounds for your character.
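
Putting steps 2 and 3 together, the text you type into Discord ends up looking something like the line below; the image URL and seed number are invented placeholders for illustration.

/imagine https://cdn.discordapp.com/attachments/.../your-photo.png crying female comic book hero, burning city in background --seed 1234567890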

Step 4: Upload images to Canva Upload the images created by Midjourney to Canva, a free online design tool. Use Canva to resize and arrange the images, and to add text and other design elements to create your comic strip.

Step 5: Use DALL·E to fix any image issues Use DALL·E, an online AI tool, to fix any issues with the images you created in Midjourney. Use the eraser tool to remove parts of the image that you want to reimagine, and then type a prompt to generate a new version of the image.

Step 6: Share your comic strip Share your comic strip on social media or with friends and family. Be sure to tag Prompt Muse on Instagram so they can see your creation!

That’s it! By following these simple steps, you can create a comic strip using AI that is both unique and fun.

FAQ

Q: What is the midjourney prompt generator?

A: The midjourney prompt generator is a program that allows users to create a unique look for comic book characters in just moments. It provides a wide range of features and options to customize the prompts and images generated.

Q: How do I set up my own server on midjourney?

A: To set up your own server, go to midjourney and click on ‘Add Server’. Enter a name such as ‘comic book art’ and you’re ready to go.

Q: How do I specify what type of character I am looking for?

A: You can input keywords such as ‘female comic book hero’ in the prompt to get images of similar characters.

Q: How do I make adjustments to my prompt?

A: With the remix mode, you can make changes and adjustments to suit your needs. You can also upload photos and customize the results.

Q: How do I create an iconic pose for my character?

A: You can create an iconic pose by adjusting the body position and facial expression of your character and considering what type of message you want to convey with the pose.

TL/DR:

Midjourney Prompt Generator is a great tool for comic book artists to create unique characters. Create your own server on the bot and use the remix mode to adjust the prompts generated. Invest in quality references, plan out your story arc, translate the generated prompt into the character, choose colors and assets, add props and accessories, and create an iconic pose to bring your character to life. Following these steps will ensure you have a consistent and memorable comic book character in no time!

 

Transcript

Welcome back to another prompt music video. Today in this video, I’m going to be showing you how to make a consistent comic book character. Not only is this character consistent, it’s also posable so you can finally make that comic book you’ve always wanted to. But if you know your stuff, head down in the description and you’ll find chapters so you can skip ahead because you are smart enough chat. Let’s get on with the tutorial.

First thing we need to do is head over to midjourney and the website is here. Just log in using the link in my Description and Create account, you can actually create 25 free generations or free images without having to pay anything. But let’s go ahead and connect to the discord. So when you connect to midjourney discord, this is pretty much what you see. It’s a bit of a mess, but if you go over to one of the newbie rooms here, it gets even more messy with loads of people generating their prompts.

What I like to do is create my own server. So I just go over here on the left and click Add Server. And I’m going to go to create my own and for me and my friends, and I’m going to just call this comic book art.

Feel free to add a photo, but I’m not going to do that for this purpose. I’m just going to click on Create and you now have your own room. I’m just going to go back to the Midjourney server, which is this little boat icon here. And it doesn’t matter what room you’re in, you just need to find the Show Members List near the top. You’ll see this Midjourney Bot.

If you left mouse click on that Midjourney Bot and click Add to Server, you can simply add it to your new server, which mine was comic book art. This brings the prompt generator over to your private room.

I’m just going to click I am a human, authorised. And then over here, go back to my new room that I just created. You can see here the Midjourney Bot just slid into your server. You can’t do this in the newbie rooms, but if you kind of want to hide your photo from everybody and your work, you can do it over here. Bear in mind, this is still in the public domain, so it’s not private.

You can add slash private onto the end of your prompt, but that’s going to cost you $20 every time you do that. Only upload photos that you’re comfortable having in a database that you have no control over. Obviously, my face is already out there, so I have no qualms at all with this whatsoever. First thing we need to do is switch to Midjourney 4 and switch the Remix feature on, which allows you to edit your prompts after the render. It’s a pretty cool new tool.

To get into your settings, simply type Settings and Return on the keyboard and you can see all the different versions of midjourney and fourth being the latest. So midjourney four is currently in Alpha version so you can’t adjust the aspect ratio because the full version isn’t actually released yet. But I like to use it because it’s really, really powerful and gives me the best renders for my comic book. You might want to consider using midjourney version four if you want to change your aspect ratio. And I believe in test photo and test, you can’t actually upload your own photo.

So today I’m going to be using mid Journey four and it’s highlighted green. If you click on Remix mode, that will activate the Remix feature. The thing we need to do to upload our photo is go to this big white cross and click on that and then click on upload a file from the popup box and then find a photo of yourself. And this is probably the worst photo of myself and I keep using it for these tutorials. But never mind.

If you hit Return twice on your keyboard, that will then create a link and it will be put somewhere on the discord server. So we need to get that link and to do that just click on the image and then right click on the image that it shows you in the middle of the screen. And from the popup box, click Copy image link. So you need to ensure it’s copy Image link that you’re selecting because we need that link or address to where that image is. So that image is now hosted on the server.

So we can now type it into my prompt. So what I want to do now is create a baseline character. So like a hero character using qualities from this photo. So it will probably pull out my dark hair and the blue top I’m wearing. All we need to do is paste our link into our prompt.

So we’ll create our prompt by typing in forward slash imagine and then CTRL V on the keyboard and that will now paste your link. Now I want to make the baseline for my comic book Hero and I need to indicate that it’s female. So I’m going to type in female and then comma and then put Comic Book Hero. So this is going to render four iterations of images. So here you can write anything you want.

I just want a female and a comic book Heroes. So it’s going to give me that as a stylization. That’s a really loose term but you can be more specific. You can type in by a certain artist’s name. There are a tonne of styles that you could use within your comic and here’s some inspiration just to get you started, if you can’t think of any.

But I’m going to keep it nice and simple for this tutorial. I’m going to hit Return on the keyboard and that’s going to create me the four generations. So this is what Mid Journey has given me. It’s given me four iterations based on that image I uploaded. In fact, the photo I uploaded wasn’t best because it kind of blurred out my features slightly with the over exposure.

If you wanted quite specific features, I would use a more detailed image. But here I’m just going for a generalisation. I’m going to go with the iteration number one. So if you look down here, you’ve got U one and U two, three and four and same V one and V two and three and four. The U stands for upscale, so that’s increasing the resolution of the image and the V mean for a variation.

Because we have the Remix button activated, this means that we can adjust the prompt slightly so we can change their hair colour or make the character smile. We’re going to be using that in a minute, but for now, I just want to upscale the iteration one. If you didn’t like any of these versions that my Journey were giving you, you can press this reroll button and that will create the prompt again and just a different seed and variation, so you can hit that as many times as you want. You will also notice it’s pulling the background colour as well, so it will be pulling all the different lighting elements and the colours and the backgrounds from your image that you upload. But we can adjust this in a moment if you stick with me.

This is our hero and we can now make variations, which is the same as this button here, that will enter us into the Remix mode. We can now add Smiling female comic book hero and that should now make our character smile. This isn’t precise at the moment, we are getting to that point, but just bear with the technology because it’s evolving every single day. So we’re kind of a little bit hacking around slightly. But don’t worry, I’m sure it will get to the point where we can actually manually pose our character with an armature, which will be very cool.

For now, we’re kind of just stuck with this, but honestly, it’s changing day by day and if you are not subscribed to the channel, this is a good time to subscribe. So as you can see, that’s now made our main character smile and that’s pretty good. You can tell it’s the same person and so you can actually express expression from your character. So what I’m going to do now is make another variation from that and have her crying with a Burning City in the background. That was pretty much the prompts I used in the comic strip that I showed you at the beginning.

So I’m going to click on make variation and I’m going to put Comma Burning City in background. You could probably get way more creative than I am here, but I just want to keep it simple. For the tutorial, I’m going to put Crying. I know it seems really traumatic, but I couldn’t really think of a storyline and that’s most of the work is thinking of a creative storyline and a character. So pre plan this all out before you get into Mid Journey.

And it’s not a fine art. You have to remember you are the creative director in this. You aren’t the artist. The artist is the algorithm. So you’re pretty much at the mercy of the algorithm here and what it’s going to give you.

You can use prompt engineering to give you better results, but it’s not going to give you the exact results you want. And if you’re thinking about prompt engineering while this is rendering, go to www.promptmuse.com, and I have all different types of prompt engineering or promptcraft that you can use on that page. Anyway, let’s get back to it. As you can see, I’ve got my character now crying; oh, her face is on fire.

See what I mean? It’s not an exact science, but you can see she’s now upset with the Burning City in the background. It’s actually pulling the white through from the original image still. And it’s not really looking that much like me from the original photo. So this is what I do to get it looking more like the original image.

I’ll get the seed from this image and a seed is a unique Identifier, so that will recreate something similar to that image. Not the same, but very similar. So each image created will have a seed. And if you reuse that seed, it will create a more consistent image along the way. I’m going to click on Add Reaction and I’m going to write Envelope and click on this envelope here.

And what that will do is send your seed number to your inbox. So if you go to your inbox or direct messages up at the top here with a little discord symbol. And if I go to Mid Journey Bot, you can see it sent me the seed number for that image. So I’m just going to copy that seed number and then go back to my comic book art server where I was before. And I’m actually just going to paste this into a notepad over here because I’m going to use that in a second.

So I want to go back up here, use the link that was this Identifier for this image here, and I’m going to type in my prompt imagine. And then I’m going to put that original image photo link and I’m going to write Crying Female, Burning City in background. And what I’m going to do is I’m going to now just copy that seed number. You need to put a space between seed and the number. And so this is taking my original photo, giving it a prompt, and then attributing a seed from that image.

So it should be in that style. So this should give me an image closer to my original photo. So as you can see, there’s different paths and ways you can construct an image to be consistent. This is what I like to use to create consistency throughout my comic. So this is what midjourney has now given me with the seed.

So I think this is a really nice combination of the original photo and this character stylization. And you can now use this with any prompt in your comic. Let’s type in a new prompt. Imagine.

And I’m just going to copy the image link. So this is a link to the photo. And I’m going to put female happy, dancing in forest. And then I’m going to add this scene here.

OK. And then we’ll just see what that gives me. But it should give me my consistent character within any scene. So you can see now midjourney has created that same character which is based off my photo in a forest happy and using that same seed. So now we can literally place that character in any action shot with the blue top and the same scraggly hair as mine and create a comic.

It’s that simple. So what I’m going to do now, I did a test on creating different poses using a photograph. So I’ll just go back to uploading an image and I did all these really, really awkward poses. I’m going to choose this one on the phone. I actually found it quite hard to get Midjourney to do the fingers right for me on the phone using prompts.

So I decided I’d try and do this. I’ll click on the image and then right click, copy image link. We did this all at the beginning, and then put in the prompt and then type in female. We can use that same seed actually: female looking at phone, burning city in background. And then I’m just going to copy that same seed.

So it should give me the same stylization that I’m after but with a different pose. And there you have it, that same character that was in the above images using the seed with the pose of looking at the phone, which is based off that image. So you can see my character is pretty consistent throughout all the images. And you can do different emotions. You can do close up to the eyes.

You can import your own images or your own poses, as you can see here. I actually took a video of myself in all the different poses (I felt really awkward doing this) to help with the prompts where it wasn’t getting the pose exactly as I wanted it. So now let’s go over to Canva. So this is Canva.

It’s an absolutely free tool. There’s no need to sign up to the pro package to do this. All you need to do is go to Uploads here on the left hand side and go to Upload files and select the files that you’ve saved from your midjourney project. So you can just simply drag and drop the images into the boxes. You can resize the boxes.

It’s so simple. I don’t think you need a tutorial on it whatsoever, but I can do that if you need it. But there you can see some images that I added to the comic. This is a really rubbish comic, by the way. I’m sure you can do way better than this.

So, yeah, that’s it. There’s no postproduction on there, where you can see the hands that have gone wrong or there are certain details that you might not like. You can use the outpainting feature in DALL·E, so that’s kind of postproduction. But I’m going to show you that now while you’re here. So this is DALL·E and probably the quickest way to fix hands.

You get 50 free credits if you sign up, and on the top right hand corner, if you click on that round icon and click on Open Editor, you can upload your images to try and fix them. So click on upload image and I will go with this one here because the hands are a bit wonky. So you can choose this eraser tool down here and just erase the part of the image you want to reimagine. And I’m just going to type female holding mobile phone to head and see what that generates. Sometimes the generations are a bit iffy and it also will generate outside this area in all this checkerboard area, or reimagine what it thinks should be there.

I don’t really need that because I just want the square image. But you can always crop that down in Canva. It’s not a problem. So, as you can see, the hands are slightly better. They’re not perfect.

You could probably do a better job in Photoshop, but to be honest, that would do the job absolutely fine. So I’m just going to click Accept on that. If you don’t like the generation, you can press cancel and just reroll again and that will give you another set of variations. But that will cost you credits. So click Accept and I think I’m down to zero credits now.

Again, it’s not my account, it’s my brother’s. Sorry about that. But you can see you can get a bit of a better fix to the hands and it’s not perfect. If you know how to use Photoshop, I suggest going in there and doing it in there. But this video was not about postproduction, this was about creating a quick comic strip book for fun and just to test ideas around it.

There’s no way how I would do it professionally, but this just shows you how the development of the algorithm can speed up the art process massively. And that might be a good thing and that might be a bad thing. I’d love to know your opinions in the comments below. I’ve also got a website with prompt crafting on as well so that gets into more detail about how to get the best out of your prompts. I have not shown that really in this video because I wanted to keep it nice and light and simple.

But if you go over to promptmuse.com, you will also see a directory of all different generators, as well as all my tutorials in written format. I’m currently updating that at the moment, so there’s loads more information to go on there. And if there’s anything you think I should add or I’m missing, please email me and let me know, because I would like to make this a place for our community and hopefully at some point put a forum on there. But thank you so much for watching this video. I hope it has helped you and maybe giving you the confidence to use AI because it’s so simple.

Guys, if you can use Google, you can use this. I would love to see your comic strips as well. So if you want to send them to me, just go to Prompt Muse on Instagram and send me a DM. I’d love to see what you’ve created using this technique. There are so many different techniques out there, but this is the one I’ve chosen to show you today and I hope it works well.

Thanks very much and that will do it. Bye.

Why Midjourney 4 Has Just Gone Viral https://promptmuse.com/why-midjourney-4-has-just-gone-viral/ https://promptmuse.com/why-midjourney-4-has-just-gone-viral/#respond Thu, 01 Dec 2022 00:13:53 +0000 https://promptmuse.com/?p=482 Two. No, I spent hours and hours and hours in midjourney four. The results are quite frightening. So you might be wondering what’s new in this mid journey. It’s an alpha version. The full version has not been released. They have been building this for nine months with help from yours truly, you and you [...]

No, I spent hours and hours and hours in Midjourney 4. The results are quite frightening. So you might be wondering what’s new in this Midjourney. It’s an alpha version.

The full version has not been released. They have been building this for nine months with help from yours truly, you and you and everybody who has used the Discord server. They’ve been taking all this information and seeing what humans actually really like. There is way more information and knowledge in this model, which means there are more creatures, people, places and way more intricate detail within those images than we have seen before. They’ve been combining all that with new AI architecture and have built Midjourney 4. One of the main features is that it handles multiple subjects as well as multiple people.

You can put multiple people in a scene interacting with each other, which is insane, and it works really well. The only issue is that we still have that nightmarish hand problem where the hands all merge into each other and have six or seven fingers. They’re probably working on that, and I imagine it will be fixed when they release the final version. It’s the same with objects: you can put multiple different objects in a scene and use prompt layering to direct each one. It also supports chaos, so you can set it from zero to 100. So this is the alpha version.

It’s the introductory version, and as I was saying, the resolution and quality are not the same as in previous versions because they’re holding that back for the final release. You cannot currently change aspect ratios; again, they’re still working on and tweaking that.

So you’re stuck with the same aspect ratio, and the four quads can only be upscaled slightly for a better version; you’re not going to get a high-quality render in a different aspect ratio out of it. Basically, you need to expect quirks and inconsistency, this being an alpha version, so if there are errors and things going wrong, hopefully those will be fixed in the full version 4 release.

And this is straight from the horse’s mouth, quoting Midjourney: they’ve said version 4 isn’t the final step but their first step, and they hope we all feel it as the new beginning of something deep and unfathomable. They are moving forward to the full launch, whenever that will be. There is no date set for that, but for now we can play around with this alpha version.

And it’s looking really promising. So come with me and I’ll show you some things I created. Let’s have a look at these. This is what I found really interesting: if you sell prints on Etsy or eBay or wherever, you’re probably used to going to Shutterstock or Getty Images, to name a few, to get your mockup images.

Now, I actually create mockup images in Blender, but here I’ve just typed in ‘A4 white frame on wall, focus on frame, in the stylish home, realistic’. In some of these images it’s given me artwork inside the frame, which is not a problem; I can remove that very easily in post-production. Now, this test was quite interesting. Prior to version 4, I had some big issues with motion and action shots.

It was adding extra limbs and I was having to remove them with the Remix feature, and it was all a bit cumbersome. Now they seem to have nailed the action shots, and I loved these. I wanted to see what the photorealism was like, so I did the plus-size model photoshoot, and look how beautiful these images are. If I just click on one... I mean, that looks almost real.

That’s insane. To be honest, if this image was in a magazine, I wouldn’t be able to tell you that it was created with AI. That’s insane. Next, animal photography. This is using the multiple-subject prompt.

So I’ve put in ‘a Dalmatian dog, a golden retriever and a Jack Russell sat together, pet photography, ultra HD, focus on dogs, blurred edges, super detailed’, et cetera, et cetera. And yeah, this one has only given me one Dalmatian for some reason. But that’s because I actually spelled Jack Russell wrong.

That’s why it was just trying to put another dog in, because my spelling is atrocious. Who doesn’t want to dress their dog up as Baby Yoda? You can put outfits on your dog with the algorithm. Moving on to a kind of Octane Blender render.

I just went for a fairy character, funny, Octane render (Octane being a 3D renderer). So when Midjourney eventually comes out with its 3D programme, I suspect these will be built with polys, which is insane, and I’m looking forward to seeing that. But yeah, it’s generated something that you would expect to see on the front of 3D Artist, if that’s still around and you’re old like me. Going back to the photography, I wanted to try out a little girl dressed up as a fairy.

Photography, ultra HD, and this is what it’s given me. Then again, you could upload a picture of your child (I wouldn’t be uploading a picture of my child to the internet, but there you go) and add this style onto them.

This is a ‘3D isometric withered room, within Blender’. And again, it’s delivered; it’s given me what I asked for, frighteningly fast and accurately. And again, you can change the details in this image with the Remix feature. I then changed it to be a ‘3D isometric dungeon room, Blender’.

And there you go.

I’m still on the 3D theme. I went for ‘3D low-poly game hut, Blender’. Again, I’m using really short, to-the-point prompts, and they’ve said that you can add more detail and multi-layering of prompts, which is really cool. But yeah, it’s giving me Pinterest-level 3D game art results, which is crazy. I don’t think many industries are safe from this.

I know David Holz has said that he’s working on a 3D programme. With another click of a button, you’d be able to get the poly mesh for these game assets, which is insane. Next: oil painting of a blackbird, bold brush strokes. And again, I wouldn’t be able to tell you that an AI painted that.

I just did a few tests on the bold brushstrokes as well, to get that stylization. What will be cool is that you can upload a picture of your dog or yourself and add this style to your image, and it gives a very satisfying look. I know Midjourney had been looking at what people were actually creating in Discord, and which pictures they liked and which they didn’t, and this taught the algorithm what was pleasing to the human eye. I think they really have got this right in Midjourney 4.

I then moved on to watercolour, like loose watercolour. So this is ‘watercolour zebra, loose brushstrokes, watercolour’. I repeated it because I wanted to emphasise the fact that it was watercolour; sometimes if you don’t put it in multiple times, it won’t create that loose, watery style. I mean, look at the top-right one. That’s insanely beautiful.

This is really good. This is better than I could paint, which is worrying. This isn’t a tool at all; this is a replacement. So next I wanted to see a bit more detail.

So I put in ‘a felted mouse holding a flower, cute, super detailed’. Midjourney is serving us, guys, on that handmade theme. Again, I’m looking at intricate detail here.

I did an embroidery-styled needlework of a deer and a fox, and this is what it gave me. Look at the details; you can see the actual embroidery. Again with the watercolour style, I tried to recreate an Etsy birthday card sort of thing.

And as you can see, ‘watercolour and ink, Georgia with a party hat on, and baby’; that’s what it gave me. I then tried a Beatrix Potter-like, I guess children’s-book-style artwork: ‘a fox dressed in clothes, by Beatrix Potter, watercolour style’. And bam, it gave me that. Look at the detail of the clothes.

It’s amazing. Again, Midjourney is a better artist than I am. I then looked at a trending image on Etsy and recreated it in Midjourney, and it’s far enough removed from that image on Etsy that I could probably copyright this and resell it. So that’s a black-and-white photo of an elephant, but photorealistic.

Yes. Thank you very much, guys. And on that note, I have 500 subscribers. I don’t know why you guys are subscribing, but I like you very much for doing so. The reason I love doing this channel is that it can help open up the AI community to those who are unaware of it or do not have much knowledge of digital art and AI art.

And that’s what I’m here for. I actually based my career on learning from YouTube videos. I know. But I managed to forge a career in it somehow. And that will do it for now.

Bye.

Prompt craft for Midjourney – Cheat Sheet https://promptmuse.com/prompt-craft-for-midjourney-cheat-sheet/ https://promptmuse.com/prompt-craft-for-midjourney-cheat-sheet/#comments Wed, 30 Nov 2022 22:56:59 +0000 https://promptmuse.com/?p=466 I have compiled a list of prompts and prompt modifiers for Midjourney, that you can experiment to improve your prompts. Sometimes the simplest, shorts prompts work well. When you are looking for a specific render the below prompts can assist you getting closer to the render you desire. Basic prompts to use in Midjourney /settings This [...]

I have compiled a list of prompts and prompt modifiers for Midjourney that you can experiment with to improve your prompts. Sometimes the simplest, shortest prompts work well. When you are looking for a specific render, the modifiers below can help you get closer to the render you desire.

Basic prompts to use in Midjourney

/settings This prompt activates the settings menu, where you can switch between Midjourney versions, change stylization, render speed, etc.

/info Will show your own account information, such as your subscription plan.

/subscribe Generates a link to Midjourney’s subscription page.

Text Weights --iw

Weights can be set to add importance to certain words. The weight runs from 0.1 to 5.0 in order of importance, with 5.0 being the most important part of the prompt, e.g.:

A beautiful girl --iw 0.5 wearing red dress --iw 5

The default weight setting is 0.25

--iw can also be used as an alternative to ::
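For comparison, the same idea written with the double-colon separator might look like this (the subject text and weight values are only illustrative):

A beautiful girl::0.5 red dress::5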

Seed --seed

By default, Midjourney will use a random seed to generate a render. If you like the particular style of a render and wish to replicate a similar style, you can set the seed to the same number.

To find the seed number of a render, react to the image with the envelope emoji; the bot will reply with the seed number of that render for you to use elsewhere.
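For example, to re-use a seed you have noted down (the number here is just a placeholder):

A beautiful girl wearing red dress --seed 1234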

Stop --stop

Adding a --stop value stops the render at a certain point in the rendering process. The stop value runs from 0 to 100. A lower stop value can be used to create a more stylistic, looser-looking image.
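For example, stopping the render at 80% for a softer, less finished look (the value is just an illustration):

A beautiful girl wearing red dress --stop 80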

Chaos --chaos

Chaos adds more randomness to your Midjourney render; it allows Midjourney more freedom with your prompt. The chaos value runs from 0 to 100, with 100 giving your image the most chaos.
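For example (the value is just an illustration):

A beautiful girl wearing red dress --chaos 60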

Negative Prompts --no

If you are creating a mountain render and you do not want trees, simply adding --no trees at the end of the prompt will remove them.
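For example (the subject here is just an illustration):

A mountain landscape at sunrise --no trees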

Aspect Ratios --ar

Aspect ratios can have a dramatic effect on your end results. Midjourney changes the composition of your prompt to fit the aspect ratio, so it is important to set it appropriately.

If an aspect ratio leaves too much room, Midjourney has a habit of adding another head to fill the dead space.

A beautiful girl wearing red dress --ar 9:6

Popular Midjourney Aspect Ratios:

1:1

4:3

16:9

9:6

Wallpaper

Image References

There are two ways of doing this: using an image reference, or referencing a person or artist. I personally feel it’s unfair to reference someone’s work, but that is up to your own discretion; I’m just teaching the technique.

Midjourney’s image datasets are not infinite, so it may not recognise an artist’s name or style. To combat this, you can use an image of their style in the prompt.

You can add an artist’s name to the prompt to get a style similar to theirs.

Too many details

Sometimes an additional explanation of a word is required. For example:

A face of a woman

A beautiful symmetrical face of a woman will yield different results. As humans, we understand that a symmetrical face is conventionally seen as more attractive (rightly or wrongly).

--no is called a negative prompt, so if you want a field with no trees, try adding --no trees to ensure there are none in the final render Midjourney produces for you. It is also good practice to use a number to describe multiples: rather than ‘a group of witches’, it is better to specify the amount:

3 Witches

How to invite Midjourney bot to your own Discord server https://promptmuse.com/how-to-invite-midjourney-bot-to-your-own-discord-server/ https://promptmuse.com/how-to-invite-midjourney-bot-to-your-own-discord-server/#respond Wed, 30 Nov 2022 21:10:35 +0000 https://promptmuse.com/?p=327 Hello and welcome back to another prompt news video. Today I’m going to be showing you how to get out of the newbie rooms and invite the midjourney box into your own Discord server. Not only that, I’m going to also show you how to invite it into your messages, which means that you can [...]

Hello and welcome back to another Prompt Muse video. Today I’m going to be showing you how to get out of the newbie rooms and invite the Midjourney bot into your own Discord server. Not only that, I’m also going to show you how to invite it into your direct messages, which means that you can get away from other people within the Discord server and have more of a private room. It can get a bit cluttered and confusing in the newbie room, and people start literally copying your prompts as you write them, which you might not mind at all, but it can be a bit annoying when you’re adjusting something and somebody is copying and pasting your prompt in front of you.

You can only invite the Midjourney bot to your Discord server if it has fewer than 30 members. At the end of this video, I’ll show you how to set up your own Discord server, because I’m assuming that you may have one already, but you might not. It’s super simple and I’ll show you that at the end. So we find ourselves in the newbie room; it was a good place to start and get inspired by other prompts, but you want to get a bit more personal with the Midjourney bot. So I’m currently in the newbie room in the Discord server.

What you need to do is find the Midjourney Bot over here on the right-hand side and just left-click on it; as you can see, it’s got an ‘Add to Server’ button. It’s so easy. Just click on that and this dialogue will appear. Navigate down to ‘Add to Server’, select the server you want to invite the bot to from the drop-down box, and click Continue.

I’m just going to authorise all of that, and that has now authorised. If you close that window, we need to navigate over to our own server. Just click on your own Discord server in the top left-hand corner, and you can see I have a wild Midjourney Bot in here. So you can just type in /imagine, oh my goodness, ‘a cup of steaming tea, watercolour’, and you have all the same great controls that you do in the actual Midjourney Discord server.

So that’s one way of getting your own private room. And when I say private, your prompts aren’t technically private; people can still find your prompts in the community feed. You have to pay an additional fee to have private prompts, and to activate it you type /private in the bot channel. But be warned, that will cost you money.

I think it’s like $20. And also, with all the prompt-reversal software out there, to be honest there’s no point in hiding your prompts unless you’re, I don’t know, super commercial or something, or worried about the copyright of an image. The Discord server is super, super slow today, so that’s going to take a while, but believe me, that works. So now I’m going to show you how to invite the Midjourney bot into your messages, into your inbox.

And from there you can then use your prompts. Just so you know, to get to your inbox, your direct messages are up here in the top left-hand corner. But first we need to invite the bot over there, so head back over to our Discord server. On the right-hand side, under all these names, you can see the Midjourney Bot is just chilling out there. So what we need to do is right-click on the name and select ‘Message’ from the pop-up box.

You’re now working within your inbox and you’re just messaging the Midjourney Bot directly. I personally usually just direct message the Midjourney Bot rather than bring it into my own server. I don’t know why; it’s just personal preference. I hope that has helped you today. It’s very simple and easy.

And I was also going to show you how to set up your own Discord server. Discord is essentially a forum chat room for active users, so if you want to invite your friends or start a community, it’s a great place to do that. And it’s so easy to set up your own Discord server. Just go over to the left-hand sidebar, click the plus icon here (‘Add a Server’) and then you can create your own.

Then you can just go through the settings there; it will give you some cool, self-explanatory options. So: ‘For me and my friends’, name it ‘Prompt Muse friend server’, and hit Create. And there you go, you’ve got your server. You can change the icon for your server at any point as well.

Usually somebody might have about 20 different servers here, or groups or communities that they’re in. If you have any questions, just pop them in the comments below and I’ll be happy to get back to you. And please, please: I’m a brand new channel and I’m really, really trying to bring you guys as much information on Midjourney, DALL·E and Stable Diffusion as I can. I come from a background as a professional 3D artist and I’ve always worked in the tech and digital field in regard to art. I would love you guys to subscribe and hit that notification bell; that is really, really important and helps grow this channel, because I’m tiny and I just do this for fun, and it makes it even more fun when there are loads of people and a bit of a community going on.

I don’t like gatekeeping any secrets to do with AI art, and I would love to bring you guys aboard, so suggest videos that you would like me to do or anything you would like me to look at. I would like to look more closely into animation within Stable Diffusion next, so that is going to be the next few videos. But I’m also going to add a few titbits of information that help you a bit, like this video, that you might not have known about.

Anyway, that’s enough babbling from me. Thank you so much for joining me. Have a great day, whatever you choose to do. And that will do it. Bye.

How to use the Remix Feature in Midjourney – Groundbreaking for ai art #midjourney #aitutorial https://promptmuse.com/how-to-use-the-remix-feature-in-midjourney-groundbreaking-for-ai-art-midjourney-aitutorial/ https://promptmuse.com/how-to-use-the-remix-feature-in-midjourney-groundbreaking-for-ai-art-midjourney-aitutorial/#respond Wed, 30 Nov 2022 20:05:42 +0000 https://promptmuse.com/?p=242 Hello and welcome back to another prompt news video. Today, I’m going to be showing you something. Seriously, call me. Journey has just released. It is the Remix button, and it will allow you to edit prompts to recreate or remove particular elements. Introducing the New Remix Button: A Breakthrough in AI Art Creation Okay, [...]

Hello and welcome back to another Prompt Muse video. Today, I’m going to be showing you something seriously cool that Midjourney has just released. It is the Remix button, and it will allow you to edit prompts to recreate or remove particular elements.

Introducing the New Remix Button: A Breakthrough in AI Art Creation

Okay, well, that’s enough of a rubbish explanation of it. Let’s see it in action. Head over to your Midjourney Discord server, type /settings and hit Return, and that will bring your settings dialogue up. If you look towards the bottom, you’ll see this Remix button with the dice next to it. This is brand new.

They’ve just added this. Click on it so it’s selected and highlighted green. With the Remix button now activated, let’s put it to good use. So I’ve created this piece of work here, and she’s got three arms. Now let’s use the Remix button to remove the extra arm but keep the seed.

Currently, without Remix, I can’t do that; I just have to create variation after variation until hopefully it resolves itself. But with this new Remix button, I can actually remove that extra arm and keep the seed the same. So if you go down to this ‘V1’ (variation one) button and click on it, now that the Remix button is activated, a dialogue box will come up. The interesting part about this box is that it’s got your prompt in it, and you can now fix your prompt.

Using the Remix Button to Modify Prompts for Accurate Images on Midjourney

I’m now going to edit this prompt to say ‘anatomically correct, girl running with arrows’, and I’m also going to add ‘--no extra arm’. Hopefully this will work. If you don’t know what the double hyphen ‘no’ means: it removes an element from your prompt. So let’s say you’ve got three eyes on the face; if you put ‘--no third eye’, it should remove that extra eye.

So that’s good to know. By putting ‘anatomically correct’ at the front of my prompt, I’m telling Midjourney this is vital information and the image needs to be correct. And having the ‘--no extra arm’ at the end of the prompt reinforces that I just want two arms.

Revolutionizing AI Art with the Midjourney Remix Button

Okay, so we submit that and we’ll see what happens. I’m on relaxed mode because I’m very poor, so if you just bear with me, we’ll pause the video to see if Midjourney Remix actually does what I think it’s going to do. And if it does, it’s going to be really cool. Okay, so that image has now rendered.

So let’s open it up. So this is the image with the additional prompts now added. And honestly, I’m here for this. It has removed the additional limb. That is so cool.

Guys, I can’t even. These developers are working nights, I can assure you. So this is what I started off with: three limbs. And using the Remix option, I managed to remove the limb and keep the seed. So this means you can change hair colour, environmental backgrounds, eyes and so on, on previously created prompts.

So this is giving us the first real control over our AI art. And the reason I’m so excited, I’m actually tingling, is because this changes the game. Before, we were kind of given whatever the AI gave us, but now we can control that art. I’d love to see how you guys integrate this Remix button into your prompts. I hope that little addition has made your day, because it certainly has made mine.

And as ever, that will do it. And until next video, thanks very much. Bye.

 

How to fix AI art faces : Midjourney, Stable Diffusion, Dalle 2 https://promptmuse.com/how-to-fix-ai-art-faces-midjourney-stable-diffusion-dalle-2/ https://promptmuse.com/how-to-fix-ai-art-faces-midjourney-stable-diffusion-dalle-2/#respond Wed, 30 Nov 2022 20:03:41 +0000 https://promptmuse.com/?p=238 Have you ever taken a mid-journey photo using AI-generated art only to find your image looking like a wonky mess? Fear not, we’ve got you covered. In this guide, we’ll show you three methods to fix those facial features, smooth out skin and achieve a more harmonious image. Method One: Arc by Tencent Arc by [...]


Have you ever generated AI art in Midjourney only to find your image looking like a wonky mess? Fear not, we’ve got you covered. In this guide, we’ll show you three methods to fix those facial features, smooth out skin and achieve a more harmonious image.

Method One: Arc by Tencent

Arc by Tencent is a simple and free in-browser app that can be used without an account or any registration. Follow these steps to use it:

  1. Navigate to Arc and hit the “Upload” button.
  2. Select the AI-generated image that you want to fix from your files. The upload process may take some time to complete.
  3. Use the app’s “before and after” feature to adjust the settings until you’re happy with the outcome.
  4. Click the “Download” button to retrieve the finished image.

Note: This method may not be suitable for illustrative or textured images, as it makes them appear photorealistic.

Method Two: GFPGAN

GFPGAN is a program that can be run on Google Colab. It’s free to use and can be saved to and run from your Google Drive account. Here’s how to use it:

  1. Go to the GFPGAN Colab notebook and click on “Connect”.
  2. Complete each of the five steps by clicking the “Play” button next to each one.
  3. Wait for the upload to complete.
  4. Click on “Visualise” to see the finished image.
  5. Click on “Download Results” to download the final image.

Note: This method is slightly more complicated and requires a Google account to use. However, it produces high-quality results.
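If you’re comfortable with a little Python, you can run the same face restoration locally instead of through the Colab notebook. The snippet below is only a rough sketch: it assumes you have installed the open-source GFPGAN package (pip install gfpgan) along with OpenCV, and that you have downloaded one of the pretrained models from the GFPGAN releases page. The file names and paths are placeholders, not part of any official workflow.

import cv2
from gfpgan import GFPGANer

# Placeholder paths: point these at your own model file and image.
restorer = GFPGANer(
    model_path="GFPGANv1.3.pth",  # pretrained weights, downloaded separately
    upscale=2,                    # also upscale the whole image 2x while restoring
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,            # leave the background untouched
)

img = cv2.imread("janky_face.png", cv2.IMREAD_COLOR)

# enhance() finds the faces, restores them and pastes them back into the image.
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)

cv2.imwrite("restored_face.png", restored)

As with the notebook, the output leans towards a photo-restored look, so it suits photorealistic images better than illustrative ones.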

Method Three: Using Photoshop

If you’re a Photoshop user, this method may be the most familiar to you. However, it’s also the most time-consuming and requires a subscription. Here’s how to use Photoshop to fix your AI-generated images:

  1. Open the image in Photoshop.
  2. Use the brush tool to paint over the facial features that need fixing.
  3. Adjust the brush size and opacity as necessary to get the desired effect.
  4. Save the image and compare it to the original.

Note: This method provides the most control over the final image, but may not be the most accessible for everyone.

So there you have it: three methods to help fix those wonky AI-generated images. Whether you use Arc by Tencent, GFPGAN, or Photoshop, you’ll be able to achieve a more harmonious image in no time. Experiment with these methods to see which one works best for you. Let us know in the comments which one you prefer!

 

Transcript

 

Hello and welcome back to another Prompt Muse video. Today I’m going to be showing you how to fix those janky Midjourney faces. This also works for DALL·E output images and Stable Diffusion; well, essentially any AI-generated art. So come along and I’ll show you how to fix those wonky eyes and funny noses.

So, the first method we are going to be using is Arc by Tencent. It’s an in-browser app and you don’t need to log in, you don’t need to make an account, and you certainly don’t need to pay for anything. So it’s really easy, and let me show you how to use it. We navigate over to Arc and, as you can see, it’s pretty much a WYSIWYG.

So we’re going to hit Upload and find my janky folder. This image was actually created in Midjourney, if you wanted to know. I can’t remember the prompt; it was something about a girl, flash lighting and a scene. It takes about 30 seconds for it to load your image in.

Unfortunately, sometimes these in-browser apps can be a bit slow because there are a lot of people using the service. Here we go. What I like about Arc is that it shows you the before and the after, so you get this little slider and you can just push it back and forward to see the difference that the app is making. Now, you can see it’s not only sorting out the facial features, it’s smoothing the skin and applying a colour correction as well.

And if I flip right over, it’s actually getting rid of a bit of the shininess on the nose and refining some of the environmental detail. Now, I think that looks quite good. The only thing is that it’s smoothing away and removing any of the texture. So if you’ve got kind of a textured, illustrative look, it might not be the programme for you, because it’s going to make it look photorealistic.

But if you want a quick fix to upload your images to the Instagrams, this is a very quick and easy process. You just click on Download, the job is done, and it spits out your image for you. Okay, now to method two. This is called GFPGAN, and it’s run on Google Colab. Please don’t be intimidated by the user interface.

It’s very easy to use. What’s really cool about GFPGAN is that you can actually save and run it from your Google Drive and make modifications to the app. All you need is a Google account. But for now, I’m not going to be doing that. I’m just going to click Connect here, and that will connect you.

There are five steps in total; you just click the play buttons next to them and upload your image. So, no more talking, let’s get to it. We scroll down to step one, preparation. The user interface shows the code executions being run.

But don’t worry about that; you don’t need it, you’re not a programmer. When you hit the Play button here, it will run the code, and you just wait until it says 100% at the bottom. It usually takes about 30 seconds; in fact, it tells you how many seconds it takes.

We’re at 100%. Okay, so let’s move on to uploading images, which is number two. Again, hit the Play button, and now you can select your image from your computer.

So I’m going to go to Browse and get my janky image. It’s important to wait until the Play button has green text next to it, and then you can move on to the next step. It just takes a minute; I’m going to have a cup of tea. One thing to note: you can see the progress of your image being uploaded here at the bottom, so you’re not waiting for an infinite amount of time.

Okay, that has now loaded and I’ve got my green tick. Let’s move on to inference. Find the Play button and hit it again. At the bottom here we have the progress percentage, so keep an eye on that.

It usually takes up to 40 seconds, so it won’t take long at all. OK, so the next step is to visualise your image. So click the play button. Once again, we’ve only got one step after this. If you scroll down now, you will see your images.

And again, it’s got the comparison between what you put in and what it spits out, so it has a very similar effect to Arc. As you can see, it’s created a more symmetrical, more harmonious image. It has sharpened the image and applied smoothing to the detected skin, as well as upscaling the image slightly. Then the fifth and final step is to hit Play on number five, Download Results, and that will download a lovely zip of your image.

So our third and final method is using Photoshop. You will require a subscription for this, so it’s not free, and you need some skills; but with a quick tutorial on YouTube, you’ll be able to paint the eye no problem, I’m sure. This is the final technique, and I’ve done a really rubbish (don’t judge me by this) but very quick eye repaint so you can see what you can achieve. Now, personally, I prefer this method out of the three.

You can create your eye from scratch and ultimately have complete artistic control over your design. Also, you keep the integrity of the original painting, so if it’s done in quite an illustrative style or a loose painting style, you can keep those details. And here is a comparison of the faces across the applications: I’ve got the original, then Arc, then GFPGAN, and then two minutes in Photoshop without any colour correction.

So Arc and GFPGAN are actually pretty much photo-restoration applications, but you can use them to fix up your dodgy AI art. I would probably suggest investing some time and knowledge in learning Photoshop; then you can become a master of your own destiny. So those are the three methods for un-jankifying AI images. If you have a better method, let me know in the comments below and I’ll try it out.

So thank you very much and that will do it. Bye bye.

Oh, yeah. Oh yeah, I always forget this bit. Can you like and subscribe and hit the notification button, and I will come out with more videos. Okay, thanks.

Bye.

A.I artwork gets copyright – using Midjourney https://promptmuse.com/a-i-artwork-gets-copyright-using-midjourney/ https://promptmuse.com/a-i-artwork-gets-copyright-using-midjourney/#comments Wed, 30 Nov 2022 20:00:57 +0000 https://promptmuse.com/?p=233 Hello and welcome back to another Prompt Me video. Today we are going to be talking about getting AI artwork copyrighted. We are going to be talking about the case of Chris Cassanova who is an AI researcher in the US. She applied to the US Office of Copyright for her comic book that was [...]

Hello and welcome back to another Prompt Muse video. Today we are going to be talking about getting AI artwork copyrighted. We are going to be talking about the case of Kris Kashtanova, who is an AI researcher in the US. She applied to the US Copyright Office for her comic book that was generated in Midjourney. The comic, which is named Zarya of the Dawn, is the first known AI-generated comic to be copyrighted in the US.

Kris is a pioneer of this and she has been open and transparent from the beginning, documenting her process on Instagram and being completely open to everybody’s questions and comments. Kris has also recently revealed that the graphic novel was shared and re-shared and went on to go viral on several social media networks. Kris said someone shared it on Twitter without credit, and she was told it went viral. Looking at Zarya of the Dawn, you may see that the main character bears a resemblance to the actress Zendaya. And you are right.

As I said previously, Kris has been very open and honest about her process, and she has openly said that the main character is based on the actress Zendaya. And Kris says: I used Zendaya only in the draft of my graphic novel. It is not finished yet. I only had to copyright it because, while I shared my work in progress, someone famous posted it on Twitter without credit. It wasn’t intentional, he just forgot, and I didn’t expect it would go viral.

I was recommended to copyright the draft, and then I’ll copyright the finished comic. There you go. And as proof that it has been copyrighted, this is Kris’s certificate of registration. What is really interesting is that under US copyright law, work created using AI is not usually subject to copyright; copyright will only be applied if a sentient human being has created the work.

And in theory, Kris has done exactly that. She has art-directed this comic; she’s been using prompts and input, directing the algorithm to give her the output she required. It’s the first piece of artwork to be copyrighted in the history of AI. Kris currently has a real human artist over-painting the AI artwork, so essentially she is using it as concept art.

Kris has always been transparent about the use of artificial intelligence within the comic; she’s even gone as far as putting Midjourney on the front cover. And with the help of the AI community, the comic has been translated into 15 different languages. The comic is now set to be made into a film, so watch this space for Zarya of the Dawn.

There are big things happening here. Not only has she been transparent about the process, she is now offering tutorials on how to get your AI art copyrighted. So head on over to her Instagram page in my description below and follow her for new updates on the comic. I’m very excited to see what happens, because this sets a precedent for AI artworks gaining copyright. So there you go.

I hope that clears loads of things up for you. Thank you so much for watching this video. I’m a brand new YouTube channel and very new to this, so forgive me if I’m not polished. I’m an AI and art enthusiast, so I absolutely love talking about the new technologies emerging every day as this field rapidly evolves. You can catch me on the Prompt Muse Instagram.

Direct message me if you’ve got any questions or you would like me to cover any particular subject within the artificial intelligence technology arena. I’ve now got a Facebook and a TikTok, and again, everything is down in the description box. Thank you so much for watching this, and till next time, that will do it. Bye.

Follow my Instagram for new videos, prompts and artwork: https://www.instagram.com/promptmuse/ A big thanks to Kris, for allowing me to feature her work in the clips. Please find her Instagram here and follow her process https://www.instagram.com/kris.kashta…

How To Write Prompts  https://promptmuse.com/how-to-write-prompts/ https://promptmuse.com/how-to-write-prompts/#respond Wed, 30 Nov 2022 19:57:49 +0000 https://promptmuse.com/?p=227 Welcome back to another prompt news video. I am so happy to have you guys here and thank you for the new subscribers. As always, if you want to subscribe and hit the notification bell, that makes all this worthwhile. In today’s video, we are going to be discussing the best ways to write prompts [...]

Welcome back to another Prompt Muse video. I am so happy to have you guys here, and thank you to the new subscribers. As always, if you want to subscribe and hit the notification bell, that makes all this worthwhile. In today’s video, we are going to be discussing the best ways to write prompts and some of the bad ways to write them. This video is for beginners, but also, if you’ve been using it for a while and you’re a bit of a dab hand, this might give you some tips and advanced tricks that you can integrate into your prompts to get a better image and a better result. As I film this video, AI art is still pretty new and you can find it on multiple platforms; today I’m going to be using Midjourney. It’s important to understand that Midjourney is still in a developmental space. All this AI is pretty new to the scene, so you will see glitches: you might see a person with seven arms, or you might have issues with hands, and that’s a classic Midjourney issue that they’re working on. But the faces are absolutely stunning. So using Midjourney as a concepting tool is fantastic.

You can try out different subjects and different styles, cyberpunk, steampunk, art nouveau, anything you can think of, and add different artists to the mix, as well as different mediums, styles and lighting. It blows my mind every time I use it, and it’s only going to get better. This is an exciting time to come on board and learn how to write prompts. If you’re thinking, ‘oh, this sounds very complicated, I don’t know where to start’, start here. Literally, the secret is that writing prompts is the same as writing a Google search: you are writing a text description of what you want to see and then hitting return, and then Midjourney brings you back an image result that it thinks you want to see based on your text description, exactly like a Google search. So I’m going to break the prompt down into four simple, understandable components. First you’ve got your concept and your subject. This is where the most weight is going to be, at the beginning of the prompt; this is where you define what it is. The next section is style. You can style your artwork with a genre or a particular artist, or mix in different artists.

You can use multiple artists and/or different mediums of art, so you could try photography, charcoal or sculpture. Just have a play around with all these different styles and you’ll be able to come out with some pretty cool images. Next is the quality inputs. HD stands for high definition; you can add cinematic lighting; and you may have seen Octane Render, which is a 3D renderer. The last and final part of your prompt is the composition: how you want your output to look. Do you want an ultrawide shot? Do you want a wallpaper, like a desktop wallpaper? Do you want a macro shot or a specific headshot of your concept or subject? Put that in here. You can also put your aspect ratio, so you can add the ratio or the pixel size that you want for your image. This will change your composition and sometimes gives you better images, so play around with the aspect ratio. Now, if there are any words that you don’t understand on this screen, Google them. Get used to all these different terminologies.

AI art is not an exact science.

You can become a director, and by using the correct prompts you can get closer to the feeling or the vibe that you want from the image. So all of that is well and good in theory; let’s put it into practice. Let’s type in ‘sitting black cat, wearing glasses, art by Artgerm, cute, cinematic style, HD detail, octane render’ and an aspect ratio of 2:3. This is the result of our prompt. I did a couple of variations on this image and, when I was happy with the variation, I upscaled the image, and this was the result. I think you can agree it’s pulling in all the information from the prompt: it’s a sitting black cat with glasses on, it’s cute, it’s got a cinematic style, it referenced the artist nicely, and the aspect ratio is 2:3. In this case Midjourney has understood the brief, and I know it’s not always that way; you might have to re-roll to get closer to what you want. So you might be wondering what would happen if I just took all the fancy-pants descriptions out and just put ‘black cat in glasses’.

Well, this is what Midjourney would give me. It is a black cat in glasses, but there’s no stylization; it quite literally is a black cat in glasses. So you can see how the prompt really influences the output that Midjourney will retrieve for you. Hopefully, from this point forward, by watching this video, your prompt structure has improved. Now, to improve your language, you can get inspired by heading over to the midjourney.com website. If you go below Home and click on Community Feed over on the left, you can see what images are trending, what images are popular and what images are new. What’s really cool, if you’re on the standard subscription, is that you can click on any image, scroll down (ignore this text here, that’s not the prompt), click on the three dotted lines here, click on Copy Command, open Notepad or Word or whatever you’ve got, and just paste; you can then see the exact command that they used for that image. Now, if you put this into Midjourney, you will not get the same image, because it’s re-rolling the dice; it’s not the same seed number, which I can explain in another video. You will not get the same image, but you’ll get something similar.

But you can see that they have used pretty much all the structures that I explained earlier in the video. They put their main concept and subject, their artists (they actually use the same artist reference), and what kind of design and quality they want from the image. You may also notice ‘no watermarks’. If you want to subtract something from your prompt, you put in ‘--no glasses’ and that would remove the glasses from my cat; or if you were doing a landscape, I’d be typing ‘--no trees’ and that would remove all the trees from your image. So that is a pretty cool trick. They also don’t want any borders, and they don’t want any signature on their image. As you’ll notice, some images from Midjourney will have a squiggly line like a signature, and you usually have to Photoshop those out; but if you request ‘no signatures’, your image won’t have that in it. That’s a really cool thing to learn as well. So, as I said, when you come over to the Midjourney community, you can learn so much by looking at other people’s artwork. So, we’ve spoken about all the good things to put into your prompt and how to structure them.

There are things that you should not include in your prompt, and those are banned words. You can go to the Midjourney Discord server, where you can find the Rules channel. On that channel the statement goes as follows: do not create images or use text prompts that are inherently disrespectful, aggressive or otherwise abusive. Violence or harassment of any kind will not be tolerated. No adult content or gore. Please avoid making visually shocking or disturbing content. We will block some text inputs automatically. So there you go; that’s everything you need to know in a nutshell. But if you are still unsure, just head over to the Rules section on the Discord server and you will find their terms and conditions. I try to keep these videos as bite-sized as possible because I know it is a lot to take in, and I do have other videos that expand further on prompts, but I hope that with every video I do, I improve and get you the information that you need clearly and concisely. I would really, really appreciate it if you follow my channel and subscribe.

And just before we go, I do want to say I have a Prompt Muse Instagram page where we can interact, and you can see prompts and images that I post. I pick a muse a day, so it’s worth following. And if there are any tips you feel I’ve missed out, please feel free to put them in the comments section so other people can see and learn. Thank you so much, guys, for joining me for another video; I’m looking forward to making another one. And that’s it for now.

Thanks a lot. Bye.

How to check how many render jobs you have left in Midjourney https://promptmuse.com/how-to-check-how-many-render-jobs-you-have-left-in-midjourney/ https://promptmuse.com/how-to-check-how-many-render-jobs-you-have-left-in-midjourney/#respond Wed, 30 Nov 2022 18:44:19 +0000 https://promptmuse.com/?p=222 In this video, I’m going to show you how to cheque how much credit you’ve got left in your midjourney subscription. Let’s head over to midjourney now. Okay, so once you’re over in the midjourney discord, just head to any of the rooms. It could be a newbie room, any room at all. And in [...]

In this video, I’m going to show you how to check how much credit you’ve got left in your Midjourney subscription. Let’s head over to Midjourney now. Okay, so once you’re over in the Midjourney Discord, just head to any of the rooms; it could be a newbie room, any room at all. And in the message box, or the prompt box, just type in /info and then hit return twice.

I’ll just run through some of these settings here. The subscription plan: Basic. Yep. And the job mode:

Fast. I would rather go Relaxed, which means the rendering times are a little slower, but it doesn’t affect how many jobs you have. You can only get that on the higher subscription plan, though, so it’s not available to me. Fast time remaining is the amount of jobs I’ve got left. Visibility mode is public, which means everybody can see the artwork I create.

You can actually go private; I think this adds an additional fee onto your account. Lifetime usage: 76 images. Relaxed usage: I can’t use that. Metered usage: nothing, so I haven’t gone over.

So, let’s say you’re on a ten-pound subscription: once you’ve gone over that, you would then go into metered usage. That’s why it’s important to check how much credit you’ve got available in your Midjourney account, so you don’t get any unexpected bills if you’re on a ten-pound subscription plan. I hope that’s really helped you today, and if it has, give me a like and a subscribe.

And I’m gonna come back with some more information to help you guys on your AI journey.

How To Upload a reference image to Midjourney https://promptmuse.com/how-to-upload-a-reference-image-to-midjourney/ https://promptmuse.com/how-to-upload-a-reference-image-to-midjourney/#respond Wed, 30 Nov 2022 18:41:48 +0000 https://promptmuse.com/?p=215 In today’s video, I’m going to be showing you how to upload an image into midjourney and use it as a reference. You can add an image as a reference using a URL and then add additional fonts to that image. They can combine images with fonts to produce a beautiful combination of the two. [...]

In today’s video, I’m going to be showing you how to upload an image into Midjourney and use it as a reference. You can add an image as a reference using a URL and then add additional prompts to that image; you can combine images with prompts to produce a beautiful combination of the two. First, we’re going to need a reference image to upload, so you can get yours from the internet and right-click and save it onto your computer; make sure it’s an image file. For me, I’m just going to take a photo of myself now and use this as my reference. Head over to your prompt message box, click the plus icon, and an upload-file pop-up will appear. Click on this; there’s my really gawky photo. I’m going to click on that, and it’s now in the message box. You need to submit it by pressing Return on the keyboard, and you will see that it has now been processed by Midjourney. Now you need to click on that image; your image will appear in a lightbox here, and you can right-click it and Copy Image Link. That’s really important.

Copy Image Link. Once that’s copied, you can come out of there and type in /imagine as usual, and now you can paste your image reference link. Your image needs to be saved somewhere on the internet, so if you’re going fishing and pulling images from the web, you just need to ensure that the link is a direct image link with a file format at the end; otherwise, if it’s just a link to a website, it’s not going to work. So remember that, and if you have any problems, message me in the comments section below and I will help you. I’m just going to submit that image to Midjourney to see how Midjourney translates it without any prompt. As you can see here, Midjourney has translated some of my key characteristics: my dark eyes, my dark hair, my nose and my smile. You can see where it’s pulling from. You can do a few experiments with this and see what you get. Now, the interesting part is that I haven’t set any weight, and the weight is how much importance you’re telling Midjourney to give to the image.

So, for example, I’ve uploaded the JPG as a reference file and then written ‘vibrant comic book style’ next to it in the prompt. I want a combination of both the image and the text prompt, so I can write that in there and apply a weight to it. If I added, for example, an image weight of 0.5, it would give more importance to the text prompt; at the other end of the scale, an image weight of 5 gives more importance to the uploaded image file. And then you can go somewhere in between, such as an image weight of 1, which will give you an equal combination of the uploaded image and the text prompt.

So now, naturally, I’m going to show you a prompt with a weighting. I’m going to plug in that prompt once again, paste the image link of my photo (the exact same photo), and give it a weight by adding --iw 0.75. By setting a low image weight of 0.75, we’re telling Midjourney that it can use that reference image but should use its own interpretation as well; we’re giving more creativity to the AI. So let’s see what Midjourney has done to me. It’s made me into a window; it’s probably taken all the colour detail and the lighting information and made a variation from there. So let’s see what happens if we add our image, a prompt and a weight. This is the rendered image, and you can see that Midjourney was referencing the photo that I uploaded, using the prompt ‘comic book style’, and combining the two. It’s a very low image weight, so it’s favouring the comic book style over the reference photo. Hopefully this video has shown you exactly how to upload an image to use in Midjourney, and you now have a little bit more understanding of how image weights work.
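(For reference, a finished image-reference prompt with a weight typically looks something like the line below; the URL is just a placeholder for your own uploaded image link, and the weight value is only an example.)

/imagine prompt: https://example.com/your-photo.jpg vibrant comic book style --iw 0.75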

If you found this video useful, I would really appreciate it if you subscribed to my channel. I’ve also got a new Instagram; it’s @promptmuse. We have a prompt muse of the day, as well as prompts for you to try, to make your artwork more interesting and consistent, and maybe to find your style within the AI art community. And that’s going to do it. Thank you very much, and goodbye.
