Prompt Muse (https://promptmuse.com): A.I Tutorials, News, Reviews and Community

How I Made A K-Pop Singer: Full Process with phone mocap!
Published Thu, 01 Aug 2024: https://promptmuse.com/how-i-made-a-k-pop-singer-full-process-with-phone-mocap/


How I Made A K-Pop Singer: Full Process with phone mocap!

How to create a 3D character with Move.ai and Suno
Character Creator & Blender Pipeline

Introduction

Welcome to this written tutorial on creating 3D characters with Character Creator, Blender, and Unreal Engine. Please see the full video here.

In this tutorial, I’ll guide you through the entire process of creating 3D characters using Character Creator and the free plugin CC Blender Pipeline: from character creation, to importing into Blender, and finally into Unreal Engine. We’ll cover every step in detail and try out some new plugins! Please note that Reallusion currently has a competition to win cash prizes; check it out here!

What You’ll Learn

  • How to quickly put together a 3D character using Character Creator and the Blender pipeline.
  • How to integrate the plugin into your workflow, bring everything back into Character Creator, and add mocap.

Prerequisites

  1. Character Creator: Used to create and export your character (currently 40% off with a 30-day trial).
  2. Unreal Engine: Ensure you have it installed.
  3. CC Rig Plugin: Available on the Epic Store Marketplace.
  4. MetaHuman Plugin: Install from the Marketplace.
  5. Unreal Auto Setup: Download from the Reallusion website.
  6. Live Link Face App: Free via your phone’s app store.
  7. Ensuring All Plugins Are Active In Unreal Engine:

    To ensure a smooth workflow, you need to make sure all necessary plugins are active in Unreal Engine. Follow these steps:

    1. Activate Required Plugins:
      • In the Plugins window, use the search bar to find each of the required plugins:
        • Quixel Bridge
        • CC Rig Plugin
        • MetaHuman Plugin
        • Unreal Auto Setup
        • Live Link
        • Apple ARKit (for face-tracking functionality)
      • Make sure each of these plugins is enabled by checking the box next to their names.
      • Click Restart Now if prompted to restart Unreal Engine after enabling plugins.

Step 1: Creating Your 3D Character

    We start with Character Creator to design and customize our 3D character. This powerful tool allows you to sculpt detailed and lifelike characters with ease. Using its sliders, you can easily adjust facial features, body proportions, and clothing to match your desired look.
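Under the hood, each of those sliders can be thought of as a blendshape (morph target) weight: the slider blends a per-vertex offset into the base mesh. The sketch below is purely illustrative; the data layout and function name are hypothetical, not Character Creator's actual internals:

```python
def apply_morphs(base_verts, targets, weights):
    """Blend morph targets into a base mesh.

    base_verts: list of (x, y, z) vertex positions.
    targets: dict of target name -> per-vertex (dx, dy, dz) offsets.
    weights: dict of target name -> slider value in [0, 1].
    """
    result = [list(v) for v in base_verts]
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # slider at zero contributes nothing
        for i, (dx, dy, dz) in enumerate(deltas):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return [tuple(v) for v in result]

# One vertex, one hypothetical "jaw_width" target at half strength:
base = [(0.0, 0.0, 0.0)]
targets = {"jaw_width": [(2.0, 0.0, 0.0)]}
print(apply_morphs(base, targets, {"jaw_width": 0.5}))  # [(1.0, 0.0, 0.0)]
```

Every slider is independent, which is why you can freely mix facial and body adjustments without them fighting each other.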

A computer screen displays a 3D character modeling application with a female K-Pop singer avatar in the center. A small inset shows a woman speaking and gesturing towards the screen, explaining the full process of using phone mocap for realistic movement.
Initial setup of the mocap project in Unreal Engine, preparing to animate Metahuman characters

Importing the Character into Blender

  1. Once our character is ready, we’ll use the free CC Blender Pipeline plugin to export the character and import it into Blender.

  2. This plugin simplifies the process, ensuring that all character details are preserved. In Blender, we’ll begin sculpting and use additional plugins to paint details and create stylized hair quickly.

Screenshot of the iClone 7 software interface with a character model displayed on the right side. A dropdown menu under "ZBrush Pipeline" shows options including "Export Character to Blender." The interface supports phone mocap and even features a preset for creating a K-Pop Singer.
Exporting the Character Using CC Blender Pipeline
A 3D model of a face is displayed on a computer screen with Blender software. In the bottom right corner, a person, possibly a K-Pop singer, is visible, perhaps demonstrating or explaining the full process.
Using the sculpting tools in Blender on the CC character

Step 2: Creating a Stylized 3D K-pop Outfit

    If you would like access to a premade marketplace of clothes, the Reallusion Content Store has many assets that can be downloaded directly into your CC program. This makes it easy to get quality clothing onto your character. Click here to enter the Content Store.

    But if you would prefer to make your own clothes, this comes with an additional learning curve and the expense of a third-party product. I get asked regularly about creating clothes from scratch, so I thought I’d try out a product for you called Style3D Atelier, which we will use to design a bespoke K-pop style outfit for our character.

 
A computer screen displays a website offering digital characters for download, showcasing diverse 3D models in various outfits. The site features a small video chat window at the bottom right where a woman appears, demonstrating the full process of phone mocap to animate a K-Pop Singer.
Character Creator Content Store for all your clothing needs
A computer screen displays character modeling software with a 3D character model in the center. A video feed of a K-Pop singer giving a full process tutorial is in the bottom right corner.
Final Outfit
Step 3: Adding Mocap with Move.ai One

    Once our character is complete, we’ll add motion capture (mocap) using Move.ai One. This tool claims to produce mocap using just one camera. Although it may not be as precise as mocap suits or the multi-camera option (Move.ai Pro), I was curious to see its capabilities.

    Setting up Move.ai was straightforward: I simply downloaded the app on my iPhone, recorded using the free trial in 10-second intervals, and uploaded the finished .fbx files to my PC. My first impressions were good. I knew it would be a wobbly animation that would require mocap clean-up, but it was good enough for my little example.

A K-Pop singer standing with arms outstretched in a living room appears on the main screen of a smartphone. Inset image shows the same singer talking. "Stop recording" button is visible, capturing the full process with phone mocap technology.
Using Move.ai to create Mocap

Step 4: Cleaning Up Mocap Data

Of course, some clean-up is necessary, especially when you are recording mocap from one iPhone! I recommend using iClone for this, which will help you clean up the mocap fast. If you’re on a budget, Blender is a great alternative; however, it has more of a learning curve and isn’t as intuitive as iClone.

I imported my 3D character into iClone and used a free retargeting file to streamline the process. It was easy and enjoyable!

I repeated the clean-up steps for each mocap import from my phone until I had a good selection of dance moves, which I saved in my iClone library for future use.
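If you ever want to script part of this clean-up yourself, the simplest pass over a wobbly single-camera capture is a moving-average filter applied to each animation channel. This is a generic sketch of the idea, not iClone's actual smoothing:

```python
def smooth_curve(values, window=5):
    """Moving-average smoothing for one animation channel.

    values: per-frame samples (e.g. a joint's rotation on one axis).
    window: number of frames to average over (odd works best).
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo = max(0, i - half)           # clamp at the start of the clip
        hi = min(len(values), i + half + 1)  # clamp at the end
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# A jittery channel flattens out after one pass:
noisy = [0.0, 10.0, 0.0, 10.0, 0.0]
print(smooth_curve(noisy, window=3))
```

A wider window removes more jitter but also softens deliberate fast motion, which is why dedicated tools let you smooth per-joint rather than globally.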

A 3D modeling software interface displaying a female character model in a dance pose reminiscent of a K-Pop singer. The left panel lists animation options, while the right panel shows character settings and modifications, seamlessly integrating phone mocap for capturing the full process.
Cleaning up mocap in iClone

Step 5: Exporting To Unreal Engine

At this point, you can export your character to Blender using the CC Pipeline. However, I opted to take my character into Unreal Engine, so I could use the iPhone to record my facial animations. (You can do this in iClone with AccuFace.)

I simply exported my character without any animations as an FBX file, and then exported each of my animations as its own FBX.

In Unreal Engine, I imported my skeletal mesh and animation sequences into the content drawer, added them to a new level sequence, and converted my character to a CC Control Rig to enable facial animation.

A computer screen displaying character modeling software with a 3D character model of a K-Pop singer in the center and various customization options on the side panels.
Cleaning up mocap in iClone
A screen displays the FBX Import Options panel in 3D software, showing animation import settings and file selection. The environment includes UI elements, a sky background, and folders at the bottom, perfect for a K-Pop singer using phone mocap to capture their moves through the full process.
Importing skeleton and animations into UE

Step 6: Recording Animation

  1. To record in Unreal Engine, you need to set up a Level Sequence; think of this as your timeline to add animation to. Below are the steps to set up your Level Sequencer and record your facial animation via your iPhone to the sequencer. I converted my imported skeletal mesh to a CC Control Rig, a fantastic free plugin which you can grab here. This plugin allows me to adjust my animation further in Unreal Engine and copy and paste animation onto the facial controls from my MetaHuman.
  • Recording Singing with MetaHuman and LiveLink

    To keep things organized and efficient, I opened a separate project for this step. I imported a MetaHuman into my scene, used Live Link and the Live Link Face app on my phone to record singing, and exported the animation as an .fbx file. Finally, I imported this into my original project and applied it to my character’s facial control rig.

A computer screen displays video editing software with an animated K-Pop singer in the center. The character, with red hair and a colorful outfit, is being edited using phone mocap technology. The editing timeline and tools are visible at the bottom, showcasing the full process of animation.
Putting everything together in Unreal Engine

DOWNLOAD LINKS

Promptmuse X
  •  #CharacterCreator #iClone #RiggedCharacter #UnrealEngine #UE #Controlrig

More To Explore

Custom Blender Hair To Metahuman Guide
Published Fri, 19 Jan 2024: https://promptmuse.com/creating-custom-hair-for-metahumans-in-unreal-engine-using-blender/

Introduction

Tired of the default hair options for Metahumans in Unreal Engine? This comprehensive guide will walk you through creating and customizing Metahuman hair using Blender. This process involves no plugins and uses entirely free software.

Please Join my Patreon to get access to Blender Resources and additional guides! PromptMuse | 3D & AI Learning Community | Patreon

Prerequisites

Basic familiarity with Blender and Unreal Engine. This tutorial uses 100% free software.

If you don’t have these installed, refer to my getting started with Unreal Engine and Metahumans here (Coming soon).


1.1 Create a Metahuman

1.2 Setting Up your Unreal Project

  • Open Unreal Engine via the Epic Launcher, navigate to the Marketplace on the top tab, search “Metahuman”, and download the free ‘Metahuman Lighting’ rig.
  • Create a new project and launch it.
  • In Unreal Engine, navigate to Window > Quixel Bridge.
  • In Quixel Bridge, download your MetaHuman by selecting “My MetaHumans” from the left-hand navigation panel. Once downloaded, select your MetaHuman’s card and, from the right-hand panel, select “Add”. This will add your MetaHuman to the content drawer in your Unreal project.

Step 2: Exporting Mesh to Blender

  • In Unreal Engine, select your Metahuman and remove the existing hair if needed.
  • Now, export the head mesh. Find the mesh in the content browser, right-click, and select “Export.” Choose FBX format for Blender compatibility. Save the file in a convenient location.

Step 3: Creating Hair in Blender

3.1 Importing and Prepping the Mesh

  • Open Blender (I’m using Blender 4.0). In a new scene, delete the default objects.
  • Go to File > Import > FBX, and select the exported head mesh.
  • In the viewport, separate the scalp from the face. This isolation will help in focusing the hair creation on the scalp area.

3.2 Designing the Hair

  • With the scalp selected, enter Sculpt Mode.
  • Use the Add Hair tool to begin placing hair guides on the scalp. These guides will shape the overall hairstyle.
  • Adjust hair length, density, and curvature. For long hairstyles, increase the length and density. For short styles, reduce these parameters.
  • Apply modifiers for specific textures and effects:
    • Clump Modifier: To create grouped strands of hair.
    • Curl Modifier: For curly or wavy hair.
    • Frizz Modifier: Adds a frizzy, unstructured look to the hair.
  • Focus on the hairline and parting. Add more hair guides here to ensure a natural, dense look.

3.3 Exporting Hair Back to Unreal Engine

  • Once you’re satisfied with the hair design, export it as an Alembic (.abc) file. This format preserves the hair details for Unreal Engine.

Step 4: Finalizing in Unreal Engine

4.1 Importing Hair into Unreal

  • Back in Unreal Engine, import the Alembic file. Navigate to the content browser, right-click, and select ‘Import to /Game’. Locate your Alembic file and import it.

4.2 Adjusting Hair Settings

  • Select the imported hair in the content browser. In the details panel, fine-tune the settings:
    • Interpolation: Adjust for smooth hair transitions.
    • Root and Tip Scale: Control the thickness of the hair at the root and the tip.
    • Color Settings: Customize the hair color and texture to match your character’s style.
  • Enable physics simulation for realistic hair movement.
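Root and Tip Scale amount to a linear taper along each strand: the strand is widest at the root and narrows toward the tip. A tiny illustrative sketch of that math (not Unreal's actual groom code; the function name is made up):

```python
def strand_width(root_scale, tip_scale, t):
    """Width of a hair strand at parameter t, where t = 0 is the
    root and t = 1 is the tip. Linearly interpolates between the
    two scale values."""
    return root_scale + (tip_scale - root_scale) * t

# A strand 0.1 units wide at the root, tapering to 0.02 at the tip:
for t in (0.0, 0.5, 1.0):
    print(round(strand_width(0.1, 0.02, t), 3))
```

Setting the tip scale close to zero gives fine, wispy ends; keeping it near the root value gives a blunt, stylized look.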

4.3 Binding Hair to Metahuman

  • To attach the hair to your character, use the ‘Create Binding’ option in Unreal Engine. This step ensures that the hair moves naturally with your character’s animations.

Conclusion

You’ve successfully created and customized hair for your Metahuman character in Unreal Engine. Experiment with different styles and modifiers to enhance your digital characters further. Don’t forget to save your project to preserve your work.

Additional Resources

Character Creator Workflow For Fast 3D Animation Production
Published Fri, 08 Dec 2023: https://promptmuse.com/character-creator-workflow-for-fast-3d-animation-production/

The world of 3D character design is evolving rapidly, and two software giants, Character Creator and ZBrush, are at the forefront of this revolution. These tools are not just simplifying the character creation process but are also ensuring that characters are ready for animation and production, saving both time and resources.

Character Creator stands out for its ability to quickly generate base models that are ripe for customization. With its intuitive interface and versatile morph sliders, creators can easily shape the basic anatomy of their characters. This software shines in its ability to offer a base that includes rigs and morph targets, essential for both facial and body animation.

A character creator workflow for fast production using base meshes

Once the basic form is created in Character Creator, ZBrush comes into play. Known for its robust sculpting tools, ZBrush allows artists to add intricate details and a unique personality to their characters. Its brushes are perfect for crafting stylized aesthetics, moving away from realism and embracing a more exaggerated, cartoon-like appearance.

A fast production screen shot of a 3D model being transferred to ZBrush

One of the most significant advantages of using Character Creator and ZBrush is their seamless integration. With just a click, characters can be transferred between the two programs without losing any detail. This integration is a game-changer, ensuring that the creative flow isn’t interrupted.

Characters created with these tools are not just visually appealing but are also production-ready. They come with clean topology, rigs, and weights, making them perfect for animation projects. This readiness significantly cuts down the time from concept to production, a crucial factor in fast-paced project environments.

Easy Facial Edit tools in Character Creator create seamless workflow for character production

For those who use Blender for compositing, the good news is these characters are fully compatible. With the help of a simple plugin, characters can be imported into Blender, retaining all their rigging and morphing qualities. This flexibility opens up avenues for creators who operate in different software environments.

The combination of Character Creator and ZBrush is a testament to how technology is simplifying yet enhancing the art of 3D character creation. By reducing the technical barriers, these tools allow artists to focus more on the creative aspect of character design. As we continue to see advancements in these tools, the future of character creation looks more exciting than ever.

How to Animate Game Characters and Import Them into Blender and Unreal Engine 5
Published Mon, 25 Sep 2023: https://promptmuse.com/how-to-animate-game-characters-and-import-them-into-blender-and-unreal-engine-5/

In this tutorial, I will guide you through the process of animating your game characters and importing them into Blender and Unreal Engine 5. This tutorial is designed for those who don’t have a budget for expensive animation software or motion capture suits. The full tutorial video of this process can be found on my YouTube channel here.

We will be using a software called “Cascadeur,” which has been around for about 10 years and offers a free version with some powerful animation tools. While this method is not a replacement for professional animation software or mocap, it’s a viable alternative if you’re on a tight budget.

Note: Before you start, make sure you have Cascadeur and Accurig installed on your computer. You can download Cascadeur from the official website, and Accurig is a free auto-rigging tool that complements Cascadeur.

<iframe width="560" height="315" src="https://www.youtube.com/embed/ScQTV2Xb--0?si=_4-LUd5vW3w7Nz64" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Let’s get started!

Part 1: Rigging Your Character in Accurig

  1. Open Accurig and click on “Choose File” to select your 3D character’s FBX file. You can use a sample character from Cascadeur, Mixamo, Sketchfab, or your own custom character.
  2. After loading your character’s mesh, click on “Rig Body” to generate the joint structure for your character’s skeleton.
  3. Accurig will display circles where joints should be placed. Ensure symmetry is checked to work on one side of the character.
  4. Position the joint guides according to your character’s anatomy, following the on-screen guides for reference.
  5. Use the tools in the bottom left corner to rotate and move around your character for precise joint placement.
  6. Repeat the process for other body parts, such as arms and legs, ensuring correct joint placement.
  7. Use the “Preview Motion” window to check the animation on various body parts, including fingers.
  8. Ensure your character is in a neutral pose (A-pose or T-pose) before exporting.
  9. Click “Upload to AccuRig” and then “Export” > “Export FBX.” Set the target application to “Maya” and check “Embed Texture.” Click “Export” to save the rig.
  10. Export another FBX file of your character’s base mesh but set the target application to “Blender” for later use.
AccuRig Auto Rigger Tool
AccuRig Auto Rigger Tool by Reallusion

Part 2: Creating a Basic Idle Animation in Cascadeur

  1. Open Cascadeur and start a new scene. Import the FBX file with Maya settings that you exported from Accurig.
  2. Cascadeur will ask if you want to enter “Rig Mode.” Click “Yes.”
  3. In the “Rig Mode Helper” dialog, click “Yes” and then “OK” on the next dialog.
  4. Click “Add Rig Elements” at the bottom of the “Quick Rigging Tool” dialog.
  5. Rotate your character by holding ALT and the left mouse button to navigate.
  6. Select the “Auto Pose” tool to enable automatic control point positioning as you move your character.
  7. Position your character into an initial pose for your idle animation by moving and rotating control points. Use ‘W’ to move and ‘E’ to rotate.
  8. Add a keyframe at frame 10 by clicking the key icon.
  9. Change the hand pose on frame 10 to create a hand open/close animation.
  10. Duplicate the first frame to frame 20 and mirror the pose for variety.
  11. Duplicate the second keyframe to frame 35 and mirror it to frame 45.
  12. Extend the timeline to add more frames for smoother animation (e.g., 200 frames in total).
  13. Copy the first frame to frame 145 and the second keyframe to frame 110.
  14. Apply bezier curve interpolation for smoother animation between keyframes.
  15. Review and refine your animation by adding subtle movements, such as chest and shoulder motion.
  16. Create a seamless loop by ensuring the first and last frames are identical and adjust frame numbers accordingly.
Cascadeur Tutorial
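The keyframing in steps 8 to 16 boils down to placing a couple of poses, mirroring them for variety, and repeating the first pose at the end so the clip loops. The bookkeeping can be sketched like this (hypothetical pose data, not Cascadeur's file format):

```python
def mirror_pose(pose):
    """Swap left/right channel names (e.g. L_hand <-> R_hand);
    channels without a side prefix are copied unchanged."""
    mirrored = {}
    for name, value in pose.items():
        if name.startswith("L_"):
            mirrored["R_" + name[2:]] = value
        elif name.startswith("R_"):
            mirrored["L_" + name[2:]] = value
        else:
            mirrored[name] = value
    return mirrored

def build_idle_loop(pose_a, pose_b):
    """Lay out keyframes roughly the way the tutorial does: pose A,
    pose B, mirrored copies for variety, then pose A again so the
    first and last frames match."""
    return {
        0: pose_a,
        10: pose_b,
        20: mirror_pose(pose_a),
        35: mirror_pose(pose_b),
        45: pose_a,  # matches frame 0, so the animation loops seamlessly
    }

keys = build_idle_loop({"L_hand": 1.0}, {"L_hand": 0.0})
print(keys[20])             # the mirrored pose drives the right hand instead
print(keys[0] == keys[45])  # first and last keyframes are identical
```

The same mirror-and-repeat idea is why the tutorial copies the first frame out to frame 145: identical bookend poses are what make the loop seamless.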

Part 3: Exporting the Animation to Blender

  1. Export the animation from Cascadeur to Blender by going to “File” > “Export” > “FBX.” Name the file and click “Save.”
  2. In Blender, import the animation by going to “File” > “Import” > “FBX.” Use the default settings and click “Import FBX.”
  3. Delete any existing objects in the Blender scene and select the imported Armature.
  4. Adjust the Armature’s rotation to face the front and place it in the scene.
  5. Create an animation track for the imported animation and rename it.
  6. Copy the animation keyframes from the imported Armature and paste them onto your character’s Armature.
  7. Delete the imported Armature to keep your scene clean.
  8. Create an animation loop for your idle animation in Blender using the NLA (Non-Linear Animation) Editor.
Blender_Animation_Import

Part 4: Exporting the Animation to Unreal Engine 5

  1. In Unreal Engine 5, create a new project and organize your project folder.
  2. Import your character’s 3D mesh into Unreal Engine by right-clicking in the “Content” folder, selecting “Import,” and choosing your FBX file. Ensure it’s assigned to the correct skeleton.
  3. Add a Level Sequence to your project by right-clicking in the “Content” folder and selecting “Level Sequence.”
  4. Drag your character’s skeletal mesh into the Level Sequence.
  5. Add your idle animation to the Level Sequence by clicking the plus icon and selecting the animation.
  6. Adjust the timeline as needed and press the spacebar to preview your animation.
  7. Extend the timeline and blend your idle and walk animations for a seamless transition.
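Blending the idle into the walk is, per frame, just a weighted mix of the two poses. A minimal linear crossfade, shown here as a standalone sketch rather than Unreal's actual blend node:

```python
def crossfade(idle_value, walk_value, frame, start, duration):
    """Linearly blend one animation channel from idle to walk
    over `duration` frames, beginning at `start`."""
    if frame <= start:
        w = 0.0                      # still fully idle
    elif frame >= start + duration:
        w = 1.0                      # fully walking
    else:
        w = (frame - start) / duration
    return idle_value * (1.0 - w) + walk_value * w

# Idle holds a joint at 0 degrees, walk swings it to 40;
# blend across frames 100-120:
print(crossfade(0.0, 40.0, 110, start=100, duration=20))  # halfway: 20.0
```

Longer durations hide the transition better but delay the walk; game engines typically expose exactly this duration as the "blend time."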

Part 5: Adding Free Mocap Data

  1. Visit the ActorCore website and explore the free motion resources.
  2. Download free motion data compatible with Cascadeur.
  3. Import the downloaded motion data into Cascadeur, and apply it to your character as needed.
  4. Refine and customize the imported motion data to suit your character and animation needs.

This tutorial should help you get started with animating and importing your game characters into Blender and Unreal Engine 5 using Cascadeur. Feel free to explore further features and animation possibilities in Cascadeur to enhance your character animations.

Remember, practice makes perfect, and with time, you’ll be creating stunning animations for your game characters. Enjoy animating!

How to Create a 3D Scene Using Blender and Fspy
Published Thu, 16 Feb 2023: https://promptmuse.com/how-to-create-a-3d-scene-using-blender-and-fspy/

If you’re looking to create a stunning 3D scene in Blender, then you’re in the right place. In this tutorial, we’ll show you how to use Fspy and Blender to create a 3D scene from a 2D image. Follow these steps to create your own 3D scene:

Step 1: Download and Install Fspy

First, download and install Fspy from the official website. Once installed, open the software and import the 2D image you want to use for your 3D scene.

Step 2: Set Up Fspy Camera

Next, use Fspy to set up your camera. This involves placing markers on the image to establish the camera’s position, orientation, and field of view. Once you’ve placed the markers, export the camera data in the format that Blender can use.
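fSpy essentially solves for the camera's position, orientation, and focal length from the vanishing-point markers you place. The focal length it reports maps to a horizontal field of view using standard pinhole-camera math, independent of fSpy itself:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a pinhole camera, in degrees.

    36 mm is the full-frame sensor width Blender assumes by default.
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

print(round(horizontal_fov(18.0), 1))  # wide-angle lens
print(round(horizontal_fov(50.0), 1))  # "normal" lens
```

This is why a photo shot on a wide lens produces exaggerated perspective lines in fSpy, while a long lens gives nearly parallel ones that are harder to mark accurately.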

Step 3: Open Blender

Open Blender and select File > Import > Fspy. Browse for the camera data file you exported from Fspy, select it, and click Import Fspy Camera.

Step 4: Set Up Scene

In Blender, set up the scene with the camera and the image you want to use as a reference. Then, go to Edit Mode and use the image as a guide to create the basic geometry of the scene. Extrude and scale the edges to match the perspective of the image.

Step 5: Apply Materials

In the Shading tab, apply materials to the geometry of the scene. Use an image texture for the background image and create materials for the objects in the scene.

Step 6: Use Runway to Remove Distorted Elements

For any distorted elements in the background image, use Runway’s erase and replace tool to remove them. Upload the image, remove the unwanted elements, and then download the new image to use in Blender.

Step 7: Add Details and Effects

Add details and effects to the scene, such as 3D objects, wires, and the ShakerFly camera effect. Be creative and experiment with different effects to make your scene stand out.

Step 8: Render and Save

Once you’re happy with the scene, render it and save it in the desired format. You can then use it in your projects or share it with others.

Creating a 3D scene in Blender from a 2D image is a rewarding experience that requires a bit of patience and creativity. By following these steps and exploring the software, you can create impressive 3D scenes that will impress your audience.

FAQ

Q: What software do I need to follow along with this tutorial?

A: You will need Blender, a free 3D animation program, and any photo editing software such as Photoshop, GIMP, or another free alternative.

Q: Do I need any prior experience with Blender to follow this tutorial?

A: No, this tutorial is beginner-friendly and doesn’t require any prior experience with Blender.

Q: Do I need any special equipment to follow this tutorial?

A: No, you don’t need any special equipment. All you need is a computer and a mouse.

Q: What techniques are covered in this tutorial?

A: This tutorial covers techniques such as camera placement, object selection, UV projection, inpainting using AI tools, and the use of the ShakerFly camera effect.

Q: Can I use a different AI in painting tool instead of Runway?

A: Yes, you can use any image to image in painting tool that you prefer. The tutorial specifically uses Runway, but there are other options available.

Q: Can I use different 3D objects in my scene?

A: Yes, you can use any 3D objects that you like. The tutorial uses a plant and a neon sign as examples, but you can use any objects that fit your scene.

Q: Can I use different camera effects or settings?

A: Yes, you can experiment with different camera effects and settings. The tutorial uses the ShakerFly camera effect and depth of field, but there are other camera effects and settings available in Blender.

Q: Where can I find more resources and tutorials on Blender and 3D animation?

A: The tutorial creator has a website, promptmuse.com, which offers free online resources and a weekly newsletter. There are also many other online resources and tutorials available for Blender and 3D animation.

Transcript

AI community, I have a confession to make. The creation of endless AI images has been boring me, until now. Today, I’m going to be showing you how to take your AI images that have been created in Stable Diffusion, DALL·E, Midjourney, whatever program you’re using, and turn them into an ultimate movie set, which means you can configure how it looks, get a camera, go in, add characters to the scene, and create an animation from your 2D images. We don’t want AI to be the creative director of the stories that we want to tell. We want to use AI to enhance our workflows. And that’s what this channel is all about, creating workflows that actually work. Today, we’re going to be looking at how to create an environment from AI generated images. And then in the next video, we’re going to look at how to add characters into our environment. So please hit that notification bell and subscribe, because this is going to be a series. So we’re going to start off with the environment workflow. So the first thing we want to do with our AI generated image is upscale it, because what comes out of Midjourney is quite low resolution.

Same with Automatic1111. You might not actually have the RAM or GPU to be able to upscale that on your PC. So I use something called Neuro AI. This is absolutely free and, to be honest, it’s a godsend. There’s no download. It is what it is. You upload your image and within seconds it gives you a high resolution image. So when we’ve got our upscaled image, I’m going to take it into fSpy, which again is amazingly free, which will create us a camera that matches the image perspective. And then we simply import our fSpy camera into Blender, which will then be projecting our map from the camera onto very simple, primitive objects. It’s a really effective result that we get. Step one, upscaling our image. I want to increase the resolution of my image that Midjourney created because, as you go through the scene, you’re going to see the image in the background lose resolution and just see big chunks of pixels, and we do not want that. So we’re going to use an upscaler, and today I’m going to be using Neuro AI. Currently, you get five free credits a day to upscale your image.
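Why upscale at all? If the movie camera later pushes into the projected image, each rendered pixel samples a smaller patch of the source, so the source needs to out-resolve the final render by roughly the zoom factor. A back-of-the-envelope sketch (the helper name is just for illustration, not from any tool mentioned here):

```python
def required_source_width(render_width_px, max_zoom_factor, crop_fraction=1.0):
    """Minimum source-image width (px) so a push-in of `max_zoom_factor`
    still has at least one source pixel per rendered pixel."""
    if max_zoom_factor < 1 or render_width_px <= 0:
        raise ValueError("zoom factor must be >= 1 and width positive")
    return int(render_width_px * max_zoom_factor / crop_fraction)

# A 1920 px-wide render with a 2x push-in needs a source at least 3840 px wide.
print(required_source_width(1920, 2.0))  # → 3840
```

This is only a rule of thumb; texture filtering and depth of field soften the requirement in practice.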

So I’m going to click on Download and download that image. We’re now going to take this image into fSpy to create our perspective or projection camera. Step two is installing fSpy, which will create us a camera that we can then import into Blender. Each image that you bring into fSpy will be completely different and have different perspective lines. But what it allows you to do is ultimately create a camera that you can then model from in Blender. There are two zip files on this website that we want to download, the first being the program and the second being the installation plug-in for Blender. If you head over to the fSpy website, there’s a big green Download button, and that’s to install the actual program onto your PC. You’ll be taken to a GitHub page where you just need to download the version with win.zip at the end if you’re running on Windows. If you download that and unzip it onto your PC, you’ll be able to see the fSpy.exe file, which you need to double click in order to run the program. Once that’s installed, you need to head back to the fSpy website.

And if you scroll down from the main page, you’ll see the official fSpy importer add-on. This is the zip file which we’re then going to install directly into Blender. Download that file by going to the big green Code button over here, coming down to where it says Download ZIP and downloading that zip file. Then just fire up Blender, go to Edit, Preferences, go to Install and find that fSpy-Blender-master zip, click on Install Add-on, no need to unzip it or anything, and you should find it in your search bar up here. Just make sure it’s checked in the checkbox there. Go to the hamburger icon and save preferences, and you’re good to go. When you go to File and then to Import, you should see fSpy here. So now, minimize Blender, and where you unzipped the first fSpy folder, just navigate to the fSpy.exe and give it a double click, and that will launch fSpy. You can simply drag and drop the image you got out of Midjourney here, or you can go up to File and open the image. This is fSpy, and the first time you use it, it does look a bit intimidating.

But do not worry, all you need to focus on pretty much is this gizmo here. This is the most important thing in fSpy. We want each corresponding axis to line up with our image as perfectly as possible. The X axis is the horizontal line across the image. So you’ve got Z, X, and Y. These lines here are what we’re going to place manually to help the program understand what is Z, X, and Y. You can see our Y axis, so we need to mark the vanishing point. If we put this green line here, which notes the Y axis, and then this green line here on the other side, you can see it’s creating a vanishing point at the end of this road. Now, it’s quite hard to see where you’re laying these lines down, so you need to come over to the left hand side and uncheck this dim image box here. That will help you position your lines further. You can also hold Shift on the keyboard while positioning the lines, and you’ll get this lovely big magnifying glass that will help you a little bit more.

So as you see, while I’m lining these lines up, this gizmo in the middle, which I said is vital, is lining up as well and positioning itself correctly. You can see my X axis is off. I want that to be horizontal with the floor plane. So I’m going to put my Z axis here. I’m just going to find basically a vertical line in the image. It’s important to line up, for instance, my Z axis lines parallel to one another so the program can measure the distance between them. That is looking good. And if you check my gizmo, the Z axis is pointing straight upwards in line with everything else. So it’s looking good so far. And to check that your lines are in the right place, if we go down here to the 3D Guide drop-down menu and go to XYZ grid, you can then place your grid in and just make sure everything is lining up. You can switch to Box as well and just check that your box looks like it belongs in that perspective. You can also line it up to any of the lines in the image and just double check that everything is lining up nicely.
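Under the hood, each pair of axis lines you draw pins down a vanishing point, which is just the intersection of the two lines in image space. Here is a minimal sketch of that calculation (purely illustrative, not fSpy’s actual solver, which also recovers focal length and camera orientation):

```python
def line_through(p, q):
    # Homogeneous line coefficients (a, b, c) for ax + by + c = 0 through p and q.
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def vanishing_point(line1, line2):
    """Intersection of two image-space lines, each given as a pair of points.
    Returns None for (near-)parallel lines, whose meeting point is at infinity."""
    a1, b1, c1 = line_through(*line1)
    a2, b2, c2 = line_through(*line2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    x = (b1 * c2 - b2 * c1) / det
    y = (a2 * c1 - a1 * c2) / det
    return (x, y)

# Two road edges converging on the origin:
print(vanishing_point(((-2, -1), (-4, -2)), ((2, -1), (4, -2))))  # → (0.0, 0.0)
```

This is why the magnifying glass matters: a small placement error in either line shifts the intersection, and with it the whole recovered camera.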

If there’s anything out, you can adjust these lines further to get it correct. This is the final position where your projection will load up in Blender, so it’s important to try and centre this gizmo as well as possible. That’s all looking good and I’m ready to import this into Blender. So I’m going to go to File, go to Save As, and save this. And we’re going to now bring this camera projection into Blender. Step three, adding the projection camera and material to 3D geometry. I’ve just opened up a Blender project, and I’m now just going to marquee-select and delete any objects that are in the scene. Then I’m going to go up to File and Import. You should have fSpy here. If you don’t, go back to the beginning of the tutorial and install it. So I’m going to click on that and navigate to the fSpy file, which was cyber.nvi.fspy for me. I’m just going to click on Import fSpy Project File. You can see here straight away, it creates an fSpy camera up here in the scene collection, and it’s automatically projecting that scene.

Now, on your keyboard, if you press zero on the number key pad, you can see that the camera is not projecting onto any plane at all. It’s actually projecting onto itself. It’s quite clever. So I’m going to press zero again on the number key pad. This is super easy modeling. We’re just going to be using a plane. To access your planes, hold down Shift and A and go to Mesh, and across from Mesh, select Plane. So I’m going to create another window so you can see exactly what I’m doing. From the top left of this window, when I get the crosshead, I’m just going to left click and drag, and that creates you another window. And in this window you can see my plane and my projection camera there. Now I’m just going to select this plane and go from Object Mode up here into Edit Mode. I’m going to click on Edge Mode, which is here. I’m going to then select the back edge, which is here, and press G and Y. And then I’m just going to extrude that back edge right to my vanishing point down there. So this is what it looks like so far.

Remember, the Y axis is from the viewport right down to the vanishing point. I’m now going to come back down to Edit Mode and I could press S for scale and then X for scaling on the X axis. So it will just scale along the horizontal line. So I’m going to select both edges on either side of the road and then press E to extrude and then Z to make sure that it’s on the Z axis. I’m just going to come up there and extrude up to the pavement. I’m now going to select the left side and again, repeat that process. Press E to extrude and then X so it just snaps to the X axis. And again, once more, E to extrude and then X to extrude on the X axis. So I’m going to click on both edges of the sidewalk here and then press E to extrude and then Z so it snaps to the Z axis there. And I’m going to come right up there to the top of the buildings. And I’m just going to go to Edge and I’m going to then go to Bridge Edge Loops. And then again, at the back, I’m going to do the same, select both edges and then click on Bridge Edge Loops.

That is now pretty much all the modeling we need to do. If we come out of Edit Mode and come back to Object Mode, we need to go over to the Shading tab, because we want to apply the material. Once in the Shading tab, ensure your object is selected and go to New. We just want to delete the Principled BSDF node by selecting it and hitting Delete. Then select the Material Output node and, on the keyboard, hold down Ctrl and T. This activates your Node Wrangler. If nothing happened when you pressed Ctrl and T, your Node Wrangler is not enabled, so I suggest you go and enable that. To do that, go up to Edit and down to Preferences, and just type Node Wrangler into the search bar here. All you need to do is make sure that box is checked, then go to this hamburger icon and click Save Preferences. Then just repeat that process again: click on the Material Output node and hold down Ctrl and T on the keyboard, and these nodes should come up here. Now, the image texture is where we’re going to load our upscaled image. It will look a mess when you import that in, but do not worry, we’re going to fix that now.

So if you come over to the Spanner icon over here, from the Add Modifier drop-down list you want to go to Subdivision Surface. Give that a click, and it will be set by default to Catmull-Clark, but we want to switch that over to Simple. Then on Levels Viewport, we want to set that to five, and on Render, we want to make that five as well. Next we want to go back up to the Add Modifier drop-down and come over to UV Project, which is there. Now under the UV Map section here, just select that box and select the only UV map that should be in the scene. Then come down to Object, and under Object, select your fSpy camera. What we need to do is put the height and the width of our original upscaled image in here. Just go back to your image, right click, go to Properties and Details, and your resolution or your dimensions will be there. Mine is 4196 by 2796; yours will probably be different. So I’m just going to go in and type in 4196 and 2796. Now there’s a really annoying edge repeat around there, and we can change that, because that is currently set to Repeat in the settings.
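Before fixing the repeat, a quick note on those Aspect X and Aspect Y values: as far as I can tell, the UV Project modifier only cares about the ratio between the two numbers, so entering the raw pixel dimensions works just as well as any reduced pair. A tiny illustrative helper (the function name is my own, not part of Blender):

```python
from math import gcd

def uv_project_aspect(width_px, height_px):
    """Reduced Aspect X / Aspect Y pair for a UV Project modifier.
    Any pair with the same ratio behaves identically, so entering the raw
    pixel dimensions (e.g. 4196 and 2796) is equivalent to the reduced pair."""
    g = gcd(width_px, height_px)
    return (width_px // g, height_px // g)

print(uv_project_aspect(4196, 2796))  # → (1049, 699)
```

Getting this ratio wrong is what causes the projected image to look squashed or stretched on the geometry.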

So if you come back down to your image node and come down where it says Repeat, and hit that drop down box and select clip. That will give you a black area, so that makes it a lot easier to see your actual image. As you can see, we’re slowly building up our 3D scene. Now, if you click on the projection camera and move it, bad things happen. You do not want to do that. So what we need to do is just make a copy of that camera. So hold down shift and D and then right click on your mouse and that will create a copy. Now, if you go back to the original camera on the Object properties here, we need to just lock that camera into place. Just hit these padlocks and it will not move. Now we’re going to give our new camera a name and I’m going to call it Movie Cam 1. With Movie Cam 1 now selected, we just move that. Then right click on the camera and click Set Active Camera. So this is now our camera that we’re going to be using to go through our scene. So when you go in and out of your scene, just make note of what is distorting.

So you can see these bins on the right are distorting and this plant and the neon signs. I’m going to bring this tree in here as a plane and then this neon sign, I’m going to use UV projection so you can see both methods to see which suit your scene best. In this step, I’m just removing the background from the plant tree shrub thing. You can use any free software to do this. I’ve put some links in my description if you do not have Photoshop. So the first thing I’m going to do is right click on my background and click layer from background. Okie dokey. I’m going to use this great tool from Photoshop, which is the Object Selection tool. And then just simply select the object you want to select. And voila, it creates a selection of that specific object. So I’m going to press CTRL and J on the keyboard, and that just transfers the selection to another layer. So I’m just going to call that plant. And then I’m going to right click on my plant layer, duplicate layer, and go to documents, and then New, and then OK, and then to image, and trim that baby down.

I’m going to go to File and export that baby out of here. So I’m exporting it as a PNG, and I’m going to bring that in as a plane into Blender. I hope that’s plain and simple. So if we head back to our Blender scene, we can import our plant as a plane. If you hold down Shift and A on the keyboard, then go to Image and then across to Images as Planes, we then want to navigate to the plant file that we just exported as a PNG out of Photoshop. In the material settings, we need to ensure that it’s set to Emit, and then click on Import Images as Planes. And there she is. We have our plant. So I’m just going to press G and then Y and push her back on the Y axis and just position her over here. Give her a little bit of a scale up there. And you can see there, the left side is clipping into the original wall. So we want to bring it out slightly and just set it roughly where the original plant was. It doesn’t have to be in the exact same spot.

And we’re just going to then click on the click on our movie camera and move her, GY, forwards. And as you can see, we got the original stretching of obviously the neon light and the plant going on. We are actually going to use in painting in a moment to remove those. So method number two, I’m going to project onto this neon light. And in order to do that, I’m going to make a square or rectangle object for that neon light and just grab a plane. And then I’m just going to simply position that plane where that neon light is. With our object selected, we’re going to go to Object Mode, Edit, and then on the keyboard, just press A. This will select all the faces. And then on the keyboard, just press U. And then from this menu, just select Project from View. And from the material properties, either assign the original background material or create a new material, base color, image texture, open, and then again select the original background. And as you can see now, if I come out of the camera mode, you can see we actually have a 3D object. You can do that with multiple objects in your scene, especially if your character is interacting with them, walking behind them.

It usually works best as a 3D physical object, but you can also use the plane technique for objects in the foreground or the background. We obviously now want to get rid of the duplicates in the background that are in our scene. So you can see our neon light and our really stretched elements in the background. And I’ve got a super, super, very cool AI tool for you to use for this. It’s called Runway, and I can see myself using this lots and lots in future tutorials. So let’s head over to Runway. This is Runway. This is where we’re going to be essentially using their tools to do image-to-image inpainting. My GPU currently is dying a very slow death, and running things on my PC is not the way forward. Having access to all the AI tools in a browser is insane. We’ll be using this Erase and Replace tool. Simply upload the image that you want to use and use this big purple brush to paint out what you don’t want to see in the scene. I’m going to start off with these bins. So I’m going to just type in sidewalk with closed garage doors.

Fingers crossed this will work and that will magically generate a better image to work with. And here we go. It’s created a sidewalk with closed garages. That is pretty neat. Let’s have a look at what it’s given me. It’s given me a couple of options and I’m just using the arrows to rotate through them. This one probably best represents the scene, so I’m going to click Accept. So now I’m going to quickly go through this entire image and remove elements and replace them with what I want to see using the prompt feature. Once we have finished removing all the bits we don’t want in our image, we simply go and download that to a Download folder, head back into Blender and upload that into the background. So see you there. Now we’re back in Blender. Just select your alleyway object and then go to the material properties down here. We just want to replace the image with our new Runway image that we just downloaded. As you can see, it’s all coming together nicely. I’ve just switched over to my movie camera. Remember, that’s the only one we’re going to move. I’ve added a keyframe at the start of the animation, and I’ve moved her right into the scene and back again just to check for any items or objects or materials that are stretching.

But it’s looking pretty good. So we’ve got our plant there and our 3D object. You might remember in the original image, we had wires in our scene. I’m going to recreate those wires quickly because it’s quite nice to mix the 2D and 3D elements together. I’m going to hold down Shift and A, go to Mesh and add a cube. And this cube is literally just going to be where the starting point of our wire is going to be. I’m just going to scale that there. And then I’m going to Shift and D to duplicate and then right mouse click to place, and then just put that there. And then hold down Shift and select both of these cubes. So with both cubes selected, I’m going to hold down Shift and A to open up our menu, come down to the second option, which is Curve, then come down to Knots and then select catenary, catenary, catenary, catenary. I’m sure someone’s going to correct me in the comments there. And click on that, and you can see it’s created our wire straight away. We actually get an options menu here, where we can adjust the drop of the wire.
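That catenary option is solving the classic hanging-cable curve, y = a·cosh(x/a). If you ever want to generate such a wire yourself, here is a rough standalone sketch of one way to do it (pure Python with a bisection solve; this is not the Blender add-on’s actual implementation):

```python
from math import cosh

def catenary_points(span, droop, samples=17):
    """Sample a catenary hung between x = -span/2 and x = +span/2, shifted so
    the end points sit at y = 0 and the middle sags by `droop`.
    The curve parameter `a` is found by bisection."""
    half = span / 2.0
    # Solve a * (cosh(half/a) - 1) = droop for a; sag decreases as a grows.
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        a = (lo + hi) / 2.0
        if a * (cosh(half / a) - 1.0) > droop:
            lo = a  # too much sag: flatten the curve
        else:
            hi = a
    pts = []
    for i in range(samples):
        x = -half + span * i / (samples - 1)
        y = a * cosh(x / a) - a * cosh(half / a)  # ends at 0, middle at -droop
        pts.append((x, y))
    return pts

pts = catenary_points(span=4.0, droop=0.5)
```

Feeding points like these into a curve object with a bevel radius would give you the same drooping-wire look the add-on produces in one click.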

We can also increase its resolution and its actual thickness as well. We actually do want to see it in the scene, so we want it quite thick. You can go ahead and use that to add multiple wires to your scene. Let’s take a look at our 3D scene. As you can see, the geometry is super simple, and this could be put together in five minutes or less once you get the workflow down. So if I hit zero on the keyboard and change my material shader so I can see everything in the scene, and hit the space bar on the keyboard, you can see I’ve added two keyframes to this camera and it’s just simply moving into the scene. I’ve also added a Camera Shakify camera shake effect, which is super cool. The plugin is in the description below and is absolutely free and super easy to install. You just go to Edit, Preferences and install the zip. Camera Shakify, once installed, will then appear in your camera object properties. There are so many cool settings in this. The developer created presets for all different scenarios, like walking or if you’re on a bike.
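Procedural shake like this is typically just layered low-frequency noise applied to the camera’s position and rotation over time. A toy stand-in to show the idea (purely illustrative; not how the add-on is actually implemented):

```python
from math import sin

def shake_offset(t, amplitude=0.02, freqs=(1.3, 2.9, 4.1), phases=(0.0, 1.1, 2.7)):
    """Hand-held style camera jitter at time t (seconds): a small sum of
    sinusoids with unrelated frequencies per axis, so the motion never
    visibly repeats. Deterministic, so every render of a frame matches."""
    x = sum(sin(f * t + p) for f, p in zip(freqs, phases))
    y = sum(sin(1.7 * f * t + 0.5 + p) for f, p in zip(freqs, phases))
    n = len(freqs)
    return (amplitude * x / n, amplitude * y / n)

dx, dy = shake_offset(0.5)  # offsets in scene units at half a second
```

Because the jitter is a pure function of time, scrubbing the timeline back and forth always produces the same shake, which is exactly what you want for rendering.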

So this is a really cool effect to add to your camera. Also, I’ve enabled depth of field, which is built into Blender itself. You don’t have to install this. And you can actually set the distance of your depth of field or a focus object. So if you have a character in your scene, you can make the background blurry behind them and have them in focus. Part two of this series is adding our character into the scene. So please hit the notification bell and subscribe so you get that video. I hope you can take some techniques away from this video. I tried to keep it as simple as possible. So if you’re new to Blender, hopefully this is a nice introduction to using it. And of course, it’s not a perfect technique, but remember, to get our stories and our animation out there, we don’t need it to be perfect. Perfection is the enemy of done, or something like that. If you want to add me on social media, I would absolutely love that. My Instagram is @promptmuse, my Twitter is @promptmuse, and my Facebook is @promptmuse.
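A quick aside on the depth of field mentioned above: the reason backgrounds blur more the further they sit from the focus distance falls out of the thin-lens model. Blender’s renderer does the real optics for you; the sketch below is only the standard thin-lens approximation of the blur-spot (circle of confusion) size, to show the shape of the relationship:

```python
def blur_circle_mm(subject_dist_m, focus_dist_m, focal_mm=50.0, f_number=2.8):
    """Approximate blur-spot diameter on the sensor, in mm, from the thin-lens
    model: c = f^2 * |s - s_f| / (N * s * (s_f - f)), where s is the subject
    distance, s_f the focus distance, f the focal length and N the f-number."""
    f = focal_mm / 1000.0  # metres
    s, sf = subject_dist_m, focus_dist_m
    c_m = (f * f * abs(s - sf)) / (f_number * s * (sf - f))
    return c_m * 1000.0

# A subject exactly at the focus distance is perfectly sharp:
print(blur_circle_mm(3.0, 3.0))  # → 0.0
```

Longer focal lengths and lower f-numbers both grow the blur circle, which matches what Blender’s F-Stop setting does in the camera’s depth of field panel.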

And of course, I have the promptmuse.com website, where we have started an absolutely free weekly newsletter. The newsletter, as well as all the online resources on the promptmuse.com website, is absolutely free, just as a massive thank you for subscribing to this channel and hopefully the newsletter as well. And thank you to every single one of you that comments in the comments section below this video and all the other videos. I read every single one. Thanks so much for coming on this journey with me. And hopefully we’re going to have some great times and some innovations along the way. With that said, thank you so much and that will do it. Bye bye.

<p>The post How to Create a 3D Scene Using Blender and Fspy first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-to-create-a-3d-scene-using-blender-and-fspy/feed/ 0 Create a 3D Scene using AI Images nonadult
Turn AI Images into 3D Animated Characters: Tutorial https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/ https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/#respond Fri, 13 Jan 2023 17:00:13 +0000 https://promptmuse.com/?p=1298 Welcome to this tutorial on how to turn an AI generated character into a 3D animated character. This workflow can be used to create AI

<p>The post Turn AI Images into 3D Animated Characters: Tutorial first appeared on Prompt Muse.</p>

]]>
Welcome to this tutorial on how to turn an AI generated character into a 3D animated character. This workflow can be used to create AI influencers, bring a music video to life, or even create a feature film.

Before we begin, you will need a trained model to produce the head shots. You can either follow a tutorial to create your own unique trained AI model, or use the one provided in this tutorial below.

Please select whichever is compatible with your phone, as you may require a different type of adapter:
Apple Lightning to Ethernet

Ethernet cable

RESOURCES: Download my model, Redhead.ckpt, from HERE

Stable Diffusion (Use local or remote)

Step 1: Gather Pose Reference Images

Take some photos of yourself to use as headshot references. These photos will be used to ensure that the output pose of your AI generated character is consistent when it is run through stable diffusion. It is important to note that the reference images do not need to look like the final character.

Step 2: Use the Automatic1111 web UI (you can use either local or remote; I’ll add a tutorial soon!)

Use the Automatic1111 web UI to run Stable Diffusion 1.5. Load your Redhead.ckpt into the models folder within the Automatic1111 directory.

Step 3: Run Stable Diffusion

In Stable Diffusion, select your redhead.ckpt from the drop-down list. Navigate to the img2img tab and upload your front, side, and perspective headshot references.

Step 4: Create consistent images of your character

Use your reference images as an img2img reference to create consistent images of your character.
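If you prefer to drive this img2img step from a script, the Automatic1111 web UI exposes a local HTTP API when launched with the --api flag. The endpoint and field names below reflect my understanding of that API and may differ in your version, so treat this as a sketch rather than gospel:

```python
import base64

def img2img_payload(image_path, prompt, denoising_strength=0.45, seed=1234):
    """Request body for the Automatic1111 web UI img2img endpoint
    (POST /sdapi/v1/img2img, available when the UI is started with --api).
    A fixed seed plus a moderate denoising strength is one way to keep the
    character consistent across the front/side/perspective references."""
    with open(image_path, "rb") as fh:
        init_image = base64.b64encode(fh.read()).decode("ascii")
    return {
        "init_images": [init_image],
        "prompt": prompt,
        "denoising_strength": denoising_strength,  # lower = closer to the reference
        "seed": seed,                              # fix it for repeatable results
    }
```

Post the returned dict as JSON to http://127.0.0.1:7860/sdapi/v1/img2img (the default local address) with a library such as requests, once per reference image.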

With these steps, you should now have consistent images of your AI generated character, ready to be turned into a 3D animated character. Be creative and experiment with different poses and animations to bring your character to life!

Blender

Use the FaceBuilder plug-in to create a 3D head mesh based on the reference images. This tool is very useful, as the sculpting tools in MetaHuman are limited and can be very laggy. However, this stage is optional.

Step 1: Download and install Blender here (it’s free) and the FaceBuilder plug-in by KeenTools here

Step 2: Open Blender and import your reference images

Step 3: Use the FaceBuilder plug-in to create the 3D head mesh

Step 4: Export your head mesh as a .fbx file.

 

Note: The creator of this tutorial is not paid in any way to promote the FaceBuilder plug-in. It is just a tool that they found useful and thought others may also find helpful.

With these steps, you should now have a 3D head mesh based on your reference images. You can now continue to the MetaHuman Creator section to bring your character to life with animations and other features.

Epic Launcher & Unreal

Step 1: Follow this link here to download the Epic Games Launcher and Unreal Engine.

Please avoid 5.1 (the new release) due to compatibility issues with MetaHumans. I’m sure there will be an update soon to fix a few of the issues, but until then I’d advise downloading Unreal version 5.0.3.

Once the above is installed, get Quixel Bridge for Unreal Engine:

https://docs.unrealengine.com/5.0/en-US/quixel-bridge-plugin-for-unreal-engine/

 

<p>The post Turn AI Images into 3D Animated Characters: Tutorial first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/feed/ 0 Turn AI Images into 3D Animated Characters: Tutorial nonadult
How to use AI to Render in 3D – It’s here https://promptmuse.com/how-to-use-ai-to-render-in-3d-its-here/ https://promptmuse.com/how-to-use-ai-to-render-in-3d-its-here/#respond Thu, 01 Dec 2022 00:08:35 +0000 https://promptmuse.com/?p=477 Guys, it’s here. We finally have AI in a 3D programme. My phone’s gone. Well, kind of. Let me explain. It takes your primitive objects

<p>The post How to use AI to Render in 3D – It’s here first appeared on Prompt Muse.</p>

]]>
Guys, it’s here. We finally have AI in a 3D programme. My mind is blown.

Well, kind of. Let me explain. It takes your primitive objects and your prompts, combines them, and creates an AI render from the perspective that you want. It’s finally here. I cannot tell you how many hours I have spent in Midjourney putting the camera angles in place to try and get the perspective right. So imagine that this is the baseline of what’s to come. The future of AI rendering is definitely going to be integrated into 3D. I mean, David Holz has already suggested that they’re working on something that will be released next year. Very, very exciting. Before we dive into the tutorial, I just want to give you a brief overview and show you how powerful this plugin actually is. This plugin now means that we can create AI renders from any perspective. So I’ve quite literally thrown down some very primitive shapes here. And if I just hit Render, I’ve got my prompt already set up there over on the right, and you can see it’s rendered me a train in that perspective with trees behind it. And that is what I’ve asked for in the prompt. The plugin that you need to use is called AI Render: Stable Diffusion in Blender.

To get hold of this plugin, just go to Blender Market. The link is in my description below. You will need to log in and make an account, but it’s absolutely free. If you want to support the developer, you can give a donation here. But if you don’t have the money at the moment, you don’t have to pay anything. You can click $0 and then click on Purchase, and once added, go to the cart, check out and get your download for free. Once you’ve checked out and downloaded that zip, you need to go into Blender, go to the top horizontal toolbar, click Edit, then go down to Preferences and then Add-ons. On the top horizontal toolbar, click on Install and navigate to the zip file you just downloaded. It should be called ai-render. Okay? And just install the add-on. If you don’t see it straight away, just type Stable into the search bar and it should come up. Ensure the checkbox has a tick in it. And then if you expand down, you will see Sign up for Dream Studio.
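One habit worth keeping from the start, since that API key really must stay secret: load it from an environment variable rather than pasting it into scripts or files you might share. A small illustrative helper (the variable name is my own convention, not something the add-on itself reads):

```python
import os

def load_api_key(var="DREAMSTUDIO_API_KEY"):
    """Fetch a Dream Studio API key from an environment variable so it never
    gets committed to a script, a .blend file or a screen recording."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable first")
    return key
```

Set the variable once in your shell profile, and any helper script can pick the key up without the key itself ever appearing on screen.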

You do need an account, and if you don’t have one, just create one here by clicking on this button. Once you’ve logged in, navigate to the API key section and create an API key. Keep this absolutely secret. Just click on Copy, then go back to Blender and you will see the API key section here. Just paste it back in there. And to save all the settings, you just need to go to this hamburger icon down here and click Save Preferences. Okay, so the plugin is now installed. This is a default scene, so I’m just going to click on the cube and hit Delete on the keyboard. Then I’m going to hit Shift and A, and under Mesh, Plane, I’m going to put a plane down and just scale it up, bigger than that. I’m going to Shift and A once again and, under Mesh, go to Torus, and again, scale that up. I’m just going to move that upwards slightly and then hit zero on my keyboard; this will give me my camera viewport. If I go up here and click on Viewport Shading, I want to change the colours of my objects to help the code distinguish each object from one another.

I’m going to click on the donut and then the material slot, and I’m going to create a new colour and make it a kind of brown, doughnutty colour. Then I’m going to click the plane and again just make it a white colour, and that’s it. We’re done. If you go over to Render Properties, we are now going to enable AI under the AI Render tab. If you click on that and then click on the question mark next to the image size, it’s set to 512 x 512 by default. And that’s fine for me because I want to keep the render times low, so click OK. You must do this, otherwise you will get an error message while rendering. Then you can see you’ve got your prompt down here. Remember, this is based on Stable Diffusion code. So if you’re used to using Dream Studio or Stable Diffusion itself, you can use the same prompts in here, and that should help. Now, if you see this lady’s face here, click on that and you will see all the preset styles that are within this plugin. I’m going to use the Product Shot preset, and I’m going to give the donut a description of donut, of course, with sprinkles, realistic food photography, 8K, and we’re done.

We just head over to Render on this top horizontal toolbar and then click Render Image. You can hit the shortcut F12 if you prefer, and we should get a donut. So that’s pretty cool. We’ve got a donut in that perspective. Now, if we scroll down here and click on Operations, we can create a new image from the last render. So if that’s not quite the donut you wanted, you can click on this, and what it will do is create you a new render from this rendered image rather than from the simple geometry. So if we click on that and see what it gives us, it’s given us a pretty realistic donut, which is great for overpainting or using as stock imagery. You will also probably notice that you are in this AI Render view. To get back to your geometry, you just click Layout, and there you go. Press zero again to come out of the camera view, and it’s that simple. This is a great example of the power of this plugin and how quickly this technology is evolving. As you can see, I’ve made this very rudimentary background, mountains with a lake, and I’ll hit zero to go in, so let’s see what it generates.

So go up to Render and Render Image, and look at that. That is amazing. It has created that from my rudimentary geometry. You can see the direction these plugins are going in and how the evolution of this technology is coming along. As you can see, it’s not exactly there yet, but it definitely is coming. You can’t do 3D animation just yet and, as far as I’m aware, you can’t animate from Blender. But I know in the coming days that should come, and of course I will report on it when it does. Thank you to Ben from AI Render for creating this fantastic bridge plugin. If you like this video, hit subscribe and like. If you don’t like this video, hit subscribe and like. This is just a quick overview to show you and demonstrate how powerful the baseline of AI within a 3D programme is going to be. I am so, so excited for what’s to come. Because if I haven’t told you before, I used to be a professional 3D artist. So guys, we are nearly on 500 subscribers. We are on 497. So I need three more subscribers, guys, to get 500.

And that will mean I’ve got 500 subscribers. Okay, thanks. Bye.

<p>The post How to use AI to Render in 3D – It’s here first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-to-use-ai-to-render-in-3d-its-here/feed/ 0