
The best AI is still free! Forge & Flux for FREE (Easy Guide) + Animation

Introduction

Welcome to my tutorial on how to use Forge UI and the model Flux to create and animate images for free. This step-by-step guide will help you harness the power of these free AI tools and bring your creative ideas to life with ease. Please see the accompanying video (https://www.youtube.com/watch?v=dbqiz6jt0o8) for the step-by-step process.

Forge and Flux are powerful, free AI-driven tools that allow you to create stunning images and animations effortlessly. Whether you’re an artist, content creator, or just curious about AI, this guide will walk you through everything you need to get started.

System Requirements

  • Windows Tutorial: The video is aimed at Windows users, with a minimum VRAM requirement of 6 GB for Forge.
  • Storage Requirements: Forge requires at least 1.7 GB, while Flux needs between 12-17 GB, depending on the version you use (NF4 or FP8).
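If you are unsure whether your GPU meets the 6 GB VRAM minimum, a quick check is sketched below. This is a minimal sketch, assuming you have PyTorch with CUDA support installed (Forge's own environment ships with it); it is not part of the original guide.

```python
# Minimal sketch: check whether your GPU meets Forge's 6 GB VRAM minimum.
# Assumes PyTorch with CUDA support is installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    print("Meets the 6 GB minimum" if vram_gb >= 6 else "Below the 6 GB minimum")
else:
    print("No CUDA-capable GPU detected.")
```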

Installing Forge UI

Forge is popular because it offers a clean, user-friendly interface that makes AI image generation accessible to both beginners and advanced users. It’s free, open-source, and provides compatibility with various AI models, including Flux and Stable Diffusion, allowing for versatile creative projects. The tool is optimized for fast performance, particularly for users with strong GPUs, and supports LoRA models for further customization. Forge’s ability to deliver unlimited image generation without subscriptions, coupled with its easy-to-use design, has made it a go-to option for those looking for a powerful yet efficient AI tool.
 
Download Forge Here:

Forge Download: https://github.com/lllyasviel/stable-diffusion-webui-forge
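If you prefer the command line, cloning the repository looks like the sketch below. This assumes Git is installed and on your PATH; on Windows you would then run webui-user.bat from the cloned folder to launch Forge.

```python
# Minimal sketch: clone the Forge repository with Git via Python's subprocess.
# Assumes Git is installed; launch Forge afterwards with webui-user.bat
# (Windows) from the cloned folder.
import subprocess

subprocess.run(
    ["git", "clone",
     "https://github.com/lllyasviel/stable-diffusion-webui-forge.git"],
    check=True,
)
```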

 

 

Installing Flux

Flux is a popular AI model known for its ability to generate high-quality images, from hyperrealistic art to anime and digital paintings, while also excelling at comprehending and integrating text within images. It is designed to work efficiently on lower-end GPUs, making it accessible to a broader range of users, even those with limited hardware. With its compatibility in tools like Forge, it allows for flexible creativity and fast performance, offering a streamlined experience for AI-based image generation without sacrificing quality. These features, combined with its ability to run on systems with moderate VRAM, make Flux a go-to model for AI enthusiasts.

Download Flux Here:

Flux Download: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981

NF4 Version:

  • Recommended for VRAM between 6 GB and 12 GB.
  • This is the developer-recommended version because it is very fast and efficient.
  • Ideal for users with moderate hardware specifications. It provides a good balance between speed and image quality while taking up less space.

FP8 Version:

  • Recommended for VRAM of 12 GB and higher.
  • This version is more demanding on your system and offers higher precision and quality. It is suitable for users with more powerful GPUs.
  • If you have ample VRAM (12 GB or more), the FP8 version can generate more detailed images, though it will require more system resources and take up more storage.

Choosing Between NF4 and FP8:

  • If you have 6-12 GB of VRAM, it’s suggested to go with the NF4 version, as it’s optimized for speed and performance with lower hardware requirements.
  • For users with 12 GB of VRAM or more, you can opt for the FP8 version for higher-quality image generation.
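Once you have downloaded your chosen version, it needs to go into Forge's model folder. Below is a minimal sketch of that step; the install path and filename are assumptions, so adjust them to match your own download.

```python
# Minimal sketch: move a downloaded Flux checkpoint into Forge's model folder.
# FORGE_DIR and the filename are assumptions -- adjust them to your install.
from pathlib import Path
import shutil

FORGE_DIR = Path(r"C:\stable-diffusion-webui-forge")  # assumed install location
downloaded = Path.home() / "Downloads" / "flux1-dev-bnb-nf4.safetensors"  # assumed filename
target = FORGE_DIR / "models" / "Stable-diffusion"

target.mkdir(parents=True, exist_ok=True)
shutil.move(str(downloaded), str(target / downloaded.name))
print(f"Moved {downloaded.name} -> {target}")
```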

 

Prompt I used in the YouTube video:

    Cinematic composition, Digital art. The central figure is a young woman with long, flowing, silver-white hair cascading down from her head, blending seamlessly with the surrounding white to form a surreal, ethereal background. Her expression is calm and serene, suggesting a state of relaxation or meditation. She is surrounded by a white Japanese dragon, whose smooth body twists around hers, the end of the dragon’s body turning to dust. She is facing forwards towards the viewer with her eyes open, and has one hand elegantly placed on the dragon. She has extremely sharp, long red nails. There is a glowing ring light in the background. The overall style is a blend of fantasy and majestic science fiction, with a high level of detail and smooth, polished textures.

 
Steps: 40, Sampler: Euler,
Schedule type: Simple, 
CFG scale: 1, 
Distilled CFG Scale: 3.5, 
Seed: 739566982, 
Size: 896×1152,
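For reference, here is how those same settings map onto a programmatic call. This is a minimal sketch, assuming Forge was launched with the --api flag and exposes the Automatic1111-compatible endpoint on the default port; the "distilled_cfg_scale" key name is an assumption and may differ between builds.

```python
# Minimal sketch: send the settings above to Forge's A1111-style txt2img API.
# Assumes Forge was started with --api on localhost:7860; the
# "distilled_cfg_scale" key is an assumption and may vary by build.
import json
import urllib.request

payload = {
    "prompt": "Cinematic composition, Digital art, ...",  # full prompt from above
    "steps": 40,
    "sampler_name": "Euler",
    "scheduler": "Simple",
    "cfg_scale": 1,
    "distilled_cfg_scale": 3.5,
    "seed": 739566982,
    "width": 896,
    "height": 1152,
}
req = urllib.request.Request(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(f"Received {len(result.get('images', []))} image(s)")
```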
 

Conclusion

Congratulations! You’ve learned how to use Forge and Flux to create stunning AI-driven images and animations for free. With these tools, the possibilities for creativity are endless. Don’t hesitate to experiment with new features and explore more advanced techniques.

 

Resource links:

  • https://blackforestlabs.ai/

Thank you for joining me, and happy image creation!

Promptmuse X
 #fluxai #forge #forgeui #fluxanimation #flux


How To Install Flux On ComfyUI

Installing Flux AI on ComfyUI

Introduction

Welcome to this written tutorial on how to install Flux on ComfyUI. The accompanying video can be found here:

What is Flux?

Flux, released by Black Forest Labs in mid-2024, launched with day-one support in ComfyUI, enhancing its image generation capabilities. It is now also available in web UIs such as Automatic1111 and Forge.

The Flux AI model excels in prompt adherence, producing high-quality images with accurate anatomy and demonstrating strong capabilities in generating text.

 

Flux was developed by Black Forest Labs, and the model is freely available to use locally; however, its terms and conditions have to be agreed to.

It’s important to use it responsibly and within the bounds of ethical guidelines.

Resource links:

  • https://blackforestlabs.ai/
  • https://github.com/comfyanonymous/ComfyUI_examples/tree/master/flux

 

 

Hardware Requirements:

FP8
  • Overview: Optimized for low VRAM; best for efficient development.
  • Visual quality: Good, though it performs slower than the Schnell version.
  • Image detail: Good.
  • VRAM requirement: 8-12 GB+ (some users report running it with 8 GB, but at a push).
  • Model size: 17.2 GB.

Schnell
  • Overview: The fastest version for lower-end GPUs, optimized for speed.
  • Visual quality: Compromises on visual quality for speed; less detail.
  • Image detail: Good.
  • VRAM requirement: 8-12 GB+.
  • Model size: 17.2 GB.

Dev
  • Overview: Great for quality and efficiency.
  • Visual quality: Better details and more prompt adherence; high quality.
  • Image detail: High.
  • VRAM requirement: 12 GB+.
  • Model size: 23.8 GB.
For all versions, make sure you have an updated ComfyUI: go to the ComfyUI Manager, click Manager > Update All, and allow the update process to complete.

1. FP8

An optimized version for users with more limited GPU resources.

Download Flux FP8

Save the flux1-dev-fp8.safetensors file into the ComfyUI\models\checkpoints folder on your PC.

Load up ComfyUI and update via the ComfyUI Manager (Manager > Update All).

Download the simple Flux workflow below and drag and drop the JSON file into ComfyUI. Alternatively, load it in via the Manager, then Update All.

2. Schnell

The fastest version, optimized for speed on lower-end GPUs.

Download Schnell Model here and put into ComfyUI > models > unet.

Download the VAE here and put it into ComfyUI > models > vae.

Download the CLIP models clip_l.safetensors and t5xxl_fp8_e4m3fn.safetensors and put them into ComfyUI > models > clip.

Load up ComfyUI and update via the ComfyUI Manager (Manager > Update All).
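To double-check that everything landed in the right place before relaunching, a quick sketch like the one below can help. The ComfyUI path and exact filenames are assumptions; match them to what you actually downloaded.

```python
# Minimal sketch: verify the Flux Schnell files are in the expected ComfyUI
# folders. COMFY and the filenames are assumptions -- adjust to your setup.
from pathlib import Path

COMFY = Path(r"C:\ComfyUI")  # assumed install location
expected = [
    COMFY / "models" / "unet" / "flux1-schnell.safetensors",
    COMFY / "models" / "vae" / "ae.safetensors",
    COMFY / "models" / "clip" / "clip_l.safetensors",
    COMFY / "models" / "clip" / "t5xxl_fp8_e4m3fn.safetensors",
]
for f in expected:
    print(f"{'OK     ' if f.exists() else 'MISSING'} {f}")
```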

3. Dev

 

Intended for development and personal use; capable of producing high-quality images, but requires more powerful hardware than the Schnell version.

Download the .safetensors model here and put it into ComfyUI > models > unet.

Download the following two CLIP models, and put them in ComfyUI > models > clip.

(https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main)
  • Download the ae.safetensors VAE file and put it in ComfyUI > models > vae.

 

Download the FLUX model

    Go to the Flux dev model Hugging Face page. Sign up (it’s free) or log in, and accept the terms and conditions.

    Click on the Files and versions tab and download the flux1-dev.safetensors file (see image below).

[Image: Load in JSON workflow]

 

Resource links:

  • https://blackforestlabs.ai/
  • https://github.com/comfyanonymous/ComfyUI_examples/tree/master/flux

Thank you for joining me, and happy image creation!

Promptmuse X
 #fluxai #fluxcomfyui #comfyui


How I Made A K-Pop Singer: Full Process with phone mocap!

Character Creator & Blender Pipeline

Introduction

Welcome to this written tutorial on creating 3D characters with Character Creator, Blender, and Unreal Engine. Please see the full video here.

In this tutorial, I’ll guide you through the entire process of creating 3D characters using Character Creator and the free CC Blender Pipeline plugin, from character creation to importing into Blender and finally into Unreal Engine. We’ll cover every step in detail and try out some new plugins! Please note that Reallusion currently has a competition to win cash prizes; check it out here!

What You’ll Learn

  • How to quickly put together a 3D character using Character Creator and the Blender pipeline.
  • How to integrate the plugin into your workflow, bring everything back into Character Creator, and add mocap.

Prerequisites

  1. Character Creator: Used to create and export your character (currently 40% off with a 30-day trial).
  2. Unreal Engine: Ensure you have it installed.
  3. CC Rig Plugin: Available on the Epic Store Marketplace.
  4. MetaHuman Plugin: Install from the Marketplace.
  5. Unreal Auto Setup: Download from the Reallusion website.
  6. Live Link Face App: Free via your phone’s app store.
  7. Ensuring All Plugins Are Active In Unreal Engine:

    To ensure a smooth workflow, you need to make sure all necessary plugins are active in Unreal Engine. Follow these steps:

    1. Activate Required Plugins:
      • In the Plugins window, use the search bar to find each of the required plugins:
        • Quixel Bridge
        • CC Rig Plugin
        • MetaHuman Plugin
        • Unreal Auto Setup
        • Live Link
        • Apple ARKit (for ARFaceKit functionality)
      • Make sure each of these plugins is enabled by checking the box next to their names.
      • Click Restart Now if prompted to restart Unreal Engine after enabling plugins.

Step 1: Creating Your 3D Character

We start with Character Creator to design and customize our 3D character. This powerful tool allows you to sculpt detailed and lifelike characters with ease. Using sliders, you can easily adjust facial features, body proportions, and clothing to match your desired look.

[Image: Designing the K-Pop character in Character Creator]

Importing the Character into Blender

Once our character is ready, we’ll use the free CC Blender Pipeline plugin to export the character into Blender. This plugin simplifies the process, ensuring that all character details are preserved. In Blender, we’ll begin sculpting and using additional plugins to paint details and create stylized hair quickly.

[Image: Exporting the character using the CC Blender Pipeline]
[Image: Using the sculpting tools in Blender on the CC character]

Step 2: Creating a Stylized 3D K-pop Outfit

If you would like access to a premade marketplace of clothes, the Reallusion Content Store has many assets that can be downloaded directly into your CC program. This makes it easy to get quality clothing onto your character. Click here to enter the Content Store.

If you would prefer to make your own clothes, this comes with an additional learning curve and the expense of a third-party product. I get asked regularly about creating clothes from scratch, so I thought I’d try out a product called Style3D Atelier, which we will use to design a bespoke K-pop-style outfit for our character.

 
[Image: The Character Creator Content Store for all your clothing needs]
[Image: The final outfit]
Step 3: Adding Mocap with Move.ai One

Once our character is complete, we’ll add motion capture (mocap) using Move.ai One. This tool claims to produce mocap using just one camera. Although it may not be as precise as mocap suits or the multi-camera option (Move.ai Pro), I’m curious to see its capabilities.

Setting up Move.ai was straightforward: I simply downloaded the app on my iPhone and recorded using the free trial, capturing 10-second intervals and uploading the finished .fbx files to my PC. My first impressions were good. I knew it would be a wobbly animation that would require mocap clean-up, but it was good enough for my little example.

[Image: Using Move.ai to create mocap]

Step 4: Cleaning Up Mocap Data

Of course, some clean-up is necessary, especially when you are recording mocap from one iPhone! I recommend using iClone for this, which will help you clean up the mocap fast. If you’re on a budget, Blender is a great alternative; however, it has more of a learning curve and might not be as intuitive as iClone.

I imported my 3D character into iClone and used a free retargeting file to streamline the process. It was easy and enjoyable!

I repeated the clean-up steps for each mocap import from my phone until I had a good selection of dance moves, which I saved in my iClone library for future use.

[Image: Cleaning up mocap in iClone]

Step 5: Exporting To Unreal Engine

At this point, you can export your character to Blender using the CC Pipeline. However, I opted to take my character into Unreal Engine so I could use the iPhone to record my facial animations. (You can do this in iClone with AccuFACE.)

I simply exported my character without any animations as an FBX file, and then exported each of my animations as an FBX.

In Unreal Engine, I imported my skeletal mesh and animation sequences into the content drawer, added them to a new level sequence, and converted my character to a CC Control Rig to enable facial animation.

[Image: Cleaning up mocap in iClone]
[Image: Importing the skeleton and animations into UE]

Step 6: Recording Animation

To record in Unreal Engine you need to set up a Level Sequence; think of this as your timeline to add animation to. Below are the steps to set up your Level Sequencer and record your facial animation to the sequencer via your iPhone. I converted my imported skeletal mesh to a CC Control Rig, a fantastic free plugin which you can grab here. This plugin allows me to adjust my animation further in Unreal Engine and to copy and paste animation onto the facial controls from my MetaHuman.

Recording Singing with MetaHuman and LiveLink

To keep things organized and efficient, I opened a separate project for this step. I imported a MetaHuman into my scene, used LiveLink and the Live Link Face app on my phone to record singing, and exported the animation as an .FBX file. Finally, I imported this into my original project and applied it to my character’s facial control rig.

[Image: Importing the skeleton and animations into UE]
[Image: Putting everything together in Unreal Engine]

DOWNLOAD LINKS

Promptmuse X
 #CharacterCreator #iClone #RiggedCharacter #UnrealEngine #UE #Controlrig


Mocap with Custom Characters

[Image: Epic Games Face App mocap for custom characters]

Introduction

Welcome to this written tutorial on how to animate custom characters in Unreal Engine using the Live Link Face app. This guide will show you how to easily transfer facial animations from your iPhone to your custom characters, including setting up body animations. Full video here

What You’ll Learn

  • How to set up and use the Live Link Face app with Unreal Engine
  • How to import and animate characters from Character Creator
  • How to add body animations to your characters

Prerequisites

  1. Character Creator: Used to create and export your character (currently 40% off with a 30-day trial).
  2. Unreal Engine: Ensure you have it installed.
  3. Quixel Bridge: Download and install from the Epic Store Marketplace.
  4. CC Rig Plugin: Available on the Epic Store Marketplace.
  5. MetaHuman Plugin: Install from the Marketplace.
  6. Unreal Auto Setup: Download from the Reallusion website.
  7. Live Link Face App: Free via your phone’s app store.
  8. Ensuring All Plugins Are Active In Unreal Engine:

    To ensure a smooth workflow, you need to make sure all necessary plugins are active in Unreal Engine. Follow these steps:

    1. Activate Required Plugins:
      • In the Plugins window, use the search bar to find each of the required plugins:
        • Quixel Bridge
        • CC Rig Plugin
        • MetaHuman Plugin
        • Unreal Auto Setup
        • Live Link
        • Apple ARKit (for ARFaceKit functionality)
      • Make sure each of these plugins is enabled by checking the box next to their names.
      • Click Restart Now if prompted to restart Unreal Engine after enabling plugins.

Step 1: Create an Unreal Project 

(Installing the Auto Setup as a bridge from Character Creator to Unreal Engine)

  1. Launch Unreal Engine and create a new blank project.
  2. Name the project (e.g., “AutoTutorial”) and create it.
  3. Close the project to install the necessary files.
[Image: Initial setup of the mocap project in Unreal Engine, preparing to animate MetaHuman characters]

Step 2: Install Unreal Auto Setup

  1. Download the Auto Setup from the Reallusion website and unzip it.
  2. Run the setup executable file.
  3. Copy the contents and plugin folders from the downloaded files to your Unreal Engine project folder (e.g., Documents > Unreal Engine > AutoTutorial).
  4. Replace the files when prompted.
 
[Image: Downloading the Auto Setup plugin from Reallusion for seamless integration with Unreal Engine]
Step 3: Import Your Custom Character
  1. Open Character Creator and select your character (it has to be a CC3 character).
  2. Export as FBX with Unreal Engine as the target preset: File > Export > FBX > Clothed Character.
  3. Import the character into your Unreal Engine project, ensuring the CC Control Rig is installed.

[Image: Exporting a character from Character Creator to create a control rig in Unreal Engine]

 Step 4: Setup MetaHuman

We are now going to use a MetaHuman as a dummy to record our facial animations onto.

  1. Import a MetaHuman character via Quixel Bridge and add it to your Unreal project.
  2. Set up the Live Link Face app on your iPhone and ensure it is connected to your computer.

Step 5: Connect MetaHuman to Live Link

  1. In Unreal Engine, select your MetaHuman character.
  2. Set up the Live Link connection in the details panel and ensure facial tracking is active.

Step 6: Recording Animation

To record in Unreal Engine you need to set up a Level Sequence; think of this as your timeline to add animation to. Below are the steps to set up your Level Sequencer and record your facial animation to the sequencer via your iPhone:

Create a New Level Sequence:
In the Content Browser, right-click and go to Animation > Level Sequence.
Name your sequence and double-click to open it in the Sequencer.

 

Add Your Metahuman to the Sequence:
In the Sequencer window, click the + Track button.
Select Actor to Sequencer, then choose your Metahuman character from the list.

 

Start Recording:
Click the Record button in the Sequencer toolbar (the red button on the left of the screen); a countdown will begin.
During the countdown, ensure your ARFaceKit device is capturing your facial movements, then perform the desired expressions and movements.

Stop Recording:
Once you’ve finished the performance, click the Stop button in the Sequencer toolbar.
The recorded animation will appear as keyframes in the Sequencer timeline.

 

Review and Edit the Animation:
Scrub through the timeline to review the recorded animation.
You can adjust keyframes, refine movements, and blend animations as needed for a polished result.

 

Save Your Work:
Always save your Level Sequence and project to avoid losing any progress.
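If you prefer scripting the sequence creation over the right-click menu, Unreal's built-in Python API can do it. This is a minimal sketch, assuming the Python Editor Script Plugin is enabled; the asset name and /Game path are illustrative, not from the original guide.

```python
# Minimal sketch: create the Level Sequence from the steps above using
# Unreal's Python API. Requires the Python Editor Script Plugin; the asset
# name and package path are illustrative assumptions.
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
sequence = asset_tools.create_asset(
    asset_name="FaceMocapSequence",
    package_path="/Game/Sequences",
    asset_class=unreal.LevelSequence,
    factory=unreal.LevelSequenceFactoryNew(),
)
print(f"Created: {sequence.get_path_name()}")
```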

Step 7: Baking The Key Frames

  1. After stopping the recording, select the recorded track in the Sequencer.
  2. Right-click on the track and choose Bake To Control Rig > Face_ControlBoard_CtrlRig. This process converts the Live Link data into keyframes, which we can now copy and paste onto our custom character.
[Image: Baking Live Link animation data into keyframes for a MetaHuman character in Unreal Engine]

Step 8: Copying the Keyframes to Your Custom Character

  1. Select the baked keyframes in the Sequencer for your MetaHuman character.
  2. Right-click and choose Copy.
  3. Add your Character Creator (CC) character to the Sequencer by clicking the + Track button and selecting your CC character.
  4. Navigate to the appropriate track on your CC character where you want to paste the keyframes.
  5. Right-click on the track and choose Paste to apply the baked keyframes to your CC character.

Conclusion

That’s it for this tutorial on using the Live Link Face app to animate custom characters in Unreal Engine. If you have any questions or want to share your creations, feel free to tag me on social media @PromptMuse.

DOWNLOAD LINKS


CC Control Rig Is Here

Rigging custom characters in Unreal Engine can be a tedious process, but the new CC Rigs plugin changes everything. This innovative tool converts CC3+ characters into Metahuman rigs for both body and face, making it easier than ever to create custom characters—whether human, creature, or cartoon. Say goodbye to the limitations of custom character creation in Unreal Engine. This is a true game-changer.

Key Features of the UE CC Rigs Plugin:

  •  
  • Wide Compatibility: Although CC Rig was primarily created to import Character Creator 3D models, this tool allows for Imports from popular platforms like Daz3D, Maya, and Blender, automatically converting the skeletal body for use in Unreal Engine via the Accurig free plugin, this then can be convert to CC RIG (Reallusion) 

  •  
  • Enhanced Animation Editing: Allows detailed animation editing within Unreal Engine, including importing motion files, creating custom animations, and organizing motions using the Perform List Editor, you can even transfer Metahuman control rig animations over….yeah!  (Reallusion) (Reallusion).

  •  
  • Custom MetaHumans: Facilitates the creation of custom MetaHumans by transferring CC head shapes and textures to Unreal Engine, producing highly realistic digital actors compatible with Unreal Engine and iClone (Reallusion).

  •  
  • Asset Optimization: Using Character Creator workflow this provides tools for importing, editing, and optimizing character assets, including mesh conversion, InstaLOD optimization, and prop management (Reallusion Magazine).

  •  
UE CC Rigs By Reallusion
UE CC Rig Allows you to convert Characte rcreator model to the UE Control Rig

Getting Started:

  1. Character Creation:

    • Use Character Creator, available with a 30-day trial. Start with a base mesh template to simplify the initial steps. Adjust the facial profile and body using the intuitive slider system.

  2. Additional Sculpting:

    • If you need further customization, use the Blender pipeline or the GoZ plugin for ZBrush to sculpt your character and bring it back into Character Creator.

  3. Importing External Characters:

    • Import characters created externally and convert their rigs to CC3+ compatible rigs using AccuRIG.

  4. Extensive Customization:

    • Add hair, skin textures, clothes, and accessories to your character. Utilize Reallusion’s extensive catalog, or create your own in Blender or ZBrush. It’s also compatible with Marvelous Designer.

  5. Animation Process:

    • Once your character is ready, export it from Character Creator into Unreal Engine to convert into CC Rigs. Personally, I prefer using iClone for animation: I transfer my character with the one-click solution into iClone, set up my animations, and then convert into CC Rigs.

  Step-by-Step Guide to Using CC Rigs in Unreal Engine

    1. Install the Plugin:

      • CC Rigs is available for free on the Epic Games Marketplace.

[Image: Free download of UE CC Rigs on the Epic Games Marketplace]

    2. Enable the Plugin:

      • Create or use an existing Unreal project. Load a project and navigate to Settings > Plugins (top right of the screen), search for Character Creator UE Control Rig, and enable it. Restart (the Restart button will be at the bottom of the window).

    3. Transfer Your Character:

      • Use iClone Unreal Live Link or Auto Setup (free) to transfer your character to Unreal Engine.
      • If you are using the Auto Setup route, install it from here for free and follow the instructions to set it up in UE. Once installed, go to File > Export FBX > Clothed Character, target Unreal Engine, and check "delete hidden mesh".
      • Then import into Unreal, ready to convert to UE CC Rigs.

[Image: Transfer your character from Character Creator or iClone]

    4. Convert to CC Rig:

      • In Unreal Engine, find your character’s skeletal mesh in the content panel, right-click, and choose Create CC Rig from the pop-up menu.

    5. Animation Setup:

      • Add your character’s blueprint to the sequencer, load animations, and bake keyframes for easy editing. (Much easier to see this in the video.)

    Final Touches and Animation Refinement

    • Refine your animations by adding keyframes directly in the sequencer. This system will significantly improve your workflow and reduce your timelines.

    Final thoughts:

    This plugin is a game-changer for Character Creator and iClone users. If you’re not already a user, try the 30-day trial. I’m passionate about Reallusion’s products and wouldn’t recommend them if I didn’t believe they are great.

    Links and Resources:

Let me know if this exciting news helps you, whether you’re a studio, an animator, or an enthusiast. And as always, it’s been a pleasure! Thanks for reading and watching my video! Leave a comment on the video to let me know someone has read this, haha!

PromptMuse x
 #CharacterCreator #iClone #RiggedCharacter #UnrealEngine #UE #Controlrig

Custom Blender Hair To Metahuman Guide
Introduction

Tired of the default hair options for Metahumans in Unreal Engine? This comprehensive guide will walk you through creating and customizing Metahuman hair using Blender. This process involves no plugins and uses entirely free software.

Please Join my Patreon to get access to Blender Resources and additional guides! PromptMuse | 3D & AI Learning Community | Patreon

Prerequisites

Basic familiarity with Blender and Unreal Engine. This tutorial uses 100% free software.

If you don’t have these installed, refer to my getting started with Unreal Engine and Metahumans here (Coming soon).


1.1 Create a Metahuman

1.2 Setting Up your Unreal Project

  • Open Unreal Engine via the Epic Launcher, navigate to the Marketplace on the top tab, search “Metahuman”, and download the free ‘Metahuman Lighting’ rig.
  • Create a new project and launch it.
  • In Unreal Engine, navigate to Window > Quixel Bridge.
  • In Quixel Bridge, download your Metahuman by selecting “My Metahuman” from the left-hand navigation window. Once downloaded, select your Metahuman’s card and, from the right-hand panel, select “Add”. This will add your Metahuman to the content drawer in your Unreal project.

Step 2: Exporting Mesh to Blender

  • In Unreal Engine, select your Metahuman and remove the existing hair if needed.
  • Now, export the head mesh: find the mesh in the content browser, right-click, and select “Export”. Choose FBX format for Blender compatibility, and save the file in a convenient location.

Step 3: Creating Hair in Blender

3.1 Importing and Prepping the Mesh

  • Open Blender (I’m using Blender 4.0). In a new scene, delete the default objects.
  • Go to File > Import > FBX, and select the exported head mesh.
  • In the viewport, separate the scalp from the face. This isolation will help in focusing the hair creation on the scalp area.

3.2 Designing the Hair

  • With the scalp selected, enter Sculpt Mode.
  • Use the Add Hair tool to begin placing hair guides on the scalp. These guides will shape the overall hairstyle.
  • Adjust hair length, density, and curvature. For long hairstyles, increase the length and density. For short styles, reduce these parameters.
  • Apply modifiers for specific textures and effects:
    • Clump Modifier: To create grouped strands of hair.
    • Curl Modifier: For curly or wavy hair.
    • Frizz Modifier: Adds a frizzy, unstructured look to the hair.
  • Focus on the hairline and parting. Add more hair guides here to ensure a natural, dense look.

3.3 Exporting Hair Back to Unreal Engine

  • Once you’re satisfied with the hair design, export it as an Alembic (.abc) file. This format preserves the hair details for Unreal Engine.
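If you want to script that export instead of using the File menu, Blender's Python API exposes the same operator. A minimal sketch is below, assuming Blender 4.0 and an output path of your choosing.

```python
# Minimal sketch: export the selected hair object as Alembic from Blender's
# Python console (equivalent to File > Export > Alembic). The output path
# is an assumption -- change it to suit your project.
import bpy

bpy.ops.wm.alembic_export(
    filepath=r"C:\exports\metahuman_hair.abc",
    selected=True,      # export only the selected hair object
    export_hair=True,   # include hair/curves systems in the export
)
```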

Step 4: Finalizing in Unreal Engine

4.1 Importing Hair into Unreal

  • Back in Unreal Engine, import the Alembic file. Navigate to the content browser, right-click, and select ‘Import to /Game’. Locate your Alembic file and import it.

4.2 Adjusting Hair Settings

  • Select the imported hair in the content browser. In the details panel, fine-tune the settings:
    • Interpolation: Adjust for smooth hair transitions.
    • Root and Tip Scale: Control the thickness of the hair at the root and the tip.
    • Color Settings: Customize the hair color and texture to match your character’s style.
  • Enable physics simulation for realistic hair movement.

4.3 Binding Hair to Metahuman

  • To attach the hair to your character, use the ‘Create Binding’ option in Unreal Engine. This step ensures that the hair moves naturally with your character’s animations.

Conclusion

You’ve successfully created and customized hair for your Metahuman character in Unreal Engine. Experiment with different styles and modifiers to enhance your digital characters further. Don’t forget to save your project to preserve your work.

Additional Resources

An Idiots guide to ComfyUI
In this introductory guide, we’ll delve into the default node-based workflow of ComfyUI. This standard workflow is what you see upon opening ComfyUI for the first time, and it offers a fundamental insight into the software’s operation and its core features. The guide is designed to help you create your first image and gain a solid understanding of the functions of different nodes, building your confidence in using the software effectively.

Introduction of ComfyUI

ComfyUI is a node-based user interface specifically designed for generating AI images and animations using Stable Diffusion. Gaining popularity in 2023 as an alternative user interface to Automatic1111, ComfyUI stands out for its flexibility and user-friendly design.

The node-based layout of ComfyUI streamlines the creation of custom AI-generated images and animations. However, for someone new to the system, this node-based approach might initially seem daunting. This is exactly why I have written this guide to slowly introduce you to it. Additionally, its compatibility and free-to-use nature alongside Stable Diffusion further enhance its appeal to those interested in AI-driven creative processes. This tool appears to be particularly favoured by users who prefer a more intuitive and customizable interface for their AI image and animation generation needs.

Table Of Contents

  • System Requirements and Installation
  • ComfyUI Default Layout
  • Understanding the Nodes in ComfyUI
  • Resources for Learning and Inspiration
  • Loading Checkpoints
  • The CLIP Text Encode (Prompt) Nodes
  • KSampler Node in Depth
  • Denoising and Scheduler Nodes
  • VAE Decode Node
  • Saving Your AI Image
  • Preview of the Next Guide

System Requirements and Installation:

  •  NVIDIA Graphics card
  • 12GB of VRAM

For Mac users, please see here. I have made an installation video here: Ultimate Guide to Seamless AI Animations (Even on Low-End PCs!) – YouTube

[Image: ComfyUI default layout]

ComfyUI’s user interface is built around nodes, each serving a distinct function. Upon opening ComfyUI, you’re greeted with a blank workspace. To activate the default interface, which includes a pre-connected set of basic nodes, click the ‘Load Default’ button as shown in the accompanying image. This action will display a network of fundamental nodes, arranged from left to right. For installation of ComfyUI please refer to this video.

What do all the nodes do?

ComfyUI’s versatility is showcased by its wide range of node sequences, each tailored to create specific types of AI images or animations. This flexibility allows users to experiment and find the best node combinations for their creative projects. The ComfyUI community actively contributes to this diversity by constantly creating and sharing new workflows.

The Stable Diffusion Reddit forum is an excellent resource for both beginners and those looking for creative inspiration. In this forum, users engage in sharing ideas, tips, and various workflows related to Stable Diffusion. While the forum isn’t exclusively focused on ComfyUI, it remains a valuable source of information and community support for users interested in all aspects of Stable Diffusion, including those using ComfyUI. This collaborative environment can be especially helpful for learning, troubleshooting, and discovering new techniques in image and animation generation with AI.

Load Checkpoint

The first node on your default layout is the Load Checkpoint node. A checkpoint file is essentially a pre-trained model focusing on a style or genre. The type of images a model can create is determined by the dataset it was trained on; this dataset could focus on general imagery or be specialized in a particular genre or style. Please be aware that many checkpoint files cannot be used for commercial purposes. It’s also worth noting whether a checkpoint file was made for Stable Diffusion 1.5 (an earlier version of Stable Diffusion) or SDXL (the latest version). CivitAI is a great source for downloading checkpoint files: Civitai: The Home of Open-Source Generative AI.

[Image: The Load Checkpoint node in ComfyUI]

The CLIP Text Encode (Prompt)

There are two of these nodes; you will notice that one connects to the KSampler’s positive input and one to its negative. They are simply your positive prompt and negative prompt.

Positive Prompt: The positive prompt guides the AI towards what you want it to draw.

Negative Prompt: The negative prompt specifies what you want the AI to exclude from the image.

 

[Image: CLIP Text Encode, just a fancy way to say positive and negative prompt]

KSampler

[Image: The KSampler node in ComfyUI]

The KSampler gives you control over the image generation process. This node has different options for both input and output to control how it processes the image.

  • Seed: The seed value is the initial random input used in the latent space to start the image generation process. Essentially, it determines the content of the image, with each generated image having a unique seed value. Using the same seed will produce similar results.
  • Steps: Generally, a higher number of steps allows more time for refining the image generation, leading to better quality. However, using too many steps can adversely affect the image.
  • CFG: The CFG value emphasizes adherence to the prompt in the image generation, but excessively high scale settings can detrimentally affect image quality.

  • Sampler Name: Various samplers impact how fast, how well, and how varied the image creation process is. Different samplers offer distinct results in these areas.

Examples of different samplers that can be used in ComfyUI and Automatic1111:

Euler a, Euler, LMS, Heun, DPM2, DPM2 a, DPM++ 2S a, DPM++ 2M, DPM Fast, DPM Adaptive, LMS Karras, DPM2 Karras, DPM2 a Karras, DPM++ 2S a Karras, DPM++ 2M Karras, DDIM, PLMS etc

  • Denoise: The Denoise strength setting, ranging from 0 to 1, determines the similarity or difference between the output image and the input image. A setting of 0 adds no noise, meaning the output will be exactly the same as the input.
  • Scheduler: The Scheduler controls the frequency and intervals at which the sampling process is repeated. Different Schedulers can impact the quality and variety of the generated images in various ways.
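To make these settings concrete, here is how the default graph's KSampler looks in ComfyUI's API-format JSON (the form produced by the Save (API Format) option when dev mode is enabled). This is a minimal sketch; the node IDs are illustrative and will differ in your own workflow.

```python
# Minimal sketch: the default KSampler node as it appears in ComfyUI's
# API-format JSON. Node IDs ("4", "5", "6", "7") are illustrative.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "model": ["4", 0],         # output of the Load Checkpoint node
        "positive": ["6", 0],      # CLIP Text Encode (positive prompt)
        "negative": ["7", 0],      # CLIP Text Encode (negative prompt)
        "latent_image": ["5", 0],  # Empty Latent Image node
        "seed": 123456789,
        "steps": 20,
        "cfg": 8.0,
        "sampler_name": "euler",
        "scheduler": "normal",
        "denoise": 1.0,
    },
}
print(ksampler["inputs"]["sampler_name"])
```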

VAE Decode Node

[Image: VAE Decode node]

There is no need to adjust anything on the VAE Decode node; it automatically converts images from latent space back into pixel space, using the specified Variational Autoencoder (VAE). This process essentially transforms the abstract, encoded representations of images in latent space into visible, regular images in pixel space.

Save Image

This is the last node in the default workflow and where your AI image will appear. In the next guide I will show you how to add video outputs to your node-based layout.

Conclusion

As demonstrated, generating an image with ComfyUI is more straightforward than it might initially appear, and this guide aims to boost your confidence in using the tool. While some workflows may seem complex at first glance, they become much simpler once broken down and explained. In upcoming guides, I’ll delve into additional node workflows, specifically focusing on video-to-video transformations. These will enable us to alter the style of a video using some of ComfyUI’s advanced nodes. Stay tuned for more insights and tutorials. Until next time…

Character Creator Workflow For Fast 3D Animation Production
The world of 3D character design is evolving rapidly, and two software giants, Character Creator and ZBrush, are at the forefront of this revolution. These tools are not just simplifying the character creation process but are also ensuring that characters are ready for animation and production, saving both time and resources.

Character Creator stands out for its ability to quickly generate base models that are ripe for customization. With its intuitive interface and versatile morph sliders, creators can easily shape the basic anatomy of their characters. This software shines in its ability to offer a base that includes rigs and morph targets, essential for both facial and body animation.

[Image: A Character Creator workflow for fast production using base meshes]

Once the basic form is created in Character Creator, ZBrush comes into play. Known for its robust sculpting tools, ZBrush allows artists to add intricate details and a unique personality to their characters. Its brushes are perfect for crafting stylized aesthetics, moving away from realism and embracing a more exaggerated, cartoon-like appearance.

[Image: A 3D model being transferred to ZBrush]

One of the most significant advantages of using Character Creator and ZBrush is their seamless integration. With just a click, characters can be transferred between the two programs without losing any detail. This integration is a game-changer, ensuring that the creative flow isn’t interrupted.

Characters created with these tools are not just visually appealing but are also production-ready. They come with clean topology, rigs, and weights, making them perfect for animation projects. This readiness significantly cuts down the time from concept to production, a crucial factor in fast-paced project environments.

[Image: Easy facial edit tools in Character Creator create a seamless workflow for character production]

For those who use Blender for compositing, the good news is these characters are fully compatible. With the help of a simple plugin, characters can be imported into Blender, retaining all their rigging and morphing qualities. This flexibility opens up avenues for creators who operate in different software environments.

The combination of Character Creator and ZBrush is a testament to how technology is simplifying yet enhancing the art of 3D character creation. By reducing the technical barriers, these tools allow artists to focus more on the creative aspect of character design. As we continue to see advancements in these tools, the future of character creation looks more exciting than ever.

ComfyUI and Animate Diff Evolve Installation Guide

Hello there, Prompt Muse here! In this comprehensive guide, I’ll walk you through the essentials of setting up ComfyUI and Animate Diff Evolve.

1. Introduction

• ComfyUI offers a node-based layout, allowing for a streamlined workspace tailored to your needs.
• Drag and drop features for images and workflows enhance ease of use.
• This tutorial aims to make you feel comfortable with ComfyUI, ensuring you make the most of its unique features.

2. System Requirements

• An NVIDIA RTX graphics card is preferable (it works with AMD cards and Macs; click here).
• Preferred: 12 GB of VRAM for processing videos.
• Allocate 50-80 GB of storage for outputs.

Tip: If your PC doesn’t meet these requirements, consider using Shadow Tech. It’s a remote PC service I personally use, providing an NVIDIA 1080 GPU, ample storage, and 12 GB VRAM for about $40/month.

3. Software Dependencies

• Git: Allows you to pull extensions from GitHub. Download here.
• FFmpeg: Essential for encoding videos. Download here.
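A quick way to confirm both dependencies are installed and on your PATH is sketched below; it simply calls each tool's version flag.

```python
# Minimal sketch: confirm Git and FFmpeg are installed and on PATH before
# proceeding with the ComfyUI install.
import shutil
import subprocess

for tool, flag in (("git", "--version"), ("ffmpeg", "-version")):
    if shutil.which(tool):
        out = subprocess.run([tool, flag], capture_output=True, text=True)
        print(f"{tool}: {out.stdout.splitlines()[0]}")
    else:
        print(f"{tool}: NOT FOUND -- install it first")
```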

4. Installing ComfyUI & ComfyUI Manager

1. Visit the ComfyUI GitHub page.
2. Under the “Installing” section, select the “Direct link to download” for the standalone portable version.
3. Once downloaded, extract the files to your chosen directory.
4. For ComfyUI models:
• Checkpoints: Download from civitai.com and place in the checkpoints folder.
• VAE: Download from Stability AI’s Hugging Face page and place in the vae folder.
5. Install ComfyUI Manager (from its GitHub page) for easy updates and add-on installations.

5. Basic Workflow Overview

• ComfyUI operates like a circuit board, with nodes representing each process.
• Start with the Load Checkpoints node, input your positive and negative prompts, and proceed to the K Sampler.
• The Latent Image Node determines your image dimensions and batch size.
• The VAE Decode node processes the final image.

Pro Tip: Images created in ComfyUI can be dragged and dropped back into the system to load their respective node layouts.


6. Installing Motion Models

1. In ComfyUI Manager, go to Install Models.
2. Type “mm” in the search box, which stands for motion models.
3. Download and install Motion Model version 14, version 15, and version 15 v2. The high and mid variants are also recommended.
4. After downloading, you should see a confirmation message.

7. Installing ControlNet

1. Still in the ComfyUI Manager, search for “control net”.
2. Locate and install the Stable Diffusion 1.5 ControlNet models. You might need to scroll down the list.
3. Install other ControlNets like Line Art and OpenPose as needed, ensuring you have enough storage.

Note: After installing, close all instances of the Manager and ComfyUI. When reopening ComfyUI, it will automatically install the nodes and extensions you’ve selected.

8. Utilizing Pre-Made Templates

1. Visit Kosinkadink’s GitHub page; he offers pre-made templates for Animate Diff Evolve.
2. Download a workflow, then drag and drop it into ComfyUI to load the node layout.
3. If you encounter missing nodes (highlighted in red), simply go to the Manager and click on Install Missing Nodes.

9. Running Workflows

1. Use the Load Checkpoints node, followed by the CLIP settings, Positive Prompt, and Negative Prompt.
2. The KSampler determines sampling, while the AnimateDiff Loader adds motion.
3. Adjust settings such as image dimensions, batch size, and motion model as needed.
4. For advanced users, experiment with nodes like AnimateDiff Uniform Context Options to enhance animations.

10. Adding a LoRA for Enhanced Details

1. Double-click on an empty space in ComfyUI and search for “LoRA”.
2. Select the LoRA Loader and connect it as shown in the workflow.
3. Adjust the strength_model value to reduce the risk of artifacts.
4. Run the prompt and observe the enhanced details in the generated animation.

11. Video to Video Animations Workflow

1. Use Inner Reflections’ guide as a template for video-to-video animations.
2. For video inputs, use the Video Load node.
3. Adjust settings like frame load cap, skip first frames, and select every nth frame to customize the video output.
4. Use Load Checkpoint to select your desired model.
5. Implement ControlNets, like Line Art, to guide the animation style.
6. Experiment with different nodes and settings to achieve unique video outputs.

## **12. Exploring Other Templates**

1. Another great template to explore is **Prompt Scheduling**, which allows dynamic prompt changes over time.
2. This feature can be used to create evolving animations, adding depth to your projects.
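The sketch below shows the general shape of a keyframed prompt schedule: frame numbers mapped to prompts. The exact syntax varies between node packs, so treat this as an illustration of the idea rather than the definitive format.

```python
# Minimal sketch: a keyframed prompt schedule -- frame numbers mapped to
# prompts. Exact syntax varies by node pack; this only illustrates the idea.
schedule = {
    0: "a castle in spring, blooming trees",
    48: "a castle in autumn, falling leaves",
    96: "a castle in winter, heavy snow",
}
# Render to the common '"frame": "prompt"' text form used by scheduling nodes:
print(",\n".join(f'"{frame}": "{prompt}"' for frame, prompt in schedule.items()))
```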

Conclusion

Thank you for following along! With ComfyUI, the possibilities are vast, allowing for creative freedom and intricate animations. A written version of this tutorial is available on the Prompt Muse website. Feel free to reach out on social media @promptMuse for further assistance.

Remember: Art is a journey, and every masterpiece begins with the decision to try. Dive in and explore the world of ComfyUI and Animate Evolve!

DALL·E 3.0: Is the Hype Justified? https://promptmuse.com/dall%c2%b7e-3-0-is-the-hype-justified/ https://promptmuse.com/dall%c2%b7e-3-0-is-the-hype-justified/#respond Thu, 26 Oct 2023 09:50:15 +0000 https://promptmuse.com/?p=3100 In the ever-evolving landscape of AI-driven creative technology, DALL·E 3.0 is the name on everyone’s lips. With its impressive capabilities, DALL·E 3.0 seemed to outperform

In the ever-evolving landscape of AI-driven creative technology, DALL·E 3.0 is the name on everyone’s lips. With its impressive capabilities, DALL·E 3.0 seemed to outperform its predecessors like Midjourney, and the best part? It was free. But was it too good to be true? Let’s delve into my experience with DALL·E 3.0 to find out.

I was one of the early adopters of DALL·E 3.0, and I couldn’t wait to put this new AI image generator to the test. The initial results, I must say, blew me away. What set DALL·E 3.0 apart was its ability to understand not just single words or simple prompts but entire phrases and sentences. It felt like a significant leap forward in AI creativity.

The images it generated were a sight to behold – beautiful, bright, and full of life. It seemed like I could finally do away with the rigid, prompt-based language that had constrained previous AI models. DALL·E 3.0 was a breath of fresh air in the world of creative content generation.

Excitement filled my days as I used DALL·E 3.0 to create a wide array of images for a project I was passionately working on.  It quickly became an indispensable tool in my creative arsenal. However, all good things must come to an end, and in this case, it happened rather abruptly.

One fateful day, I logged onto the platform to create more AI-generated images, only to be met with a disheartening message: “Promptblocked.” I tried again, hoping it was just a minor glitch, but to my dismay, the same message appeared: “Promptblocked.”

The DALL·E 3.0 “prompt blocked” message.

It turned out that DALL·E 3.0 had been nerfed. The issue lay in the specific prompt I had used – a celebrity’s name, Lana Del Rey, to be precise. Previously, DALL·E 3.0 had excelled in creating fantastic reference images for my project, but suddenly, it refused to cooperate. The reliable wellspring of creative inspiration had run dry.

A DALL·E 3.0 Lana Del Rey reference image, generated before the model was nerfed.

As I reflected on my journey with DALL·E 3.0, I couldn’t help but acknowledge the rollercoaster of emotions it had taken me on. It had been an incredible tool, enabling boundless creativity with its brilliant images. But, it came with its limitations, and one had to tread carefully to avoid promptblocks.

In the end, my experience with DALL·E 3.0 was indeed remarkable, a brief and brilliant journey through the realm of AI-generated artistry. Though its limitations became evident, the potential of this technology remains undeniably promising. DALL·E 3.0 may have had its ups and downs, but it undeniably showcased the incredible power of AI in the world of creative content generation.

If you wish to try it out for yourself, a link is here: https://www.bing.com/create

It is still a fantastic tool, but not as good as it once was.

PromptMuse

How to Animate Game Characters and Import Them into Blender and Unreal Engine 5 (Mon, 25 Sep 2023) https://promptmuse.com/how-to-animate-game-characters-and-import-them-into-blender-and-unreal-engine-5/

In this tutorial, I will guide you through the process of animating your game characters and importing them into Blender and Unreal Engine 5. This tutorial is designed for those who don’t have a budget for expensive animation software or motion capture suits. The full tutorial video of this process can be found on my YouTube channel here.

We will be using a software called “Cascadeur,” which has been around for about 10 years and offers a free version with some powerful animation tools. While this method is not a replacement for professional animation software or mocap, it’s a viable alternative if you’re on a tight budget.

Note: Before you start, make sure you have Cascadeur and AccuRig installed on your computer. You can download Cascadeur from the official website, and AccuRig is a free auto-rigging tool from Reallusion that complements Cascadeur.

<iframe width="560" height="315" src="https://www.youtube.com/embed/ScQTV2Xb--0?si=_4-LUd5vW3w7Nz64" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Let’s get started!

Part 1: Rigging Your Character in AccuRig

  1. Open AccuRig and click on “Choose File” to select your 3D character’s FBX file. You can use a sample character from Cascadeur, Mixamo, Sketchfab, or your own custom character.
  2. After loading your character’s mesh, click on “Rig Body” to generate the joint structure for your character’s skeleton.
  3. AccuRig will display circles where joints should be placed. Ensure symmetry is checked so that you only need to work on one side of the character.
  4. Position the joint guides according to your character’s anatomy, following the on-screen guides for reference.
  5. Use the tools in the bottom left corner to rotate and move around your character for precise joint placement.
  6. Repeat the process for other body parts, such as arms and legs, ensuring correct joint placement.
  7. Use the “Preview Motion” window to check the animation on various body parts, including fingers.
  8. Ensure your character is in a neutral pose (A-pose or T-pose) before exporting.
  9. Click “Upload to AccuRig” and then “Export” > “Export FBX.” Set the target application to “Maya” and check “Embed Texture.” Click “Export” to save the rig.
  10. Export another FBX file of your character’s base mesh but set the target application to “Blender” for later use.
The AccuRig auto-rigging tool by Reallusion

Part 2: Creating a Basic Idle Animation in Cascadeur

  1. Open Cascadeur and start a new scene. Import the FBX file with Maya settings that you exported from Accurig.
  2. Cascadeur will ask if you want to enter “Rig Mode.” Click “Yes.”
  3. In the “Rig Mode Helper” dialog, click “Yes” and then “OK” on the next dialog.
  4. Click “Add Rig Elements” at the bottom of the “Quick Rigging Tool” dialog.
  5. Rotate your character by holding ALT and the left mouse button to navigate.
  6. Select the “Auto Pose” tool to enable automatic control point positioning as you move your character.
  7. Position your character into an initial pose for your idle animation by moving and rotating control points. Use ‘W’ to move and ‘E’ to rotate.
  8. Add a keyframe at frame 10 by clicking the key icon.
  9. Change the hand pose on frame 10 to create a hand open/close animation.
  10. Duplicate the first frame to frame 20 and mirror the pose for variety.
  11. Duplicate the second keyframe to frame 35 and mirror it to frame 45.
  12. Extend the timeline to add more frames for smoother animation (e.g., 200 frames in total).
  13. Copy the first frame to frame 145 and the second keyframe to frame 110.
  14. Apply bezier curve interpolation for smoother animation between keyframes.
  15. Review and refine your animation by adding subtle movements, such as chest and shoulder motion.
  16. Create a seamless loop by ensuring the first and last frames are identical and adjust frame numbers accordingly.

Part 3: Exporting the Animation to Blender

  1. Export the animation from Cascadeur to Blender by going to “File” > “Export” > “FBX.” Name the file and click “Save.”
  2. In Blender, import the animation by going to “File” > “Import” > “FBX.” Use the default settings and click “Import FBX.”
  3. Delete any existing objects in the Blender scene and select the imported Armature.
  4. Adjust the Armature’s rotation to face the front and place it in the scene.
  5. Create an animation track for the imported animation and rename it.
  6. Copy the animation keyframes from the imported Armature and paste them onto your character’s Armature.
  7. Delete the imported Armature to keep your scene clean.
  8. Create an animation loop for your idle animation in Blender using the NLA (Non-Linear Animation) Editor; a scripted version of the import-and-loop steps is sketched below.
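
For readers comfortable with scripting, here is a minimal sketch of the import-and-loop steps using Blender’s Python API. The file path is a placeholder, and it assumes the imported armature ends up as the selected object:

```python
import bpy

# Import the FBX exported from Cascadeur (path is a placeholder).
bpy.ops.import_scene.fbx(filepath="/path/to/idle_animation.fbx")

# Assume the freshly imported armature is the selected object.
armature = bpy.context.selected_objects[0]
action = armature.animation_data.action  # the imported idle action

# Push the action onto an NLA track and repeat it to form a loop.
track = armature.animation_data.nla_tracks.new()
start = int(action.frame_range[0])
strip = track.strips.new(action.name, start, action)
strip.repeat = 4  # play the idle cycle four times back to back
```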

Part 4: Exporting the Animation to Unreal Engine 5

  1. In Unreal Engine 5, create a new project and organize your project folder.
  2. Import your character’s 3D mesh into Unreal Engine by right-clicking in the “Content” folder, selecting “Import,” and choosing your FBX file. Ensure it’s assigned to the correct skeleton.
  3. Add a Level Sequence to your project by right-clicking in the “Content” folder and selecting “Level Sequence.”
  4. Drag your character’s skeletal mesh into the Level Sequence.
  5. Add your idle animation to the Level Sequence by clicking the plus icon and selecting the animation.
  6. Adjust the timeline as needed and press the spacebar to preview your animation.
  7. Extend the timeline and blend your idle and walk animations for a seamless transition.

Part 5: Adding Free Mocap Data

  1. Visit the ActorCore website and explore the free motion resources.
  2. Download free motion data compatible with Cascadeur.
  3. Import the downloaded motion data into Cascadeur, and apply it to your character as needed.
  4. Refine and customize the imported motion data to suit your character and animation needs.

This tutorial should help you get started with animating and importing your game characters into Blender and Unreal Engine 5 using Cascadeur. Feel free to explore further features and animation possibilities in Cascadeur to enhance your character animations.

Remember, practice makes perfect, and with time, you’ll be creating stunning animations for your game characters. Enjoy animating!

I Turned Myself Into A 3D Game Character – Tutorial (Thu, 17 Aug 2023) https://promptmuse.com/create-3d-characters-fast/

Introduction

In this tutorial, I will show you how to use the Headshot 2 plug-in for Character Creator to create a 3D character from a scanned head mesh. This new plugin from Reallusion uses AI technology to turn any mesh into a retopologized 3D character, with UVs, a rigged mesh, and blend shapes for facial animation. There is a full-length video of the tutorial here.

What you will need

Polycam (a LiDAR head scan captured with an iPhone)

Step 1: Import the head mesh into Character Creator

  1. Open Character Creator software.
  2. Go to File > Import > .obj.
  3. Select the head .obj mesh file that you want to import.
Importing an .obj mesh into Character Creator

Step 2: Headshot 2 Plugin 

  1. Click on the Headshot 2 plug-in in the top toolbar. You need to install it beforehand from the Reallusion website here.
  2. The Headshot 2 dialog box will open.
  3. Click on the Align Points button.
  4. The starter pins will be automatically placed on the head mesh.
  5. Move the pins so that they are aligned with the corresponding points on the Character Creator mesh. Delete a pin by Ctrl + clicking it.
Headshot 2 plugin: aligning the points

Step 3: Generate the head mesh

  1. Click on the Head Gen button from the horizontal toolbar.
  2. A preview of the generated head mesh will be displayed. You can select the area of the mesh you wish to be generated onto the CC3+ model.
  3. If you are happy with the preview, click on the Generate button, otherwise you can go back and adjust your points.

Step 4: Refine the head mesh

  1. Use the brushes in the panel to refine the head mesh.
  2. You can smooth the mesh, move the mesh, or project specific areas.
Refining the head mesh with brushes

Step 5: Attach the head mesh to a body

  1. Click on the Attach to Body button in the Headshot 2 dialog box.
  2. Select the body type that you want to attach the head mesh to.
  3. Click on the Attach button.

Step 6: Add skin textures and materials

  1. Open the contents dialog.

  2. Use the presets in the SkinGen plugin to add realism to the skin texture, such as pores, muscle definition, and freckles.

  3. You can also add makeup and decals to the character.

  4. In the Modify panel, go to the Morphs options and adjust the depth of the eyeballs and height of the teeth using the sliders.

The SkinGen plugin offers realistic skin presets

Step 7: Add hair

  1. I used the Hair Builder plugin and additional hair assets from the Reallusion marketplace.

Conclusion

This is just a basic tutorial on how to use the Headshot 2 plug-in for Character Creator. There are many more things that you can do with this plug-in, so I encourage you to experiment and explore.

I hope this tutorial is helpful. Please let me know if you have any questions.

Meet Pass AI Detection: Your Free Tool to Bypass AI Content Detectors (Tue, 18 Jul 2023) https://promptmuse.com/meet-pass-ai-detection-your-free-tool-to-bypass-ai-content-detectors/

A FREE Tool for Content Creators

In the rapidly advancing world of artificial intelligence (AI), content creators are continually seeking innovative ways to ensure their work bypasses AI detectors. Enter Pass AI Detection, a ground-breaking tool that not only refines your content for human readers but also ensures it’s tailored to bypass AI detectors increasingly utilised by search engines and other platforms.

What is Pass AI Detection?

Pass AI Detection is a sophisticated AI detection tool that analyses your text and modifies it so that it reads naturally to human readers while bypassing AI detectors. It’s a game-changer for content creators, enabling them to create engaging content that meets the stringent requirements of AI detectors.

AI content detectors are utilised in a range of applications, from search engines to academic integrity tools like Turnitin. These AI detectors scrutinise the text to determine its relevance, quality, and originality. As a result, content creators need to ensure their content is tailored to bypass these AI content detection systems to improve visibility and reach.

Pass AI Detection leverages cutting-edge AI techniques to evaluate and modify your content. The process begins with the AI content detector, which scans your text and identifies areas for improvement. The AI detector looks at your distribution of keywords and topics and provides a suggested distribution that would optimise your content for AI content detection.

A Balanced Approach to Content Creation

Pass AI Detection centres on balance. It understands the negatives of keyword stuffing and the necessity for reader-friendly text. The tool doesn’t increase keyword frequency, but promotes balanced keyword distribution, crafting content that is both AI and reader-friendly.

As AI detection methodologies progress, Pass AI Detection adapts. The tool is continuously refining its algorithms to ensure your content remains optimised for the most recent AI detection techniques. This commitment to staying ahead of the curve positions Pass AI Detection as a leader in the AI content detection market.

Content creation is a critical component of successful marketing strategies. Pass AI Detection helps generate content that not only attracts your audience but also performs well with AI detectors, achieving an ideal balance between human readability and AI detectability. Experience cost-effective, innovative content creation with Pass AI Detection’s unique BYOK model. Join the revolution today.

Warp Fusion: A Comprehensive Step-by-Step Tutorial (Fri, 23 Jun 2023) https://promptmuse.com/warp-fusion-a-comprehensive-step-by-step-tutorial/

Warp Fusion is an innovative AI animation tool that allows you to create stunning, eye-catching videos. This tool has been gaining popularity for its ability to create unique animations with a consistent theme or style. This tutorial will guide you through the process of using Warp Fusion, focusing on using a remote GPU, which is a preferred method for many as it allows for running multiple GPUs simultaneously, freeing up your PC for other projects.

Getting Started with Warp Fusion

Prerequisites

Before we dive into the tutorial, there are a few prerequisites you need to have:

  • Google Colab Pro or Google Colab Pro Plus
  • Access to Alex’s Patreon page where you can get the Warp Fusion notebook
  • A model and a LoRA from Civitai

Setting Up Warp Fusion

The first step in using Warp Fusion is setting up the environment. This involves downloading the Warp Fusion notebook from Alex’s Patreon page and loading it into your Google Colab. Once you have the notebook ready, you need to connect it to a hosted runtime if you’re using Google Colab Pro. This ensures that you’re using your compute unit and the GPU.

Configuring Warp Fusion

Basic Settings

Once your environment is set up, you can start configuring Warp Fusion. The first thing you need to do is specify the name of the folder where your output files will be stored in your Google Drive. You also need to set the width and height of your output video to match your input video. A resolution of 720 by 1280 is a good starting point, but you can adjust this to suit your needs.

Video Input Settings

Next, you need to specify the path to your input video. This is the video that you want to animate using Warp Fusion. You can save your input video on your Google Drive for easy access. If your video is long and you want to reduce the diffusion time, you can set the ‘extract nth frame’ setting to two, which means Warp Fusion will diffuse every other frame. A sketch of these settings is below.
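
As a rough illustration, the video-input cell boils down to a handful of values like the ones below. The variable names are indicative only (they vary slightly between notebook versions), and the paths are placeholders:

```python
# Illustrative WarpFusion video-input settings.
video_init_path = "/content/drive/MyDrive/warp/input.mp4"  # source video on Google Drive

width_height = [720, 1280]  # output resolution; match your input video's aspect ratio
extract_nth_frame = 2       # diffuse every other frame to roughly halve render time
```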

Video Masking

Video masking is a useful feature that can help make the background of your animation consistent and stable. You can use your input video as the mask source and extract the background mask. If you want to add another video into the background, you can specify the path to that video in the ‘mask video path’ setting.

Defining SD and K Functions

The next step is to define the SD and K functions. You need to change the ‘load to’ setting to GPU and specify the path to the model you want to use. You can get models from Civitai or Hugging Face. You also need to specify the directory where your ControlNet models will be stored on your Google Drive.

Running Warp Fusion

Once you have all your settings configured, you can start running Warp Fusion. This involves running all the cells in the notebook up to the GUI section. This process can take about 5 to 10 minutes, and you will get a green tick for every cell that has successfully completed.

Customizing Your Animation

Using Prompts and Loras

Prompts and LoRAs are powerful features that can help you customize your animation. Prompts are instructions that guide the AI in creating the animation, while LoRAs are lightweight add-on models that inject a particular style, character, or detail. You can specify your prompts and LoRAs in the GUI section of the notebook. You can also adjust the strength of the stylization and the prompt guidance to achieve the desired effect. A sketch of a frame-keyed prompt is below.
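
For orientation, prompts in these notebooks are typically keyed by frame number, so one dictionary can restyle the clip over time. The field name and weighting syntax vary between notebook versions, so treat this as an assumption rather than the definitive format:

```python
# Hypothetical frame-keyed prompt schedule: frame 0 onwards uses the first
# entry, and frame 100 onwards switches to the second.
text_prompts = {
    0:   ["a neon cyberpunk street, cinematic lighting"],
    100: ["the same street at dawn, soft pastel colours"],
}
```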

Using Control Nets

Control nets are models that you can use to control the animation. You can select the control net models you want to use and adjust their weights to influence their impact on the animation.

Using the Warp Tab

The warp tab allows you to adjust the flow blend of your animation. This is the blending of your input video with the next frame of stylization. If you find that your animation is over-stylized, you can reduce the flow blend to achieve a more balanced effect.

Using the Mask Tab

The mask tab allows you to use a background mask for your animation. You can change the color or use an image or the original video as the background. This can help to create a more consistent and stable background for your animation.

Rendering Your Animation

Once you have customized your animation, you can start rendering it. This involves running the ‘diffuse’ cell in the notebook. As your animation progresses, you will see a preview frame that allows you to check for any errors. If there are errors or things you want to fix, you can stop the diffusion and adjust your settings.

Creating a Video from Your Animation

After your animation has been rendered, you can create a video from it by running the ‘Create Video’ cell. This will create a video from the frames that were diffused in the previous step. The video and all the frames will be saved to your Google Drive.

Post-Production

After you have created your video, you can bring the frames into a post-production software like DaVinci Resolve or After Effects for further editing. This can involve reducing the flickering of the animation or exporting the frames into a .mov file.

Upscaling Your Video

The final step in the process is upscaling your video. This can be done using a service like Topaz Labs or Pixel. These services use AI to increase the size of the video and make it sharper. Pixel is a browser-based service that is easy to use and offers a pay-as-you-go pricing model, making it a cost-effective alternative to Topaz Labs.

Warp Fusion is a powerful tool that allows you to create unique and eye-catching animations. With its wide range of features and customization options, you can create animations that truly stand out. Whether you’re a hobbyist or a professional, Warp Fusion offers a fun and innovative way to create animations.

Remember to share your creations on social media and tag Prompt Muse. We love to see what you create with Warp Fusion. Happy prompting!

Revolutionise Your Content Creation with UseChatGPT.AI – The FREE A.I content buddy (Fri, 09 Jun 2023) https://promptmuse.com/revolutionise-your-content-creation-with-usechatgpt-ai-the-free-a-i-content-buddy/

If you are looking for a reliable AI tool for your content generation needs, your search may be over. UseChatGPT.Ai is a cutting-edge software that allows users to integrate their favorite AI models into their workflow effortlessly. Whether you are a content creator or a marketer, this tool can help you streamline your work process and improve productivity. In this article, we will review UseChatGPT.Ai in depth and explore its features, benefits, and drawbacks.

How to Use UseChatGPT.Ai

UseChatGPT.Ai is a browser extension that is compatible with Chrome, Firefox, and Safari. The application can be downloaded from the official website free of charge. Once installed, the extension allows you to use AI models such as GPT-4 (via the OpenAI API), Bing, and Bard on any text-based website or PDF with just one click. Here’s how to get started with UseChatGPT.Ai:

  1. Go to the official website and download the extension for your browser.
  2. Install the extension by following the instructions on the screen.
  3. Once installed, click on the UseChatGPT.Ai icon in your browser toolbar.
  4. Select your preferred AI model from the dropdown menu.
  5. Copy and paste the text you wish to analyze into the text box.
  6. Click on “Generate” and wait for the AI model to do its magic.
  7. Review the generated text and use it as you wish.

Benefits of UseChatGPT.Ai

UseChatGPT.Ai offers a variety of benefits to its users, including:

Improved Productivity

With UseChatGPT.Ai, you can generate high-quality content in a matter of seconds, saving time and effort. The AI models used by the tool are capable of analyzing large amounts of data and providing accurate results, allowing you to focus on other tasks.

Customization

UseChatGPT.Ai allows you to choose from multiple AI models and customize your prompts as per your requirements. This feature ensures that you can generate content that is tailored to your specific needs.

Cost-Effective

UseChatGPT.Ai is completely free to use, making it a cost-effective alternative to other AI tools that may charge a significant amount of money.

User-Friendly Interface

UseChatGPT.Ai has a simple and intuitive interface that makes it easy for even non-technical users to navigate the tool. The interface is well-designed and organized, ensuring a seamless user experience.

Drawbacks of UseChatGPT.Ai

Like any tool, UseChatGPT.Ai has its limitations. Here are a few drawbacks to keep in mind:

Accuracy Issues

While the AI models used by UseChatGPT.Ai are highly advanced, they are not foolproof. There may be instances where the generated content is inaccurate or irrelevant, requiring manual editing.

Dependency on Internet Connectivity

UseChatGPT.Ai requires a stable internet connection to function correctly, which may be an issue for users with limited access to the internet.

Final Thoughts

Overall, UseChatGPT.Ai is a powerful AI tool that can significantly improve your productivity and content generation capabilities. The tool’s user-friendly interface, customization options, and cost-effectiveness make it an excellent choice for content creators and marketers seeking to streamline their workflow. However, like any tool, UseChatGPT.Ai may not be perfect, and users should be aware of its limitations before using it. We recommend trying out UseChatGPT.Ai for yourself and experiencing its benefits firsthand.

Create GPT4 articles with automated internal linking (Thu, 25 May 2023) https://promptmuse.com/create-gpt4-articles-with-automated-internal-linking/

Alex Player is a content creator who has been producing high-quality content for years. He has been able to accomplish this by using various tools and techniques that help him streamline his writing and research processes. In this article, we will explore Alex’s use of Chatbase, a powerful tool that can help anyone create awesome content in a matter of minutes.

LINKS:

ChatBase LifeTime Deal

ChatBase Monthly Subscription

Prompts:

Base Prompt

you are a writer. you write content using the resources, as I request and always add links to references. your articles are well written, factual, and contain subheadings every 1-3 paragraphs. you can ONLY mention products from the resources I provided you. you article starts with an introduction and ends with a conclusion. Do not write ANYTHING except the requested article.

Main Prompt (in chat)

Instructions:  write an 800-word article 

Topic: Creating a Modern Nursery: Top Furniture and Decor Picks from Nubie 

Target Audience: Parents

Internal Linking: Always use hyperlinks for as many relevant links within the site as possible. Example: We have <a href="https://www.thechildrensfurniturecompany.com/beds">Beds</a> for all ages

Writing Style: Informative, broken up with relevant subheadings | bold (with <strong> tags) hyperlinked words.

URLS: Do not add full URLs, it needs to be wrapped in <a href> tags.

Title: clickbaity / interesting

User Intent: deciding to make a purchase 

Language: British English

Subheadings & titles: Wrapped in relevant H tags eg <h1>title</h1>

Paragraph Length: Random between 200-600 words

What is Chatbase?

Chatbase is an application that is designed to help users create chatbots for various purposes. However, Alex has found a unique use for the application that he believes can help anyone who is involved in content creation. With Chatbase, you can analyze all the content on a website and use it to train an AI model to create high-quality content. The tool uses OpenAI’s GPT-3 model to generate content that is both engaging and relevant to your topic.

How to Use Chatbase for Content Creation

Alex has developed a systematic approach to using Chatbase for content creation. Let’s take a look at his method step-by-step:

Step 1: Purchase Chatbase
The first step is to purchase Chatbase and get an OpenAI API key. You can buy Chatbase for $29 from AppSumo but you will need to purchase an OpenAI API key which costs $159. The key is essential as it allows Chatbase to access OpenAI’s GPT-3 model, which is an integral part of the application.

Step 2: Analyze the Website
Once you have installed Chatbase and input your OpenAI API key, you can start analyzing the website you want to create content for. This is an important step as it allows Chatbase to learn all the content on the website. To analyze the website, simply input the URL and let Chatbase crawl the website. The process can take a while depending on the size of the site. The tool can crawl through up to six million characters of content. Alex advises that you make sure you have selected the correct website and region you would like to extract content from.

Step 3: Train the Model
Once the website has been fully analyzed, you can proceed to train the AI model. During this process, you can choose what type of data sources you want to teach the AI. For example, you can choose to teach the AI only specific pages, products, or searches. The Chatbot API can also hyperlink into pages found on the website, allowing it to generate content that is precise and specific to the user’s query.

Step 4: Customize the Content
After creating the Chatbot API, you can start generating content. The AI model produces the content quickly, but as Alex notes, it’s not always perfect. You can further customize the text generated by removing any parts you don’t like. The chatbot dynamically learns from your changes, so after making selections, the chatbot may produce better results the following time you use it.

Step 5: Use the Content for Articles, Blogs, Ads, Social Media Campaigns
Once you have fine-tuned the content to your preferences, you can export the content for use on your blog, social media channels, ads, or even content for a book. The Chatbase tool does not limit the user to specific content formats. Instead, you are free to use the generated content as you see fit. You can even store multiple versions of generated content until you decide which version to use, meaning more value for a single purchase.

Alex’s Tips and Tricks for Chatbase Content Creation

1. Test the AI
Before exporting the AI-generated content, Alex suggests testing the chatbot API’s accuracy. Perform some queries related to the keywords and phrases that are relevant to your brand, and see how well the Chatbot API responds. If any responses are inaccurate, make the necessary changes to enhance accuracy.

2. Use the Right Temperatures

When using Chatbase with GPT-3.5 or GPT-4, Alex advises that the temperature value strongly influences the output. When creating articles, use a low temperature such as 0 to keep the content factual and avoid false statements. A minimal example is sketched below.
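
To make the temperature advice concrete, here is a minimal sketch using the OpenAI Python library as it existed when this was written. The model, key, and prompts are placeholders, not Chatbase’s internal implementation:

```python
import openai  # the 0.x-era OpenAI library

openai.api_key = "sk-..."  # placeholder API key

# temperature=0 keeps the output as deterministic and factual as possible,
# which suits article generation; higher values add creative variation.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": "You are a writer. Only mention products from the resources provided."},
        {"role": "user", "content": "Write an 800-word article on creating a modern nursery."},
    ],
)
print(response["choices"][0]["message"]["content"])
```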

Chatbase is a powerful tool that can help anyone who is involved in content creation. With Alex’s systematic approach, you can use the tool to analyze any website and generate high-quality content in just a matter of minutes. If you’re currently working on a content creation project and want to streamline your writing process, consider investing in Chatbase. This application will not only save you time but also produce content that is both engaging and relevant to your topic.

FAQ

1. What is Chatbase and how can it be used as a chatbot?
Answer: Chatbase is an application that can memorize and learn all content on a website to be used as a chatbot.
2. Can Chatbase be used in a different way?
Answer: Yes, it can be used in a different way, which will be explained in this episode.
3. How much does Chatbase cost?
Answer: Chatbase is available on AppSumo for $29.
4. Do you need an Open AI API key for Chatbase to work?
Answer: Yes, an Open AI API key is required to use Chatbase, and it costs $159.
5. How is Chatbase used in the demo?
Answer: Alex Player demonstrates how to build a chatbot with Chatbase in this episode.
6. Can you choose data sources to teach the chatbot using Chatbase?
Answer: Yes, you can choose data sources to teach the chatbot using Chatbase.
7. What is the maximum amount of characters you can use on the current tier of Chatbase?
Answer: The maximum amount of characters on the current tier of Chatbase is 6 million.
8. Can you remove things you don’t want with Chatbase?
Answer: Yes, you can remove things you don’t want with Chatbase.
9. Can Chatbase read PDFs or certain URLs?
Answer: Yes, Chatbase can read PDFs or certain URLs.
10. What kind of topics can Alex Player generate with the help of Chatbase?
Answer: Alex Player can generate article/topic ideas for a brand’s website with the help of Chatbase.

Sagans: The Anonymous AI Collective Taking Over the Music Video World (Wed, 17 May 2023) https://promptmuse.com/sagans-the-anonymous-ai-collective-taking-over-the-music-video-world/

On this episode of Prompt Muse, we explore the significant role of AI technology in the music industry. We discuss the concept of identity for an artist as AI-generated vocals become increasingly popular, and how it can be a great ally for independent music artists in creating their own videos and music. The podcast features guests from Sagans, an anonymous AI collective that has been producing music videos for popular artists since 2022. We discuss the challenges of keeping up with AI advancements and their impact on creativity, and how it can be used as a tool for faster problem-solving and dream boosting. Tune in to discover the many possibilities for using AI technology to bring ideas to life in the music industry.

Artificial Intelligence (AI) has rapidly become one of the most versatile and powerful technological tools in recent years, providing users with a wealth of opportunities to explore, create, and express themselves in new and exciting ways.

In the world of music, AI has revolutionized the way musicians create, record, and market their art. From optimizing vocal tracks to generating entirely new compositions, AI is making music more accessible, dynamic, and personalized than ever before.
In this blog post, we will explore the many ways in which AI is transforming the world of music and the exciting possibilities it holds for the future.

The Rise of AI Vocals

One of the most fascinating developments in the music industry has been the rise of AI vocals, which has enabled musicians to create realistic vocal tracks without requiring the services of a professional singer. With AI vocals, musicians can fine-tune the pitch, timbre, and other vocal qualities to suit their creative vision, ensuring the final track sounds exactly as they imagined it.

AI vocals are also useful for people who might not have the self-confidence to sing themselves or who lack the resources to hire a professional singer. By providing a flexible and affordable solution, AI vocals allow artists to experiment with different styles, sounds, and arrangements without breaking the bank.

Collaborating with AI

The ability to collaborate with AI has also opened up new creative avenues for musicians, allowing them to create brand new vocals by combining different AI-generated voices. This makes it possible to create choirs without requiring a large number of singers, providing musicians with greater control over the final product.
In addition, AI can help improve the quality of music videos that independent artists produce, enabling them to create high-quality visuals without the need for expensive equipment or professional crews. This allows musicians to maintain their creative vision while still producing music videos that are visually stunning and engaging for their fans.

Learning from Others

The podcast team at Prompt Muse often discusses their creative process, which involves brainstorming ideas together for days or weeks, testing concepts, and then starting production in a relaxed and casual way. They also rely on hard work and spend hours researching information to learn new things, including the latest AI techniques.

To stay ahead of the curve, they often check Reddit for 10 minutes a day to keep up with the latest technology and advancements. Although some may find AI technology daunting, the team at Prompt Muse believes in adapting their workflow to new technology as it evolves. They remain open-minded and learn by experimenting and transforming assets to achieve their objectives quickly.

Misconceptions About AI

There are many misconceptions about AI, particularly its ability to create videos and music easily and independently. While AI can be used as a tool for creating quick visualizers, human input is still necessary for creating a good story and continuity. Music videos are seen as short films with a soundtrack and require a lot of time and effort to produce.

By suggesting that AI can replace the human touch, creators often find themselves feeling frustrated. Dismissing their work as “AI video” or “AI music” doesn’t recognize the human input and effort involved in the creative process.

AI-assisted Music Videos

However, AI can be a great ally for independent music artists in producing their own videos and music. In particular, Runway.ai is a popular tool used by the podcast team to edit backgrounds that are then added to the video edit on Adobe After Effects. While it is important to present the reality of what happened on the day of the shoot, it is equally important to bring the artist’s creativity and vision to life.
AI-generated music still requires the artist to record the song from start to finish – AI only changes the tone of their voice to match someone else – but previous tracks can be used to create a new track. Working in tandem with AI also allows the artist to produce something of which they are proud.

The Future of AI in Music

As AI technology continues to evolve, the future of music promises to be more exciting than ever before. With AI, artists will be able to explore new creative frontiers while also reaching new and diverse audiences around the world.

AI-generated music may not only be used by independent artists, but also by established stars to collaborate with other artists and explore new musical styles. This could help to expand the boundaries of music as we know it while also creating new and unique sounds that have never been heard before.

Innovation is the key to the future of music and Prompt Muse is committed to bringing our readers the latest developments in AI and music. The podcast and blog team encourages their audience to embrace the power of AI as a creative tool to develop their own unique vision.
As AI technology continues to evolve, it’s exciting to think about what the future will hold for music. With AI, musicians will be able to create more exciting, innovative, and personalized music than ever before, empowering them to take their creativity to new heights and explore new possibilities in the world of music.

FAQ

1. What are some benefits of AI vocals for artists and producers?
– AI vocals can be useful for people who are not self-confident enough to sing, and they can be merged to create brand new vocals and create choirs without needing many people.
2. How can AI assist independent music artists in creating high-quality music videos?
– AI can give power to music artists to create videos and music on their own, faster than before, and it can be a great ally for independent music artists in producing their own videos and music.
3. How do the guests on the podcast approach video production and what tools do they use?
– They start with walking in a Japanese environment, then edit the video to add a background frame, play with camera movements, and transform assets; they use Warp Fusion as their primary tool and Deforum for making scenes more interesting, and erased backgrounds are edited in Runway.
4. Are there any misconceptions about AI and its ability to create videos and music?
– Yes, some people believe that AI can create videos and music with just one sentence or prompt, but human input is still necessary for creating a good story and continuity.
5. How do independent music artists feel about the use of AI in their work?
– There is frustration among creators when people dismiss their work as “AI video” or “AI music” without recognizing the human input and effort involved.
6. How has AI technology evolved and improved over time?
– When the speaker started with AI technology, there were limited resources available, but now tools like DALL·E and Warp Fusion are available and powerful, allowing for greater creativity and faster problem-solving.
7. What tools and websites are available to bring creative ideas to life?
– There are many websites and tools available to create 3D visuals and animate drawings, and Runway can be used to achieve dream boosting and generating unique ideas.
8. What is Sagans, and what have they accomplished through their work with AI music videos?
– Sagans is an anonymous AI collective that has been producing music videos for popular artists since 2022, and in just one year they have produced videos for Linkin Park, Lorn’s “Entropy”, and Die Antwoord’s “Age of Illusion”.
9. How important is it to stay up to date with AI advancements as a creative?
– It is important to be aware of new technology for future use, but it is not necessary to know everything, and it is important to check casually without feeling overwhelmed.
10. How can creativity be enhanced through setting limits and restrictions?
– James Blake created a song by limiting himself to only four notes, and the collective’s style and technique were developed mainly through experimentation and prompting.

Preserve Your Privacy: Take Action and Change ChatGPT Settings to Stop OpenAI from Training on Your Data (Mon, 08 May 2023) https://promptmuse.com/change-chatgpt-settings-to-stop-openai-from-training-on-your-data/

Introduction: The Importance of Privacy in AI Conversations

As OpenAI’s ChatGPT gained popularity as a powerful language model, concerns arose regarding the lack of privacy settings for users. Initially, the AI product did not provide options for users to safeguard their data, and OpenAI utilized data from conversations to train its language model further. With privacy watchdogs and regulators demanding better data protection, OpenAI introduced a new privacy setting. This guide will walk you through the process of opting out of ChatGPT data sharing and explain the implications of doing so.

Step-by-Step Guide to Opting Out of ChatGPT Data Sharing

To stop ChatGPT from using your private data, follow these steps:

  1. Log into your ChatGPT account using a browser.
  2. Click on your account icon located in the bottom left corner.


3. Select “Settings.”


4. Find the “Data Controls” section and click on the “Show menu” next to it.

5. Toggle off “Chat History & Training.”


Note that OpenAI may have tried to obscure this privacy-enhancing feature. Within this menu, you can also export your ChatGPT data or delete your account if desired.

What Happens to Your ChatGPT Chat History After Turning Off Data Collection?

When you disable OpenAI’s ability to collect and use your data for training ChatGPT, your chats will only be stored for 30 days. After that period, your conversations will be permanently deleted. OpenAI states that while they will retain your data for 30 days, it will only be reviewed to monitor for abuse and not used for training ChatGPT.

In the future, OpenAI plans to launch a ChatGPT Business account that will allow users to keep their chat history even when disabling data collection.

What Happens to Your Existing ChatGPT Data?

Unfortunately, you cannot delete the data you previously shared with ChatGPT before the new privacy settings were implemented. However, you can opt to start sharing data again by re-enabling “Chat History & Training” within the settings.

OpenAI claims to use the shared data to improve their models, making them more helpful for users. They state that they do not use the data for advertising, selling services, or creating user profiles.

Future Improvements in ChatGPT Privacy Management

While the current privacy options for ChatGPT are limited, it is essential to keep an eye on potential improvements in the future. OpenAI may develop better ways to manage user privacy, ensuring a more secure and user-friendly experience.

In conclusion, the introduction of privacy settings in ChatGPT is a step forward in data protection. By following this guide, you can easily opt out of data sharing and safeguard your private data. However, there is still room for improvement in privacy management, and users should remain vigilant regarding their data security.

A New Era of AI Governance: The White House Leads the Way in Promoting AI Ethics and Inclusivity (Mon, 08 May 2023) https://promptmuse.com/a-new-era-of-ai-governance-the-white-house-leads-the-way-in-promoting-ai-ethics-and-inclusivity/

A Presidential Meeting on AI

On Thursday, President Joe Biden held a meeting at the White House with CEOs of leading AI companies, including Google, Microsoft, OpenAI, and Anthropic. The meeting emphasized the importance of ensuring the safety of AI products before deployment and addressing the risks posed by AI. However, some AI experts criticized the exclusion of ethics researchers who have warned of AI’s dangers for years.
Over the past few months, generative AI models such as ChatGPT have quickly gained popularity, driving companies to develop similar products rapidly. However, concerns have been growing about potential privacy issues, employment bias, and the potential for using them to create misinformation campaigns.

AI Ethics Researchers Respond

Critics of the companies’ ethical track records were not impressed by the meeting. They questioned the choice of inviting people to the meeting who, they argue, represent companies that have created the issues with AI that the White House seeks to address.
On Twitter, AI researcher Dr. Timnit Gebru wrote, “It seems like we spend half our time talking to various legislators and agencies and STILL we have this… A room full of the dudes who gave us the issues & fired us for talking about the risks, being called on by the damn president to ‘protect people’s rights.'” In 2020, Google fired Gebru following a dispute over a research paper she co-authored that highlighted potential risks and biases in large-scale language models.

University of Oxford AI ethics researcher Elizabeth Renieris tweeted, “Unfortunately, and with all due respect POTUS, these are not the people who can tell us what is “most needed to protect society” when it comes to #AI.”

AI Safety and AI Ethics

The criticism highlights the divide between “AI safety” (a movement concerned primarily with hypothetical existential risk from AI) and “AI ethics” (a group of researchers concerned largely about misapplications and impacts of current AI systems, including bias and misinformation).
Author Dr. Brandeis Marshall suggested organizing a “counter-meeting” that would include a diverse group of AI ethicists, practitioners, and researchers to discuss the real-world implications of AI and propose more inclusive and responsible solutions.

Looking Forward

Although the White House meeting brought attention to the potential risks and challenges posed by AI, it remains to be seen whether the discussion will lead to concrete actions that address these issues. It is crucial for government, industry, and academia to collaborate and ensure that AI development is safe, responsible, and equitable.
To foster a more comprehensive dialogue on AI ethics and safety, including voices from marginalized communities and interdisciplinary perspectives, could be a critical step in building more resilient and fair AI systems. Engaging stakeholders from various backgrounds in the decision-making process will help address concerns around bias, privacy, and the potential misuse of AI technologies.

As AI continues to advance rapidly, it will become increasingly important for policymakers and industry leaders to work together and create robust regulatory frameworks that protect the public interest while enabling innovation. Only through a collaborative approach can we ensure that AI technologies are harnessed to benefit society as a whole, while mitigating potential risks and negative consequences.

Virtual Reality meets Artificial Intelligence: The Fascinating World of Tore Knabe’s VR Game (Thu, 20 Apr 2023) https://promptmuse.com/virtual-reality-meets-artificial-intelligence-the-fascinating-world-of-tore-knabes-vr-game/

If you’re a fan of VR games and AI technology, then you’ll want to check out Tore Knabe’s latest creations. The Norwegian developer has combined these two technologies to create immersive experiences that allow players to interact with AI characters in virtual worlds. What’s more, the AI characters in the game are powered by GPT-3/GPT-4, one of the most advanced natural language processing models currently available.

In this article, we’ll explore how Tore Knabe created his groundbreaking VR games, and what makes them such a unique and exciting experience.

The Making of Tore Knabe’s VR Game

Tore Knabe’s VR games are the result of several different technologies working together. The games were built using Whisper, OpenAI’s speech-to-text (STT) API. In addition to Whisper, Knabe also used Eleven Labs, a company specialising in creating human-sounding artificial voices, and the GPT-3 API, which powers the AI characters in the game. A sketch of how such a pipeline fits together follows below.
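
Knabe’s actual implementation is not public, but a conversation loop of this kind broadly chains the three services together. The sketch below uses the 2023-era OpenAI library and the Eleven Labs REST API; the file names, voice ID, and system prompt are placeholders:

```python
import openai
import requests

openai.api_key = "sk-..."   # placeholder OpenAI key
ELEVENLABS_KEY = "..."      # placeholder Eleven Labs key
VOICE_ID = "priest-voice"   # hypothetical Eleven Labs voice ID

# 1. Transcribe the player's microphone recording with Whisper (STT).
with open("player_speech.wav", "rb") as f:
    text = openai.Audio.transcribe("whisper-1", f)["text"]

# 2. Generate the NPC's reply with the chat API.
reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a priest standing by a peaceful lakeside."},
        {"role": "user", "content": text},
    ],
)["choices"][0]["message"]["content"]

# 3. Synthesise the reply as audio with Eleven Labs, ready for in-engine playback.
audio = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": ELEVENLABS_KEY},
    json={"text": reply},
)
with open("npc_reply.mp3", "wb") as out:
    out.write(audio.content)
```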

The game takes place in various settings, including a peaceful lakeside scene with a priest and an old bar tavern with a talkative bartender. In both scenarios, the player is able to have conversations with the AI characters using natural language. The AI characters are able to respond to the player’s questions and statements in real-time, thanks to the power of GPT-3.

What’s particularly impressive about Tore Knabe’s game is the level of detail that has gone into creating the AI characters. For example, in the bar tavern scenario, the AI bartender is able to keep track of the various patrons in the bar, and can discuss each one with the player as a separate topic. This adds a level of realism to the game that is rarely seen in other AI-powered experiences.

The Future of VR and AI

Tore Knabe’s VR game is just the beginning of what could be a whole new era of gaming. As VR technology continues to improve, and AI becomes even more advanced, we can expect to see more games that combine these two technologies in innovative ways.

Imagine a virtual reality world where every character you encounter is powered by AI, and is able to converse with you in a natural and engaging way. Or a game where the AI characters can learn from your interactions, and adapt their behaviour based on your choices. The possibilities are truly endless.

Tore Knabe’s VR game is an exciting glimpse into the future of gaming, where AI and VR technology are combined in new and innovative ways. By using GPT-3 to power the AI characters in the game, Knabe has created a truly immersive experience that feels like a real conversation.

As we continue to explore the possibilities of AI and VR, it’s important to consider the potential benefits and drawbacks of this technology. While there are certainly concerns about its impact on society, there’s no denying that the combination of AI and VR has the potential to revolutionize the way we play games, and even how we interact with the world around us.
