Character Creator Workflow For Fast 3D Animation Production

The world of 3D character design is evolving rapidly, and two software giants, Character Creator and ZBrush, are at the forefront of this revolution. These tools are not just simplifying the character creation process but are also ensuring that characters are ready for animation and production, saving both time and resources.

Character Creator stands out for its ability to quickly generate base models that are ripe for customization. With its intuitive interface and versatile morph sliders, creators can easily shape the basic anatomy of their characters. This software shines in its ability to offer a base that includes rigs and morph targets, essential for both facial and body animation.

A character creator workflow for fast production using base meshes

Once the basic form is created in Character Creator, ZBrush comes into play. Known for its robust sculpting tools, ZBrush allows artists to add intricate details and a unique personality to their characters. Its brushes are perfect for crafting stylized aesthetics, moving away from realism and embracing a more exaggerated, cartoon-like appearance.

A fast production screenshot of a 3D model being transferred to ZBrush

One of the most significant advantages of using Character Creator and ZBrush is their seamless integration. With just a click, characters can be transferred between the two programs without losing any detail. This integration is a game-changer, ensuring that the creative flow isn’t interrupted.

Characters created with these tools are not just visually appealing but are also production-ready. They come with clean topology, rigs, and weights, making them perfect for animation projects. This readiness significantly cuts down the time from concept to production, a crucial factor in fast-paced project environments.

Easy Facial Edit tools in Character Creator create seamless workflow for character production

For those who use Blender for compositing, the good news is that these characters are fully compatible. With the help of a simple plugin, characters can be imported into Blender, retaining all their rigging and morphing qualities. This flexibility opens up avenues for creators who operate in different software environments.

The combination of Character Creator and ZBrush is a testament to how technology is simplifying yet enhancing the art of 3D character creation. By reducing the technical barriers, these tools allow artists to focus more on the creative aspect of character design. As we continue to see advancements in these tools, the future of character creation looks more exciting than ever.

ComfyUI and Animate Diff Evolve Installation Guide


Hello there, Prompt Muse here! In this comprehensive guide, I’ll walk you through the essentials of setting up ComfyUI and Animate Diff Evolve.

1. Introduction

• ComfyUI offers a node-based layout, allowing for a streamlined workspace tailored to your needs.
• Drag and drop features for images and workflows enhance ease of use.
• This tutorial aims to make you feel comfortable with ComfyUI, ensuring you make the most of its unique features.

2. System Requirements

• An Nvidia RTX graphics card is preferable (it also works with AMD cards and Macs; click here).
• Preferred: 12 GB of VRAM for processing videos.
• Allocate 50-80 GB of storage for outputs.

Tip: If your PC doesn’t meet these requirements, consider using Shadow Tech. It’s a remote PC service I personally use, providing an NVIDIA 1080 GPU, ample storage, and 12 GB VRAM for about $40/month.

3. Software Dependencies

• Git: Allows you to pull extensions from GitHub. Download here.
• FFmpeg: Essential for encoding videos. Download here.
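Before moving on to the install, you can sanity-check that both of the dependencies above are actually visible on your system PATH. This is just a convenience sketch in Python, not part of the official setup:

```python
import shutil

# Confirm the two external dependencies are reachable on PATH.
for tool in ("git", "ffmpeg"):
    location = shutil.which(tool)
    if location:
        print(f"{tool}: found at {location}")
    else:
        print(f"{tool}: NOT found - install it and reopen your terminal before continuing")
```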

4. Installing ComfyUI & ComfyUI Manager

1. Visit the ComfyUI GitHub page.
2. Under the “Installing” section, select the “Direct link to download” for the standalone portable version.
3. Once downloaded, extract the files to your chosen directory.
4. For ComfyUI models:
• Checkpoints: Download from civitai.com and place in the checkpoints folder.
• VAE: Download from Stability AI's Hugging Face page and place in the vae folder.
5. Install ComfyUI Manager from the civitAI page for easy updates and add-on installations.

5. Basic Workflow Overview

• ComfyUI operates like a circuit board, with nodes representing each process.
• Start with the Load Checkpoints node, input your positive and negative prompts, and proceed to the K Sampler.
• The Latent Image Node determines your image dimensions and batch size.
• The VAE Decode node processes the final image.

Pro Tip: Images created in ComfyUI can be dragged and dropped back into the system to load their respective node layouts.
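If you prefer to drive that same node chain from a script, ComfyUI also accepts workflows in its API JSON format via its local HTTP endpoint. The sketch below is a hand-written example of that format: the node class names are the stock nodes described above, but the checkpoint filename is a placeholder, and the exact input names are best confirmed by exporting your own graph with "Save (API Format)".

```python
import json
import urllib.request

# A minimal API-format graph: Load Checkpoint -> positive/negative prompts -> KSampler -> VAE Decode -> Save.
# Node IDs are arbitrary strings; ["id", slot] pairs wire one node's output slot into another node's input.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},  # placeholder checkpoint
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a lighthouse at dusk, stylized, highly detailed", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "promptmuse"}},
}

# Queue the job on a locally running ComfyUI instance (default address shown).
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(request).read().decode())
```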


## **6. Installing Motion Models**

1. In ComfyUI Manager, go to **Install Models**.
2. Type “mm” in the search box, which stands for motion models.
3. Download and install **Motion Model version 14**, **version 15**, and **version 15 v2**. The high and mid variants are also recommended.
4. After downloading, you should see a confirmation message.

## **7. Installing Control Net**

1. Still in the ComfyUI Manager, search for “control net”.
2. Locate and install **Stable Diffusion 1.5**. You might need to scroll down the list.
3. Install other control nets like **Line Art** and **Open Pose** as needed, ensuring you have enough storage.

**Note:** After installing, close all instances of the manager and ComfyUI. When reopening ComfyUI, it will automatically install the nodes and extensions you’ve selected.

## **8. Utilizing Pre-Made Templates**

1. Visit **Kosinkadink’s GitHub page**. He offers pre-made templates for Animate Diff Evolve.
2. Download a workflow, drag, and drop it into ComfyUI to load the node layout.
3. If you encounter missing nodes (highlighted in red), simply go to the manager and click on **Install Missing Nodes**.

## **9. Running Workflows**

1. Use the **Load Checkpoints** node, followed by **Clip Setting**, **Positive Prompt**, and **Negative Prompt**.
2. The **K Sampler** determines sampling, while **Animate Diff Loader** adds motion.
3. Adjust settings such as image dimensions, batch size, and motion model as needed.
4. For advanced users, experiment with nodes like **Animate Diff Uniform Context Options** to enhance animations.

## **10. Adding a LoRA for Enhanced Details**

1. Double-click on an empty space in ComfyUI and search for **LoRA**.
2. Select the **LoRA Loader** node and connect it as shown in the workflow.
3. Adjust the **Strength Model** to reduce the risk of artifacts.
4. Run the prompt and observe the enhanced details in the generated animation.

## **11. Video to Video Animations Workflow**

1. Use **Inner Reflections’** guide as a template for video-to-video animations.
2. For video inputs, use the **Video Load Node**.
3. Adjust settings like frame load cap, skip first frames, and select every nth frame to customize the video output.
4. Use **Load Checkpoint** to select your desired model.
5. Implement control nets, like **Line Art**, to guide the animation style.
6. Experiment with different nodes and settings to achieve unique video outputs.

## **12. Exploring Other Templates**

1. Another great template to explore is **Prompt Scheduling**, which allows dynamic prompt changes over time.
2. This feature can be used to create evolving animations, adding depth to your projects.
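To make the prompt-scheduling idea above concrete: a schedule is essentially a mapping from frame numbers to prompts, with the sampler switching or blending between neighbouring entries as the animation advances. The toy Python sketch below only illustrates that mapping; the actual text-field syntax expected by the scheduling node depends on the node pack you installed, so copy the format from its own examples.

```python
# Frame-keyed prompt schedule: the animation drifts from one description to the next.
schedule = {
    0:  "a quiet village at dawn, soft light",
    48: "the same village at midday, bustling market",
    96: "the village at night, lanterns and rain",
}

def prompt_for(frame: int) -> str:
    """Return the most recent scheduled prompt at or before the given frame."""
    keys = [k for k in sorted(schedule) if k <= frame]
    return schedule[keys[-1]] if keys else schedule[min(schedule)]

for f in (0, 30, 60, 120):
    print(f"frame {f}: {prompt_for(f)}")
```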

**Conclusion**

Thank you for following along! With ComfyUI, the possibilities are vast, allowing for creative freedom and intricate animations. A written version of this tutorial is available on the Prompt Muse website. Feel free to reach out on social media @promptMuse for further assistance.

Remember: Art is a journey, and every masterpiece begins with the decision to try. Dive in and explore the world of ComfyUI and Animate Evolve!

DALL·E 3.0: Is the Hype Justified?

In the ever-evolving landscape of AI-driven creative technology, DALL·E 3.0 is the name on everyone’s lips. With its impressive capabilities, DALL·E 3.0 seemed to outperform its predecessors like Midjourney, and the best part? It was free. But was it too good to be true? Let’s delve into my experience with DALL·E 3.0 to find out.

I was one of the early adopters of DALL·E 3.0, and I couldn’t wait to put this new AI image generator to the test. The initial results, I must say, blew me away. What set DALL·E 3.0 apart was its ability to understand not just single words or simple prompts but entire phrases and sentences. It felt like a significant leap forward in AI creativity.

The images it generated were a sight to behold – beautiful, bright, and full of life. It seemed like I could finally do away with the rigid, prompt-based language that had constrained previous AI models. DALL·E 3.0 was a breath of fresh air in the world of creative content generation.

Excitement filled my days as I used DALL·E 3.0 to create a wide array of images for a project I was passionately working on.  It quickly became an indispensable tool in my creative arsenal. However, all good things must come to an end, and in this case, it happened rather abruptly.

One fateful day, I logged onto the platform to create more AI-generated images, only to be met with a disheartening message: “Prompt blocked.” I tried again, hoping it was just a minor glitch, but to my dismay, the same message appeared: “Prompt blocked.”

A screenshot of the “Prompt blocked” message in DALL·E 3.0

It turned out that DALL·E 3.0 had been nerfed. The issue lay in the specific prompt I had used – a celebrity’s name, Lana Del Rey, to be precise. Previously, DALL·E 3.0 had excelled in creating fantastic reference images for my project, but suddenly, it refused to cooperate. The reliable wellspring of creative inspiration had run dry.

A DALL·E 3.0 Lana Del Rey reference image, generated before the prompt restrictions were introduced

As I reflected on my journey with DALL·E 3.0, I couldn’t help but acknowledge the rollercoaster of emotions it had taken me on. It had been an incredible tool, enabling boundless creativity with its brilliant images. But, it came with its limitations, and one had to tread carefully to avoid promptblocks.

In the end, my experience with DALL·E 3.0 was indeed remarkable, a brief and brilliant journey through the realm of AI-generated artistry. Though its limitations became evident, the potential of this technology remains undeniably promising. DALL·E 3.0 may have had its ups and downs, but it undeniably showcased the incredible power of AI in the world of creative content generation.

If you wish to try it out for yourself, a link is here: https://www.bing.com/create

It is still a fantastic tool, but not as good as it once was.

PromptMuse

How to Animate Game Characters and Import Them into Blender and Unreal Engine 5

In this tutorial, I will guide you through the process of animating your game characters and importing them into Blender and Unreal Engine 5. This tutorial is designed for those who don’t have a budget for expensive animation software or motion capture suits. The full tutorial video of this process can be found on my Youtube channel here.

We will be using software called “Cascadeur,” which has been around for about 10 years and offers a free version with some powerful animation tools. While this method is not a replacement for professional animation software or mocap, it’s a viable alternative if you’re on a tight budget.

Note: Before you start, make sure you have Cascadeur and Accurig installed on your computer. You can download Cascadeur from the official website, and Accurig is a free auto-rigging tool that complements Cascadeur.


Let’s get started!

Part 1: Rigging Your Character in Accurig

  1. Open Accurig and click on “Choose File” to select your 3D character’s FBX file. You can use a sample character from Cascadeur, Mixamo, Sketchfab, or your own custom character.
  2. After loading your character’s mesh, click on “Rig Body” to generate the joint structure for your character’s skeleton.
  3. Accurig will display circles where joints should be placed. Ensure symmetry is checked to work on one side of the character.
  4. Position the joint guides according to your character’s anatomy, following the on-screen guides for reference.
  5. Use the tools in the bottom left corner to rotate and move around your character for precise joint placement.
  6. Repeat the process for other body parts, such as arms and legs, ensuring correct joint placement.
  7. Use the “Preview Motion” window to check the animation on various body parts, including fingers.
  8. Ensure your character is in a neutral pose (A-pose or T-pose) before exporting.
  9. Click “Upload to AccuRig” and then “Export” > “Export FBX.” Set the target application to “Maya” and check “Embed Texture.” Click “Export” to save the rig.
  10. Export another FBX file of your character’s base mesh but set the target application to “Blender” for later use.
AccuRig auto-rigging tool by Reallusion

Part 2: Creating a Basic Idle Animation in Cascadeur

  1. Open Cascadeur and start a new scene. Import the FBX file with Maya settings that you exported from Accurig.
  2. Cascadeur will ask if you want to enter “Rig Mode.” Click “Yes.”
  3. In the “Rig Mode Helper” dialog, click “Yes” and then “OK” on the next dialog.
  4. Click “Add Rig Elements” at the bottom of the “Quick Rigging Tool” dialog.
  5. Rotate your character by holding ALT and the left mouse button to navigate.
  6. Select the “Auto Pose” tool to enable automatic control point positioning as you move your character.
  7. Position your character into an initial pose for your idle animation by moving and rotating control points. Use ‘W’ to move and ‘E’ to rotate.
  8. Add a keyframe at frame 10 by clicking the key icon.
  9. Change the hand pose on frame 10 to create a hand open/close animation.
  10. Duplicate the first frame to frame 20 and mirror the pose for variety.
  11. Duplicate the second keyframe to frame 35 and mirror it to frame 45.
  12. Extend the timeline to add more frames for smoother animation (e.g., 200 frames in total).
  13. Copy the first frame to frame 145 and the second keyframe to frame 110.
  14. Apply bezier curve interpolation for smoother animation between keyframes.
  15. Review and refine your animation by adding subtle movements, such as chest and shoulder motion.
  16. Create a seamless loop by ensuring the first and last frames are identical and adjust frame numbers accordingly.

Part 3: Exporting the Animation to Blender

  1. Export the animation from Cascadeur to Blender by going to “File” > “Export” > “FBX.” Name the file and click “Save.”
  2. In Blender, import the animation by going to “File” > “Import” > “FBX.” Use the default settings and click “Import FBX.”
  3. Delete any existing objects in the Blender scene and select the imported Armature.
  4. Adjust the Armature’s rotation to face the front and place it in the scene.
  5. Create an animation track for the imported animation and rename it.
  6. Copy the animation keyframes from the imported Armature and paste them onto your character’s Armature.
  7. Delete the imported Armature to keep your scene clean.
  8. Create an animation loop for your idle animation in Blender using the NLA (Non-Linear Animation) Editor; a scripted version of this step is sketched just after this list.
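If you would rather script the looping step than set it up by hand in the NLA Editor, Blender's Python API can do the same thing from the Scripting tab. This is a minimal sketch under a few assumptions: the FBX path is a placeholder, the imported armature carries the baked action as its active action, and the repeat count of four is arbitrary.

```python
import bpy

# Import the animation FBX exported from Cascadeur (path is a placeholder).
bpy.ops.import_scene.fbx(filepath="/path/to/idle_animation.fbx")

# Grab the armature that was just imported, along with its baked action.
armature = next(obj for obj in bpy.context.selected_objects if obj.type == 'ARMATURE')
action = armature.animation_data.action

# Push the action onto a fresh NLA track and repeat it to form the idle loop.
track = armature.animation_data.nla_tracks.new()
track.name = "IdleLoop"
strip = track.strips.new(action.name, int(action.frame_range[0]), action)
strip.repeat = 4.0  # number of back-to-back cycles; adjust to taste

# Clear the active action so the NLA strip drives playback.
armature.animation_data.action = None
```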

Part 4: Exporting the Animation to Unreal Engine 5

  1. In Unreal Engine 5, create a new project and organize your project folder.
  2. Import your character’s 3D mesh into Unreal Engine by right-clicking in the “Content” folder, selecting “Import,” and choosing your FBX file. Ensure it’s assigned to the correct skeleton.
  3. Add a Level Sequence to your project by right-clicking in the “Content” folder and selecting “Level Sequence.”
  4. Drag your character’s skeletal mesh into the Level Sequence.
  5. Add your idle animation to the Level Sequence by clicking the plus icon and selecting the animation.
  6. Adjust the timeline as needed and press the spacebar to preview your animation.
  7. Extend the timeline and blend your idle and walk animations for a seamless transition.

Part 5: Adding Free Mocap Data

  1. Visit the ActorCore website and explore the free motion resources.
  2. Download free motion data compatible with Cascadeur.
  3. Import the downloaded motion data into Cascadeur, and apply it to your character as needed.
  4. Refine and customize the imported motion data to suit your character and animation needs.

This tutorial should help you get started with animating and importing your game characters into Blender and Unreal Engine 5 using Cascadeur. Feel free to explore further features and animation possibilities in Cascadeur to enhance your character animations.

Remember, practice makes perfect, and with time, you’ll be creating stunning animations for your game characters. Enjoy animating!

I Turned Myself Into A 3D Game Character – Tutorial

Introduction

In this tutorial, I will show you how to use the Headshot 2 plug-in for Character Creator to create a 3D character from a scanned head mesh. This new plugin from Reallusion uses AI technology to turn any mesh into a retopologized 3D character, with UVW mapping, a rigged mesh and blend shapes for facial animation. There is a full-length video of the tutorial here.

What you will need

Polycam
Polycam LiDAR head scan with iPhone

Step 1: Import the head mesh into Character Creator

  1. Open Character Creator software.
  2. Go to File > Import > .obj.
  3. Select the head .obj mesh file that you want to import.
Importing the .obj mesh into Character Creator

Step 2: Headshot 2 Plugin 

  1. Click on the Headshot 2 plug-in in the top toolbar. You need to install this beforehand from the Reallusion website here.
  2. The Headshot 2 dialog box will open.
  3. Click on the Align Points button.
  4. The starter pins will be automatically placed on the head mesh.
  5. Move the pins so that they are aligned with the corresponding points on the Character Creator mesh. Delete pins by Ctrl & click the pin you want to remove.
Headshot 2 plugin: aligning points

Step 3: Generate the head mesh

  1. Click on the Head Gen button from the horizontal toolbar.
  2. A preview of the generated head mesh will be displayed. You can select the area of the mesh you wish to be generated onto the CC3+ model.
  3. If you are happy with the preview, click on the Generate button, otherwise you can go back and adjust your points.

Step 4: Refine the head mesh

  1. Use the brushes in the panel to refine the head mesh.
  2. You can smooth the mesh, move the mesh, or project specific areas.
Refining the head mesh with brushes

Step 5: Attach the head mesh to a body

  1. Click on the Attach to Body button in the Headshot 2 dialog box.
  2. Select the body type that you want to attach the head mesh to.
  3. Click on the Attach button.

Step 6: Add skin textures and materials

  1. Open the contents dialog.

  2. Use the presets in the SkinGen plugin to add realism to the skin texture, such as pores, muscle definition and freckles.

  3. You can also add make up and decals to the character.

  4. In the Modify panel, go to the Morphs options and adjust the depth of the eyeballs and height of the teeth using the sliders.

SkinGen plugin for realistic skin presets

Step 7: Add hair

  1. I used the hair plugin (Hair Builder) and additional hair assets from the Reallusion marketplace.

Conclusion

This is just a basic tutorial on how to use the Headshot 2 plug-in for Character Creator. There are many more things that you can do with this plug-in, so I encourage you to experiment and explore.

I hope this tutorial is helpful. Please let me know if you have any questions.

Meet Pass AI Detection: Your Free Tool to Bypass AI Content Detectors

A FREE Tool for Content Creators

In the rapidly advancing world of artificial intelligence (AI), content creators are continually seeking innovative ways to ensure their work bypasses AI detectors. Enter Pass AI Detection, a ground-breaking tool that not only refines your content for human readers but also ensures it’s tailored to bypass AI detectors increasingly utilised by search engines and other platforms.

What is Pass AI Detection?

Pass AI Detection is a sophisticated tool that analyses your text and modifies it so that it reads naturally to human readers while slipping past AI detectors. It’s a game-changer for content creators, enabling them to create engaging content that meets the stringent requirements of AI detectors.

AI content detectors are utilised in a range of applications, from search engines to academic integrity tools like Turnitin. These AI detectors scrutinise the text to determine its relevance, quality, and originality. As a result, content creators need to ensure their content is tailored to bypass these AI content detection systems to improve visibility and reach.

Pass AI Detection leverages cutting-edge AI techniques to evaluate and modify your content. The process begins with the AI content detector, which scans your text and identifies areas for improvement. The AI detector looks at your distribution of keywords and topics and provides a suggested distribution that would optimise your content for AI content detection.
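The idea of a keyword-distribution check is easy to demonstrate in a few lines. The snippet below simply measures what share of a draft each target keyword accounts for; it is an illustration of the concept, not Pass AI Detection's actual algorithm.

```python
import re
from collections import Counter

def keyword_distribution(text: str, keywords: list[str]) -> dict[str, float]:
    """Return the share of total words that each target keyword accounts for."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1
    return {kw: counts[kw] / total for kw in keywords}

draft = ("AI detectors scan text for relevance and originality. "
         "Balanced keyword use keeps text readable for humans and detectors alike.")
print(keyword_distribution(draft, ["ai", "keyword", "detectors"]))
```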

A Balanced Approach to Content Creation

Pass AI Detection centres on balance. It understands the negatives of keyword stuffing and the necessity for reader-friendly text. The tool doesn’t increase keyword frequency, but promotes balanced keyword distribution, crafting content that is both AI and reader-friendly.

As AI detection methodologies progress, Pass AI Detection adapts. The tool is continuously refining its algorithms to ensure your content remains optimised for the most recent AI detection techniques. This commitment to staying ahead of the curve positions Pass AI Detection as a leader in the AI content detection market.

Content creation is a critical component of successful marketing strategies. Pass AI Detection helps generate content that not only attracts your audience but also performs well with AI detectors, achieving an ideal balance between human readability and AI detectability. Experience cost-effective, innovative content creation with Pass AI Detection’s unique BYOK model. Join the revolution today.

Warp Fusion: A Comprehensive Step-by-Step Tutorial

Warp Fusion is an innovative AI animation tool that allows you to create stunning, eye-catching videos. This tool has been gaining popularity for its ability to create unique animations with a consistent theme or style. This tutorial will guide you through the process of using Warp Fusion, focusing on using a remote GPU, which is a preferred method for many as it allows for running multiple GPUs simultaneously, freeing up your PC for other projects.

Getting Started with Warp Fusion

Prerequisites

Before we dive into the tutorial, there are a few prerequisites you need to have:

  • Google Colab Pro or Google Colab Pro Plus
  • Access to Alex’s Patreon page where you can get the Warp Fusion notebook
  • A model and a LoRA from Civitai

Setting Up Warp Fusion

The first step in using Warp Fusion is setting up the environment. This involves downloading the Warp Fusion notebook from Alex’s Patreon page and loading it into your Google Colab. Once you have the notebook ready, you need to connect it to a hosted runtime if you’re using Google Colab Pro. This ensures that you’re using your compute unit and the GPU.

Configuring Warp Fusion

Basic Settings

Once your environment is set up, you can start configuring Warp Fusion. The first thing you need to do is specify the name of the folder where your output files will be stored in your Google Drive. You also need to set the width and height of your output video to match your input video. A resolution of 720 by 1280 is a good starting point, but you can adjust this to suit your needs.

Video Input Settings

Next, you need to specify the path to your input video. This is the video that you want to animate using Warp Fusion. You can save your input video on your Google Drive for easy access. If your video is long and you want to reduce the diffusion time, you can set the ‘extra frame’ setting to two, which means Warp Fusion will diffuse every other frame.
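To see how much the 'extra frame' setting saves, a quick back-of-the-envelope calculation helps. The per-frame diffusion time below is a made-up figure, so substitute whatever you actually observe on your GPU:

```python
fps = 30
clip_seconds = 20
seconds_per_frame = 25               # assumed average diffusion time per frame
total_frames = fps * clip_seconds

for extra_frame in (1, 2, 3):        # 1 = every frame, 2 = every other frame, ...
    diffused = -(-total_frames // extra_frame)   # ceiling division
    hours = diffused * seconds_per_frame / 3600
    print(f"extra_frame={extra_frame}: {diffused} frames diffused, roughly {hours:.1f} h")
```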

Video Masking

Video masking is a useful feature that can help make the background of your animation consistent and stable. You can use your input video as the mask source and extract the background mask. If you want to add another video into the background, you can specify the path to that video in the ‘mask video path’ setting.

Defining SD and K Functions

The next step is to define the SD and K functions. You need to change the ‘load to’ setting to GPU and specify the path to the model you want to use. You can get models from Civitai or Hugging Face. You also need to specify the directory where your control net models will be stored on your Google Drive.
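In practice, those paths all live on your mounted Google Drive. The cell and variable names differ between notebook versions, so treat the snippet below as an assumed layout rather than the notebook's exact code; only the Drive mount call is standard Colab.

```python
# Standard Colab step: make your Google Drive visible under /content/drive.
from google.colab import drive
drive.mount('/content/drive')

# Example paths you might paste into the Warp Fusion settings cells
# (folder and file names are placeholders - point them at your own files).
video_init_path = "/content/drive/MyDrive/warp/input.mp4"
model_path = "/content/drive/MyDrive/models/dreamshaper_8.safetensors"
controlnet_models_dir = "/content/drive/MyDrive/warp/controlnet"
output_folder = "/content/drive/MyDrive/warp/output"

print(video_init_path, model_path, controlnet_models_dir, output_folder, sep="\n")
```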

Running Warp Fusion

Once you have all your settings configured, you can start running Warp Fusion. This involves running all the cells in the notebook up to the GUI section. This process can take about 5 to 10 minutes, and you will get a green tick for every cell that has successfully completed.

Customizing Your Animation

Using Prompts and Loras

Prompts and LoRAs are powerful features that can help you customize your animation. Prompts are instructions that guide the AI in creating the animation, while LoRAs are small add-on models that inject a particular style, character, or level of detail into the output. You can specify your prompts and LoRAs in the GUI section of the notebook. You can also adjust the strength of the stylization and the prompt guidance to achieve the desired effect.

Using Control Nets

Control nets are auxiliary models that constrain the animation using signals such as pose or line art taken from your input video. You can select the control net models you want to use and adjust their weights to influence their impact on the animation.

Using the Warp Tab

The warp tab allows you to adjust the flow blend of your animation. This is the blending of your input video with the next frame of stylization. If you find that your animation is over-stylized, you can reduce the flow blend to achieve a more balanced effect.

Using the Mask Tab

The mask tab allows you to use a background mask for your animation. You can change the color or use an image or the original video as the background. This can help to create a more consistent and stable background for your animation.

Rendering Your Animation

Once you have customized your animation, you can start rendering it. This involves running the ‘diffuse’ cell in the notebook. As your animation progresses, you will see a preview frame that allows you to check for any errors. If there are errors or things you want to fix, you can stop the diffusion and adjust your settings.

Creating a Video from Your Animation

After your animation has been rendered, you can create a video from it by running the ‘Create Video’ cell. This will create a video from the frames that were diffused in the previous step. The video and all the frames will be saved to your Google Drive.

Post-Production

After you have created your video, you can bring the frames into a post-production software like DaVinci Resolve or After Effects for further editing. This can involve reducing the flickering of the animation or exporting the frames into a .mov file.

Upscaling Your Video

The final step in the process is upscaling your video. This can be done using a service like Topaz Labs or Pixel. These services use AI to increase the size of the video and make it sharper. Pixel is a browser-based service that is easy to use and offers a pay-as-you-go pricing model, making it a cost-effective alternative to Topaz Labs.

Warp Fusion is a powerful tool that allows you to create unique and eye-catching animations. With its wide range of features and customization options, you can create animations that truly stand out. Whether you’re a hobbyist or a professional, Warp Fusion offers a fun and innovative way to create animations.

Remember to share your creations on social media and tag Prompt Muse. We love to see what you create with Warp Fusion. Happy prompting!

Revolutionise Your Content Creation with UseChatGPT.AI – The FREE A.I Content Buddy

If you are looking for a reliable AI tool for your content generation needs, your search may be over. UseChatGPT.Ai is a cutting-edge software that allows users to integrate their favorite AI models into their workflow effortlessly. Whether you are a content creator or a marketer, this tool can help you streamline your work process and improve productivity. In this article, we will review UseChatGPT.Ai in depth and explore its features, benefits, and drawbacks.

How to Use UseChatGPT.Ai

UseChatGPT.Ai is a browser extension that is compatible with Chrome, Firefox, and Safari. The application can be downloaded from the official website free of charge. Once installed, the extension allows you to use AI models such as GPT-4 (via the OpenAI API), Bing, and Bard on any text-based website or PDF with just one click. Here’s how to get started with UseChatGPT.Ai:

  1. Go to the official website and download the extension for your browser.
  2. Install the extension by following the instructions on the screen.
  3. Once installed, click on the UseChatGPT.Ai icon in your browser toolbar.
  4. Select your preferred AI model from the dropdown menu.
  5. Copy and paste the text you wish to analyze into the text box.
  6. Click on “Generate” and wait for the AI model to do its magic.
  7. Review the generated text and use it as you wish.

Benefits of UseChatGPT.Ai

UseChatGPT.Ai offers a variety of benefits to its users, including:

Improved Productivity

With UseChatGPT.Ai, you can generate high-quality content in a matter of seconds, saving time and effort. The AI models used by the tool are capable of analyzing large amounts of data and providing accurate results, allowing you to focus on other tasks.

Customization

UseChatGPT.Ai allows you to choose from multiple AI models and customize your prompts as per your requirements. This feature ensures that you can generate content that is tailored to your specific needs.

Cost-Effective

UseChatGPT.Ai is completely free to use, making it a cost-effective alternative to other AI tools that may charge a significant amount of money.

User-Friendly Interface

UseChatGPT.Ai has a simple and intuitive interface that makes it easy for even non-technical users to navigate the tool. The interface is well-designed and organized, ensuring a seamless user experience.

Drawbacks of UseChatGPT.Ai

Like any tool, UseChatGPT.Ai has its limitations. Here are a few drawbacks to keep in mind:

Accuracy Issues

While the AI models used by UseChatGPT.Ai are highly advanced, they are not foolproof. There may be instances where the generated content is inaccurate or irrelevant, requiring manual editing.

Dependency on Internet Connectivity

UseChatGPT.Ai requires a stable internet connection to function correctly, which may be an issue for users with limited access to the internet.

Final Thoughts

Overall, UseChatGPT.Ai is a powerful AI tool that can significantly improve your productivity and content generation capabilities. The tool’s user-friendly interface, customization options, and cost-effectiveness make it an excellent choice for content creators and marketers seeking to streamline their workflow. However, like any tool, UseChatGPT.Ai may not be perfect, and users should be aware of its limitations before using it. We recommend trying out UseChatGPT.Ai for yourself and experiencing its benefits firsthand.

Create GPT4 articles with automated internal linking

Alex Player is a content creator who has been producing high-quality content for years. He has been able to accomplish this by using various tools and techniques that help him streamline his writing and research processes. In this article, we will explore Alex’s use of Chatbase, a powerful tool that can help anyone create awesome content in a matter of minutes.

LINKS:

ChatBase LifeTime Deal

ChatBase Monthly Subscription

Prompts:

Base Prompt

you are a writer. you write content using the resources, as I request and always add links to references. your articles are well written, factual, and contain subheadings every 1-3 paragraphs. you can ONLY mention products from the resources I provided you. you article starts with an introduction and ends with a conclusion. Do not write ANYTHING except the requested article.

Main Prompt (in chat)

Instructions:  write an 800-word article 

Topic: Creating a Modern Nursery: Top Furniture and Decor Picks from Nubie 

Target Audience: Parents

Internal Linking: Always use hyperlinks for as many relevant links within the site as possible, Example: We have <a href=”https://www.thechildrensfurniturecompany.com/beds>Beds</a> for all ages

Writing Style: Informative, broken up with relevant subheadings | bold (with <strong> tags) hyperlinked words.

URLS: Do not add full URLs, it needs to be wrapped in <a href> tags.

Title: clickbaity / interesting

User Intent: deciding to make a purchase 

Language: British English

Subheadings & titles: Wrapped in relevant H tags eg <h1>title</h1>

Paragraph Length: Random between 200-600 words

What is Chatbase?

Chatbase is an application that is designed to help users create chatbots for various purposes. However, Alex has found a unique use for the application that he believes can help anyone who is involved in content creation. With Chatbase, you can analyze all the content on a website and use it to train an AI model to create high-quality content. The tool uses OpenAI’s GPT-3 model to generate content that is both engaging and relevant to your topic.

How to Use Chatbase for Content Creation

Alex has developed a systematic approach to using Chatbase for content creation. Let’s take a look at his method step-by-step:

Step 1: Purchase Chatbase
The first step is to purchase Chatbase and get an OpenAI API key. You can buy Chatbase for $29 from AppSumo but you will need to purchase an OpenAI API key which costs $159. The key is essential as it allows Chatbase to access OpenAI’s GPT-3 model, which is an integral part of the application.

Step 2: Analyze the Website
Once you have installed Chatbase and input your OpenAI API key, you can start analyzing the website you want to create content for. This is an important step as it allows Chatbase to learn all the content on the website. To analyze the website, simply input the URL and let Chatbase crawl the website. The process can take a while depending on the size of the site. The tool can crawl through up to six million characters of content. Alex advises that you make sure you have selected the correct website and region you would like to extract content from.
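If you want a rough sense of whether a site fits inside that six-million-character allowance before pointing Chatbase at it, you can total the visible text yourself. Here is a small sketch using the third-party `requests` and `beautifulsoup4` packages; the URL list is a placeholder and would normally come from the site's sitemap:

```python
import requests
from bs4 import BeautifulSoup

pages = [
    "https://example.com/",
    "https://example.com/products/",
]  # placeholder URLs

total_chars = 0
for url in pages:
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    total_chars += len(text)
    print(f"{url}: {len(text):,} characters of visible text")

print(f"Total: {total_chars:,} / 6,000,000 characters")
```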

Step 3: Train the Model
Once the website has been fully analyzed, you can proceed to train the AI model. During this process, you can choose what type of data sources you want to teach the AI. For example, you can choose to teach the AI only specific pages, products, or searches. The Chatbot API can also hyperlink into pages found on the website, allowing it to generate content that is precise and specific to the user’s query.

Step 4: Customize the Content
After creating the Chatbot API, you can start generating content. The AI model produces the content quickly, but as Alex notes, it’s not always perfect. You can further customize the text generated by removing any parts you don’t like. The chatbot dynamically learns from your changes, so after making selections, the chatbot may produce better results the following time you use it.

Step 5: Use the Content for Articles, Blogs, Ads, Social Media Campaigns
Once you have fine-tuned the content to your preferences, you can export the content for use on your blog, social media channels, ads, or even content for a book. The Chatbase tool does not limit the user to specific content formats. Instead, you are free to use the generated content as you see fit. You can even store multiple versions of generated content until you decide which version to use, meaning more value for a single purchase.

Alex’s Tips and Tricks for Chatbase Content Creation

1. Test the AI
Before exporting the AI-generated content, Alex suggests testing the chatbot API’s accuracy. Perform some queries related to the keywords and phrases that are relevant to your brand, and see how well the Chatbot API responds. If any responses are inaccurate, make the necessary changes to enhance accuracy.

2. Use the Right Temperatures

Whether you are using Chatbase with GPT-3, GPT-3.5, or GPT-4, Player advises that the temperature value strongly influences the output. When creating articles, use a low temperature such as 0 to keep the content factual and avoid false statements.
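If you ever call the model directly rather than through Chatbase, the same advice applies: pin the temperature low. Below is a hedged sketch using the official `openai` Python package; the model name and prompts are placeholders, and the client interface has changed across package versions, so match it to the version you have installed.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",        # placeholder - use whichever model your account allows
    temperature=0,        # low temperature keeps article output factual and repeatable
    messages=[
        {"role": "system",
         "content": "You are a writer. Only mention products from the resources provided."},
        {"role": "user",
         "content": "Write a 200-word overview of the nursery furniture range."},
    ],
)
print(response.choices[0].message.content)
```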

Chatbase is a powerful tool that can help anyone who is involved in content creation. With Alex’s systematic approach, you can use the tool to analyze any website and generate high-quality content in just a matter of minutes. If you’re currently working on a content creation project and want to streamline your writing process, consider investing in Chatbase. This application will not only save you time but also produce content that is both engaging and relevant to your topic.

FAQ

1. What is Chatbase and how can it be used as a chatbot?
Answer: Chatbase is an application that can memorize and learn all content on a website to be used as a chatbot.
2. Can Chatbase be used in a different way?
Answer: Yes, it can be used in a different way, which will be explained in this episode.
3. How much does Chatbase cost?
Answer: Chatbase is available on AppSumo for $29.
4. Do you need an Open AI API key for Chatbase to work?
Answer: Yes, an Open AI API key is required to use Chatbase, and it costs $159.
5. What is Chatgpt and how is it used in the demo?
Answer: Chatgpt is a tool for creating chatbots, and Alex Player demonstrates how to use it in this episode.
6. Can you choose data sources to teach the chatbot using Chatbase?
Answer: Yes, you can choose data sources to teach the chatbot using Chatbase.
7. What is the maximum amount of characters you can use on the current tier of Chatbase?
Answer: The maximum amount of characters on the current tier of Chatbase is 6 million.
8. Can you remove things you don’t want with Chatbase?
Answer: Yes, you can remove things you don’t want with Chatbase.
9. Can Chatbase read PDFs or certain URLs?
Answer: Yes, Chatbase can read PDFs or certain URLs.
10. What kind of topics can Alex Player generate with the help of Chatbase?
Answer: Alex Player can generate article/topic ideas for a brand’s website with the help of Chatbase.

Sagans: The Anonymous AI Collective Taking Over the Music Video World

On this episode of Prompt Muse, we explore the significant role of AI technology in the music industry. We discuss the concept of identity for an artist as AI-generated vocals become increasingly popular, and how it can be a great ally for independent music artists in creating their own videos and music. The podcast features guests from Sagans, an anonymous AI collective that has been producing music videos for popular artists since 2022. We discuss the challenges of keeping up with AI advancements and their impact on creativity, and how it can be used as a tool for faster problem-solving and dream boosting. Tune in to discover the many possibilities for using AI technology to bring ideas to life in the music industry.

Artificial Intelligence (AI) has rapidly become one of the most versatile and powerful technological tools in recent years, providing users with a wealth of opportunities to explore, create, and express themselves in new and exciting ways.

In the world of music, AI has revolutionized the way musicians create, record, and market their art. From optimizing vocal tracks to generating entirely new compositions, AI is making music more accessible, dynamic, and personalized than ever before.
In this blog post, we will explore the many ways in which AI is transforming the world of music and the exciting possibilities it holds for the future.

The Rise of AI Vocals

One of the most fascinating developments in the music industry has been the rise of AI vocals, which has enabled musicians to create realistic vocal tracks without requiring the services of a professional singer. With AI vocals, musicians can fine-tune the pitch, timbre, and other vocal qualities to suit their creative vision, ensuring the final track sounds exactly as they imagined it.

AI vocals are also useful for people who might not have the self-confidence to sing themselves or who lack the resources to hire a professional singer. By providing a flexible and affordable solution, AI vocals allow artists to experiment with different styles, sounds, and arrangements without breaking the bank.

Collaborating with AI

The ability to collaborate with AI has also opened up new creative avenues for musicians, allowing them to create brand new vocals by combining different AI-generated voices. This makes it possible to create choirs without requiring a large number of singers, providing musicians with greater control over the final product.
In addition, AI can help improve the quality of music videos that independent artists produce, enabling them to create high-quality visuals without the need for expensive equipment or professional crews. This allows musicians to maintain their creative vision while still producing music videos that are visually stunning and engaging for their fans.

Learning from Others

The podcast team at Prompt Muse often discusses their creative process, which involves brainstorming ideas together for days or weeks, testing concepts, and then starting production in a relaxed and casual way. They also rely on hard work and spend hours researching information to learn new things, including the latest AI techniques.

To stay ahead of the curve, they often check Reddit for 10 minutes a day to keep up with the latest technology and advancements. Although some may find AI technology daunting, the team at Prompt Muse believes in adapting their workflow to new technology as it evolves. They remain open-minded and learn by experimenting and transforming assets to achieve their objectives quickly.

Misconceptions About AI

There are many misconceptions about AI, particularly its ability to create videos and music easily and independently. While AI can be used as a tool for creating quick visualizers, human input is still necessary for creating a good story and continuity. Music videos are seen as short films with a soundtrack and require a lot of time and effort to produce.

By suggesting that AI can replace the human touch, creators often find themselves feeling frustrated. Dismissing their work as “AI video” or “AI music” doesn’t recognize the human input and effort involved in the creative process.

AI-assisted Music Videos

However, AI can be a great ally for independent music artists in producing their own videos and music. In particular, Runway.ai is a popular tool used by the podcast team to edit backgrounds that are then added to the video edit on Adobe After Effects. While it is important to present the reality of what happened on the day of the shoot, it is equally important to bring the artist’s creativity and vision to life.
AI-generated music still requires the artist to record the song from start to finish – AI only changes the tone of their voice to match someone else – but previous tracks can be used to create a new track. Working in tandem with AI also allows the artist to produce something of which they are proud.

The Future of AI in Music

As AI technology continues to evolve, the future of music promises to be more exciting than ever before. With AI, artists will be able to explore new creative frontiers while also reaching new and diverse audiences around the world.

AI-generated music may not only be used by independent artists, but also by established stars to collaborate with other artists and explore new musical styles. This could help to expand the boundaries of music as we know it while also creating new and unique sounds that have never been heard before.

Innovation is the key to the future of music and Prompt Muse is committed to bringing our readers the latest developments in AI and music. The podcast and blog team encourages their audience to embrace the power of AI as a creative tool to develop their own unique vision.
As AI technology continues to evolve, it’s exciting to think about what the future will hold for music. With AI, musicians will be able to create more exciting, innovative, and personalized music than ever before, empowering them to take their creativity to new heights and explore new possibilities in the world of music.

FAQ

1. What are some benefits of AI vocals for artists and producers?
– AI vocals can be useful for people who are not self-confident enough to sing, and they can be merged to create brand new vocals and create choirs without needing many people.
2. How can AI assist independent music artists in creating high-quality music videos?
– AI can give power to music artists to create videos and music on their own, faster than before, and it can be a great ally for independent music artists in producing their own videos and music.
3. How do the guests on the podcast approach video production and what tools do they use?
– They start by filming a walk in a Japanese environment, then edit the video to add a background frame, play with camera movements, and transform assets using Enringing; they use Warp Fusion as their primary tool and Deforum for making scenes more interesting, and erased backgrounds are edited in Runway.
4. Are there any misconceptions about AI and its ability to create videos and music?
– Yes, some people believe that AI can create videos and music with just one sentence or prompt, but human input is still necessary for creating a good story and continuity.
5. How do independent music artists feel about the use of AI in their work?
– There is frustration among creators when people dismiss their work as “AI video” or “AI music” without recognizing the human input and effort involved.
6. How has AI technology evolved and improved over time?
– When the speaker started with AI technology, there were limited resources available, but now tools like Dalle Fusion are available and powerful, allowing for greater creativity and faster problem-solving.
7. What tools and websites are available to bring creative ideas to life?
– There are many websites and tools available to create 3D visuals and animate drawings, and Runway can be used to achieve dream boosting and generating unique ideas.
8. What is Sagans, and what have they accomplished through their work with AI music videos?
– Sagans is an anonymous AI collective that has been producing music videos for popular artists since 2022; in just one year they have produced videos for artists including Linkin Park and Die Antwoord (“Age of Illusion”).
9. How important is it to stay up to date with AI advancements as a creative?
– It is important to be aware of new technology for future use, but it is not necessary to know everything; checking in casually, without feeling overwhelmed, is enough.
10. How can creativity be enhanced through setting limits and restrictions?
– James Blake created a song by limiting himself to only four notes, and the guests’ style and technique were developed mainly through experimentation and prompting.

<p>The post Sagans: The Anonymous AI Collective Taking Over the Music Video World first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/sagans-the-anonymous-ai-collective-taking-over-the-music-video-world/feed/ 0
Preserve Your Privacy: Take Action and Change ChatGPT Settings to Stop OpenAI from Training on Your Data https://promptmuse.com/change-chatgpt-settings-to-stop-openai-from-training-on-your-data/ https://promptmuse.com/change-chatgpt-settings-to-stop-openai-from-training-on-your-data/#respond Mon, 08 May 2023 11:50:14 +0000 https://promptmuse.com/?p=2980 Introduction: The Importance of Privacy in AI Conversations As OpenAI’s ChatGPT gained popularity as a powerful language model, concerns arose regarding the lack of privacy settings for users. Initially, the AI product did not provide options for users to safeguard their data, and OpenAI utilized data from conversations to train its language model further. With [...]

<p>The post Preserve Your Privacy: Take Action and Change ChatGPT Settings to Stop OpenAI from Training on Your Data first appeared on Prompt Muse.</p>

]]>
Introduction: The Importance of Privacy in AI Conversations

As OpenAI’s ChatGPT gained popularity as a powerful language model, concerns arose regarding the lack of privacy settings for users. Initially, the AI product did not provide options for users to safeguard their data, and OpenAI utilized data from conversations to train its language model further. With privacy watchdogs and regulators demanding better data protection, OpenAI introduced a new privacy setting. This guide will walk you through the process of opting out of ChatGPT data sharing and explain the implications of doing so.

Step-by-Step Guide to Opting Out of ChatGPT Data Sharing

To stop ChatGPT from using your private data, follow these steps:

  1. Log into your ChatGPT account using a browser.
  2. Click on your account icon located in the bottom left corner.

  3. Select “Settings.”
  4. Find the “Data Controls” section and click on the “Show menu” next to it.
  5. Toggle off “Chat History & Training.”

Note that OpenAI may have tried to obscure this privacy-enhancing feature. Within this menu, you can also export your ChatGPT data or delete your account if desired.

What Happens to Your ChatGPT Chat History After Turning Off Data Collection?

When you disable OpenAI’s ability to collect and use your data for training ChatGPT, your chats will only be stored for 30 days. After that period, your conversations will be permanently deleted. OpenAI states that while they will retain your data for 30 days, it will only be reviewed to monitor for abuse and not used for training ChatGPT.

In the future, OpenAI plans to launch a ChatGPT Business account that will allow users to keep their chat history even when disabling data collection.

What Happens to Your Existing ChatGPT Data?

Unfortunately, you cannot delete the data you previously shared with ChatGPT before the new privacy settings were implemented. However, you can opt to start sharing data again by re-enabling “Chat History & Training” within the settings.

OpenAI claims to use the shared data to improve their models, making them more helpful for users. They state that they do not use the data for advertising, selling services, or creating user profiles.

Future Improvements in ChatGPT Privacy Management

While the current privacy options for ChatGPT are limited, it is essential to keep an eye on potential improvements in the future. OpenAI may develop better ways to manage user privacy, ensuring a more secure and user-friendly experience.

In conclusion, the introduction of privacy settings in ChatGPT is a step forward in data protection. By following this guide, you can easily opt out of data sharing and safeguard your private data. However, there is still room for improvement in privacy management, and users should remain vigilant regarding their data security.

<p>The post Preserve Your Privacy: Take Action and Change ChatGPT Settings to Stop OpenAI from Training on Your Data first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/change-chatgpt-settings-to-stop-openai-from-training-on-your-data/feed/ 0
A New Era of AI Governance: The White House Leads the Way in Promoting AI Ethics and Inclusivity https://promptmuse.com/a-new-era-of-ai-governance-the-white-house-leads-the-way-in-promoting-ai-ethics-and-inclusivity/ https://promptmuse.com/a-new-era-of-ai-governance-the-white-house-leads-the-way-in-promoting-ai-ethics-and-inclusivity/#respond Mon, 08 May 2023 11:35:45 +0000 https://promptmuse.com/?p=2975 A Presidential Meeting on AI On Thursday, President Joe Biden held a meeting at the White House with CEOs of leading AI companies, including Google, Microsoft, OpenAI, and Anthropic. The meeting emphasized the importance of ensuring the safety of AI products before deployment and addressing the risks posed by AI. However, some AI experts criticized [...]

<p>The post A New Era of AI Governance: The White House Leads the Way in Promoting AI Ethics and Inclusivity first appeared on Prompt Muse.</p>

]]>
A Presidential Meeting on AI

On Thursday, President Joe Biden held a meeting at the White House with CEOs of leading AI companies, including Google, Microsoft, OpenAI, and Anthropic. The meeting emphasized the importance of ensuring the safety of AI products before deployment and addressing the risks posed by AI. However, some AI experts criticized the exclusion of ethics researchers who have warned of AI’s dangers for years.
Over the past few months, generative AI models such as ChatGPT have quickly gained popularity, driving companies to develop similar products rapidly. However, concerns have been growing about potential privacy issues, employment bias, and the potential for using them to create misinformation campaigns.

AI Ethics Researchers Respond

Critics of the companies’ ethical track records were not impressed by the meeting. They questioned the choice of inviting people to the meeting who, they argue, represent companies that have created the issues with AI that the White House seeks to address.
On Twitter, AI researcher Dr. Timnit Gebru wrote, “It seems like we spend half our time talking to various legislators and agencies and STILL we have this… A room full of the dudes who gave us the issues & fired us for talking about the risks, being called on by the damn president to ‘protect people’s rights.'” In 2020, Google fired Gebru following a dispute over a research paper she co-authored that highlighted potential risks and biases in large-scale language models.

University of Oxford AI ethics researcher Elizabeth Renieris tweeted, “Unfortunately, and with all due respect POTUS, these are not the people who can tell us what is “most needed to protect society” when it comes to #AI.”

AI Safety and AI Ethics

The criticism highlights the divide between “AI safety” (a movement concerned primarily with hypothetical existential risk from AI) and “AI ethics” (a group of researchers concerned largely about misapplications and impacts of current AI systems, including bias and misinformation).
Author Dr. Brandeis Marshall suggested organizing a “counter-meeting” that would include a diverse group of AI ethicists, practitioners, and researchers to discuss the real-world implications of AI and propose more inclusive and responsible solutions.

Looking Forward

Although the White House meeting brought attention to the potential risks and challenges posed by AI, it remains to be seen whether the discussion will lead to concrete actions that address these issues. It is crucial for government, industry, and academia to collaborate and ensure that AI development is safe, responsible, and equitable.
Fostering a more comprehensive dialogue on AI ethics and safety, one that includes voices from marginalized communities and interdisciplinary perspectives, could be a critical step in building more resilient and fair AI systems. Engaging stakeholders from various backgrounds in the decision-making process will help address concerns around bias, privacy, and the potential misuse of AI technologies.

As AI continues to advance rapidly, it will become increasingly important for policymakers and industry leaders to work together and create robust regulatory frameworks that protect the public interest while enabling innovation. Only through a collaborative approach can we ensure that AI technologies are harnessed to benefit society as a whole, while mitigating potential risks and negative consequences.

<p>The post A New Era of AI Governance: The White House Leads the Way in Promoting AI Ethics and Inclusivity first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/a-new-era-of-ai-governance-the-white-house-leads-the-way-in-promoting-ai-ethics-and-inclusivity/feed/ 0
Virtual Reality meets Artificial Intelligence: The Fascinating World of Tore Knabe’s VR Game https://promptmuse.com/virtual-reality-meets-artificial-intelligence-the-fascinating-world-of-tore-knabes-vr-game/ https://promptmuse.com/virtual-reality-meets-artificial-intelligence-the-fascinating-world-of-tore-knabes-vr-game/#respond Thu, 20 Apr 2023 09:48:38 +0000 https://promptmuse.com/?p=2954 If you’re a fan of VR games and AI technology, then you’ll want to check out Tore Knabe’s latest creations. The Norwegian developer has combined these two technologies to create immersive experiences that allow players to interact with AI characters in virtual worlds. What’s more, the AI characters in the game are powered by GPT-3/GPT-4, [...]

<p>The post Virtual Reality meets Artificial Intelligence: The Fascinating World of Tore Knabe’s VR Game first appeared on Prompt Muse.</p>

]]>
If you’re a fan of VR games and AI technology, then you’ll want to check out Tore Knabe’s latest creations. The Norwegian developer has combined these two technologies to create immersive experiences that allow players to interact with AI characters in virtual worlds. What’s more, the AI characters in the game are powered by GPT-3/GPT-4, one of the most advanced natural language processing models currently available.

In this article, we’ll explore how Tore Knabe created his groundbreaking VR games, and what makes them such a unique and exciting experience.

The Making of Tore Knabe’s VR Game

Tore Knabe’s VR games are the result of several different technologies working together. The games were built using Whisper, OpenAI’s new speech-to-text (STT) API. In addition to Whisper, Knabe also used ElevenLabs, a company specialising in creating human-sounding artificial voices, and the GPT-3 API, which powers the AI characters in the game.
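
To make that pipeline concrete, here is a rough Python sketch of how such a conversation loop can be wired together: speech in, text reply, voice out. This is not Knabe’s actual code; the API keys, voice ID and persona prompt are placeholders, a GPT-3.5 chat model stands in for the GPT-3/GPT-4 calls, and it uses the OpenAI Python package as it existed at the time (v0.x) plus ElevenLabs’ public text-to-speech endpoint, so check the current documentation before relying on it.

import openai
import requests

openai.api_key = "YOUR_OPENAI_KEY"        # placeholder
ELEVENLABS_KEY = "YOUR_ELEVENLABS_KEY"    # placeholder
VOICE_ID = "your-voice-id"                # placeholder ElevenLabs voice

def npc_reply(audio_path, persona="You are a talkative tavern bartender."):
    # 1. Speech to text with Whisper
    with open(audio_path, "rb") as f:
        heard = openai.Audio.transcribe("whisper-1", f)["text"]
    # 2. Generate the character's answer with a GPT model
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": persona},
                  {"role": "user", "content": heard}],
    )
    reply = chat.choices[0].message["content"]
    # 3. Turn the answer back into audio with ElevenLabs
    speech = requests.post(
        "https://api.elevenlabs.io/v1/text-to-speech/" + VOICE_ID,
        headers={"xi-api-key": ELEVENLABS_KEY},
        json={"text": reply},
    )
    return reply, speech.content   # text for subtitles, audio bytes for playback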

The game takes place in various settings, including a peaceful lakeside scene with a priest and an old bar tavern with a talkative bartender. In both scenarios, the player is able to have conversations with the AI characters using natural language. The AI characters are able to respond to the player’s questions and statements in real-time, thanks to the power of GPT-3.

What’s particularly impressive about Tore Knabe’s game is the level of detail that has gone into creating the AI characters. For example, in the bar tavern scenario, the AI bartender is able to keep track of the various patrons in the bar, and can discuss each one with the player as a separate topic. This adds a level of realism to the game that is rarely seen in other AI-powered experiences.

The Future of VR and AI

Tore Knabe’s VR game is just the beginning of what could be a whole new era of gaming. As VR technology continues to improve, and AI becomes even more advanced, we can expect to see more games that combine these two technologies in innovative ways.

Imagine a virtual reality world where every character you encounter is powered by AI, and is able to converse with you in a natural and engaging way. Or a game where the AI characters can learn from your interactions, and adapt their behaviour based on your choices. The possibilities are truly endless.

Tore Knabe’s VR game is an exciting glimpse into the future of gaming, where AI and VR technology are combined in new and innovative ways. By using GPT-3 to power the AI characters in the game, Knabe has created a truly immersive experience that feels like a real conversation.

As we continue to explore the possibilities of AI and VR, it’s important to consider the potential benefits and drawbacks of this technology. While there are certainly concerns about its impact on society, there’s no denying that the combination of AI and VR has the potential to revolutionize the way we play games, and even how we interact with the world around us.

<p>The post Virtual Reality meets Artificial Intelligence: The Fascinating World of Tore Knabe’s VR Game first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/virtual-reality-meets-artificial-intelligence-the-fascinating-world-of-tore-knabes-vr-game/feed/ 0
How I Unleashed the Power of AI to Create Next-Level Animations https://promptmuse.com/how-i-unleashed-the-power-of-ai-to-create-next-level-animations/ https://promptmuse.com/how-i-unleashed-the-power-of-ai-to-create-next-level-animations/#respond Sat, 15 Apr 2023 15:30:29 +0000 https://promptmuse.com/?p=2937 Introduction: The Power of AI in Animation AI-generated animations are becoming increasingly popular in the creative world, and it’s no wonder why. These powerful tools allow artists to quickly and easily create stunning visuals that would otherwise take hours or even days to produce. In this comprehensive guide, we’ll walk you through the process of [...]

<p>The post How I Unleashed the Power of AI to Create Next-Level Animations first appeared on Prompt Muse.</p>

]]>
Introduction: The Power of AI in Animation

AI-generated animations are becoming increasingly popular in the creative world, and it’s no wonder why. These powerful tools allow artists to quickly and easily create stunning visuals that would otherwise take hours or even days to produce. In this comprehensive guide, we’ll walk you through the process of creating your own AI-generated animation using state-of-the-art techniques, tools, and workflows. With the help of this tutorial, you’ll be well on your way to producing professional-quality animations in no time.

Preparing Your Data for Training

Before diving into the animation process, it’s essential to prepare your data for training. Here’s a step-by-step guide to help you get started (a small code sketch of these steps follows the list):

Gather a diverse dataset: To train your AI model effectively, you’ll need a large and varied dataset. This can include a mix of images, videos, and other visual content.
Preprocess your data: To ensure your AI model can effectively learn from your dataset, you’ll need to preprocess your data. This can involve resizing images, removing any unwanted artifacts, and normalizing the data.
Split your dataset: Divide your dataset into a training set and a validation set. This will allow you to train your model on one set of data while validating its performance on another.
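
As a rough illustration of those three steps, the sketch below square-crops and resizes every image to 512x512, numbers the files, and holds back roughly ten percent as a validation set. The folder names are placeholders, and in practice a free web tool (the tutorial below uses Birme) can do the cropping for you.

import random
from pathlib import Path
from PIL import Image

SRC, TRAIN, VAL = Path("raw_images"), Path("train"), Path("val")   # placeholder folders
TRAIN.mkdir(exist_ok=True)
VAL.mkdir(exist_ok=True)

files = sorted(SRC.glob("*.jpg")) + sorted(SRC.glob("*.png"))
random.shuffle(files)

for i, src in enumerate(files, start=1):
    img = Image.open(src).convert("RGB")
    side = min(img.size)                                  # centre-crop to a square
    left, top = (img.width - side) // 2, (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((512, 512))
    dest = VAL if i <= max(1, len(files) // 10) else TRAIN
    img.save(dest / f"{i}.png")                           # numbered files: 1.png, 2.png, ...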

Once your data is ready, you can begin the process of training your AI model.

Training Your AI Model with Google Colab

Google Colab is an excellent platform for training your AI model, as it provides a powerful and user-friendly interface. Follow these steps to train your model using Google Colab:

Upload your dataset: Import your dataset into Google Colab, either by uploading it directly or connecting to your Google Drive.
Configure your training settings: Adjust the learning rate, optimizer, and other settings to optimize your model’s performance.
Train your model: Run the training cell to begin the training process. This may take anywhere from 30 to 40 minutes, depending on the size of your dataset and the complexity of your model.
Throughout the training process, be sure to monitor your model’s performance by checking its progress in the Google Colab interface.
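
If you go the Google Drive route, the mounting step is usually a single standard Colab cell (the Kohya-style notebooks run it for you when you tick the mount option):

from google.colab import drive
drive.mount('/content/drive')   # your dataset can then be read from /content/drive/MyDrive/...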

Once your model has completed its training, you can export it for use in your animation project.

Creating and Animating Your 3D Character

Now that your AI model is trained, it’s time to create and animate your 3D character. Here’s a step-by-step guide to help you bring your character to life:

Generate an image of your character: Use a tool like Automatic1111 (locally or on RunPod) or the Cagliostro Colab UI to generate an image of your character using your trained AI model.
Create a 3D model of your character: Import your generated image into Character Creator using the Headshot plugin, and use the built-in tools to create a 3D model of your character.
Add hair and other details: Use the libraries and tools provided by your 3D modeling program to add hair, clothing, and other details to your character.
Animate your character: Use a plugin like Motion Live to record your facial movements and apply them to your 3D model in real time. This will create a realistic, lifelike animation of your character.

Once your character is fully animated, you can export it for use in your final project.

Rendering and Finalizing Your AI-Generated Animation

After you have completed setting up your animation, it’s time to render and finalize your AI-generated creation. With the right settings and techniques, you’ll be able to produce smooth and realistic animations.

Batch Rendering Frames

To create a frame-by-frame animation, you need to batch render your frames. To do this, head over to the “Batch” tab in Automatic1111, select the input folder where your rendered 3D frames are stored and the output folder where you want the processed frames to be saved. Then, click on “Generate.” Once the rendering is complete, you will have a seamless animation that can be easily combined and edited in post-production software.

Post-Production and Final Touches

After rendering your animation frames, import them into post-production software like Adobe After Effects or DaVinci Resolve. Apply the appropriate effects, such as dirt removal and deflicker, to ensure a smooth and polished final result.

Remember that you can experiment with different checkpoint models in Automatic1111 to quickly change the render style of your animation. This flexibility allows you to create a variety of looks and styles with just a few clicks.

Sharing Your AI-Generated Animation and Continuing to Learn

Once your AI-generated animation is complete, be sure to share your work on social media and tag Prompt Muse so they can see and appreciate your creative projects. Connect with others in the community and continue learning from their techniques and ideas.

Join the Prompt Muse Community

Get involved with the AI creative community by following Prompt Muse on social media. You can find them on Instagram and Twitter as @PromptMuse. Additionally, join their Discord server to engage with other AI enthusiasts and share your projects.

Stay Up-to-Date with the Latest AI Tech News

To keep up with the latest developments in AI technology, subscribe to the Prompt Muse newsletter. You’ll receive regular updates on cutting-edge techniques and tools that can help you push the boundaries of your AI-generated animations and other creative projects.

Creating AI-generated animations is an exciting and ever-evolving field. By following the detailed steps and techniques shared by Prompt Muse, you can create stunning and realistic animations using AI technology. Experiment with different settings, checkpoints, and tools to discover new possibilities and unique styles for your projects.

Remember to share your creations with the AI community and tag Prompt Muse on social media, as they are always eager to see and celebrate the innovative work being done in this space. Stay connected, keep learning, and continue pushing the boundaries of AI-generated animation.

TRANSCRIPT:

And welcome back to another prompt muse tutorial. Today, I’m going to be showing you how to create a fully animated AI character. The beauty of this technique is that you don’t have to act out the scenes before. In fact, you don’t need to look like your actor or your actress. This method could potentially be used in the movie industry. If not just the concept artwork, it could be used for an entire movie itself. For this tutorial, I’m going to be training a realistic data set for my puppet. I’m going to be using myself so you can judge the likeness at the end for yourself. I gather up loads of images of myself. I then train it with Kohya LoRA. Then once I’ve done that, I use Character Creator to create a puppet. You do not need to know any 3D. There is no learning curve to this software. You’re basically using sliders and dragging and dropping assets onto your 3D character. It is super simple to use. Obviously, you can go down the Blender route and the MetaHuman route if you want to, but I just warn you, there is a learning curve, so it’s up to you what you want to do.

But my workflow is through Character Creator. This means I can then switch out to any character using the data set. So once I’ve got those rendered files of my puppet, I can then switch out those data sets from one character to another, and it’s a really easy and efficient way of working. So let’s dive in and get on with the tutorial. Step one, preparing your training images. I’ve prepared various images of myself. Put some images that are more like you, rather than photographs with a filter on, because trust me, it works ten times better. If you do that, you will get the likeness a lot better. It’s important that you choose your head, your body, some full bodies in there, some close up, some far away. You need to ensure that your hairstyles are quite different in each image, that you’re taking from different profiles, that your background’s changed, that your outfit’s changed. If you put too many headshots into your training data, you end up with things like this. Because the AI doesn’t know what the body is, so it gets confused. You need to put some body shots in there. I’m using a website called Birme, which is absolutely free to use and it crops your images down to the size you want.

I’m going to be using 512 by 512 because that’s pretty much my resolution. You can go up to 768 by 768, but remember that’s going to use more VRAM and take longer to train. Once I’ve done that, I want to rename all my files. So click on Rename and just put X in the box and Save as Zip. And that will save all your files in numerical order. So 1.png, 2.png, because that’s how you want it. Head over to Google Drive and simply save your zip file there. Step two, training your data set. I’m using Linaqruf’s Kohya LoRA DreamBooth notebook. We are going to make sure that we’re logged into our Google account here, and then we’re going to press Connect here. So once you’ve done both of those, we can start the training. Simply come down to step one. All we need to do is check the Mount Drive button and run that cell. And this is going to mount your Google Drive. I’m going to click Connect to Google Drive. It’s going to give you some warnings about connecting to a notebook not authored by Google. Now that’s running. Once that’s completed, you’ll get a green check next to it.

So this downloads all the dependencies into your file structure over here. Now, remember, this is remote. This isn’t on your PC. So once you come out of here, you’ll lose everything. But this is where we connect the Google Drive so we can pull files across from our Google Drive. And then once we’re finished, we can take the train model and export that to our Google Drive. So we’re just going to come down here, the 1.2 Open Special Fire Explorer. Ignore that, that’s fine. We don’t need to do that. The default setting here is for anything version 3, which is more of an animation style model. And if you click here, there’s a few more preloaded links in there. If I click on Stable Diffusion 1.5, that’s good for me because I’m doing a realistic character. I just want to chip in and say the Stable Diffusion 1.5 base model is a good all rounder training file to use. Obviously, you can use whatever you want, but if you’re starting off, this is a good one to learn with. You can see in this bit below, you can actually load in your custom model, which means you can go to the hugging face link.

For example, a good model to use is Realistic Vision 2. So you get the Hugging Face link for your model and place it in there and run that cell. But we’re not doing that for this tutorial. So I’m just going to leave the space underneath, which is Stable Diffusion 2Model Blanks. We’re not going to be using that. And then just ignore 2.2. We’re not going to be running our own custom model. So 2.3 download available VAE. Sometimes you might notice when you switch models in automatic one on one or whatever program you’re using, that the images are really desaturated and lost their colours. That’s usually down to the VAE not the not being detected or being corrupted. So we are going to load in the Stable Diffusion 1.5 VAE, which is already there. So it’s a Stable Diffusion VAE, and that’s going to, again, just download it all into our file structure. And then we’re just going to hit on Run on there. And then we come down to 3.1 locating train data directory. So this is going to create some file path to where our train data file is. And that’s where all our input data set images, so my face images, will be going into that folder.

And don’t worry, you don’t have to do anything. It does it automatically for you. So I’m going to hit run on that cell once again, and it says your trained data directory. So if we go and have a look, so expand the Laura down here. By the way, if you don’t have this open, it’s this folder here to go into here. Now, go to Laura and you’ve got your regularisation data and your trained data. Do not drag and drop your images into here. Wait, we’re going to be doing that automatically. On to 3.2 unzip data set. So this zip file underscore URL. So this is why we zipped our file up and put it onto our Google Drive because we’re now going to grab it. So if you go to Drive, this is essentially your Google Drive, my Drive. And then I’m going to come down and find my zip file, which is 100 prompt muse images. Zip. I’m going to click on the three dots there and copy path. And then I’m just going to paste in the top zip file underscore URL. I’m going to leave the unzip, underscore, two blank. I’m just going to close this hierarchy here so we can see what we’re doing.

And you can see there it’s extracted all my files from that zip file into that new folder. So we’re going to come down to 3.3, which is the image scraper. Again, I’m not going to use this. This is based on more or less animation databases. So what it does is scrape regularisation images, which I don’t want to do because I’m not using animation. So I’m going to ignore this. But if you are using animation, you can do it here, ignore 3.3. Data clearing. This is to do with the cell above it. You’re scraping all these images, you might not be aware what they actually are. There will probably be some in there, but hopefully… No, don’t do that. And the convert transparency images. Well, it says what it does. So if the image has a transparent background, that’s very hard for machine learning. So you want to convert that and also random colours as well. So you check that if you’re doing the animation and scraping the images, which we’re not going to do. Okay, so let’s go down to 4.2, which is data annotation. We’re going to be using a blip captioning, which is tagging images with a description.

This is used for realistic imagery. The one below it, which is the way through diffusion, is used more for animation. So we’re just going to be using this blip captioning over here. here. So I’m not going to change any of these settings. I’m going to leave them as default and run that cell. What that will be doing is reading the input images that I put into the Google collab. It’s then going to be describing what it sees in the images. Everything it needs out of the description is what it’s going to train upon. So it’s going to describe my microphone and the fact that I’m wearing a necklace or potentially a brown top. This means it’s not going to train upon those things, which makes it so much easier to create a way more flexible model when I’m generating the images later on. So I’m not stuck in this room with this necklace and a brown top on. So to show you what I mean, I’m just going to show you the files it created. So if you come to your files, Laura, and then expand on train data, you can see it’s generated these caption files here.

So if we just pick any image here, 13, and you can see I’ve got a microphone and a necklace, so it should pick up on those attributes. So I’m going to click the caption that goes along with that image. And yeah, it said a woman sitting in a chair holding a microphone. So it’s actually that. Now I can actually add on to this and add necklace if I didn’t want it to train on me wearing a necklace, but I like my necklace and yeah, it’d be good to have that in the model as well. So you can edit these further if you want to. But for this tutorial, I’m not going to do that. I’m just going to leave it as is. I’m just going to close those images there and close that window. I’m going to ignore the way for your diffusion tagger. As I said, that’s for anime, and I’m going to ignore the custom caption tag. This creates you a text file caption, which again, I’m going to ignore that. Now we’re onto training model. So in 5.1 model config. So if you’ve used StableDiffusion version 2 to train your model, you need to check these two boxes here.

I haven’t. I’ve used Stable Diffusion 1.5, so I’m going to leave those unchecked. Under the project name, give your project name as something that you will remember. So I’m going to just call my imprompt tutorial. And then underneath it, you’ve got pre trained model name all. So I need to change this to my Stable Diffusion trained model. We downloaded all these dependencies in the first cells. This would have made you a pre trained underscore folder. So if you just expand that, and then within there sits your saved Hensers model. So if you go with the three dots, click on it, copy path, and simply just paste that in there. So we have the VAE, so the VAE file, which controls the color in your images. So we also need to grab that and that would have installed during the first sell as well. So that will be in the VAE folder. So repeat that process, copy that path and just simply paste it in there. Now, this is where it’s going to be saving your finalized models. And I say models because it’s going to create multiple versions, but we’ll get to that later. Once you’ve closed this Google notebook, this will all go.

All these files will disappear. Make sure you check output to drive and that will save it to your Google Drive and just run that cell. So you can see here, the output path is content drive, my drive, Laura output. So there’ll be a folder on your Google Drive called Laura, and it will be an output file. We’re getting to the most important settings here. So we want to keep the train repeats to 10. Got the instance token. I’m just going to keep mine at mksks. Now you will see random names sometimes like sks. This is because it’s not a name that Stable Diffusion associates with something, so it’s not going to call it up. So by associating M KSKS with my model, it knows it’s calling up my model, my image. I’m going to keep that as is. If you’re not sure, just keep it as M KSKS style. So we are not training a style. We are training a woman. Or you can put person. I actually prefer to put person. You can put woman. It’s up to your own discretion if you want to do that. Resolution, we’re doing 512 by 512 because we have input images that are 512 by 512.

If you’re doing 7 6 8, put 7 6 8 here, just change it up the slider. Just leaving all these settings here as default, and I’m just going to run that cell. So we come down to 5.3, Lauren Optimisation Config, but you really need to experiment with the settings yourself to see if you can get a better result because obviously you are training different images than I am. But however, I will give you my settings because I have done a lot of tests. Come down to the Convolution DIM, which I’m going to be setting quite low at eight, and the Convolution Alpha, I’m going to be setting at one. Then we come down to network dim, which I’m going to set at 16, and then I’m going to change the network alpha to eight. These settings actually have huge influence on your model. I used to do the settings at one, two, eight by one, two, eight, but I’ve played around and I quite like these settings for my realistic models. What settings might work for me might not work for you guys because of different training sets, different resolutions and all that. But I digress.

Okay, I’m going to leave the optimiser config as adding W 8 bit. So the first learning rate, I’m going to change to five, E 4. So the text encoder learning rate, I’m going to change to 1E4. And then the learning rate scheduler, I’m going to change to CoSine with restarts. The warm up steps, I’m going to do 0.05. And to be honest, I’m quite happy with that. So this is probably going to be a learning rate of about 950 steps. But we’ll see once we hit run. So we’re going to run that cell, and then we’re going to go to 5.4 training config. I’m going to leave low RAM on. I’m going to enable sample prompt. I’m going to leave the sampler as DTM. Noise offset, I’m going to leave at zero. Sometimes I go 0.01. I’m going to be creating 10 Epochs here, which will save a file at every learning stage, which means I can test the files out in my web UI at the end to see if it’s either undercooked or overcooked or just about right. I like to do about 10 because it gives me a nice, diverse range to pull from.

The trained batch sizes. Now, you can go quite low. You can go to one. I’m probably going to go to two and see how it goes from there. So the batch sizes is how many files it’s training together. If I’m training six, it’s going to be a lot quicker than it will be for two. If I went to one, I’d probably completely run out of RAM. So if you do have a RAM issue, try sticking to six or higher. But if you don’t have any RAM issues whatsoever, you can train on anything as low as one here. The mixed and saved precision, I’m both leaving those at F P 16. My Epoch, I’m going to save every Epox. So that’s 10 Epox I should have at the end. I’m saving the model as a saved tensers model, and I’m leaving this all as default here, so that’s pretty simple. I’m going to run that cell. Now we come to our final cell. You’ll be glad to hear, all you need to do, just run that cell and leave everything as default and let the training begin. This might take probably about 30 to 40 minutes. If I wanted it to be done quicker, I would actually increase the batch size.

Hopefully, all this makes sense. I wanted to describe what I’m doing as I do it, so you have at least a understanding of what’s going on, which hopefully, again, will allow you to make changes to suit your training data. Once the training is complete, you do not have to do any of the remaining cells in the notebook. Your files will now be saved automatically into your Google Drive. So head over to your Google Drive, you will have a Lo ra file in there, an output file, and in there lives your Lo ra files. And remember, I said it would save a file at every training step, and we said 10 in this demonstration, so it’s given us 10 files here. As you’re probably aware, I use automatic 111 on Run Pod, and the link for instructions are all here. So if you don’t want to use automatic 111 locally on your computer and you don’t want to have to set up a Run Pod like I’ve got, the developer of this Koya Lo ra notebook has just come out with a brand spanking new automatic 111 notebook with control net 1 and the brand new control net 2, as well as the ability to use your newly trained Lora files.

You can use the Cagliostro Colab UI, and it’s basically Automatic1111 to generate your images. So I just thought I’d throw that in there as an additional option. Now, grab these files, download them and load them into your Stable Diffusion LoRA folder. Just whack them all in there. Step three, creating our puppet. Now, underneath the Generate button, you’ll see this sun icon here. Give that a click, and then this will open up this panel along here. Select LoRA, and you should see your LoRA files now in there. Now, if you don’t, you can always click the refresh button. You can test out all these files here by clicking in on each file. That will then load its tag up into the prompt like this. So you just test them all out, just use one at a time. You can also control the weights of these LoRA files as well by adjusting this figure. So I’m just going to load in my LoRA file with a prompt as well as a negative prompt and just run it through and see what it looks like. So I’m quite happy with the overall state of my LoRA file.

It does look like me. So I’m just going to create an image of me bored. I’m going to be taking this bored image of myself and dragging and dropping that into the Headshot plugin, which then generates me a 3D model of my face. I can go in and sculpt this further if I want to, but I’m just using this as a puppet or a guide for the AI. It doesn’t have to look great. It just has to look similar to me. Once I’m happy with the face and the body shape, I then add some hair. Again, it’s super easy. I’m just dragging and dropping from a library. Now, once I’ve finished with my character, I’m now going to export it to iC loan. These programmes work together in a pipeline. So iC loan is more of an animation programme. So this is where I’m going to be adding the facial animation as well as the body idle. I use a plug in called Motion Live. I just activate Motion Live and I’ve downloaded the Motion Live app on my phone here. All links are below in the description. It’s super easy to use. All you’re doing is recording your facial movement and that’s being applied to your 3D model in real time.

I’ve just dragged and dropped a lighting situation in here and got some physics on the hair and I’m pretty much ready to render. Come over to the render panel, which is here and I’m going to I have selected PNG sequence. I actually went 7 6 8 by 7 6 8 in the end. Try and be visible by 16 if you can. And we come down here and I’m just doing 300 to 500 frames and I’m going to export those as PNG. So we’re going to jump back into automatic 111and I’m going to go through the settings I’m using. I’m using image to image and I’ve loaded in my Laura here. Remember the buttons over here. And then I’ve just added a simple prompt because I don’t want the prompt to fight the original image. That’s quite important. So you want to keep the prompt as brief as possible. Do not put too many details that are not in the image that you want to create. Then we have the negative prompt, which is as important as this prompt up here. So you can copy my one. I’ll put it down in the description below. So this is our rendered image added in here.

So the sampling method I have set to DPM then the SDE. You can use whatever you want. I’ve had good results with Euler A, I’ve had good results with KM Kuhares. Sampling steps, I’m keeping relatively low for this. Width and height, I’m going for 7.68×768, the same as my input. De noising strength, I’m keeping low. I’m keeping the de noising strength at 11. Actually, you’ll notice my CFG scale is down six as well. Again, we want to pull from this image as much as we can and apply the Laura over the top without the prompt compromising things too much. I’ve already set my seed because I’ve tested this out already. When you do this, just render a minus one seed until you get something you like and then lock it in with the Reuse button. So on Control net, I’ve enabled head and I have the model enabled as well. I haven’t changed anything from the default settings here. I’ve got a secondary model in as well, which I’ve enabled canny and enabled the Cany model. Again, I haven’t changed any of the default settings. So let’s render and see what it creates us. That looks like me on a really good day.

And it’s it’s following the mouth movements as well, which we need for lip syncing. I have seen so much AI generation where the mouth is just a blurry mess. Using my workflow, you can now get precise mouth lip syncing. This is why I made the model on me so you can see that it is working, okay? So we’re now going to batch render these frames, which will give us a frame by frame animation. And to do that, head over to batch and put your input, so where your 3D files are sitting on your drive, and then your output where you want them to render to. And then hit Generate. And congratulations, we have completed our animation. All that is rendering. I want to show you something really cool. And this is why I like this workflow. I literally can come over and switch my checkpoint file and run that again. And look at that. It’s created a semi animation style version with a click of a button. So you can literally change the render type by using your diffusion checkpoint. I think this is really cool. And this is a great starting point from where things are going to go from here.

Now, we’ve got Control net 2, which has just come out, which I’m not using in this video. So that’s going to take things to a whole new another level. So I’ve simply just thrown those frames into After Effects from Automatic 111. If you have Da Vinci Resolve, use the dirt removal and the deflicker times two in there because that will give you really nice smooth results to your animation. I’m going to hit space bar so you can have a look at the animation. I’ve slowed it down so you can see the frames are blending so well into each other. It’s probably not very good animation I did there, but you can see the technique. I switched the checkpoint model over. I didn’t change any of the settings. I’d probably go back in and change some settings to make the animation version smoother. But that shows you how quickly you can flick, essentially, the AI renderer to change. And then that took about two minutes to render those animation frames. I would love to see what you guys make with this workflow. So please tag me in your social media so I can have a look because I love looking through all the creative projects everybody’s doing with AI at the moment.

I put all my AI creative projects and workflows all on my social media as well. On Instagram, I’m @PromptMuse. On Twitter, I’m @PromptMuse. And on Discord, there is a link below. I’m really excited to see what you guys create with this. This tutorial will be written up step by step on the Prompt Muse website as well. And we have a fantastic weekly newsletter that surrounds the latest and greatest in AI tech. So please join that on the Prompt News website. Thank you for watching this video. I really appreciate it if you subscribe to this channel and give it a thumbs up. I do have a buy me a Coffee link down in the description, and that is just to buy me a coffee. I don’t put any content behind any paywalls. I like to make sure that all my content is free and accessible to everybody. And having that allows me to continue to do that. So I thank you very much. This video has been Days in the making because I have had to test out so much stuff and variations to get this to work. So I know you guys appreciate that. Please share this video on your social media and @ me as well when you do it because I like to see that.

That’s amazing and I can say thank you to you as well. So yeah, I think that will do it for today. Bye bye.

<p>The post How I Unleashed the Power of AI to Create Next-Level Animations first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-i-unleashed-the-power-of-ai-to-create-next-level-animations/feed/ 0
Introducing the First True No Code Fine Tuning Method for Open AI https://promptmuse.com/introducing-the-first-true-no-code-fine-tuning-method-for-open-ai/ https://promptmuse.com/introducing-the-first-true-no-code-fine-tuning-method-for-open-ai/#respond Mon, 10 Apr 2023 01:03:00 +0000 https://promptmuse.com/?p=2870 Fine-tuning with Open AI has been a hot topic lately, but it seems that many people have been struggling with it. While some claim to offer a no-code solution, their methods end up requiring a lot of code. This is where my new app comes in 😉. This is what I believe to be the [...]

<p>The post Introducing the First True No Code Fine Tuning Method for Open AI first appeared on Prompt Muse.</p>

]]>
Fine-tuning with Open AI has been a hot topic lately, but it seems that many people have been struggling with it. While some claim to offer a no-code solution, their methods end up requiring a lot of code. This is where my new app comes in 😉. This is what I believe to be the first true no-code fine-tuning method for Open AI, making it easy for anyone, “even their mother”, to do it. In this article, we’ll go through the process of using this new no-code method and explain how it works.

What is Fine Tuning?

Fine tuning is the process of taking an existing AI model and training it further on a specific task or dataset. This allows the model to specialize in a particular area and improve its accuracy. Fine tuning is essential for many applications, including chatbots, language models, and image recognition.

Two Parts of Uploading Fine Tuning Data

The first step in fine tuning is getting your data into a specific format that Open AI likes. This is where the JSONL Converter comes in. The JSONL Converter takes a list of prompts and their corresponding completions and converts them into a JSONL file, which can be uploaded to Open AI. The prompts and completions can be anything from simple questions and answers to more complex conversations.
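
For reference, a JSONL training file is simply one JSON object per line, each holding a prompt and its completion. The entries below are made-up examples of the shape, nothing more:

{"prompt": "What does Prompt Muse cover?", "completion": "AI news, tutorials and reviews."}
{"prompt": "What is fine tuning?", "completion": "Training an existing model further on your own data."}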

Once you have your JSONL file, you can upload it to Open AI using the Auto AI Fine Tuner. This tool allows you to fine tune your model using the uploaded data. It takes care of all the details, such as setting the number of epochs and batch size, so you don’t have to worry about them.

Using the JSONL Converter

The JSONL Converter is a simple web application that allows you to create a JSONL file from a list of prompts and completions. It’s easy to use and doesn’t require any coding experience. Here’s how to use it:

  1. Go to the JSONL Converter page.
  2. Click on the “Add Prompt” button to add a prompt and its completion.
  3. Repeat step 2 for each prompt and completion you want to include in your dataset.
  4. Click on the “Format for Upload” button to generate a JSONL file.
  5. Download the JSONL file and save it to your computer.

You can also upload your prompts and completions directly from Google Sheets. Just create a spreadsheet with two columns: one for prompts and one for completions. Then, go to the “Import from Google Sheets” tab on the JSONL Converter website and follow the instructions.
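
If you prefer to do that conversion yourself, the same two-column idea can be turned into JSONL with a few lines of Python. This is only a sketch with placeholder file names; the web converter above does the equivalent for you:

import csv
import json

# dataset.csv: column 1 = prompt, column 2 = completion (exported from Google Sheets)
with open("dataset.csv", newline="", encoding="utf-8") as src, \
     open("dataset.jsonl", "w", encoding="utf-8") as out:
    for prompt, completion in csv.reader(src):
        out.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")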

Using the Auto AI Fine Tuner

The Auto AI Fine Tuner is another web application that allows you to fine tune your Open AI model using the data you uploaded with the JSONL Converter. It’s designed to be easy to use and doesn’t require any coding experience. Here’s how to use it (a rough sketch of what it does behind the scenes follows the steps):

  1. Go to the Auto AI Fine Tuner page.
  2. Enter your Open AI API key and click on the “Connect” button.
  3. Go to the “Uploaded Files” tab and click on the “Select File” button to upload your JSONL file.
  4. Copy the ID number of the uploaded file and go to the “Fine Tuned” tab.
  5. Enter a name for your fine-tuned model and the ID number of the uploaded file.
  6. Select the AI model you want to fine tune (e.g., Da Vinci).
  7. Set the number of epochs and batch size.
  8. Click on the “Create Fine Tune” button.
  9. Wait for the fine tuning process to complete (usually takes a few hours).
  10. Once the fine tuning is complete, go to the Open AI Playground and start using your fine-tuned model.
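
For the curious, here is roughly what those steps do behind the scenes, sketched with the OpenAI Python package as it was at the time (the v0.x library and the original fine-tunes endpoint). OpenAI has since revised both the library and the fine-tuning API, and the file name, base model and hyperparameters below are only illustrative:

import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

# Upload the JSONL file (the "Uploaded Files" step)
upload = openai.File.create(file=open("dataset.jsonl", "rb"), purpose="fine-tune")

# Start the fine-tune job (the "Create Fine Tune" step)
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",   # base model, e.g. Da Vinci
    n_epochs=4,        # illustrative values
    batch_size=4,
)
print(job.id)          # check progress later with openai.FineTune.retrieve(job.id)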

Conclusion

Alex’s no-code fine tuning method for Open AI is a game-changer for those who have been struggling with the process. By breaking it down into two simple steps and providing easy-to-use web applications, anyone can fine tune an AI model without any coding experience. The JSONL Converter and Auto AI Fine Tuner are both free to use and open source, making them accessible to everyone.

However, it’s important to note that fine tuning requires a good dataset to work effectively. Garbage in, garbage out, as they say. It’s not enough to just throw any data at your model and expect it to magically improve. You need to have a well-curated dataset that reflects the specific task or domain you’re trying to fine tune your model for.

Therefore, Alex plans to create a series of tutorials on how to create a good fine tuning dataset. He’s also working on a Google Sheets automation that will make it easy to create the simple prompts and completions needed for fine tuning. With these tools and resources, anyone can fine tune their AI models and achieve better results.

In conclusion, Alex’s no-code fine tuning method is a much-needed solution for those who have been struggling with the process. It’s easy to use, accessible, and free. However, it’s important to have a good dataset and follow best practices for fine tuning to achieve the best results. We look forward to seeing what other tools and resources Alex will create to make AI more accessible to everyone.

<p>The post Introducing the First True No Code Fine Tuning Method for Open AI first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/introducing-the-first-true-no-code-fine-tuning-method-for-open-ai/feed/ 0
Voiceflow: The Future of AI-Powered Conversational Interfaces https://promptmuse.com/voiceflow-the-future-of-ai-powered-conversational-interfaces/ https://promptmuse.com/voiceflow-the-future-of-ai-powered-conversational-interfaces/#respond Tue, 04 Apr 2023 12:55:19 +0000 https://promptmuse.com/?p=2491 Hey, tech enthusiasts! Alex here, today we’re diving into one of our all-time favourites – Voiceflow. I know it’s been a while since our last update, but I had to take a quick break to celebrate my 40th birthday. But I’m back now, and I have some exciting new features to share with you. First, [...]

<p>The post Voiceflow: The Future of AI-Powered Conversational Interfaces first appeared on Prompt Muse.</p>

]]>
Hey, tech enthusiasts! Alex here, today we’re diving into one of our all-time favourites – Voiceflow. I know it’s been a while since our last update, but I had to take a quick break to celebrate my 40th birthday. But I’m back now, and I have some exciting new features to share with you.

First, I want to address an issue that was raised by some of our viewers in the comments section. Previously, we had some problems with the tutorial where the API was causing numerous issues. But, thankfully, Voiceflow’s new improved A.I workflow has resolved these issues, and today we’re going to dive into the new variables, intents and prompt chaining that have changed everything in the workflow system we had prior.

If you’re not familiar with Voiceflow, it’s an AI-powered platform that lets you build, design, and publish voice and chatbot experiences. And the best part? You can sign up for free, forever, with two assistants and one editor. If you need more AI bots, the paid version gives you access to them.

One of the most exciting new features of Voiceflow is the introduction of the “GPT-3.5” persona setting. This allows you to give your AI a personality, which in turn affects the way it responds to user input. And, thanks to two new variables – last response and last utterance – you can now take advantage of prompt chaining, giving the illusion of a constant dynamic conversation.

This new feature lets you loop through a series of prompts, using the last response or last utterance as input for the next prompt. It’s a great way to create a natural, conversational flow that feels more like a human conversation than a pre-programmed response.
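
Voiceflow handles all of this visually, but the underlying idea is easy to see in plain code. The sketch below is purely conceptual (it is not Voiceflow’s implementation) and uses the OpenAI Python package as it existed at the time; the persona text and loop length are placeholders:

import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

persona = "You are a cheerful travel guide."   # the persona setting
last_response = ""
last_utterance = ""

for _ in range(3):                 # three turns, just for illustration
    last_utterance = input("You: ")
    prompt = (persona + "\n"
              "Your previous answer: " + last_response + "\n"
              "The user just said: " + last_utterance + "\n"
              "Reply in character:")
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    last_response = completion.choices[0].message["content"]
    print("Bot:", last_response)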

Coupled with their intent feature, you can create custom responses that allow your bot to respond appropriately to a wider range of user inputs.

I was blown away by the potential of this new feature and am already thinking of ways to incorporate it into our own chatbot experiences. And, with the reduced cost of credits, you can now make more calls without worrying about running out of credit too quickly.

So, to sum it up – Voiceflow is a powerful tool that makes it easy to create engaging and interactive chatbots and voice assistants. And, with these new features, it’s easier than ever to create a natural, conversational flow that feels more like a human conversation than a pre-programmed response.

That’s it for today, folks. I promise not to take such a long hiatus again. In the meantime, I hope this article has given you some insight into the exciting new developments at Voiceflow. Thanks for tuning in, and until next time, happy bot building!

<p>The post Voiceflow: The Future of AI-Powered Conversational Interfaces first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/voiceflow-the-future-of-ai-powered-conversational-interfaces/feed/ 0
How to Run a Chat Model like Chat GPT on Your Computer without Internet https://promptmuse.com/how-to-run-a-chat-model-like-chat-gpt-on-your-computer-without-internet/ https://promptmuse.com/how-to-run-a-chat-model-like-chat-gpt-on-your-computer-without-internet/#respond Thu, 23 Mar 2023 14:42:08 +0000 https://promptmuse.com/?p=2418 In this tutorial, we’ll show you how to install language models like Alpaca and Lama on your computer so that you can run a chat model like Chat GPT without internet access. This works on Linux, PC, and Mac, and requires Node.js and a few other programs. Let’s get started! Step 1: Go to GitHub [...]

<p>The post How to Run a Chat Model like Chat GPT on Your Computer without Internet first appeared on Prompt Muse.</p>

]]>
In this tutorial, we’ll show you how to install language models like Alpaca and LLaMA on your computer so that you can run a chat model like Chat GPT without internet access. This works on Linux, PC, and Mac, and requires Node.js and a few other programs. Let’s get started!

Step 1: Go to GitHub and navigate to https://github.com/cocktailpeanut/dalai. This is where you can download the necessary files to install the language models.

Step 2: Choose which model you want to install. Alpaca is a smaller, more optimized model, while LLaMA is the original open-source model and larger in size. In this tutorial, we’ll be using Alpaca.

Step 3:

If you’re on a Mac, install Node.js on your computer. Simply click on the Mac installer and follow the prompts.

If you’re on a PC, you’ll need to download Visual Studio, select the Python development, Node.js development, and Desktop development with C++ workloads, and choose which modules you’d like to install.

For PC Users

Step 4:

Installation for Mac

First, let’s talk about installing alpaca models. To do this, you can run the following command inside TERMINAL:

npx dalai alpaca install 7B

This will download the alpaca models onto your computer, which you can use for various tasks.

For Mac Users

Optionally, to download the larger llama models, you can run the following command:

npx dalai llama install 7B

This will download the llama models onto your computer, which you can use for tasks such as text generation or translation.

If you want to download multiple llama models, you can run this command instead:

npx dalai llama install 7B 13B

This will download both the 7B and 13B llama models onto your computer.

That’s it! Now you know how to install both llama and alpaca models on your computer. Have fun using them!

Installation for PC:

Firstly, it's important to note that if you're using a Windows computer, you need to run all the commands in cmd and not in PowerShell. PowerShell has strict permissions that can make the script fail silently.

Now, let’s get started with installing alpaca models. Open your cmd application and type in this command:

npx dalai alpaca install 7B

This will download the alpaca models onto your computer.

Next, let’s add llama models. Open your cmd application and type in this command:

npx dalai llama install 7B

This will download the llama models onto your computer. If you want to download multiple llama models, you can type in this command:

npx dalai llama install 7B 13B

That’s it! You’ve successfully installed the alpaca and llama models on your computer.

Loading the user interface

Once you’ve installed everything you need, you can run the following command to start the web UI server:

npx dalai serve

This command will start the server, which will allow you to interact with your installed models through a web interface.

Once you’ve started the server, you can open up your web browser and navigate to http://localhost:3000. This is where you’ll find the web UI for your installed models.

Now you can have fun playing around with your models and using them for various tasks! That’s it for this tutorial – happy exploring!
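If you'd rather call the model from a script than through the browser, the Dalai project also exposes a Node.js API. The snippet below is only a rough sketch based on the usage pattern shown in the project's README at the time of writing; the request options and the model name format are assumptions that may have changed, so check the GitHub page in the resources below if it errors.

// Minimal sketch: querying a locally installed model through Dalai's Node.js API.
// Assumes the dalai package is installed (npm install dalai) and that the
// alpaca 7B model was downloaded with "npx dalai alpaca install 7B".
// The option names below follow the README's example and are not guaranteed
// to match every version of the package.
const Dalai = require("dalai");

new Dalai().request({
  model: "alpaca.7B",            // assumed format: <model family>.<size>
  prompt: "Explain what a large language model is in one sentence.",
  n_predict: 128                 // assumed option: maximum tokens to generate
}, (token) => {
  // Tokens stream back one at a time as they are generated.
  process.stdout.write(token);
});

Like the web UI, this runs entirely locally, so it keeps working without an internet connection once the models are downloaded.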

 

Additional Resources:

Read about Stanford Alpaca: https://crfm.stanford.edu/2023/03/13/alpaca.html

Dalai on GitHub: https://github.com/cocktailpeanut/dalai

 

<p>The post How to Run a Chat Model like Chat GPT on Your Computer without Internet first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-to-run-a-chat-model-like-chat-gpt-on-your-computer-without-internet/feed/ 0
Mastering AI Animation: A Comprehensive Workflow with Mocap and ControlNet https://promptmuse.com/mastering-ai-animation-a-comprehensive-workflow-with-mocap-and-controlnet/ https://promptmuse.com/mastering-ai-animation-a-comprehensive-workflow-with-mocap-and-controlnet/#respond Mon, 20 Mar 2023 10:58:38 +0000 https://promptmuse.com/?p=2396 Creating coherent AI animations can be a challenging task, especially when dealing with glitches and limited control over characters. However, with the right tools and workflow, you can achieve impressive results that give you complete control over your characters’ appearance and actions. In this article, we will walk you through a step-by-step process to create [...]

<p>The post Mastering AI Animation: A Comprehensive Workflow with Mocap and ControlNet first appeared on Prompt Muse.</p>

]]>
Creating coherent AI animations can be a challenging task, especially when dealing with glitches and limited control over characters. However, with the right tools and workflow, you can achieve impressive results that give you complete control over your characters’ appearance and actions. In this article, we will walk you through a step-by-step process to create coherent AI animations using a script developed by Zampious aka Six Hunter, combined with mocap data and the iClone Character Creator software.

Workflow Overview

The workflow we will be following involves using mocap data to animate the characters; the data can be obtained for free or captured yourself using an iPhone or a mocap suit. We will then use the iClone Character Creator software to create an actor and add the mocap data to it. The final step involves using the Automatic1111 Web UI, with Six Hunter's Python script and ControlNet enabled, to generate the animations.

To achieve the desired results, we will also be using a Scarlett Johansson-trained LoRA file and a diffusion checkpoint file. You can train your own files or choose from the thousands of available models and textual inversions for your project.

Step 1: Create a Base Character

First, create a base character that doesn’t need to look exactly like your final character but should have similar features such as hair, clothing style, and physique. The face can be overwritten with the AI, but it’s helpful to have similar features as a guide for the AI. The iClone Character Creator software is a great tool for creating characters, as it allows you to easily drag and drop hair, clothing, and other elements onto your character.

Step 2: Obtain Mocap Data

Next, obtain mocap data for your character's movements. You can create your own by recording a video of yourself and uploading it to websites like Plask and DeepMotion, which will export an FBX file for free. Simply drag and drop the exported FBX file onto your character in iClone to apply the mocap data.

Step 3: Animate the Character

Once your character has the mocap data applied, you can begin animating them. There are multiple ways to do this, such as using facial rigging controlled by your phone, or AI-generated voice and lip-syncing. When you're happy with the animation, render the frames as an image sequence, which will be used later for Stable Diffusion.

Step 4: Set Up the Automatic1111 Web UI and ControlNet

Next, set up the Automatic1111 Web UI and ControlNet by installing the required files and scripts, including Six Hunter's Python script, the Scarlett Johansson LoRA file, and the diffusion checkpoint file. Make sure to enable ControlNet and allow other scripts to control the extension in the settings.

Step 5: Generate the Animation

With everything set up, you can now generate the animation using the Automatic1111 Web UI. Start by uploading the first frame of your animation, which will guide the rest of the animation. Adjust settings such as denoising strength, sampling method, and ControlNet model to achieve the desired results. Once you're happy with the generated frame, lock in the seed to ensure consistency across all the images.
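Everything above happens in the web UI, but it is worth knowing that Automatic1111 can also be driven from a script via its HTTP API (enabled by launching with the --api flag). The sketch below is not part of Six Hunter's workflow; the endpoint and field names follow the commonly documented API and may differ between versions, and the prompt, LoRA name, and file names are placeholders.

// Minimal sketch: push one rendered guide frame through img2img with locked settings.
// Assumes the Automatic1111 Web UI is running locally with the --api flag on port 7860,
// and Node.js 18+ (for the built-in fetch). Field names such as init_images,
// denoising_strength, sampler_name and cfg_scale follow the commonly documented API.
const fs = require("fs");

async function stylizeFrame(framePath, outPath) {
  const payload = {
    prompt: "photo of scar <lora:scarlett:1>",   // placeholder prompt and LoRA name
    negative_prompt: "blurry, deformed, extra limbs",
    init_images: [fs.readFileSync(framePath).toString("base64")],
    denoising_strength: 0.35,   // low enough to keep the pose of the 3D guide frame
    sampler_name: "DPM++ 2M Karras",
    steps: 30,
    cfg_scale: 7,
    width: 512,
    height: 512,
    seed: 1234567890            // lock the seed so every frame stays consistent
  };

  const res = await fetch("http://127.0.0.1:7860/sdapi/v1/img2img", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload)
  });
  const data = await res.json();

  // The API returns base64-encoded PNGs; keep the first image.
  fs.writeFileSync(outPath, Buffer.from(data.images[0], "base64"));
}

stylizeFrame("frame_0001.png", "frame_0001_stylized.png").catch(console.error);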

Step 6: Apply the Script and Render the Animation

Finally, apply Six Hunter’s multi-frame video rendering script to generate the animation. Upload your guide frames, enable color correction, and choose the input frame loopback source. After generating the animation, you can use software like After Effects or DaVinci Resolve to compile the frames and apply post-production effects such as deflickering and motion blur.

Conclusion

Creating coherent AI animations can be a complex process, but with the right tools and workflow, you can achieve impressive results. By using mocap data, iClone Character Creator, and Six Hunter’s Python script, you can have complete control over your characters and their actions. This workflow allows for endless possibilities in creating unique and engaging animations for your projects.

Transcript:

Today, I’ve got a very exciting workflow. I’m going to be showing you how to create coherent AI animations without all the glitching all over the place. Before we start, I must say that this workflow would not happen without Zampious aka Six Hunter. He has created the script that I’m using to produce these results. Now, I have worked out a workflow to use around the script to get really nice results that you can control every aspect of. To be honest, using stock footage is pretty useless because you have absolutely zero control on what that person is doing. Of course, you can get a green screen and train a model to look like yourself and act the part. But that makes absolutely no sense because then you have to be hiring a team of actresses and actors. In this video, I’m going to be showing you how to have ultimate control of your characters, what they look like, and what they look like they do. Before we start the step by step process of what I did to achieve these animations, I’m going to give you an overview of the workflow. With that said, let’s hop into the workflow overview.

For those who have guessed it on my Twitter and Instagram and other socials that I was using mocap data, you are correct. There are several places you can get mocap data from. So you can pick up mocap data for free or you can make it yourself on your iPhone. Granted, that’s not the best way to do it. The best way to do it is to use a mocap suit, but they are rather expensive. So I simply just mix and blend my purchased mocap data in with my facial rig that I control with my phone, and it’s easily compiled together in iCologne by character creator. The next step is to create an actor and just add that mocap data. The actor is essentially telling the AI what loose styles we want, so it doesn’t have to look exactly like your end result, just a guide for the AI. The last part is to use automatic 111 Web UI with 6 hunks Python script enabled and control net enabled as well. And I show you exactly how to do that. I’m also using, can you guess, a Scarlett Johansson trained Laura file, if you haven’t noticed that already, as well as a diffusion checkpoint file.

I’m actually using both of them and I’ll show you which ones I use. You can always train your own to create your own styles, but there is thousands and thousands of models and textual conversions and hyper networks and lords being created all the time. So you can look through the list and see which one you like. And if you’re using for a commercial project, I suggest you make your own. The first step is to create a base of our character. Now, this doesn’t need to look exactly like your character. You just need to keep the features such as the hair, the style of the clothing, and their physique. Now, the face can be overwritten with the AI, but it’s quite good to keep similar features. Although, again, like I say, it doesn’t have to be exactly the same. It’s just something there to guide and help the AI. I absolutely love character creator for just dragging and dropping hair on and have stylisation, add clothes. I can add additional clothes via blender. I can buy them from the marketplace. The same with the hair, you can use the hair that is in the system. You can change it, you can make it longer, and everything is built into a pipeline, which I absolutely love.

Obviously, it comes at a cost, but for me, it’s worth it. The second step is to get your mocap data, which essentially is your actor’s movement, what they’re going to do. You can make your own by making a video of yourself and uploading it to websites like Plask and Deep Motion. What they will do is export your FBX file for free. I simply drag and drop for that exported FBX file onto my character in iC loan, and then start with making the character talk. Now, there are multiple ways you can do this. You can use their face where you can use their actual lip, which is really cool because you can use AI on top of it to change your voice. Once I’m happy with the animation, I can either send it to unreal blender and put cameras in and record it from there. But to be honest, I don’t even do that at this point. I just go to render and render video and go to image and then set it as sequence and then my output size to what I want to use. And then I render and this takes about two minutes to render the frames ready for stable diffusion.

I get a lot of questions about what GPU I have. You don’t want my computer, I actually use a cloud GPU, which in non technical turns is a remote computer. I actually just connect, sign in and use my stable diffusion template on there. So it’s very easy to use. You can see my prior video on how to set it up and install. In this video, I’m going to be using ControlNet, so you’ll need to have that installed. That’s also in my previous video, so it shows you my complete setup from there and how I use it. Anyway, on with the video. I just log into my Run pod. Now, if you’re using a local version of Automatic 111, this will work for you because the file structures are exactly the same. There’s a couple of things that we need to set up before we get started. We’ve got to make sure we’ve got to make sure we got a checkpoint file. So I’m going to load this into the back end of my stable diffusion. To do this, I’m just going to connect to my pod. So I’m going to connect to the JupyterLab, which is my file structure that runs my automatic 111.

This is the same as the file structure that you find on your local version of stable diffusion. As you can see, if I click on the stable diffusion file, you’ll probably recognize this file structure as is that is the same as your local one. And that’s why you can follow along if you’re doing this locally on your PC. So the first thing we want to do is go to models and then Stable Diffusion. You need to put your checkpoint folder in here. And as you can see, I’m using realistic vision. Ckpt. And if we go over here to Civet AI, you can see this is where I downloaded it from. So you can download it here on your local version. If you’re using Run pod, just press this down arrow and model safe tensor. Right click on that, copy link, come back to your JupyterLab and click on Terminal down here and just simply type in, we get space control V for paste and hit return on the keyboard. And that will ultimately download the file. Now we use the We get protocol because it’s the quickest way to download a large file onto Run pod.

You can download it locally to your machine and drag and drop it into a file structure. But it can take sometimes a long time to do that, especially if you’ve got slow internet connection. So this is my preferable way. If it’s a small file, I usually just download it locally to my machine and drag it across. But checkpoint files tend to be quite big. See there, it’s downloaded and it’s given it a horrible name without an extension. So we’re going to hit F2 on a keyboard and name it. Ckpt. I’m naming it Silly because I’ve already got that file. So now that’s done, we can close down this terminal window and we want to now load in a Lo ra file. So if you don’t know what a Lo ra file is, it gives you the stylisation on top of your checkpoint file. So I’m going to be using these in combination in this video. Now, I feel like I get quite good results from doing this. We go up the hierarchy back to Models and then go to Lo ra. So you should see Lo ra. We were just in that folder there. So click on Lo ra.

And if you ever lost to where I am, just look at this path here and it will tell you exactly where I am. We go back to Civet AI. So I’m going to be using the Scarlett Johansson Lo ra file. I’m not going to take her complete likeness. I’m just going to take pinch just for this tutorial to show you, you can actually create your own Lo ra styles. And if you want me to do a tutorial on that, this is a really small file. It’s 144 megabytes. You can just download that locally to your machine and then come over to JupyterLab. And if I go to Downloads, you can see I’ve got it here. You just drag and drop it into there. The last thing we need to do in this back end is import our script. So if you press this button again to go up the hierarchy and come to script. So give that a click. And we are going to be using 6hunt script in here. If you go to xanthias. H. I o, this is where you can download your file. If you can afford it, please donate to him. He’s given the stable diffusion and AI community loads and helped out massively.

But if you can’t afford it, you can press no thanks and download it for free, which is really generous of him. And that will download that to your local machine. Once it has downloaded, come back to your downloads and simply drag and drop again into your run pod. So you should see it there, multi frame, underscore, render. Yeah, we have done everything we need to in the back end. So now we need to look at the front end, which is the automatic 111 Web UI. So we come back to my pod, we’re going to now connect to the stable diffusion Web UI, which is the interface. Again, this is the same as your local version. So we’re going to head directly to the Settings tab up here and then come down here to Control net. There’s a couple of things here we just want to check. You can add multiple Control net models to one render. I’m going to call it render for now, I think. And I’ve got two. I’m only going to use one model, but I wanted to show you this here. So you can put all nine if you really wanted to.

But my favourite is actually Canny, so I’m just going to be using one. If you come down here, you need to allow other scripts to control this extension. This is vital. You need this to be checked. I repeat, you need this to be checked. Once that’s all done, we click on Apply settings and then click on Reload UI. You’re going to get a bad gateway. Do not panic. This is normal. Do not worry. So we’re going to just close that window and come back to my pods and click on your Connect to HTTP. And again, this will reload your web UI with all your settings done. Now, let’s get to the fun bit. So I’m going to go to image to image, which is this tab here. And if you see this little sun here, I think it’s a sun, under the generate button, give that a click. And then we’re going to go to the Laura tab. Here you can see your Laura styles. So like I said before, we’re going to be using the Scarlett Johansson. Now, if you don’t see your Laura file in here, you can give it a refresh. That doesn’t work.

I advise you to come back out of your pods, click this hamburger icon and restart. Please do not restart. You need to restart your pod. And that will take a minute to restart everything. And that should refresh everything in here. I’m going to click on the Scarlett Johansson. And as you can see, I added that there to my prompt. We are going to be using minimal prompting in this video. You might be happy to hear. I’m going to close this window now. We are done with the prompt here is invoking the Laura script. We just need to write something for our negative prompt. You can come back to Civet AI and if you see this little information button, give that a click and you can pretty much plagiarise the heck out of this negative prompt. I’m going to then just paste it in here. I find that the best prompt is the simplest prompt. I’m just going to write photo of Scar. Now we need to add our first image to our image to image. Now I’m going to click on here and navigate to my 3D files that I just churned out. It’s a pretty ugly 3D model, to be fair, but we’re going to change that.

We’re going to come down to the settings and in the sampling method, we are going to go to DPM 2M Corraris. Sampling steps, I’m going to probably increase to about 30. I’m going to keep it relatively low. Whit from height, 512 by 512 because I want this to be a really quick vendor to show you in real time. Cfg scale, I’m going to keep that at 7. Denoising strength, this is one of the most important factors here. Now, I’m just going to show you how bad this is going to look. Then click generate. The denoising strength is quite high, so it’s going to not look like this image. It’s going to look more like Scarlett Johansson. So we need to decrease that. The less denoising strength there is, the less it’s going to look like Scarlett Johansson. We just want a mix. I’m going to go 35 and then see what happens. Just click on generate and there you go. That’s a bit better. It’s important that you upload the first image of your animation as this will guide the rest of your animation. Lips. There we go. And that should apply this to your finished result.

And there you go. She has red lips. We are going to now lock in that seed. At minus one, every single frame I produce will generate a new seed. I quite like this seed for this image, so I want to keep that consistent in all the images. I’m going to press the recycle button to lock that in. We’re going to open Control net and come down. Remember, I said you can import multiple models, so you could use normal, you could use head. I’m just going to use one. I’m going to click on Enable to make sure it actually is working. And then the preprocessor, I’m going to go to canny. Then the model, I’m going to choose Canny. I’m going to keep the weight at one. Now, I’ve played around lots and lots with this, fiddling with absolutely every setting. And for me, I find the default settings usually work the best. The last step is to load up the script. This is super important. Before you load up the multi frame script, please do this, otherwise your render will not render. Click on this image here and send to image to image. That is an important step.

Now we’re ready to start the script, which is the multi frame video rendering script. Click on that. The initial denoise strength needs to be set as zero zero, leave the append interrogated prompt at each iterations as none. Third frame image to be first Gen. It’s super important to click this Upload Guide frames button, which is easily missed. Just give that a click and then upload all your frames that you want to render. Then enable color correction and then choose input frame on the loop back source. Ready to rumble. Let’s generate. It will look like it’s rendering just one frame. Do not worry, this is normal. There is no front end interface to show you how much your animation is progressing. Sadly, it will just look like it’s doing one frame. I assure you, it probably hasn’t. So we come back to Jupyter Labs and we need to go up to the top level of the workspace, go to Stable Diffusion Web UI. Again, the same on your local version. Go to outputs, go to image, to image, images here, and then go to your most recent version. I’ve got quite a few here. So now you just have to wait patiently for your files to be exported.

So they’re exported as frame 0 0, and then continue from there. I’m using After Effects to do the postproduction and put my files all together. You can use whatever you want. The word on the street, Da Vinci Resolve is pretty good. But to be honest, to get the deflicker plug in that everybody’s talking about that smooths out your animations, it will cost me around about $270, something like that. And I’m fighting with myself at the moment because that is quite expensive just to gain a plug in. So I’m going to click on New composition. I’m going to make it 512 by 512 and click OK. I’m going to go to File, Import, and then files. I’m going to select the first file and shift selects the last file and then select PNG sequence. I’m going to make sure that box is checked and then I’m going to go to Import. I’m then going to drag those frames down to my timeline and just shorten my timeline to the same length of my files. Now you can see the first three frames of this animation are junk. You do not want them. Now, Six Hunters are very aware that this is a bug.

I’m just going to move the timeline across so we don’t see them. There’s a couple of effects that I put on my frames to help them because there is a little bit of flick still, which is really annoying. I’m sure the deflicker in this and Da Vinci Resolve would sort that out. But for $270, I’ll wait for that one. What I will do is put frame blending on here. I’m going to make sure it has that icon with those dots to those arrow. I’m also going to then check this motion blur symbol as well. From the effects and presence panel, I’m going to search for camera shake de blur, and then I’m also going to search for pixel motion blur as well. I’ve not seen anybody use these in combination, but I think it does help. You can tweak the effect settings by going into effects here and just changing some of these if you want to have a play around. I’ve only had a couple of days to play around and it’s just me researching all this. So I think within time, we’ll find loads of different settings that work really well. You can see it’s not perfect, but we’re only a couple of lines of code away from a perfect animation, and this is really exciting.

I can use that seed on the same character and pretty much make her do whatever I want. So the reason why I use character creator over meta humans, because meta humans are extremely currently limited. So they’ve only got four choices of clothing, the AR kit with the body animation. There’s quite a bit of learning curve there and digging around in blueprints as well as trying to add mocap. It can get a bit messy. The character creator workflow is just smooth. It’s been there for years. It goes into iCloane for animation really easy. They have their own mocap database, so everything integrates in nicely to AI. Anyway, I can’t wait to see what you guys produce with this workflow. Please tag me on any social media. I’m pretty much #PromptMuse on everything or @PromptMuse. We do have the newsletter and we’re still giving Run Pod credits away, so please sign up. The link is in the description below. And as always, I will be doing a write up and more in depth description of all the instructions in this video onto the prompt muse website. So let me know what you think of this workflow and maybe what other software you will be using along with this.

So thank you very much and I’m excited to see what you create. That will do it. Bye bye.

<p>The post Mastering AI Animation: A Comprehensive Workflow with Mocap and ControlNet first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/mastering-ai-animation-a-comprehensive-workflow-with-mocap-and-controlnet/feed/ 0
The Metaverse: A Misconstrued Term Fueling FOMO and Misdirection https://promptmuse.com/the-metaverse-a-misconstrued-term-fueling-fomo-and-misdirection/ https://promptmuse.com/the-metaverse-a-misconstrued-term-fueling-fomo-and-misdirection/#respond Mon, 20 Mar 2023 10:19:47 +0000 https://promptmuse.com/?p=2390 The Great Metaverse Mirage In a world where technology evolves at breakneck speed and buzzwords capture the public’s imagination, the term “metaverse” has taken center stage. Its allure is undeniable, offering a tantalizing glimpse into a future where the virtual and physical worlds converge seamlessly. However, beneath this captivating veneer lies a perplexing paradox: the [...]

<p>The post The Metaverse: A Misconstrued Term Fueling FOMO and Misdirection first appeared on Prompt Muse.</p>

]]>
The Great Metaverse Mirage

In a world where technology evolves at breakneck speed and buzzwords capture the public’s imagination, the term “metaverse” has taken center stage. Its allure is undeniable, offering a tantalizing glimpse into a future where the virtual and physical worlds converge seamlessly. However, beneath this captivating veneer lies a perplexing paradox: the metaverse remains an abstract concept that many fail to grasp, even as they find themselves enthralled by it. This article delves into the metaverse phenomenon, exploring its origins, the role of major corporations, and the potential consequences of chasing an ill-defined dream.

The Roots of the Metaverse and Its Pop Culture Appeal

The metaverse’s literary origin can be traced back to Neal Stephenson’s 1992 novel Snow Crash, which presented a virtual world that functioned as an extension of reality. However, it was the film adaptation of Ernest Cline’s Ready Player One that truly catapulted the metaverse into the public consciousness. The movie’s portrayal of a fully immersive digital universe captured the imaginations of millions, setting the stage for a surge of interest in this futuristic concept.

Corporate Ambitions and the FOMO Factor

Major corporations, such as Facebook (now Meta), have been quick to capitalize on the public’s fascination with the metaverse. Their ambitious claims and rebranding efforts have perpetuated the myth of the metaverse as an imminent technological wonderland. This narrative has given rise to FOMO, or Fear of Missing Out, which drives individuals and businesses alike to invest in projects that promise a slice of the metaverse pie.

The metaverse’s abstract nature has made it the perfect playground for marketing campaigns that exploit the public’s innate curiosity and desire for innovation. The term has been used to sell everything from virtual real estate to digital fashion, with little clarity on what the metaverse truly entails. In the absence of a universally agreed-upon definition, the metaverse risks becoming a hollow catchphrase that serves corporate interests more than it fosters genuine technological advancement.

The Reality of Virtual Worlds and Digital Assets

While the metaverse remains a nebulous concept, existing virtual worlds like VRChat and digital assets have already made their mark on the tech landscape. These platforms and assets cater to niche audiences, providing immersive experiences for users who actively engage with them. However, the current state of these technologies does not match the grandiose vision of the metaverse as a ubiquitous, all-encompassing digital universe.

Although some proponents argue that the metaverse will emerge as a natural evolution of existing virtual worlds, the vast majority of people have yet to show a sustained interest in these platforms. As it stands, the gulf between the metaverse’s utopian promise and the reality of consumer engagement remains wide, casting doubt on the notion that we are on the cusp of a metaverse revolution.

The Metaverse Paradox: A Vision that Obscures

The allure of the metaverse lies in its ability to captivate and inspire. However, this same quality has given rise to a paradox: the more we chase the metaverse dream, the further it recedes from our grasp. The ambiguity of the term allows it to assume myriad forms, fueling speculation and hype without fostering a clear understanding of what it truly entails.

This metaverse paradox poses several risks. The term’s widespread misuse may lead to disillusionment among users and investors, as the promised digital utopia fails to materialize. A prime example of this disillusionment is the $700 billion crash in Meta’s stock value in October, which equated to a 25% drop in share price. This occurred as users grew tired of broken promises amid rising inflation and fears of a looming recession. Furthermore, the focus on the metaverse may divert attention and resources from more tangible and immediate technological challenges, such as bridging the digital divide, ensuring data privacy, and promoting equitable access to technology. This shift in focus has also impacted other tech giants like Google and Snap, who have experienced hits to their ad revenues. By fixating on a poorly-defined vision of the future, we risk neglecting the pressing issues that demand our attention today.

Rethinking the Metaverse and Embracing Clarity

The metaverse, as a concept, is undoubtedly intriguing and thought-provoking. However, it is crucial to recognize the potential pitfalls of pursuing an ill-defined dream that serves corporate interests more than it addresses real-world needs. As we navigate the complex landscape of technology and innovation, it is vital to ground our discussions in reality, prioritizing tangible progress over nebulous fantasies.

In the words of renowned science fiction author William Gibson, “The future is already here — it’s just not evenly distributed.” Instead of getting swept away by the metaverse craze, we should focus on harnessing technology to create a more equitable and sustainable future for all. By fostering a clearer understanding of the metaverse and its implications, we can ensure that our collective enthusiasm is channelled towards meaningful innovation that benefits the many, rather than the few.

By Alex Player

<p>The post The Metaverse: A Misconstrued Term Fueling FOMO and Misdirection first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/the-metaverse-a-misconstrued-term-fueling-fomo-and-misdirection/feed/ 0
How to create Passive Income with ChatGPT https://promptmuse.com/how-to-create-passive-income-with-chatgpt/ https://promptmuse.com/how-to-create-passive-income-with-chatgpt/#respond Mon, 13 Mar 2023 12:45:22 +0000 https://promptmuse.com/?p=2376 My Demo Website: https://wordmuse.io Envato Elements: https://1.envato.market/k0gXev Hostinger: https://www.hostinger.com Envato: https://elements.envato.com AppSumo: https://appsumo.8odi.net/Eabq24 GPT3 Domain Name Search: https://www.squadhelp.com Affiliate Market: https://shareasale.com Sign Up to our News Letter: https://promptmuse.com/news-letter/ Creating Passive Income: A Step-by-Step Guide Passive income is the holy grail for many people, offering the potential to earn money while you sleep. In this article, [...]

<p>The post How to create Passive Income with ChatGPT first appeared on Prompt Muse.</p>

]]>
My Demo Website: https://wordmuse.io

Envato Elements: https://1.envato.market/k0gXev

Hostinger: https://www.hostinger.com

Envato: https://elements.envato.com

AppSumo: https://appsumo.8odi.net/Eabq24

GPT3 Domain Name Search: https://www.squadhelp.com

Affiliate Market: https://shareasale.com

Sign up to our newsletter: https://promptmuse.com/news-letter/

Creating Passive Income: A Step-by-Step Guide

Passive income is the holy grail for many people, offering the potential to earn money while you sleep. In this article, we’ll explore the process of creating passive income and how you can leverage the power of technology, specifically ChatGPT, to generate income over time. So grab a cup of tea and let’s dive in!

Brainstorming Passive Income Ideas

Before we get started, let’s brainstorm some passive income ideas using ChatGPT. We’re looking for ideas that require minimal effort and can be automated. While some ideas may require your initial involvement, the goal is to eventually set them up to run on their own. One idea that stands out is creating a website to aid writers, offering services like content formatting, letter conversion, grammar checking, and even generated writing prompts.

Choosing a Domain and Web Hosting

To bring our passive income idea to life, we need a domain name and reliable web hosting. While there are several hosting providers to choose from, Ionos and Hostinger stand out for their technical support and competitive prices. In this case, the author opts for Hostinger due to their existing experience with the platform.

After logging in to Hostinger, it's time to choose a domain name. With the help of tools like Squadhelp's business name generator, the author settles on “WordMuse.io” as the perfect domain name for their website.

Setting up the Website with WordPress

With the domain name secured, it’s time to set up the website using WordPress. The author walks us through the installation process, mentioning the need to wait for the website to propagate and become accessible to visitors. Patience is key!

Utilizing ChatGPT and CodePen

Now comes the interesting part. To add functionality to the website, the author integrates ChatGPT and CodePen. They create an account on CodePen.io and use ChatGPT to generate the program they need. The code is then copied and pasted into the code editor.

The author explains the role of HTML, CSS, and JavaScript in web development, and how they use the generated code to build a user interface that lets users convert text between various cases, such as sentence case, capital case, lowercase, title case, and more.
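To give a feel for what that generated code looks like, here is a stripped-down sketch of the same idea. It is not WordMuse's actual code; the element IDs, data attributes, and set of cases are illustrative. It shows how each button can convert the text as soon as it is clicked, without a separate convert button:

// Minimal sketch of a button-driven case converter, assuming an input field,
// an output field, and one button per case (identified by a data-case attribute).
function toSentenceCase(text) {
  const lower = text.toLowerCase();
  return lower.charAt(0).toUpperCase() + lower.slice(1);
}

function toTitleCase(text) {
  return text
    .toLowerCase()
    .split(" ")
    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
    .join(" ");
}

function toKebabCase(text) {
  return text.trim().toLowerCase().split(/\s+/).join("-");
}

const converters = {
  sentence: toSentenceCase,
  title: toTitleCase,
  kebab: toKebabCase,
  upper: text => text.toUpperCase(),
  lower: text => text.toLowerCase()
};

// Convert as soon as a case button is pressed, no separate convert button needed.
document.querySelectorAll("button[data-case]").forEach(button => {
  button.addEventListener("click", () => {
    const input = document.querySelector("#converter-input").value;
    document.querySelector("#converter-output").value =
      converters[button.dataset.case](input);
  });
});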

Designing and Styling the Website

With the functionality in place, it's time to make the website visually appealing. The author introduces Envato Elements, a plugin whose template kits simplify the design process in Elementor. They choose the “Cyborg” template kit and install it. Using Elementor Pro, they customize the pages, including the homepage and headers, to create a cohesive and attractive design.

Launching the Website and Monetizing

After a few days of refining the website and making necessary adjustments, it’s finally time to launch WordMuse.io. The author explains the process of logging into the WordPress admin dashboard and navigating through the options to manage plugins, templates, and pages. They emphasize the importance of having a professional and user-friendly website to attract visitors and monetize the platform effectively.

Conclusion

Creating passive income through technology and automation is an exciting endeavor. By leveraging the power of ChatGPT, web hosting services like Hostinger, and web development tools like Elementor, anyone can bring their passive income ideas to life. Whether you're building a SaaS platform, an e-commerce store, or a content-driven website, the key is to combine creativity, technology, and strategic thinking to generate income even while you're away from the computer. So what are you waiting for?

Transcript:

Hey guys, today I’m going to be taking you through every single step that you will need to create passive income. Yes, we’re going to be using the power of chat TVT and the power of our brains to create something that could potentially bring in quite a bit of money over time. First of all, sorry about the audio. I did buy a new microphone, but it broke in the first day. Goodbye. So I’ve ordered a new one and it should come Friday, so bear with me until then. Anyway, let’s dive in. Okay, so first of all, we’re going to start by trying to brainstorm some ideas through chat. We’re going to do this by asking for ideas that don’t actually require a lot of work because we’re lazy and also the amount of coding that chat can do is a little bit limited. A lot of these are quite good, but they do entail me to be an aspect of it. It’s not really passive. Let’s prompted now to give me some more passive automated ones, which I can just set and forget. Okay, I think I’m starting to get an idea of one. There was a automatic cover letter that inspired me to think about the manipulation of words, et cetera.

Maybe do a website that aids writers. We could include things like formatting of content, formatting of changing letters, maybe. Letter Converter, Counter, all of that SNAS. Okay, let’s try and brainstorm this. This is awesome. Grammar checker, going to take that. Theoris. I don’t know. Is there an API for this? Writing prompts, I could hook that up, but that will be credits for GPT. Character generator, plot generator, also GPT, Open AI credits. Community, not so keen. Writing tips that could be generated articles. I’m going to set up a website. I would highly recommend either using Ionos or Hostinger. I’m going to be using Hostinger for this only because most of my websites are currently with them. But honestly, Ionos offer the best technical support and prices that I’ve come across. The startup, head over to Hostinger and probably just going to need the single plan. There will be one website unless you’ve got some already. I already have some so I don’t need this. Just log in and let’s get a domain. Okay, so to think of a nice idea for a domain name, I could go backwards and forwards with GPT 3 and see if it’s available.

Or I could use squid help. They have a free to use generator. They emailed me, I’m sorry for not getting back to you, but I do like the idea. So don’t take my silence as I’m not interested because this is quite useful. So yeah, squidhub. Com business name generator. Those three words are hyphenated, by the way. I’ll leave the link in the description. So let’s just say what we want. I need a name for a… Let’s see what it gives us. Word muse. I have to go with that, I think. It says it’s available. It will be so wrong. I like pen, but word muse will be so wrong for me not to go with. Stovey, expensive, please. Wordmuse. Com. All right, maybe not Word M use. 7,000. I wish. Okay, we could go with one of these lesser ones, but Wordmuse. Io, I quite like that. Okay, so with that information, let’s go back over to my current provider and let’s do wordmuse. Io. 31 %, I can take that. I can do that. Yeah, it’s an investment. More than my usual eight dollars I would do, but wordmuse. Io. Anything else here in there? info, website text, space, fun, link, click.

Webmuse. C lick. Io, that’s where it’s at. Okay, so all of these, we don’t want any of these actually. Skip, I will manage my plugins later. We don’t want a template. Skip I will do that later. Choose a domain, wordmuse. I o. There we go. It’s now installing WordPress. It’s as simple as that. What was that? Two minutes? Whether we’re going to be able to use this WordPress straight away is debatable, probably not. I will probably pause it now and check in a couple of hours, if not tomorrow. Actually, I’m at an AI exhibit tomorrow in London. Today is March seventh, so probably I will do it on Tuesday, Wednesday, Thursday. Yes, and hopefully I might have a new microphone then, but I will probably not use it because then it will sound really weird with the audio. All right, edit website. Okay, so if I go to wordmuse. I o right now, it can’t be reached. It will be reached. It will be here. We’ll see our website in all its glory very soon. So we just need to wait for it to propagate. I’ll be back in a jiffy. I remember your order model.

Now, the interesting part starts. For this, we’re going to need Chatty Pitty and we’re going to need Copen. We’re going to head over to copen. I o and create an account. Once done, just click pen and then we get this lovely window here. Let’s pop over to Chatty Pitty and now we’re going to ask it to make us the program. That should be fine. Let’s try that. By the way, you can do this with the free or paid version of Chat PT, the only difference is the speed. The results are the same. Let’s copy that and let’s put that into our code pen. The code pen is split into three sections HTML, CSS, and JavaScript. For those that don’t know, HTML is like the front end code of the skeleton, if you want. Css is like the muscles, the beautification of it. Javascript is the brains. Right now, we are… Actually, this HTML, isn’t it? We put the HTML in there, and it’s given us our lovely boxes. Let’s go over here. I didn’t ask for HTML, so it’s odd that it gave that to me. Anyway, let’s put the JavaScript in there. We can test this if this works.

Let’s type Alex Player, and then I don’t like the drop down menu and I don’t like the button to convert. I don’t like the drop down menu. I want simply the buttons in a row and the user chooses which one they want. Let’s try again. As I’ve mentioned in previous videos, this is very much like a game of tennis. Back and forth, back and forth. Let’s do the new HTML and the new JavaScript. That’s better. There we go. Let’s give that a test. Alex, player. Good. Only it doesn’t work. That is a problem. Now we need to troubleshoot. I love the layout. Much more like this, but now it doesn’t work. Nothing. Then say what happens, why doesn’t it work? Nothing. Then it will go back again and try again. You might need to do this two or three times until it works, but eventually it normally does work it out. It doesn’t look like it finished because there’s an open bracket, but there’s no closed brackets, so let’s do continue. You silly sausage. Okay, let’s give that a try. Back over to Kopen. Wait for it to flash. There we go. Let’s try. Hit Player.

Good. That’s good. I just don’t like the convert button, though. That’s another step. Great, it worked. But please remove the convert button and make it change as soon as I press the button to select the conversion type. But it will probably have to redo the HTML as well, I guess. While this is waiting, I’m really looking forward to GPC 4 next week. Apparently, it’s all models, so it’s not just text. Yes, we thought it’s all models. It’s going to be image, text, everything. I don’t think that will work because it sniffed off the end of it. Let me give it a try. Take that, take this, click copy button. Okay, so this replaces to that. Let’s try that. You know what? It works. I’m going to move to this thevert button from this HTML. Let’s paste that in. We could do this ourselves, but it defeats the purpose. I’m playing ignorance. in here. Now paste that in. Add a flash and let’s play. Awesome. We have a working application. But this is limiting. We’ve got sentence case, capital case, lowercase. Title case, lower case. Title case in it, case, kebab case. That tastes yummy. Convert space, dot case, swap case, invert case, sentence case.

Alternating case. Okay. We have the Let’s take this. This is our camel case. Here is the code for Camel case. Please write the same style code for the above 10 suggested cases. Doing this, I don’t want it to spit out the whole entire code again because it’s going to run out of credits, it’s going to run out of tokens. This way, I’m just getting the little bits that I need and I can add them in myself. It ran out there. Okay, let’s et’s do this nine. Ten was a bit optimistic. Let’s find where I put them. Let’s move this down. Function. Here we go. ence in this case, refine the end of that and we paste. Then get the next one, copy. I realized it’s incomplete. Okay, copy paste. Now please adjust the HTML to accommodate all of these two cases. Accommodate. There we go. Click that and paste that. Wait for the flash and there’s all the cases. Now we can style it. I will show you how we can style it with GPU 3. Then later I will show you how we can manually style it. Great. Give it a cookie. Then we say, Please create SSH.

Make this look stylish. This will need to be changed anyway because the CSS it’s doing is not good if we’re putting in a website, i. E. It’s got star, which means it will affect everything within the website. Body is not good. I will use these temporarily. I’ll take the body, I won’t take the star. Let’s put that in. There we go. Started to look like something, isn’t it? All right, so something’s happened now. It doesn’t want to work. That is probably due to me copying stuff in. What we do here is the buttons are fine. What we need to do is copy this and say, Please fix this. When I press the case button, nothing happens. I broke it. Okay, so I’m missing the event listeners. All right, so the issue here is these event listeners have different names to these. Please change the HTML to match these. Okay, so after a little bit of bug testing, I finally gave it the HTML and I said, Please write the JavaScript for this. It gave me the JavaScript, then I used the continue from where you left off and then it finished it off and it works, finally.

That was a little bit more than I expected, but we have it now working as you can see in the bottom box and the copy and paste also works, which is down here. We have our first text tool. They’re rather like Kabbalah, looks quite cool. We’re going to head over to our website and we’re going to put this in so people can start using it. Here we are. This is now three days actually. It’s now Friday, the 10th of March and our website is working, it’s up and it’s running. We log in to log into your WordPress, by the way, go to the URL to WP admin, and that will take you to the login box where you do your admin password and admin username. For this, you can use pretty much any theme you want, any bit text builder you want, any even templates if you wish. But we’re going to be using Envato Elementor. First of all, you go to Plugins, you add new plugins and let’s type in Veto. Once activated, head over to Welcome and then connect your account. If you don’t have an account, that costs about £12, £13 a month, but it’s great value for your money.

We’re going to be using template kits. I’m opting for template kits because basically it makes designing a website a breeze. They’re affordable, they’re free, they’re included with this, and you don’t have to worry about a developer abandoning them like you would with a theme if you went down that route. These are more modular, so you can choose different sections, update as you wish. Have a look through and find one that you want? I’ve opted for one called Cyborg, so let’s view that. Then we need to, first of all, let’s install the global kit. This is the CSS and the styling and the fonts. W e install that here. Then we go to the pages. These are the ones that’s automatically installed. Have a quick peek at this. What is this? These are the defaults. Let’s just get rid of everything. Move to trust. Don’t want any of this. Let’s do a new page and we’re going to call this phone. Actually, it’s just called Word News. Then edit with Elementor. T he screen is black now because that was the theme styling that we just added. What we do now is click on the green leaf thing, click View Installed Kits, and then we want to have a look for homepage.

Let’s pump that in there. While that’s loading, we can have a look at the other pages. Single project, help center, services. We’re going to be using any of these. We might use the About Us, 404 will likely use. We’re not going to be doing blogs. This is more of a SaaS service. Header, we’re going to be using. For headers, you do need Elementor Pro. So keep that in mind when choosing your theme. For example, I’m using Neve right now. It does come with its own header and filter builder, so you can use that if you want. But for simplicity and speed, I’m going to be using Elementor Pro’s headers. Let’s give it a catchy title. You know what? Okay, let’s publish this. Let’s use Chat chp t again. Let’s do New Chats and explain what we are doing. Okay, so we’re creating a SaaS for a Free Word tool website, which includes case convertor, word counter, and other tools. It’s called word news. I o, please write SEO focused homepage content. Let’s do that again, including subheadings. We can just start pacing this in actually. Let’s paste that in there. That’s the header, isn’t it? All right, so let’s create the first page actually for the application that we just created.

This is caseconverter, and it will implementer. What we need to do is get the HTML, put that in here. We got HTML block here, so we go over to our pen. Let’s pick that up. Paste it in. Lovely jubbly. Actually, you know what? As I mentioned previously, these classes, they’re very generic and that will cause a problem. I will go back over here. Do a new chat. Here is my HTML and the SSH. Please change classes. I don’t know why I’m shouting. It’d be less generic. Okay, so let’s put it in a sub class, so text perverter and then button. Good. This won’t mess around with the website anymore. It dropped off there. Continue where you left off. That’s the magic word, isn’t it? Okay, so take this, copy that in there. Let’s continue where I left off. Now we need to add the JavaScript. F or that, we’ll go over to our dashboard, plugins, add new. Let’s do css js. And yeah, that would do simple custom css in JavaScript. And there you are there. We want to add JavaScript. Go back here, get our code and paste it in. We’re going to call that js.

Itsconverterpublish. Now we’re going to add this, our HTML. Let’s go back here. I wished to add this jsHTML. Please write the script for this. This is pretty much what we need here. We could have just done that ourselves, but I’m lazy and credit. I noticed the page, it says page ID, that means it’s a permalinks actually. So one problem at a time. Let’s go to settings, permalinks. And we want to change that to custom structure. Postname actually. We hit the button Save Changes twice. That changes the HTML access also. Don’t worry if you don’t know what that means, just click it twice. Now we can go back to this View. What is it called actually? Case Converter. Okay, I’m pretty much done. I did a few more pages, so I did in total Case Converter, which you saw. I did password generator. This one had quite a few bugs. Honestly, just a bug test to say, Hey, this is my code. What’s wrong with it? If it says it’s fine, say, Okay, this is my HTML. Why doesn’t it work with that code? Usually, we’ll just say, Okay, the names are misaligned or something, but just back and forth, back and forth.

That works. Then additionally, we have a word counter. Let’s do, Hello, my name is Alex. I’m quite happy with that. We got our pages, we can add later, but you get the gist. I added a footer. Right now, this is just holding content. I’ll show you how I did that. This is done in the exact same way we’re going to do a header. We don’t have a header yet, so I’ll show you how to do that. We go to templates. This is the new theme builder they like to push. I don’t like that. We click that button and we go to header. Create new header. Let’s call it something we’ll remember, not footer. Call it header. This is the Elementor default ones. We don’t want that. We have a theme thing going on. Click our green leaf, view, install kits. Then here we’ll find the header somewhere and go pose footer header. There we go. Obviously, this isn’t my logo. I’m going to want to change that. Quite often you won’t have a navigation, so you’ll have to go to this little menu screen and create one. Everything on here isn’t valid, so let’s delete everything here actually.

We can add some stuff actually, can’t we? Case password, word counter, case convertor. Rewriter was something I started. I was going to use the API for Open AI, but I just want to get this out of the door. We got our one, two, three pages added. Let’s do a custom link. Take the URL and we’re going to call this home. Then just drag this to the top because I want it to be on the far left. This is from left to right. We’re going to want an About Us page. Let’s just put that in there. About. Sorry, that’s the URL, isn’t it? Put that there. As we don’t have a page right now, I’m just going to do the hashtag. There we go. Cool. Cool. All right, so this needs to be stretched out. There we go. We’ve got a call to action button here. Let’s do logo. I’m going to use one similar to the prompt muse one. Click on this top left corner and we want to align this middle. Image size looks a little bit blurry. Awesome. Now we want to click on these six dots. This basically selects the whole container.

We want to make the whole container black so we don’t see other elements underneath it. You’ll understand in a second. Then we want advanced motion effects, sticky top. This means when they scroll, they’ll always have this fantastic navigation. Sorry, I’m skipping ahead here. So publish, we want this navigation bar to be on every single page. It already has one. Let’s do save and then let’s go back. It’s duplicated it for some reason. Trash that. It should be the new one. That’s now on the entire site. Good. Right, we want this to be on the homepage. It’s currently not. Let’s go back to the dashboard. We go to settings. We want it to be a static page. Awesome. I got rid of that header, by the way, that big introductory header because I want people literally to come to the website and think, What are we doing here? Okay, let’s generate a password. sentence case, we could do this. Then for the value of SEO, I’m going to fill this out. I won’t bore you with me filling this out because literally it is just to L it with Elementor. You just copy paste, copy paste. Not too interesting.

Change the images up. Obviously, this isn’t relative at all. None of this is relative, it’s all just filler. I will fill that out. I would change this. We need a privacy policy and cookie policy. We can do that with chat chip and tea, I suppose. All right, let’s go in. Let’s say I have a website called Go. While that’s thinking, I’m going to head over here, go to Posts. Should we make it a Post or a page? Let’s do a page, actually. Add new cookie policy. Let’s edit of Elementor. This doesn’t have to be a nice page, just simple to read. So dragging the text. Here we go. Clicky policy. Make that a H 1 because that’s the title. The way things go is the title is always a H 1. Subtitle is H2s, sub subs, H3s, and so on. This will be nice. You see it’s got the semi colons to bolden them just because people don’t read all of the text. They scan. They scan for the information they’re looking for. It’s always nice to have it presented in a way that it’s easily recognizable. We have it nearly there. Looks good. Publish that.

Have a look. We got the header there so that the pop is being cut off. Right click anywhere in Navigator, then we can select the top element, which is the container, advance, margin. Let’s change that to percentage and just hit that. Span that about 7 %. Awesome. We have a cookie policy. Now we need a cookie pop up. Plugins, add new, install. This one’s always nice and it’s free, which is always nice. I want to change the website name because for some reason it’s called Web Agency Demo. That’s what I want. Let’s go to Customize. Hedder change logo and here we can call it Word Mute. Three writers. Writers or Writing, Writers. 3 Writers tools. Let’s have Favicon. Favicon is basically the icon that is on the top of the thumbnail, like the Chrome thumbnail and stuff. It should be 512 by 512. I will use the logo for prompt views. Okay. Here and just drag in your image. Okay, so now we want SEO, so we’re going to add node. If you type SEO, there’s a load of free tools. But I am a firm believer, rank math is king. I stand behind rank math.

It’s not too expensive either: it’s free, but there is a Pro version, and I recommend giving the Pro version a go. I’ll show you how to set this up. We need to do a couple of things. Let’s search for header footer; we need Header Footer Code Manager. Elementor does have its own, but I don’t like it. Old habits. Cool, it’s down there. What we need to do now is go to Google Search Console and create an account; if you don’t have one, it’s free. Then we need to add a property. Head over, get the URL, paste that in, get rid of the dash, continue. Then choose HTML tag and copy that. Go back to your new header footer tool, and we’re going to call this snippet “search console” just so we know what it is; it doesn’t matter what you call it. Paste the tag in there and click Save. If you have a caching plugin, clear your cache now. Go back to Search Console and hit Verify. It now looks for the code we just added, and that proves we own the website. That’s it. Once we’ve verified Google Search Console, just go ahead and complete the Rank Math wizard.
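
The HTML tag method just means pasting a meta tag like the one below into the site’s head via the header/footer plugin. The content value here is a placeholder; use the string Search Console gives you:

  <meta name="google-site-verification" content="YOUR_VERIFICATION_CODE" />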

It’s pretty much just filling out your logo, your site name and so on, and then we’re set to go. What we need next is to add our sitemaps. Head over to Rank Math on the left and click Sitemap Settings. First you submit the general one; this is the primary one. Head back over to Google Search Console, click on Sitemaps, add it in here and submit. Okay, good, that worked. You don’t have to do this next step, but it’s a habit for me: I go back again, grab my post and page sitemaps, and add those too, so it has all the individual sitemaps. And then when you have others later, you can add them as well. All right, so now we need to get paid. There are a couple of choices here. One, we can apply for AdSense, but that takes around a week to get confirmation, and as this is a new site it might not get approved straight away. The other option is affiliate marketing. For affiliates, the best option I’ve found is ShareASale. You just log in, then you choose the type of product that fits your niche.
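
For reference, Rank Math normally exposes a sitemap index plus individual sitemaps at URLs along these lines; the exact names depend on your settings, so treat these as assumptions:

  https://yoursite.example/sitemap_index.xml   (the general, primary one to submit)
  https://yoursite.example/post-sitemap.xml
  https://yoursite.example/page-sitemap.xml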

Then you apply, you send the company the link to your website, and they get back to you quite quickly with a yes or a no. If they say yes, they give you an affiliate link, and with that you can create adverts and post them on your page. In my case, I’m a massive fan of AppSumo. I love AppSumo; I spend all my money there, and the products they sell are pretty good value for money. Some of the products are a bit of a risk because they’re new companies, so it’s like backing a new horse, but I’ve saved a lot of money in the long run, so I’m quite happy being an AppSumo affiliate. To do so, just create an account and then join their affiliate program, which is run through Impact.com. I’m going to add a few affiliate links from them. Let’s find a link. Okay, so: details, get code, copy that, and now let’s go to our site. We want an HTML widget, and we want to create a section that goes here, actually. I click the plus, create a new section, drag the HTML widget in there, and paste the code we just copied.
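
The code the network gives you is whatever it generates for your account; it usually boils down to a link wrapped around a banner image, something like the sketch below. Everything here is a made-up placeholder, not real AppSumo or Impact.com code:

  <a href="https://affiliate-network.example/click?ref=YOUR_AFFILIATE_ID" rel="sponsored">
    <img src="https://affiliate-network.example/banner-300x250.png" alt="Sponsored: AppSumo deals" />
  </a>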

Let’s see what that looks like. I’ve got ad blocking on; refresh. There we go. You can install a plugin that forces people to turn off their ad blocker, but I’m not that mean. Everyone knows ads help pay the bills, but I’m not going to force anyone; traffic is traffic. Now the sentence case page: let’s add some ads to this one as well. You know what, we can just copy this. Okay, I think we’re nearly done. Sorry, this has gone on a little bit. The last thing to do is the meta descriptions for each and every page; this will help us get the clicks we need. On each page, do Edit with Elementor, then click on the SEO panel. If you don’t see it, click on the panel menu, then the nine-dot icon, then SEO, Edit Snippet. Right now this is what we have: “Free Writing Tools”, a dash, and then the site name. That’s good. I did the meta description for this one using ChatGPT, so let’s do one now together. Let’s do the case converter: “free text case converter”. Okay, and go. Good. It looks a bit long, though. Let’s ask for another.

Shorter. Okay, that’s fine. Then we go back to the website and just paste it in. Cool, we’ve used 140 out of 160 characters. Awesome. Then just do the remaining pages. The very last thing to do is go back to Google Search Console and ask it to crawl our site. The way to do that is to paste your URL into the inspection bar and go. It’s already on Google actually, which is quite good, but let’s request indexing anyway. That will recrawl our website, and our website is live. Obviously, websites take quite a while to start ranking organically, anywhere from four to six months up to twelve. So to improve your search results, I’d advise you to keep updating the site: improving it, adding news, tweaking the tools, just keeping users informed, updated and coming back, really. Let’s wait for that to go through. There we have it: our new website with its adverts, its tools and its SEO. Super. Anyway, sorry if you didn’t follow everything, I know it’s a long one. If you’d like me to show something else in the next tutorial, or something in particular interests you, let me know down below and I will try to get round to it.
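
What ends up in the page source is a standard meta description tag, which is why the 160-character guide matters: anything much longer tends to get truncated in search results. The text below is just placeholder copy:

  <meta name="description" content="A free online text case converter: switch between sentence case, UPPERCASE, lowercase and Title Case in one click." />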

Anyway, I hope you enjoyed this one. Take care, Alex out.

<p>The post How to create Passive Income with ChatGPT first appeared on Prompt Muse.</p>
