Warp Fusion: A Comprehensive Step-by-Step Tutorial

Warp Fusion is an innovative AI animation tool that lets you create stunning, eye-catching videos. It has been gaining popularity for its ability to produce unique animations with a consistent theme or style. This tutorial will guide you through the process of using Warp Fusion on a remote GPU, a method many prefer because it frees up your own PC for other projects and lets you run multiple GPU sessions at once.

Getting Started with Warp Fusion

Prerequisites

Before we dive into the tutorial, there are a few prerequisites you need to have:

  • Google Colab Pro or Google Colab Pro Plus
  • Access to Alex’s Patreon page where you can get the Warp Fusion notebook
  • A model and a LoRA from Civitai

Setting Up Warp Fusion

The first step in using Warp Fusion is setting up the environment. This involves downloading the Warp Fusion notebook from Alex’s Patreon page and loading it into Google Colab. Once the notebook is ready, connect it to a hosted runtime if you’re using Google Colab Pro, which ensures you’re spending your Colab compute units on a GPU-backed session.
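
If you are new to Colab, it helps to know that the notebook reads models from, and writes frames to, your Google Drive. The snippet below is the standard Colab Drive mount call; the Warp Fusion notebook runs an equivalent cell for you, so this is shown purely for orientation.

```python
# Standard Google Colab call to mount your Drive so the notebook can
# read models and write output frames. Warp Fusion's own setup cells
# perform an equivalent step; this is only for orientation.
from google.colab import drive

drive.mount('/content/drive')  # you will be asked to authorise access

# Once mounted, your Drive contents appear under this path:
DRIVE_ROOT = '/content/drive/MyDrive'
print(DRIVE_ROOT)
```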

Configuring Warp Fusion

Basic Settings

Once your environment is set up, you can start configuring Warp Fusion. The first thing you need to do is specify the name of the folder where your output files will be stored in your Google Drive. You also need to set the width and height of your output video to match your input video. A resolution of 720 by 1280 is a good starting point, but you can adjust this to suit your needs.
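
As a rough illustration, the basic settings cell boils down to a handful of values like the ones below. The variable names are placeholders rather than the notebook’s actual identifiers, since these change between Warp Fusion versions.

```python
# Hypothetical sketch of the basic settings; names are illustrative only.
batch_name = 'my_first_warp'    # folder created in your Google Drive for output frames
width_height = [720, 1280]      # output resolution, matched to the input video
output_dir = f'/content/drive/MyDrive/WarpFusion/{batch_name}'
```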

Video Input Settings

Next, you need to specify the path to your input video, which is the video you want to animate using Warp Fusion. Saving it to your Google Drive makes it easy to reference. If your video is long and you want to reduce diffusion time, you can set the frame-skipping value (the ‘extract nth frame’ setting) to two, which tells Warp Fusion to diffuse every other frame.
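
In the same spirit, the video input settings amount to little more than a path and a frame-skipping value. These names are again placeholders; check your notebook version for the real ones.

```python
# Illustrative only; the real setting names vary by notebook version.
video_init_path = '/content/drive/MyDrive/WarpFusion/input/dance_clip.mp4'  # example path
extract_nth_frame = 2  # 1 = diffuse every frame, 2 = every other frame (roughly halves render time)
```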

Video Masking

Video masking is a useful feature that can help make the background of your animation consistent and stable. You can use your input video as the mask source and extract the background mask. If you want to add another video into the background, you can specify the path to that video in the ‘mask video path’ setting.
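
Conceptually, the masking options look something like the sketch below (placeholder names again): you point the mask extractor at a source video and optionally supply a second clip to composite into the background.

```python
# Placeholder sketch of the mask-related settings.
mask_source = 'init_video'       # use the input video itself as the mask source
extract_background_mask = True   # separate the subject from the background
mask_video_path = '/content/drive/MyDrive/WarpFusion/input/clouds.mp4'  # optional background clip
```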

Defining SD and K Functions

The next step is to define the SD and K functions. Change the ‘load to’ setting to GPU and specify the path to the model you want to use; models are available from Civitai or Hugging Face. You also need to specify the directory on your Google Drive where your ControlNet models will be stored.
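
This step mostly comes down to telling the notebook where the checkpoint lives and where ControlNet models should be cached. A hedged sketch, with placeholder paths and an example checkpoint filename:

```python
# Placeholder paths; substitute the checkpoint you downloaded from
# Civitai or Hugging Face and a ControlNet folder on your Drive.
load_to = 'gpu'
model_path = '/content/drive/MyDrive/WarpFusion/models/example_model.safetensors'  # example name
controlnet_models_dir = '/content/drive/MyDrive/WarpFusion/controlnet'
```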

Running Warp Fusion

Once you have all your settings configured, you can start running Warp Fusion. This involves running all the cells in the notebook up to the GUI section. This process can take about 5 to 10 minutes, and you will get a green tick for every cell that has successfully completed.

Customizing Your Animation

Using Prompts and LoRAs

Prompts and LoRAs are powerful features for customizing your animation. Prompts are the text instructions that guide the AI’s stylization of each frame, while LoRAs are lightweight fine-tuned model add-ons that inject a particular style, character, or subject. You specify your prompts and LoRAs in the GUI section of the notebook, and you can adjust the strength of the stylization and the prompt guidance to achieve the desired effect.
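
Notebooks of this kind typically take prompts as a frame-indexed schedule, so the prompt can change part-way through the clip. The structure below is a common pattern rather than the notebook’s guaranteed format, and the `<lora:name:weight>` tag is the usual Stable Diffusion convention for invoking a LoRA, shown here as an example.

```python
# Frame-indexed prompt schedule (a common pattern in diffusion notebooks;
# check your notebook version for its exact format). The <lora:...> tags
# and weights follow the usual Stable Diffusion convention.
text_prompts = {
    0:   ["anime style portrait, vibrant colours, <lora:myStyleLora:0.7>"],
    120: ["cyberpunk neon city, rain, reflective streets, <lora:myStyleLora:0.5>"],
}
negative_prompts = {
    0: ["blurry, low quality, extra fingers"],
}
```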

Using Control Nets

ControlNet models condition the diffusion on features extracted from your input video, such as edges, depth, or pose, which helps keep the animation aligned with the source footage. You can select which ControlNet models to use and adjust their weights to control how strongly each one influences the animation.

Using the Warp Tab

The warp tab allows you to adjust the flow blend of your animation: the blending of your input video with the next stylized frame. If you find that your animation is over-stylized, you can reduce the flow blend to achieve a more balanced effect.

Using the Mask Tab

The mask tab allows you to use a background mask for your animation. You can change the color or use an image or the original video as the background. This can help to create a more consistent and stable background for your animation.

Rendering Your Animation

Once you have customized your animation, you can start rendering it. This involves running the ‘diffuse’ cell in the notebook. As your animation progresses, you will see a preview frame that allows you to check for any errors. If there are errors or things you want to fix, you can stop the diffusion and adjust your settings.

Creating a Video from Your Animation

After your animation has been rendered, you can create a video from it by running the ‘Create Video’ cell. This will create a video from the frames that were diffused in the previous step. The video and all the frames will be saved to your Google Drive.
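
If you ever want to rebuild the video yourself, for example at a different frame rate or after retouching individual frames, the saved frames can also be stitched with ffmpeg. The folder and filename pattern below are assumptions about how your batch happened to be saved, so adjust them to match what you see in your Drive.

```python
# Re-assemble diffused frames into an MP4 with ffmpeg (must be installed).
# The folder and frame filename pattern are assumptions; adjust to your batch.
import subprocess

frames_dir = "/content/drive/MyDrive/WarpFusion/my_first_warp"
subprocess.run([
    "ffmpeg",
    "-framerate", "24",                    # output frame rate
    "-i", f"{frames_dir}/frame_%06d.png",  # numbered frame files
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",                 # broad player compatibility
    "out.mp4",
], check=True)
```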

Post-Production

After you have created your video, you can bring the frames into a post-production software like DaVinci Resolve or After Effects for further editing. This can involve reducing the flickering of the animation or exporting the frames into a .mov file.

Upscaling Your Video

The final step in the process is upscaling your video. This can be done using a service like Topaz Labs or Pixel. These services use AI to increase the size of the video and make it sharper. Pixel is a browser-based service that is easy to use and offers a pay-as-you-go pricing model, making it a cost-effective alternative to Topaz Labs.

Warp Fusion is a powerful tool that allows you to create unique and eye-catching animations. With its wide range of features and customization options, you can create animations that truly stand out. Whether you’re a hobbyist or a professional, Warp Fusion offers a fun and innovative way to create animations.

Remember to share your creations on social media and tag Prompt Muse. We love to see what you create with Warp Fusion. Happy prompting!

Introducing NVIDIA Eye Contact: The AI Tool That Changes Video Calls Forever

As video calls have become the norm for communication, it can be difficult to maintain eye contact with the camera, especially when reading notes or distracted. NVIDIA, the multinational technology company, has recently launched a new tool that aims to fix this problem: NVIDIA Eye Contact.

How It Works

NVIDIA Eye Contact is a new feature of the NVIDIA Broadcast app that uses artificial intelligence (AI) to make it look as though your eyes are fixed on the camera while you record. When you glance away or get distracted, the feature edits the video in real time so you still appear to be looking straight into the lens.

Improving Engagement and Interaction

Nvidia Product Line Manager Gerardo Delgado stated that Eye Contact is ideal for content creators who want to record themselves while reading notes or a script, or avoid having to stare directly at the camera. Video conference presenters will be able to look at the eyes of the person they are speaking with, improving engagement and interaction with their audiences.

The Stuff of Nightmares or the Future of Video Calls?

While some may view NVIDIA Eye Contact as the future of video calls, others may see it as the stuff of nightmares. The concept of having AI control the movement of our eyes during video calls is both comforting and creepy, and raises questions about the ethics of using AI to manipulate human behavior.

If you’re curious about NVIDIA Eye Contact, you can download the NVIDIA Broadcast app and try it out for yourself. Just be aware that it may take some getting used to, and that others may see it as unsettling.

Get Eye to Eye – NVIDIA Eye Contact

If you’ve ever found yourself distracted during a video presentation, or constantly reading chat during a livestream, NVIDIA may have the answer with their updated Broadcast software and its beta Eye Contact feature. The technology takes cues from Apple’s FaceTime and keeps your gaze fixed on the camera while preserving the look of real eyes, including blinks and your natural eye color.

The Vignetting Effect

In addition to the Eye Contact function, NVIDIA has added a new vignetting effect which darkens the edges of your frame. This sits alongside the existing Background Blur, Replacement and Removal effects, which are now far more reliable thanks to temporal data that helps the AI make smarter choices. Other basic but heavily requested options, like a mirrored camera and screenshots, are also part of this release.

System Requirements

For the best results with this software, NVIDIA suggests at least a GeForce RTX 2060 video card (or its Quadro equivalent), 8GB of RAM, and a Core i5-8600 or Ryzen 5 2600 processor.

The Eye Contact function may be unsettling for some as it keeps a constant gaze with the camera, whereas natural presenters tend to look away every once in a while. However, this could help in creating a better connection with your audience, especially if you’re uncomfortable with direct eye contact.

Automated Video Tracking

The advanced video tracking capabilities of NVIDIA Eye Contact will help you to keep your eyes on the camera by automatically adjusting their position in the frame. This allows for more natural camera movement, so your video presentations can appear more captivating to viewers.

The software is designed to be user-friendly, allowing anyone to quickly configure the options with a few clicks. You can choose from a range of preset settings so you don’t have to worry about finding the right parameters or configuration.

The great thing about NVIDIA Eye Contact is that it uses minimal system resources and bandwidth, allowing for smooth operation even when using low bandwidth services. It also has low latency, making it useful for streaming applications as well.

NVIDIA Eye Contact can be a powerful tool for video presenters who are looking to make better connections with their audience. Not only does it create a sense of focus, but it also has numerous features and customizable options to help ensure that your presentation looks professional and engaging. With the addition of the advanced audio and video options, the software provides a smooth streaming experience while maintaining high levels of quality.

In a world where video calls have become the norm for communication, NVIDIA Eye Contact offers a new solution for those looking to improve their interactions and engagement during these calls. Whether it’s seen as the future or a creepy use of AI, it’s certainly an interesting tool that has the potential to change the way we do online meetings.

FAQ about NVIDIA Eye Contact

Q: What is NVIDIA Eye Contact?
A: NVIDIA Eye Contact is a feature of the NVIDIA Broadcast app, which offers a range of video and audio effects for streaming and recording. It includes the eye contact effect, which makes your eyes appear to look into the camera, alongside effects such as vignetting, background blur and replacement, automated video tracking, a rich color palette and improved audio quality.

Q: What system requirements are needed to use NVIDIA Eye Contact?
A: To get the best results out of NVIDIA Eye Contact, your computer should have at least an NVIDIA GeForce RTX 2060 GPU, 8GB RAM, a Core i5-8600 or Ryzen 5 2600 processor, and be running the latest version of Windows 10.

Q: Is NVIDIA Eye Contact compatible with streaming services?
A: Yes, NVIDIA Eye Contact is compatible with most streaming services including Twitch, YouTube, Facebook, and more.

Q: What customization options are available with NVIDIA Eye Contact?
A: NVIDIA Eye Contact allows you to customize the eyes and color palette, as well as adjust the audio settings including stereo sound and built-in noise reduction. You can also choose from a range of preset options for the various video effects.

TL/DR: NVIDIA Eye Contact is a powerful new AI-driven technology that lets you keep eye contact with the camera during presentations and livestreams. Features include vignetting, replacement and removal of virtual backgrounds, mirrored cameras and screenshots. The software requires an RTX 2060 video card, 8GB RAM and either an i5-8600 or Ryzen 5 2600 processor. It also has customizable eye colors and improved audio quality.

How To Connect ChatGPT to the Internet


In the world of AI language models, ChatGPT is one of the most well-known. However, it has a significant limitation: it was trained only up to the year 2021, meaning it cannot provide any relevant news or information past that date. But now, thanks to some clever coding, it is possible to connect ChatGPT to the internet and access up-to-date information. In this article, we’ll explore how this is done and the possibilities it opens up.

Connecting ChatGPT to the Internet

The first step in connecting ChatGPT to the internet is to head to the URL provided in the description and add the WebChatGPT extension to your Chrome browser. After granting the necessary permissions, refresh your ChatGPT tab, and you’ll see additional controls appear below the search bar.

Accessing Up-to-Date Information

To access up-to-date information, simply head to a news story and take note of the name of the person or topic you want to explore. For example, let’s say we’re in the UK and we come across a story about a missing woman named Nicola Bulley. We can input her name into ChatGPT and select how far back we want to search and which country we want to focus on.

Once we click “search,” ChatGPT will scan through three results and generate content based on those articles. This allows us to access the latest news and information, all while staying within the ChatGPT interface.

Customizing the Output

One of the most exciting aspects of this feature is its customizability. By selecting different variables and prompts, users can generate content in a wide variety of styles and tones. For example, it’s possible to create news articles, product reviews, or even creative writing prompts, all with the help of ChatGPT.

In addition, ChatGPT includes default prompts for generating content quickly and easily. Users can also adjust the length of the content, the type of voice it is written in, and more. With so much flexibility, the possibilities are endless.

Using ChatGPT in Professional Settings

This new feature is a game-changer for anyone who relies on ChatGPT for generating content. For example, journalists can use it to quickly gather information and generate articles with up-to-date data. Marketers can use it to craft engaging product descriptions or marketing copy. And educators can use it to generate writing prompts for students.

In addition, this feature is likely to be incredibly useful for non-native English speakers who want to improve their language skills. By generating content in English, they can learn new vocabulary and sentence structures, all while staying up-to-date with the latest news and information.

Conclusion

ChatGPT is already a powerful tool for generating content, and this new feature takes it to the next level. With the ability to access up-to-date information from the internet and customize content in a wide variety of styles, the possibilities are endless. Whether you’re a journalist, marketer, or educator, this feature is sure to make your work easier and more effective.

FAQ:
Q. What is GPT-3?
A. GPT-3 (Generative Pre-trained Transformer 3) is a large language model from OpenAI that can generate text such as news articles, opinion pieces, and marketing copy.

Q. How do I get started with Chat GPT?
A. First, set up the GPT-3 extension for Chrome and grant the necessary permissions. Then, head to the web chat area and type in a keyword or title related to what you want to find news about.

Q. What are the benefits of using Chat GPT?
A. Chat GPT is convenient, free, and easy to use, providing quality content in just a few clicks. Additionally, although the underlying model was only trained on data up to 2021, the extension’s web access means users can still reach the latest news and opinion pieces.

Q. Can I customize the type of article I generate with Chat GPT?
A. Yes, you can customize the number of results generated and the style of writing, such as an opinion piece or a news story. Additionally, you can set parameters such as the length of the article, the language, and the level of readability.

Q. How can businesses utilize GPT-3?
A. Businesses can use GPT-3 to quickly generate blog posts, marketing designs, and campaigns. Additionally, GPT-3 can be used to track Twitter trends and generate Bingo cards.

TL/DR: Chat GPT is a revolutionary tool that helps people stay informed on the latest news and trends. It offers customisable options and is free and easy to use. With its connection to the internet, users can access up-to-date information and opinion pieces in just a few clicks. Whether you’re looking to get the latest news or write your own opinion piece, Chat GPT is the perfect solution.

Metavoice Voice Changer Review

Introduction to Metavoice

Have you ever tried a voice changer? For some, it’s just a bit of fun. For others, it’s a creative tool that can be used to enhance videos, audio and even to create brand new characters. The latest product on the market is Metavoice, a real time voice changer with an impressive array of features. In this article, we’ll take a look at what Metavoice can do and how it compares to other voice changing products.

Unveiling Metavoice

Metavoice is a unique voice changer application built for both desktop and mobile devices, as well as for web browsers. Developed by a team of experienced engineers and sound experts, Metavoice works in real-time, offering users fast and accurate changes to their voices.

The Metavoice website is highly impressive. Everything from the navigation to the visuals is intuitively designed and incredibly professional, making the website and product a pleasure to use.

Features of the Metavoice App

Users of the Metavoice app have access to a wide range of features. Firstly, the app allows users to record their own audio up to five minutes in length. Alternatively, they can also upload a pre-recorded audio file in one of the many supported formats. Once they have their audio, they can then begin to play around with the multitude of voice changing options available.

Metavoice comes with a whopping 25 different voice styles and hundreds of presets that can be used to drastically alter the user’s voice. From female to male and vice versa, deep voices to cartoon-like voices, the options are virtually endless. In addition, users can even customize their vocal style with a range of additional tweaks and effects.

To see how Metavoice performs in action, I decided to give it a try. I chose to record my voice, which took less than 10 seconds. After previewing my original recording, I decided to change my voice to one of the available “catfishing” styles.

The first thing I noticed was just how realistic the transformation sounded. It didn’t sound robotic or overly computerized like some other voice changers. Instead, Metavoice made it sound as though I was genuinely talking in a different voice. The whispers, background noise and breathing sounds were all realistically reproduced too, creating an immersive experience.

Metavoice Plans and Pricing

At the time of writing, Metavoice offers three plans: free, pro and enterprise. The free plan allows for recordings of up to 30 seconds and includes basic features. The pro plan includes unlimited recordings up to five minutes, access to advanced features and API access. Finally, the enterprise plan offers additional customization and dedicated support.

What’s impressive about Metavoice is that the pricing is very reasonable compared to other voice changers on the market. Currently, the pro plan sits at a monthly cost of $9.99.

Is Metavoice Suitable for Professional Projects?

Thanks to its highly realistic sounding transformations, Metavoice is ideal for professionals and creatives who need a reliable and accurate voice changer. The numerous advanced features make it perfect for creating videos, audio and even characters for gaming and other digital media applications.

Using the Metavoice app, users can easily create characters and dialogue of any kind. From robot voices to aliens and monsters, the possibilities are vast. Thanks to its intuitive design, users can transform their voice into anything they desire, making it easier than ever to create characters for video games and other digital media projects.

Metavoice isn’t just for video and audio projects, but it can also be used for music production. With its range of effects and presets, users can easily transform their voices into unique sounds and use them to create unique music tracks. Whether you want to create a catchy chorus or layer your voice in an atmospheric melody, Metavoice has you covered.

Continuous Updates and Support

The Metavoice team is constantly working on improving their product and adding new features to the app. They also offer regular updates and support, so users can rest assured that they’re always up-to-date with the latest version.

One of the best things about Metavoice is its ability to sync with other applications. Thanks to its API access, users can easily import and export recordings from other programs, making it a great tool for teams and individuals who require real-time collaboration.

Metavoice also makes it easy to create character voices for animation projects. With its range of tools and effects, users can easily create a wide array of voices and bring their creations to life in stunning 3D animation.

Voice Distortion with Metavoice

For those who want to add a bit of distortion to their voice, Metavoice offers a great selection of filters and effects. These can be used to create robotic-sounding voices, as well as distorted and modulated sounds.

Metavoice also comes with a live streaming feature that lets users share their creations with others in real time. This makes it a great tool for livestreamers and podcasters who want to keep their audience engaged with unique and interesting content.

Conclusion

In conclusion, Metavoice is one of the most versatile and powerful voice changers on the market. With its realistic transformations and intuitive interface, users will be able to quickly and easily create and edit their recordings. Whether you need to create characters for a video game, animate a scene or just have some fun with friends, the Metavoice voice changer is the perfect tool.

FAQ:
Q: What type of devices is Metavoice compatible with?
A: Metavoice is compatible with desktop and mobile devices, as well as web browsers.

Q: What features does Metavoice offer?
A: Metavoice comes with a variety of voice styles and hundreds of presets for users to choose from. In addition, users can customize their vocal style with additional tweaks and effects.

Q: How much does Metavoice cost?
A: Metavoice currently offers three plans: free, pro and enterprise. The pro plan includes unlimited recordings up to five minutes and advanced features for a monthly cost of $9.99.

Q: Is Metavoice suitable for professional projects?
A: Yes, Metavoice is suitable for professional projects due to its highly realistic sounding transformations and advanced features.

Q: Can Metavoice be used to create characters for video games and animation?
A: Yes, with its range of tools and effects, Metavoice allows users to easily create characters and dialogue of any kind for video games and animation.

Q: Does Metavoice have a live streaming feature?
A: Yes, Metavoice has a live streaming feature that lets users share their creations with others in real time.

TL/DR: Metavoice is a powerful and versatile voice changer application with features such as 25 voice styles, hundreds of presets and a range of advanced effects. It is suitable for professionals and creatives working on videos, audio, animation, character creation and live streaming projects. Its user interface and pricing are also highly impressive, making it an attractive option for anyone looking for an intuitive and reliable voice changer.

Discover What Gave Amelia Player the Edge to Co-Found Prompt Muse!

Amelia Player is an artist, AI researcher, and tech enthusiast with a background in motion graphics, graphic design, 2D and 3D modelling, and the gaming industry. She co-founded Prompt Muse with her brother Alex Player, a teaching platform that bridges the gap between theoretical knowledge of AI and its practical application through step-by-step tutorials and best practices for both beginners and experienced users. With a mission to democratize AI education, Amelia is passionate about empowering individuals and organizations to harness the potential of AI through hands-on learning and expert training. She began her career in the gaming industry, where she learned the different sub-disciplines of game art, from concepting onwards. Relying on determination and focus, she found she could learn new software quickly and has since mastered her disciplines. She is honoured to be a guest on The Craft Podcast.

Discovering Passion and Earning Mastery – Amelia Player

Everyone dreams of finding something they truly love and are passionate about, and then mastering it. For Daniel Tedesco and Michael Du of The Craft Podcast, this dream is explored through their interviews with people who have done just that.

In their latest episode, they speak to Amelia Player, an artist, AI researcher, and tech enthusiast with a wealth of knowledge in motion graphics, graphic design, 2D and 3D modelling, and the gaming industry. Amelia’s passion lies in empowering individuals and organisations to use AI through hands-on learning and expert training, a mission she pursues through Prompt Muse, which she founded with her brother Alex.

Daniel and Michael start the conversation by asking how Amelia began her journey towards mastery in the tech industry. She tells them that she started without any privileges in terms of education or GCSEs, but with a burning determination to find something she loved and be great at it. At first, Amelia studied beauty therapy, but the college gave her access to computers and a library. There she came across books on 3D and graphic design, and this was where her passion truly ignited.

A Passion for Gaming

Amelia’s passion for tech and gaming began when she was growing up, playing games obsessively throughout her youth. When college finished and she started a job in marketing, she quickly transitioned into the role of graphic designer. She admits that she “just winged it” in the position, but software such as Photoshop and 3D Studio Max was easy to get hold of, and she taught herself as she went. Soon after, she managed to land internships in the gaming industry and was able to use the skills she had learned to meet the demands of the company. She then began working as a contractor, creating motion graphics and titles for clients. Because she was also well acquainted with tools like Final Cut Pro and Maya, she could offer a wide range of work to customers. Despite the difficulty of freelancing, Amelia says her projects gave her the chance to do something she really loved and to experience the joy of creating something entirely from scratch.

AI Education

It was at this point that Amelia encountered machine learning, and it changed her life. As her interest in the subject grew, she decided to take the plunge and start her own business with her brother – Prompt Muse. Their mission was to bridge the gap between theoretical knowledge of AI and its practical applications through step-by-step tutorials, best practices, and expert advice. Amelia’s thriving business has given her the chance to share her passion with others, and empower them to use AI for their benefit. At the same time, she has gained the opportunity to prove to herself and others that not having a higher education does not mean one cannot achieve success, as long as they have the passion and dedication to master something. Amelia is living proof that anyone can reach their goals and their desired level of mastery, regardless of their background.

Launching Prompt Muse

In 2022, Amelia and her brother launched Prompt Muse, which aims to bridge the gap between theoretical AI knowledge and its practical applications. Although it was a daunting task, the pair felt passionate about their mission and had faith that it would succeed. They began working hard on the project, and in a short time they had produced numerous tutorials, best practices, and pieces of expert advice to help businesses and individuals understand AI and how best to use it.

Advice for Other Entrepreneurs

Now that Amelia has tasted success with her business, she has advice for other entrepreneurs who may feel discouraged or unsure of what to do. She believes that no two people have the same paths in life, so although there are lessons to be learnt from studying how others achieve success, it is essential to remember that everyone’s journey will be different. Amelia also believes that being able to set your own rules and decide what kind of work you want to do is essential. Having the autonomy to choose your own hours and projects allows for more creativity, which is the foundation of any successful business.

Prioritising Wellbeing

Amelia is also a firm believer in prioritising well-being, as she believes that taking care of one’s physical and mental health is just as important as succeeding in one’s business. Too often, entrepreneurs become too preoccupied with making sure their business is successful that they forget to take care of themselves, but this can have serious consequences. As a result, Amelia advocates for setting boundaries and making sure that you take time out to relax and enjoy yourself. This could involve taking regular vacations or simply having days off to do something you truly enjoy, such as reading a book or going for a walk.

Following Your Passion

For Amelia, she believes that the best way to achieve success is to follow your passions and pursue them with dedication and enthusiasm. Doing something that you truly love and care about not only increases your chances of becoming successful, but also increases your overall satisfaction and happiness in life. When asked about her own success and how she got there, Amelia credits her tenacity and determination to succeed. She believes that, with enough hard work and dedication, anyone can achieve their dreams and be successful in whatever field they wish.

Achieving Mastery Through Dedication

Ultimately, Amelia Player’s story highlights the power of passion and dedication. Despite her lack of higher education and privileges, Amelia was able to find something she was passionate about and master it through hard work. This shows us that anyone can achieve success, regardless of their background, as long as they have the desire, dedication, and willingness to learn.

Conclusion

Amelia Player’s journey is inspirational and demonstrates the power of passion and dedication. It proves that regardless of an individual’s background, they have the capability to master something with enough dedication and hard work. Her story serves as an example to us all that anyone can achieve their goals, as long as they are willing to strive for greatness and make their dreams a reality.

FAQ:

Q: What is the article about?

A: The article is about Amelia Player, an artist, AI researcher, and tech enthusiast who has mastered her field of motion graphics, graphic design, 2D and 3D modelling, and the gaming industry.

Q: How did Amelia begin her journey towards mastery?

A: Amelia began without any privileges in terms of education and GCSEs, but with a burning determination to find something she loved and be great at it. At first, she studied beauty therapy, but later found books on 3D and graphic design which ignited her passions.

Q: What inspired Amelia’s journey into the tech and gaming industry?

A: Amelia’s passion for tech and gaming began when she was growing up, playing games excessively during her youth. When college finished, she transitioned into a graphic designer and soon began working as a contractor creating motion graphics and titles for clients.

Q: What is Prompt Muse?

A: Prompt Muse is a business founded by Amelia and her brother which aims to bridge the gap between theoretical knowledge of AI and its practical applications through step-by-step tutorials, best practices, and expert advice.

Q: What advice does Amelia have for other entrepreneurs?

A: Amelia believes that no two people have the same paths in life, so although there are lessons to be learnt from studying how others achieve success, it is essential to remember that everyone’s journey will be different. She also advocates for setting boundaries and making sure that you take time out to relax and enjoy yourself. Lastly, she believes that the best way to achieve success is to follow your passions and pursue them with dedication and enthusiasm.

TL;DR:

Amelia Player’s story shows that hard work and dedication can lead to success, no matter the background. Her journey serves as an example of how, with enough passion and commitment, anyone can achieve their goals.

Transcript

Daniel Tedesco
All right, well, hi everybody. Welcome to the Craft Podcast, where Michael and I interview experts of various fields to learn about their discipline and how they strive towards mastery. We love talking to people who are passionate enough to master something, and we’ve each been, ah, asking curious questions of these craftspeople our entire lives, but we want to share that with others. The interviews we hold are to the point, informative and fun. You will love all of them. So please subscribe to the channel and like the video. If you love our interviews, let’s get right into it. Michael, who’s with us today? Yeah.

Michael Du
Thanks, Dan. Today we are joined by Amelia Player, an artist, AI researcher, and a tech enthusiast with a background in motion graphics, graphic design, 2D and 3D modelling, and deep experience in the gaming industry. She co-founded Prompt Muse with her brother Alex. Prompt Muse is a teaching platform that bridges the gap between theoretical knowledge of AI and its practical application through step-by-step tutorials and best practices for both beginners and experienced users in the AI industry, sorry, and the latest news and opinions on the industry as well. So, with the mission to democratize AI education, Amelia is passionate about empowering individuals and organizations to harness the potential of AI through hands-on learning and expert training. So Amelia is a master in her disciplines, and we are honored to have her on the show. Amelia, welcome to the Craft Podcast.

Amelia Player
Hello. Thank you for having me. And don’t tell me ChatGPT wrote that.

Daniel Tedesco
No, that was all old school handwritten. Yeah.

Amelia Player
Well, thank you for an amazing introduction then. Yeah, you nailed it.

Daniel Tedesco
But before AI, you weren’t doing AI-related things forever. You started out your career in gaming. We did some LinkedIn stalking and found all these gushing reviews about you from people you’ve worked with in the past at game companies when you were doing game art. And one of the things that stuck out to me is that you didn’t just stick with one area of game art, but you learned all the different sub-disciplines, from concepting and many others that maybe you can introduce us to, and then tell us about your journey to mastering that field.

Amelia Player
Yeah, cool. Yeah. Well, I hope my journey inspires others who may not come from a normal background and might not have the privilege of going to university or higher education. Because I had none of that, and everything I’ve learned and every job I’ve ever been able to get was through passion and showing that passion, and also backing it up with focus and determination and spending a lot of time researching what I was learning and listening to people who know more than me, and knowing when to be quiet and knowing when to speak. It’s a fine balance as well. So my CV is just absolutely everywhere. I actually started my journey not knowing what I wanted to do at school. I was awful at school; my GCSEs probably spell a swear word. It was in the UK and it was bad. And I came out of school education feeling so dejected, and my grammar and spelling are as bad now as they were when I was at school. It hasn’t improved one bit. But I knew I had to get a skill to survive in this world, and I knew that from a young age.

Amelia Player
And I actually went to college and did beauty therapy, but in fact I didn’t have enough GCSEs to be allowed to do beauty therapy. So I ended up doing hair for a year to get myself onto a beauty therapy course. I don’t understand the logic of that, and that was my first insight into education, that processes just didn’t seem to make any sense to me. But I knew I had to get skills, so I did beauty therapy. But when I was doing that beauty therapy course, I discovered I had free access to a library and computers, and, I’ll talk about it in a bit later, I grew up with computer games. I was the ultimate person who played a lot of computer games. So I was attracted to the computers and the internet in the library. I’m quite old as well, nearly 40. Well, not that old, but in the grand scheme of things, middle-aged, I would say. And there were a lot of books as well about 3D and graphic design. And so when I was doing the beauty therapy course, I sort of fell in love with graphic design, and when I finished the course I didn’t do anything beauty therapy related; I went to do marketing for a company.

Amelia Player
And I kind of just winged my way into the business by saying, yeah, I can do that. And I ended up becoming the graphic designer within the company because I would just say, let me do it for you. And back then as well, it was easier to get software like Photoshop and 3D Studio Max for free. You could get access to it if you knew where to look. I don’t promote that whatsoever, and it’s very different nowadays where you have to have serial keys; it isn’t as easy to hijack those as it used to be back in the day. So I had access to absolutely every graphical application in this marketing job and I learned everything, and I realised I could learn software very quickly. I didn’t learn very well in the classroom, but I did learn by teaching myself and working out problems. So if I needed to find a solution and somebody said, could you make this video for us? I would do it, and I would use 3D Studio Max, I would use Photoshop, I would use Paint Shop Pro, which was an old program back in the day. And I would learn the software, basically being paid for my education.

Amelia Player
The company loved it because they loved what I did for them. But also I felt this was such a better way of learning because it’s actual practical use of software. I signed myself up at the time, I used all my money to do an ICT course, I think it was at £2000. It was a lot of money back then for my job. And I went into this course and it was full of people who just didn’t want to learn. And the teacher said, we need to use access to build this database. In order to do this, you have to use this software and learn this. And I was like, well, wouldn’t it be better if you use this? No, that’s not part of the syllabus. And again, that’s another case of I just don’t belong in that formal setting. So I actually wasted all my money and I think about two weeks into doing that course that I dropped out of my hands, not great, but I realized I could learn more actually working and putting a practical use to the software. And then after that job, I could have stayed and moved to London, but I met my husband now who worked there as well and he lived in the middle of the Lands, the Midlands in the UK.

Amelia Player
And so I had to move up here and I thought, well, it’s a good opportunity to try and get a job with what I’ve learned, with no experience at all. I applied for a 3D architectural company and just was honest with him. I said, Look, I love doing three D and I can learn it very quickly and I have, and this is my four year, this is what I’ve got. Gives me a chance, an opportunity, and hopefully you’ll get rewarded and you can pay me less than everybody else, doesn’t matter, just give me a chance. And I was very lucky. And to any viewers as well, knock on doors, always knock on people’s doors because even if they say no, go to the next door and be honest about what you know and be open to so many, you will get so many opportunities that way. And unfortunately, I lost my job due to the recession within the housing market because it was based on housing. But I had enough time in that business to learn 3D really well. Photorealistic three D and from the guys around me in that team as well. Everyone taught a little bit here and a little bit there.

Amelia Player
Then I started up my first of many businesses after that, because I knew I wanted to carry on with 3D but the job market was just completely dead at that time. I started doing 3D visualization for businesses and carried on freelancing until I found a job for a game artist in a city near me. Again I applied for it and was very honest. I said, I don’t know anything about the game industry, I have no background experience, but I am so willing to learn. And I stayed there for six years and became a lead artist. Everyone used to say, well, what university did you go to? What did you learn? And I’d say, I didn’t. I learnt it here, I learnt it from the people around me, I learnt it from Google and YouTube. I didn’t learn anything in an educational environment; it just wasn’t suitable for me and how my mind processes information. I think there are a lot of people like that in the world who feel lost because they haven’t found their thing, or might have thought they found their thing and then realized it wasn’t.

Amelia Player
And I think, taking a word from Silicon Valley, I love it when they say pivot: they have to pivot on everything they’ve done up to that point, they have to change. And it’s knowing how often to do that and when to do it. Don’t do it too often, be consistent, but know when you’ve come to a dead end with something that’s not going to fulfil your mind and your spirit. And 3D has always run through it from the beginning, from finding it in the library and learning the software; it’s always been there. And so I loved my job at the games company, making 3D assets, working with developers, working with other artists. It was such a fantastic job. And I actually started another role as a lead artist in another game studio. And unfortunately, my dad passed away quite suddenly, and he ran an online software business. His last words were pretty much, can you look after this? So I was just like, why? Why now? Again, the directors of that business were so kind to me. They let me bring my laptop in and run his software business as well as do my job.

Amelia Player
But it got to the point where it’s just too much. I had to make a decision and they were just so good to me and I’ve got to take this on because I feel so because he’s asked me to do that, I have to. And one thing I realized, it wasn’t my passion and it was his passion. And I automated that business, his software business, as much as I could and also tried to earn money out of it, which I did and still out, which is great. And it has been essentially what I’ve created now with my brother. So it’s funded it, essentially and bumps my living. It’s not much, but it keeps it all going. So I still run the software business and I did help my dad when he was building it, build that business, though I knew it inside out, but I took it over, automated it. It could have been far better than it was if I was passionate about it, but I realized that I really wasn’t so essentially using it as a bit of like a cash cow, but keeping it the customers happy as well. So the last five years has been juggling that.

Amelia Player
And then I had room to start another business and that created mom, started creating stock images and selling them. I again wanted another automated business, and I realized I could just draw and license the artwork and then draw again and license that artwork. And so that built pretty quickly and did really.

Daniel Tedesco
Well. If we could linger on gaming a little bit longer. First of all, there’s no way, I guess, a LinkedIn profile could do much justice to the story you just told because it’s just amazing and it just shows so much tenacity that well, LinkedIn is made for showing off what brands you associate yourself with. Not passion, showing real passion, and tenacity for discipline. So I’m really glad you shared that story, and I hope you write memoirs someday because I’m sure there’s a million stories within that.

Amelia Player
I actually left home at 15. I was sleeping on my friend’s sofa when I did the beauty therapy course and the ICT course; I actually didn’t have a home at that time. My parents had divorced, and you go through those teenage years, and my brother went through it too. There’s a lot more in depth there, a lot going on, and that’s why I want to share that with anybody watching this: I wasn’t given any opportunity and I had to work for everything. And it has been tough. It has not been easy at all, but I’ve always thought, okay, I just need to get enough money. I don’t need to be rich. I just need to get enough money to keep doing what I want to keep doing.

Daniel Tedesco
That’s really inspirational. And you mentioned your passion for games a bit, and I guess before we kind of go deeper into just kind of the pure AI art, how do you see AI art impacting games? Because that’s something that, if you ask me, things could start happening very quickly. But since you’re much closer to the game, like how game art pipelines actually work, you kind of know more about that world.

Amelia Player
Yeah. So every gaming studio is different, and it depends on looking at big gaming studios will have different types of artists with different types of jobs. So you’ve got a concept artist and they will be given information of what design that the customer or the client and other game studio is working with. So they’ll get an outline brief of what they’re looking for, and they will come up with concepts and designs to fit that brief. And those will then have to be translated. Let’s say we’re talking about a 3D pipeline here to a 3D modeling artist who will then have to take those concept drawings and create the 3D version of that. And that probably sounds quite simple to do. It it’s quite technical to take something that’s two dimensional and turn it into 3D. So the 2D artist usually uses something called a turnaround sheet to concept, which means, let’s say we’re talking about a person who’s a game character, you would have a 2D image of that person at every angle. So when the 3D artist comes to conceptualize or make a 3D model of that, they have a 360 degree view to put into their viewport to create the model or the mesh from.

Amelia Player
And then once the 3D artist has finished, and they might be the one doing the materials as well, creating the clothes and the style and the feel of the character, it will be moved on to an animator who will rig the character, get it ready and put the weights onto it, which joins a skeleton onto the mesh. They will then have the job of creating the animation, say a walk cycle if it’s going to be a video, or it will move on to a developer to put it into Unity or Unreal and make sure everything is suitable for them. And then, if it’s going to be used for a scene, like a video scene, you’ve got somebody who’s going to composite it all together. So it’s a huge production pipeline. All these artists have to communicate with each other and do their job so well that they can pass it on in a nice, neat package that works for the next person. And it can get quite a complex process, especially if you’re not doing a humanoid character; anything else is quite difficult.

Amelia Player
And now, AI isn’t replacing these artists, but what it will do is make some of those processes a lot easier. So from the concept stage, AI isn’t at the point where you can ask it to design something specific; you get what it gives you. Midjourney or Stable Diffusion will chuck something out, you’re essentially just given something, whereas an artist will always be able to come up with a specific idea. So if the sales guy says, well, this is what the client wants, an artist can be more specific about that. AI can’t currently, but that’s not to say it won’t happen. And so that’s the concept artist. And then you’ve got the person making the mesh. Now, essentially, that will probably be done by AI eventually, and I’ve seen some background work and development from companies that are doing 2D images to 3D mesh. The problem they’re having is getting it to retopologise, which means getting the mesh clean and neat for the person to rig it and skin it and bone it. But again, the nuance isn’t there with AI. So if it was a particular character and needed to be custom built, AI just can’t achieve what a human can.

Amelia Player
So that’s why I believe their jobs are essentially safe. But you’ll see 2D-to-3D very soon. It just wouldn’t work in a game studio because of the bespoke nature of the character or the design they need. And I say that, but I don’t know what’s going to happen. It’s the same with the skeleton; people have been trying to automate that pipeline for such a long time now with new plugins, and rigging characters is getting more automated. But in a game studio they potentially wouldn’t use Blender, they would use something like Maya as well. Again, it’s more in-depth and more bespoke. I just don’t see these guys being replaced at all. I can see plugins coming in to help those processes with unwrapping and rigging, but you still need somebody who has that skill set to know: oh, why is its arm hanging on backwards? What do I do to change that? Why am I not getting the emotional animation that I want out of this scene? It won’t be a case of somebody in the sales team clicking a button and it generates a 3D avatar that does absolutely everything they want it to do.

Amelia Player
Because if that happens, then everything becomes vanilla, everything becomes the same. It’s almost like I can read something online and know it’s been created by ChatGPT; there’s no soul behind it. And when it does try and create a soul, it doesn’t work. So I think in the game industry you always need an artist there. And I know I’ve had a lot of backlash on my videos saying, oh great, well, I’ve spent six years learning how to use 3D software and now that’s down the drain. It’s not down the drain at all. You will still hit the point when the magic stops, and you will need to know what to do.

Daniel Tedesco
Right?

Amelia Player
And it’s the things I can do now. At the moment I’m starting my next tutorial, which will be on how to use Stable Diffusion images and project them onto a 3D character. Now that’s fine for someone to do in their bedroom for fun, but that’s not studio quality, that’s not for a game. That wouldn’t work in a studio where someone says, well, we don’t like her face, can you change that? And you go, well, I don’t know how to use Photoshop, it doesn’t allow me. You do still need those skills. So that’s my personal feeling. I feel artists are safe at the moment.

Daniel Tedesco
It sounds like at each step of the way there’s still a lot of need for like human eyes, human common sense, human soul, and people can be helped and things can be sped up through applying these tools. But it’s not going to be like you said, a salesperson clicks a button and a game is made.

Amelia Player
People have this great fear that that’s going to happen, and it’s not. It might be that there’ll be a tool developed for someone at a TV channel to put themselves into a game — yes, possibly. But the pipeline just wouldn’t work like that professionally, where you’ve got to have a good story and you’ve got to have people that reflect that story. It all has to align, and there are so many multiple processes going on. And then when the creative director says, actually, scrap that character, we need to change something on it — well, you can’t go backwards with AI, you can only go forward. So what do you do at that point? You go, well, the button doesn’t go backwards, it’s now rigged and skinned. You need good people, with good knowledge, to be able to interact with that character and create it. And it really is communication with a good team that makes good assets for games. It’s never one person; it’s a team of people. And the same with the developers. The developers have to work and communicate with the artists about how they’re going to integrate it into their game and how efficient they need the textures to be.

Amelia Player
And the mesh has to be very clean for them to use as well in code.

Michael Du
That’s really eye opening because I didn’t know something about the game studio. I know something, but not in this detail. Yeah, thanks for introducing that on my channel.

Amelia Player
I use Blender. Game studios, like I said, are all very different, and it depends on their budgets and their size. Some have artists called generalists who do everything. So it might be one or two people in a game studio, and they literally are the artist and the animator — that would be a lot smaller. You see that on Steam, where lots of people who are enthusiastic come together and form an indie studio. And you find there are artists that wear many hats. So they’re not just an animator, they’re not just a concept artist. They have to do a bit of everything. So it does really depend on the studio, how much they can afford. And I think AI will help those smaller studios get better results. So the generalists would then use better and easier workflows to get games out quicker.

Daniel Tedesco
Right.

Amelia Player
I don’t think yet there is anything to be concerned about. But again, that’s my personal opinion, and I know other people are appalled at art being created by AI. But think how many digital images you’ve seen from AI now — you’ve probably not been in awe of many of them, because there’s nothing behind it. There’s no concept, no story. It’s just generated by AI and machine learning to make it fit the golden ratio, to fit colour profiles that work for the eye. And we talked earlier, before the stream went on, about, let’s say, Midjourney. It’s all machine learning. So when you click and upscale an image — I’m not saying Midjourney does that, but the machine learning part of it is that that information is all being gathered about what makes a good image. You’ve got 8 million people doing that. They have all that data to produce a better-looking image, all that data set to create what looks nice. And that’s why a lot of these images end up looking like the same type of woman, because that’s what is pleasing to the human eye. And you don’t really get many disfigured people, because people are just saying, I want that to look like a good image, whether that’s morally right or not.

Amelia Player
That’s why, within these things, the creativity isn’t there to produce something new, something ground-breaking. The AI artists that I’ve seen do really well just have a consistent theme. There are lots I follow on Instagram that consistently do something very well. But other than that, I haven’t seen anything that I’ve gone, wow, that’s really amazing.

Daniel Tedesco
Right?

Michael Du
Yeah. So what inspired you to start exploring this AI-generated stuff and also to start Prompt Muse? And what’s the motivation behind it?

Amelia Player
It’s me and my brother — who isn’t here because he’s camera-shy, but he does exist. We talk about ChatGPT a lot on the website as well, because that’s his thing. So I had a business before this, creating stock images and illustrations, and I started going down the digital route. So I’d create these images to sell and license, and my brother rang me up and said, you need to look at this, this is going to destroy your business. It was DALL-E, or something like that — one of the first versions, and it wasn’t very good at all. It was like, you could make sushi with arms or something. And I was like, my mind is blown. So I started looking at it, actually looked at it, and went, I could generate a lot of images here, and if I can get them into a consistent set, then I could sell these. But unfortunately, because the terms and conditions were so murky with AI images, I could never work out a way to commercialise it, because I would never own those images to license them out, and it could come back on me.

Amelia Player
All of that was really interesting. And then Midjourney came out; I just had my finger on the pulse and tried to integrate it somewhere, either in the software business or in the art business. And I did what I always do — I just learn everything I can at that time. And my brother was as well. We were really giddy, ringing each other up, saying, look what I found, this is amazing. So I said to him, I’ve got this name; if we could put a website together — he could do that for me — I think we could put some information out there and sort of become a central hub for information. We didn’t think it would go anywhere. We just thought, well, we’ll just share our passion with people on Reddit and on Discord, and have the website just to hold all the knowledge and see where it goes, and have almost a history of where it started, so we could see the articles and how everything grew — because AI is going to be absolutely big, and machine learning as well. And so, yeah, that’s how it kind of happened. He actually does SEO; he’s self-employed.

Amelia Player
So I asked him to build it with me. I just needed somebody to talk to about this — he was the only person I could talk to about it. That’s why it happened. And the name, Prompt Muse, as well — I was just looking for ‘prompt something’, and I went through all the trademark names, and Muse was free, and I was like, I don’t know. Unfortunately, there’s no magical tale behind the name or anything. I just had to make sure I could get the trademark, could get the website, and could get every single account under the sun, and that was the one that had them all free, so it just worked out really well. I was quite lucky with that. And it’s just a side project; we wanted to share the information, and so I just put a few YouTube videos out of me using Midjourney, probably pretty badly, and people liked it because it was non-technical and they could see that I was learning as well, and I tried to keep it quite slow-paced. I’ve watched so many YouTube videos — I feel like sometimes YouTube is a bit like The Matrix: if you want to learn something, you just watch them.

Amelia Player
And I’m used to watching, like, two-hour YouTube videos back in the day of somebody creating something in 3D Studio Max, and just going, oh, I wish I could see where their mouse is, or, you know, certain elements. So I try and incorporate all those things that I wish I’d seen in those tutorial videos into mine. So zoom very close into a window rather than seeing it from afar, because I realised people were probably watching from their mobiles as well. So there are just little touches like that I try and help viewers with, and I’m not there yet, I don’t think. Sometimes it’s a time thing, a time constraint, just to get a video out, and also just making sure that the videos are not just advertisements for products. Because the channel started doing well, I got a lot of companies contacting me saying, would you be able to do this for money? I was like, oh, that money would have been so nice, but it wasn’t in line with what we wanted the channel to be. We wanted it to be honest. So if something stopped working and the computer stopped working, we’d show it. If we felt Stable Diffusion 2.1 wasn’t very good, we’re not going to show it — not that Stable Diffusion approached us, those weren’t the approaches, but companies that have…

Amelia Player
…these apps — so many apps that have AI — and they were like, well, we’ll pay you money to show us on the channel. And it’s just turning down those offers and going, no, we’ve got to stick to showing what’s new and how we can integrate it, so somebody could make a book for their kid or be able to learn Stable Diffusion without being put off by it. But I also understand that I have these issues as well. When they run out of RAM, I run out of RAM as well. PyTorch doesn’t install? That happens to me as well sometimes. But it’s about understanding it and breaking it down so it doesn’t go over people’s heads. That’s essentially what it is. Me and my brother, we love it and we love how quickly it’s evolving. It keeps our attention, definitely keeps our attention — how quickly it’s coming on, the good sides of machine learning and AI and the bad sides of it as well. It’s exciting. And I’ve had a few death threats as well along the way — people saying they know where I live, that they’re local. I’m like, oh my God. That’s what’s interesting.

Amelia Player
It’s like, I’m just showing other users how to use the software. I am not OpenAI and I’m not Stable Diffusion. I’m not these people. I’m just showing the product. I’m not endorsing them either.

Michael Du
Yeah, that’s very nice. So are there any specific sources you follow, or practices you keep, to stay cutting edge and be at the front of the whole AI development? It moves so fast — the pace of change is so fast. How do you keep—

Daniel Tedesco
Stay ahead?

Amelia Player
Yeah, that’s a really good question, because a lot of the development that is happening is open source. So it’s a lot of developers that are working independently, and it’s trying to find their work. And that’s via Twitter, through Reddit, through Discord. I am a member of so many communities at the moment, and I don’t actually watch TV. I’m just on these communities watching what’s going on and seeing anything new. And developers can sometimes be very humble about something they’ve created. And I’m like, what the heck — what you’ve created is amazing, I need to show that. And they say, I haven’t finished it yet, because that’s going to change. But some people will be able to create amazing things with what you’ve created. And so many of them are open to me using essentially what they have created, their workflows. And so I just reach out and email, and that’s how I find these guys as well. So it takes a lot of looking. And I’ve spent a week trying to get something to work in Blender for the YouTube channel, but I realised it was just too much to integrate into a normal person’s PC.

Amelia Player
It took a whole day to install. It took about 20 or 30GB of RAM, and I just scrapped the whole thing in the end, because I thought, it’s not there yet. The guy’s got something good, but it’s just too slow for a normal person with a normal computer who just wants to be able to create something quickly. So there’s a lot of this background work, and sometimes there can be a large gap between videos. Sometimes I find something and I’m like, oh my God, that’s absolutely amazing, but then when I actually get it working it’s not working as well as I thought it would, or it crashes too often. So there’s a lot of research that goes on as well. So the channel isn’t just, oh, look at this new AI software, isn’t this fantastic? I really get into the bones of how it works and whether anybody can run it on their computer, because I’ve got 4GB of VRAM, not a good PC — somebody said the other day that you can hear it in the background of the video because it sounds like it’s about to die.

Amelia Player
But I like to keep that, because I like to think, well, this is probably what everybody else has got as well — not a £2,000 spec machine, like a lot of people have. I wanted to make videos that are suitable for the masses, not just the people who have the money to buy powerful computers or rent good computers. I do get a lot of comments when I run things on Google Colab, and they’re like, why didn’t you just run it on a more powerful PC? That’s the whole point — because you can’t access this through a PC like mine. So I try and make it accessible to all, really.

Michael Du
Cool.

Daniel Tedesco
So I feel like there’s I mean, that’s I love hearing like the nuts and bolts of how all the videos come together because I’ve just, you know, I’ve seen a bunch of the finished products but like, knowing a bit about what goes on behind the scenes, it’s kind of powerful to see how it all comes together. And it’s just the beginning, right? You guys started, I think you said like three months ago, something like that.

Amelia Player
Yeah, it’s just — there are so many people interested in it, and I think it’s because they can see that we’re honestly just trying to show workflows that work. And my brother does a lot of workflows with ChatGPT and Excel and Google Sheets as well that have really helped a lot of people. We do get a lot of emails — and sorry if we haven’t replied to your email, it goes very quickly — but we sit down and respond to all our emails. And there are a lot of people that then send in donations, which help massively, because they said this has helped them at work, this has helped them get a job, this has helped them be able to do something — write a book with their child, things like that. And that’s really inspiring. And that really helps us to continue to try and develop new techniques and understand. We read every single comment, the good, the bad, the ugly, because we want to know what people want from this. And a lot of people want to tell a story. Everybody’s got a story to tell, whether it’s about their life or it’s fiction — they all want to tell a story.

Amelia Player
And I think AI will allow them to create a book. And it’s not even for monetary purposes. A lot of the time people just want to be creative, not have to learn a 3D program or a package. They just want to be able to create a consistent image and write text for it — but do it creatively, not just tell ChatGPT, write me a story, and then create images from that. They want to put their own spin on it as well. So we’re heading towards that very quickly. And I say it’s great to live with this kind of technology, but realistically we do live in a horrible time. There’s a lot of poverty, there are wars everywhere. And this just takes you away from that — you can focus your energy and your mind on creating something beautiful. And that’s a good thing.

Daniel Tedesco
Yeah, for sure. One of the things that definitely shows in the videos — and the videos are well done, really understandable, even from the very beginning of Prompt Muse. But I’m sure this, as us Americans say, wasn’t your first rodeo. In our research we came across The Creative Mum, so you had done tutorials before. How did you build up the skills of doing good tutorials? It looks natural to a viewer, but I’m sure that learning process took a lot.

Amelia Player
Yeah. On the last video I did — and I only mentioned it in the comments — the video just looks like it’s done in one smooth take within an hour. Hassle-free. But on that last video I actually had a complete computer meltdown: it blue-screened halfway through and I lost my whole hard drive, but I got it back. There was a lot that went on behind the scenes in the video that gets cut out, obviously. And there are a lot of moments I go, do you know what — does everybody want to see this? What are you doing putting this on the internet? Or, worse than that, you’re just going to get ten views on your video. And that’s another thing as well — it just feels like, sometimes, am I wasting my time and everybody else’s time by producing the video? There’s a lot of self-talk you’ve got to get out of the way to get the video made. So it’s not as streamlined as it looks. And if you found the videos from 18 months ago — unfortunately, I had to lock down so much because of the death threats.

Amelia Player
I was told by the police, actually, because it got quite bad, that I had to lock down all my other social media platforms. My face is out there. But there are some crazy people on the internet, and I was aware going into this that that could be an issue. So if anybody tried to find some of that, they might not find much. I don’t know — I probably didn’t do a very good job of hiding anything, but I had to take everything down. But there might still be some videos out there. I essentially was making videos on how to take the artwork that you could buy from my website and turn it into mugs and bags. And that was my start with YouTube. And I was terrible — I probably still am terrible on YouTube — but you just fail your way to success. So I’m not a success at all, I’m just failing my way there slowly. I can’t remember the saying, so forgive me, I’m going to say it wrong, but perfection is the enemy of done. You want it to be perfect, but it never will be, and if you aim for perfection, you won’t get there.

Amelia Player
So sometimes you just have to suck it up and go, well, this is the best I can do; tomorrow I’ll do better. And I’ve spent, like, three weeks trying to work on the next video, because there are things I’ve scrapped, or whole processes where I’ve gone, it’s too complicated, actually, I’ve gone down a rabbit hole here, it’s not working. And I just have to either go with it and stick with it, or, like I have done, scrap the whole thing and start again. And that’s three weeks of work that no one sees. Nobody sees the late nights trying to install or get things working, or when I’ve got it working once and then come to record and it’s not working at all. And there’s so much research that my brother and I do — it takes over our lives. We go to quite a lot of expos as well, machine learning expos, just to see what other people are doing. You learn from other people all the time. And again, with the developers that I talk to, the amount I’ve learned from these guys is unreal. The amount of knowledge they have in that sector is crazy.

Amelia Player
It’s inspiring as well, for sure.

Daniel Tedesco
Yeah — well, to bring it full circle, it sounds like the work that you’re doing is probably helping thousands, or eventually millions, of Amelias who don’t like their school experience but can find content like this, and it will help them get skills so they can create the kind of art they’re passionate about. Is that something that’s in the back of your mind? We hadn’t made that connection before, but having spoken with you more now, that seems like a really heartfelt motivation for creating this type of content.

Amelia Player
Yeah. So there’s no paywall at all, there’s no Patreon. I don’t expect people to give me money whatsoever — I just have a buy-me-a-coffee fund. But I don’t want people to spend money they don’t have. I want people to learn. There were times in my life where I just didn’t have the finances or the opportunity to learn, but I used the resources that I had just to try and get to where I wanted to be at the time. And YouTube was a big part of that — being able to access YouTube and learn how to use a 3D software package from people who know. I was just amazed. And this was before Patreon came around, before people started putting up paywalls — to see the rest of the video, you need to sign up to this. And people need to make a living, and I understand that’s why they do it. You can’t just continuously do something for free; no one’s subsidising you. But I do feel like the reward will come from somewhere. Yeah. If you do something good and have the right intentions, something will come back — my whole life has worked like that — something better.

Amelia Player
It will happen if I’m genuinely consistent with it and continue. And it feels right — everything feels right about this. And I didn’t mention that when I was doing Creative Mum, a big pivotal moment actually happened: I got COVID and I got very sick. Very sick. In fact, I was in resus and I died — it was really bad. They brought me back, and I just felt like I had another opportunity to do something. My immune system completely failed. I was a lot slimmer than I am now because I was on special drinks, because I became allergic to everything. And it’s something a lot of people are dealing with now, and it’s not in the media, for multiple reasons. And I couldn’t work during that period. I couldn’t do anything. I just lay there and watched Netflix in bed because I was so ill. I was having allergic reactions to everything — every food, caffeine. I’m a caffeine junkie now; I’m back, I’m a lot better. I was going to allergy specialists, and they were like, it’s your white blood cells that have been affected by COVID. There’s nothing we can do; you just have to take all these tablets to get through your days — antihistamine tablets.

Amelia Player
So they knock you out. They absolutely do. So I went from being like this to being bed-bound for a long time. And I couldn’t continue Creative Mum and do all this artwork and stuff as I was doing before, because I was so poorly. And that’s why my brother was like, have you seen AI? Could you integrate that, maybe? I was like, yes, that would be a great way to create artwork. So that’s how it all came about. I had this major blip, and that was last June — that wasn’t long ago. It’s funny, life is like that, because I’m healthy, I’m youngish, and I never thought COVID would affect me. Never in a million years. I was like, yeah, that’s what older people get, or if you’re ill. And then when I got it, I got it bad. Yeah, I was very sick for a long time, but I’m healed now, I’m totally recovered, my immune system is completely back. But it’s made me think, God, life can just be taken, just like that. Or not even life — your health can be taken away straight away. So that’s why I am just so determined to push this and get the information out there.

Amelia Player
Even if I’m shivering, I’ll get something recorded and out there, and there’s a lot of pressure with that. But I think if I hadn’t got very ill with COVID I would not be doing this; I’d still be plodding along with The Creative Mum. But, yeah, getting sick like that really made me think, well, what would make me happy? And sharing knowledge and experience — what knowledge and experience I do have — is something I really enjoy. And I like learning as well. So that’s all of it summed up. But, yeah, there’s a lot there to take in. A lot has happened. It’s been a ride, for sure, it really has. But I think what that reflects, again, is that it’s not easy street, it’s not at all. You’ve just got to keep going — if you’re going through hell, keep going. I just feel like AI and all this movement has really helped me get better as well.

Daniel Tedesco
Wow, what a powerful story. Thank you so much for sharing that with us.

Amelia Player
Yeah, quite a bit. Because it really did feel like it was the end.

Daniel Tedesco
Wow. Well, I mean, despite that struggle, it sounds like you’re making the most of it.

Amelia Player
Yeah.

Daniel Tedesco
Like, kind of the best kind of outcome you could hope for. Not only recovering, but kind of having this new dimension of purpose in life.

Amelia Player
Yeah, massively. And I do feel like that — a lot of people say that when it happens to them. I was awake and I saw the tunnel vision come in, saw it go black, and they had to put adrenaline into me to get me back. It’s so vivid. When you see that tunnel and it all goes black, it makes you rethink your life and what you’re doing. And I thought, well, actually, now I feel like I’ve got something to do, and I have a mission to help others create workflows with AI. That’s why I feel so committed to it — because I feel like, yeah, this is the cause here, and I don’t want to put any paywalls up. I don’t do any of this for money, ever. So if I show you something or promote something, it’s because I think it’s good and it works.

Daniel Tedesco
It’s amazing. And it’s just the beginning. It’s only been a couple of months so far.

Amelia Player
I know, it’s pretty crazy, and in those months it feels like a lifetime. And me and my brother, we have no sort of forecast of where we’re going, because everything is just heading in so many directions — I don’t think you can forecast where you’re going. But we are consulting for businesses now, which is insane. Like I said before, we’re linking up developers who have created programs that could help studios and artists alike within their work, and cut out pain points and speed up creativity rather than getting stuck with the laborious work. So none of it is cutting jobs. All of it is essentially just streamlining the pipeline and encouraging creativity, not stopping it. Nobody’s typing a prompt in and getting a finished character — not yet, anyway. It’s the boring processes that nobody wants to do; it’s trying to eliminate those, which is more cost-effective.

Daniel Tedesco
Yeah, and I guess as a last thing. So how should folks follow your journey and learn more about what you’re up to, what Prompt Muse is up to?

Amelia Player
Well, I’m absolutely everywhere as Prompt Muse. I’m on YouTube, I’ve got a Twitter, which is Prompt Muse, I’ve even got a TikTok — but I feel like I shouldn’t be on there, I don’t belong there whatsoever. I’m on Reddit as well; you’ll see me posting a lot in the Stable Diffusion and AI model sections and things. Yeah. So if anybody needs to contact me — I’m sorry, but I’m really bad at the moment, because sometimes I turn the emails off and get on with work.

Daniel Tedesco
Got videos to get out?

Amelia Player
Yeah, I’ve got videos to get out. But I do love reading the messages people send me, and that keeps me going — I’m totally inspired by their stories and why they’re using AI. I was quite insular before all this. I’m not very social at all — I don’t tend to like being around people — but I find online it’s different. When somebody writes you an email, they write it from the heart, if that makes sense. And you don’t have to look at somebody, or they don’t have to pretend to be something else. In a virtual environment you are more open about who you really are, if that makes any sense, and 3D avatars all play into that. I don’t like using the word metaverse, but I’m going to use it: when the metaverse comes, I think people like me will socialise on it. It’s not for everyone, and I know a lot of people do dislike it and feel like it’s actually not good for mankind. But there are people out there for whom it actually is good to talk online. So I’m looking at that, and all of this is flowing nicely into the metaverse. So avatar creation — I’m looking more into creating your own avatar in the metaverse and creating clothes for it and stuff. That would be very cool.

Amelia Player
And that’s where I think potentially all this is going. We’ve never created a successful metaverse yet that is enjoyable, but I think it will come one day. And all these 3D characters and all these creations will have a part to play in that environment at that time. So it’s not just doing something for the sake of doing something — I think it will lead there. And I don’t know if your viewers know of NVIDIA Omniverse. If they have a look at that — NVIDIA are really pushing the Omniverse. That’s a pipeline for how to get characters to lip-sync with your voice and get all of that into the metaverse. I hate saying metaverse, I hate it — it’s like using the word AI all the time when it’s not AI — but it’s pushing that into a virtual environment as well. But like I say, it’s not for everybody, and some people just go, I’ll just go out instead. I grew up with the old ICQ and MSN chat as a way of socialising. Probably a bad thing, actually.

Daniel Tedesco
Well, okay, so YouTube, Reddit, all these other places, and someday soon Metaverse will find your Metaverse avatar. Maybe you’ll be doing tutorials in one of the whichever Metaverse actually becomes a mainstream thing. Maybe we have our round two interview in the Metaverse when that comes.

Amelia Player
I think, like NFTs and crypto, it’s all early. It’s all going to tie in; everything’s going to make sense one day. We look at these things and say, oh, it’s a bit scammy, or it doesn’t make sense — why do I want to buy an NFT when I could screenshot it? At the moment, maybe it doesn’t make sense, but I think everything will align at some point and all work out to be something good. Hopefully. Maybe not now, but hopefully — I’m forever an optimist. But yeah, there will be negatives, of course, as long as we keep learning how to stop it, if we need to stop it or unplug it. I spoke to you before about how there are anti-AI artists, anti-AI people, and I get that, and I understand it’s quite a scary thing to see, especially if you feel frightened by it. But I think burying our heads in the sand and ignoring it and not talking about it is worse. I think that’s the most dangerous part. I think learning how it works, and putting laws in place to stop people developing something that they shouldn’t, is essential. And we’re now seeing, in the art community, lawsuits happen.

Amelia Player
Rightly or wrongly, these lawsuits need to set a precedent for laws that are going to come in the future to control it. Although I understand people’s concerns about AI, I think ignoring it and just banning it is not going to help. I think laws will help control it, hopefully. But we spoke before about this thing of it only taking one person to be a bad apple and make something that shouldn’t be there or could potentially be dangerous. You could say that about everything, though. It’s here, and we’ve got to deal with it, and it’s going to evolve quicker than we think it’s going to.

Daniel Tedesco
Yes — technology is always a double-edged sword. I guess what we can do is try to understand it, help each other better understand it and how to use it, hopefully in the right ways. So, our guest today has been Amelia Player. Amelia, thanks for being part of The Craft. And for all of you listening and watching, thanks so much for tuning in to The Craft. For more information about this episode and other episodes, you can search The Craft podcast by Michael Du and Daniel Tedesco on YouTube or anywhere you get your podcasts. See you next time.

<p>The post Discover What Gave Amelia Player the Edge to Co-Found Prompt Muse! first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/discover-what-gave-amelia-player-the-edge-to-co-found-prompt-muse/feed/ 0
GPT3 Replies to EVERY comment on THIS video (within 10 min) https://promptmuse.com/gpt3-replies-to-every-comment-on-this-video-within-10-min/ https://promptmuse.com/gpt3-replies-to-every-comment-on-this-video-within-10-min/#comments Mon, 30 Jan 2023 14:06:29 +0000 https://promptmuse.com/?p=1486 Pabbly Connect Life Time Deal Offer   IMPORTANT: You must be viewing the video in YOUTUBE to comment. Click HERE to goto Youtube. Transcript Hey, GPT threers. Today we’re going to be showing you how to create an autoresponding YouTube bot which responds to every comment. Don’t believe me? Leave a comment down below and [...]

<p>The post GPT3 Replies to EVERY comment on THIS video (within 10 min) first appeared on Prompt Muse.</p>

]]>
Pabbly Connect Life Time Deal Offer

 

IMPORTANT: You must be viewing the video in YOUTUBE to comment. Click HERE to go to YouTube.

Transcript

Hey, GPT-3ers. Today we’re going to be showing you how to create an auto-responding YouTube bot which responds to every comment. Don’t believe me? Leave a comment down below and it will get back to you. Now, I’ve been speaking to myself in the comment section like a madman, but it finally works.

I’ll show you how. Let’s jump in.

Okay, so the first thing we’re going to need to do is head over to openai.com and log in. Click API and then select Log in. Just log in as usual. So what we’re going to do now is head over to Playground, and this is where we’re going to work out what exactly our prompt will be. Okay, now we’re over in Playground.

We’re going to do a skeleton just to see what type of responses we’re going to be getting with such instructions.

So after some playing around, I’m finally happy with the model I have. So I decided to go for this format. So I’m giving it the information of what the comment will say. I’m just being very optimistic here. Love your video.

The commenter’s name — I’ll go for the name Alex — and how to respond. So this is just instructions for the AI on how to respond to the comments. So: pretend that you’re a GPT-3 technology YouTuber. Respond to the comment in a humorous style and in the style of a tech enthusiast YouTuber. The average response is between 100 to 200 words, and then end your message properly. Sorted.
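(For reference, the finished prompt ends up looking roughly like the template below. This is a reconstruction from the video rather than the exact wording, with “Love your video” and “Alex” as the placeholder comment and commenter used in the demo.)

Comment: Love your video
Commenter's name: Alex
How to respond: Pretend that you are a GPT-3 technology YouTuber. Respond to the comment in a humorous style, in the style of a tech enthusiast YouTuber. The response should be between 100 and 200 words, and should end with a proper sign-off.

Response: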

So if we hit submit, we can see an example response, and this looks pretty good, so I’m happy to go on with that. So for this exercise, we’re going to be using Pabbly Connect. It’s pretty much similar to Zapier, but a lot more budget friendly, as far as I’m aware. They do offer a free plan which provides 100 tasks per month, so you can give it a try before you buy.

Right now they do have a sale on for 30% off, but if you do use the URL in the description, you can get lifetime access for a one-time charge. So it’s up to you which option to take. But for the sake of this demo, you can create a free account. So sign up and let’s go. So here we are in Pabbly.

We need Pabbly Connect — this is the application we’re going to be dealing with today. And so once that’s loaded, we pop over and click Create new workflow. Give it a name; let’s call it YouTube Bot. That will do.

And we’re off to the races. So Pabbly Connect is very tiered, very linear. So this is the first step, the second step, and then you just add them in steps, so it’s kind of easy to follow. The first step that we’re going to need for this bot is YouTube. So type in ‘You’ and select YouTube.

Once we’ve done that, we need to add a trigger event. The trigger event is what causes the next action to happen. It will check for the trigger every ten minutes. So in our case, we want the trigger to be when a new comment is made on the video. Splendid.

Now let’s hit Connect. And then you’ll probably need to connect your YouTube channel. So you add a new connection, give it a name and then connect to your YouTube account. I have mine already done, so I can just select existing connection and hit Save.

Next, we need to say which YouTube video we want the bot to be on. My video isn’t actually published yet, so what we can do is pop over there, give it a click, paste in the YouTube video URL and then just delete the prefix, so we just have the YouTube video ID. And that’s it, we’re done — first step. So, as you can see here, it’s pulled in all the information from the comments on the video — the dates, the timestamps, everything we’re going to need.
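(To make the “delete the prefix” step concrete with a made-up example: a URL like https://www.youtube.com/watch?v=abc123XYZ becomes just abc123XYZ.)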

So that takes us to the next step. Step two. So in step two, we need to search for open AI. Once that’s done, we need an action event. So give that a click.

For the sake of this demo, we’re going to be generating content. But if you want to make it more complicated, we could check moderation first and then go on to generating content, to make sure there’s no foul language. But I trust the bot will be good enough on its own. So let’s try this with just generate content for now; give it a click and press Connect. Next, it’s going to ask for your API key.

So if you remember, this can be found over in Openai.com. Go to Personal and then view API keys and you can generate them there. So give it a name, put in your API key. I have mine already done. And click save.

Okay, so let’s go through this OpenAI section. So first we want to choose the model. The best one is currently DaVinci Three. That was also the most popular, so give that a punch. And now we want the prompt.

For the prompt, let’s go back to our playground text that we had. And we just want this section. We don’t want the response, we want the prompt itself. So, moving back over to Pabbly, we were going to paste in our instructions and fill in the blanks. So love your video.

Let’s remove that. And we want that to be the text original. Not the display, but the text original. Next, for commenters, name remove Alex. And we are going to go back over to our YouTube information and we are going to select Author, Display, Name and that’s pretty much it.

So play around with the prompts, preferably in Playground, and then paste it in when you’re ready to go. Max tokens can stay at 256. And then the others, as mentioned before — especially presence penalty and frequency penalty — can stay at default. But if things do get a bit repetitive, then play around with them. Finally, Save and send test request.
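(If you’re curious what Pabbly is actually doing under the hood in this step — or want to script it yourself later — here’s a minimal Google Apps Script sketch of the same OpenAI call. It’s an approximation, not Pabbly’s code: it assumes the v1/completions endpoint and the text-davinci-003 model that were current when this video was made, and OPENAI_API_KEY is a placeholder constant standing in for your own key.)

function generateReply(commentText, commenterName) {
  // Rebuild the same prompt structure we set up in Playground.
  var prompt =
    'Comment: ' + commentText + '\n' +
    "Commenter's name: " + commenterName + '\n' +
    'How to respond: Pretend that you are a GPT-3 technology YouTuber. ' +
    'Respond to the comment in a humorous style, in the style of a tech ' +
    'enthusiast YouTuber, in roughly 100 to 200 words.';

  var payload = {
    model: 'text-davinci-003', // "DaVinci 3" in the video
    prompt: prompt,
    max_tokens: 256,           // same limit used in the video
    temperature: 0.7           // assumption: the video leaves this at its default
  };

  var response = UrlFetchApp.fetch('https://api.openai.com/v1/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + OPENAI_API_KEY }, // placeholder constant
    payload: JSON.stringify(payload)
  });

  // Return just the generated reply text.
  return JSON.parse(response.getContentText()).choices[0].text.trim();
}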

Okay, so once we’ve done that, we just move on to the last and final third step. So let’s hit the plus sign. And now we want YouTube again because we’re going to be posting to YouTube and now we’ve got our contents. So once you’ve selected that, head over to Action Events. We want reply to comments.

There we have it. And connect. We’ll use the same connection that we had already set up in the previous step. So hit save and then we need to say a what are we going to say? So in our case, it’s the text that we created.

And then finally we want the comment ID, so it knows which comment to reply to. So in this case, let’s select YouTube and find the comment ID. We can find that just by typing ‘comment I…’ — and here we have it, the top-level comment ID. And then we can save and test that request.

That’s pretty much it now. So let’s pop over to the video, leave a comment and then wait ten minutes. And there we have it — the comment bot is ready and working. It’s a little bit on the long side, so I’ll probably tweak that. But yeah, good to go.
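(And if you ever wanted to skip Pabbly entirely, the same three steps — read new comments, generate a reply, post it — can be sketched in Apps Script using the YouTube Data API advanced service. This is only a rough sketch, not what the video builds: it assumes the advanced YouTube service is enabled for the script, reuses the generateReply helper sketched above, and would need to run on a time-driven trigger, say every ten minutes, to mimic Pabbly’s polling.)

function replyToNewComments() {
  var videoId = 'abc123XYZ'; // placeholder: your own video ID

  // Step 1: pull the latest top-level comments on the video.
  var threads = YouTube.CommentThreads.list('snippet', {
    videoId: videoId,
    maxResults: 50
  });

  (threads.items || []).forEach(function (thread) {
    // Crude "already answered" check: skip threads that already have a reply.
    if (thread.snippet.totalReplyCount > 0) return;

    var top = thread.snippet.topLevelComment;

    // Step 2: generate the reply text with GPT-3.
    var replyText = generateReply(
      top.snippet.textOriginal,
      top.snippet.authorDisplayName
    );

    // Step 3: post the reply under the original comment.
    YouTube.Comments.insert(
      { snippet: { parentId: top.id, textOriginal: replyText } },
      'snippet'
    );
  });
}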

Go and leave a comment and check it out. Like, subscribe and take care. Bye.

<p>The post GPT3 Replies to EVERY comment on THIS video (within 10 min) first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/gpt3-replies-to-every-comment-on-this-video-within-10-min/feed/ 2 GPT3 Replies to EVERY comment on THIS video (within 10 min) nonadult
GPT3 Auto Scraper & Content Re-Writer + SEO https://promptmuse.com/gpt3-auto-scraper-content-re-writer-seo/ https://promptmuse.com/gpt3-auto-scraper-content-re-writer-seo/#respond Thu, 26 Jan 2023 23:06:50 +0000 https://promptmuse.com/?p=1429 Are you ready to take your content creation game to the next level? With GPT-3 and Google Sheets, you can now rewrite content from multiple URLs in bulk, making it better than the original and completely undetectable. Unleash the power of AI and create amazing content that will stand out from the crowd. With just [...]

<p>The post GPT3 Auto Scraper & Content Re-Writer + SEO first appeared on Prompt Muse.</p>

]]>
Are you ready to take your content creation game to the next level? With GPT-3 and Google Sheets, you can now rewrite content from multiple URLs in bulk, making it better than the original and completely undetectable. Unleash the power of AI and create amazing content that will stand out from the crowd. With just a few clicks, you can create content that will help you rank higher on search engines and draw in more traffic. Get ready to join the ranks of the ultimate content creators!

Links

APIPHENY (API connector for Google Sheets) – LifeTime Deal Currently only $99

Copy FREE Sheet from here: https://docs.google.com/spreadsheets/…

Special thanks to Mike Hayden, https://autosheets.ai/

Transcript:

Hey, GPT-3ers!

Are you ready for the ultimate content creator’s dream? Or maybe nightmare, depending on how you look at it. Either way, we’ve got twelve magical words that are guaranteed to get you excited: rewrite content in bulk that is better than the original and undetectable. Yes, you heard that right. With the power of GPT-3 and Google Sheets, you can now rewrite content from multiple URLs — I’m too excited — from multiple URLs in bulk, making it better than the original and completely undetectable. So are you ready to join the ranks of the ultimate content creators? Let’s go. Welcome back. So the first step is you’re going to need an OpenAI account to do this. Head over to openai.com — the link is in the description below — and then click API. Sign up, and here you’ll be prompted to add your email address and do a reCAPTCHA. For the sake of the demo, I’m going to continue with Google as I already have an account. You will then be asked for a phone number verification. This is basically just to make sure that you’re not someone trying to spam many accounts to get free credits. So pop in your number, send the code and complete the verification.

Once in, click on Personal and click Manage account. Over here we want to head to Billing, and we want to add our payment method. If you want to see how much it is exactly before you start spending, head over to the options, click Pricing, and then it tells you how much it costs per image generation or per word generation. Okay, we’re nearly ready to go, so let’s click Personal, View API keys, and this is where you can create and select an API key. This key I will delete — obviously you don’t want to share your key, because basically that key will allow people to charge your account, so keep it a secret. Now, with that out of the way, I would like to say a special thank you to Michael Hayden of Autosheets AI (link in the description), who allowed me to use his workflow — pretty much, this wouldn’t be possible without him. So a big thanks, Mike. Hey all, this is Alex from the future, about a week and a half in the future. So the voice is going to sound different. I’m in a different room, at a different time. Things have changed, the code has changed, the workflow has changed, I’ve changed, the aim of this has changed — everything’s changed.

So if there’s a little bit of difference in the continuation of this video from how it was initially, I’m sorry, but it’s changed for the best. So what do I have to show you? We have the bulk GPT-3 content creation, which scrapes and rewrites content. What do I mean by that? Well, first I’ll show you what it does, then I’ll show you how it works. Head over to the Discover tab and type in the search term you want to be ranking for — best dog food, that will do fine. Then we go to Extensions and run all requests. So this is using an application called Apipheny, and it’s scraping Google with an API call that I’ve created. The green tick means good, so close that down, and now we can pull in the data after running the API. Here we have it: this is the top 100 articles for best dog food. So let’s have a look at some of these articles. Which ones would you want? You can select as many or as few as you want. So for the sake of this I’ll just do one.

Save my credits — I’ll do one. The best dog food: how to choose the right food for your pet. Lovely. So give that a tick and then move over to Rewriter, and it will be waiting for us already. Fantastic. Now we have the option to choose additional features: FAQ, TLDR, suggested hashtags, slug (which is the URL) and create an image. For the sake of the demo, let’s push the boat out and do all of them. Head over to the menu and then activate the rewriter — the little icon that looks a bit like chevrons. These are now locking into place. It’s scraping the content of the article and removing all of the HTML elements. It’s creating a list of the top keywords for that article that we should be hitting. It’s creating an article summary based on all this information. Then we ask it to create a 1,000-word article with sub-headings, in the same format, using the same facts and building upon them. The 1,000-word limitation is due to GPT-3, so we therefore push it into another call and ask it to continue the article. So we can open this and see it’s done. Lovely formatting.

I’ll fast forward this for the sake of sanity. Then we’ve got continue article. Now it does an FAQ and a TLDR, and then it will compose them all together for us to copy to wherever we wish. Sometimes it puts some random text at the top — ignore that; it just thinks the buttons I’ve created are instructions, but they’re not, they’re code. Then it will write a clickbaity headline, give us an SEO-focused URL, a meta description, and then, to top things off, the pièce de résistance: we have a picture. Lovely little dog. And yeah, that’s it — your article is ready to go. What was that, a minute? Two minutes? And you can tick as many articles as you like and it will produce as many as you want. So what do you need to make this work? First of all, let’s head over to Setup. You’re going to need your GPT-3 API key, which we mentioned earlier in this video (about two weeks ago for me) — put it there — and then the SerpHouse API, which is free; you get 200 calls every month, which is ample — really, all of this so far is just one call. And then stick your API key in there.
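(Conceptually, every one of those steps — keywords, summary, article, continuation, FAQ, TLDR — is just one completion call made with your GPT-3 key. If you want to see the idea stripped right down, here’s a minimal Apps Script custom function along those lines. It’s a sketch of mine, not the formula set used in Mike’s actual template: the function name, key handling and parameters are all placeholders.)

/**
 * Example use in a cell:  =GPT3("Summarise this article: " & A2)
 * A sketch only — not the code behind the template sheet.
 */
function GPT3(prompt) {
  // Key stored under Project Settings > Script properties as OPENAI_API_KEY.
  var apiKey = PropertiesService.getScriptProperties().getProperty('OPENAI_API_KEY');

  var response = UrlFetchApp.fetch('https://api.openai.com/v1/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + apiKey },
    payload: JSON.stringify({
      model: 'text-davinci-003',
      prompt: prompt,
      max_tokens: 1000,   // roughly the per-call ceiling the sheet works around
      temperature: 0.7
    })
  });

  return JSON.parse(response.getContentText()).choices[0].text.trim();
}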

The links are here below, and then Apipheny, which is what the scraping I did on Discover runs through.

Hey, it’s Alex from the future — future Alex here. If you don’t want to pay for Apipheny, you don’t have to; you just don’t do the scraping part. So just delete that entire tab and manually put in the URLs. It’s optional, but I just thought I’d add this in.

Cheers. This is usually, I think, $13 a month or $300 forever. But right now they have a promotion on AppSumo where it’s $99 forever — no monthly cost. That’s it, that’s the price, so grab that while you can. Additionally, here we have credits to Auto Sheets, who came up with the concept initially that I built on — thank you to them — and buy me a coffee… buy me a coffee… shameless plug… yeah, buy me a coffee. Once you’re ready and you’ve got your APIs in order, head over to Extensions, Add-ons, Get add-ons, and then search for Apipheny, the Apipheny API connector. Click on that and click Install. It will then ask you for some Google permissions — just click yes. Initially, when you first try to run anything here, it will do a pop-up saying this is unsafe: click Advanced, click okay, I agree, sell your soul, and then click through — it’s fine. Then once you’ve done that, head back over to the Apipheny connector and click Import API. So what we need here is something that I forgot to do: we need this little snippet here. So I will put this here for you.

There we go. Let’s make that neater and say what it is. Okay, so once you’ve got that code, head back over to this Import API — okay, this is important — paste it in here, click Save, give it a name: Google SERP. It doesn’t matter what the name is, as long as you know what it is. Save that. I’ve got two; you should have one, but this is for demo purposes. Click it — Google SERP — that’s fine. We want to change this to ‘Processing’. Do not allow it to be ‘Set up’ — that is important, it will just obliterate this page — change it to ‘Processing’, so it puts its stuff in this tab down here. Everything else is good, and click Save. I’ll just delete that — you don’t delete yours, you should just have one — and that’s it, you’re pretty much ready to go. If you want to customise the tone of voice, the way it’s written and stuff, you can look under the hood. Simply highlight row number one, hold down Shift, highlight row number three as well, right-click, do Resize rows, Fit to data. Okay, so this expands row number two.

And you can see all the instructions that we’ve set up with the text here. So, things like — if you want to change the output type of the content, you can tweak this initial instruction. Additionally, you can tweak the continuation. Have a look, check under the hood, have a play with it, dabble with it a little bit, and see what you come up with. So: like, subscribe, hope to see you soon. Take care, Alex out. Bye.

<p>The post GPT3 Auto Scraper & Content Re-Writer + SEO first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/gpt3-auto-scraper-content-re-writer-seo/feed/ 0 Video - Prompt Muse nonadult
Turn AI Images into 3D Animated Characters: Tutorial https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/ https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/#respond Fri, 13 Jan 2023 17:00:13 +0000 https://promptmuse.com/?p=1298 Welcome to this tutorial on how to turn an AI generated character into a 3D animated character. This workflow can be used to create AI influencers, bring a music video to life, or even create a feature film. Before we begin, you will need a trained model to produce the head shots. You can either [...]

<p>The post Turn AI Images into 3D Animated Characters: Tutorial first appeared on Prompt Muse.</p>

]]>
Welcome to this tutorial on how to turn an AI generated character into a 3D animated character. This workflow can be used to create AI influencers, bring a music video to life, or even create a feature film.

Before we begin, you will need a trained model to produce the head shots. You can either follow a tutorial to create your own unique trained AI model, or use the one provided in this tutorial below.

Please select what is compatible with your phone, as you may require a different type of adapter:
Apple Lightning to Ethernet

Ethernet cable

RESOURCES: Download Redhead.ckpt my model from HERE

Stable Diffusion (Use local or remote)

Step 1: Gather Pose Reference Images

Take some photos of yourself to use as headshot references. These photos will be used to ensure that the output pose of your AI generated character is consistent when it is run through stable diffusion. It is important to note that the reference images do not need to look like the final character.

Step 2: Use Automatic1111 webui (You can use either local or remote- I’ll add a tutorial soon!)

Use the Automatic1111 webui to run Stable Diffusion 1.5. Load your Redhead.ckpt into the models folder within the Automatic1111 directory.
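In a default Automatic1111 install the checkpoint normally lives in the models/Stable-diffusion folder, so the path should look something like this (folder names can vary slightly between installs):

stable-diffusion-webui/
└── models/
    └── Stable-diffusion/
        └── Redhead.ckpt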

Step 3: Run stable diffusion

In stable diffusion, select your redhead.ckpt from the drop-down list. Navigate to the img to img tab and upload your front, side, and perspective headshot references.

Step 4: Create consistent images of your character

Use your reference images as an img to img reference to create consistent images of your character.

With these steps, you should now have a consistent set of images of your AI generated character, ready to drive the 3D animated version. Be creative and experiment with different poses and animations to bring your character to life!

Blender

Use the Facebuilder plug-in to create a 3D model head mesh that is based on the reference images. This tool is very useful as the sculpting tools in meta human are limited and can be very laggy. However this stage is optional.

Step 1: Download and install Blender here (it's free) and the FaceBuilder plug-in by KeenTools here

Step 2: Open Blender and import your reference images

Step 3: Use the Facebuilder plug-in to create the 3D model head mesh

Step 4: Export your head mesh as a .fbx file.

 

Note: The creator of this tutorial is not paid in any way to promote the Facebuilder plug-in. It is just a tool that they found useful and thought others may also find it helpful.

With these steps, you should now have a 3D model head mesh that is based on your reference images. You can now continue to the meta human creator section to bring your character to life with animations and other features.

Epic Launcher & Unreal

Step 1: Follow this link here to download the Epic Games Launcher and Unreal Engine.

Please avoid 5.1 (the new release) due to compatibility issues with MetaHumans. I'm sure there will be an update soon to fix a few of the issues, but until then I'd advise downloading Unreal version 5.0.3.

Once the above is installed, get Quixel Bridge for Unreal Engine

https://docs.unrealengine.com/5.0/en-US/quixel-bridge-plugin-for-unreal-engine/

 

<p>The post Turn AI Images into 3D Animated Characters: Tutorial first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/turn-ai-images-into-3d-animated-characters-tutorial/feed/ 0 Turn AI Images into 3D Animated Characters: Tutorial nonadult
GPT3 In Google Docs: Create your own A.I Writer https://promptmuse.com/gpt3-in-google-docs-create-your-own-a-i-writer/ https://promptmuse.com/gpt3-in-google-docs-create-your-own-a-i-writer/#comments Fri, 06 Jan 2023 14:03:31 +0000 https://promptmuse.com/?p=1083 This tutorial will provide a comprehensive overview of how to create a text editor using the power of GPT-3. By plugging Google Docs into OpenAI, users will be able to harness the advanced capabilities of GPT-3 to create different models. This tutorial will walk users through the process, step by step, and provide them with [...]

<p>The post GPT3 In Google Docs: Create your own A.I Writer first appeared on Prompt Muse.</p>

]]>
>>>GET CODE SNIPPETS HERE<<<<

This tutorial will provide a comprehensive overview of how to create a text editor using the power of GPT-3. By plugging Google Docs into OpenAI, users will be able to harness the advanced capabilities of GPT-3 to create different models. This tutorial will walk users through the process, step by step, and provide them with the tools they need to create a powerful text editor. By the end of this tutorial, users will have a better understanding of how to use GPT-3 to create their own custom text editor.

How to get an API Key?

FAQ

Q. How much does OpenAi API cost?

It is not free, but it is cheap. We have listed OpenAi’s prices for you to view here.

Q. What can I do with the text editor?
A. With the text editor you can create articles, outlines, introductions, and conclusions with the help of GPT-3. You can also add images to your documents to make them more visually appealing.

Q. What models does GPT-3 use?
A. GPT-3 uses DaVinci 3 as its main model, although you can also use the older version, DaVinci Two if you wish.

Q: What is GPT-3?
A: GPT-3 is a powerful language model developed by OpenAI that is designed to generate human-like text. GPT-3 has been trained on a massive corpus of text, which allows it to generate natural-sounding text based on a prompt or context. It is used for a variety of tasks, including summarization, question-answering, and text generation.

Q: How can I use GPT-3 in Google Docs?
A: You can use GPT-3 in Google Docs by plugging it into OpenAI’s API using an API key. This allows you to harness the power of GPT-3 to create different models within your Google Docs, such as generating ideas, titles, introductions, outlines, and paragraphs. You can also use it to generate images and other media objects.

Q: What features does GPT-3 offer?
A: GPT-3 offers a variety of features that can help you create content faster and more efficiently. It can generate ideas, titles, paragraphs, outlines, images, and conclusions for your documents. GPT-3 can also be used to create natural language processing applications such as text editors and chatbots.

Q: Where do I get an OpenAI API key?
A: You can get an OpenAI API key by registering at the OpenAI website. Once you have your key, you can use it to plug Google Docs into the OpenAI API and start using GPT-3 for your text editor.

Transcript

Hi, guys. Today we’re going to be learning how to create our own text editor using the power of GPT-3. So we’re going to be plugging Google Docs directly into the OpenAI API using an API key. This will allow us to harness the power of GPT-3 to create different models within our Google Docs. Let me show you an example.

So as I come over here, you can see I have a new menu which I’ve named GPT-3 blog. So let’s say, for example, I want to create an article about dogs. That would be my keyword. I just highlight it and then generate ideas. Now, this is asking GPT-3 for ideas of blogs around the subject of dogs.

Now, it’s selected a list of ten subjects that we could discuss. For the sake of this demo, let’s just choose the top one, and let’s ask GPT-3 to make this title more enticing — more, I don’t want to say clickbaity, but a nicer title for our users. So, generate title. There we go. Finding the perfect pooch.

How to choose the right breed dog for your family. Fantastic. Okay, our article is coming along nicely. So next we want an introduction. So what we do again is similar to as before, just highlight our new title and then we’ll go on to generate introduction.

We can delete the rest of this, I suppose.

And looking good. So now we’re going to need an outline for our blog. Let’s just style this up a little bit. Title, big down. Great.

So now we’re going to ask for TPT three to create a blog outline. So this would be what subjects are we going to cover within this block? So similar to as before, we just copy this, just paste it here temporarily, and then we go to log outline. So this will give us our subjects. Obviously, we don’t need to cover all of this.

And for the sake of the demo, I won’t. We’ll just do the first couple. And there we have it. So we already have an introduction, so that can go right off the bat. So we got our first topic, overview of the importance of finding the right breed dog for your family.

Okay, so once again, go to here. And now we just generate paragraph.

Super. Let’s move this here and style it nicely. Next, we want benefits of having a dog.

Style it up and let’s ask for a paragraph again.

Awesome. Our blog is coming along nicely. Let’s say we want to have a picture of a dog here to break things up a little bit. So let’s just write a photo of a happy dog with a bone, highlight that text and then generate image. And there we have it, our happy dog.

Then we can just basically just go through building out our article paragraph by paragraph. We can ask the API to create a whole article, but it’s normally better to have some sense of direction and human control over the article. Otherwise you might just end up getting a lot of gobbledygook rubbish in, rubbish out. So finally, to end our article, we can just select everything that we have and then generate conclusion. So this will summarise everything that we’ve spoken about and give us a nice little happy ending.

And there we have it, our conclusion. So, as you can see, using this method, you can build out an entire article within a couple of minutes. No need to pay for expensive subscriptions to super powerful AI writers. You can do everything yourself, custom, within Google Docs. So how do we do it?

It’s actually super easy and you don’t need to know any coding knowledge to do this. It’s just a matter of copy and paste. And this method allows you to customise the outputs to your niche or specifications. So let me walk you through it from scratch. So, first of all, you need an open AI key.

Head over to OpenAI, register, and then you can collect your API key there from the API menu. Once you have your key, go back to Google Docs. And then we have four bits of code here that I've highlighted in different colours. So let's start with the first one. Select that, go over to a new sheet, and we're going to call this GPT-3 Power.

So once you’re in your Google docs, head over to extensions and Apps scripts. Once inside App Scripts, we can delete our boilerplate code, which is already there, and paste in our new code. So this is going to be for our menu, which is on our Google Docs page, the drop down. So the menu comprises of two elements, the parent holder and the subcategories. So let’s call our parentholder GPT-3 magic.

And at the moment, we've got two subcategories: Generate Ideas and Generate Images. Let's save this and have a look to see how that works. And here we have our menu, GPT-3 Magic. Obviously, right now, none of them work because they're not connected to any functions. But the basics are there.
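Since the code snippets themselves aren't reproduced on this page, here is a minimal sketch of what the menu piece of the Apps Script might look like. The menu label and function names (GPT-3 Magic, generateIdeas, generateImage) follow the video; any other details are assumptions rather than the exact code used there.

function onOpen() {
  // Builds the custom drop-down menu when the document opens.
  DocumentApp.getUi()
    .createMenu('GPT-3 Magic')                    // the parent holder
    .addItem('Generate ideas', 'generateIdeas')   // subcategory -> function name to run
    .addItem('Generate image', 'generateImage')
    .addToUi();
}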

Let’s head back over to our code. Next, we need our fixed variables. So let’s take that and paste that into our app script. So our fixed variables is basically our API key. So insert your API key here within the quotation marks.

I will deactivate this key before I upload this video, so no point trying to take this one from me. And then the second one is the model that we’re going to be using. So at this point, we’re going to be using DaVinci Three. You can use DaVinci Two, the older version, if you wish, but I’d highly recommend going with the latest one, which is currently DaVinci Three. So now we have our menu and our variables in place, we can start adding our functions and doing the interesting stuff.
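A sketch of those fixed variables, assuming that the OpenAI completions model text-davinci-003 is what the video calls DaVinci Three:

// Fixed variables: your OpenAI API key and the model to use.
// Never share or publish a real key; the one shown in the video was deactivated.
const API_KEY = 'YOUR_OPENAI_API_KEY';
const MODEL = 'text-davinci-003';   // "DaVinci Three"; 'text-davinci-002' is the older DaVinci Two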

So let’s just copy this prompt I’ve made earlier and paste it in to our app script. So we want this menu item to connect to this function. Now we need to say, what do we want this function to do? To do this, you simply write in plain English the commands that you give the AI. So in our case generates five log ideas around the word.

And then we've got plus selected text. So in the demo we had Dog. So it will generate five blog ideas around the word Dog. And that's our first function. To add another function, we can simply highlight that menu item and paste in a new one.

Let’s say we want to create a paragraph. So this will be what’s on the menu item and then let’s call it Generate Power. And then we select this prompt and we want to create a new one. So just copy, paste, generate. We call it Para, didn’t we?

There we go. And then we give it a new command. So with that selected text, write a paragraph. Let's actually change that to 'write a detailed paragraph about', and then it will be the selected text. So in our case, it will be the subtitle, and it will write a detailed paragraph about that.
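Putting those two prompt functions together, a hedged sketch might look like the following. It reuses the API_KEY and MODEL constants from above; callOpenAI and getSelectedText are my own helper names, the request shape follows OpenAI's completions endpoint for the DaVinci models, and the sketch assumes you have a whole word or line selected in the document.

// Shared helper: send a prompt to the OpenAI completions API and return the text.
function callOpenAI(prompt) {
  const response = UrlFetchApp.fetch('https://api.openai.com/v1/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + API_KEY },
    payload: JSON.stringify({
      model: MODEL,
      prompt: prompt,
      max_tokens: 600,
      temperature: 0.7,
    }),
  });
  return JSON.parse(response.getContentText()).choices[0].text.trim();
}

// Reads the currently selected text (assumes the selection covers whole elements).
function getSelectedText() {
  const selection = DocumentApp.getActiveDocument().getSelection();
  return selection.getRangeElements()[0].getElement().asText().getText();
}

// Menu item: generate five blog ideas around the selected word.
function generateIdeas() {
  const ideas = callOpenAI('Generate five blog ideas around the word ' + getSelectedText());
  DocumentApp.getActiveDocument().getBody().appendParagraph(ideas);
}

// Menu item: write a detailed paragraph about the selected text.
// (Add a matching 'Generate para' item to the onOpen menu above.)
function generatePara() {
  const para = callOpenAI('Write a detailed paragraph about ' + getSelectedText());
  DocumentApp.getActiveDocument().getBody().appendParagraph(para);
}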

Done. Let’s save that and have a look to see how that runs back to our Google Docs refresh. Okay, so we can do let’s do Cats now. Generate Idea. That’s a good point.

So the first time you do this, you're going to need to allow permissions. That's just a case of clicking Continue. As Google hasn't verified the script, it automatically gives you the warning; proceed past it, click Allow, and then rerun it. Here we go.

Here’s our five ideas about blog titles. And then we can simply paste that and write a paragraph about it. And there we have it. So obviously, using this technique, you can do different prompts. I continue writing, I give up a conclusion, write headlines, give an outro, and anything else that your niche might require.

The possibilities with this method are endless. On to images: it is slightly different, and I'll show you how to do that now. Head back over to our code and then select the Generate Image. With the images, by the way, you can choose between three settings.

There’s 256 by 256 pixels, five one two by five one two or 1024 by 100:24. Unfortunately, it doesn’t have any in between. I would highly recommend going with five one two. Unless you want super big ones. But five one two is fine.

So let’s head back over to our Apps script and we can paste in the image. This is called generate image. We already have it up there on our menu and there is no other settings that you need for this because it generates images based on the selected word. So we can save that. Head back over here.

Black Kitten.

As it is a prompt, the more details you write, the better. A photo of a black kitten sleeping, so we don't get the weird eyes, and then generate image. There we have it. A kitten, in some sense. What is that there?

All right. Anyway, we have our cat and we have our blog, and that's how you build it out. You can add as many items as you want, and be as detailed or quirky as you want. And that is how we use the power of GPT-3 within Google Docs, without needing to spend a lot of money on other AI writers that do something similar with their own models.

This way you can customise it to yourself. Anyway, I hope you enjoyed the tutorial. Take care. Thank you, Alex.

<p>The post GPT3 In Google Docs: Create your own A.I Writer first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/gpt3-in-google-docs-create-your-own-a-i-writer/feed/ 3
How to Create an AI clone of yourself or Pet https://promptmuse.com/how-to-create-an-ai-clone-of-yourself-or-pet/ https://promptmuse.com/how-to-create-an-ai-clone-of-yourself-or-pet/#respond Tue, 20 Dec 2022 22:06:57 +0000 https://promptmuse.com/?p=891 How to Train a Model in Open Art AI: Welcome to this tutorial blog about Openart AI, an amazing app that allows you to create avatars of yourself and generate high quality artwork. With Openart AI, you can quickly create stunning artwork without any prompting, or you can use the app s prompts feature to [...]

<p>The post How to Create an AI clone of yourself or Pet first appeared on Prompt Muse.</p>

]]>
How to Train a Model in Open Art AI:

Welcome to this tutorial blog about Openart AI, an amazing app that allows you to create avatars of yourself and generate high quality artwork. With Openart AI, you can quickly create stunning artwork without any prompting, or you can use the app's prompts feature to customize your results even further. What's more is that once you finish training your model, you'll be given a .ckpt checkpoint file that can be used with Stable Diffusion and other AI platforms.

In this tutorial blog, we'll walk through the process of using Openart AI to train a model. We'll look at what you need – such as 15-30 photos of your subject – and how to set up your order. We also cover how to use the Photo Booth to upload and organize your images and how to create presets for your model. Finally, we'll explain how to use the Upscale and Enhance Face feature so that you get the best results from your trained models. So let's get started!

Step By Step Guide

Step 1: Gather 15-30 Photos of the Subject: To get the best results, you will need to gather 15-30 photos of the subject that you want to train. Ensure that you have permission to use these photos and they are only of the subject and not any other people or pets.

Step 2: Head over to the Openart AI website and select what type of model you would like to train. Options include person, pet, character, art style, object or other.

Step 3: Select the preset images if you don t have any of your own. Otherwise, select My AI Models and upload 15 to 30 photos of the subject you want to train. Make sure you have permission for these images!

Step 4: Enter Model Information: After selecting the preset, you will enter the model information such as the name of your training model, identifier (name that represents your character/person/animal) and class (dog, cat, horse, etc.).

Step 5: Give your model a name and class (e.g. if training a pet, the class could be dog).

Step 6: Drag and drop your images into the training area, making sure your subject (pet, person etc.) is centred in each frame.

Step 7: Once training is finished, click on your model to create prompts or preview all your presets. You can also click on any image to upscale and enhance it before downloading it.

Step 8: Congrats! You now have a trained model that can be used for art projects or other purposes.

FAQ

Q: What is Openart AI?
A: Openart AI is an app that allows users to create avatars of themselves and generate high quality artwork with no prompting required. The app also provides a trained model which can be used with prompts to customize the artwork. Additionally, the app offers a revenue share of presets that users create.

Q: How do I use Openart AI?
A: To use Openart AI, first select the type of artwork you would like to create (person, pet, character, art style, object or other). You will then need to upload 15-30 photos of the subject you would like to train. Once your payment has been processed, you will be taken to the Photo Booth where you can name your training model and enter an identifier for your character/subject. Finally, you can move the images around until the subject is centered in each frame and click Download once you are satisfied with the image quality.

Q: Are there any additional features in Openart AI?
A: Yes! Openart AI offers an Upscale and Enhance Face feature which can be used to improve the resolution of a face in an image or to upscale an image in general.

Q: What does the Openart AI platform include?
A: Openart AI provides a platform to develop models and create avatars with prompts. The platform includes a photo booth, a library of presets and prompts, and access to your trained models.

Q: What are the benefits of using Openart AI?
A: Openart AI offers fast, high-quality artwork generation without any prompting. It also offers revenue sharing for presets created by users, as well as checkpoint files to take into Stable Diffusion. Additionally, it has a clean and easy-to-use interface that makes training models simple and efficient.

Q: How do I train a model using Openart AI?
A: To train a model using Openart AI, you will need 15-30 photos of the subject you wish to train. Then, you can go to the Pet section on the Openart AI Photo Booth page, select a preset (the first one is free), confirm your selection, and make payment. After that, you will need to enter your model name and identifier, as well as upload photos into the training folder. Once training is complete, you can download images and create additional prompts if desired.

Q: How do I use Openart AI?
A: To use Openart AI, you first select the type of model you would like to create (e.g. person, pet, character, art style, object or other). Then you will need to upload 15-30 photos of the subject (with permission) for the model to train on. Finally, you can then create additional characters with presets or via prompting.

Q: Is there a cost associated with using Openart AI?
A: Yes, there is a one-off cost associated with using the program depending on your selection. There are also additional credits available for creating additional prompts.

Q: Can I make money from Openart AI?
A: Yes, if you create particularly great presets, you can make money from royalties by having your preset featured on the Openart AI website.

TRANSCRIPT

Now I just want to tackle this head on before we get into the video. Welcome back to another Prompt Muse video. In my comments section there is a vicious rumour: that I am, in fact, a Stability AI Stable Diffusion beta, version four. This is simply not true.

And I want to prove it to you by first, showing my hands, and second, by showing you my Instagram. Just a normal person who takes normal pictures of myself and put them on Instagram because that’s what normal people do. I like this one because it shows emotion. Complete lie. This is a fake Instagram account I created using a model.


Now, I don’t suggest anybody does that at all, but I just wanted to use it as an example for you guys to see what you can do within minutes, like and subscribe. You may have seen in my previous videos where I trained models in Dream Booth, I’m actually going to be showing you how to train them easily without going into Google collab. Also without having a superior computer with all the correct Nvidia cards or with enough VRAM. Some people don’t want to have to connect to remote GPUs and do all that. They just want to click a button and have their images API.

I’ve been approached by quite a few companies that are AR generators to promote their product and I have said no. But funny enough, one of the programmes that I tried and really liked, they actually approached me to do a review on their app, which is amazing because they are actually very different from all the AI generators. They give you an AIT trained model at the end of it, a checkpoint file that you can then take into stable diffusion. So for those in the comments that saying that it’s too complicated for them or it’s not working, this is an alternative path that you can take. So the app I’m showing you today is created by openart AI.


So just head over to their website to check them out. Essentially, it's a programme where you can create avatars of yourself like this and like this. But not only that, it provides a trained model which you can then use with prompts, which are simply just text telling the model what you want it to do or look like or be stylized like. And not only that, what sets this app apart from all the others, and why I decided to do this video, is that it gives you a .ckpt checkpoint file at the end, which none of the others, as far as I'm aware, actually do. You have to understand there are people who do not have time to install and run locally on their machine, and this could be the app for them.

To train a model, all you need is 15 to 30 photos of the subject that you want to train. And I'm going to put a disclaimer here: you are not entitled to create a model from someone's photographs without their explicit permission. And the same goes for art styles. Use your own photographs that you have permission for and your own art styles.

So the presets within Openart AI allow you to quickly generate high quality artwork without any prompting, which means you can just buy the package that you want and it will generate those images in that style automatically, like this to this, which is pretty cool and very, very fast, with zero prompting. And once you buy a package, you can then create additional characters on top of that. What's more is that you still have the ability to prompt it yourself if you wanted to. Although, if you are very, very good at prompting, Openart AI offers you a revenue share of presets that you create. So if you create something that is spectacularly good, they'll put it on their website and offer it to others.


So you can also make money from this app, which is very cool. So if you head over to the Openart AI website, you can see it's got a really clean and easy to use interface. On the front page, you've got the discovery community page, which pulls through from Stable Diffusion, DALL·E 2 and Midjourney. You can switch between tabs to see what people are creating and also see their prompts.

So if you want a bit of inspiration for your prompt engineering, they've got a Stable Diffusion Prompt Book. That is a really nice PDF that's been put together really nicely. And it's worth having a look if you're new to prompting, because it gives you ideas and suggestions on how to structure your prompt to get the best results. Go to the Openart Photo Booth; this is where the training happens.

My AI models is where your AI models will live. You can see that I trained five models prior to actually making this video because I really wanted to test it out. Then you’ve got six options here. So you’ve got a person, pet, character, art style, object or other. Today we are going to be training a pet.


Now, I don’t actually have any pets that I borrowed my friend’s dog. If I click on Pets there, you can actually purchase preset images. You have an option to select a preset and your first preset is absolutely free. And this is perfect for somebody who just wants to go and create prints and design of their pets and not have to worry about prompting. But you still have the ability to prompt even if you buy a preset on the top tab.

Because I am training a dog, I’m going to go to the Pet section and I’m going to select the Pets Movie TV character. Click on the select button and it will show it selected and click Confirm. And that’s now added to my order. So the $10 is to train the model and I got a free preset. I’m going to click continue there and then it’s going to process my order for me and then you’re taken to your payment area and what I really like about this, it isn’t subscription based because I forget about them and end up with a huge bill.

This is just a one-off payment for what you use. You don't have to pay to download the images afterwards, and you don't have to pay for the checkpoint model either. Afterwards you get 400 credits with this, in order to create additional prompts if you want to. Once your payment has successfully gone through, you're then taken to the Photo Booth, where you're going to put the name of your training model, and I'm going to call it Ringo the Dog. And then you're going to put your identifier, the name that represents your character, your person or your animal.

And mine’s going to be Ringo because that’s his name. And then the class, this is where you want to identify if a dog, a cat, a horse, an alligator, whatever you’re doing. We’re going to put a dog as a class. So the image guidelines are really helpful for understanding what you need to train your model with. So you need at least 15 to 30 photos of your subject, five close up portrait photos of your pet’s face, five portrait photos of your pet’s half a body or front, at least five photos of your pet’s full body.

The pet should be in the centre of the photo, there should be no other pets or people in your photo, and your model will be deleted after one month for privacy reasons. So for those who are worried about privacy in AI, that one's for you, and you can always read the privacy statement as well. We can then drag and drop our images into our training. This is my folder for Ringo, so I'm just going to grab his images and upload them. It's so easy and simple, and I uploaded 21 images. Once you've uploaded the images, you can simply move them around until Ringo, or your pet, is in the centre of each frame. Once that's finished training, it will say ready, and you can simply click into your model to create prompts or preview all your presets. If you come down further, you can see the library of your images has been organised into their presets and prompts, so you can simply click on the image that you like and then click Download from this button here.


So if the face is slightly off or the resolution isn't as high as you would want, you can click on Upscale and Enhance Face. By selecting these options, it will either upscale the image or upscale the face; particularly on humans, you might want to upscale the face just to improve the image quality. Once you're happy, you simply click on this icon and it will download the image to your computer. It's that easy to use. I just want to show you a couple of my favourite images that I got from Openart AI. I pinned the original photo of Ringo in the top left-hand corner so you can compare what he actually looks like to his model, and I think it got him perfectly. And they say AI is soulless.

I don’t think that’s the case anymore. And then the next lot is fan art with Ringo in Superman poses in Indiana Jones and I think that is Wonder Woman on the end there. You can do all different art styles of your dog as well if you’re looking to create a print. So whether you use Openart AI is up to you and I hope this video gives you an idea of what is currently on the market and if you like these videos, let me know. I do think using models going into the future is going to be the way that artists, models, photographers, designers can monetize.


They create a model of their designs or their photography or themselves, and then they can license those models out to corporate companies. This product is primarily for people who do not want to run Stable Diffusion on their local computer, or can't because of the hardware, or don't want to have to connect to remote GPUs. It's basically a WYSIWYG inside a browser. You can keep all your models contained. Obviously, with that, you're going to have to pay $10 to train your model.

So I’d love to know what you guys think in the comments. I do bring you this sort of resource as well so I make it fair and easy for everybody to use AI. In my next video I’m going to be showing you how to create a 3D character from a trained model so that’s exciting stuff. So you will need a model for that tutorial whether you make it in Dream Booth or Open AI or anything else doesn’t matter as long as you’ve got a CKPT file we can go ahead with that tutorial so I hope you have a joyful Christmas and a happy New Year from me. That will do it.

Goodbye.

TL/DR

Openart AI is an app that allows users to quickly create avatars of themselves or other subjects with high-quality artwork. It also provides a trained model which can be used to prompt the creation of images. Openart AI offers a revenue share for users who create presets that are especially good, and users can purchase pre-made presets for their own use. The app has a clean and easy to use interface, and the Stable Diffusion Prompt Book provides ideas and suggestions on how to create prompts. After selecting a subject and uploading photos, users can click on their model to create prompts or preview presets. The image guidelines recommend at least 15-30 photos of the subject, with five close up portrait photos of their face, five portrait photos of their half body or front, and five full body images. Users have the option to upscale or enhance faces if needed, and can download images once they are satisfied. Openart AI is a great tool for creating avatars and artwork quickly and easily.

<p>The post How to Create an AI clone of yourself or Pet first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-to-create-an-ai-clone-of-yourself-or-pet/feed/ 0
AI Animation: Tutorial Animate your AI images with a consistent character https://promptmuse.com/ai-animation-tutorial-animate-your-ai-images-with-a-consistent-character/ https://promptmuse.com/ai-animation-tutorial-animate-your-ai-images-with-a-consistent-character/#respond Mon, 12 Dec 2022 16:33:41 +0000 https://promptmuse.com/?p=816 In today’s video, we will be animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer. This technology is very new and is evolving, so it might not look perfect yet, but we’re right [...]

<p>The post AI Animation: Tutorial Animate your AI images with a consistent character first appeared on Prompt Muse.</p>

]]>
In today’s video, we will be animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer. This technology is very new and is evolving, so it might not look perfect yet, but we’re right at the beginning of something very exciting. In this tutorial, we will be using Google Collab Pro to connect to a remote GPU and create animations from anywhere. All resources for this tutorial are available for free on the Pumptime Use website. Don’t forget to subscribe to our channel and hit the notification bell to stay up to date on our latest videos. Let’s get started!

RESOURCES: Download my model from video HERE

Steps

1. Get Google Colab Pro and Connect to a Remote GPU: Sign up for Google Colab Pro and connect to a remote GPU in order to access the resources needed for the animation.

2. Download Model File: If you don't have your own trained model, you can download my model file from the Prompt Muse website for free.

3. Connect Your Google Drive: Connect your Google Drive to the notebook by running the first cell and logging in.

4. Install the AUTOMATIC1111 Repo: Run the cell to install all of the necessary packages and dependencies required for you to animate your AI images.

5. Select Model Version and Resolution: If you are not using a trained model, select Stable Diffusion 1.5 or 2.1 and whichever resolution best suits your project (512, 768, etc.). If you are using a trained model, paste the path to CKPT file in the notebook and select it from your Google Drive folder.

6. Start Stable Diffusion: Run the cell to start Stable Diffusion and generate a link for you to access your UI either on your local URL or on Gradio app.

FAQ

Q. What is AI animation?
A. AI animation is the process of animating static images using a trained model. This means that you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Q. What do I need to animate AI images?
A. To animate AI images, you will need a driving video, which is a video of you doing some actions, and a trained model (or you can use the default Stable Diffusion). You will also need Google Colab Pro to connect to a remote GPU and create animations from anywhere.

Q. Are there any resources available?
A. Yes, all resources for this tutorial are available for free on the Prompt Muse website. You can also download my model from the video here.

Q: What is the tutorial about?
A: The tutorial is about animating static AI images using a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Q: What resources are needed for the tutorial?
A: You will need a trained model, and if you don't have a trained model, you can just use the default Stable Diffusion. You will also need a driving video, which is a video of you doing some actions. All resources for this tutorial are available for free on the Prompt Muse website.

Q: What do I need to do to connect my Google Drive?
A: You will need to run the first cell in the Google Colab notebook, and this will ask you to connect to your Google Drive; just click Run anyway and connect to Google Drive. It will then ask you to log in. Once you've got a green tick, you can move on to the next cell.

Transcript:

Today. We are not happy with the static AI images. We are going to animate them. What’s more, we’re going to animate them on a trained model. This means you can create your own AI actor or actress and make your own music videos or create your own TikTok influencer.

Let’s not get ahead of ourselves. This technology is very new and is evolving. So it might not look perfect yet, but we’re right at the very start. All started with this humble midjourney vendor and then I took it into something called Dream Booth and it created a trained model, which means I can now create this character into any pose or position. And we don’t want to stop there.

We want to be able to animate this character, which means that the consistency as well as the poses are a lot more dynamic. To do this tutorial, you need a driving video, which is a video of you doing some actions. You will also need a trained model, and if you don't have a trained model, you can just use the default Stable Diffusion, or you can use my trained model, which is available to download from my website here for free. You know I'm good to you. I also have an alternative method of animation and that will be in the next video.

But I want to show you both ways because they are really cool. In this tutorial I'm going to be using Google Colab Pro, and what this allows me to do is use a remote GPU that is far, far superior to my rubbish computer. What's also great about this method is I can connect from my iPad and start creating animations from absolutely anywhere. This tutorial is available on the Prompt Muse website for absolutely free in written format as well. All my resources are free.

What I do ask of you is if you could subscribe to this channel, like and ring the notification bell, that helps me out massively. The first method I'm going to show you is image to image. We're going to be using the AUTOMATIC1111 web UI, and you've probably seen a lot of these tutorials online where they're doing it locally. I'm going to be doing it remotely. So let's get ready and do the first tutorial.

When you open a Google Colab notebook, this is what it looks like. So the first thing you want to do is connect your Google Drive and log in. We connect our Google Drive by running this first cell here, and when I say run, you are just clicking on this play button. This will ask for you to connect to your Google Drive; just click Run anyway and connect to Google Drive. It will then ask you to log in.

This just connects your Google Drive into the file structure over here by clicking this file and you will be able to see your Google Drive once that’s done. If I come up here to refresh and go to content, you will see something called G Drive. That’s your Google drive and my drive. And these are all my saved files on my Google Drive currently. I’m just going to close that for the time being.

That has successfully connected because I have a green tick. Once you've got a green tick, you can move on to the next cell. Just click play. And this will install the AUTOMATIC1111 repo. It's essentially just installing all the bits and pieces that you need to run this.

It’s not installing it on your PC. It’s all remote. Once this session is over, your Google Drive will disconnect and all this information, it will all disappear. Once you’ve got your green tick, we’re going to move on to the requirements. And again, just play that cell and that will take a few seconds.

We move down to the model download load section. And before we run this, we just want to make a couple of changes. If you have not created a model and you don’t have a file to upload, do not worry, we can just run stable diffusion as normal. You can use 1.5, or if you press that, you get a drop down window. You can select the latest version, which is version 2.1.

And with version 2.1, you have different resolutions. You've got 512 and 768. So whichever one suits your project the best. Now, if you do have a model, or you're using my Redhead CKPT file, you come down here where it says Path to CKPT. This is where we're going to load in our Redhead model file.

And this is sat on our Google Drive currently. I've put that there. You can save yours to your Google Drive as well; just click on this folder, navigate back to your Google Drive and then find the Redhead CKPT model file. If you are very neat with your structures, you could put it in your AI folder, in models, where they should technically all live.

But I’m quite lazy with my hierarchy. Shoot me. So if we press on the three dots here and go to copy Path, and then we’re going to copy that path by pasting that in. Now you don’t need to touch anything else that’s good to go. We’re going to hit run on that cell and that’s now going to load in our model.

So once that has successfully run, you'll get this text down here saying using the trained model, which is great. The next section is Start Stable Diffusion. This is the last section, and then our UI will be ready. I am just going to leave it on model version Stable Diffusion 1.5.

And I’m going to use the Gradio server. So I’m going to cheque this checkbox here and that’s it. We just hit play on that cell. And one word of warning is the cell will continue to run. This is going to be the engine for our UI.

Do not close this browser down at all, because that will stop your UI running. So this cell consistently runs. You will not get a green tick. What you will get down here when it's finished loading is a link to your local path or to the Gradio app where you're going to be running the UI from. This takes a few minutes to complete, so go and grab a cup of tea and come back and it will be ready.

Once it’s complete, you’ll be getting these two links. You can run it on your local URL or you can run it on a public URL. If you click on either link, I’m running it on the gradient app. It will load up your UI. And you might have seen this UI when people are running it locally, it’s pretty much the same.

If you go to the top left-hand corner, we can see the model we're using. There is the Redhead CKPT. That's loaded in nicely. If you're not using a model, it will have Stable Diffusion 1.5 or 2.1, whichever one you chose.

If we look down here, we’re not going to be using the text to image. We’re actually using the second tab along, which is the image to image. So click on that. And then here we’ve got where we’re going to write our prompt. So what stylization do we want on our animation?

First, I’m just going to load in the first frame of our animation. We’re using our image split out into frames. So I’m just going to click on there and I’m going to select the first frame of our animation, which is this one here. I’m going to write in my prompt. I’ve just written any old prompt in here, but one of the most important features here is that I’ve put painting of ZWX person.

So it’s that ZWX is the trigger to my model to create the Redhead character that I trained my model on. Without that, it won’t give me such a consistent character. You can put whatever you want in the pump. Just if you’re using a model, just remember the word that you trained it on in the instances way back in Dream Booth. So the negative means anything I don’t want to see in the animation.

So I’ve just put the usual blurry, blown out, dust, blood. You can put maximalism whatever you want to put or whatever you don’t want to see in the animation. Pop it in here. It’s going to be a negative. So don’t put no, just put the words you don’t want to see.

So we’ve got our first frame, and if we just come down quickly and have a look at our parameters, so we’ve got the sampling steps. So that’s how long it takes to render each frame and in how much quality you want in each frame and the detail. So the higher the more detail and quality you’ll get. Per frame, but the longer it will take for you to render that frame. So I like to go up to about 100 because I’m using a remote GPU and it can handle that.

Let’s go for 100. So the sampling message is how your image is decoded. I personally like euler a you can have a go yourself and just try different ones. But for this tutorial, I’m going to be using Euler A, the width and the height. So the width and the height of your output file.

So my input file is 448 by, I think, 768, if my memory serves me. So that's the size of my input, and that will be the size of my output. They're going to match; there's not going to be any distortion. Restore Faces: I'm going to check the Restore Faces box, and if you come up here on your top tab, you can see Settings. Click on that and we can see, in the middle here in the column, face restoration.

So if you'd like a different facial restoration, or to load your own in, you can use GFPGAN or CodeFormer, or none at all. And you can control the weight of the facial restoration, zero being maximum effect, one being minimal effect. Sometimes the facial restorers can, especially on a trained model, make them not look so much like the model anymore.

So you just want to get a nice balance there, then click on Apply Settings, and go back to your Image to Image tab and we'll continue with the parameters. The batch count is how many outputs you have in this batch; I'm going to just create one. You can create multiple, but for this I'm just creating one. The CFG scale is how much you want the output image to conform to the prompt. So the higher the number, the more it will conform to the prompt.

The lower the number, the more creative results you will get. Denoising is another very important parameter. If you set it on zero, nothing is going to change. Your output will look like your input and we don’t want that. So you want to have a nice medium.

I think 0.5 is usually a nice medium for that. You can go a bit lower; if you go too high, I think it takes away from the animation. A 0.5 is a nice balance here, but you can have a play around and see what you like.

So it combines a little bit of the input and merges it with your model as well as your prompt. Now we come down to the seed. Minus one means it’s going to create or reroll us a new seed. If you’ve got a seed that you’re using, you can put it in here. But it doesn’t matter because we’re just going to see if we can get an image we like.

And once we get the image we like, by generating the first frame, we will save that seed and reuse it using this button, or copying and pasting it in here. With all that done, we’re just going to generate one frame and see if we like the results. This is the result of our parameters and our prompt, and it looks quite good. If you look down here, you can see the seed, but you can also press this button, which means reuse the seed, and it will pop your seed for that image down there. So if you hit Generate again, it will just generate the same image which we want for our animation.

So what you can do is change your prompt or your parameters if you don’t like that, and set that back to minus one and regenerate a different image. What I’m going to do now is just load in another frame and just make sure that’s consistent. So I’m going to click on another frame. I mean, this is not a very dynamic animation. I’m sure yours will be a lot better.

And I’m going to click Generate again and that’s going to use the same seed. Hypothetically, it should look the same as that. And there it does looks great. So it looks very consistent from the first frame. And then just pick a couple and just try it out.

So once you’re happy with the overall output of your image, if you just head over to Batch Image to Image, this is where we’re going to set up the output and the input of our animation. And we’re just going to put the input directory, which is the frames that we’re inputting. So if you go over to your first Stable Diffusion tab over on your browser, let’s open up the Google Drive to get our input files. I’ve already made a folder on my Google Drive with my frames in it, so I’m just going to expand that. And these are all my frames.

I’m going to press the free dots and copy Path and come back to my Stable Diffusion, and then just paste that path into the input directory so it knows where to look for those frames. Now, if you want to create an output folder, go back to my Google Drive and let’s say I’m just going to put it in out, and then click on the three dots Copy Path, and then go back to your Stable Diffusion and paste that into your Output folder. Super easy. And your settings are all carried across from your previous image to Image. And all you need to do now is press Generate.

It will now save those frames into your Google Drive. So I just took my output files and imported them into After Effects, compiled everything together and removed the background. And this is what I got. The next test I did was a low-resolution, poorly lit video of my face, just to see what the model looked like. And I guess when you guys come round to it, you will have a much better setup than I did.

So you can see what is achievable in a few minutes' worth of work. It's pretty cool. So my conclusion to this video is that using my technique of using a model, putting it through image to image, and controlling it with prompts and specific parameters, you get a really nice animation. Now, there are a few artefacts, and I've got a way to get rid of them. You may have heard of this programme called EbSynth.

We can simply take the first frame of your input, which was this frame, and the first frame of your output, which is this frame, and run them through EbSynth to get rid of those artefacts in the animation. Now, you can comp this all together in After Effects and get a really, really good outcome. And I'd love to see what you guys create, because you're going to do something way more creative than I have. Thank you so much for watching.

This is video. And yeah, that will do it. Until next time. Goodbye.

<p>The post AI Animation: Tutorial Animate your AI images with a consistent character first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/ai-animation-tutorial-animate-your-ai-images-with-a-consistent-character/feed/ 0
Part 2: Using GPT3 in Google Sheets; getting creative with A.I https://promptmuse.com/part-2-using-gpt3-in-google-sheets-getting-creative-with-a-i/ https://promptmuse.com/part-2-using-gpt3-in-google-sheets-getting-creative-with-a-i/#comments Sat, 03 Dec 2022 15:47:22 +0000 https://promptmuse.com/?p=728 Gone are the days of sitting in front of a blank document for hours, trying to come up with an innovative piece of writing. With the advancements in artificial intelligence (AI), we now have the power to generate unique, creative writing with the help of GPT-3, a natural language processing (NLP) model developed by OpenAI. [...]

<p>The post Part 2: Using GPT3 in Google Sheets; getting creative with A.I first appeared on Prompt Muse.</p>

]]>
Gone are the days of sitting in front of a blank document for hours, trying to come up with an innovative piece of writing. With the advancements in artificial intelligence (AI), we now have the power to generate unique, creative writing with the help of GPT-3, a natural language processing (NLP) model developed by OpenAI. In this article, we will explore how GPT-3 can be used within Google Sheets to create unique pieces of writing. We will discuss the benefits of using GPT-3 in Google Sheets, as well as the potential applications of this technology. By the end of this article, you will have all the tools necessary to start using GPT-3 in Google Sheets for creative writing.

PART ONE CAN BE VIEWED HERE

Hi again, Alex Player here. This is part two of our Google Sheets with OpenAI GPT-3 series. Previously we showed how to do categories and how to use and call that command. Now we're going to be doing the AI function, which is a little bit more creative.

So let’s start with calling it up, similar to how we did our categories. So equals AI. And then we can see this is built of three parts. So the prompt, which is the string, our command to the API, what do we want it to do? Basically, temperature is how creative do you want it to be?

So one is super creative; zero is like the Borg. Model is which GPT model you want to use. The default is DaVinci 2, but DaVinci 3 is actually out right now, so we can be using that. Alright, so to start off, let's say we want to capitalise our categories.
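The exact script from the video isn't reproduced on this page, but a minimal sketch of an =AI(prompt, temperature, model) custom function in Apps Script could look like the following; the endpoint, parameter handling and defaults are assumptions based on OpenAI's completions API rather than the exact code used in the video.

// Custom Sheets function: =AI(prompt, temperature, model)
const OPENAI_KEY = 'YOUR_OPENAI_API_KEY';

function AI(prompt, temperature, model) {
  const response = UrlFetchApp.fetch('https://api.openai.com/v1/completions', {
    method: 'post',
    contentType: 'application/json',
    headers: { Authorization: 'Bearer ' + OPENAI_KEY },
    payload: JSON.stringify({
      model: model || 'text-davinci-003',                        // "DaVinci Three"
      prompt: prompt,
      temperature: (temperature === undefined) ? 0 : temperature, // 0 = literal, 1 = very creative
      max_tokens: 256,
    }),
  });
  return JSON.parse(response.getContentText()).choices[0].text.trim();
}

In the sheet you would then call something like =AI("Change the following categories to all capital letters: "&C2, 0, "text-davinci-003").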

So let’s say capitalise the following category. I think that would be fine. Actually we want to make it join to C two.

Fantastic. Temperature zero, want it to be creative. And the model is DaVinci three. Have A. Think it company.

Capitalise the following categories.

Let’s change that command. I think I got a bit confused.

Change the following categories to all capital letters.

There we go. And then we can pull that down. And there we go. All caps. Now, if you want to try something a little bit more creative, we could scroll it up again.

AI: write a Facebook advert for the following company, targeted at old people like myself.

So the following company is B two.

We want it to be very creative. Actually, let’s do 0.8. And then we want to use DaVinci Three.

And let’s make it so you can read this.

There we go.

You’re an older adult looking to stay ahead with the times. IBM has a perfect solution for you. So we can then just pull that down and have our Facebook adverts ready to go for each of the products for each of the companies. So as you can imagine, combining this with categories and other limits, the possibilities are endless. Thanks.

<p>The post Part 2: Using GPT3 in Google Sheets; getting creative with A.I first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/part-2-using-gpt3-in-google-sheets-getting-creative-with-a-i/feed/ 1
How to use GPT3 in Google Sheets for automation https://promptmuse.com/how-to-use-gpt3-in-google-sheets-for-automation/ https://promptmuse.com/how-to-use-gpt3-in-google-sheets-for-automation/#comments Sat, 03 Dec 2022 14:41:17 +0000 https://promptmuse.com/?p=707 Welcome to our tutorial on using GPT3 in Google Sheets for automation. GPT3 is an advanced artificial intelligence (AI) system developed by OpenAI. It has the potential to revolutionize how we use data and automate tasks. In this tutorial, we ll discuss how to use GPT3 in Google Sheets for automation. We ll cover the [...]

<p>The post How to use GPT3 in Google Sheets for automation first appeared on Prompt Muse.</p>

]]>
Welcome to our tutorial on using GPT3 in Google Sheets for automation. GPT3 is an advanced artificial intelligence (AI) system developed by OpenAI. It has the potential to revolutionize how we use data and automate tasks. In this tutorial, we'll discuss how to use GPT3 in Google Sheets for automation. We'll cover the basics of setting up GPT3, how to access the API and how to use it to automate tasks in Google Sheets. Finally, we'll discuss the potential applications of GPT3 for automating tasks in other areas. By the end of this tutorial, you'll have a good understanding of how to use GPT3 for automating tasks in Google Sheets.

Step 1: Register an OpenAI account to get access to their API

Step 2: Add a script to Google Sheets, connecting to OpenAI.

Step 3: Watch Video

PART TWO CAN BE VIEWED HERE

Hi, everyone. Alex Player here. Today I'm going to be showing you how to use GPT-3, the power of OpenAI, right inside your Google Sheets. So this will help with categorization and organising anything, really. An example would be: you've got an ecommerce store and you want to change the categories.

Well, this is an easy way to do it. I'm going to be doing it with companies and people. So this is a normal Google Sheet, and I've got a list of people and the company they work for. Before we start, to make this work you will need an OpenAI API key.

There's a link on how to get that below. And then you will need to add a script via Extensions and Apps Script; a link on how to do that is also below. Right, now let's get started. First of all, thanks to Abi Raja for this, as this was his idea. Link to his YouTube channel below.

Big up. Okay, so we have our companies. So what I’m going to do is create a bucket. So you can just call this categories or whatever. This is the category list.

So what do we have here? I have an IT company, then a toy company. Supermarket. Supermarket. Sega.

Is it an IT company or a games company? I don't know.

I think all these can come in under three categories, really. Mothercare. Okay. Child store.

All right, let’s just go with that. So what we need to do now is call up a categories. So this comprises of two, possibly three parts. So we’ve got our category, the input and the rules. So first let’s tell it equals category, open brackets.

Let’s tell it which the categories are. So that comes from the bucket of D two to D. I’m doing dollars so when I drag it, it stays static rather than being like D cetera. So you wrap the letter in dollars. D two to the wrap five.

Awesome. And now we want to check this variable, which will be B2.

We won’t do any rules for now and hit enter. So it’s telling me IBM is an It company, which it is. Let’s drag it down.

Barbie it supermarket. Supermarket. Marks and Spencer. Mother care charge store. Perfect.

And then we have a category. Now, what we can do is make rules, because obviously there are exceptions. So let's say we want Marks and Spencer to actually be, not a supermarket, but a child store. So let's make some rules down here.

Rules: anything that contains Marks and Spencer is categorised as a, what did we say? Child store. Child store. Now we have our rules made.

We can add more if we want. So let’s pretend we’ve got one and 14 as well. Let’s add them in. So right now we are in the second one, which is input. So let’s add a comma to move over to rules.

And our rules live over at B13 to, say, B14. We want to wrap them in dollars so we can drag it: $B$13 to $B$14. Perfect.

Now, we’re not going to see any difference here, because IBM doesn’t get affected by this rule. But if we drag that down, then Mark suspenser is a child store. Fantastic. And next tutorial, I’ll teach you how to use the AI rather than just category. Thank you.

 

Special thanks to Abi Raja

<p>The post How to use GPT3 in Google Sheets for automation first appeared on Prompt Muse.</p>

]]>
https://promptmuse.com/how-to-use-gpt3-in-google-sheets-for-automation/feed/ 2
How to make a comic strip with AI : Midjourney https://promptmuse.com/how-to-make-a-comic-strip-with-ai-midjourney/ https://promptmuse.com/how-to-make-a-comic-strip-with-ai-midjourney/#respond Thu, 01 Dec 2022 00:16:52 +0000 https://promptmuse.com/?p=486 Step 1: Choose your protagonist Choose a photo of yourself or someone else to use as the basis for your protagonist. This will be the character that appears throughout your comic strip. Step 2: Use Midjourney to create character variations Upload your photo to Midjourney, an AI tool that can generate different versions of your [...]

<p>The post How to make a comic strip with AI : Midjourney first appeared on Prompt Muse.</p>

]]>

Step 1: Choose your protagonist. Choose a photo of yourself or someone else to use as the basis for your protagonist. This will be the character that appears throughout your comic strip.

Step 2: Use Midjourney to create character variations. Upload your photo to Midjourney, an AI tool that can generate different versions of your character based on prompts. Use the “Remix” button to adjust the prompts and create different variations of your character. Use the “Reroll” button to create new prompts and variations if you don’t like the results.

Step 3: Create a consistent look for your character. Use the seed number to create a consistent look for your character. The seed number is a unique identifier that can recreate a similar image to your original photo. Use this seed number with different prompts to create different emotions, actions, and backgrounds for your character.

Step 4: Upload images to Canva. Upload the images created by Midjourney to Canva, a free online design tool. Use Canva to resize and arrange the images, and to add text and other design elements to create your comic strip.

Step 5: Use DALL·E to fix any image issues. Use DALL·E, an online AI tool, to fix any issues with the images you created in Midjourney. Use the eraser tool to remove parts of the image that you want to reimagine, and then type a prompt to generate a new version of the image.

Step 6: Share your comic strip. Share your comic strip on social media or with friends and family. Be sure to tag Prompt Muse on Instagram so they can see your creation!

That’s it! By following these simple steps, you can create a comic strip using AI that is both unique and fun.

FAQ

Q: What is the midjourney prompt generator?

A: The midjourney prompt generator is a program that allows users to create a unique look for comic book characters in just moments. It provides a wide range of features and options to customize the prompts and images generated.

Q: How do I set up my own server on midjourney?

A: To set up your own server, go to Discord and click on ‘Add Server’. Enter a name such as ‘comic book art’ and you’re ready to go. Then add the Midjourney Bot to that server.

Q: How do I specify what type of character I am looking for?

A: You can input keywords such as ‘female comic book hero’ in the prompt to get images of similar characters.

Q: How do I make adjustments to my prompt?

A: With the remix mode, you can make changes and adjustments to suit your needs. You can also upload photos and customize the results.

Q: How do I create an iconic pose for my character?

A: You can create an iconic pose by adjusting the body position and facial expression of your character and considering what type of message you want to convey with the pose.

TL/DR:

Midjourney Prompt Generator is a great tool for comic book artists to create unique characters. Create your own server on the bot and use the remix mode to adjust the prompts generated. Invest in quality references, plan out your story arc, translate the generated prompt into the character, choose colors and assets, add props and accessories, and create an iconic pose to bring your character to life. Following these steps will ensure you have a consistent and memorable comic book character in no time!

 

Transcript

Welcome back to another Prompt Muse video. Today in this video, I'm going to be showing you how to make a consistent comic book character. Not only is this character consistent, it's also posable, so you can finally make that comic book you've always wanted to. If you know your stuff, head down to the description and you'll find chapters so you can skip ahead, because you are smart enough, chat. Let's get on with the tutorial.

The first thing we need to do is head over to Midjourney, and the website is here. Just log in using the link in my description and create an account; you can actually create 25 free generations, or free images, without having to pay anything. But let's go ahead and connect to the Discord. When you connect to the Midjourney Discord, this is pretty much what you see. It's a bit of a mess, but if you go over to one of the newbie rooms here, it gets even more messy with loads of people generating their prompts.

What I like to do is create my own server. So I just go over here on the left and click Add Server, choose Create My Own, then For Me and My Friends, and I'm going to call this one 'comic book art'.

Feel free to add a photo, but I'm not going to do that here. I'm just going to click on Create, and you now have your own room. Next I'm going to go back to the Midjourney server, which is this little boat icon here. It doesn't matter what room you're in; you just need to find the Show Member List icon near the top, and you'll see the Midjourney Bot.

If you left-click on that Midjourney Bot and click Add to Server, you can simply add it to your new server, which in my case is 'comic book art'. This brings the prompt generator over to your private room.

I'm just going to click 'I am human' to authorise, and then over here, go back to the new room that I just created. You can see the Midjourney Bot has just slid into your server. You can't do this in the newbie rooms, but if you want to keep your photos and your work away from everybody, you can work over here instead. Bear in mind this is still publicly accessible, so it's not truly private.

You can add /private onto the end of your prompt, but that's going to cost you $20 every time you do that. Only upload photos that you're comfortable having in a database that you have no control over. Obviously, my face is already out there, so I have no qualms at all with this whatsoever. The first thing we need to do is switch to Midjourney version 4 and switch the Remix feature on, which allows you to edit your prompts after the render. It's a pretty cool new tool.

To get into your settings, simply type /settings and hit Return on the keyboard, and you can see all the different versions of Midjourney, with version 4 being the latest. Midjourney 4 is currently an alpha version, so you can't adjust the aspect ratio because the full version isn't actually released yet. But I like to use it because it's really, really powerful and gives me the best renders for my comic book. You might want to consider an earlier version if you need to change your aspect ratio. And I believe that in the Test and Test Photo modes you can't actually upload your own photo.

So today I'm going to be using Midjourney 4, and it's highlighted green. If you click on Remix Mode, that will activate the Remix feature. To upload our photo, we go to this big white cross and click on that, then click Upload a File from the pop-up box and find a photo of yourself. This is probably the worst photo of myself and I keep using it for these tutorials, but never mind.

If you hit Return twice on your keyboard, that will create a link and the image will be hosted somewhere on the Discord server. We need to get that link, and to do that, just click on the image, then right-click on the image it shows you in the middle of the screen, and from the pop-up menu click Copy Image Link. Make sure it's Copy Image Link you're selecting, because we need the link, or address, of where that image is. That image is now hosted on the server.

So we can now type it into my prompt. So what I want to do now is create a baseline character. So like a hero character using qualities from this photo. So it will probably pull out my dark hair and the blue top I’m wearing. All we need to do is paste our link into our prompt.

So we'll create our prompt by typing /imagine and then Ctrl+V on the keyboard, which pastes your link. Now I want to make the baseline for my comic book hero, and I need to indicate that it's female. So I'm going to type in 'female', then a comma, then 'comic book hero'. This is going to render four iterations of images. Here you can write anything you want.

I just want a female comic book hero, so it's going to give me that as a stylisation. That's a really loose term, but you can be more specific; you can type in a certain artist's name, for example. There are a tonne of styles that you could use within your comic, and here's some inspiration just to get you started, if you can't think of any.

But I'm going to keep it nice and simple for this tutorial. I'm going to hit Return on the keyboard and that's going to create the four generations for me. So this is what Midjourney has given me: four iterations based on that image I uploaded. In fact, the photo I uploaded wasn't the best, because the overexposure blurred out my features slightly.

If you want quite specific features, I would use a more detailed image, but here I'm just going for a generalisation. I'm going to go with iteration number one. If you look down here, you've got U1, U2, U3 and U4, and the same with V1, V2, V3 and V4. The U stands for upscale, which increases the resolution of the image, and the V stands for variation.

Because we have the Remix button activated, we can adjust the prompt slightly, so we can change the hair colour or make the character smile. We're going to be using that in a minute, but for now I just want to upscale iteration one. If you didn't like any of the versions Midjourney gave you, you can press this reroll button, which runs the prompt again with a different seed and variation, and you can hit that as many times as you want. You will also notice it's pulling the background colour through as well, so it will be pulling all the different lighting elements, colours and backgrounds from the image that you upload. But we can adjust this in a moment if you stick with me.

This is our hero, and we can now make variations, which is the same as this button here; that will enter us into Remix mode. We can now add 'smiling female comic book hero', and that should make our character smile. This isn't precise at the moment, we are getting to that point, but just bear with the technology because it's evolving every single day. So we're kind of hacking around slightly. But don't worry, I'm sure it will get to the point where we can actually manually pose our character with an armature, which will be very cool.

For now, we're kind of just stuck with this, but honestly, it's changing day by day, and if you are not subscribed to the channel, this is a good time to subscribe. So as you can see, that's now made our main character smile, and that's pretty good. You can tell it's the same person, so you can actually get expression out of your character. What I'm going to do now is make another variation from that and have her crying with a burning city in the background. That was pretty much the prompt I used in the comic strip that I showed you at the beginning.

So I'm going to click on Make Variations and I'm going to put a comma, then 'burning city in background'. You could probably get way more creative than I am here, but I just want to keep it simple for the tutorial, so I'm going to put 'crying'. I know it seems really traumatic, but I couldn't really think of a storyline, and that's most of the work: thinking of a creative storyline and a character. So pre-plan this all out before you get into Midjourney.

And it’s not a fine art. You have to remember you are the creative director in this. You aren’t the artist. The artist is the algorithm. So you’re pretty much at the mercy of the algorithm here and what it’s going to give you.

You can use prompt engineering to get better results, but it's not going to give you the exact results you want. And if you're thinking about prompt engineering while this is rendering, go to www.promptmuse.com, where I have all different types of prompt engineering, or promptcraft, that you can use. Anyway, let's get back to it. As you can see, I've got my character now crying; oh, her face is on fire.

See what I mean? It’s not an exact science, but you can see she’s now upset with the Burning City in the background. It’s actually pulling the white through from the original image still. And it’s not really looking that much like me from the original photo. So this is what I do to get it looking more like the original image.

I’ll get the seed from this image and a seed is a unique Identifier, so that will recreate something similar to that image. Not the same, but very similar. So each image created will have a seed. And if you reuse that seed, it will create a more consistent image along the way. I’m going to click on Add Reaction and I’m going to write Envelope and click on this envelope here.

And what that will do is send your seed number to your inbox. So if you go to your inbox or direct messages up at the top here with a little discord symbol. And if I go to Mid Journey Bot, you can see it sent me the seed number for that image. So I’m just going to copy that seed number and then go back to my comic book art server where I was before. And I’m actually just going to paste this into a notepad over here because I’m going to use that in a second.

So I want to go back up here and use the link that was the identifier for this image. I'm going to type my prompt with /imagine, then put that original image photo link, and write 'crying female, burning city in background'. And then I'm going to paste in that seed number with the --seed parameter; you need to put a space between --seed and the number. So this is taking my original photo, giving it a prompt, and then attributing the seed from that image.

So it should be in that style. So this should give me an image closer to my original photo. So as you can see, there’s different paths and ways you can construct an image to be consistent. This is what I like to use to create consistency throughout my comic. So this is what midjourney has now given me with the seed.

So I think this is a really nice combination of the original photo and this character stylisation, and you can now use this with any prompt in your comic. Let's type in a new prompt with /imagine.

And I'm just going to copy the image link, so this is a link to the photo, and I'm going to put 'female, happy, dancing in forest'. And then I'm going to add the seed here.

OK. And then we’ll just see what that gives me. But it should give me my consistent character within any scene. So you can see now midjourney has created that same character which is based off my photo in a forest happy and using that same seed. So now we can literally place that character in any action shot with the blue top and the same scraggly hair as mine and create a comic.

It's that simple. So what I'm going to do now: I did a test on creating different poses using a photograph. I'll just go back to uploading an image; I did all these really, really awkward poses, and I'm going to choose this one on the phone. I actually found it quite hard to get Midjourney to do the fingers right on the phone using prompts alone.

So I decided I'd try this instead. I'll click on the image and then right-click Copy Image Link, like we did at the beginning, then put it in the prompt and type in 'female'. We can use that same seed, actually: 'female looking at phone, burning city in background'. And then I'm just going to copy in that same seed.

So it should give me the same stylisation that I'm after, but with a different pose. And there you have it: that same character from the images above, using the seed, with the pose of looking at the phone, which is based off that uploaded image. So you can see my character is pretty consistent throughout all the images. And you can do different emotions; you can do close-ups of the eyes.

You can import your own images or your own poses, as you can see here. I actually took a video of myself in all the different poses (I felt really awkward doing this) to help with the prompts where it wasn't getting the pose exactly as I wanted it. So now let's go over to Canva. So this is Canva.

It’s an absolutely free tool. There’s no need to sign up to the pro package to do this. All you need to do is go to Uploads here on the left hand side and go to Upload files and select the files that you’ve saved from your midjourney project. So you can just simply drag and drop the images into the boxes. You can resize the boxes.

It’s so simple. I don’t think you need a tutorial on it whatsoever, but I can do that if you need it. But there you can see some images that I added to the comic. This is a really rubbish comic, by the way. I’m sure you can do way better than this.

So, yeah, that's it. There's no post-production on there, so you can see the hands that have gone wrong, or certain details that you might not like. You can use the outpainting feature in DALL·E, so that's a kind of post-production, and I'm going to show you that now while you're here. So this is DALL·E, and it's probably the quickest way to fix hands.

You get 50 free credits if you sign up, and in the top right-hand corner, if you click on that round icon and click on Open Editor, you can upload your images to try and fix them. So click on Upload Image, and I'll go with this one here because the hands are a bit wonky. You can choose this eraser tool down here and just erase the part of the image you want to reimagine. I'm just going to type 'female holding mobile phone to head' and see what that generates. Sometimes the generations are a bit iffy, and it will also generate outside this area, in all this checkerboard area, reimagining what it thinks should be there.

I don't really need that because I just want the square image, but you can always crop that down in Canva; it's not a problem. So, as you can see, the hands are slightly better. They're not perfect.

You could probably do a better job in Photoshop, but to be honest, that would do the job absolutely fine. So I’m just going to click Accept on that. If you don’t like the generation, you can press cancel and just reroll again and that will give you another set of variations. But that will cost you credits. So click Accept and I think I’m down to zero credits now.

Again, it’s not my account, it’s my brother’s. Sorry about that. But you can see you can get a bit of a better fix to the hands and it’s not perfect. If you know how to use Photoshop, I suggest going in there and doing it in there. But this video was not about postproduction, this was about creating a quick comic strip book for fun and just to test ideas around it.

This isn't how I would do it professionally, but it just shows you how the development of the algorithm can speed up the art process massively. That might be a good thing and it might be a bad thing; I'd love to know your opinions in the comments below. I've also got a website with prompt crafting on it as well, which goes into more detail about how to get the best out of your prompts. I haven't really shown that in this video because I wanted to keep it nice and light and simple.

But if you go over to promptmuse.com, you will also see a directory of all the different generators, as well as all my tutorials in written format. I'm currently updating that at the moment, so there's loads more information to go on there. And if there's anything you think I should add or I'm missing, please email me and let me know, because I would like to make this a place for our community and hopefully at some point put a forum on there. But thank you so much for watching this video. I hope it has helped you and maybe given you the confidence to use AI, because it's so simple.

Guys, if you can use Google, you can use this. I would love to see your comic strips as well, so if you want to send them to me, just go to Prompt Muse on Instagram and send me a DM. I'd love to see what you've created using this technique. There are so many different techniques out there, but this is the one I've chosen to show you today, and I hope it works well for you.

Thanks very much and that will do it. Bye.

Why Midjourney 4 Has Just Gone Viral https://promptmuse.com/why-midjourney-4-has-just-gone-viral/ https://promptmuse.com/why-midjourney-4-has-just-gone-viral/#respond Thu, 01 Dec 2022 00:13:53 +0000 https://promptmuse.com/?p=482 Two. No, I spent hours and hours and hours in midjourney four. The results are quite frightening. So you might be wondering what’s new in this mid journey. It’s an alpha version. The full version has not been released. They have been building this for nine months with help from yours truly, you and you [...]

I spent hours and hours and hours in Midjourney 4, and the results are quite frightening. So you might be wondering what's new in this Midjourney. It's an alpha version.

The full version has not been released. They have been building this for nine months with help from yours truly, you and you and everybody who has used the Discord server. They've been taking all this information and seeing what humans actually really like. There is way more information and knowledge in this model, which means there are more creatures, people and places, and way more intricate detail within those images than we have seen before. They've combined all of that with new AI architecture and have built Midjourney 4. One of the main features is that it handles multiple subjects as well as multiple people.

You can put multiple people in a scene interacting with each other, which is insane, and it works really well. The only issue is that we still have that nightmarish hand issue where the hands all morph into each other, merge, and have six or seven fingers. They're probably working on that, and I can imagine that when they release the final version it will be fixed. Again with objects, you can put multiple different objects in a scene and use prompt layering to direct each object. It also supports the chaos parameter, so you can set it from zero to 100. So this is the alpha version.

So this is the introductory version. And as I was saying, the resolution and quality aren't fully there yet, because they're holding that back for the final version. You cannot currently do aspect ratios; again, they're still working on and tweaking that.

So you're just stuck with the same aspect ratio, and the four quads you can upscale slightly for a better version. You're not going to get a high-quality, different aspect ratio out of it. Basically, you need to expect quirks and inconsistency, this being an alpha version, so if there are any errors and things going wrong, hopefully they will be fixed in the full release.

And this is straight from the horse's mouth, quoting Midjourney: they've said version 4 isn't the final step, but our first step, and we hope you all feel it as the new beginning of something deep and unfathomable. They are moving forward to the full launch, whenever that will be. There is no date set for that, but for now we can play around with this alpha version.

And it's looking really promising. So come with me and I'll show you some things I created. Let's have a look at these. So this is what I found really interesting: if you sell prints on Etsy or eBay or whatever, you're probably used to going to Shutterstock or Getty Images, to name a few, to get your mock-up images.

Now, I actually create mock-up images in Blender, but here I've just typed in 'A4 white frame on wall, focus on frame, in a stylish home, realistic'. In some of these images it's given me artwork inside the frame, which is not a problem; I can remove that very easily in post-production. Now, this test was quite interesting. Prior to version 4, I had some big issues with motion and action shots.

It was adding extra limbs and I was having to remove them with the Remix feature, and it was all a bit cumbersome. Now they seem to have nailed the action shots, and I loved these. I wanted to see what the photorealism was like, so I did a plus-size model photoshoot, and look how beautiful these images are. If I just click on one... I mean, that looks almost real.

That's insane. To be honest, if this image was in a magazine, I don't know if I would be able to tell the difference; I wouldn't be able to tell you that it was created with AI. Next up, animal photography. This is using the multiple-subject prompt.

So I've put 'a Dalmatian dog, a golden retriever and a Jack Russell sat together, pet photography, ultra HD, focus on dogs, blurred edges, super detailed', et cetera, et cetera. And yeah, this one has just given me one Dalmatian for some reason. But that's really because I actually spelled Jack Russell wrong.

That's why I was just trying to put another dog in, because my spelling is atrocious. Who doesn't want to dress their dog up as Baby Yoda? You can put outfits on your dog with the algorithm. Moving on to a kind of Octane or Blender render.

I just went for a fairy character, funny, Octane render, which is a renderer. So when Midjourney eventually comes out with their 3D programme, I suspect these will be built with polys, which is insane, and I'm looking forward to seeing that. But yeah, it's generated something that you would expect to see on the front of 3D Arts, if that's still around and you're old like me. Going back to the photography, I wanted to try out a little girl dressed up as a fairy.

Photography Ultra HD and this is what it’s given me. And then again, you can upload a picture of your child. I wouldn’t be uploading a picture of my child to the Internet. There you go. And then add this style on to them.

This is a 3D Isometric withered room within Blender. And again, it’s served. It’s given me what I’ve asked for, frighteningly fast and accurate. And again, you can change the details in this image with the Remix feature. I then changed it to be a 3D Isometric dungeon room Blender.

And there you go.

I'm still on the 3D theme. I went for '3D low-poly game hut, Blender'. Again, I'm using really short, to-the-point prompts, and they've said that you can add more detail and multi-layered prompts, which is really cool. But yeah, it's giving me Pinterest-worthy 3D game art results, which is crazy. I don't think many industries are safe from this.

I know David Holz has said that he's working on a 3D programme. With another click of a button, you'd be able to get the poly mesh for these game assets, which is insane. Next, 'oil painting of a blackbird, bold brush strokes'. Again, I wouldn't be able to tell you that an AI painted that.

I just did a few tests on the bold brushstrokes as well, to get that stylisation. What will be cool is that you can upload a picture of your dog or yourself and add this style to your image, and it gives a very satisfying look. And I know Midjourney had been looking at what people were actually creating in Discord, which pictures they liked and which they didn't, and this taught the algorithm what is pleasing to the human eye. I think they've really got this right in Midjourney 4.

I then moved on to watercolour, like loose watercolour. So this is 'watercolour zebra, loose brushstrokes, watercolour'; I repeated it because I wanted to emphasise the fact that it was watercolour, because sometimes if you don't put it in multiple times, it won't create that loose, watery style. I mean, look at the top-right one. That's insanely beautiful.

This is really good. This is better than I could paint, which is worrying. This isn't a tool at all; this is a replacement. So I wanted to see a bit more detail.

So I put in 'a felted mouse holding a flower, cute, stupidly detailed'. Midjourney is serving us, guys, on that handmade theme. Again, I'm looking at intricate detail here.

I did an embroidery-style needlework of a deer and a fox, and this is what it gave me. Look at the details; you can see the actual embroidery. Again with the watercolour style: I tried to recreate an Etsy birthday card sort of thing in a watercolour style.

And as you can see, 'watercolour and ink Georgia with a party hat on', and that's what it gave me. I then tried a Beatrix Potter style, I guess like children's book artwork: 'a frog dressed in clothes, by Beatrix Potter, watercolour style'. And bam, it gave me that. And look at the detail of the clothes.

It's amazing. Again, Midjourney is a better artist than I am. And then I looked at a trending image on Etsy and recreated that in Midjourney, and it's far enough removed from that image on Etsy that I could probably copyright it and resell it. So that's a black and white photo of an elephant in a buffer, but photorealistic.

Yes. Thank you very much, guys. And on that note, I have 500 subscribers. I don’t know why you guys are subscribing, but I like you very much for doing that. But the reason why I love doing this channel is that it can help open up the AI community to those that are unaware of it or do not have much knowledge of digital art and AI art.

And that's what I'm here for. I actually based my career on learning from YouTube videos, I know, but I managed to forge a career in it somehow. And that will do it for now.

Bye.

How to use AI to Render in 3D – It’s here https://promptmuse.com/how-to-use-ai-to-render-in-3d-its-here/ https://promptmuse.com/how-to-use-ai-to-render-in-3d-its-here/#respond Thu, 01 Dec 2022 00:08:35 +0000 https://promptmuse.com/?p=477 Guys, it’s here. We finally have AI in a 3D programme. My phone’s gone. Well, kind of. Let me explain. It takes your primitive objects and your prompts and combines them and creates an AI render to the perspective that you want. Finally here, I cannot tell you countless hours I have spent in midjourney [...]

Guys, it’s here. We finally have AI in a 3D programme. My phone’s gone.

Well, kind of. Let me explain. It takes your primitive objects and your prompts, combines them, and creates an AI render from the perspective that you want. It's finally here; I cannot tell you how many countless hours I have spent in Midjourney putting camera angles in place to try and get the perspective right. So imagine that this is the baseline of what's to come. The future of AI rendering is definitely going to be integrated into 3D; I mean, David Holz has already suggested that they're working on something that will be released next year. Very, very exciting. Before we dive into the tutorial, I just want to give you a brief overview and show you how powerful this plugin actually is. This plugin means that we can now create AI renders from any perspective. So I've quite literally thrown down some very primitive shapes here, and if I just hit Render, with my prompt already set up over on the right, you can see it's rendered me a train in that perspective with trees behind it, and that is what I've asked for in the prompt. The plugin that you need to use is called AI Render (Stable Diffusion in Blender).

To get hold of this plugin, just go to Blender Market; the link is in my description below. You will need to log in and make an account, but don't worry, it's absolutely free. If you want to support the developer, you can give a donation here, but if you don't have the money at the moment, you don't have to pay anything: you can click $0 and then click on Purchase, and once it's added, go to the cart, check out and get your download for free. Once you've checked out and downloaded that zip, you need to go into Blender, go to the top horizontal toolbar, click Edit, then go down to Preferences and then Add-ons. On the top horizontal toolbar, click on Install and navigate to the zip file you just downloaded; it should be called ai-render. Okay? Just install the add-on, and if you don't see it straight away, type 'Stable' in the search bar and it should come up. Ensure the checkbox has a tick in it, and then if you expand it down, you will see 'Sign up for DreamStudio'.

You do need an account, and if you don't have one, just create one here by clicking on this button. Once you've logged in, navigate to the API key section and create an API key; keep this absolutely secret. Just click on Copy, then go back to Blender and you will see the API key field here; paste it in there. To save all the settings, you just need to go to this hamburger icon down here and click Save Preferences. Okay, so the plugin is now installed. This is a default scene, so I'm just going to click on the cube and hit Delete on the keyboard. Then I'm going to hit Shift+A and, under Mesh, put a plane down and just scale it up, bigger than that. I'm going to hit Shift+A once again and, under Mesh, go to Torus, and again scale that up. I'm just going to move it upwards slightly and then hit zero on my keyboard, which gives me my camera viewport if I go up here and click on Viewport Shading. I want to change the colours of my objects to help the code distinguish each object from one another.
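
As a quick aside, that API key is what lets the add-on send your render settings and prompt to DreamStudio behind the scenes. Purely as a rough illustration of the kind of request involved, here is a basic call to Stability's v1 REST API; the engine id, prompt and file names are placeholder assumptions, and the add-on's own requests (which also pass the Blender render as a starting image) will differ.

```python
# Rough sketch: a text-to-image request to the Stability (DreamStudio) API,
# the service the AI Render add-on talks to with your API key.
# Assumes the v1 REST endpoint and `pip install requests`; engine id, prompt
# and output file name are placeholders, not the add-on's actual values.
import base64
import os
import requests

API_KEY = os.environ["STABILITY_API_KEY"]   # the key created in DreamStudio
ENGINE = "stable-diffusion-v1-5"            # placeholder engine id

resp = requests.post(
    f"https://api.stability.ai/v1/generation/{ENGINE}/text-to-image",
    headers={"Authorization": f"Bearer {API_KEY}", "Accept": "application/json"},
    json={
        "text_prompts": [{"text": "donut with sprinkles, realistic food photography, 8k"}],
        "width": 512,
        "height": 512,
        "steps": 30,
        "cfg_scale": 7,
        "samples": 1,
    },
    timeout=120,
)
resp.raise_for_status()

# Each generated image comes back base64-encoded in the "artifacts" list.
image_b64 = resp.json()["artifacts"][0]["base64"]
with open("ai_render_test.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```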

I'm going to click on the donut, then the material slot, and create a new colour; I'm going to make it a kind of brown, doughnutty colour. Then I'm going to click the plane and again just make it a white colour, and that's it, we're done. If you go over to Render Properties, we are now going to enable AI Render under the AI Render tab. If you click on that and then click on the question mark next to the image size, it's set to 512 x 512 by default, and that's fine for me because I want to keep the render times low, so click OK. You must do this, otherwise you will get an error message while rendering. Then you can see you've got your prompt down here. Remember, this is based on Stable Diffusion, so if you're used to using DreamStudio or Stable Diffusion itself, you can use the same prompts in here, and that should help. Now, if you see this lady's face here, if you click on that you will see all the preset styles that come with this plugin. I'm going to use the Product Shot preset, and I'm going to give the donut a description of 'donut', of course, 'with sprinkles, realistic food photography, 8k', and we're done.
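
If you'd rather script that scene setup instead of clicking through the UI, here is a minimal Blender Python sketch of the same idea: delete the default cube, add a plane and a torus, and give each a flat colour so the AI has distinct shapes to work from. It only sets up the geometry and colours; rendering still goes through the AI Render add-on's panel.

```python
# Minimal sketch: recreate the plane + torus "donut" scene from the video,
# run from Blender's Scripting workspace or Python console.
import bpy

# Remove the default cube if it exists.
cube = bpy.data.objects.get("Cube")
if cube is not None:
    bpy.data.objects.remove(cube, do_unlink=True)

# Ground plane.
bpy.ops.mesh.primitive_plane_add(size=10)
plane = bpy.context.active_object

# Torus, lifted slightly so it sits above the plane.
bpy.ops.mesh.primitive_torus_add(location=(0.0, 0.0, 0.3))
torus = bpy.context.active_object

def flat_material(name, rgba):
    """Create a simple material with a single flat viewport colour."""
    mat = bpy.data.materials.new(name)
    mat.diffuse_color = rgba  # (R, G, B, A)
    return mat

# Brown "doughnutty" colour for the torus, white for the plane.
torus.data.materials.append(flat_material("Donut", (0.4, 0.25, 0.1, 1.0)))
plane.data.materials.append(flat_material("Ground", (1.0, 1.0, 1.0, 1.0)))
```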

We just head over to Render on this top horizontal toolbar and then click Render Image; you can hit the shortcut F12 if you prefer, and we should get a donut. So that's pretty cool: we've got a doughnut in that perspective. Now, if we scroll down here and click on Operations, we can create a new image from the last render. So if that's not quite the donut you wanted, you can click on this, and it will create a new render from the rendered image rather than from the simple geometry. If we click on that and see what it gives us, it's given us a pretty realistic donut, which is great for overpainting or using as stock imagery. You will also probably notice that you're now in the AI Render view, so to get back to your geometry you just click Layout, and there you go; press zero again to come out of the camera view, and it's that simple. This is a great example of the power of this plugin and how quickly this technology is evolving. As you can see, I've made this very rudimentary background of mountains with a lake, and if I hit zero to go into the camera view, let's see what it generates.

So go up to Render, then Render Image, and look at that. That is amazing; it has created that from my rudimentary geometry. You can see the direction these plugins are going in and how the evolution of this technology is coming along. As you can see, it's not exactly there yet, but it definitely is coming. You can't do 3D animation just yet, and as far as I'm aware, you can't animate from Blender, but I know that should come in the coming days, and of course I will report on it when it does. Thank you to Ben from AI Render for creating this fantastic bridge plugin. If you like this video, hit subscribe and like; if you don't like this video, hit subscribe and like anyway. This is just a quick overview to show you and demonstrate how powerful the baseline of AI within a 3D programme is going to be. I am so, so excited for what's to come, because if I haven't told you before, I used to be a professional 3D artist. So guys, we are nearly on 500 subscribers. We are on 497, so I need three more subscribers, guys, to get to 500.

And that will mean I’ve got 500 subscribers. Okay, thanks. Bye.

How to fix AI art faces : Midjourney, Stable Diffusion, Dalle 2 https://promptmuse.com/how-to-fix-ai-art-faces-midjourney-stable-diffusion-dalle-2/ https://promptmuse.com/how-to-fix-ai-art-faces-midjourney-stable-diffusion-dalle-2/#respond Wed, 30 Nov 2022 20:03:41 +0000 https://promptmuse.com/?p=238 Have you ever taken a mid-journey photo using AI-generated art only to find your image looking like a wonky mess? Fear not, we’ve got you covered. In this guide, we’ll show you three methods to fix those facial features, smooth out skin and achieve a more harmonious image. Method One: Arc by Tencent Arc by [...]


Have you ever generated an image in Midjourney, or any AI-generated art, only to find it looking like a wonky mess? Fear not, we've got you covered. In this guide, we'll show you three methods to fix those facial features, smooth out skin and achieve a more harmonious image.

Method One: Arc by Tencent

Arc by Tencent is a simple, free in-browser app that can be used without an account or any registration. Follow these steps to use it:

  1. Navigate to Arc and hit the “Upload” button.
  2. Select the AI-generated image that you want to fix from your files. The upload process may take some time to complete.
  3. Use the app’s “before and after” feature to adjust the settings until you’re happy with the outcome.
  4. Click the “Download” button to retrieve the finished image.

Note: This method may not be suitable for illustrative or textured images, as it makes them appear photorealistic.

Method Two: GFPGAN

GFPGAN is a face-restoration program that can be run on Google Colab. It's free to use and can be accessed via your Google Drive account. Here's how to use it:

  1. Go to the GFPGAN Colab notebook and click on “Connect”.
  2. Complete each of the five steps by clicking the “Play” button next to each one.
  3. Wait for the upload to complete.
  4. Click on “Visualise” to see the finished image.
  5. Click on “Download Results” to download the final image.

Note: This method is slightly more complicated and requires a Google account to use. However, it produces high-quality results.
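
If you'd rather skip the notebook interface, the same face restoration can be run locally with a few lines of Python. This is a minimal sketch, assuming the open-source gfpgan package and a locally downloaded GFPGANv1.3 model file; the file names are placeholders.

```python
# Sketch: restoring faces in an AI-generated image with GFPGAN.
# Assumes `pip install gfpgan opencv-python` and that GFPGANv1.3.pth has been
# downloaded locally (the Colab notebook handles this for you).
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.3.pth",  # placeholder path to the model weights
    upscale=2,                    # also upscales the whole image 2x
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,            # leave the background untouched
)

img = cv2.imread("janky_face.png", cv2.IMREAD_COLOR)
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("restored_face.png", restored)
```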

Method Three: Using Photoshop

If you’re a Photoshop user, this method may be the most familiar to you. However, it’s also the most time-consuming and requires a subscription. Here’s how to use Photoshop to fix your AI-generated images:

  1. Open the image in Photoshop.
  2. Use the brush tool to paint over the facial features that need fixing.
  3. Adjust the brush size and opacity as necessary to get the desired effect.
  4. Save the image and compare it to the original.

Note: This method provides the most control over the final image, but may not be the most accessible for everyone.

So there you have it: three methods to help fix wonky faces in your Midjourney and other AI-generated images. Whether you use Arc by Tencent, GFPGAN or Photoshop, you'll be able to achieve a more harmonious image in no time. Experiment with these methods to see which one works best for you, and let us know in the comments which one you prefer!

 

Transcript

 

Hello and welcome back to another Prompt Muse video. Today I'm going to be showing you how to fix those janky Midjourney faces. This also works for DALL·E output images and Stable Diffusion, well, essentially any AI-generated art. So come along and I'll show you how to fix those wonky eyes and funny noses.

So, the first method we are going to be using is Arc by Tencent. It's an in-browser app; you don't need to log in, you don't need to make an account, and you certainly don't need to pay for anything. So it's really easy, and let me show you how to use it. We navigate over to Arc, and as you can see, it's pretty much a WYSIWYG.

So we're going to hit Upload and find my janky folder. This image was actually created in Midjourney, if you wanted to know; I can't remember the prompt, it was something about a girl, flash and a scene. It takes about 30 seconds for it to load your image in.

Unfortunately, sometimes these in-browser apps can be a bit slow because there are a lot of people using the service. Here we go. What I like about Arc is that it shows you the before and the after, so you get this little slider and you can just push it back and forward to see the difference the app is making. Now, you can see it's not only sorting out the facial features; it's smoothing the skin and giving a colour correction as well.

And if I flip right over, it's actually getting rid of a bit of the shininess on the nose and refining some of the environmental detail. Now, I think that looks quite good. The only thing is that it's removing the textures, so if you've got a kind of textured, illustrative look, it might not be the programme for you, because it's going to make it look photorealistic.

But if you want a quick fix to upload your images to the Instagrams, this is a very quick and easy process. You just click on Download and the job is done; it spits out your image for you. Okay, now to method two. This is called GFPGAN, and it's run on Google Colab. Please don't be intimidated by the user interface.

It's very easy to use. What's really cool about GFPGAN is that you can actually save and run it from your Google Drive and make modifications to the app. All you need is a Google account. But for now, I'm not going to be doing that; I'm just going to click Connect here and that will connect you.

There are five steps in total, and you just click through each one and upload your image. So, no talking, let's get to it. We scroll down to step one, preparation. The user interface shows the code executions being run.

But don’t worry about that, you don’t need it. You’re not a programmer. So when you hit the Play button here, what it will do is run the code and you just wait to the bottom until it says 100%. It usually takes about 30 seconds. In fact, it tells you how many seconds it takes.

We’re 100%. Okay, so let’s move on to upload images. And that’s number two. So again, hit the Play button. Now you can select your image from your computer.

So I'm going to go to Browse and get my janky image. It's important to wait until the Play button has green text next to it before you move on to the next step. It just takes a minute; I'm just going to have a cup of tea. One thing to note: you can see the progress of your image being uploaded here at the bottom, so you're not waiting for an indefinite amount of time.

Okay, that has now loaded and I've got my green tick. Let's move on to inference. So find that Play button and hit it again. At the bottom here we have the progress percentage, so keep an eye on that.

It usually takes up to 40 seconds, so it won’t take long at all. OK, so the next step is to visualise your image. So click the play button. Once again, we’ve only got one step after this. If you scroll down now, you will see your images.

And again, it's got the comparison between what you put in and what it spits out, so it has a very similar effect to Arc. As you can see, it's created a more symmetrical, more harmonious image; it has sharpened the image and smoothed the detected skin, as well as upscaling the image slightly. The fifth and final step is to hit Play on number five, Download Results, and that will download a lovely zip of your image.

So our third and final method is using Photoshop. You will require a subscription for this, so it's not free, and you need some skills. But with a quick tutorial on YouTube, you'll be able to paint the eye, no problem, I'm sure. This is the final technique, and I've done a really rubbish (don't judge me by this) but very quick eye repaint so you can see what you can achieve. Now, personally, I prefer this method out of the three.

You can create your eye from scratch and ultimately have complete artistic control over your design. Also, you keep the integrity of the original painting, so if it's done in quite an illustrative style or a loose painting style, you can keep those details. And here is a comparison of the faces across the applications: I've got the original, then Arc, then GFPGAN, and then two minutes in Photoshop without any colour correction.

So Arc and GFPGAN are actually pretty much photo-restoration applications, but you can use them to fix up your dodgy AI images. I would probably suggest investing some time and knowledge in learning Photoshop; then you can become a master of your own destiny. So those are the three methods for de-jankifying AI images. If you have a better method, let me know in the comments below and I'll try it out.

So thank you very much and that will do it. Bye bye.

Oh, yeah. Oh yeah, I always forget this bit. Can you like and subscribe and hit the notification button, and I will come out with more videos. Okay, thanks.

Bye.
