🦄🌈🌎 Utopian GPT: being the tortoise in a race without a finish line
FREE·the·ROBOTS!!🤖🔓 Utopian GPT progress, notable advances in AI, and our next big steps!! 5-minute read!
950 human-written words
330 ai-written words (ChatGPT-3.5)
3 ai-generated images (Midjourney 4 & 5)
6 hyperlinks
estimated read time: 5 minutes
When I first thought, ‘I should make ChatGPT code a free version of itself,’ I knew it would be easy. I expected the major hurdle to be getting permission from OpenAI, but I was mistaken about that. Last week we applied our OpenAI API ‘key’ to WWAiD.tech and verified its function through a simple test. It’s live now, but it doesn’t yet produce any functional results.
If you want to verify it yourself, visit WWAiD.tech, open the developer tools, and use the “console” tab to check for errors. There shouldn’t be any.
But where does FREE·the·ROBOTS!! go from here? How should we proceed given how rapidly the landscape is changing?
The last two weeks have brought some very exciting AI news; I want to tell you about some of it. As you’re probably figuring out, FREE·the·ROBOTS!! will never compete with anyone in this market; we’re just very happy to have a front-row seat. Even if it’s more accurately a twelfth- or twenty-eighth-row seat. 🤩
Have you heard about Uncle Rabbit from Looking Glass? This product, a little frame containing a digital, animated bunny, is for your literal desktop. It listens and responds to every request using OpenAI’s ChatGPT, while the bunny moves around in the frame. It’s a “Utopian GPT” of sorts, but it isn’t free.
Runway, a company whose landing page says “Everything you need to make anything you want,” introduced Gen-2 ‘text-to-video’; the videos are still crude, but things are heating up. Runway’s not lying. It’s just all too new to know for sure! Check out this video to learn more.
More incredible still, Spline has introduced ‘text-to-3D.’ This is perhaps the most exciting development yet for us designers. [Scroll down their landing page to see a few demos.]
Text-to-3D is outrageous! Users build 3D scenarios and keep adding prompts until they’re satisfied. Prompts arrange elements, change colors, and add textures, and, of course, there are additional generative technologies to help you go nuts.
While this might be a bit clunky for a while, one can already build beautiful 3D “scenes,” which could be as simple as a colorfully lit seamless stage for generating simple, colorful graphic backgrounds, not to mention interfaces.
Logos and graphics can be added to scenes, making wholly proprietary artwork feasible and opening up ample opportunities for stock-image distribution of whatever designers are about to imagine. That is, if stock images hold their value in a world where anyone can create anything.
If you want to try it, you’ll have to join the waiting list, as I did. All of my understanding of Spline is based on demos I’ve seen of the AI in action. I give it one to three months before it’s up and over the top.
Our Utopian GPT
As for our Utopian GPT: at first, I expected to share all of our progress through my PROMPTS and GPT RESPONSES, but that is proving quite challenging as well.
In essence, if I focused on nothing but FtR!’s Utopian GPT, I suspect I could have it functioning within a few days, if not hours. As I came to understand this, all I could think was that I should be the tortoise in a race without a finish line; I labeled it “performance art” and I’ve been taking my time.
That’s going to shift slightly now: I’m directing these energies toward maximum progress every week and reporting like this, rather than staging big reveals, taking small bites, and trying to make something entertaining out of each one. The elephant is too big, and I plan to eat it all!
Here is a conversation with AI that helped us reason through these steps.
FREE·the·ROBOTS!! PROMPT [03·21·23]
Should we add the API to the code before or after designing/coding the popup window response?
OpenAI ChatGPT-3.5 RESPONSE:
To answer your question, it would be best to add the API to the code before designing/coding the popup window response.
This is because the API will be responsible for generating responses to user input, and you will need to have the API integrated and functioning properly in your code before you can design and code the popup window response to display those generated responses.
Once you have the API integrated and generating responses, you can then work on designing and coding the popup window response to display those responses in a chat-like conversation format. You may also want to consider implementing a feature that allows users to input additional questions in the chat-like conversation.
To integrate the API, you can use JavaScript to make a request to the API endpoint and retrieve the generative response. You can then display the response in the popup window using HTML and CSS. There are also many libraries and frameworks available that can simplify the process of integrating APIs and designing UI elements, such as React or Vue.
Good luck with your project!
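To make that advice concrete, here’s a minimal sketch of what the JavaScript fetch request it describes might look like. This is not the actual WWAiD.tech code: the function names are made up, and the endpoint and model name are our assumptions based on OpenAI’s chat completions API.

```javascript
// Sketch only — helper names are hypothetical; endpoint and model
// name are assumptions, not the actual WWAiD.tech implementation.

// Build the URL and fetch options for a single chat request.
function buildChatRequest(apiKey, userMessage) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}

// Send the request and pull the reply text out of the JSON response.
async function askChatGPT(apiKey, userMessage) {
  const { url, options } = buildChatRequest(apiKey, userMessage);
  const response = await fetch(url, options);
  const data = await response.json();
  return data.choices[0].message.content;
}
```

Splitting the request-building out of the network call means the payload can be inspected (or unit-tested) before any popup window exists — which is exactly the “API first, UI second” order ChatGPT recommends above.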
FREE·the·ROBOTS!! additional PROMPT [03·21·23]
Is there a way to test if the API is functioning properly without having any outputs designed for the popup chat?
OpenAI ChatGPT-3.5 RESPONSE:
Yes, you can test if the API is functioning properly without having any outputs designed for the popup chat. You can start by making a simple API request and checking if you receive a response.
For example, you can use a tool like Postman or a simple JavaScript fetch request to send a test request to the API endpoint with some test input and see if you receive a response back. If you receive a valid response, then you know that the API is functioning properly and you can move on to integrating it into your web application.
Once you have integrated the API into your web application, you can start testing the popup chat by creating a simple container or dialog box to display the API response. You can then gradually add more functionality to the chat window as you develop it.
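And here’s a hedged sketch of the “simple container to display the API response” step. The function name, class names, and container id are all hypothetical; the real chat window will no doubt look different.

```javascript
// Sketch only — the class names and container id are assumptions.

// Turn one chat message into an HTML string, escaping the text so a
// reply containing < or & can't break the markup.
function renderMessage(role, text) {
  const safe = text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return `<div class="chat-message ${role}">${safe}</div>`;
}

// In the browser, each rendered message would be appended to a
// (hypothetical) chat container like so:
// document.getElementById("chat-popup")
//   .insertAdjacentHTML("beforeend", renderMessage("assistant", reply));
```

Keeping the rendering as a plain string-building function means it, too, can be checked before the popup is styled, matching the gradual build-up ChatGPT suggests.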
FREE·the·ROBOTS!! additional PROMPT [03·21·23]
Perfect. Thank you.
OpenAI ChatGPT-3.5 RESPONSE:
You’re welcome. If you have any more questions, feel free to ask.
As you can imagine, we have a lot more questions to ask. We’ll share more of them next week. Again, we ask you to share all of this now. 🤩
Next up, two moves forward
The first is to publicly acknowledge the people who are helping. And the second is to generate functional outputs from the application. Neither will be a cakewalk.
First, we’re going to extend the landing page below the fold. [For newbs, that means below the bottom edge of the browser.] When you scroll down, there will be a “special thanks” page that will resemble, for example, palooza-style concert posters with contributors listed according to donation levels. More on all that coming too!
Next, rather than attempt to prompt from scratch, we’re going to ask ChatGPT to “design” the pages for us based on our landing page. It took a few days for me to realize this was the right approach. For certain, if ChatGPT can make a functional website from a sketch on a napkin [which it has], it can handle our prompts. And we’re looking forward to getting started.
Thanks for reading!