AI in Action: Navigating Generative Tech at Lullabot

Host Matt Kleve assembles a crack team of Lullabot experts from various company departments to share their hands-on experiences and insights into how innovative technology influences and enhances our field.

We discuss integrating AI into coding, design, and tasks like writing emails and RFP responses, along with the broader implications for the future of web development.

Join us as we navigate the complexities, challenges, and vast potential of Generative AI in shaping our world.

Episode Guests

Seth Brown

Photo of Seth Brown, white male wearing a white button down oxford shirt in front of a gray background.

Seth Brown is Lullabot's CEO and serves on the Board of Directors.

More about Seth

Karen Stevenson

Karen Stevenson wearing a white button down shirt and blazer with gray backdrop behind her.

Karen is one of Drupal's great pioneers, co-creating the Content Construction Kit (CCK), which has become part of Drupal core.

More about Karen

Andrew Berry

Photo of Andrew Berry, white male wearing a blue button down oxford shirt in front of a gray background.

Andrew Berry is an architect and developer who works at the intersection of business and technology.

More about Andrew

Helena McCabe

Helena McCabe wearing a yellow sleeveless top with white polka dots in front of a gray background.

Helena is Lullabot's friendly Technical Account Executive. She's based out of Orlando, Florida. She loves dogs, web accessibility, and unusual flavors of ice cream.

More about Helena

Matt Robison

Matt Robison wearing a dark gray button down shirt in front of a gray background.

Matt has been working with Drupal since 2008. He loves spending his time reading, writing, playing with his three kids, and eating lots of ice cream.

More about Matt
Transcript

Matt Kleve:
For November 16th, 2023, it's the Lullabot podcast.
Matt Kleve:
Hey everybody, it's the Lullabot podcast, episode 267. I'm Matt Kleve, a senior developer at Lullabot, and today we welcome our new robot overlords. Lullabot is a strategy, design, and development company primarily working in the Drupal space, building websites for great clients. Today we're talking a little bit about AI: new tools that are out there that are maybe changing the way people are working, or enhancing it, or making it more difficult. We're gonna hear from a few people from different parts of the company who might be using AI tools, generative AI, in new and interesting and different ways. To kick it off, we're going to bring on Lullabot's CEO from Carbondale in the Colorado Mountains. Hi, Seth Brown.
Seth Brown:
Hi, good morning or whatever it is, wherever you are. So this is the podcast about how to generate pictures of like monkeys riding dogs in red riding costumes.
Matt Kleve:
If you wanted to be.
Seth Brown:
I thought that was the key, generative AI function.
Matt Kleve:
The world is your dog horse. Also, with us, we have our technical account executive from Orlando, Florida, Helena McCabe. Hi Helena!
Helena McCabe:
Hi!
Matt Kleve:
And our content writer and strategist from Louisville, Kentucky, Matt Robison. Hi Matt!
Matt Robison:
Hello!
Matt Kleve:
Chief Operating Officer and Chairperson of the board from Normal, Illinois, Karen Stephenson. Hi Karen!
Karen Stevenson:
Hello, how are you? People don’t usually get that wrong. But that's good!
Matt Kleve:
Normal sounds like a great place to be.
Karen Stevenson:
Normal is a great place to be. It's where all the normal people are, yes.
Matt Kleve:
Also joining us later today is I understand Lullabot’s Director of Technology, Andrew Berry had a conflict but should be joining us here shortly.
Helena McCabe:
Oh fun!
Matt Kleve:
So generative AI, so we're all in a group of people that are talking about some AI tools within Lullabot and how we're using them and got together, and after our meeting yesterday I just kind of sent out an invite to everybody who was there and said, hey, if you can make it to a podcast, I'd love to hear how you do it. So we have some folks who are executives, who are dealing with sales and clients, and writing for the website. I'm a developer, Andrew sees lots of technology too. It seems like AI can impact everything we're doing at Lullabot in some regard, right?
Seth Brown:
Yeah, absolutely. I mean, I got into GPT and had sort of this mind-blowing moment where I felt like, this is gonna be utterly transformational, like the way that the internet was, the way mobile phones were. There was no hesitation or doubt that I've sometimes felt with some of the other technological innovations of the last 4 or 5 years, where I was like, yeah, I don't know if that's gonna be a thing or maybe that's not gonna be a thing for a while. But this one just hit home immediately, and certainly credit to OpenAI for the interface that made it so approachable. But I started using it for all kinds of, actually this is gonna make everyone uncomfortable, but legal questions, so clarifying things about the ESOP or asking - as CEOs we have lots of legal questions and lawyers are super expensive - and it's not that, like that poor lawyer who used it to file their brief, I wasn't trying to actually have it generate things, but it was more just having a conversation to help educate me about certain areas of contract law or ESOP law or those sorts of things. And then I realized just more and more use cases, like monkeys riding on dogs pictures. Now that we have DALL-E 3 in GPT-4 Turbo, there's all kinds of cool things we can do now with these GPTs, with the agents that are starting to come out. And so I find myself now using it a lot to check my writing. It comes into my writing process a lot for RFPs and RFIs and stuff…
Matt Kleve:
Request for proposal and request for information?
Seth Brown:
Yep.
Matt Kleve:
Okay. And that's something that our prospective client would send you saying, hey, we need somebody to build our website. Maybe you can tell us about how you might do that, right? That’s kind of the RFP process in case somebody doesn't know.
Seth Brown:
Yep, and so we do a lot of those and they're tedious and they're hard and they're always slightly different in a way that makes reuse difficult. So you can do all sorts of things. You could create an agent and feed it now like let's say the last 2 or 3 proposals because it's good at remixing and summarizing those sorts of things. But you're definitely gonna have to rewrite whatever it generates. Like it's a co-pilot it's augmented intelligence that maybe helps you brainstorm and think it's no replacement yet for writing although it may someday.
Helena McCabe:
Also clients might not love hearing that their proposals were worked on by GPT. But I guess we should, never mind never mind.
Seth Brown:
[inaudible 00:05:34] that's kind of the scary world we're living in now is that when we were at NASCIO, which was a state conference for state CIOs, one of the things that everyone was talking about in the RFP AI session, was literally that what happens in a world where basically bots are writing the RFPs and the RFIs and responding to them because that just seems like such an obvious use case to everyone.
Helena McCabe:
Woah!
Seth Brown:
They were sort of like this could break the whole process. You're gonna have robots talking robots.
Helena McCabe:
It's just battlebots at this point.
Seth Brown:
Yeah.
Helena McCabe:
We'll have our bots talk to your bots and then we’ll come up with something.
Seth Brown:
Our bots will talk to each other and then make a decision and we'll let you know if you got the job.
Matt Kleve:
Is a bot reading the response as well?
Seth Brown:
Not necessarily, hopefully that's a human, but as an evaluator, you could certainly use it to summarize, you could use it to compare, you could…
Helena McCabe:
To thin the pack? I mean, that's a thing, like the Indeed kind of job stuff. They throw away half the resumes that come in before any human looks at them because it just keyword scans and says, well, these are the 20 that come the closest to what you're looking for, just review these.
Matt Kleve:
It also sounds like, Seth, you're not replying with a straight copy paste of what the bot spits out.
Helena McCabe:
No.
Matt Kleve:
You've mentioned that it's not always super and you might have to make some changes to everything that it thinks is right. Right?
Seth Brown:
Oh, absolutely. Like nothing, nothing comes through, either in code or in writing suggestions, in my mind, in a finalized or ready to use state. And so it's much more about the interaction, I think, with generative AI and letting it help you brainstorm, question, explore, that sort of thing. And then, the AI tools that we've traditionally had for writing, things like Grammarly or the Hemingway App, have also taken a leap forward recently. The Hemingway App has added a new AI sentence rewriting feature, and that's at such a granular level that, as an editor, it can kind of help you with something you've written, when you have really long-winded passages that are too complicated or verbose. Sometimes it'll give you some good suggestions for how to simplify. But again, these are all augmented intelligence as opposed to artificial or replacing human intelligence.
Helena McCabe:
It's like rubber duck plus.
Seth Brown:
Yes.
Helena McCabe:
If you already have an idea, it can help you shape it, it can help you format it, you can bounce it off of it, and it's not just the wording, also in the art space. Like I'm an oil painter in my spare time, and being able to bounce ideas off of AI and see them visually before I start touching canvas or Photoshop. It's a great springboard, but it can't replace that human element. At least not yet.
Matt Kleve:
One thing I see that is missing the human element is definitely when it starts to write prose. I'm not sure it has a solid voice that matches what you want to write, and that's where I want to bring Matt Robison into the conversation. Matt, I know you write a lot of stuff for our website. How does Chat GPT, or other tools you might be using, end up helping you out, or not being good, or being good? How does it work for you?
Matt Robison:
Yeah, I haven't found anything that you can just use one to one and not just feel terrible about. Now it has been helpful a little bit in brainstorming, as some other people have said. So I can give it “I am the CMO of a major university. What are 10 things I should ask about when I'm evaluating CMSs for our website”, stuff like that. And even then I'm not using things verbatim, but it does help get the juices flowing a little bit. But a lot of it doesn't quite have that touch that you know someone has crafted something at least in the writing portion. I've seen some of the image generation that actually has fooled me sometimes. But even then a lot of artists are taking that and tweaking it and tweaking what they get and using it as just like a rough draft. So…
Matt Kleve:
Which is interesting for you, Matt, if we can take the other part of your life into perspective. You've written a children's book and worked with an illustrator and are in the process of a second. Is that something that you might be able to use on the art side of things?
Matt Robison:
It's something that I would like to be able to use. However, it's not going to be there for any type of narrative, continual storytelling, I don't think, at least not any time soon. Because it cannot, right now, keep any sort of continuity between characters, at least not for I don't know - you can - it will always be an approximation. And not - and you’ll be able to tell that it was obviously randomly generated and not quite the same.
Helena McCabe:
You can get a little closer that way. When Chat GPT generates something, if you prompt it, you can say, “here's my prompt and also give me your seed.” And then it will give you the prompt and it will tell you what random seed it used to generate that prompt and then in the future you can say, “hey, here's my prompt and please use seed, blah, blah, blah.” And that will give you more similar results than you would have gotten otherwise.
Matt Robison:
Yeah, that's interesting. And you can also send your own reference image to look at, but yeah.
Helena McCabe:
But you have to make sure you ask it for the seed because it can't tell you later. If you don't ask for the seed, it just wipes it once it answers you and that seed is gone forever.
Matt Robison:
Yes.
Matt Kleve:
Interesting.
Seth Brown:
I wonder if having a GPT dedicated, like an agent as your co-author, would give you similar results of consistency like I have not played with creating GPTs that much yet, they came out like what a week ago. But that seems like one, another way you might approach getting some more consistency.
Helena McCabe:
Yeah, maybe if it's been fed the same input you may get closer and closer. I've also found that sometimes, though, it gets further and further away from what you want. Like I'll start with a prompt and I'll be like, "oh, this is very close. Can you just change this one little piece?" And the drift, it gets like weirder and weirder, and I'm like, no, put it back, back toward the thing I had.
Seth Brown:
Yes.
Helena McCabe:
And sometimes it's easier to just start a whole new chat than try to coax that one. Like I found if Chat GPT or if DALL-E puts glasses on a character, I can never get them off again. There's nothing I can say to DALL-E to get rid of those glasses once they're on. [laughter 00:12:56]
Matt Kleve:
That has to be kind of frustrating.
Helena McCabe:
I've even tried words like unbespectacled [inaudible 00:13:02] maybe I'd say like, "no" and the word "glasses", and it's still putting glasses on, and once they're on, they're on.
Matt Robison:
They have perfect eyesight and they’re not hipsters. [inaudible 00:13:12]
Matt Kleve:
Also joining us, he just came into the meeting, I foretold his presence. I've introduced him on the podcast before. It's my favorite Canadian, Lullabot’s Director of Technology, Andrew Berry. Hi Andrew!
Andrew Berry:
Hello everyone, I'm glad to be here. Thanks for working around my unplanned house project schedule.
Matt Kleve:
Sometimes it happens. So what has been your experience? How are you using any kind of generative AI? Have you written any code? Have you done anything fun?
Andrew Berry:
Yeah, so I've been using it basically as a comparison to see like, okay, what do I get compared to what I would write? And I used it most recently - I was curious to see what it would do for a DrupalCon session submission, right? Just like if I asked - if I gave it something and then said, in the form, there’s like, “give 3 learning objectives”. I know what they are because they're already in that description, right? Like they should - you're saying the same thing in a different way and to see how I could come out with that. And what was interesting, I would say is, you know sometimes when you're writing and you don't know where to start and you just got like a blank form or a blank document or whatever, it feels like it's pretty clear that there’s value in having it start something because then you can get annoyed and angry at where it's wrong and then you start fixing it and that can be a lot of motivation. I had the same thing when I was trying out some of Google’s tools with an email and I asked it to write a “thanks but no thanks” email and then it ended the email saying, “I look forward to following up and talking further” and I was like oh my goodness no and like…
Seth Brown:
[inaudible 00:14:57].
Andrew Berry:
Exactly.
Matt Kleve:
“thanks but no thanks”, what you're saying is, just to clarify, you're replying to somebody who wanted a job and you said “no you can't have my job” or…
Andrew Berry:
This was actually a sales person.
Matt Kleve:
A sales person - but I don't want to deal with you today…
Andrew Berry:
Yeah.
Matt Kleve:
…I look forward to following up. Right.
Andrew Berry:
Yeah.
Helena McCabe:
That's funny that you'd say that because someone on Reddit was saying that when they ask a question to Stack Overflow, sometimes what they would do is they would ask their question and then if they didn't get an answer, they would sign into Stack Overflow with another account and answer it incorrectly because people like correcting wrong people more than they like helping people and if they put a wrong answer they'd immediately get a bunch of good answers in reply. [laughter 00:15:40]
Matt Kleve:
There's a name for that, right?
Helena McCabe:
Is there? I don’t know.
Matt Kleve:
I just…
Andrew Berry:
XKCD386.
Helena McCabe:
There’s always an XKCD for anything.
Matt Kleve:
Cunningham's law. Does that sound right? Google just made it right. Essentially the way to get the right answer on the internet is to publish the wrong answer and somebody always has to “well actually” you.
Seth Brown:
[laughter 00:16:09] Yeah.
Helena McCabe:
Wow.
Matt Kleve:
Karen, have you had the chance to use these tools or found anything interesting?
Karen Stevenson:
Yeah, I've done a couple of different things. One is we have long board meetings and we have to take notes and Seth found a tool that will basically attend our board meetings and take notes and then summarize things for you and then I can sort of re-format that into the minutes which is really interesting but…
Seth Brown:
And utterly secure, Andrew, in case you’re listening.
Karen Stevenson:
Yeah, shhh. [laughter 00:16:41]
Matt Kleve:
Andrew being a member of the security team at Lullabot.
Seth Brown:
Absolutely would never leak information.
Karen Stevenson:
Yeah, we still have that discussion about security. But the other thing that's funny is it either can give you every word, like every "um", every "ha", every "pause", every single word, or it condenses a two hour meeting down into one sentence, which is way too little. That's not even close to enough detail. So a lot of this is around - and actually kind of related to that, Seth was showing off something that he [inaudible 00:17:18] that summarized a PDF, and I read the PDF and I said, "I think that PDF is as long as the document it was summarizing". Like it really was, it just went on and on and on and on. Finding that happy middle ground of a useful summary - I think that's an interesting area: what's the least you can say that still provides a useful amount of detail? And is accurate. And I don't know if that's a solved problem yet or not.
Matt Kleve:
So…
Andrew Berry:
I’m curious Karen, how you found the accuracy of the summaries because I had a summary created on some notes I'd written where I met with someone and it re-wrote the summary saying Seth had met with someone which was completely false like he had no involvement whatsoever.
Karen Stevenson:
It did occasionally have the wrong - attribute things to the wrong person. I did run into that. It got the wrong speaker or something.
Seth Brown:
Messed up on the audio. I've had better luck for some reason than you all, I feel like. For one thing, this newest release of GPT-4, the Turbo version, seems to be a little bit better actually at summarizing, and summarizing in a particular word length, like, "give me a 500 word summary in ninth grade English" or something like that, and it does a fair job of it. I've also used another tool that I've liked, although it does do lengthy summaries, but there's a tool called Genei.io. But it's spelled funny. It's like "G-E-N-E-I dot I-O". And it's a way to summarize - it'll summarize web pages, it'll summarize PDFs, break it out for you, and then you can organize your summaries into buckets. So if I'm doing research on ESOPs over here and I'm doing research on competitive strategy over there, I can have different buckets that I keep those summaries in and pull notes from. And that's been kind of a handy tool, but it may actually just be GPT 3.5 under the hood. I'm not sure exactly what it's using as its AI. Sorry if I maligned our good friends over at Genei, I did not mean to if that's not the case. [laughter 00:19:32]
Helena McCabe:
Goblin Tools is a similar tool that I really enjoy. I have terrible ADHD and Goblin Tools is so useful for me. You can tell it a task like, “I need to clean my kitchen, can you make me a list?” And it'll break cleaning your kitchen into a million steps and you can even say like, “how many peppers of neuro spicy you are” and it will break it out even further like how much instruction do you need? Do you need 5 big tasks or 30 small ones? And it'll give you a list. Wow, it's so useful. I love it.
Seth Brown:
Yeah, there's a lot of these so-called “thin wrapper” applications that you wonder are they just going to be made obsolete by some near-term release of the big big generative AIs or are these businesses gonna continue to thrive and we're just gonna have this huge ecosystem of apps that we subscribe to that kind of perform certain functions and with AI as the backup.
Helena McCabe:
I hope this doesn't become like the Netflix, Hulu, Discovery Plus debacle.
Seth Brown:
Yeah.
Helena McCabe:
Yeah or oh, I need 85 subscriptions because I already have Midjourney and Chat GPT and it’s like I don’t want to pay for another thing. But they're all so exciting and I wanna use them all. So I keep opening my wallet over and over. Yeah.
Matt Kleve:
Seth, what have you done with agents? The GPT agents that are fairly new. You mentioned them and I kind of stumbled into one and I didn't know what I was looking at.
Seth Brown:
Yeah, yeah, I mean you can do all sorts of things like you could give it, a very simple example, you could give it a PDF of the bylaws of your company for instance and ask it to act as an interpreter for questions, legal questions or something like that that you might just want to have a tool to rubber duck with. You could, I'm not saying we've done this, Helena, but you could theoretically put a bunch of your recent RFP responses in as PDFs have it parse those and then ask it new RFP questions that are coming in in a new format and have it pull from your past responses and sort of summarize together things. And then also you could have it role play. So you could put it in the role of a, let's say you upload an RFI from let's say, Guam. I'm just gonna pick something completely random, right? And then you could say you're an IT stakeholder from the US territory of Guam, you've just written an RFI, here's the RFI, give this a persona, give the persona some characteristics and then have it sort of evaluate your answers like say here's what I'm going to submit. What do you think of this? What's missing? What are your top 3 questions about what I've just written? You know, those sorts of things. So…
Helena McCabe:
That's cool!
Matt Kleve:
This is a plug-in or an extension or are you doing it through prompting? So you have to have GPT Plus. And then there's a little - in the current UI, there's an explore button in the sidebar on the left, and when you click "Explore" it shows you some GPTs that have been made by OpenAI. As an example, they have something called Math Mentor, and Math Mentor helps parents help their kids with math. Need a 9pm refresh on geometry proofs? I'm here for you. So that's the description of an OpenAI GPT. But then you can go click at the very top, "Create a GPT", "My GPTs", and then there's basically a Create and a Configure screen - in Create, you kind of describe what you're trying to make. And then in Configure you can name your GPT, give it a description, upload files, point it at websites… I'm eager to play with that a little bit. I had the idea - I think you can even handle like authentication and stuff. So let's say we could point it at Lullabot's intranet and make a GPT agent that's good at answering Lullabot's benefits questions and take some of that off the admin team. So let's have people start with the agent and ask their question about PTO days or a 401(k) or whatever and see how the agent does before you hit someone in Slack from the admin team. Like that could be a really interesting use case.
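The hosted GPT builder described above is point-and-click, but the same retrieval-flavored idea can be sketched against the plain chat completions API. The snippet below is only a rough illustration, not anything Lullabot has actually built: the policy excerpt, the question, and the model name are placeholders, and it assumes the guzzlehttp/guzzle HTTP client is installed via Composer.

```php
<?php

// Rough sketch of a "benefits bot": stuff the relevant policy text into the
// system prompt and ask the model to answer only from it. In a real agent the
// policy text would come from uploaded files or an intranet search, not a
// hard-coded string.

require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Placeholder policy excerpt; illustrative only.
$policyExcerpt = "PTO: employees accrue ... 401(k): matching begins ...";

$client = new Client(['base_uri' => 'https://api.openai.com/']);

$response = $client->post('v1/chat/completions', [
  'headers' => ['Authorization' => 'Bearer ' . getenv('OPENAI_API_KEY')],
  'json' => [
    'model' => 'gpt-4',
    'messages' => [
      ['role' => 'system', 'content' => "Answer benefits questions using only this policy text:\n" . $policyExcerpt],
      ['role' => 'user', 'content' => 'How many PTO days do I get in my first year?'],
    ],
  ],
]);

$body = json_decode((string) $response->getBody(), TRUE);
echo $body['choices'][0]['message']['content'] . PHP_EOL;
```

As the security discussion later in the episode makes clear, the sensitive part of an experiment like this is the policy text itself, so it belongs in a paid, access-controlled service rather than a free consumer tool.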
Helena McCabe:
I hate it. I am the worst version of myself when I have to talk to a robot before I can talk to a person. I'm that monster who will like call the health insurance phone robot. And I’m like “person! person!”
Seth Brown:
And it's like, “0-0-0”. Yeah.
Matt Kleve:
Agent!
Helena McCabe:
I just want a person! Agent! [inaudible 00:24:35]
Karen Stevenson:
That brings up the other thing, which is, what's safe to put into these things? That's what I keep wrestling with. I have something I'd like to do, but I don't know what I can put where. Where am I gonna lose control of my information? Where might my information leak out to somebody else? How do you know?
Matt Kleve:
We'll put a pin in that. Right after this, we're gonna talk a little bit about security and these generative AI tools that we’ve found to be useful, but can we trust them? Coming up, right after this.
Matt Kleve:
Welcome back to the Lullabot podcast. I stopped the conversation short because I knew it was going to go long. We're talking Chat GPT, Generative AI, all the different tools that are becoming more and more useful in our everyday jobs. But the question was security. Karen, I think you brought it up. Like if we're throwing a bunch of data at these robots, where does that data go and what can we trust? When you use GPT 4, at least when I did the other day, it warned me it was like, “hey don't give me anything sensitive” so it's apparently my fault now if it leaks something but…
Seth Brown:
Matt, when has big tech ever used our data in ways that we didn't expect it to? [laughter 00:26:03]
Matt Kleve:
I don't have an answer. [laughter 00:26:09]
Helena McCabe:
I think you do! [laughter 00:26:12]
Matt Kleve:
Andrew, as the representative here from the Lullabot security team, tell me, what are your thoughts?
Andrew Berry:
Yeah, I mean, like all good engineering problems or fun business problems, the answer is, it depends. And, as an example, Chat GPT has their free version or their personal version, and I actually haven't looked at what their latest policies are around the "pay us 20 bucks a month for Chat GPT 4". But in general, especially if it's free or consumer oriented, you have to assume that they are keeping copies of every single prompt you put in, every single generated output, and that it's entirely possible that they have human reviewers looking at those, because they use that to judge the quality of their models.
Helena McCabe:
[inaudible 00:27:10] the product.
Andrew Berry:
Right. And Google is doing this right now. My understanding is Google Bard, which is their, sort of, again, Chat GPT competitor - when I looked at it 2 weeks ago, it was saying that they keep all data in it for 3 years and you cannot even ask to have it deleted. So if you want those features, then you have to pay Google for their Google Workspaces version, which is quite a bit more expensive. It's 20 or 30 bucks a month.
Matt Kleve:
Per user, right?
Andrew Berry:
Per user, yeah.
Matt Kleve:
That was the pricing I saw. It's 20 or - I think it was $30 a month per user. So I thought that was fairly hefty for a sizable organization that…
Andrew Berry:
Yeah, absolutely. And I mean, I think part of this is that it's early days. They don't know what the actual infrastructure costs are going to be, but the other part of it is, look at how long it takes Chat GPT or Midjourney or any of these tools to actually return you a result. It's probably the slowest thing you do on the internet these days, right? The idea that you are watching the words show up word by word is not because they think it's a good user interface, but because their servers in a data center are chugging really hard to give you every one of those. So there's a lot of real infrastructure cost around this that is new. But to go back to the security question, for our listeners thinking of this from a business perspective, assume that you should probably be paying for generative AI tools if you want to have any control over your data. And from a personal perspective, be careful about what you put in, because if you have it writing an email or a letter or something like that, that may not be something that you want it to have access to for eternity.
Matt Kleve:
Yeah, so as we're talking from the perspective of Lullabot, any sort of client data would be pretty sensitive and we wouldn't want out. Even Lullabot data, maybe. So we just kind of have to figure that out along the way. I mean, along with the rest of the internet, I suppose.
Seth Brown:
It can be helpful to ask it things like, here's my social security number. [laughter 00:29:30]
Seth Brown:
What are the ways that I could better protect these 9 digits? That kind of thing. No, I mean…
Helena McCabe:
Are these passwords secure? [laughter 00:29:40]
Matt Kleve:
Not anymore!
Seth Brown:
Yeah, here's my list of passwords for these websites. How could I improve them? I sort of have a feeling that there's this kind of magic that happens in the AI brain that somehow the attribution is lost but maybe that's my own misunderstanding of how it all works. That it's not easy for them, for instance, to go pinpoint information that's been shared or leaked but then there was that issue that Samsung had I think where an engineer used it to write some source code and that got leaked somehow, so. Who knows?
Helena McCabe:
I think it's kinda like this algorithm database is like a big soup that we're all making together, and generally if you pour an ingredient in you're not gonna be able to specifically identify like "oh, there's a tomato puree that was poured into that soup", but you might find a clam shell if you throw a whole one in. It's a bouillabaisse of information.
Matt Kleve:
So one thing I initially said when I started playing with this, like I don't know, almost a year ago was that the one thing that's missing is any kind of unique thought. But I suppose the unique thought is coming in every time you're asking it a question. If it's training it on you, you are now the system.
Helena McCabe:
Ah!
Karen Stevenson:
One of the things that's interesting, and this comes up over and over again when we talk to people about the results they get, is the fact that you never get the same result twice. Like even if you ask the same thing, you will not get the same answer. You will keep getting different answers. And so that could be really interesting if you're doing creative work, but if you're trying to do fact checking, that makes me kind of nervous. So…
Matt Kleve:
The best explanation I've heard of somebody who's using it now, they say treat it as if an intern gave you this data. The intern might have done a great job. The intern might not have any idea what they're doing. Yeah.
Helena McCabe:
Yeah, it's not rock solid. Especially with DALL-E. I found it's very similar to that scene in The Good Place, if you've seen it where he's like, “do you have Eleanor Shellstrop’s file or do you have a cactus?” It's like “I have Eleanor Shellstrop’s file”. “Are you sure you don't have a cactus?” And she hands him the cactus. It feels a little like that sometimes.
Seth Brown:
Yeah, I’ve definitely had some head slapping dumb moments where you just want to kind of throw it away but then the next day it'll completely blow your mind and impress you and you’re like, “oh my god, here we are, we’ve arrived!” And it's funny, that inconsistency. I'm imagining we're gonna see less and less of that as it improves.
Helena McCabe:
I love that the more people collaborate on working on these things and share our techniques together online, there's always almost every [inaudible 00:32:49] moment where you're like, "oh, I didn't know it could do that", but someone else found that it can on Reddit, and as we're iterating I think we all become more powerful at using it. And then that makes it more powerful as we teach it how to do new things.
Matt Kleve:
Andrew, have you done any coding with any kind of tools like this?
Andrew Berry:
Little bits actually. I haven't done - I found that I've been waiting for a problem where it's like I have code or I know the code in one language and I need to transform it to a language I don't know very well. Just because I've read that that is actually a relatively good use case because it means you understand the algorithm, you understand what you're trying to do but you don't understand the syntax. And when I think about one of the problems of things that could be facts coming out of GPT is that there isn't a layer on top that allows you to go back and double check and be like, okay, is this sentence saying that, “this event that occurred in 2017 actually happened”. Now with a programming task there is because you have the compiler and you have the programming language, your IDE can tell you if this code is valid or not. So I've been waiting for a chance to do that sort of thing with it. But actually I think, Matt, you've done more from a coding standpoint than I have.
Matt Kleve:
I have but being on the podcast I feel like I always have to ask other people questions instead of me just kind of rave and tell everybody how it is but yeah I have…
Andrew Berry:
Hey Matt, have you done any good programming?
Matt Kleve:
[laughter 00:34:32] Hey thanks, yeah I have. I've given a talk at a couple of different Drupal events about my experiences so I've been doing Drupal for a long time but when it came to migrations, I was always the developer that sat really quietly when the project manager said, okay, who wants to do these migrations? And I never picked that up and had done that until very recently. So I knew it was something new I had to learn and these tools were coming out and I thought, well, let's see what I can do here. And I found Chat GPT to actually be fairly knowledgeable when it comes to Drupal. There were definitely some limitations, but as far as writing migration plugins, it was pretty great. I gave it a problem and it didn't give me the best solution. But it gave me a solution. And when you looked for the code that it returned, I did not find it on the internet. So it actually generated the logic to write the migration plug-in that I needed. But it could have been done with a migration plugin that was already in code just using it a little bit differently. So it wasn't smart, but it was okay.
Helena McCabe:
Huh!
Matt Kleve:
I found a couple of things where there were some decent limitations. A lot of times I would use it to just kind of stub out any kind of scaffolding that I might want for a plugin or something like that. And I found myself asking Chat GPT to give me a process plugin where I can inject something from the service container. And I knew I needed to implement an interface of some kind. And I Googled quick for it, I didn't find the easy answer. So I thought, well, maybe AI can just stub it out for me and get the solution. And I found Chat GPT 3.5 did not have the solution and 4 did. I most recently used the Drupal Coder agent, because I kind of had this baseline of knowledge, like what I expected, and it failed. And it was telling me that, "oh yeah, you can just do it using the Drupal object." It's like, "yeah, I could", but Andrew would yell at me in my pull request if I did that. So for better or for worse it was all right. I found the solutions became worse when you start asking the wrong question. When you're thinking something works a certain way and you're saying, how do I make this work this way? It starts making up stuff that makes no sense in the real world, because the real world knows that it doesn't work that way.
Seth Brown:
Yeah.
Matt Kleve:
So as long as you're asking good questions, it comes up with good answers.
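For readers curious what "inject something from the service container" looks like in a migration, here is a minimal sketch of the pattern being described: a custom migrate process plugin that implements ContainerFactoryPluginInterface so a service is injected rather than pulled from the global \Drupal object. The module name, plugin ID, injected service, and transform logic are all placeholders, not the plugin from Matt's talk.

```php
<?php

namespace Drupal\example_migrate\Plugin\migrate\process;

use Drupal\Component\Datetime\TimeInterface;
use Drupal\Core\Plugin\ContainerFactoryPluginInterface;
use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * Example process plugin with an injected service instead of \Drupal calls.
 *
 * @MigrateProcessPlugin(
 *   id = "example_timestamp_note"
 * )
 */
class ExampleTimestampNote extends ProcessPluginBase implements ContainerFactoryPluginInterface {

  /**
   * The time service.
   *
   * @var \Drupal\Component\Datetime\TimeInterface
   */
  protected $time;

  public function __construct(array $configuration, $plugin_id, $plugin_definition, TimeInterface $time) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->time = $time;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $container->get('datetime.time')
    );
  }

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Append the request time to the incoming value as a simple example.
    return $value . ' (migrated at ' . $this->time->getRequestTime() . ')';
  }

}
```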
Seth Brown:
But I also feel like really long Q&A on coding problems, it's almost like interacting with a parent who's really trying to get a kid to go to bed and stop asking questions. Like the answers just get a little less accurate and worse and they're more like just go to bed kid but the other day I kind of experienced that same drift with code that Helena was talking about with DALL-E 3 with the glasses where I was just asking it to write some HTML and CSS for a simple, whatchamacallit, oh, this is a great time for my mind to go blank, but for a - what's the thing where you’re scrolling and the picture scrolls with you? But not quite the same…
Helena McCabe:
A parallax?
Seth Brown:
A parallax. I was trying to write parallax and I was having some trouble with the visibility of the layers and it threw in z-index and that was definitely not the solution to the problem and I was like, “get rid of the z-index” and I kept asking it but it just kept putting it in. Once the z-index was in there, it was like, “nope this is in there now and can you please go away and stop asking questions”. I felt like it kind of loses the thread sometimes when you go along with it trying to get your code to a better place but maybe, Matt, that's just me asking the wrong questions and you were asking the right questions.
Matt Kleve:
Yeah, so…
Helena McCabe:
Did you say un-de-z-indexed?
Seth Brown:
Yes, De-Z! De-Z this thing man!
Matt Kleve:
Un-de-z-indexed. I haven't actually noticed drift like that. I realize when I come to the limitation of what the robot's telling me is right, when I keep saying, “no, this isn't right, and here's why I think it's not right.” And it doesn't give me the right answer but what I do, and I don’t know if this is right or wrong, is I have one giant conversation that I've continued to have with the bot since almost the beginning of my trials. And I feel like that context still exists. So when I'm asking these migration questions, it already knows that I’m migrating from a CSV document, not from a Drupal website. So…
Helena McCabe:
Yes, each individual chat window feeds on itself and is like a continuing conversation. So if you want to crumple up the proverbial piece of paper and throw it away, you need to open a new chat window.
Seth Brown:
Wasn't there some talk that context was getting lost in longer and longer threads…
Matt Kleve:
I wouldn’t doubt it.
Seth Brown:
…and maybe that's less with GPT 4 and GPT 4 Turbo but I feel like I'd experienced that in the past where it starts to lose focus.
Andrew Berry:
My understanding is that's actually one of the vectors for trying to escape the rules that they define to limit what can be returned. And so there's only so much - I think it's the context window, and I am not an expert in this enough to know that that is the right term, but you imagine you have room for 100,000 characters, and when you have more than that, you get to a point where it's like, "well, you just gotta dump the ones that were put in at the beginning, right?" Or have some algorithm to try to figure out which ones are most relevant to keep. What's that - man, there was just a story [inaudible 00:40:33]
Seth Brown:
It's based on tokens, right?
Andrew Berry:
Right.
Matt Kleve:
Yeah.
Seth Brown:
Like the number of, which doesn't necessarily. Yeah.
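As a toy illustration of the behavior being described, here is a rough sketch of the kind of trimming a chat tool has to do once a conversation outgrows the model's context window. The characters-per-token estimate and the budget are made-up numbers, not how any particular provider counts tokens.

```php
<?php

// Very rough token estimate: real tokenizers are smarter than this.
function estimateTokens(string $text): int {
  return (int) ceil(strlen($text) / 4);
}

// Drop the oldest turns until the conversation fits the token budget.
function trimToBudget(array $messages, int $tokenBudget): array {
  $total = array_sum(array_map(fn ($m) => estimateTokens($m['content']), $messages));
  while ($total > $tokenBudget && count($messages) > 1) {
    $dropped = array_shift($messages);
    $total -= estimateTokens($dropped['content']);
  }
  return $messages;
}

$conversation = [
  ['role' => 'user', 'content' => 'We are migrating from a CSV file, not another Drupal site...'],
  // ...many more turns accumulate here over a long thread...
  ['role' => 'user', 'content' => 'Now write the process plugin we discussed.'],
];

// Anything trimmed here is simply gone as far as the model is concerned,
// which is one way a long thread can appear to "lose the thread".
$window = trimToBudget($conversation, 8000);
```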
Helena McCabe:
Huh. I wonder if it’s to keep it from getting too smart. [laughter 00:40:43]
Matt Kleve:
Yeah, I'm willing to bet that it's a limitation on physical resources more than anything.
Helena McCabe:
Could be.
Matt Kleve:
I keep reading articles or at least about a month or 2 ago, I kept seeing articles about how Open AI was going broke because of the simple requirement for server space and cycles [inaudible 00:41:05]
Seth Brown:
Yeah, every question you asked cost us $30 and you're paying us like 20 bucks a month.
Matt Kleve:
What I remember was a couple of cents. Yeah.
Helena McCabe:
Well I would be delighted if they would make a higher tier. I would pay it if they would make a higher tier and just give me unlimited usage?
Unknown:
Shhhhh!
Seth Brown:
Oh no. Oh Helena! Let’s cut that out of the podcast. [laughter 00:41:23]
Seth Brown:
Now they know!
Helena McCabe:
[inaudible 00:41:26] like DALL-E only lets you generate so many images a day and it's dynamically calculated based on how many people are using it. So instead of me knowing like, “oh, I only have 20 left”, it'll just randomly, in the middle of using it, be like, “oh, you're done using it today. Try again in 17 hours.” It's like, “Ahh, I was using that!” So…
Seth Brown:
You can no longer dress your pug like Yoda. Your day is up.
Helena McCabe:
I would pay a premium premium subscription and just be able to keep using it.
Matt Robison:
I'm sorry, that's probably my sons doing Mario versus Darth Vader or trying to come up with movie posters to prank their friends.
Helena McCabe:
That’s exactly the kind of nonsense I’m doing so I can’t begrudge them that.
Matt Kleve:
That's an amazing idea actually. So like movies that don't exist?
Matt Robison:
Yep. Yep. Hey look what’s coming soon! And they’re like, “Oh man! That’s awesome!”
Andrew Berry:
[inaudible 00:42:18] done with was it Tron? They redid a movie as if it was done in the Tron style, if I’m remembering right? And this wasn't like on a service where you could just go and be like, generate me this movie. But, yeah, that came out maybe 8 months ago.
Helena McCabe:
Awesome.
Matt Kleve:
So Helena, so you're limited by the number of total usages, not by your usages?
Helena McCabe:
Sort of. So there's not - the official word of Chat GPT is that you get 50 image generations a day. That is not true. I have generated hundreds of images in a day. But if a lot of people are using it, they dynamically adjust like what the limits are and it's all kind of cloaked of why and how. So sometimes [inaudible 00:43:07], and they decide you're done for the day. But there’s no real…
Matt Kleve:
So does your count - when they quote fail and do something that you don't want, does that count?
Helena McCabe:
Yes it does count. Which is infuriating.
Matt Kleve:
Yeah!
Helena McCabe:
Yes, yes it all counts. I tried to argue with Chat GPT about that. I was like, “hey, that's not fair because you didn't do what I was asking that's why I had to use so many prompts it's not fair for you to kick me off” and Chat GPT was like, I'm a robot and I have no customer service abilities. So I'm sorry you should write to customer service if you don't like this. All I can do is tell you the policy.
Matt Kleve:
Yeah. Matt, one thing I wanted to ask you. I see on social media a long-time Drupal person getting more and more frustrated trying to read the Drupal Planet because it's full of AI generated content. Do you think it will ever get to the point where this AI generated content doesn't seem like AI generated content?
Matt Robison:
Well, the scary thing is probably some of the AI generated content is better than some of the human content. I mean, there's always been spam to try to game the search engine results, and a lot of it's really bad. But AI is probably better, which is worse.
Matt Kleve:
Better at content or better at spamming the search engine?
Matt Robison:
Better at spam - would be much better at spamming the search engine which…
Matt Kleve:
It can talk to robots so yeah…
Matt Robison:
It reaches that threshold where it's good enough pretty easily. My guess is that a lot of the - on Amazon and a lot of the Kindle Unlimited stuff - it can probably replicate a lot of the trash that's on there pretty well.
Matt Kleve:
You're talking about somebody who wrote a trashy romance novel that's a couple hundred pages and then releases it on…
Matt Robison:
Yes. I’m guessing… yes, story types that have a lot of tropes and are predictable. I would guess that it can crank those out very well.
Seth Brown:
I bet fan fiction sites are getting a lot of submissions right now.
Matt Robison:
Yeah.
Helena McCabe:
Oh wow, yeah.
Matt Robison:
But in terms of actually good writing, not yet, but maybe someday.
Matt Kleve:
When you read something, how do you determine? Like what is a trigger that you read something and say, a robot wrote this.
Matt Robison:
Hmm. There's a lot of overuse of passive voice.
Matt Kleve:
Okay.
Matt Robison:
Maybe a lot of equivocations, and I don't know, it's hard to put your finger on a lot of it. A lot of it's maybe just, well, I could tell maybe a freshman in college wrote this. Perhaps?
Matt Kleve:
I find vocabulary doesn't always fit. Like it might be technically the right word, but it just doesn't feel right.
Seth Brown:
That's not necessarily how a human would put it, or it's generic, it's nonspecific. Like it's one of those things, like pornography, that you can't define but you know it when you see it. You read it and it just has a certain flavor to it, which is like, yeah, that's not a human-written cover letter that applicant just submitted, and you can just sort of tell. But I also wonder if those days are waning now, like it's getting better fast, and when you think about how much it's improved, 3.5 to 4 to 4 Turbo in the GPT world. If it's going to get better at this pace, just like in chess, there's going to become a point at which it surpasses human abilities. Right now we're in this dance where sometimes it's better, sometimes it's worse, depends on the task. It's helpful in some things, it's not helpful in others. But it does sort of seem like it is on a trajectory to make us all obsolete, or our skills obsolete, and I wonder about that, Matt. What happens - are we still gonna have jobs? Can I ask you that, our podcast host, when I'm CEO?
Matt Kleve:
Today?
Helena McCabe:
That’s up to you Seth!
Matt Kleve:
Yeah. I mean, I wrote that down as a bullet point. I didn't ask Chat GPT to break down this podcast. And I want to mention, before I answer the question, one thing I noticed is that every time I ask Chat GPT to make a title for something, it really likes the two part title. Like the, "this is a title, colon, this is the subtitle."
Helena McCabe:
I have found that you can kind of get around the "smells like Chat GPT wrote it" by asking it to word it as a specific person. So if you say, "write me this as if Seth Meyers wrote it." Or like, "in Patton Oswalt's voice." It will give you their exact sentence structure. It will really sound like that person, less like a GPT.
Matt Kleve:
Kind of in jest, I asked it to write a bio for me that I show at the beginning of my talk that I’ve been giving and it was actually fairly scarily ok.
Seth Brown:
Really?
Matt Kleve:
It told me that I was knowledgeable about WordPress and Magento, but I'm not. And it also said I was good at front end, and I'm not. So. I mean, it was kind of like, here's a bunch of words, because it's indexed the Lullabot site at some point, and here's a bunch of words that'll fit with that, and anyway. Yeah, so voicing it like somebody else! That's a good tip, I'll have to try that.
Helena McCabe:
Yeah, that happened to David Burns too. He asked it to describe him and he was like my parents would be so proud of this fictional David Burns and all of his accomplishments.
Matt Kleve:
Yeah. Absolutely.
Andrew Berry:
Has anyone else tried specifically asking it to speak like you? So like I tried saying, write this in the style of Andrew Berry, who works at Lullabot and is in the Drupal community, and I gave it the links to an article I wrote in 2019. So presumably it was in their index, and it rewrote it, and it in no way felt like it was me, and I wasn't sure if that was because there wasn't enough content for it to be useful or if it was just bad. But it did do some really - like, I mean, maybe it was telling me something about myself, because the version it wrote was way more hyperbolic and the headings were like completely out there. In the article, one of the headings was - the step was like, check out this state repository. And the heading was, Clone and Conquer. And I thought that was hilarious! And I might use that as a one off, but not an entire article of headings written that way.
Matt Kleve:
I kind of like Clone and Conquer.
Helena McCabe:
So it was like a caricature of you. More than like you.
Andrew Berry:
Right. Yeah, yeah. So, okay so no one else has tried that?
Helena McCabe:
No but I’m gonna, as soon as this podcast is over.
Matt Kleve:
No!
Helena McCabe:
I’m very excited to try that!
Matt Kleve:
So, will we still have jobs? I guess we all have slightly different jobs. As a developer, I'm not nervous yet. I think at this point it's a tool. And I think if people aren't using it, I think they're not as efficient as they could be.
Helena McCabe:
Yes.
Matt Robison:
I think we'll move into a lot more of synthesis and curation, or at least some aspects of that. But even when you look at some of the chess AIs - one to one, a chess AI will beat a human. But then they've done studies where, if they put a team of humans that are using the chess AI to help themselves, then they can beat a chess AI. Because chess is so predictable and it has a set rule set, it's really good at tactics, but it's not as good at strategy as humans were. And so there's an interesting synthesis there that we might reach. And some kind of equilibrium. What that looks like I have no idea.
Helena McCabe:
I mean, I think we'll just all be - I think we will have jobs and if you leverage technology like this the right way, you'll be better at your job. And you can do a lot less of the more boring [inaudible 00:51:49] parts of your job and focus on the really interesting bits, which I love. Like I look at AI as having a very clever assistant. But it's not a replacement for a human.
Matt Kleve:
And that's from your perspective, dealing with clients and sourcing sales, right?
Helena McCabe:
Yes, definitely in the sales field. It's very useful for writing. Things that would be redundant or kind of boring to write. But also in other side-hustles like my father-in-law and I are playing with a little side business for white labeling products and being like, hey give me some market research on this product. Who would my competitors be and how much would these things cost? And it's just, “bloop!” here you go. Like, cool, that would have taken me hours to chase down on the internet and I have it in my hand in 30 seconds.
Karen Stevenson:
I think one of the tricky things about that question is we can say, would we have - who will have jobs based on the way AI is now. And AI is changing so fast.
Matt Kleve:
Yes.
Helena McCabe:
Yeah.
Karen Stevenson:
The more important question is what will be the case in, at the speed it's going, in a few years probably?
Matt Kleve:
We don't know, we don’t know.
Karen Stevenson:
It's going to be massively different. The other side of it though, and I keep thinking about this one, where we have the doom and gloom where it's like AI is gonna be doing everything - well, the reality is, if AI is doing everything, then who's gonna buy this stuff? Because we're all broke, we're out of work. How's anyone going to buy it? And the whole world's marketplace falls apart. We still need people to buy things.
Helena McCabe:
I mean like, my hope is that this will make things less expensive to produce. So right now, it can take us hundreds of thousands of dollars to build you a beautiful website that meets the specifications you wanted. And it's a race to the bottom. What if we could leverage technology like this and make you something beautiful? With way less effort and we don't need to charge you as much and we could build way more websites in a year. And people don't need to pay so much for that.
Matt Kleve:
So the designers are out of job, is what you're saying?
Helena McCabe:
No, that's not what I'm saying because it can’t replace a human!
Seth Brown:
Oh you heard it here folks!
Helena McCabe:
All it can do is accelerate and enhance what humans can do. And Karen, I don't know if AI could get to that same place because there's some things that just kind of take that little spark of human spirit and that's something that machinery doesn't replicate well.
Karen Stevenson:
I, but the thing is, what would you have said AI could do 3 years ago? And it probably wouldn't be what you're seeing today.
Helena McCabe:
Yeah, yeah, that's true. But I think - so it's the whole joke about like the engineer goes to the grocery store and his wife says, get bread and if they have it - or get milk and if they have eggs, get a dozen and he comes home with a dozen milks. I think for AI to really replace what we do well, clients would have to be able to very clearly explain exactly what they want. And that is something that clients tend not to be able to do easily. It takes a strategist, really kind of having relationship counseling with like 20 stakeholders to figure that out.
Andrew Berry:
And have the expertise to know when the answer is wrong.
Helena McCabe:
Yes! That too.
Andrew Berry:
Right? Like, you give a bunch of people who don't have deep training in web and web content and CMS and all these other things, and you say, use a generative AI to build a website. They're gonna get a website, but they're not gonna be happy with the result in the end. And they're not gonna have the expertise to know why that is. It's kind of like going back to when people would rage out against auto-complete and IntelliSense and other things in IDEs, saying, well, you don't know what the code is doing if you're having it generate a template for you. It's like, well, you still have to have enough expertise to know how to create the right template and fill in the right bits for it to do what you're being asked to do. And I think the risk that I see with tools like these is kind of like what we've seen in the world of search engine gaming, is that we have a lot more low-quality websites out there, right?
Helena McCabe:
Yeah.
Andrew Berry:
That don't solve people's needs, that are just trying - and the people who own them and run them don't have the expertise in how to make them better. But at the high end, I don't necessarily see what takes $300,000 to build a website today all of a sudden costing $50,000. I see that all those things that you have in a discovery that are like, "wouldn't it be nice if we could do this," turn from a "no, it doesn't fit into your budget" to a "yes it does," but that the budgets themselves will stay the same.
Helena McCabe:
Yes. That's yeah absolutely. I just feel like we could do more better using these tools.
Matt Kleve:
Of course we can, we just want more.
Seth Brown:
I feel like, to take an analogy, I feel like we're Luddites in the middle of destroying cotton gins, and someone walks in and says, "what do you think about social media and what role is it going to play in the future?" And they would have, like, zero context for that. And they would not know what those jobs look like or what it would mean or anything like that. But there will be jobs, they'll be different jobs, and things are going to change because of these technologies. For sure, that seems certain. Like for right now I feel like it takes a master or professional of a craft in order to improve the work product of an AI to the point where it's even usable. But that seems to be changing fast, like it's catching up. So when it does either catch up or surpass in particular professions, some jobs may go away, but I feel like there's going to be jobs on the other side of that that we're not able to anticipate, and maybe even our best science fiction writers can't foresee yet. So I think change is the only certainty. Is kind of where we're at.
Helena McCabe:
Yeah.
Andrew Berry:
I mean, Drupal has kind of done this. Like, there was a period of time when it was a very technical job to just be responsible for writing HTML and working with tools to upload that to websites. And that was what we would historically have called the webmaster, right? And now Drupal says, "hey, you have to know a little bit of HTML maybe, depending on how the website is set up, but the software is handling that part of it for you." And if you look at what our clients are building out for their teams, they're much more focused on things like web accessibility, good copywriting, proper use of images to convey their messages, all of these sorts of things. Which is a different skill set, but that person who was a webmaster in 2000 and didn't push their career towards, say, becoming a programmer or designer or something else could have pushed their career to becoming that editor on that website, and they still have a job, they're still very valuable, but what they're doing today is still pretty different.
Helena McCabe:
Yeah, I mean, I think with all of this, it's not a question of like, how do we hamstring technology so that we can keep our jobs. It's how can we harness this technology to elevate what we can do. So that my job is something more interesting than maybe it used to be because I have this doing the parts of my job that a robot can now do.
Matt Kleve:
Yeah, I love it. I recommend we all get together in 5 years and we can discuss how wrong we were.
Helena McCabe:
From this side of the street in the boxes we live in.
Matt Kleve:
So when we're relying on the computers to write things, and this is probably just gonna have to be fairly quick because we're already going long, how do we know that we can use what they've written? It seems like we're kind of introducing a new problem to the already hazy art of copyright law. I had a question kind of related to that when I gave my talk at GovCon. It was like, okay, you're copying and pasting this code and you're using this code, can you actually use that code? And my answer was, yeah, I'm using GPL code, and I might be copying somebody else's GPL code. I'm comfortable with that. Some lawyer might tell me I'm wrong, but okay, it's open source.
Andrew Berry:
One of the interesting things is that there are court cases that'll take years to come through, right? One of the ways that I look at evaluating a service for business purposes is: does the company you're procuring the AI tools from offer any sort of guarantee about whether you can use the content or not? So, Microsoft has recently released, I think they call it an AI commitment, around all of their tools, where basically they say that if any of their customers are sued, like what has happened with, I think it was, OpenAI and Getty Images, and a couple of other cases around software source code, they will take on that risk. And that, to me, is a really good insurance policy from a business standpoint: to know that some copyright troll isn't going to come after you, and that you can go back to your tool provider and have them deal with it. But I also think you don't know that the code you're getting from ChatGPT is actually covered by the GPL, because it can't attribute where it's being sourced from. It could have come from, in the Drupal world, something that was from Symfony, right? That is actually BSD or MIT licensed or whatever. And there are a couple of companies out there who are actually building generative AIs that allow you to specify licenses, so you could say, "hey, only use GPL code when generating this," so that you know it's all consistent. But I don't think there's anyone big doing anything like that yet.
Matt Kleve:
How about with prose Matt? With something that you might want to write and adjust and reuse or not?
Matt Robison:
Yeah, there's not really a way to tell.
Matt Kleve:
Other than we can kinda tell at this point, right?
Matt Robison:
Yeah, other than, if it's pulling from existing ideas, you have no idea if it's actually original thought, unless it's a complete hallucination. If you know it's a hallucination, you can be pretty sure it's original. I know some author tried to have it list all the books they'd written, and it literally made up a book at the end of the list, and they said, "no, I didn't write that." And it said, "oh, you're right, I'm sorry." And then they started playing with, "if I did write that, what would the first scene be," type of thing. So I don't know.
Helena McCabe:
That’s neat.
Matt Kleve:
Helena…
Helena McCabe:
I know at least with like mid - sorry what were you gonna say?
Matt Kleve:
I was gonna ask you about art now.
Helena McCabe:
Oh good!
Matt Kleve:
So, Helena, you've talked about using these tools like DALL-E for art. And one interesting thing I read once upon a time was that when you start asking it to give you something in the style of another artist, say, give me some kind of picture or artwork in the style of Kinkade or Peter Max or something like that, then you might end up crossing lines too.
Helena McCabe:
I mean, ethically, morally, probably yes. You probably shouldn't completely emulate an artist's style. But legally, from what I've read, how it works in the US…
Matt Kleve:
I’m not a big city lawyer but…
Helena McCabe:
… copyright law in the United States says that you cannot copyright anything that was not made by human hands, i.e., AI. So while you are free to use whatever you generate in Midjourney or something like that, so is anyone else. So you wouldn't want to use that art for something that you wanted to keep for your company, like your new logo, because you can't copyright it; it doesn't belong to you. So it's something to be taken with a little bit of caution.
Matt Kleve:
Interesting.
Andrew Berry:
One of the things that bothers me about the way this has all gone down with these big AIs is that they have done the things that small companies or individuals are completely blocked from doing, right? So ChatGPT, or OpenAI, crawling the entire web, ignoring, whether you agree with them or not, the terms and conditions around what you can do with content on sites, right? They are basically writing their own rules, like Uber or any of these other companies, where they're just hoping that they get big enough, and show enough value, that the laws and perceptions will change around them. And as someone who thinks about the role of open source software and sharing content that way, I can't just write a web scraper and throw it on a web server and hope it's gonna work, right? I'm gonna get blocked. [Inaudible 01:05:32] gonna get sent legal requests asking for my accounts to be turned off, and they're going to be turned off, right? And so there's this real asymmetry in terms of the power these large AI companies have right now and the limits on who else could break into the market or what individuals could do with it. And I feel like maybe we luck out, and from one perspective we get to the other side of this and we say, hey, you know all those terms and conditions on websites that say you can't copy and paste text or whatever? Maybe that all goes away. Maybe we end up in a world where copyright is actually weakened in terms of what large corporations can say you do with it, but it's probably gonna take a decade till we know that.
Helena McCabe:
Andrew, do you think maybe they're Robin Hooding a little bit for us there? I mean, like you said, I can't go and scrape the internet, but if this tool that you have access to is allowed to do that, are they kind of robbing that information for you and giving it to you?
Andrew Berry:
I think the pivot that OpenAI made, from being a "for the public good" research institution to being a fully for-profit business, would indicate that's not their goal; it's certainly not what they want to be doing. I think all these tools are trying to build up the biggest moats they can, so that you're used to working with ChatGPT, and you know any competitor is not gonna have the historical context of your interactions or the way that you use it, and the costs of moving are just gonna be too high.
Helena McCabe:
Oh, that's interesting. Yeah.
Matt Kleve:
Well, I think we could continue to talk about everything, but it might be best if we point toward wrapping this up. So I'll give you all one last chance to act like we all know exactly what the future holds, and maybe share some takeaways from our conversations and from our experiences using these tools. I'll start with Andrew.
Andrew Berry:
What does this mean for the average Drupal developer who wants to put code up on Drupal.org and so on? The good news is there are initial passes at policies for that; if you search for AI-generated content on Drupal.org, you'll find them. They've got some very basic guidelines around disclosure, review, and what human intervention you need to provide, so take a look at that. It's probably a good set of policies and guidelines for any work that you're doing with others, whether it's in your personal life or in your business life. So, a good place to start.
Matt Kleve:
Karen.
Karen Stevenson:
Yeah, so many things. One thing that I think is gonna be really interesting is the impact on, how do you know what's true? How do you know what’s real? That's just getting murkier and murkier as we go.
Matt Kleve:
Oh yeah.
Karen Stevenson:
And it's not gonna get any better, and I don't know how we're gonna solve that, because we don't have some way of telling what's true.
Matt Kleve:
Yeah, coming into an election year and we haven't even talked about deepfakes and other tools that are out there to make it look like…
Karen Stevenson:
Hallucinations! You're working with a tool that just makes things up and we can’t tell the difference.
Helena McCabe:
[inaudible 01:09:03] podcasts right there.
Matt Kleve:
If I have enough video and audio of you, Karen, I can make you stand in front of the presidential podium and give whatever speech I want. So…
Karen Stevenson:
That's right. Why don't you just have me run for president while you’re at it?
Helena McCabe:
She’s a good candidate! I’d vote for Karen!
Matt Kleve:
You’ve got my vote! Matt?
Matt Robison:
I don't have anything else to add other than that I can only predict the past. I can't really say what's gonna happen in the future.
Matt Kleve:
It’s true. Helena?
Helena McCabe:
I mean, I'm a pie-eyed optimist, so I'm very excited about this. As a creative person, the idea of being able to create as many things as I want, as quickly as I want, as fast as I can have ideas? It's very exciting. I'm like a kid in a candy store, and I hope all my teeth don't rot.
Matt Kleve:
Yeah, what would that mean? Like if we're gonna explode that metaphor, your teeth rotting from AI…
Helena McCabe:
Just like if you enjoy too much of a good thing, sometimes it's not good for you. I mean, I see my sleep already lacking because I'm like oh I can create this art and I can create art of that and then it's 2 in the morning and I'm like, oh, I didn't sleep tonight. Oops!
Matt Kleve:
And you have littles at home. That's no fun. You can't do that.
Helena McCabe:
I know. I know. I need more discipline, but it's just it's my new toy. It's so exciting.
Matt Kleve:
Seth?
Seth Brown:
I foresee the pictures of my Labrador in a yellow rain slicker getting better and better each year. No, I guess for me the thing that I identify with most is the way it's changing writing and my writing process. I used to work for a magazine and had a team of editors that I would work with, and every piece was better for it, because you had a lot of minds to work with. But that's expensive and hard to replicate when you don't work for a really good magazine. And so the thing that I'm amazed by is how much it's starting to feel like I can bring that level of resources and perspective to bear on my own writing, just by using all of these various tools. That's exciting to me. The way that the writing process is changing: it feels less lonely, more interactive, there's more direct feedback available, and I think that's pretty cool. But that's not a big prediction for the future; that's kind of here and now.
Matt Kleve:
No, the only prediction I have is that I would seriously love to revisit this a few years from now. And I said this earlier, but we can all just laugh at how wrong we are because we don't know what the future holds for lots of things including how AI can affect us.
Seth Brown:
Hopefully we still have jobs and money enough to afford podcasting equipment at that point.
Matt Kleve:
Well, or we could just like, do the new version of the podcast where you don't use any equipment and you just hang out and talk with your friends.
Matt Robison:
Just AI versions of all of us. And just see what happens.
Helena McCabe:
I’ll have my bots call your bots.
Seth Brown:
I, for one, want to be the first to welcome our Skynet overlords, and I'll do whatever they ask.
Helena McCabe:
Yeah, I've been polite with my Google Home ever since that rolled out. I'm like, could you please call my phone? Because when the robots take over, I want them to know that I was nice to them when I asked them to adjust my refrigerator's temperature.
Matt Kleve:
I agree with you. That's a joke I made in my talk. It was the, I'm always gonna be nice to the bots because I never know what's coming.
Matt Kleve:
Thanks everybody.
Helena McCabe:
Thanks! This was fun!
Matt Kleve:
Bye!
Matt Robison:
Thanks!
Andrew Berry:
Yeah, thank you!