This is really well articulated, and I’m particularly interested in the interactivity component of AI in writing. Maybe this is unpopular to admit, but I often use AI as a conversational companion (I think best when in dialogue with others) to help suss out what I’m trying to say. I hardly ever use the actual words AI produces, but it’s more that the back and forth will spark something that leads to further clarity. When writing about my own past and history, it has even felt… dare I say… kind, like there was someone there listening to me and echoing me as I did the difficult work of excavating the past.
We used to go to coffee shops & bars to do that. <Sigh>
😩😭
A conversational companion is, I’d say, an excellent model for us not only to understand AI but to get the best benefits from it.
It’s worth bearing in mind that the only reason this might be unpopular is that most people are still terrified of new things. We’re still very early in this adoption cycle, and by trying this out you’re definitely an early adopter.
One of the many great things about your way of seeing things, using AI as a conversational companion, is that it means we can think critically about the responses we’re getting: is it a good response? Is it a bad response? Does this really help me? A bit like a friend we’re getting to know.
That’s great, because as a conversational companion we’re not just obeying it or blindly trusting it; we’re continually giving feedback, evaluating what it’s saying, and getting a chance to see what it can and can’t do, its strengths and weaknesses.
And by using it as a conversational companion you’re actually getting a more nuanced and accurate understanding of what AI actually is. And of course what it is isn’t a fixed thing; it’s constantly changing, because AI is constantly developing. That’s another mistake many people make: they just assume they know what AI is, and most of them haven’t even tried it, which is somewhat ridiculous.
I think your practical, pragmatic approach is perfect, and you’re in a better position to really know what AI is because you’re using it. You understand it more than those who love to hate on AI but don’t understand it because they don’t use it, or those who think AI is wonderful, like a god that’s going to save everybody, but again don’t even use it, so don’t have a clue.
Well done, you’re on the edge of reality and the future 🙂
A lot of your commenters are using an analogy with photography to say that AI fiction will not be all bad. But extend the analogy and think about what happened to art in the first hundred years of photography.
Mid-19th century art was mostly representational and becoming more lifelike, until the Impressionists zigged left and conveyed a mood as much as they captured an image. Images were for cameras. As the 20th century went on — Seurat, Duchamp, Kandinsky, Mondrian, Pollock — art got further and further from what the artists of the mid-19th century would have called art.
While photography took most of the business of representational art, painters had to create new genres to even have a place where they could contribute. There’s a good side to this: photography is beautiful and gave us new ways to speak with an image, and we got Picasso and Chagall. But there were also 100 years where more traditional artists lost their calling while their audience flocked to the new technology, which was cheaper and more available. It’s nice that we have photography, but it took a long time to get to the point where photography gave us back what we’d lost.
I wonder the same thing about AI and writing music, too.
There is going to be a lot of lazy junk music spit out by careless use of the generators, but focusing on these pieces will be a distraction, for the same reason that we should not dismiss painting after seeing a collection of poor amateur paintings. Some ambitious artists will use AI to unlock musical projects of much grander scale. It will unlock the ability to scan musical space more effectively, searching for novel beauties and new satisfying genres. The ambitious composer standing out from the crowd will use musical output from AI as raw material to assemble a coherent whole whose very aesthetic transcends what is possible with lazy one-shot use of AI tools. This will follow from a sort of natural selection in the space of beauty. Beauty that is too easy to craft will be in too heavy supply, and so it will completely lose its novelty and appeal to audiences. The beauty that is remembered is usually beauty that was novel at its time of production. The demands of novelty in art accorded masterpiece status will pretty much rule out lazy use of genAI for those pieces. I am personally very excited to see what clever, ambitious, and hardworking musicians come up with.
How interesting, when you put it like that. I'd never thought about it that way before.
Look up Michael Smith, who was recently busted for generating hundreds of AI songs and then using bots to stream them, generating millions of dollars on streaming platforms.
What a fiasco! I'll look into it.
"If it’s fair for a human to learn how to write from reading others, AI should be able to as well." I hadn't thought about this before and it's a good point. But the issue here is still consent - authors published their writing thinking they knew what would happen to it. They knew that, if they were successful, they would influence future writers. But they didn't agree to be added to data sets kept by profit-focussed software companies. That's where the difference lies, I think. I'm also not sure if plagiarism has to mean copying from one specific work - if you copied work from 3 different bachelor’s theses and submitted it as your own, it would still class as plagiarism.
I feel like I'm disagreeing a lot here, but I really enjoyed reading this - thank you!
I don't see you as disagreeing! I agree about consent, and see it all as part of the gray area. What is reasonable to expect and what counts as fair use are genuinely hard problems to grapple with here. I think both analogies (it's just like a human reading it, or it's just plagiarism) fail to capture AI accurately.
Maybe you can clear something up for me… it seems to me that there is a bit of a grey area between 1) whether AI can make art (or have an artistic process of its own) and 2) whether AI can help us in our own artistic process. Can you speak to this distinction?
Yeah, I was a bit frustrated that the Chiang article doesn't grapple with this distinction at all.
There's a strong claim you could take anti-AI folks as making: AI cannot contribute at all to art. This is silly and clearly false, since people use it all the time--sometimes for simple things like brainstorming, or for helping with phrasing, etc.
The weaker claim they could be making is AI can't make art on its own. This could be true in a boring sense: current genAI doesn't act with intention or feeling and we can define art as requiring those. But it's certainly true it can create stories and pictures that are pleasing, and that sometimes it creates ones that you would think are art if you didn't know it was produced by an AI.
There's lots of grey area in between these extremes--even in the "weak" sense above, usually a human is playing the role of "curator", picking what's good enough to share, but you could imagine someone who uses AI to generate a whole story and then just does light editing, or heavy editing. You could imagine someone writing the first draft and having AI do the editing, or writing about half and having AI write the other half.
If we don't consider genAI stuff art because it was created without intention, at what point in the grey do we flip from considering it art to not? That's kind of a judgment call, which to me points to the fact that we can't make strong claims about what AI can or can't do in the art world. There isn't a simple rule to distinguish art from non-art, and there's no obvious reason AI can't be involved in art, even if we define art in such a way that it excludes stuff purely made by AI.
Not sure if that speaks to your question, but those are the thoughts I had based on your comment!
This is a subject I think about a lot and have written about recently. I also reflect on it as an artist using generative AI images as raw input for making audioreactive visual art that would not be possible with a lone human, nor with a lone AI. I pretty much completely agree with everything you say here. I am surprised that Ted, whom I admire just as much as you, brings up the photography example and then proceeds to say, "Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no." I would just say I strongly disagree with Ted's conclusion there. The analogy between photography and generative AI for imagery is in fact an extremely good one in many respects.
We can think of genAI as a kind of camera that photographs a new alien world. Sure, it is easy to snap a picture: just type in some random prompt. This is analogous to taking a normal camera, aiming it in a random direction, and clicking the shutter release. Of course, for an artist pursuing some kind of vision, neither of these is going to give them what they want. A lot of fine-tuning ensues in either case. The photographer finds something else to point the camera at, adjusts the camera's settings, does post-processing, etc. An exactly analogous process happens with the image generators, which I strongly prefer to think of as "intelligent paintbrushes" (I don't go around calling cameras "photo generators"). Anyway, adjusting and fine-tuning the prompt is well analogized to adjusting where you point the camera and altering its settings. But this is just the beginning of the story. Advanced users of the intelligent paintbrush know that there are a whole lot of dials and knobs that let you get more precise control over the result. As an example, you can pick a subregion of your image and redraw that region with a separate prompt, or with separate settings for the diffusion model. I could give many more examples, but the point is already clear. To reach a final piece of art that coheres with an artist's vision, the artist usually accumulates a whole slew of intentional decisions, small and large, even when AI tools are involved. Sure, you might get lucky with your first prompt, but so what? The same can happen in photography. The best art will still be made by the artists who, wielding the new tools, put in the hard work of intentionality. The fact that you can decide not to do this shouldn't be the primary thing shaping your view of the tools.
And this doesn't even get to the other point that you discuss so well. The ease of creating high-quality visual imagery now opens up new genres of art. We can now pursue projects of much greater ambition. Interactive work, as you described, is one great example. More generally, so is any art project that requires large volumes of static images, but where the interesting part of the art arises from the assembly of these individual raw materials into a bigger whole. I think a lot of the genres that thrive going forward will be genres that, simply by virtue of their aesthetic quality, prove that human input was involved. And the fact that the individual raw materials were generated by AI is no more of a problem than the fact that photographers don't build and assemble their own cameras, or that painters don't forage and synthesize the materials for their paints. What if someone sits down and generates 10,000 AI images with a coherent theme, style, and story, and covers the full interior of a flight hangar with them? Sounds like art to me.
As another example, I will be shameless enough to link to one of my own pieces that I put out recently. AI video tools are terrible at high-precision audioreactive work. Audioreactivity is made satisfying by very tight synchronization to the audio, and there is very little room for imprecision. But with genAI, you can now generate raw materials to feed into manual workflows for audioreactive image processing. This opens up new projects that just weren't possible for a single artist before; you would have needed a team of people. Anyway, here is the piece:
https://www.youtube.com/watch?v=XOvvEHv6gjc
"Even if the tools are useful to writers at this point, they're not game-changers by any stretch of the imagination."
This sentence points to a sort of gap in the current conversation, which generally focuses on AI doing existing things on its own, or on humans using AI to do new things. What's missed is that humans also use these tools, *as* tools, to do existing things more easily (and maybe even better).
Because ChatGPT (or any service like it; I prefer Claude) can replace a whole set of other writing tools: It's a thesaurus and a reverse dictionary and the 'Romance Writer's Phrase Book' (only with a more intuitive index than any of these) and it's also Gardner's plot wheel and Burroughs's cut-up technique and a set of RPG random encounter tables (only just plain *bigger* than any of these).
Granted, the original versions of these tools are themselves a bit stigmatized. But no one asks you to disclose if you used them or regards a work as tainted if they were part of its production.
I don't see why AI should be treated any differently, but it looks like that's where we're headed.
I mentioned this is one of my own articles, but the procedurally generated assets in No Man's Sky are a good example of the augmented human creativity you're describing. There are 18 quintillion planets in that game (not an exaggeration, look it up), and each one has its own flora, fauna, terrain, and weather patterns. Human artists can't generate that kind of output, but your console's CPU can do it in the thirty seconds it takes you to enter the planet's atmosphere.
I'm not going to rant about the replacement of the artist by the machine in my reply, but will try to focus on where I see its application being useful and welcome. You mention video games, and although I'm not a gamer, I get excited about the potential for more meaningful relationships between the player and NPCs, say in Breath of the Wild.
Characters in the fantasyscape will become rich and nuanced, fundamentally transforming the game into something that isn't achievable without AI. In the game now, you can build a home for yourself, but its use and meaning are limited. The village you claim as your own holds no real importance because you can't develop relationships with its inhabitants. In other games, NPCs follow you on your quest, becoming a helper or part of a team, but they're currently little more than a bag of tools. AI-driven interaction will dramatically open up meaning within these games.
I also welcome it for mundane tasks. It's being used to create graphs based on the content of the article you've written, and I don't see that as replacing the purpose-driven expression of a human soul. It allows writers to focus their efforts on the important things.
I’m leery of AI, but equally skeptical of the alarmists’ claims about intellectual property protection and compensation rights with respect to generative AI systems. I share their real fears of replacement by ever more refined technology, but I can’t help feeling their arguments are flawed.
All art is derivative. We have to ingest to be inspired. Creating something from nothing is much harder than improving, altering, continuing, or contributing. We’ve all referenced a forerunner or a pedagogy when offering a rationale for our actions, regardless of whether we were conforming to it or contradicting it. No one sets out to reinvent the wheel.
Creators whose ownership rights and compensation interests will shift away from exclusively human content creation, and the humans impacted by tools designed to replace them or to expedite content more efficiently, all have valid interests and concerns in the coming years. But this shift is inevitable, and addressing it is better than ignoring it, no matter your outlook.
I think total replacement is a long way off. But establishing compensation and role models for new paradigms isn’t dumb, and addressing it sooner is better; it can always be amended later if things don’t collapse. My totally uninformed view is something more collaborative, where layout and direction are established by human editors and writers, but AI creates rough versions of scenes more quickly, overseen by an art director and an AI prompt tech (AI Input Engineer isn’t a real title; subtle rephrasing of generative requirements and design requests is not engineering, it’s an applied science, and that’s OK). Finishing can be done in design software by human artists with stylus tablets. I mean, if that’s not being done already. I don’t know. 🤷♂️
Intent, impetus, and reality aside, the fact that something was intended to be seen doesn’t dictate who can see it, nor does it give away rights to depict things whose ownership was previously established. Meaning that an AI company “using” published material that someone bought to read, ran through an optical scanner, and uploaded to the cloud could absolutely be considered a copyright violation, and fraud against the company that produced the material, assuming they were selling access. Likewise for generating revenue from art or merchandise that contains those trademarks. Even if they were giving it away for free, it’s still theft. But none of that seems to be what’s happening. It’s just the usage as reference material that’s being argued. That seems limiting.
Is an AI company feeding content to an optical scanner to view and collate within its algorithms any different from me reading the same content and committing it to memory? I dunno. The straightforward argument that a license for the content should have been sought, at a fair cost for usage rights plus damages, seems reasonable. But if I recreate content I’ve read, whether or not I’ve paid for the experience, it wouldn’t invite the same scrutiny. That may be appropriate given the hype surrounding GAI and how limited my potential is in comparison. But it still makes one wonder about the difference between what is in effect the same activity, judged by different tolerances.
"Even if the tools are useful to writers at this point, they're not game-changers by any stretch of the imagination."
This sentence points to a sort of gap in the current conversation, which generally focuses on AI doing existing things on its own, or on humans using AI to do new things. What's missed is that humans also use these tools, *as* tools, to do existing things more easily (and maybe even better).
Because ChatGPT (or any service like it; I prefer Claude) can replace a whole set of other writing tools: It's a thesaurus and a reverse dictionary and the 'Romance Writer's Phrase Book' (only with a more intuitive index than any of these) and it's also Gardner's plot wheel and Burroughs's cut-up technique and a set of RPG random encounter tables (only just plain *bigger* than any of these).
Granted, the original versions of these tools are themselves a bit stigmatized. But no one asks you to disclose if you used them or regards a work as tainted if they were part of its production.
I don't see why AI should be treated any differently, but it looks like that's where we're headed.
I mentioned this is one of my own articles, but the procedurally generated assets in No Man's Sky are a good example of the augmented human creativity you're describing. There are 18 quintillion planets in that game (not an exaggeration, look it up), and each one has its own flora, fauna, terrain, and weather patterns. Human artists can't generate that kind of output, but your console's CPU can do it in the thirty seconds it takes you to enter the planet's atmosphere.
I'm not going to rant about the replacement of the artist by the machine in my reply but try to focus on where I see its application being useful and welcome. You mention video games, and although I'm not a gamer, I get excited about the potential of more meaningful relationships between the player and NPCs, say in The Breath of the Wild.
Characters in the fantasyscape will become rich and nuanced, fundamentally transforming the game into something that isn't achievable without AI. In the game now, you can build a home for yourself, but its use and meaning are limited. The village you claim as your own holds no real importance because you can't develop relationships with its inhabitants. In other games, NPCs follow you on your quest as helpers or party members, but they're currently little more than a bag of tools. AI-driven interaction will open up meaning within these games dramatically.
I also welcome it for mundane tasks. It's being used to create graphs based on the content of the article you've written, and I don't see that as replacing the purpose-driven expression of a human soul. It allows writers to focus their efforts on the important things.
I’m leery of AI, but equally skeptical of the alarmists’ claims about intellectual property protection and compensation rights around generative AI systems. I share their real fears of replacement by ever-more-refined technology, but I can’t shake the feeling that their arguments are flawed.
All art is derivative. We have to ingest to be inspired. Creating something from nothing is much harder than improving, altering, continuing, or contributing. We’ve all referenced a forerunner or a pedagogy when offering a rationale for our actions, regardless of whether we were conforming to it or contradicting it. No one sets out to reinvent the wheel.
Creators with ownership rights and compensation interests, whose focus will move away from exclusively human content creation, and the humans impacted by tools designed to replace them or to expedite content more efficiently, all have valid interests and concerns in the coming years. But the shift is inevitable, and addressing it is better than ignoring it, no matter your outlook.
I think total replacement is a long way off. But establishing compensation models and roles for new paradigms isn’t dumb, and addressing it sooner is better. It can always be amended later if things don’t collapse. My totally uninformed view is something more collaborative: layouts and direction established by human editors and writers, while AI creates roughs of scenes more quickly, with even that overseen by an art director and an AI prompt tech (AI Input Engineer isn’t a real title. Subtly rephrasing generative requirements and design requests is not engineering. It’s an applied science. And that’s ok.) Finishing can be done in design software by human artists with stylus tablets. I mean, if that’s not being done already. I don’t know. 🤷‍♂️
Intent, impetus, and reality aside, making something intended to be seen doesn’t dictate who can see it. Nor does it give away the rights to depict things whose ownership was previously established. Meaning: if an AI company “used” published material that someone bought to read, scanned, and uploaded to the cloud, that could absolutely be considered violating copyright and defrauding the company that produced the material, assuming they were selling access. The same goes for generating revenue from art or merchandise containing those trademarks. Even if the company was giving the material away for free, it’s still theft. But none of that seems to be happening. It’s just the use as reference material that’s being argued, and that seems limiting.
Is an AI company feeding that same content to an optical scanner to view and collate within its algorithms any different from my reading it and committing it to memory? I dunno. The easy argument, that a license for the content should have been sought and a fair cost for usage rights plus damages paid, seems reasonable. But my recreating content I’ve read, whether I paid for the experience or not, wouldn’t invite the same scrutiny. That may be appropriate given the hype surrounding GAI and how limited my potential is in comparison. But it still makes one wonder about the difference between what is in effect the same application, held to different tolerances.