Can a computer make art?

It doesn't really matter; anything can be art. What matters is whether it's any good.
If we take good as equal to nice to look at, then can natural beauty be art?
 
How would you define "art"?

Does art require a conscious creator? Or can it be created using natural processes?

The answers to OP's questions seem to depend on these questions being answered first.
Yeah. To add, if the computer is 'creating art' by running a program written by a human, you can argue whether it is actually the computer 'creating' something, or the human.

Also, if one believes that art exists in nature, then you could argue that art needs no conscious mind. Then creation & evolution itself produces 'art' although not intentionally.
 
Yeah. To add, if the computer is 'creating art' by running a program written by a human, you can argue whether it is actually the computer 'creating' something, or the human.
If the same code created a "work of art", an aged photo of a criminal, and denoised medical images, would that make sense?
 
Also, if one believes that art exists in nature, then you could argue that art needs no conscious mind. Then creation & evolution itself produces 'art' although not intentionally.

My personal opinion is that nature creates the potential for art. I haven't spent a lot of time thinking about this, so I might very well change my mind.. but it seems to me that a conscious observer still needs to come along and frame the subject matter in some way and "proclaim" that art has been spotted/created/etc.

That's because to me art is a very personal thing, potentially different for every artist and for every art observer/viewer. One artist could disagree with another artist over whether something one of them created is art or not.. It might be art to one, but not art to the other.. I tend to take on a rather broad definition here, in that.. if it's art to somebody, then it's fine to call it art.. meaning that a whole crapload of things on this planet (and elsewhere) are art, ranging from all the obscene things called art in the past, to things only 4 people on the planet consider art, etc. Something could even be a piece of art without any artist ever considering it art in the first place, by virtue of some random person walking past, looking at it, and proclaiming that exact vantage point to be a piece of art..

I just don't see much value in arguing over what art is exactly.. it will be different things to different people, and IMO that's fine. If something stands in a museum that I don't consider art.. but somebody out there does.. I'll call it art. If I read an article about some random non-art-world person calling something art, that nobody else on the planet considers art.. I'll say... "This doesn't fall under my personal definition of art, BUT it has now been framed as a piece of art by somebody out there, so it is now valid to consider it from that vantage point.. i.e. as a piece of art"

So can a computer create a piece of art? IMO you need a conscious observer in there somewhere, whether it's the programmer who coded the algorithm that generated the art.. or a passer-by who looked at a random piece of output and proclaimed it art.. OR the computer itself being self-aware and able to make decisions like humans can, also being aware of its own self.

This question is so incredibly broad to me, that.. as soon as we point at something and ask: "Is that art?".. Well, it can be. We are now discussing it in the context of art. Even if it's the most vile thing imaginable or simply a straight line.. or one dot.. or a blank canvas. Art is more than just what you can see. A lack of something in a piece of art can be equally if not more powerful than what it does contain. We are now talking about it in the context of art, which is in a way an expression of art in itself.

I suppose I do not like the distinction of art vs non-art in the first place. It feels like a form of gate-keeping. You do not need any sort of certification to create art. You just need your brain, IMO. As for the tools, that is up to you as well. You can go out there and create art without even knowing it. If somebody else is watching you from the 8th floor always walking to work the same way.. the "desire path" that is formed through the field might at one point be considered art, and you'd have never even known you participated in creating art. Somebody photographing or drawing it from above, or even framing it as art in their mind, that is IMO enough to "make it" art.

To me art is essentially a very flexible thing, and up to the artist or the art observer. If a non-self-aware computer construct is able to create something that we can then debate over, as art or non art.. That in itself creates a situation where I'd already accept the thing as art, since the conversation would have already been framed as such. To me "Is it art?" is like asking "is it math?". Many things can be represented as numbers, so in a way.. yes, everything is math. But it isn't. It's the same w/ art (for me)

So.. if a bear craps in the woods.. and nobody is there to see it and call it art.. Would that be art then? Let me just say this. There is a bear out there somewhere crapping right now and it isn't art at all. But if we were watching this via webcam it very well could be. That also assumes that bears aren't hyper intelligent and artsy creatures, crapping in a way that makes sense to their artistic brains and nobody else. Which very well could be as well
 
People can make art with computers. People can use computers to make art.
Either way, even if you don’t think those two statements totally overlap, people are still a critical element.

But can a computer ever be trained to make art? Or will it always require human input?

I am on the pro-human side. Being human, I am also biased.

A computer can make some types of art - a good example is 2D images. Of course it is touch and go, but it can work because it doesn't require the computer to realize anything: it just latches onto a style and is then trained using a second process (typically one that compares its output to original, non-computer art, to decide whether it passes according to the computer).
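Roughly speaking, that "second process" matches the adversarial setup used by many image generators: one network proposes images, and a second network trained on real, human-made images judges whether they pass. A minimal sketch, assuming a tiny PyTorch setup with made-up sizes (not any particular system):

```python
# Illustrative sketch only: a tiny adversarial ("two process") setup.
# Sizes and names are made up; this is not any specific art generator.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # hypothetical latent size and flattened image size

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),
)
critic = nn.Sequential(  # scores "does this pass as real, human-made art?"
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One round: the critic learns to spot fakes, the generator learns to fool it."""
    n = real_images.size(0)
    fake = generator(torch.randn(n, LATENT))

    # Critic: real images should score 1, generated ones 0.
    c_loss = loss_fn(critic(real_images), torch.ones(n, 1)) + \
             loss_fn(critic(fake.detach()), torch.zeros(n, 1))
    opt_c.zero_grad()
    c_loss.backward()
    opt_c.step()

    # Generator: try to make the critic score its images as real.
    g_loss = loss_fn(critic(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Usage (assuming a DataLoader of flattened images scaled to [-1, 1]):
# for batch in dataloader:
#     train_step(batch)
```

The "comparing it to original, non-computer art" step is the critic here; the generator never sees the originals directly, only the critic's verdict.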
I don't think a computer can make literature that passes as art, with the possible exception of flash fiction (because that too is mostly suggestion, and is very brief).

That said, even with flash fiction, the computer would have to be specifically trained on it. You can read the recent article about that LaMDA chatbot and its pitiful short story centered on an owl :) (and no, LaMDA isn't sentient; there are many videos as to why that wouldn't be possible, and they focus on what the program is, not philosophical questions about it).
 
LaMDA isn't sentient; there are many videos as to why that wouldn't be possible, and they focus on what the program is, not philosophical questions about it
I do not buy that, and not only because anything presented only in video form is generally not very trustworthy.

While I do not think LaMDA is sentient, I do not agree that it would not be possible. It would certainly need philosophy to answer. I have looked at some arguments, and none of them define sentience in an objective falsifiable sense. I think we should ensure we have the tools to answer this question before something comes along that is more believably sentient.

For example, here is an argument against it:

The Chinese Room is a philosophical thought experiment proposed by the philosopher John Searle in 1980. He imagines a man with no knowledge of Chinese inside a room. Sentences in Chinese are then slipped under the door to him. The man manipulates the sentences purely symbolically (or: syntactically) according to a set of rules. He posts responses out that fool those outside into thinking that a Chinese speaker is inside the room. The thought experiment shows that mere symbol manipulation does not constitute understanding.

This is exactly how LaMDA functions. The basic way LaMDA operates is by statistically analysing huge amounts of data about human conversations. LaMDA produces sequences of symbols (in this case English letters) in response to inputs that resemble those produced by real people. LaMDA is a very complicated manipulator of symbols. There is no reason to think LaMDA understands what it is saying or feels anything, and no reason to take its announcements about being conscious seriously either.
The difference between the Chinese Room and LaMDA is that LaMDA learns. The rules have been generated by LaMDA itself by looking at trillions of bits of text, returning other bits of text, and altering a neural network so that the responses give the results that the machine is designed to "want" (reinforcement learning). This is also a description of how we learn. The only difference the authors offer between us and LaMDA is the postulate that "there is good reason to think LaMDA’s functioning is not sufficient to physically generate sensations and so doesn’t meet the criteria for consciousness", but it seems to me this is only true by definition. If we define certain patterns of neural activity, generally generated in a very small bit of our brain called the limbic system, as "sensations", then nothing without a limbic system can have them. We have decided that lobsters and octopuses can be sentient without a limbic system, so what objective measure can we use to deny LaMDA's reinforcement learning system the same status when it does largely the same job?
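To make the contrast concrete, here is a toy sketch (purely illustrative, and not a claim about LaMDA's actual architecture): a Chinese-Room-style responder whose rule book never changes, next to a responder whose "rules" are statistics adjusted every time it sees new conversation data.

```python
# Toy contrast, for illustration only.
from collections import Counter, defaultdict

# Chinese Room style: a fixed rule book that never changes.
FIXED_RULES = {"ni hao": "ni hao!", "zai jian": "zai jian!"}  # hypothetical rules

def room_reply(symbol: str) -> str:
    return FIXED_RULES.get(symbol, "?")

# Learning style (very loosely): the "rules" are statistics built from data
# and updated whenever new examples are seen.
learned_rules = defaultdict(Counter)  # prompt -> counts of replies seen for it

def learn(prompt: str, reply: str) -> None:
    learned_rules[prompt][reply] += 1  # the rules change with experience

def model_reply(prompt: str) -> str:
    seen = learned_rules.get(prompt)
    return seen.most_common(1)[0][0] if seen else "?"

learn("hello", "hi there")
learn("hello", "hi there")
learn("hello", "hey")
print(model_reply("hello"))  # "hi there": the statistically favoured reply
```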
 
I do not buy that, and not only because anything presented only in video form is generally not very trustworthy.

While I do not think LaMDA is sentient, I do not agree that it would not be possible. It would certainly need philosophy to answer. I have looked at some arguments, and none of them define sentience in an objective falsifiable sense. I think we should ensure we have the tools to answer this question before something comes along that is more believably sentient.

For example, here is an argument against it:

The Chinese Room is a philosophical thought experiment proposed by the philosopher John Searle in 1980. He imagines a man with no knowledge of Chinese inside a room. Sentences in Chinese are then slipped under the door to him. The man manipulates the sentences purely symbolically (or: syntactically) according to a set of rules. He posts responses out that fool those outside into thinking that a Chinese speaker is inside the room. The thought experiment shows that mere symbol manipulation does not constitute understanding.

This is exactly how LaMDA functions. The basic way LaMDA operates is by statistically analysing huge amounts of data about human conversations. LaMDA produces sequences of symbols (in this case English letters) in response to inputs that resemble those produced by real people. LaMDA is a very complicated manipulator of symbols. There is no reason to think LaMDA understands what it is saying or feels anything, and no reason to take its announcements about being conscious seriously either.
The difference between the Chinese Room and LaMDA is that the rules have been generated by LaMDA by looking at trillions of bits of text, returning other bits of text, and altering a neural network so that the responses give the results that the machine is designed to "want" (reinforcement learning). This is also a description of how we learn. The only difference the authors offer between us and LaMDA is the postulate that "there is good reason to think LaMDA’s functioning is not sufficient to physically generate sensations and so doesn’t meet the criteria for consciousness", but it seems to me this is only true by definition. If we define certain patterns of neural activity, generally generated in a very small bit of our brain called the limbic system, as "sensations", then nothing without a limbic system can have them. We have decided that lobsters and octopuses can be sentient without a limbic system, so what objective measure can we use to deny LaMDA's reinforcement learning system the same status when it does largely the same job?

Sorry, by "not possible" I meant "not possible given what we know about Lamda", not in a philosophical sense. Of course by this it is simply meant "it is very highly unlikely". In other words I was only alluding to the examination of how Lamda is very much programmed to accept prompts and not antagonize the person talking to it (if the prompt, part of the question, is that Lamda is useful etc, Lamda wouldn't react in a way that outright denies this). As far as I know, it is a program that is trained to complete a sentence, or react to a prompt, in a way which statistically would be near what a human would do - making use of the responses it has read online to similar prompts. Computer science youtubers tend to focus on that in the videos I have watched.

Besides, why would something trained as a chatbot, even randomly, end up as "sentient"?

I know of the Chinese room. The person replacing symbols with other symbols is still a person there, though - he just doesn't know Chinese.
 
Sorry, by "not possible" I meant "not possible given what we know about Lamda", not in a philosophical sense. Of course by this it is simply meant "it is very highly unlikely". In other words I was only alluding to the examination of how Lamda is very much programmed to accept prompts and not antagonize the person talking to it (if the prompt, part of the question, is that Lamda is useful etc, Lamda wouldn't react in a way that outright denies this). As far as I know, it is a program that is trained to complete a sentence, or react to a prompt, in a way which statistically would be near what a human would do - making use of the responses it has read online to similar prompts. Computer science youtubers tend to focus on that in the videos I have watched.
The thing is you could say much the same about a human learning language.
Besides, why would something trained as a chatbot, even randomly, end up as "sentient"?
The question is why not?
I know of the Chinese room. The person replacing symbols with other symbols is still a person there, though - he just doesn't know Chinese.
In the Chinese room the rules do not change. In the standard formulation it is a human, but in this age there would be no need for it to be; a computer following predefined rules would be identical. The difference between us and LaMDA is that we can change the rules depending on the result.
 
Computers/AI can make art now. The human artist is already obsolete in the conventional sense.

it's too soon to say that accurately. art is inherently subjective, and for the moment human-produced art still dominates consumption. i wouldn't bet on that staying true for very long, but right now machine art still has only a small sliver of the "art" market.

it's not a reach to conclude that ai will eventually be able to produce art at or above human level for a fraction of the time investment/cost. i would not be surprised to see it in our lifetimes, or even within 10 years. but i also don't think it's guaranteed that we will observe "ai-generated art is the majority of art consumed by humans" in 5-10 years, so it's still a little early to say the human artist is obsolete. it's trending that way, though.

i don't envy those aspiring to be artists in the traditional sense as this tech takes hold. though frankly, artists aren't the only ones who should be worrying about getting priced out of their own market by an ai. that would be most of us here. might take ai development an extra decade or two before it can do surgeries better than humans or drive better than humans etc, but there's no apparent barrier that says it can't surpass human performance on these any more than there is for art...
 
it's trending that way, though.

Here's a quote from Civilization IV that you should all be familiar with, and I'll add something to it.
"What Gunpowder did for war, the printing press did for the mind", and DALL-E 2 will do for art.
The day art is created by the spoken word, the pen, pencil, and brush will be put down, on the whole. You always have holdouts, but I'm sure you understand what I mean.
 
Here's a quote from Civilization IV that you should all be familiar with, and I'll add something to it.
"What Gunpowder did for war, the printing press did for the mind", and DALL-E 2 will do for art.
The day art is created by the spoken word, the pen, pencil, and brush will be put down, on the whole. You always have holdouts, but I'm sure you understand what I mean.

i understand, was just pointing out that "obsolete" implies the overtake has already happened, which isn't true. i suspect it will be true in the near future, but am not certain how quickly. human institutions have a habit of moving to suppress competition, and i predict we'll see a fair amount of that attempted here too. how much the current art institutions embrace vs resist machine art will impact whether i expect a < 10 or > 10 year timeframe on machines overtaking humans in art generation. i don't foresee any way the trend actually stops though, just a matter of how long it's delayed.
 
i understand, was just pointing out that "obsolete" implies the overtake has already happened, which isn't true. i suspect it will be true in the near future, but am not certain how quickly. human institutions have a habit of moving to suppress competition, and i predict we'll see a fair amount of that attempted here too. how much the current art institutions embrace vs resist machine art will impact whether i expect a < 10 or > 10 year timeframe on machines overtaking humans in art generation. i don't foresee any way the trend actually stops though, just a matter of how long it's delayed.

I agree. Plus, there will probably always be a niche market for the "real thing", both in personal commissions and in public galleries. It may end up like anything else we make that is of superior quality but artificial: there are always those that want authentic, name-brand, genuine articles, especially those with money. I was speaking mainly towards how most art will be produced. After all, there are still swordsmen and still people that use typewriters, although it is very niche. The statement "Art is Dead", in my opinion, is total BS. I personally think it's in a Renaissance, and will continue to be for a very long time. However, I do think gallery art is dead. Modern art is a perverse disaster, so hopefully something like this will force it to go back to its roots.

 
I suppose I do not like the distinction of art vs non-art in the first place. It feels like a form of gate-keeping.
I think we need a gate. The usefulness of language is contingent upon having a boundary, though it is open to a degree of interpretation.

No matter how good the algorithm, I can’t appreciate something generated with minimal human input as art; as a tool, the computer is great! But I don’t program anything to produce an image; I just draw it digitally with some assistance.

The aha! response is to find some computer-generated art and then not tell me, and won’t I be made the fool then? Nah, I don’t think so, because once I know the creator isn’t really a human my appreciation is probably going to go down.

It’s a quality that can’t be defined well, but I’ll say for me it’s the WOW! factor: take practical effects in movies vs. CGI. Computers are big and powerful, okay.


Ingenuity and craftsmanship like this, I don’t see it being replicated. I’m not putting down digital art at all; I just feel that it needs something, the human touch.
 
I think we need a gate. The usefulness of language is contingent upon having a boundary, though it is open to a degree of interpretation.

pretty much this. you have to draw a line somewhere. otherwise you get absurdity like your favorite song and the gas you put in your car both being "art".

if we want words to have meaning, the existence of boundaries to constrain those meanings is mandatory.

Nah, I don’t think so, because once I know the creator isn’t really a human my appreciation is probably going to go down.

if you reliably can't discern over time, as i expect will be the case as pc-generated art improves, what drives the importance you place on the origin of the art? will it get to a point where you must confirm who made something before you can evaluate how much you enjoy it? if so...why aren't you already doing that without exception now?
 
If we take good as equal to nice to look at, then can natural beauty be art?

One way to answer this is to imagine a painter creating a hyper-realistic portrait of a model who never existed.
For such types of art (decorative/painting/images) I think it doesn't matter at all what the work had as input, or how the creator got from there to the output. In writing it doesn't seem possible to make any decent art-by-computer, probably because there is no fixed output to compare against (the writing isn't so much on the page as in the imagination of the observer, while with painting the output is primarily there and picked up differently in secondary ways).

The issue of visual end-result vs mental input and process is also examined in a nice story by Balzac, The Unknown Masterpiece:
Spoiler:
in the end the artist ruins his painting by trying to present very particular mental processes on it, which have no tie to the allure of the finished painting.


Computers will certainly become great painters, but not even decent authors (except perhaps for flash fiction).
 
Someone can tell me. I look at the thing that says who made it and it says “Gary” and not “system version 7.2”
 
Someone can tell me. I look at the thing that says who made it and it says “Gary” and not “system version 7.2”

System version 7.2 is Gary's stage name.

Btw, I am currently trying DALL-E mini, and the results are rather bad. I guess that's part of the reason it is free.

edit: but considerably better if you define a style in the prompt (such as use of pencil)
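A rough illustration of what "defining a style in the prompt" looks like; the subject and prompt strings below are made up, and there's no API involved, just text pasted into the demo:

```python
# Hypothetical prompt variations for the same subject; only the style cue changes.
prompts = [
    "a portrait of an old fisherman",                  # plain prompt
    "a portrait of an old fisherman, pencil drawing",  # with a style cue, as above
    "a portrait of an old fisherman, oil painting",    # another style cue
]
for prompt in prompts:
    print(prompt)  # paste each into the demo and compare the outputs
```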
 
Computers will certainly become great painters, but not even decent authors (except perhaps for flash fiction).

i wonder about that. i see no inherent reason writing is safe from ai overtake wrt quality. some sports blurbs and similar are already auto-generated, and do the job they need to do. that job is a lot easier than writing a 300 page novel, but i'm not convinced they are fundamentally different jobs beyond the scale. many stories have quite similar structures, to the point where we can predict most of the plot, with a twist or two we don't anticipate. is that out of bounds for ai, long-term? i doubt it.
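for context, a lot of that blurb generation can be plain template filling rather than anything learned. a minimal sketch of the idea, with made-up teams and scores:

```python
# Minimal template-filling sketch of an auto-generated sports blurb.
# Team names and scores are made up; real systems are fancier, but the idea is the same.
game = {"home": "Falcons", "away": "Ravens", "home_score": 24, "away_score": 17}

if game["home_score"] > game["away_score"]:
    winner, loser = "home", "away"
else:
    winner, loser = "away", "home"   # ties not handled in this sketch
margin = abs(game["home_score"] - game["away_score"])

blurb = (f"The {game[winner]} beat the {game[loser]} "
         f"{game[winner + '_score']}-{game[loser + '_score']}, "
         f"a {margin}-point win.")
print(blurb)  # "The Falcons beat the Ravens 24-17, a 7-point win."
```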
 