The AI Thread

It's most likely GAN technology, a neural network generating pictures from random noise.
Better known from the face generator:
https://thispersondoesnotexist.com/ (though everybody has probably seen this one)

But on the "this person" site it seems that it is just presenting existing images, replacing one with the next each time the page refreshes.
I was wondering if the "generating" part is actually creative (forms something as a non-evident variation of the previous object), or comes down to picking one of a set of already-made images.
 
Nice. Can you explain what is happening? (Is it really morphed from "one" image, or is this more trivial?)

By the way, on the third attempt I got this one, which has shades of Paul Klee ^_^

[attached image]
As Red Elk said, it will be a Generative Adversarial Network. You take a training set, in this case some art, and train two neural networks. One tries to make an "art"; you take its output and feed it to the other, randomly interspersed with the training set of real art, and that one tries to guess which is real art and which is generated. You penalise the first when the second guesses right, and penalise the second when it guesses wrong. After burning loads of electricity, the first can generate these endlessly, and no one really knows quite how. It is the same way you make deepfakes.
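In code, the two-network tug-of-war looks roughly like this. This is a minimal PyTorch sketch, not the actual code behind the site (which reportedly uses the much larger StyleGAN); data_loader is a placeholder for the training set.

```python
# Minimal GAN training sketch (PyTorch) -- illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

for real in data_loader:  # assumed: batches of training images, flattened to 784
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    fake = G(torch.randn(n, 64))  # random noise in, picture out

    # Penalise the discriminator when it guesses wrong.
    d_loss = loss(D(real), ones) + loss(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Penalise the generator when the discriminator guesses right.
    g_loss = loss(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```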
 
But on the "this person" site it seems that it is just presenting existing images, replacing one with the next each time the page refreshes.
I was wondering if the "generating" part is actually creative (forms something as a non-evident variation of the previous object), or comes down to picking one of a set of already-made images.
They are drawn by the network from scratch, literally.
These people do not exist; if you refresh the page a few times, you'll notice the network makes mistakes occasionally.
I just saw a woman's face with glasses "dissolving" on the left part of the picture.
 
They are drawn by the network from scratch, literally.
These people do not exist; if you refresh the page a few times, you'll notice the network makes mistakes occasionally.
I just saw a woman's face with glasses "dissolving" on the left part of the picture.

I noticed some mistakes too. Very interesting. So, is there any access to the actual calculation the program makes (or at least part of it)? Clearly it wouldn't identify the dynamics of the face in a way similar to a human's conscious process, but maybe not even in a way similar to the unconscious one.
 
By the way, @red_elk, here is a nice one I got:

[attached image]


The individuality of the second face isn't a factor. Maybe a human face, when there are two, is just stuff around stuff, and this looks fine. The distortion is very clear every time there is a secondary face in the image.
 
I wonder if the hand morphing into the second face has to do primarily with spatial issues, or just ends up covering up the issue in this way. I don't recall seeing any image where there is only one face and a hand is in front of it.
 
So, is there any access to the actual calculation the program makes (or at least part of it)?
Not sure about this one in particular, but generally all the code and training data are open-source, so in theory anyone can use them to train their own network.
There is no apparent "logic" in how it works, though; if you look at the network's internal data, it's just numbers.

Some people make funny stuff with it, google "deep dream".
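To see what "just numbers" means, you can open any trained network and print what's inside; there is no rule or statement anywhere, only arrays of weights. A small sketch, assuming PyTorch/torchvision:

```python
# Peek inside a trained network: its "knowledge" is only tensors of floats.
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
    print(param.flatten()[:5])  # a few of the ~11 million raw weights
```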

The individuality of the second face isn't a factor. Maybe a human face is just stuff around stuff and this looks fine.
It's just a messed-up background. Since in the original dataset the backgrounds were essentially random, it's very difficult for the GAN to learn to generate backgrounds that look plausible.
 
I wonder if the hand morphing into the second face has to do primarily with spatial issues, or just ends up covering up the issue in this way. I don't recall seeing any image where there is only one face and a hand is in front of it.
There will be some small corner of the probability space that represents the images in the training set which had either two faces next to each other or one person holding their hand to their face. This is one of the examples from that corner.
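One way to picture that corner: every generated image comes from a latent vector z drawn from a Gaussian, and rare training-set configurations occupy low-probability regions of that space. A hypothetical sketch; the generator G here is assumed (StyleGAN-style, mapping (n, 512) noise vectors to images):

```python
# Hypothetical sketch: each image lives at a point z in latent space; rare
# configurations (two faces, a hand over a face) sit in low-probability corners.
import torch

def sample(G, n=4):
    z = torch.randn(n, 512)  # typical draws land in "typical" face territory
    return G(z)              # G is an assumed pretrained generator

def interpolate(G, z0, z1, steps=8):
    # Walking between two latents morphs one face smoothly into another,
    # sometimes passing through exactly this hand-becomes-face territory.
    alphas = torch.linspace(0, 1, steps).view(-1, 1)
    return G((1 - alphas) * z0 + alphas * z1)
```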
 
Not sure about this one in particular, but generally all the code and training data are open-source, so in theory anyone can use them to train their own network.
There is no apparent "logic" in how it works, though; if you look at the network's internal data, it's just numbers.

Some people make funny stuff with it, google "deep dream".


It's just a messed-up background. Since in the original dataset the backgrounds were essentially random, it's very difficult for the GAN to learn to generate backgrounds that look plausible.

But why can't you include some code in the program which will force it to produce some meaningful output of the "logic" it used? It could be an output of the same form as the image (coexisting with it; some secondary image or mark, etc.). There are many ways to decipher even such an output, particularly if it is forced to use a limited size and alter it progressively; then you could track which output corresponds to which part of the process. I am just so far on the outside that I cannot see why no method of any kind would work to reveal the "logic" of the machine.
 
But why can't you include some code in the program which will force it to produce some meaningful output of the "logic" it used? It could be an output of the same form as the image (coexisting with it; some secondary image or mark, etc.). There are many ways to decipher even such an output, particularly if it is forced to use a limited size and alter it progressively; then you could track which output corresponds to which part of the process. I am just so far on the outside that I cannot see why no method of any kind would work to reveal the "logic" of the machine.
If you could write code that produces meaningful output, you would just do that; that is what computer programming is. This is about the computer producing code/algorithms that are useful without humans. Humans are expensive, computers are cheap.

The thing is, the "logic" of the machine is very complicated. We have the sequence of the human genome, but the whole of the medical world has not worked out how that translates into the real world of medicine. Going from the millions of weights in a neural network to some sort of understanding of the "logic" it uses would be more like that than like the analysis of a mathematical proof.

There is work on making this more explainable, but it comes with an efficiency cost.
 
But why can't you include some code in the program which will force it to produce some meaningful output of the "logic" it used? It could be an output of the same form as the image (coexisting with it; some secondary image or mark, etc.). There are many ways to decipher even such an output, particularly if it is forced to use a limited size and alter it progressively; then you could track which output corresponds to which part of the process. I am just so far on the outside that I cannot see why no method of any kind would work to reveal the "logic" of the machine.
It can be done to a limited degree. For example, in the simple case of image classification, you can look at the signals from particular neurons and see which part of the image causes them to fire.
Like (very much oversimplifying), an image patch with a striped pattern increases the chance that the image will be recognized as a tiger.

But I don't know how to make a network fully explain its reasoning when it recognizes an image. When I recognize it myself, I doubt I can fully explain my own logic either.
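One common trick for the "which part of the image causes neurons to fire" idea is gradient-based saliency: backpropagate the winning class score down to the input pixels. A minimal sketch, assuming PyTorch/torchvision and an illustrative tiger.jpg:

```python
# Minimal gradient saliency: which pixels most influenced the prediction?
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

x = prep(Image.open("tiger.jpg")).unsqueeze(0).requires_grad_(True)
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top class score

# Large gradient magnitude = that pixel mattered; stripes should light up.
saliency = x.grad.abs().max(dim=1).values.squeeze()
```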
 
But I don't know how to make a network fully explain its reasoning when it recognizes an image. When I recognize it myself, I doubt I can fully explain my own logic either.

But the machine won't be recognizing it in your way (your way doesn't have to be the same as any other person's either, given that you don't have all of your mental 'world' in your consciousness, nor is it identical to the next person's), so there is a chance it can meaningfully produce something to guide you. Furthermore, if we assumed the machine went into the same kind of complexity, it would be all the more impressive that it produces stuff in finite time, no?

PS: there is also the issue of gaps in the logic. Would a machine have any meaningful meta-logical level? I can't see how. Not that a human doesn't have their own gaps, but they won't be factored in the same way as the computer's, since we almost entirely function (unconsciously) in meta-logic, while a machine functions within the given logic. But that is very vague for me; I'd rather have a response to the first point above :)
 
But the machine won't be recognizing it in your way (your way doesn't have to be the same as any other person's either, given that you don't have all of your mental 'world' in your consciousness, nor is it identical to the next person's), so there is a chance it can meaningfully produce something to guide you.
In general, a machine can see non-obvious patterns in data, and in some cases these patterns can be extracted. For example, it can notice a correlation between some variables which a human might miss.
But it's tricky with images and other complex data. If a machine learns to recognize images in its own way, it's not easy to formalize its recognition algorithm in terms of words or other images.

If the task is to recognize whether there is a cat or a dog in a picture, you would probably use slightly different logic than I would. But if we were to figure out the differences between our reasonings, our algorithms, how would we do it? I have no idea.
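The "correlation between variables" case is the easy one to extract, because the learned pattern fits in a single number. A NumPy sketch with made-up data:

```python
# Sketch with synthetic data: a pattern the machine finds here is one a
# human can read off directly -- a hidden linear relation between x and y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)                       # some measured variable
y = 0.8 * x + rng.normal(scale=0.6, size=1000)  # secretly depends on x

print(np.corrcoef(x, y)[0, 1])  # ~0.8: the extracted "rule" in one number
```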
 
In general, a machine can see non-obvious patterns in data, and in some cases these patterns can be extracted. For example, it can notice a correlation between some variables which a human might miss.
But it's tricky with images and other complex data. If a machine learns to recognize images in its own way, it's not easy to formalize its recognition algorithm in terms of words or other images.

If the task is to recognize whether there is a cat or a dog in a picture, you would probably use slightly different logic than I would. But if we were to figure out the differences between our reasonings, our algorithms, how would we do it? I have no idea.

The problem is that my question is about something further down the road than what I can see. The question is whether a machine (any machine) can meaningfully have a differentiation of "level" in how it arrives at a connection, analogous to the differentiation of level that exists in humans: if you see an image of a dog, you are not conscious of anything other than the immediate edge of the calculations of form, but those are of a different level from the lower-level neuronal "calculations" which allow you to do so. In the machine there isn't any apparent higher level, which likely HAS TO be cut off from the lower level. In humans this has to be so, otherwise you'd be forced to calculate billions upon billions of things consciously all the time, leading to death a while later; the machine isn't similarly split into an ego-center level and a low level, with the former at times able to run processes similar to the latter's. The machine would seem to be all one level (no reflection on the lower level; otherwise expressed, the higher level is continuous with the lower level).
At least that is my impression. If so, I can't see why the machine wouldn't be able to produce stuff from any intermediate or pseudo-level and present it along with everything else; a human would have to lose awareness of self to do so, but the machine already has no self.
 
The problem is that my question is about something further down the road than what I can see. The question is whether a machine (any machine) can meaningfully have a differentiation of "level" in how it arrives at a connection, analogous to the differentiation of level that exists in humans: if you see an image of a dog, you are not conscious of anything other than the immediate edge of the calculations of form, but those are of a different level from the lower-level neuronal "calculations" which allow you to do so. In the machine there isn't any apparent higher level, which likely HAS TO be cut off from the lower level. In humans this has to be so, otherwise you'd be forced to calculate billions upon billions of things consciously all the time, leading to death a while later; the machine isn't similarly split into an ego-center level and a low level, with the former at times able to run processes similar to the latter's. The machine would seem to be all one level (no reflection on the lower level; otherwise expressed, the higher level is continuous with the lower level).
Most neural networks have a hierarchical structure too; for example, they can learn image features ranging from simple edges at the bottom layers to more complex patterns at the top ones:
[image: feature visualizations from successive CNN layers, from simple edges to complex patterns]

https://pallawi-ds.medium.com/ai-st...in-keras-from-scratch-to-perform-a059eaa6d4ff

AFAIK some neurons in the human visual cortex work similarly; they learn to detect edges as well.
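You can look at those bottom-layer detectors directly: the first convolutional layer's filters in a trained CNN usually resemble little oriented-edge and colour-blob detectors. A sketch, assuming torchvision and matplotlib:

```python
# Visualize the first-layer filters of a pretrained CNN; most resemble
# oriented edge or colour-blob detectors.
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach()  # shape: (64, 3, 7, 7)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    f = (f - f.min()) / (f.max() - f.min())  # rescale to [0, 1] for display
    ax.imshow(f.permute(1, 2, 0))            # CHW -> HWC
    ax.axis("off")
plt.show()
```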
 
AFAIK some neurons in the human visual cortex work similarly; they learn to detect edges as well.
I think the edge detection comes in the retina, and it is the mid- and high-level features that are modelled in the visual cortex. Much of this stuff comes from the analysis of human vision.
 