The AI Thread

But analog = continuous, same as classical physics, where space, time and energy are continuous.
Meanwhile digital = discontinuous, same as quantum physics, where space, time and energy are quantized, hence discontinuous.
But a digital state is an either/or state. Isn't quantum usually a both/and?
Likewise, there are many both/and cases in a continuum; e.g. every (non-degenerate) continuous subset of a continuum has the same cardinality as the whole set.
And while at an observation there is a specific state of the particle, the same is true for an observation of any real number (granted, in this parallel the irrational numbers are pinned down because a "state" is already presentable after the first few digits, e.g. pi is almost equal to 22/7 and that won't change regardless of how many digits you compute; unlike with different observations of a particle).
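For the cardinality claim there is a standard one-line witness; a sketch (the particular bijection is my choice, any strictly monotone surjection works):

```latex
% A bijection from the open interval (0,1) onto all of R, showing that
% even a tiny piece of the continuum has the full cardinality of R:
\[
  f\colon (0,1) \to \mathbb{R}, \qquad f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right)
\]
% f is continuous, strictly increasing and onto, hence a bijection,
% so |(0,1)| = |R|; translating and scaling extends this to any interval.
```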
 
There is a small bit in the Feynman book I am reading on how a similar organism (some type of fungus) reacts, and whether or not it is as automatic as implied. Feynman (I can't establish if he is right, of course) noted that they don't seem to react the same way every time, e.g. they don't change direction at the same angle/speed when met with an obstacle. Wouldn't something like a thermostat just react in one way or (at best) a finite number of ways that are already either programmed or easy to define?

I could program a thermostat which makes a random walk towards the set target. I could even use a quantum random number generator for this if we need a quantum effect involved. Would such a thermostat be sentient?

I would even argue that any thermostat is going to have an analog part, which is subject to (shot) noise. So in principle any thermostat actually has an effectively infinite number of responses to the environment.
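A minimal sketch of that random-walk thermostat, with all names and parameters invented for illustration (a quantum RNG could be dropped in where random.uniform is used):

```python
import random

def noisy_reading(true_temp, noise_sd=0.05):
    # Any real sensor has an analog front end subject to (shot) noise,
    # so the controller effectively sees a continuum of inputs.
    return true_temp + random.gauss(0, noise_sd)

def random_walk_step(temp, target, max_step=0.5):
    # Take a randomly sized step biased toward the target.
    # random.uniform could be swapped for a quantum RNG if we want a
    # genuinely quantum source of randomness.
    step = random.uniform(0.0, max_step)
    return temp + step if noisy_reading(temp) < target else temp - step

temp = 18.0
for _ in range(1000):
    temp = random_walk_step(temp, target=21.0)
print(f"settled near: {temp:.2f}")   # hovers around 21, never exactly on it
```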
 
Let's assume it has an infinite number of responses to the environment. How would that by itself turn into sentience?
Complexity shouldn't be deemed the primary factor of sentience. I don't doubt that some lower bound of it is required, but raising that bound (even massively) won't change an object from non-sentient to sentient.
 

I agree that complexity is not the primary factor. But then, what is? Just the ability to respond to the environment is not enough either, because then a thermostat would be sentient (or maybe it is?)
 
Looking from the (clear) outside, anything sentient to any degree would need not just to be organized in levels (so that its sentience is not on the same level as its lower inner calculations) but also to differ qualitatively at the sentient level from those other levels. E.g., in humans, while you can be conscious of effects of some lower-level calculations going on (such as in your metabolism), you are never going to experience the metabolic calculations themselves; equivalently, those lower-level calculations are not tied to any observer.
Though, in theory, you can still become conscious of some artifacts of those lower-level calculations (this generally seems to be the case with hypochondriacs, which is why it's such a bad idea to try to affect lower-level calculations).

Not that the above is anything but hopelessly general. Not sure if I can think of a slightly better analogy. Maybe in a formal logic system you can think of the system itself somehow being aware of the qualitative difference between the external part of the system (the formation of theorems) and any deeper level (not that it goes that deep, due to inherent limitations; I am only thinking of the Gödelization of the system). While a human observer can notice the difference between the two, the system itself only operates as a block, without awareness of anything.
Now, maybe if the computer isn't digital, and thus incorporates more than just two states of electricity (zero or near zero, and everything else), it might build up into something that has qualitatively different levels too. Not that this is straightforward either, but at least there you are dealing with a phenomenon (electricity) and not just a binary state.

The biggest issue is that you can't have a qualitative difference arise from zero. One has to assume that even in humans you don't actually get any distinct point at which consciousness forms; it seems to be a process from the earliest level that just progresses into a stage nearer to what we identify as consciousness.
 
Prove it.
Spoiler :
I do not think it is sentient, but I also do not think we have the tools to prove it one way or another.
You have to prove a positive; otherwise you could say I have to prove a rock isn't sentient.
 
My position is that we should work toward a test in parallel with developing the AI. The people best positioned to do that work are the capitalists who own the current crop of tools, but we cannot trust them, so we should be investing tax monies in the work. Perhaps including some compulsory disclosure so the researchers have access to the latest developments.
 
AI can read human minds (sort of, not really)

Paper Writeup Webpage

Yu Takagi could not believe his eyes. Sitting alone at his desk on a Saturday afternoon in September, he watched in awe as artificial intelligence decoded a subject’s brain activity to create images of what he was seeing on a screen.

“I still remember when I saw the first [AI-generated] images,” Takagi, a 34-year-old neuroscientist and assistant professor at Osaka University, told Al Jazeera.

“I went into the bathroom and looked at myself in the mirror and saw my face, and thought, ‘Okay, that’s normal. Maybe I’m not going crazy.’”

Takagi and his team used Stable Diffusion (SD), a deep learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to “translate” brain activity into a readable format, Stable Diffusion was able to generate high-fidelity images that bore an uncanny resemblance to the originals.

The AI could do this despite not being shown the pictures in advance or trained in any way to manufacture the results.

Takagi stressed that the breakthrough does not, at this point, represent mind-reading – the AI can only produce images a person has viewed.

“This is not mind-reading,” Takagi said. “Unfortunately there are many misunderstandings with our research.”

“We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future.”

Despite his excitement, Takagi acknowledges that fears around mind-reading technology are not without merit, given the possibility of misuse by those with malicious intent or without consent.

“For us, privacy issues are the most important thing. If a government or institution can read people’s minds, it’s a very sensitive issue,” Takagi said. “There needs to be high-level discussions to make sure this can’t happen.”

Spoiler Abstract :
Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs, while preserving their high generative performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent vector Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can reconstruct high-resolution images with high fidelity in straightforward fashion, without the need for any additional training and fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM components from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from human brain activity, and provides a new framework for understanding DMs.
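Out of curiosity, here is a minimal sketch of the pipeline shape the abstract describes (fMRI activity mapped into the latent vector Z and conditioning C of Stable Diffusion). All shapes, names, and the use of ridge regression are my assumptions for illustration, not the authors' code:

```python
# Toy sketch: linear maps take fMRI activity to Stable Diffusion's
# latent vector z and conditioning c; a pretrained diffusion decoder
# would then turn those into images. Everything here is illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fmri = rng.normal(size=(200, 1000))       # hypothetical voxel features per scan
z_latent = rng.normal(size=(200, 4096))   # hypothetical image-latent targets
c_embed = rng.normal(size=(200, 768))     # hypothetical conditioning targets

to_z = Ridge(alpha=1.0).fit(fmri[:150], z_latent[:150])   # fit on "training" scans
to_c = Ridge(alpha=1.0).fit(fmri[:150], c_embed[:150])

z_hat = to_z.predict(fmri[150:])   # decoded latents for held-out scans
c_hat = to_c.predict(fmri[150:])
# A pretrained diffusion decoder would map (z_hat, c_hat) to images (not shown).
```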
 
^Moving closer to when the greatest visual artists (the visual ones first) will be those who can simply imagine something of importance. The machine will then save/print it ^^
 
Tomorrow's artist's studio

[Image: an fMRI scanner]
 
It will have to become smaller, though :D
Or... larger? Styled as a room, to feel natural.
They really do not feel very natural. You have to be inside a huge superconducting electromagnet while it flips the spin of your hydrogen nuclei. They give you headphones with music, but the machine is so loud it drowns it out.
 
Translation: Elon and Apple are way behind and want OpenAI and Microsoft to stop for six months so they have time to develop their own language models.

The only philosophically and juridically grounded answer possible to the letter: cry me a damn bloody river.

Musk is now starting up his own company called X.AI

:thumbsup:

 


AI is getting eyes. It looks like the path to AGI is a central AI using other AIs to gather and process the info needed to achieve its goals: a network of interlaced AIs.
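Something of this shape, maybe; a toy sketch where the central AI routes subtasks to specialized models (every name and function here is hypothetical):

```python
# Hypothetical sketch of a "central AI delegating to other AIs".
# Real specialists would be models (vision, audio, code), not lambdas.
from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "vision": lambda task: f"[vision model output for: {task}]",
    "audio":  lambda task: f"[audio model output for: {task}]",
    "code":   lambda task: f"[code model output for: {task}]",
}

def central_ai(goal: str) -> str:
    # A real orchestrator would let the central model plan the routing;
    # the plan here is hard-coded to keep the sketch self-contained.
    plan = [("vision", f"describe the images relevant to: {goal}"),
            ("code",   f"write a script that helps with: {goal}")]
    results = [SPECIALISTS[kind](task) for kind, task in plan]
    return " | ".join(results)

print(central_ai("identify the bird in this photo"))
```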
 
You are aware that just recently many research papers got invalidated because the MRI systems they used to draw results from were flawed?

Specialized systems (please, can we stop using the term "artificial intelligence"?) trained with crappy data produce crap. And the world is drowning in faulty data. The limit to it seems to be storage, and that unfortunately has gotten very cheap. No one is bothering to curate and cull the crap, only piling up more.

The near future seems to be one of models taking faulty data and producing more faulty data that will be used as input for other models... so long as they produce something that is expected to be profitably exploited, it will be done. Because "AI".
Did Swift write some satire about a society of pretenders where everyone finally starved to death because no one did anything useful?
 
When I give a full-on metaphor to ChatGPT, asking it to code an idea that has never been done before, and that helps it perform better than my previous explicit attempt, I'm going to call it artificial intelligence.
 
So? Human intelligence is continuously fed faulty data too. We have you as a paramount example of that fact.
 
I did not. I will point out that this is producing pictures eerily similar to the ones being viewed. As long as he is not totally making it up, it seems there is something going on beyond faulty data.
 
Come on, don't be like that ^^ We are trying to have a nice discussion.
Moreover, I personally don't see what the (certainly awesome and soon to be extremely useful) identification methods of the current machines have to do with intelligence. Those are all (textual, visual etc.) translations of values in the images to meaningful subsets, as has happened from the start with digital systems, in reverse. You don't even need current developments to see the metaphor: when a Civ "AI" does (say) unit movement, obviously it doesn't identify what a "unit" is, where it is, or what it is trying to achieve and how; all those are numbers, and the action is dictated by code. I don't agree we should imagine that the machine somehow feels something or attributes meaning.
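To illustrate (a made-up toy, not Civ's actual code): the "unit movement AI" below is nothing but arithmetic on coordinates; nowhere does it contain, or need, a concept of what a unit is:

```python
# A "unit" is just a coordinate pair; "moving toward a city" is
# nothing but integer arithmetic chosen by a rule. Nothing here
# identifies what a unit *is* or attributes any meaning to the move.
unit = (3, 7)    # (x, y) on the map grid
city = (10, 4)

def step_toward(pos, goal):
    x, y = pos
    gx, gy = goal
    x += (gx > x) - (gx < x)   # move one tile along each axis
    y += (gy > y) - (gy < y)
    return (x, y)

while unit != city:
    unit = step_toward(unit, city)
print(unit)   # (10, 4): the "unit" has "reached" the "city"
```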

By the way, I am sure we have all felt the suspension-of-disbelief state in games, such as with an enemy that nears for the kill and makes us react. The game is in one's head, though. Similar to how, if you like anime, you no longer consciously think that it is not living beings you are seeing but idols/drawings you have learned to accept as such so as to enjoy the story.

ChatGPT is already amazing, in that (despite the many and frequent errors; it doesn't mind faking it either!!) it can pass as a human thinker. But it isn't that at all, nor intelligent in any way.
 
Classic AI like the one Civ games use and modern deep-learning AI are very different beasts. While the former is based on a set of rules written by human programmers, the latter is based on having a humongous quantity of data, so learning algorithms can extract trends from it and produce results. I see it as comparing classical deterministic physics vs quantum physics, which is based on probabilities and statistics and tends to give a useful result only with big enough populations.
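A toy contrast of the two styles (purely illustrative; a logistic model stands in for deep learning): the first decision comes from a rule a programmer wrote, the second from parameters fitted to data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classic "AI": the behavior IS the rule a programmer wrote.
def rule_based_attack(my_strength, enemy_strength):
    return my_strength > 1.2 * enemy_strength

# Learning-style "AI": the behavior is whatever trend the data contains.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 2))         # (my, enemy) strengths
y = (X[:, 0] > 1.2 * X[:, 1]).astype(int)     # labels from "past games"
learned = LogisticRegression().fit(X, y)

print(rule_based_attack(6, 4))                # True, by explicit rule
print(learned.predict([[6, 4]])[0])           # 1, by extracted trend
```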

Anyway, I think it is not of much use to discuss the metaphysical aspects, whether AI will ever reach self-awareness or feelings, or become 'really' intelligent, as it is already difficult for us to define those concepts since we live 'inside' them, so we can't distance ourselves to look at the issue with some perspective (maybe a machine would be better than us at that? :D )

All I can say is this thing is going very fast, and the results are continuously improving, to the point a general AI may be closer than we think. And by general AI I don't necessarily mean a self-aware being and such, but simply an intelligence capable of taking info from very different sources, combining it all and reacting to it. So, text, which has been the first step, now images, soon video, then sound, smell, touch... all the ways we humans have to perceive our environment. Such an AI will probably be totally indistinguishable from us and produce results similar to any human's. Would it then matter whether it is 'intelligent' the same way humans are? I mean, 'you will know them by their fruits'.

The other day, precisely, I was playing with AutoGPT, an automation of ChatGPT which you can ask anything (for instance "make me rich") and it will start looking for ways to achieve it, even creating other AIs with specific missions such as searching the internet, writing code, etc. So I asked it to write the best prompt to create the most beautiful image in the universe. Then the AI said to itself (you can see what it's thinking):
-I will search the internet for images considered especially beautiful, to learn how to make the most beautiful one.
Then it stopped for a while (probably looking for images on the internet) and after that it came back with:
-Wait, I am an AI that can only read text, I can't see images! I will have to find other ways.
Then it did some incomprehensible things and entered a loop.
I didn't get any incredibly beautiful image at all, but seeing the AI reasoning and 'missing' having a pair of eyes was even more interesting, and a lot funnier. :lol:
 