The AI Thread

The open source dynamic will proceed in parallel, I can agree with this. My view is that centralised AI development will be more efficient at attracting the best specialists, by way of remuneration. Therefore, the centralised process will remain dominant in terms of market share. The same dynamic can be observed in other forms of software development: there is a thriving game mod community, but it's the centralised dev studios that rake in most of the consumer cash. Corporations can use patents, censorship, and legislation to protect their interests and the various pathways leading to them. The open source community cannot, by definition.

And yeah, you're absolutely right about a big push in open source. I am not big on the subject myself, but specialist programmers I watched on YouTube mention that many open source alternatives are nearly as good as GPT-4.
The game industry is dominated by the big names for a reason. The server market is dominated by open source, and I bet most AI training is done on top of the Linux kernel at least. Open source is also dominant in the mobile market, to an extent.

The open source community totally can use copyright to protect itself; that is what the GPL does. All the big models are released under the Apache licence at the moment, which does not protect them that way, but that could change. I think the world should have learned from the macOS thing about the danger of that.
 
Who can afford to buy up most of the available compute time? (rhetorical)
One corporation, or a million people with graphics cards?
 
Datacenters do more than millions of people with GPUs, at a fraction of the cost.
Do they? If the computers are on anyway, the additional cost of running processing in the background is hard to pin down. I know computers have got a lot better at power management, but how does that compare to running stuff on AWS?
 

Datacenter processing units are denser and more energy-efficient, and cheaper to run than a network of individual GPUs. That's not even the point, though. The point is: who can afford to run their constantly growing models continuously in a high-density processing environment? Also, corporations happen to hold all the keys to that environment. Somehow I feel we're back in the crypto thread :)
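To put rough numbers on the density point, here is a minimal back-of-envelope sketch in Python. Every figure in it (throughput, power draw, electricity prices) is an illustrative assumption chosen for the comparison, not a measured spec:

```python
# Back-of-envelope comparison: electricity cost of one exaFLOP of compute
# in a datacenter versus on a consumer GPU. All numbers below are
# illustrative assumptions, not measured figures.

# Assumed sustained throughput per card (FP16 TFLOP/s).
DATACENTER_TFLOPS = 300.0  # server-class accelerator
CONSUMER_TFLOPS = 40.0     # high-end gaming card

# Assumed whole-system power draw per card, in watts. The datacenter
# figure includes a 1.2x multiplier for cooling/facility overhead (PUE).
DATACENTER_WATTS = 700.0 * 1.2
CONSUMER_WATTS = 450.0

# Assumed electricity prices (USD per kWh).
DATACENTER_PRICE = 0.07  # bulk industrial rate
CONSUMER_PRICE = 0.25    # typical household rate

def usd_per_exaflop(tflops: float, watts: float, usd_per_kwh: float) -> float:
    """Electricity cost of performing 10**18 floating-point operations."""
    seconds = 1e18 / (tflops * 1e12)  # time needed for one exaFLOP
    kwh = watts * seconds / 3.6e6     # energy used in that time
    return kwh * usd_per_kwh

dc = usd_per_exaflop(DATACENTER_TFLOPS, DATACENTER_WATTS, DATACENTER_PRICE)
home = usd_per_exaflop(CONSUMER_TFLOPS, CONSUMER_WATTS, CONSUMER_PRICE)
print(f"datacenter: ${dc:.2f}/exaFLOP, home GPU: ${home:.2f}/exaFLOP")
```

Under those assumed numbers, the datacenter comes out roughly an order of magnitude cheaper per unit of compute. Note this only counts electricity; it ignores hardware cost and the "home machines are on anyway" marginal-cost argument from above.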
 
Loads of people can afford it, right? It seems the open source AI models are moving a lot quicker than the commercial ones.
 

I wonder if there are already graphical representations of the relative speed of movement of both dynamics.
 
Did you see this graphic, posted earlier, from this article?

https://substack-post-media.s3.amazonaws.com/public/images/241fe3ef-3919-4a63-9c68-9e2e77cc2fc0_1366x588.png
 

I think there is excessive alarmism about AI being a threat to humanity that may even lead to extinction, etc., etc. IMO humanity is in such deep trouble of its own making, trouble we don't seem capable of solving by ourselves, that it will go extinct within some decades, or centuries at most. We will therefore need any 'exterior' help we can get to survive, and since ETs don't seem very enthusiastic about it, I think AI may be our best chance.
The alarmism often comes from people who are not experts on the subject in the first place, but do get media coverage (a good example of that is Elon Musk).
Even the question of whether "actual AI" (also termed other things, like "strong AI", but ultimately meaning computers that have at least a semblance of sentience) is possible has in no way been settled.

An entirely distinct issue, however, is the danger of non-"strong AI" being presented as having strong-AI traits, to obfuscate corporate misuse of otherwise benevolent technology.

It is, of course, alluring to project sentience onto the impressive tech we are already seeing. Even the publicly available ChatGPT ("even", as in apparently it's not as impressive as other stuff ^^) would pass any Turing Test every time (well, unless you had suspicions and knew of some blind spots), and is remarkable, but there's no hint of it being sentient.
 
Even the question of whether "actual AI" (also termed other things, like "strong AI", but ultimately meaning computers that have at least a semblance of sentience) is possible has in no way been settled.
semblance: noun
the outward appearance or apparent form of something, especially when the reality is different.

This has definitely been settled. These LLMs do a very good job of giving the outward appearance or apparent form of sentience, when the reality is different. Where the line is between that and true sentience is certainly not settled, but it will probably be machines that teach us most about that in the next few years.
 
Hm, so up to now I was using "semblance" wrongly :D Useful TIL. (Unless the "especially" part is also an active ingredient in the definition.)
 

If AIs can reason and operate on large datasets while retaining memories, they are no different from us. Perhaps better than us at what they do. I guess we can wait until they implant an AI into a robot and give that robot the idea that its life is precious.
 
I am looking at it from the outside, to say the least, but how would anyone present a mathematical account of such "ideas" as qualitatively distinct from the non-idea, comparatively passive other parts of the processes running there?
Because in a biological being (humans are the easiest example), there is a self-sensed, massive difference between comparatively neutral information and processes on the one hand and core ideas on the other (it doesn't matter that which is which differs from person to person).

From a philosophical standpoint, I don't see at all how you can expect the emergence of such distinctions in something that does not already possess a difference between how it senses things and how those things "are"; and I can't think of any mathematical model where such a difference could be constructed, since math operates on the non-ambiguity of how something is defined (certainly at the lower level, when that something is utilised in digital machines).

I take Samson's idea about the tech itself teaching us things as primarily operating on the level of the tech (non-intelligently) picking up on how software or hardware arrangements help or limit its computational power, not as anything sentient.
 
We know it has happened once.
 
But not how.
We may not know how it happens in machines either, but that may not stop it happening. My suspicion is that it is some emergent property of sufficiently complex systems.
 
It will be very surprising if it is based on degree of complexity, though, since even very basic organisms are at least aware of their environment (I only mean that they automatically form a symbol of it, however limited or personal) without being self-aware or anything higher.
 