Warfare

And, uh, how is that fundamentally different from "true" intelligence?
In absolutely every sense of the word. Generative AI does not think. It does not have the capacity to understand what it is working with. All it does is basic pattern recognition.

It's basically just a more advanced version of the OCR software your printer uses to recognize written text and turn it into a Word document. No matter how well or badly it does that job, the fundamental fact is that it does not understand the actual text, or even the concept of text itself, only that certain characters match up to certain shapes.
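To make the analogy concrete, here is a minimal sketch of template-style character matching in Python. The 5x5 bitmaps and the two-character "font" are invented for illustration; real OCR engines use learned shape models, but the shape-to-character lookup is the core idea:

```python
import numpy as np

# Hypothetical 5x5 bitmaps standing in for a stored "font" of shapes.
TEMPLATES = {
    "I": np.array([[0, 0, 1, 0, 0]] * 5),
    "T": np.array([[1, 1, 1, 1, 1]] + [[0, 0, 1, 0, 0]] * 4),
}

def recognize(glyph: np.ndarray) -> str:
    """Return the character whose template overlaps the glyph best."""
    scores = {ch: int((tpl == glyph).sum()) for ch, tpl in TEMPLATES.items()}
    return max(scores, key=scores.get)

# A slightly noisy "T" is still closest to the "T" template.
noisy_t = np.array([[1, 1, 1, 1, 0]] + [[0, 0, 1, 0, 0]] * 4)
print(recognize(noisy_t))  # -> "T", with no notion of what a T means
```

The matcher never models what text is; it only scores shape overlap, which is exactly the limitation described above.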

And while this is incredibly useful as a tool for certain things such as OCR or data analysis, it is not equivalent to actually having a thinking mind that can make intelligent decisions.


And this is my professional opinion as an actual software engineer with a proper computer science degree. And that's computer science, not just programming.
 
While you're being hit by such an intellectual rocket, don't forget to say, "you are not really true AGI."
 
In absolutely every sense of the word. Generative AI does not think. It does not have the capacity to understand what it is working with. All it does is basic pattern recognition. It's basically just a more advanced version of the OCR software your printer uses to recognize written text and turn it into a Word document. No matter how well or badly it does that job, the fundamental fact is that it does not understand the actual text, or even the concept of text itself, only that certain characters match up to certain shapes.
And what makes you think that intelligence isn't, at the core, just pattern recognition? At what point exactly does an AI stop just aping intelligence and start to actually become intelligent?
And while this is incredibly useful as a tool for certain things such as OCR or data analysis, it is not equivalent to actually having a thinking mind that can make intelligent decisions.
For now, of course. I'm not saying AI is actually intelligent "now"; I'm saying that the border between "simulating" and "being" is much blurrier than we like to think, just as people a century ago (and some still to this day) wanted to believe that humans and animals were fundamentally different, rather than differing only in the degree of complexity of the brain.
 
And what makes you think that intelligence isn't, at the core, just pattern recognition? At what point exactly does an AI stop just aping intelligence and start to actually become intelligent?
Pretty sure that's the million dollar question. It's why we still have the Turing test.

Generative AI is a product. It's something to be sold. Commercialised. It's not blurring any borders at all. It's just throwing a large amount of compute at a problem and making people pay for the data centre costs.

Humans are by definition animals. We're categorised as such. Generative AI isn't. If you're asking at what size of data centre a language model becomes sentient, or even sapient, that's a much more focused question. And there isn't an easy way around the thermodynamics (shrink a data centre to the size of a brain and you get molten slag, if it could even get that small in the first place).

"football pitch of server racks having the lateral compute of a single human brain" isn't the same as "being" human. It isn't even blurring any lines.
 
No worries, AI will teach us how to make pocket dimensional space so it can become infinitely small and cold.
 
Pretty sure that's the million dollar question. It's why we still have the Turing test.
Yeah, but the Turing test has already been passed; it was somehow shrugged off because it's "too focused on text chat", and more tests are being devised. So we're already veering into tailoring tests to make AI fail them while allowing humans to pass them.
Generative AI is a product. It's something to be sold. Commercialised. It's not blurring any borders at all. It's just throwing a large amount of compute at a problem and making people pay for the data centre costs.
But at the same time we can't really define intelligence, nor the actual border between pattern recognition and "true" intelligence, so who can say that throwing a large enough amount of data, time, and computing power at the problem can't lead to emergent AI as a byproduct?
 
From today's local Albuquerque paper.


Kirtland Air Force Base unveils WARS Lab

It’s AI and computers, yes, but humans remain in the loop
BY JOHN LEACOCK JOURNAL STAFF EDITOR

It was reveal day at Kirtland Air Force Base. The Air Force Research Laboratory unveiled its state-of-the-art WARS Lab on Monday. WARS is an acronym that describes the basic activity of the project: Wargaming and Advanced Research Simulation Laboratory.
About a hundred people, including scientists, engineers, base officials and representatives of the state’s congressional delegation, were on hand for a ribbon-cutting ceremony at the $7 million, nearly 11,000-square-foot facility, which will focus on wargaming and advanced technology concepts. The WARS Lab’s shiny new interior features a massive display wall and 130 workstations on its wargaming floor, plus flight simulators. For wargaming, the lab fuses advanced technologies across all domains of warfighting: land, air, sea and space.
But it’s not all artificial intelligence and computers. Rather than being replaced by artificial computer systems, humans remain a critical part of such efforts, with improved human-in-the-loop analysis promised, base officials said at the event.



Community members attend a ribbon-cutting ceremony for a WARS Lab at Kirtland Air Force Base. JON AUSTRIA/JOURNAL



Why wargaming?
In an uncertain world, there is an urgent need to rapidly integrate and field next-generation weapons, officials said. Wargaming introduces participants to emerging and rapidly changing battlefield technology. It’s also a way of identifying shortcomings in a simulated environment rather than in a real war, base officials said. The games ultimately provide planners and scientists with an in-depth understanding of how to prepare for a realistic battlefield confrontation. And the findings are shared. Officials said the goal of the facility is to create partnerships between AFRL and industry, academia and others, including other branches of the military and allies.

Ultimately, it’s all done, said Dr. Shery Welsh, director of AFRL’s Directed Energy Directorate, in the pursuit of “a safer, more secure world.” AFRL is the primary scientific research and development center for the Air Force. The WARS Lab is a joint effort between the AFRL’s Directed Energy and Space Vehicles directorates. “Within the realm of scientific research and development, the Air Force Research Laboratory stands out as the primary center for innovation in the Department of the Air Force,” Welsh said. “And we’re all very proud of that — supporting the Air Force and Space Force.”



Col. Jeremy Raley, director of the Air Force Research Laboratory’s Space Vehicles Directorate, speaks during a ribbon-cutting ceremony for a wargaming facility on Kirtland Air Force Base. JON AUSTRIA / JOURNAL
 
Yeah, but the Turing test has already been passed; it was somehow shrugged off because it's "too focused on text chat", and more tests are being devised. So we're already veering into tailoring tests to make AI fail them while allowing humans to pass them.
Are you referring to the Google thing? Google itself seems to disagree with the single engineer who made the claim.

Regardless, I can see your point, but I personally don't think a single thing (presumably designed to excel at the conditions presented by a Turing test; Google wouldn't go into this with the idea of failing) really sets the benchmark for sentience. Science is all about refining our understanding of stuff. For starters, stopping at a single instance of the Turing test being passed isn't statistical. Secondly, it doesn't seem to have been repeated. Where's the methodology? What are the costs? Surely these are things we should be interested in?
But at the same time we can't really define intelligence, nor the actual border between pattern recognition and "true" intelligence, so who can say that throwing a large enough amount of data, time, and computing power at the problem can't lead to emergent AI as a byproduct?
Nobody says it can't, as far as I'm aware. But you're talking about a kind of real-world sprawl that evokes imagery of the Machine City from The Matrix. I'm not knocking the concept (I love The Matrix; in my mind it's a classic movie and always will be), but if that's what's required for non-quantum* programming code to count as some kind of actual intelligence, generative AI absolutely, one hundred percent, is not that.

I'm much more at home comparing the actual programming to how the human brain works. In short, we work nothing alike. It's why humans consistently have an edge in, say, video game strategy, but a computer can crunch, in a heartbeat, numbers that give educated mathematicians a headache.

*Quantum computing works on inherently different principles from regular binary-based systems, and is basically a whole other realm that I'm not equipped to comment on.
 
You have to love that last paragraph, in which he aptly describes the US military machine of the last hundred years and warns about "authoritarians", and then lists two US military officers with a long background in the most reactionary schooling and training available on the planet. lol oh the irony...

The warning there is the giveaway not to take the article seriously. It's a plug to give more money to these AI scammers. They can't even design an "AI" that can safely drive a car through a city, remember.

It takes a pretty heavy-duty piece of hardware to protect electronics from being burned out by brute force. And that means a much larger, and more expensive, machine.

You have to encase it in lead, pretty much.

I'm much more at home comparing the actual programming to how the human brain works. In short, we work nothing alike. It's why humans consistently have an edge in, say, video game strategy, but a computer can crunch, in a heartbeat, numbers that give educated mathematicians a headache.

This. The human brain does not solve problems by brute-force calculation as machine-learning systems do.
 
I see we have two threads full of fantasists about warfare.

"AI" is, as it was before more than once, a conman's game to fleece "investors". It won't change warfare. Not have drones. If people had been paying attention to the ongoing real war, it's firepower that has a big impact, good old explosives and the heavy platforms to deliver them. The turkish wonder-drones that were allowed to finish off the armenian enclave in Azarbaijan got shot down in a matter of days. Between peer powers drones are no more a "game changer" than bombers were. Those when they were a new weapons were very useful for the british to massacre helpless natives in Iraq. But didn't chnge the war against Germany.
 
This. The human brain does not solve problems by brute-force calculation as machine-learning systems do.

OK, let's try to unwind this a little.

Brute-force methods are used in areas where the solution space is not prohibitively large, or for problems where no efficient algorithms are known, such as certain types of cryptographic attacks (e.g., trying all possible keys until finding the right one). It's a niche method, and while still used for some tasks today, its usefulness is rapidly fading in the face of newer, more advanced tools: pattern recognition and inference (deduction, induction, transduction).
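For the cryptographic example, here is a toy sketch of a brute-force key search; the XOR "cipher" and the three-letter key are invented for illustration, and real key spaces (e.g. 2^128) are precisely what make this approach infeasible:

```python
from itertools import product
import string

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; XOR is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
ciphertext = xor_crypt(plaintext, b"zqm")  # pretend the key is unknown

# Brute force: try all 26**3 = 17,576 lowercase keys until one works.
for cand in product(string.ascii_lowercase.encode(), repeat=3):
    key = bytes(cand)
    if xor_crypt(ciphertext, key) == plaintext:
        print("recovered key:", key)  # -> b'zqm'
        break
```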

Today, roughly 40% of AI compute is inference (aka logical reasoning).
Ten years ago, 10% of AI compute was inference.
This is how we know AI has "made it", btw.

The method: pattern recognition is a foundational skill that enables the identification of patterns, structures, and regularities in data, while inference (including induction, deduction, and transduction) covers the reasoning processes that build upon pattern recognition to form generalizations, predictions, or specific conclusions.
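As a minimal sketch of that two-step pipeline (the data points here are made up): first spot the regularity, then reason from it to an unseen case:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])  # noisy samples of roughly y = 2x

# Pattern recognition: identify the regularity (a linear trend) in the data.
slope, intercept = np.polyfit(x, y, deg=1)

# Inference (induction): generalize the pattern to an unobserved input.
print(f"predicted y(5) = {slope * 5.0 + intercept:.1f}")  # about 10
```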

Your brain is doing both pattern recognition and various types of inference every day of your life. With the growing use of inference, machines are becoming more like you with every passing day. They just think faster, way faster: a 4090 GPU can reason at 85 trillion floating-point operations per second (FLOPS), while we humans are left to operate within a much narrower band of throughput. The point is, we use the same logical instruments, AI and us.

AI compute is a brain in a box connected to an energy source. AI's methods are derived from the operational blueprint of the human brain. AI =/= human. But the human brain and AI are in fact becoming very similar, rapidly.

In this regard I can't really tell what the point is of bringing up "brute force calc" as some sort of delineation between the human brain and AI, especially in light of the fact that neither AI nor humans rely on brute force all that much.
 
And what makes you think that intelligence isn't, at the core, just pattern recognition? At what point exactly does an AI stop just aping intelligence and start to actually become intelligent?

Shhhh! He's got a job, man, and he has to protect it; you know, he's probably got a wife and kids.

Guild masters can't tell the truth or else money is lost. So let him lie and don't let the boss man know any better. Come on, don't be a snitch; you know snitches get stitches and don't get laid! 😂 Hahahaha!
 
Nor have drones.

I do disagree with this as written; while drones have by no means made tanks, heavy guns, missiles, etc. obsolete, the much greater surveillance of battlefields allowed by drones has changed things, making feints more difficult and allowing faster targeting of artillery. Cheap drones also significantly increase the capabilities available to groups like Hamas. However, these are incremental changes, not revolutionary ones.

OK, let's try to unwind this a little. [...] In this regard I can't really tell what the point is of bringing up "brute force calc" as some sort of delineation between the human brain and AI, especially in light of the fact that neither AI nor humans rely on brute force all that much.

This post is a bunch of technobabble that means basically nothing. I understand you are caught up in the hype around this technology (just like crypto, iirc? How's that revolution going?). But in fact machine learning and the human brain do not work similarly, and machine-learning systems can only "learn" things by processing massive amounts of data for correlations, which is not at all how the human brain works, even if the phrase "brute-force calculation" is also not quite accurate as a description of how ML works.

@Gorbles is correct that ML really represents a concentration of computing power on specific problems. And of course the way we are talking about ML, including me in this post, elides that all the metadata that allows AI to actually work is still produced by humans, usually humans who are massively underpaid for their time. I mean, arguably the real innovation that enabled that neural network to win ImageNet back in 2014 or whenever was the farming out, to underpaid foreign workers, of the essential labor of labelling all the images so that the network could correct its mistakes.

Anyway, I suggest reading some stuff written by Meredith Whitaker or someone else who actually knows what they're talking about, rather than hucksters.
 
But in fact machine learning and the human brain do not work similarly, and machine-learning systems can only "learn" things by processing massive amounts of data for correlations, which is not at all how the human brain works, even if the phrase "brute-force calculation" is also not quite accurate as a description of how ML works.

AI, and its subset ML, are solving the same kinds of problems the human brain solves, at a faster rate, using instruments of logic, boosting productivity across the spectrum of jobs. You can argue the fine points on which the brain and AI are dissimilar, and I will agree with you on some of them, but then I have a question: is it important for you to emphasise the dissimilarities and neglect the similarities when clearly there is a mix of both?
 
ML is not really a subset of AI: if "AI" actually means something as opposed to being a marketing buzzword, then it refers to various applications of machine learning/neural networks.

Now, my hypothesis is that whatever "similarities" can be adduced between ML and human brains are ultimately pretty superficial, because ML struggles with, or is simply incapable of, many tasks that human brains perform effortlessly. I think the main difference is that humans can think symbolically and in terms of concepts, whereas ML systems are completely incapable of this. For example, they can only learn to "recognize" (scare quotes because these programs have no conscious experience and thus no real agency, and are technically incapable of "learning", "recognizing", or "understanding" anything at all) images that have been labelled by humans, and if they are trained to recognize, say, a black cat (which takes hundreds of images at least, compared to a human who can recognize it after seeing one image), they are then stumped by an orange cat (whereas a human can see that this is the same thing as the black cat, just a different color).
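As a toy illustration of that failure mode (the brightness feature and the two-image "training set" are invented for this sketch), a classifier that only matched surface statistics of dark cats has no concept "cat" to fall back on when the colour changes:

```python
import numpy as np

# Invented "training set": dark images labelled cat, bright ones dog.
train = [
    (np.full((8, 8), 30.0), "cat"),   # black cat: dark pixels
    (np.full((8, 8), 200.0), "dog"),  # light dog: bright pixels
]

def classify(img: np.ndarray) -> str:
    """Nearest neighbour on a single surface feature: mean brightness."""
    return min(train, key=lambda t: abs(t[0].mean() - img.mean()))[1]

orange_cat = np.full((8, 8), 170.0)  # bright orange fur
print(classify(orange_cat))  # -> "dog": the feature decided, not a concept
```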

When self-driving vehicle companies get profitable and the military is deploying drone swarms in battlefield conditions, get at me. Until then I remain a skeptic.
 
When self-driving vehicle companies get profitable and the military is deploying drone swarms in battlefield conditions, get at me. Until then I remain a skeptic.
I posted this movie a year ago in the AI thread, and it's time to post it again 😁

A very good movie about chaos theory. After watching it, you'll understand that such a time will come. 146%

 
For now, of course. I'm not saying AI is actually intelligent "now"; I'm saying that the border between "simulating" and "being" is much blurrier than we like to think, just as people a century ago (and some still to this day) wanted to believe that humans and animals were fundamentally different, rather than differing only in the degree of complexity of the brain.
Humans and animals share most of the hardware; machines are fundamentally different. As far as I can tell there's been no quantum leap in computing: machines are just trillions of times faster than before, but ultimately still just calculators.

Your brain is doing both pattern recognition and various types of inference every day of your life. With the growing use of inference, machines are becoming more like you with every passing day. They just think faster, way faster: a 4090 GPU can reason at 85 trillion floating-point operations per second (FLOPS), while we humans are left to operate within a much narrower band of throughput. The point is, we use the same logical instruments, AI and us.
I don't see how machines are becoming more like humans. They seem to be very dissimilar: no intuition and tons of brute speed. Computers are a tool to compensate for humans' lack of speed, like a forklift is a tool to compensate for our lack of strength. More and more moving parts don't upgrade their status from mere tool.

Machine "intelligence" reminds me of the million monkeys thought experiment. A million monkeys typing for infinity will eventually write Shakespeare just as a machine typing nonsense letter configurations (or even learning language via some algorithm) eventually will but intelligence is nowhere to be seen in either.

Computers can make smarter decisions than us about just about everything (what we should eat for lunch, perhaps even how we should reply to our girlfriend's texts), but that doesn't make them intelligent any more than an '80s pocket calculator is intelligent.
 
I see we have two threads full of fantasists about warfare.

"AI" is, as it was before more than once, a conman's game to fleece "investors". It won't change warfare. Nor have drones. If people had been paying attention to the ongoing real war, they'd have noticed that it's firepower that has the big impact: good old explosives and the heavy platforms to deliver them. The Turkish wonder-drones that were allowed to finish off the Armenian enclave in Azerbaijan got shot down in a matter of days. Between peer powers, drones are no more a "game changer" than bombers were. When bombers were a new weapon they were very useful for the British to massacre helpless natives in Iraq, but they didn't change the war against Germany.

No. The difference here is cost and availability. Just because there is the usual money-idiocy sideshow doesn't mean that there is not a new thing here.

This is a tool where a sufficiently functional version can be available to both sides in an asymmetric war. When was the last time that happened?
 