Warfare

That's what man is doing with neural networks now, trying to create AGI. It is just a matter of model size, computing power and time. The question is not whether we can create artificial intelligence, but when.

If computing power were the roadblock in the way of creating true artificial intelligence, we would have achieved it many years ago.

What the AI creators are doing is writing clever programs that mimic human behavior and creative impulses quite well. These are basically advanced simulations. The breakthrough was much-improved neural network models, especially their ability to analyze data very quickly.
 
Guys, I know how computers work. And how programs work. I was a programmer a long time ago, and I still work closely with computers. And I know how neural nets work; I used them in my university work.
But you should carefully read my last post. It's not about computers.
 
Guys, I know how computers work. And how programs work. I was a programmer a long time ago, and I still work closely with computers. And I know how neural nets work; I used them in my university work.
But you should carefully read my last post. It's not about computers.
It's not a competition about who knows what :D

Anyhow, your last post explicitly invokes neural networks. Which pretty much requires computing.
 
No, I'm more on the metaphysical discussion about what intelligence is and how we try very hard to think ourselves fundamentally different/better.
In that case I think we'll probably agree when I say that fundamentally I feel we aren't. There is nothing that would make a human mind somehow magically superior to a hypothetical machine mind of equal capability. Just as there is nothing that would make us inherently superior to an alien or an uplifted animal.

I just do not think that the current generative AI approach is going to ever produce such a mind due to the inherent technical limitations of its fundamental methods of operation.

the rest of the post
It's difficult for me to explain in English, but I'll try.

Basically, what distinguishes what generative AI does from what we do is that we operate in the field of metainformation, where it does not. This allows us to operate in a basically higher-order space than generative AI.

Fundamentally, the pattern recognition part of intelligence is pretty much the same for both of us, in that we both create patterns where there are none, based on statistical correlation, in order to form what we in human terms call "concepts". Where a concept is something I define as the categorization of things into groups defined by the phrase "I can't explain it but I know it when I see it."
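To make that idea concrete, here is a minimal sketch of my own (not from the post): unsupervised clustering as the bare-bones version of forming "concepts" purely from statistical proximity, with no labels and no awareness of the grouping. The `kmeans_1d` helper and its data are illustrative inventions.

```python
# Illustrative sketch: "concepts" as clusters that emerge from statistical
# correlation alone -- the system groups things without knowing it is grouping.
import random

def kmeans_1d(points, k=2, iters=20):
    """Naive 1-D k-means: settle k centroids onto the natural groupings."""
    random.seed(0)
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            groups[nearest].append(p)
        # move each centroid to the mean of its group (keep it if group is empty)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Two loose "concepts" hiding in unlabeled data: values near 1 and near 10.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0, 1.2, 9.9]
print(kmeans_1d(data))  # centroids settle near 1.05 and 9.98
```

The algorithm categorizes perfectly without any notion *that* it is categorizing, which is the distinction the post is drawing.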

However, the difference is that humans have a higher-order awareness of the fact that we are doing this, whereas the generative AI approach is fundamentally incapable of producing a system that has such an awareness. So whereas both humans and AI can operate within a single concept to formulate output based on data, AI cannot then take the extra step of taking multiple concepts and using them as building blocks for something higher.

That is why you will see AI producing text that looks like human speech but can actually be complete nonsense. There is, for example, a famous legal case in America where the lawyers asked an AI to find them cases to support their legal arguments and the AI just made them up. That AI output is perfectly valid within the confines of the concept the AI was designed for, that being speech. It's just invalid in the broader concept group of speech + law, which the AI was unable to comprehend.

And fundamentally it is my professional opinion that the generative, statistics-driven learning approach is incapable of producing anything beyond that, because the very way it is designed carries a deep, fundamental focus on mastering one concept based on a sea of data. If we wanted to create a true General AI, we would need to fundamentally change some of the principles of design at play.

Which is not to say that the current approach is not a stepping stone toward that. It absolutely is. But it's a stepping stone in the same sense that the arrow was one to the assault rifle. The process won't be one of derivation as much as inspiration: taking the principles learned by operating the one and applying them to the other, without a direct evolution of the actual products.

I hope that made mostly sense.
 
There is nothing that would make a human mind somehow magically superior to a hypothetical machine mind of equal capability. Just as there is nothing that would make us inherently superior to an alien or an uplifted animal.

Unless you're a God-fearing man, in which case we are his chosen creation.

All aliens and robots would thereby be possessed with the spirits of demons if they were more intelligent than us. And therefore they would be apostate enemies of Christ and would have to be opposed.

Animals, likewise having no souls, are possessed by the spirit of God according to medieval metaphysics. But sometimes a demon goes into the animal, as in the case of familiars, or that passage where Jesus exorcises demons and they go into the pigs before drowning themselves.

Therefore aliens and robots would be more sophisticated demonic familiars of a future age should medieval metaphysics make a comeback within the West.
 
I don't see how this is fundamentally different than computers replacing humans doing math tasks 50 years ago.
The fundamental difference is that it won't and can't happen.

Put simply, AI does not actually produce code or art or text or legal cases. It cannot, because it does not understand what those are. What AI does is produce things that are *like* those things. And there is a world of difference there.

In the case of code in particular, what this means is that the code it produces is technically correct, in that it usually compiles and maybe even does what you wanted it to do. But there is no guarantee that it actually does what you need it to do, or that it does so efficiently and without nasty side effects. And that's the critical difference.

Furthermore, even if it did, that's mostly irrelevant, because writing code is not what we developers do. People don't seem to understand this, but the job of software development is a branch of engineering. For every line of code I type, I might spend hours planning out what it is that needs typing. Because my job is not just to hack away at a problem until it goes away but to actually design a solution. Actually typing it all out is just the final 10% or so of a long process that so far only a human can do.

That is why everyone who thinks AI will replace developers is wrong. Nothing short of a proper AGI can replace that engineering work. It's not even that much of a productivity aid, because as I said, actually coding is a relatively small part of what serious developers actually do.

Which is not to say AI won't be useful to us. If you look at the actual AI-driven programming tools being developed, what you'll basically see is that they are all evolving toward what is essentially a turbocharged autocomplete. And that has a lot of potential to take out the tedium of writing idiomatic code and generally cut down on busywork, leaving engineers more time to do their actual work.

So it's going to be a productivity tool to make us more productive. And that might eventually lead to a shrinkage of the total number of jobs available much in the same way any productivity tool does. But that's something that happens to our industry every time a new tool comes out which is about every 32 seconds.

Of course, try explaining this to idiot managers and see where it leads you. But as always, the results will speak for themselves at compile time. :)

I never said "AI" pales in comparison to human intelligence. Computers are obviously vastly superior to humans in many respects, but that doesn't make them intelligent; they're still just tools to be used to make human life easier.

To put it crudely, interacting with an "AI" "girlfriend" isn't a sexual relationship; it's still just masturbation with a fancy human-speech-generating calculator.
Which, you must admit, is needlessly complicated and inefficient.
 
Now that we've established there is a subset of people who object to referring to current artificial intelligence as intelligence, let's move on to AI in warfare.

Swarm intelligence is a concept within AI and computer science referring to the study of “decentralized, self-organized systems, natural or artificial.” The origin of this concept dates back to a 1992 study by Marco Dorigo, ant colony optimization, in which Dorigo’s observation of the social behavior of ants led to the development of algorithms that allow an optimized interaction between the individual agents of the whole system.
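The core of Dorigo's idea can be sketched in a few lines. This is a toy illustration of my own, not code from any cited source: many simple agents choose between two routes with probability proportional to accumulated "pheromone", shorter routes earn larger deposits, and evaporation lets the colony converge on the better option with no central controller. All names and parameters here are invented for the example.

```python
# Toy ant-colony sketch (loosely after Dorigo's ant colony optimization):
# decentralized agents reinforce the shorter of two routes via "pheromone".
import random

def ant_colony(short_len=1.0, long_len=2.0, ants=100, rounds=30, evap=0.5):
    random.seed(1)
    pher = [1.0, 1.0]  # pheromone on [short route, long route]
    for _ in range(rounds):
        deposits = [0.0, 0.0]
        for _ in range(ants):
            # each ant picks a route with probability proportional to pheromone
            p_short = pher[0] / (pher[0] + pher[1])
            route = 0 if random.random() < p_short else 1
            length = short_len if route == 0 else long_len
            deposits[route] += 1.0 / length  # shorter routes earn more pheromone
        # old pheromone evaporates, fresh deposits accumulate
        pher = [pher[i] * evap + deposits[i] for i in range(2)]
    return pher

pher = ant_colony()
print(pher[0] > pher[1])  # the colony has converged on the shorter route
```

No individual ant is intelligent or aware of the global picture; the optimization emerges from the interaction rule, which is exactly the property that makes the approach attractive for coordinating drone swarms.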

What this means in drones is that AI is allowing UAVs to function in a coordinated and coherent manner in a “swarm” which allows a military to launch a barrage of drones in an offensive capacity. A carefully planned and executed swarm drone attack can overwhelm the enemy and perform tactical and strategically important military tasks, thus further increasing a drone’s individual usefulness.

Add automation to the equation, and the results look like scenes from a sci-fi movie. AI-powered automation is poised to become the most transformative emerging technology to revolutionize UAVs for military use. What was once an imaginary concept is already becoming the new way of warfare, producing a new kind of lethal weapons which are now known as “killer robots” or Lethal Autonomous Weapons Systems (LAWS) which can be ground-based, air-based, or water-based.

The introduction of AI automation into UAVs is already producing advancements in several areas of drone operations, including in autonomous navigation, communication networks, data analysis, integration with the Internet of Things (IoT), and lastly autonomous decision-making for offensive operations. Autonomous decision-making is by far the most controversial feature, and the lack of any international regulations or cooperation between states to regulate the norms of production or use of such weapons is already alarming human rights groups and advocates of ethical warfare. Human Rights Watch, for example, calls for a “preemptive ban on the development, production, and use of fully autonomous weapons.”

As of 2023, several countries are already in the race to harness the power of automation in drones, including the United States where the Valkyrie experimental aircraft, for example, is being developed as the first prototype of drones entirely run by artificial intelligence. Automation is already in play in the conflict in Ukraine where Russia is alleged to have used AI-enabled Kalashnikov ZALA Aero KUB-BLA loitering munitions. The race to perfect killer bots is also becoming the next area of competition between the United States and China, reminiscent of the Cold War arms race.

These two capabilities combined will turn drones from auxiliary, support devices into strategic weapons that states will turn into large arsenals given their relatively cheap cost and significant potential as a force multiplier. With the coming capacity to launch autonomous aircraft able to act collectively in a swarm, and drones with the autonomous capability to select a target and launch a strike, what sounds like a horror movie scenario of killer robots acting in unison during an attack could soon be a reality.

Questions related to the international regulation of LAWS and killer robots — or the lack thereof — will determine the new norms of warfighting in the coming age of uncertainty and tense geopolitical competition. This very recognition is bound to push states into a self-perpetuating security dilemma of either developing and acquiring these weapons or running the risk of losing their relative military capability vis-a-vis friends and foes alike. What states choose today, and how they direct the evolution of AI norms in drones, will shape the wars to come.


In short, the author proposes that the synergy of drone swarms and automation, guided by human orders and constantly "learning" through neural nets, would be so substantial that it will elevate drones from mere force multipliers to strategically important weapons on the battlefield.
 
In short, the author proposes that the synergy of drone swarms and automation, guided by human orders and constantly "learning" through neural nets, would be so substantial that it will elevate drones from mere force multipliers to strategically important weapons on the battlefield.
Honestly I can't wait to see what these systems are going to look like when deployed. I mean, it's basically turning war into an RTS game where each pilot has a swarm of AI troops at his command.

This said, I wonder how well that will all work out in high-EW environments. After all, all the AI in the world won't help your swarm of drones if their internal communications go down and it's every drone for itself.

Either way, this is shaping up to be an interesting century.
 
Interesting is not the word I would use. About EW: maybe there is no need for complex comms or complex AI, only a set of rules plus some sporadic signal that is very difficult or impossible to jam, so a swarm of drones could work together to clear a given area, just like ants or bees can work together to achieve a common goal without being very intelligent or having complex interactions among them.
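The "simple rules, no comms" point can be illustrated with a boids-style sketch (my own illustration, with invented names and parameters, not anything from the post): each agent reacts only to neighbors it can locally sense, drifting toward them while keeping a minimum spacing, and the group coheres with no communication link and no leader to jam.

```python
# Decentralized swarm sketch: local sensing only, no messages, no coordinator.
def step(positions, sense=5.0, spacing=0.5, rate=0.1):
    """One tick: each agent reacts only to what it can locally sense."""
    new = []
    for p in positions:
        neighbors = [q for q in positions if q != p and abs(q - p) < sense]
        if not neighbors:
            new.append(p)          # nothing in range: hold position
            continue
        center = sum(neighbors) / len(neighbors)
        move = (center - p) * rate  # cohesion: drift toward nearby agents
        for q in neighbors:         # separation: back off if crowded
            if abs(q - p) < spacing:
                move -= (q - p) * rate
        new.append(p + move)
    return new

# Five agents scattered along a line pull together over 50 ticks.
agents = [0.0, 1.0, 2.5, 4.0, 8.0]
for _ in range(50):
    agents = step(agents)
print(max(agents) - min(agents) < 8.0)  # spread has shrunk, no comms needed
```

Jamming has nothing to attack here except the agents' own sensors, which is the resilience argument being made.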
 
Well, the obvious counter-response is electromagnetic (EM) weaponry.

1. M-SHORAD (Maneuver Short Range Air Defense): This system is designed for short-range air defense and incorporates a 50-kilowatt laser mounted on a Stryker combat vehicle. The system is part of the U.S. Army's effort to enhance its capabilities against aerial threats, including drones. https://www.defensenews.com/land/20...se-laser-prototypes-take-down-drones-at-yuma/

2. Raytheon's DEFEND (Directed Energy Front-Line Electromagnetic Neutralization and Defeat): DEFEND is a high-power microwave system being developed by Raytheon for the U.S. Air Force and Navy. The system aims to disable electronic components of drones and other airborne threats, with prototypes scheduled for delivery in fiscal years 2024 and 2026. https://www.c4isrnet.com/battlefiel...irected-energy-zappers-for-us-air-force-navy/

3. Leonidas: A high-power microwave weapon developed by Epirus, Leonidas is designed to neutralize drone swarms by frying their electronics. The U.S. Army has awarded a $66.1 million contract to Epirus for this system, highlighting its effectiveness against multiple unmanned aerial systems simultaneously. https://www.thedefensepost.com/2023/01/24/epirus-counter-drone-microwave/

4. THOR (Tactical High-power Operational Responder): THOR is a high-powered microwave counter-drone system developed by the Air Force Research Laboratory. It has demonstrated effectiveness against drone swarms in tests, showcasing its capability to disable multiple drones quickly with its wide beam and high peak powers. https://taskandpurpose.com/tech-tactics/air-force-thor-directed-energy-drone-swarm-test/

5. CHIMERA (Counter-Electronic High Power Microwave Extended Range Air Base Defense): Although not as detailed in the sources, CHIMERA is another initiative by Raytheon to develop a high-energy laser system for the Air Force, focusing on extended range defense against airborne threats. Raytheon's work on directed-energy weapons like CHIMERA emphasizes the push towards portable and rugged systems for front-line use. https://www.c4isrnet.com/battlefiel...irected-energy-zappers-for-us-air-force-navy/
 
As said, it's going to be an interesting century to watch. Honestly, I am holding off to see if that American program for a new air dominance fighter — more like a heavy bomber commanding a drone swarm — ever gets developed. Because if that thing happens, well then we are really in the age of RTS warfare in the air.
 
Which is not to say that the current approach is not a stepping stone toward that. It absolutely is. But it's a stepping stone in the same sense that the arrow was one to the assault rifle. The process won't be one of derivation as much as inspiration: taking the principles learned by operating the one and applying them to the other, without a direct evolution of the actual products.

I agree with most of what you said, but I think that the current approach is a dead end. And not even a new one in the history of this thing we call AI. We may be in agreement if your comparison was in the sense that guns and bows are both called projectile weapons, but the science behind guns (chemistry) is entirely different from the science behind bow & arrow. But then it was rockets that were a stepping stone, not bows.
 
I agree with most of what you said, but I think that the current approach is a dead end. And not even a new one in the history of this thing we call AI. We may be in agreement if your comparison was in the sense that guns and bows are both called projectile weapons, but the science behind guns (chemistry) is entirely different from the science behind bow & arrow. But then it was rockets that were a stepping stone, not bows.
I think we agree in principle, if not in the details of the analogy.

Basically what I think, and you seem to agree, is that the current approach will yield useful tools for the future, and that it will reveal principles and lessons that we can eventually apply to future, more general AI projects. But if we ever do create a General AI, it won't be a development of what we are doing now. And those lessons and principles, whilst valuable, will be just a small piece of the overall puzzle.
 
I see the current fad as a deployment, on a greater scale, of something that is not new in AI. Only the scale is new. And the conclusions are already in: regurgitating associations from a training set does not produce useful output except for some very limited applications. The usefulness of this will prove small and non-disruptive to how things are done, once past the hype phase where it may get misapplied with comic or tragic effects.
 
I'd say the difference between a hypothetical AGI and current machine learning tools is more analogous to the difference between chemical and nuclear explosives.
 