Unmanned Thoughts
The military has bought into the unmanned craze hook, line, and sinker. They’re jumping on unmanned for every possible application with absolutely no thought given to whether it makes sense or is practical on the battlefield. Unmanned vehicles can certainly make life easier during peacetime but what happens when the enemy starts shooting and unmanned vehicles find their comms jammed and their lifespans measured in minutes? Well, we’re not going to dig into that. Instead, here are just a few updated thoughts to help inform our opinions on unmanned vehicles.
Data
Picture swarms of unmanned land, sea, and air vehicles ranging across the battlefield with their various radar, IR, and optical imaging sensors. We’ll have total battlefield awareness right down to how many buttons on each enemy soldier’s shirt! I wonder what comes after quadra-giga-tetra-gazilla bytes of data, because that’s what we’ll have. There won’t be anything we don’t know! We’ll be unstoppable and unbeatable.
Of course, history, including very recent history, proves this to be completely false. As noted in the recent post about the Yemen missile attacks on the USS Mason, despite having multiple ships with Aegis, IR, and optical sensors all backed up by satellite coverage and various airborne regional sensors, we don’t even know if any attacks actually occurred! We had tetratons of data and yet no actual answer.
How can we have that much data and yet no answers? Let me ask you, what is the exact width of the lot your house sits on (renters, just play along)? You have no idea, do you? And, yet, you had a survey done as part of your purchase of the house (whether you were aware of it or not) so you have the data. You just didn’t assign it any importance and probably have no idea where those documents/data are now.
You have the data but you don’t have the answer.
Or, consider that after every terrorist act the post-event analysis inevitably reveals that we had all the data points necessary to predict and prevent the event but no one was able to assemble the data and connect the dots.
More data is not the answer – better interpretation is.
A UAV can record images of a hundred fishing type vessels but which of those, if any, are carrying terrorists or disguised enemy forces? Having the data isn’t the answer, interpretation is. Someone has to interpret the images and decide which, if any, are threats.
Those swarms of unmanned vehicles roaming the battlefield and collecting data are, arguably, just making the problem worse! We already have more data than we can intelligently interpret and now we’re envisioning more?!
We should not be working on putting more sensors over the battlefield (setting aside the fact they aren’t survivable), we should be working on putting more interpretation over the data.
Data without proper interpretation is, at best, a waste of time and effort and, at worst, distracts or misleads from what’s really important. So, what’s the point of more UAVs? We already have more than we can productively use. We think more UAVs will help but we’re proving on a daily basis that we already can’t make good use of what we have.
UAVs are not the magic observation platforms that so many people believe them to be.
Commander’s Intent
Commander’s Intent is the Holy Grail of warfare - subordinates who can act on their own exactly as the Commander wishes with nothing more than the Commander’s Intent as guidance. This has been repeatedly attempted throughout history with varying degrees of success. At its best, Commander’s Intent allows a commander to direct a battle with a minimum of interaction with his subordinates. Nelson’s guidance at Trafalgar is an outstanding example of this. At its worst, it produces erratic, unintended actions due to failure to accurately convey and/or understand the intent. Unfortunately, the latter has proven more likely than the former on the battlefield.
The reasons for failure to accurately convey intent fall on both sides of the commander-subordinate relationship. Commanders fail to clearly convey their intent and subordinates fail to clearly understand the conveyed intent.
Presumably, we’d like to apply this same philosophy to our interactions with unmanned, autonomous vehicles. However, if we can’t reliably convey Commander’s Intent to humans, how will we convey it to unmanned, autonomous machines? How will an autonomous machine interpret and act on a vague statement of intent like, “Hold out as long as you reasonably can”? Will we have to stop the war to write, test, and debug new software every time we want to issue a new statement of intent?
Yet, without some form of intent instructions to an autonomous UAV, we’ll have to “pilot” every UAV and then what have we gained (see Manning, below)? Currently, UAVs are incapable of “intent” guidance, so we do have to pilot them and, perversely, unmanned platforms require more manning than manned ones!
Manning
Unmanned vehicles have been “sold” as reducing overall manning levels, among many other near-magical claims. The reality, however, is just the opposite. While we may, indeed, remove the pilot from the cockpit, we don’t eliminate him, he just moves to a different location. Further, unmanned systems require more manpower to support. From an Armed Forces Journal article,
Yet the military’s growing body of experience shows that autonomous systems don’t actually solve any given problem, but merely change its nature. It’s called the autonomy paradox: The very systems designed to reduce the need for human operators require more manpower to support them. (1)
A military.com article makes the same point:
The [remotely piloted aircraft] ... requires much more architecture than, say, an F-16 squadron, Kwast said. While the ratio of people to aircraft in manned aviation is roughly 1.5 to 1, he said, it takes about 10 people to operate one UAV at any given time. (2)
Industry’s experience has been the same. Automated systems may remove the worker from the immediate task but they create legions of new workers to maintain, program, troubleshoot, repair, and modify them. Automated systems increase overall manning levels, not decrease them.
We saw a closely related example of this phenomenon with the LCS. While not an unmanned platform, it was designed to operate with a bare minimum crew thanks to a large degree of automation. The reality turned out quite different. The number of “crew” required to support and operate an LCS is larger than if the ship were fully manned, and the total is probably greater than for the older, less automated Perrys they replaced.
Conclusion
Unmanned vehicles offer some benefits but they are far from being the panacea that so many, including the military, believe. The Armed Forces Journal article put it nicely: autonomous systems “don’t actually solve any given problem, but merely change its nature”. The military’s obsessive pursuit of unmanned vehicles is ill-considered and short-sighted and is distracting the military from larger, more serious issues like maintenance, readiness, numbers, and firepower.
_____________________________________
(1) Armed Forces Journal, “The Autonomy Paradox”, 1-Oct-2011,
http://armedforcesjournal.com/the-autonomy-paradox/
(2) military.com website, “Air Force Wants To Decrease Manning For Its UAVs”, Oriana Pawlyk, 24-Feb-2018,
https://www.military.com/daily-news/2018/02/24/air-force-wants-decrease-manning-its-unmanned-vehicles.html