Industry Focus – Defence

Internet of Defence (IoD) is Here –

Should We Be Scared?

Over the past 30 years, gamers turned drone operators have migrated from destroying targets on screen to destroying targets in the real world. Drones have revolutionised the way Western powers fight wars across the globe, and the applications for these devices have come a long way from reconnaissance, infra-red support on night missions and destroying targets in the field.

We’re now entering the opaque world of the Internet of Defence (or IoD) – automated and driverless military hardware.

Big investments in Internet of Defence

The USAF has just given BAE Systems the green light (and $400 million) to create what it calls the Skyborg attritable drone: robots that fly themselves and can destroy targets with higher accuracy than any human could ever dream of. The remit currently covers fighter support, but over the longer term, future iterations will likely expand into fully autonomous engagements.

Leaders within the USAF have never watched Terminator 1, 2, 3, 4, 5 or 6.


What is the benefit?

In current fighter fleets, a large portion of the airframe is dedicated to the pilot. The entire fighter aircraft is effectively built around having an aviator in the cockpit: the cockpit itself, life support, ejection systems and the g-forces a human body can withstand all constrain the design.

Remove that need and we will see completely modern design formats, new shapes and sleeker airframes: less weight, better aerodynamics and more. That said, the initial iterations are likely to be pretty clunky.

What are the concerns?

As much as we love to see innovation – does the world need an autonomous fighter? If we are talking about improving national security, then it probably does. But what are the ethics behind autonomous military hardware? I am OK with a human on the other end of a drone, pressing R2 on a retro-fitted Xbox or PS4 controller. But alarm bells ring when it moves to fully autonomous systems.

Planning for WW3?

Empires rise and fall, and each rose on a signature technology. Think of any historical empire – the Roman Empire (the sword), the Mongol Empire (the horse), the British Empire (the ship) or the American golden age (the military-industrial complex).

Not to put a downer on things, but with the escalating China vs US competition to see who has the biggest schlong, the development of AI and autonomous-vehicle (AV) technologies becomes a necessary evil.

It is similar to the Cold War, when countries built up stockpiles of nuclear warheads and war policy centred on mutually assured destruction.

Whoever achieves AI supremacy in ground, air and cyber warfare will dictate terms for the foreseeable future.

Tough questions around IoD

Futurists need to decide how to shape the technology of the future.

We need to consider the social, environmental and economic impact in a way that doesn’t damage the fabric of our future role as humans.

Before committing $400m to a project like this, the developers have to ask themselves some difficult questions: 

  • When can a robot decide if a life is worth taking?
  • Is this technology safe?
  • If this technology gets into the wrong hands, what could they do with it?
  • Is this technology providing value for money? (should this money go to solving the US homelessness problem instead?)

Global AI & Robotics Laws

Science fiction writer Isaac Asimov directly influenced the movie I, Robot.


Asimov’s book envisages a world in which every robot on Earth is programmed and built under his Three Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
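The laws form a strict priority ordering: a lower law only applies where it does not conflict with the laws above it. A toy sketch in Python makes the hierarchy concrete (entirely illustrative – the action model and field names are invented for this example, not any real robotics framework):

```python
# Toy sketch of Asimov's Three Laws as a strict priority ordering.
# The "action" model below is invented purely for illustration.

def permitted(action):
    """Return True if the action passes the Three Laws, checked in priority order."""
    # First Law (highest priority): a robot may not injure a human being
    # or, through inaction, allow a human being to come to harm.
    if action["harms_human"]:
        return False
    # Second Law: a robot must obey human orders. Harmful orders were
    # already rejected above, so the First Law takes precedence.
    if action["disobeys_human_order"]:
        return False
    # Third Law: a robot must protect its own existence, unless a human
    # order (Second Law) requires otherwise.
    if action["endangers_self"] and not action["ordered_by_human"]:
        return False
    return True

# An autonomous strike fails at the very first check:
strike = {"harms_human": True, "disobeys_human_order": False,
          "endangers_self": False, "ordered_by_human": True}
print(permitted(strike))  # False
```

An autonomous weapon exists to make `harms_human` true, so it fails the First Law before the other two laws are even consulted – which is exactly the problem.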

These types of international robotics laws are still, unfortunately, science fiction. Deploying autonomous military hardware breaks all of these laws anyway.

No legislation currently exists that allows us as a species to manage and mitigate the risks of these technologies. If we create autonomous systems (especially ones that might kill other humans), shouldn’t we create international AI & Robotics laws that allow the developers of these technologies to build them responsibly?

So maybe, before we start building actual robots that kill humans based on algorithms, it is high time we put some more comprehensive legislation together.

Like pronto! 


About Me

Alex Gash – Strategy, Sales & Marketing Freelancer

I’m the founder of The Difference Group. We focus on helping you grow faster, increase profits, find new customers and nurture existing ones.

I started my career at LinkedIn and have collaborated with a large number of startups, scaleups and SMEs throughout my career.

I look forward to having a chat.