In the sci-fi universe woven by Isaac Asimov, the man who coined the term 'robotics', all robots had to adhere to three laws; but to list them all would put you off this piece. Luckily, he later instituted a fourth - the Zeroth Law - that encapsulates my point perfectly: 'A robot may not harm humanity, or, by inaction, allow humanity to come to harm.'
Laws governing robots seem like something that only belong in the pages of sci-fi novels, but we're closer to needing them than you'd believe.
Robots are getting ridiculously more advanced by the day and there's already a group of people campaigning to stop killer robots. They call themselves 'The Campaign To Stop Killer Robots'. Not very imaginative for a group that foresees an impending uprising of killer robots, but forgive them.
The campaign is actually a global coalition of NGOs that's fighting for a pre-emptive ban on 'killer robots' - or, more specifically and less dramatically - all fully-autonomous weapons.
The collective includes Human Rights Watch and Article 36 among others and it hasn't just sprung up overnight. In fact, they've been lobbying the UN for the past two years because we're reaching a critical point in the legislation of these weapons.
Advances in artificial intelligence (AI) have made it so that the militaries of some countries are close to developing completely autonomous lethal weapons. These weapons, which can select and engage targets without any human intervention, are what the group is fighting against.
Just last year Russia announced that five of their ballistic missile bases were to be protected by mobile robots, capable of patrolling the bases, identifying threats and even destroying targets.
While many countries use robots to detect mines or defuse bombs, these are neither weaponised nor wholly autonomous. And semi-autonomous weapons such as drones are already growing in favour among armed forces worldwide, even as the outcry against their use swells. But the wholly autonomous variety increasingly seems like the future of warfare.
Anything that can increase precision while reducing casualties is bound to be looked upon favourably by armies. It's why in America's 2012 military budget, spending on drones and robotics increased even as 100,000 personnel were cut. The US' robotic arsenal now includes over 11,000 Unmanned Aerial Vehicles (UAVs) and well over 12,000 ground-based robots.
It isn't just the US and Russia.
Israel already has robotic weaponry seemingly named after Marvel superheroes. At sea, the Protector, an unmanned combat vehicle, stands guard. Overhead, Israel has the choice of two types of UAVs, the Super Heron and the Harpy, both combat-ready. And on land the Guardium patrols its contentious borders.
All of these are capable of lethal force.
China has already made great strides in robot weaponry with the Pterodactyl and Sharp Sword, two indigenously-produced drones.
Even Singapore, a country with only a third of Delhi's population, is hoping to capitalise on military robotics to bridge the gap between its own military and those of larger countries.
Battlefield robots are clearly the way forward.
The Campaign to Stop Killer Robots argues that the use of fully autonomous weapons, especially those capable of exercising lethal force, would pose a major ethical dilemma.
Can robots actually navigate the complex ethical and situation-specific rules of humanitarian law? Can the decision to take a life be left in the hands of a machine devoid of the human aspects of compassion and mercy? Who is ultimately responsible if a robot kills a civilian? Can an algorithm really be trusted to tell right from wrong in an infinitely complex, high-stress environment?
A report titled Losing Humanity by Human Rights Watch concluded that fully autonomous robots would be incapable of meeting the basic standards of international humanitarian law.
The US Office of Naval Research, though, is trying to answer and overcome those doubts.
The Office has awarded researchers at a handful of American universities a grant of $7.5 million over a period of five years. Their job? To explore how to build a moral conscience and the ability to determine right from wrong into autonomous systems.
It's a long way off from answering the ethical dilemmas but it does at least serve as official acknowledgement of the technology's pitfalls.
The easy replicability of this technology also means that smaller countries will be able to purchase and implement it. Could it possibly make its way into the hands of terrorists? Seems entirely plausible, and the consequences could be beyond imagination.
The Campaign is hoping that the United Nations' Convention on Certain Conventional Weapons (CCW), a treaty with 120 member countries that has the power to restrict or ban weapon use, will accept its reasoning and ban fully autonomous weapons.
It's looking grim so far, though. The CCW, which meets for a week every year to discuss the issue, has so far only held discussions, with member states often obfuscating the harsher realities by arguing technicalities.
Even the definition of what constitutes a fully autonomous weapon was debated. So far, no actual decisions have been taken. In fact, only five countries, none with significant might, have proposed the idea of a ban.
The CCW's next meeting, in November, will mark a turning point in the regulation of fully autonomous weapons - it will determine what actions are taken at next year's review conference, which is held only once every five years.
It's not like the CCW doesn't have any power, should it choose to act.
Similar meetings resulted in a 1995 ban on laser weaponry capable of blinding combatants. The worry is that the longer action is delayed, the more such weaponry will be deployed. Getting countries to backtrack will then become harder - drones are a prime example of this.
The rise of the commercial robotics sector, led by Google and its numerous robotics lab acquisitions, is also an indication of the direction warfare is going to go. Even though much of this technology is intended for commercial use, even minor modifications could militarise it instantly.
With the commercial players already leading the robotics pack by some distance, and the commercial market set to grow to $37 billion by 2018, the proliferation of mechanical battlefield destroyers is inevitable.
The situation is so potentially worrying that 20 Nobel Peace Prize winners and some of industry and science's biggest minds, such as Elon Musk and Stephen Hawking, have also called for a ban on such technology.
Whether their pleas or those of the Campaign will be heard though is another matter.
With countries increasingly trying to stay at the cutting edge of warfare, fully autonomous lethal weapons seem inevitable, if not imminent.
A deceptively banal-sounding organisation - the CCW - is pretty much our only hope.