AI Law Enforcement and Future Technology

AI law enforcement is the use of advanced artificial intelligence in peacekeeping, security, and law enforcement. Artificially intelligent systems have been around for quite some time now. The first self-policing robotic police system was developed back in 1992, and more such systems have been developed since.

AI Law Enforcement

Why has this concept caught on with law enforcement agencies? Proponents argue that patrols would be much safer without a human operator on board. Automation could also reduce human error, a typical problem when a person faces a dilemma. But the hypotheticals cut both ways: if an operator sends a robot into a building, what happens if the robot gets confused and does not engage the criminals? And if it misidentifies a suspect, would the police be justified in arresting the wrong person?

Another advantage is that such systems could analyze large amounts of data far faster than a human officer, allowing quick identification of criminal trends. If a system detects a pattern of crimes, it could also make preemptive moves that help apprehend criminals before an offense is committed.
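The kind of trend detection described above can be sketched very simply. The example below is a minimal illustration, not how any real system works: it assumes a hypothetical incident log of (location, offense) pairs and a made-up `flag_hotspots` helper that flags locations with repeated reports as possible emerging patterns.

```python
from collections import Counter

def flag_hotspots(incidents, threshold=3):
    """Count reported incidents per location and flag any location
    whose count meets the threshold as a possible emerging pattern."""
    counts = Counter(location for location, _offense in incidents)
    return sorted(loc for loc, n in counts.items() if n >= threshold)

# Hypothetical incident log: (location, offense type)
log = [
    ("5th & Main", "burglary"),
    ("Riverside", "vandalism"),
    ("5th & Main", "burglary"),
    ("5th & Main", "theft"),
    ("Harbor Rd", "theft"),
]

print(flag_hotspots(log))  # ['5th & Main']
```

A real analytics system would weigh far more signals (time, offense type, demographics of reporting), which is exactly where the fairness and misuse concerns discussed below come in.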

Is it safe to say that such technology will completely replace police officers in the future? Some fear exactly that because of the artificial intelligence built into these systems. According to the US Federal Trade Commission, law enforcement and other government agencies should not develop or purchase AI technology until the public-safety and ethical issues have been properly examined. Some also worry about the potential misuse of such technology.

Would you be safe if your car did a U-turn and the engine caught fire? If you had pulled over on the side of the road, what if a machine with no conscience decided it would rather hit your car than stop to help a stranded motorist? Scientists who work on such projects discuss these and many other hypotheticals. One might call this the “age of emergence”.

What is to prevent rogue law enforcement agencies from using these weapons against American citizens? Should the government be held responsible if such machines turn against innocent people? And if one could hypothetically build such software, what would the public’s responsibility be if such an incident took place? Would there be hell to pay?

Will such machines be controlled by humans or by some type of AI program? Such software could control a police cruiser or an airplane without human intervention, simply following orders and showing up at the right time. But what if a blunder happened while in the air and the plane crashed? Wouldn’t that blunder have been made without any human intervention?

We are now in the early stages of developing artificially intelligent machines that may one day replace people in law enforcement. Will that ever be a good thing? If so, what will be the final price of such a project, and is the United States willing to pay it?

Some would say that we should trust the technology to regulate itself. I certainly hope we can find a way to ensure the systems work well. But we cannot keep an eye on every citizen all the time, so it would be wise to have some form of human oversight to check things. However, there are other concerns.
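The human-oversight idea can be illustrated with a minimal sketch. This is an assumption about how such a check might be wired in, not a description of any deployed system: a hypothetical approval gate where a human reviewer must sign off before an automated action executes.

```python
def execute_with_oversight(action, approve):
    """Run an automated action only if a human reviewer approves it.
    `approve` stands in for the human decision: any callable that
    takes the proposed action and returns True (allow) or False (block)."""
    if approve(action):
        return f"executed: {action}"
    return f"blocked: {action}"

# A stand-in reviewer policy that only permits low-risk actions.
def reviewer(action):
    return action in {"log incident", "dispatch patrol"}

print(execute_with_oversight("dispatch patrol", reviewer))  # executed: dispatch patrol
print(execute_with_oversight("use force", reviewer))        # blocked: use force
```

The point of the design is that the machine proposes and the human disposes: no consequential action runs unless the gate returns true.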

In many regards we already have too much information; the age of computers has given us too much data. We know there are terrorists being tracked through these sophisticated devices, and some of them may even be on the move, yet we cannot be 100 percent certain. Some of the information on these machines is going to be misused. That is why we need oversight in this area, just as we do when a police department monitors the public for possible threats against the country.

If you want to know what all of these machines might be doing, you could talk to one of my former professors in graduate school who worked with artificial intelligence. He said they were probably pre-programmed to kill if they ever felt threatened. I think he was referring to systems aimed at terrorists, but you could draw up a whole list of potential problem areas. It would be interesting to see all the different ways in which these machines could be abused.

Still, the concern remains that these machines might be abused in ways that infringe on the rights of citizens. Whether that will happen is another issue altogether. If we want to protect the American people from such things, we must ensure that the machines do not get out of hand. Please consider all this carefully.