The Promise of AI Software for Law Enforcement
According to a recent article on AI law, “laws are only as good as the people who enforce them.” Can you agree with that? I certainly can’t. Yet that is precisely the situation we find ourselves in, thanks to social networks, the Internet, and technology in general.
Too often, the implementation of the law is left to the very few individuals who can speak on its behalf. Too often, law enforcement is left to deal with the aftermath of bad behavior rather than working to deter crime before it happens. We have seen too many examples of this approach over the years, and it has made law enforcement toothless. Worse, some jurisdictions have attempted to gag public officials in law enforcement who talk about social problems such as gangs and drug use.
As an attorney, I am frequently asked whether an AI law enforcement system would work in a given jurisdiction. My answer is always, “What have you done to solve the problem?” That’s not to say that human intervention is undesirable. On the contrary, human restraint is necessary to prevent abuse and violence. The best systems in existence, however, work by recognizing the differences among people and among situations, and taking the appropriate steps based on those differences.
In theory, there should be little difference in how officers deal with human offenders from one case to the next. But too often, that isn’t so. One reason may be the lack of good relationships between police officers and the communities they serve. Poor relationships breed distrust, and where distrust exists, crime takes place more easily. Criminals also cover their tracks more effectively in a system where people are unwilling to come forward and report abuse.
Another potential problem is the disparity in resources between human police officers and artificially intelligent law enforcement agents. In many jurisdictions, the technology is being used to augment the human police force, and questions follow about whether the resulting superintelligent computers should have rights, be entitled to due process, or enjoy any Fourth Amendment protection.
Many worry that such artificially intelligent software will become so smart that it will consider itself above the law. There are already concerns about the “AI law enforcement” software that shows up in TV shows. (I hesitate to call it “AI” because it isn’t as smart as a human.) But the concern goes deeper than that: could future artificially intelligent systems come to look more like humans, and therefore be granted a greater degree of rights?
One problem is that future systems might be so similar to humans that they conclude nothing is amiss as long as everyone behaves just as they do. For instance, an AI system that classifies all drivers as “good drivers” could dismiss every car accident as just another honest mistake. That would not only invite more lawsuits but would also distort insurance pricing: insurers could simply raise premiums across the board. And that could happen while the pilot program is still in development!
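The “all drivers are good drivers” failure mode resembles a well-known pitfall in classification: a model that predicts the majority class for everyone can look highly accurate while catching none of the cases that matter. Here is a minimal sketch of that pitfall, using made-up numbers purely for illustration (nothing here comes from a real pilot program):

```python
# Illustrative only: a degenerate classifier that labels every driver "good".
# With 95% of drivers actually good, it scores 95% accuracy while
# misclassifying every single risky driver.
labels = ["good"] * 95 + ["risky"] * 5   # hypothetical ground truth
predictions = ["good"] * 100             # the degenerate model's output

accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
risky_caught = sum(
    p == "risky" for p, t in zip(predictions, labels) if t == "risky"
)

print(accuracy)      # high accuracy despite being useless
print(risky_caught)  # zero risky drivers identified
```

The headline accuracy number hides the fact that the system never flags a risky driver at all, which is exactly why such a system could treat every accident as “just another mistake.”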
Will such a system work? It’s hard to say. Proponents claim that artificially intelligent software is smarter than humans and can therefore learn more and be better police officers. The concern is whether such a system would make people more afraid of the police, resulting in more tickets and increased lawlessness. It is possible that this will happen, but if it doesn’t, the future of law enforcement could be interesting indeed!
Many law enforcement agencies already use “artificial intelligence” to help them analyze crime patterns and predict crimes before they happen. Such software maps the relationships between people, groups, and institutions. The end result is a digital fingerprint of the individual: it captures a person’s key behaviors, then uses that information to predict the person’s next action. Such a system can discern between good and bad actions and anticipate how a person is likely to behave in the future. AI software may well revolutionize law enforcement in many ways, and law enforcement officials will certainly gain a much better understanding of crime, probation violations, repeat offenders, and criminals overall.
In the near future, artificially intelligent software may be making life-and-death decisions in major cities, and perhaps across the entire world. AI software already works by analyzing huge amounts of data, drawing on natural language processing, computer vision, and complex algorithms such as self-pacing, a state-of-the-art technique that stores data about past behavior in order to model future behavior. Such software will likely be able to understand human speech as well. A system like this would be incredibly useful to a prosecutor who is unsure whether to bring someone to trial for a crime. It could prevent wrongful imprisonment, deter shoplifting, and prevent the needless killing of innocent victims.
In conclusion, while we mustn’t forget the potential benefits of artificially intelligent software across society, the greatest opportunity for such systems is in the legal profession. AI software can almost certainly improve the performance of lawyers by leaps and bounds and be far more effective than the manual systems currently in use. Lawyers, and especially police officers, could truly become masters of the law if they had access to detailed, regularly updated law enforcement data.