AI development can be as thrilling as it is frightening. Public cameras that scan our faces are one of the developments that raise more than a few eyebrows.
For many, this scenario is straight out of science fiction – a world where artificial intelligence reports on you every second of every hour. It raises enormous concerns about every citizen’s right to privacy.
Are the efficiency gains for government worth the increased automation? Or do these methods overstep the boundaries of privacy, creating an environment of constant surveillance in one form or another? Let’s examine public service automation and the concerns people raise about it.
What does public service automation include?
Any artificial intelligence or machine learning-based system that performs public service tasks without human input constitutes public service automation. This ranges from currently available technologies, like automatic tax reporting and diverting traffic after accidents, to potential developments such as self-driving cars. You have likely already seen AI-related projects across many industries, from IT to healthcare.
How can AI and other advanced tools pose a threat to privacy?
Giving a machine the task of managing day-to-day infrastructure seems beneficial. It keeps functions consistent and minimizes the chance of human error. Many studies examining the potential influence of AI and automation have reached similar conclusions.
Still, such systems can severely harm your digital and physical privacy. A few known threats include the following.
Cameras and sensors can track your every move. This data usually ends up on a server somewhere, and the chances are that unsavory individuals with the right tools and enough time can access it too.
There is little need to put a tail on you when a system tracks your every move. You can rarely be your true self, which stunts your personal growth, as you constantly remain in fear of being watched. In the worst-case scenario, a single embarrassing act can ruin your reputation, because the recorded data never goes away.
Of course, there is also a natural discomfort in having all your actions surveilled by unknown entities. Such tracking already happens online, but many AI-related projects illustrate how it can be extended into the physical world. Deep North is one of the startups applying computer vision to security camera footage.
Essentially, Deep North supplies technology for malls or restaurants wishing to ensure that their customers follow the rules.
Speaking of data: it is a weapon that helps governments understand the behavior of the masses. Your driving habits, shopping preferences, favorite shows, and political views can all be exploited through big data. Several governments worldwide have reportedly already used it to polish their image during the COVID-19 pandemic.
The murky nature of accountability
When machines run the infrastructure, accountability for wrongdoing becomes murky. For example, if an automated car crashes, it is unclear whether the event was a malfunction or something more sinister. Some suggest that the creators of the algorithms are responsible when something goes wrong, and thus manufacturers should be held liable for any negative outcomes. However, laws are needed to establish this practice.
Measures to secure your privacy
However, all is not lost, at least for now. We have listed some of the ways people can push back on various AI projects. One natural instinct is to speak your mind or create petitions against behavior you deem inappropriate.
Remain vigilant with legislation
Just as you would scrutinize a private enterprise, you can go through government documentation to see whether new legislation is worthwhile. Public administration bills usually don’t make it through without the consent of local representatives. So, the next time any legislation regarding online information or internet usage comes up, it is a good idea to study it thoroughly.
Keep in mind that significant changes can start with subtle hints. It is an excellent idea to attend a public hearing to put forth your stance regarding any bill. If more people resist suppressive legislation, your local representative is more likely to vote against it.
Exercise caution with your online identity
While many AI projects focus on the physical world, pervasive tracking already happens online. Each action users take on the internet gets recorded in one way or another. Of course, many entities claim to anonymize data. However, studies have shown how easy it can be to trace activities back to individual users once enough data accumulates. Even metadata can reveal significant details about internet users.
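To make the re-identification risk concrete, here is a minimal Python sketch of the classic linkage technique: joining an "anonymized" dataset with a public one on shared quasi-identifiers such as ZIP code, birth year, and gender. All names and records below are fabricated for illustration; real studies have applied the same idea to actual datasets.

```python
# Toy re-identification sketch: an "anonymized" dataset is linked to a
# public, named dataset via quasi-identifiers. All data is fabricated.

anonymized_browsing = [
    {"zip": "02139", "birth_year": 1987, "gender": "F", "sites_visited": 412},
    {"zip": "90210", "birth_year": 1962, "gender": "M", "sites_visited": 98},
]

public_roll = [
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1987, "gender": "F"},
    {"name": "John Roe", "zip": "90210", "birth_year": 1962, "gender": "M"},
]

def reidentify(anon_rows, named_rows):
    """Match anonymized rows to named rows on shared quasi-identifiers."""
    matches = []
    for anon in anon_rows:
        key = (anon["zip"], anon["birth_year"], anon["gender"])
        candidates = [p for p in named_rows
                      if (p["zip"], p["birth_year"], p["gender"]) == key]
        if len(candidates) == 1:  # a unique match de-anonymizes the record
            matches.append((candidates[0]["name"], anon["sites_visited"]))
    return matches

print(reidentify(anonymized_browsing, public_roll))
# -> [('Jane Doe', 412), ('John Roe', 98)]
```

The point is that removing names alone is not anonymization: when a combination of seemingly harmless attributes is unique, it acts as a fingerprint.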
When browsing the internet, strive to keep yourself anonymous. Never use your real name in online forums or your social media handles. Keep your financial information safe from shady transaction websites and use messengers that use end-to-end encryption.
A Virtual Private Network is another excellent way to become more private online. This application hides your IP address and encrypts your internet traffic, making it far more difficult for entities to track your behavior and location. For instance, you can look up your IP address to see the location associated with it. When you connect to a VPN server, this information is hidden: web entities see the VPN server’s location instead of yours.
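To see why hiding your IP matters, here is a small, self-contained Python sketch showing that every web server learns the IP address of whoever connects, simply as part of how HTTP works. Running locally, the address it sees is 127.0.0.1; a real website would see your public IP, or the VPN server’s IP if you use one.

```python
import http.server
import threading
import urllib.request

seen_ips = []  # IPs the server observes for incoming connections

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The server sees the client's IP address on every request.
        seen_ips.append(self.client_address[0])
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence default request logging

# Bind to port 0 so the OS picks a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()

print(seen_ips[0])  # -> 127.0.0.1
```

A VPN does not stop this mechanism; it just changes which address the server sees, substituting the VPN server’s IP for your own.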
The dystopian future from the likes of 1984 and Blade Runner is nigh upon us. Well, not really, but you get the picture. The foundations for that future are already in place. But there is still time. With suitable measures, you can keep your privacy intact. After all, it is the last line of defense protecting your freedom of speech and your right to live without fear.
Additionally, governments need to create clear-cut laws and regulations for AI. For instance, responsibility for negative outcomes involving AI should be clearly assigned in such documents. Sadly, laws tend to lag behind technology, with many projects being deployed faster than lawmakers can respond.