From the iPhone’s Siri to Microsoft’s Cortana, Amazon’s Alexa, and Google’s soon-to-be-launched assistant for self-driving cars, artificial intelligence (AI) continues to develop at a rapid pace. These innovations also carry the burden of handling big data. As an emerging field, AI is often portrayed as machines with human-like characteristics; however, AI can encompass anything from a simple algorithm to autonomous weapons. Machine learning analyzes past behaviors and similar trends and suggests not only which news headlines to read, but also which goods and services to purchase and, most recently, which political parties to vote for. The primary objective of this project is to examine the impact of artificial intelligence and big data on the privacy rights of citizens and to determine whether existing regulations are sufficient to ensure the protection of those rights.
Current technological progress, globalization, and open communication create new opportunities for people to share their thoughts, exchange knowledge, and discuss multiple topics via such social media as Facebook, Twitter, or Instagram. The growth of the modern world is evident and fast, and its speed can hardly be controlled, predicted, or explained. Instead of trying to comprehend the essence of this process as a whole, it is suggested to understand the value of its constituent parts: big data, artificial intelligence (AI), and machine learning (Hansen). These technologies enable computers and other machines to update data, support interventions, and promote the distribution of information.
Artificial Intelligence and Big Data
The concept of artificial intelligence (AI) is complex and broad. However, researchers, theorists, and writers contribute in different ways to the creation of a clear and factual definition of the term. For example, McKenzie describes AI as machine intelligence designed to perform specific actions on the basis of gained knowledge and experience. Bates and Blackmore combine the terms “machine learning” and “artificial intelligence” and define AI as a popular technique for analyzing big data. Compared to ordinary computer programming, which establishes fixed rules and instructions, AI makes it possible for computers to develop their own rules by relying on experience.
These definitions share one common characteristic: gained experience has to be taken into consideration when discussing AI. This experience gives AI the ability to make decisions for users, decreasing the level of human involvement in a working process (Nitzberg et al.). Machines are used to evaluate people’s past achievements and decisions and to give clear and effective recommendations in the present. In other words, a thorough analysis of big data is required as a part of AI or machine learning.
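The contrast drawn above, between rules fixed by a programmer and rules derived from past experience, can be sketched in a minimal, hypothetical Python example. The function names and the tiny click-history dataset are invented for illustration; real machine learning systems use statistical models rather than simple frequency counts, but the principle is the same.

```python
from collections import Counter

def hardcoded_recommendation(hour):
    # Ordinary programming: the rule is fixed in advance by the developer.
    return "news" if hour < 12 else "sports"

def learned_recommendation(click_history):
    # A minimal "learning" step: the rule is derived from experience
    # (past clicks) rather than hardcoded.
    counts = Counter(category for _, category in click_history)
    most_common_category, _ = counts.most_common(1)[0]
    return most_common_category

history = [("mon", "politics"), ("tue", "politics"), ("wed", "sports")]
print(hardcoded_recommendation(9))      # always follows the fixed rule
print(learned_recommendation(history))  # follows whatever the data suggests
```

Note that the "learned" rule changes automatically as more behavior is recorded, which is exactly why such systems depend on continuous collection of personal data.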
Given recent definitions and changes in data exchange, special attention should be paid to how big data is recognized and evaluated. Greguras identifies the defining characteristics of big data as a large volume of data, a variety of data types, and velocity of data processing. It is wrong to believe that big data is a fad that cannot be explained. Big data includes all those datasets that can be added to a system and searched on the basis of the above-mentioned 3-V characteristics (Information Commissioner’s Office 6). Still, analyzing big data is a challenging process because few existing data analysis methods can be applied to this type of information. People should trace the evolution of AI and big data and investigate the details of their deployment.
Evolution of AI and Big Data
Though AI and big data are terms used to demonstrate how intelligent modern machines and computers can be, it is not correct to believe that the evolution of AI began only when computer science was introduced. Researchers and theorists argue that the history of AI can be traced to antiquity, when myths and stories were gathered, stored, and shared between generations. The birth of AI and big data as practical fields actually dates to the mid-1950s, when these two terms were not yet known but specific analytical tools were already used to uncover trends in datasets (SAS). New stored-program computers were introduced to the US population to meet initial business goals and analyze information obtained from different sources. Each new decade brought another achievement in computer science: the creation of robot companies, the demonstration of semantic nets, or the support of intelligent tutoring and reasoning.
Though the evolution of data management and restructuring through computers cannot be neglected, it is necessary to admit that much paperwork is still present. Big data has a rich history, and attempts to digitize data and avoid storage problems sometimes ended in failures and negative outcomes. Like big data, AI had many faces during its evolution, and each time it was improved through the regular and effective work of different people. Pioneering initiatives that began before the 1950s were characterized by unstable and complicated schemes for playing games, learning new languages, and developing rule-based systems. The period between the 1950s and 1980s was remembered for fewer ambitious promises and the possibility of combining research, practice, and funding (SAS). In the 1990s, researchers were challenged by limited computing power and a lack of data with which to evaluate their achievements. Over time, AI improved, shifting its priorities from knowledge-based learning to data-based learning and focusing on probability theory instead of personal attitudes.
Deployment of AI and Big Data
Given the variety of changes and improvements in the field of machine learning, big data deployment may take different forms and produce different outcomes. Nowadays, the issue of privacy is discussed by millions of people, and organizations continue talking about the necessity of promoting security and confidentiality by any possible means. Allen discusses how vulnerable ordinary users can be, asking them to change their passwords frequently, avoid sharing private information online, and refrain from discussing working conditions or other details. Privacy may take different forms, and the progress of AI and big data endangers each of them in different ways.
Each country and nation may have different attitudes toward what kind of information should stay anonymous and what facts can be available to the public. Allen differentiates four main types of privacy that may be threatened by fast-developing AI technologies. The main challenge of AI and big data deployment is connected with the necessity of understanding the goals of these technologies and their correct usage. Instead of protecting the interests and privacy of users, AI reveals information and makes it available online. The first type, defensive privacy, is used to protect the financial interests of the population. Today, financial losses online are caused by the effective work of blackmailers and identity thieves who do not differentiate people according to their social status, education, nationality, or age. The main feature of this type of privacy is that any loss is usually transitory (Allen). Still, it is hard to reverse the situation and avoid losses altogether. Defensive privacy is legally protected, but this protection may not be enough to predict the intentions of thieves and shield all potential victims.
The growth of AI and big data should also be discussed in terms of another type of privacy, one based on human rights. This protection is connected with existential threats that may result from data collection. In Europe, much attention is paid to human rights privacy, as its violation may end with long-lasting losses that are hard to avoid or accept (Allen). While political views, religious beliefs, and economic relationships may influence the promotion of human rights privacy, the next type, personal privacy, aims at protecting people against undesired observation and harmful intrusion. AI technologies threaten a person’s right to be themselves. Low self-esteem, frequent depression, anxiety, and even a degree of paranoia can be observed among people who fail to gain a proper understanding of personal privacy issues.
Finally, one should consider the role of contextual privacy in computer science and the development of big data. This type is characterized by unwanted intimacy, also known as the “ickiness factor” (Allen). Violation of this privacy may lead to outcomes ranging from the inability to gain control over one particular situation to the necessity of repairing relationships between different people.
Evolution of the Right to Privacy
Despite the fact that big data brings a number of benefits to human and business relationships, this issue should also be investigated from the point of view of significant privacy problems. Much data has to be regularly generated, and people are not always able to integrate processes and outcomes (Armerding). The evolution of the right to privacy has to be investigated to comprehend all possible risks for users and developers of AI and big data technologies.
The right to privacy remains one of the oldest and most stable rights, though it was not officially recognized by Congress or the courts until the 19th century (“The Evolution of the Concept of Privacy”). The 18th century was the period when the first ideas about the right to privacy and the relationship between a person and society were developed (SAS). In 1776, John Adams, a famous American statesman and the future second president of the United States, revealed his ideas on house searches and the right to privacy during the fight for independence (“The Evolution of the Concept of Privacy”). Adams explained that the fight for independence should not be used as an excuse for governmental interference in people’s lives but had to be treated as a chance to improve people’s understanding of what their rights and freedoms could be.
Then, new steps were taken to support the idea of privacy in society. At the beginning of the 1790s, the Fourth Amendment was introduced, under which the right of the people to be secure in their persons, houses, and papers against unreasonable searches was discussed (Kemper 12). To this day, this constitutional provision remains the most frequently quoted document used to explain people’s concerns about privacy and searches for information.
Accumulation of Concerns
Unfortunately, the Fourth Amendment was not enough to address all aspects of privacy and its protection. Though unreasonable search was defined as the basis of privacy protection, people could hardly understand what an unreasonable search meant. Therefore, the beginning of the 20th century was characterized by numerous debates on privacy in the United States. Americans were not confident that their information was safe and properly protected, and personal data was too fragile for experiments and evaluations. People believed that they were protected by the law and invoked Section 1 of the Fourteenth Amendment, under which no law could abridge the privileges of citizens or deprive them of life and liberty.
The number of concerns dramatically increased as soon as people learned that the National Security Agency (NSA), the country’s leading intelligence organization, was involved in gathering information about millions of people around the globe. The NSA’s activities were explained as a considerable part of national security, and its mission was to protect the US population against unpredictable and unfavorable conditions or people. Though the NSA aimed at protecting people, it violated the conditions defined in the Amendment because the government failed to give good enough reasons for gathering personal information without making people aware of the need. The violation of the right to privacy turned out to be a serious issue for discussion, and people wanted to know more about the worth of artificial intelligence technologies and the implications of having big data gathered within one organization.
The progress of the 21st century has already helped to change the thinking of millions of people. Privacy concerns have gained new boundaries and limitations due to such factors as the use of the Internet, the possibility of sharing personal information via Facebook or Instagram in a short period of time, and the habit of communicating frequently through services such as Twitter. Modern technologies require much personal information to be added online, and associated technologies increase the chance of a violation of the right to privacy. People are ready to spend days and nights protecting their personal data, creating strong passwords, and filtering which facts can be mentioned and which information is better left unrevealed.
To avoid complications and misunderstandings, people find it normal to turn to existing political arguments and laws to protect their personal data. However, people fail to realize that they themselves are the source of all information available online. They make calls, share photos and videos via Instagram, or leave feedback and tweets without even thinking how easy it is for a professional hacker to check geographical locations or break into a system. The right to privacy has undergone considerable changes since the Internet became a subject of so many interests (Bernal 1). It is hard for people to demand support and protection from the government, and the role of AI and big data cannot be ignored because they pose a serious threat to the privacy and safety of the US population.
Artificial Intelligence and Big Data as a Threat to Privacy
Artificial intelligence and big data are regarded as powerful and effective achievements of the 21st century. These technologies create new opportunities and facilitate search processes around the world. At the same time, big data is a treasure with a dark side: a number of new threats caused by mass awareness and unlimited control (Arora 1). AI is a chance to create a new view of personal information and improve the quality of facts available online. Though the possibility of having one place to search aims at facilitating regular activities, the impact of AI and big data on the right to privacy should not be considered purely positive.
Artificial Intelligence and Privacy
Decision-Making and Privacy
Debates about the connection between artificial intelligence and privacy issues cannot be stopped today for several important reasons. First, it is necessary to understand that AI is used to gather as much personal information as possible to make sure that human decisions are made in line with personal interests and preferences. It is thought that this service helps to save time, use past experience, and evaluate previous mistakes or shortcomings. The only challenge is that people do not even realize where the line lies between what they can share and what they actually want to share.
Therefore, there are two possible attitudes toward AI and big data technologies in human life. On the one hand, people like to submit requests and set expectations regarding their personal preferences, hoping that the system can facilitate their decision-making processes. AI and machine learning are based on algorithms with the help of which it is possible to read images or, for example, combine lab results and make accurate diagnoses in health care in a short period of time (Kobie). On the other hand, the same approach to caring for people and gathering information through AI technologies can be seen as problematic. The point is that not all patients can be asked for informed consent, which calls the importance of privacy into question. In some cases, decisions have to be made so fast that people do not have enough time to understand why they share such information or who may have access to it.
The same situation may be observed among the users of other services and the clients of different organizations. People are ready to give their personal information to make a decision at one point in time without thinking of how the same information may be used later. The result may be spam phone calls or annoying emails.
Biases in Algorithms
Second, it is hard to be confident in the appropriateness of the algorithms and schemes used in AI and big data records, so AI turns out to be a threat to privacy that cannot be controlled by users. Masse and Hidvegi state that these algorithms may carry certain biases and limitations that promote discrimination by diminishing human accountability for the information. The example the authors give is evident bias in one academic facility in London, where users with non-European-looking names faced obstacles during university admission and in access to human resources or social security (Masse and Hidvegi). Discrimination is one of the worst outcomes of privacy violations, and people cannot deal with it even if they stop sharing information online.
Biases and privacy fears over AI are observed in different spheres. For example, when AI is used as a crime-prevention tool by US police, the developers of such programs point to its ineffectiveness because of its time-consuming nature and regulatory scrutiny that may be biased or out of date (Lever). In other words, to make sure that all aspects of AI-based programs are effective, it is necessary to remove all possible biases and other sources of discrimination and unfair judgment.
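One common way to detect the kind of bias discussed above is to compare outcome rates between groups in an automated decision log, a check often called demographic parity. The short Python sketch below is hypothetical: the group labels and decisions are invented, and real fairness audits use richer metrics, but it shows the basic idea of surfacing a disparity that would otherwise stay concealed inside an algorithm.

```python
# Hypothetical audit: compare approval rates between two groups ("A", "B")
# in a log of automated decisions. A large gap flags possible bias.

def approval_rate(decisions, group):
    # Fraction of approvals among decisions belonging to the given group.
    relevant = [approved for g, approved in decisions if g == group]
    return sum(relevant) / len(relevant)

decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"approval-rate gap: {gap:.2f}")
```

In this invented log, group A is approved 75% of the time and group B only 25%, a gap that a human reviewer should investigate before the system is trusted with real decisions.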
Freedom and Expression Online
Modern people have become used to living under certain rules that support their privacy and autonomy. Human rights have to be protected, but first of all, they have to be recognized and understood. Bernal compares human rights and civil liberties and explains that people cannot stop worrying about such issues as “freedom of association, freedom of expression, freedom of assembly, freedom of religion” defined through the prism of economic and social rights (10). As a result, one might think that AI is a chance to address different societal problems and control people’s online activities globally.
In addition to the idea that private information is filtered by special AI technologies, people should know that all this information and these online activities are under governmental control all the time. AI is used to gather large amounts of data provided by people in order to develop assumptions about them (Masse and Hidvegi). Governments try to develop special frameworks, policies, and digital securities with the help of which online users can protect their rights. However, not much attention is paid to the fact that no one can control or measure the activities of the government, whose representatives are responsible for data flow and the quality of information. Government surveillance of the Internet has already become an issue for multiple discussions, and regulations help to clarify when the government can use its authority and employ AI in ways that affect privacy.
To reduce the level of threat created by AI and governmental participation, broad consent should be offered to all people. They have to understand that when they share information online and contribute to the creation of big data through AI, they give the government permission to check and save all their personal information. The government’s responsibilities lie in creating special policies, looking at the bigger picture, and thinking about the future of the current state of technology. The role of the government cannot be neglected in AI discussions. However, it is necessary to understand that in addition to a number of positive aspects, controls, and evaluations, the government is the body that has access to all personal data of every citizen.
Big Data, Customization, and Personalization
Nowadays, big data is characterized by a number of positive and negative aspects. Armerding identifies five serious risks associated with big data: discrimination, shareholders’ access, decreased anonymity, governmental participation, and data brokering. Big data is an opportunity for people to achieve various goals, but if personal data falls into the wrong hands, the outcomes are hard to predict.
Discrimination in Big Data
The threat of access to big data is based on the inability to control all discriminatory attempts. People today face discriminatory problems when they try to find a job, apply for credit, or even choose a seat on an airplane. Predictive analytics and unfair algorithms exist in the system and cannot be avoided. Though discrimination is illegal, it can hardly be traced every time. Therefore, when big data has to be analyzed and applied to decision-making or other activities, there is a threat that supporters of discrimination will make their impact and change or misinterpret some information.
The impact of discrimination on big data interpretation has grown considerably. This information can be used to educate the population, choose customers, and predict recidivism (Armerding). Still, because discrimination is automated, not all analysts and users can identify these cases. Discrimination remains concealed behind illicit criteria and the population’s vulnerability. The dark side of big data is the inability to understand where the classification of needs and interests ends and where unfair, prejudiced discrimination begins. Sometimes such classification may help to identify health risks, marketing preferences, or cultural concerns. However, in a variety of cases, it is wrong to rely on automated big data records without considering how dangerous discrimination can be.
Sharing of Personal Information
If AI is defined as a tool to gather information about people, big data is the result of this tool’s work. It is hard to identify the scope of big data and the facts used in these records. Therefore, organizations that use big data to attract new customers, record symptoms, or draw general conclusions should understand the level of their responsibility and the ways personal information may be exposed to shareholders or other interested parties. There are many well-known cases in which personal information about former federal workers or university members was stolen, and it was necessary to change passwords or even names in order to avoid card fraud or identity theft (Armerding). The heterogeneity of information and the speed with which personal data is processed and analyzed create certain challenges and threats to privacy (Arora 2). These two factors must be considered when establishing the boundary between shareholders and the people who give their personal information. Interests, rights, and freedoms have to be properly balanced in big data records.
The approaches and sources chosen to generate big data make it almost impossible to anonymize personal data. Individuals have to share their private information and other details that may be important for the workplace. Their employers provide guarantees that all their data is safely stored and cannot be used for anything but professional purposes. In their turn, employees also store their personal information in the chosen system so as not to lose a single detail in a working process. The presence of personal information in one place makes it vulnerable: hackers or other interested parties can break into a system and steal as much information as they need.
Still, people may be warned about the threats of adding private information to websites or sharing it online. However, such activities as watching TV, using mobile phones, or driving a GPS-connected car do not seem as dangerous and harmful as they actually are (Armerding). Anonymity has already lost its uniqueness in the digital world, and, therefore, big data can no longer stay anonymized.
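Why anonymity fails in practice can be shown with a minimal, hypothetical sketch. Even after names are removed, combinations of quasi-identifiers such as ZIP code, birth year, and gender are often unique to one person; the records below are invented, and the check shown is a simplified version of the uniqueness test behind the k-anonymity idea.

```python
from collections import Counter

# Invented "anonymized" records: (ZIP code, birth year, gender).
# Names were stripped, yet some attribute combinations remain unique.
records = [
    ("90210", 1985, "F"),
    ("90210", 1985, "M"),
    ("10001", 1990, "F"),
    ("10001", 1990, "F"),
]

# Count how often each quasi-identifier combination occurs.
combo_counts = Counter(records)

# A record whose combination occurs exactly once points to one person.
unique = [combo for combo, n in combo_counts.items() if n == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

In this toy dataset, half of the records can be matched to a single individual by anyone holding an outside source (a voter roll, a social media profile) that lists the same three attributes, which is exactly the re-identification risk the paragraph above describes.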
Almost the same situation occurs with big data when it is discussed in terms of governmental work. In addition to the fact that the government controls which AI technologies are used to gather information, governmental representatives continue using all personal data for their own purposes. Armerding notes that modern “Americans are in more government databases than ever”. The FBI is not the only organization that keeps special files and cases on people. Personal information about age, gender, or race is not the only information available to the government. It is easy for some people in special positions to learn about an individual’s childhood, past fights, or even the reasons for moving from one home to another, in addition to financial information, fingerprints, or phone numbers.
The government can use big data in pursuit of any goal without even considering whether it has the right to do so or whether someone’s privacy is being violated. The task of the government is to promote social security and control the activities of all people whose identities may cause concern. The government continues gathering data, putting under threat the concept of privacy, the freedom of expression, and the necessity of sharing personal information.
Brokers and Big Data
Finally, big data may become a product that can be sold to customers. Many organizations located in different parts of the globe deal with gathering, analyzing, and exchanging personal information. Some companies operate legally and have special patents and rights to take the steps they are involved in. However, despite governmental intentions to reduce the number of illegal organizations and services, many people find new ways to break the law and avoid penalties. Brokers find new opportunities to earn money and sell personal information, using the law and current legislation to support and justify their activities.
Given the hidden and evident conditions under which many organizations offer their services, big data, as well as AI technologies, creates certain challenges and provokes new threats to privacy. People are not confident about the necessity of sharing their personal information online. Still, they cannot always avoid this step when they have to post their resumes, ask for help, organize meetings, or communicate with people. Big data is a huge opportunity in such fields as business, marketing, management, and health care. At the same time, it is a threat to privacy as soon as it is used as a product for sale, a reason for blackmail, or a factor in discrimination.
Regulations to Protect Privacy
One of the possible ways for ordinary people and Internet users to protect their privacy rights is to rely on the existing regulations and investigate what federal and state laws or contracts can be offered. In the United States, business relationships between an organization and a client may be defined in a number of ways. In this paper, particular attention will be paid to tort law and contracts that are legally defined and approved.
The list of tort laws in the United States is impressive. Some of them clearly describe what can be done to protect human rights, while others help to better understand the essence of privacy, mobile commerce, and Internet security.
The Federal Trade Commission Act is one of the oldest and most influential laws in the United States. It was signed by President Woodrow Wilson in 1914 to identify the conditions under which unfair competition and deceptive practices affecting commerce were defined. Under this law, people learned how to develop relationships between consumers and organizations, how to make legal recommendations, and which behaviors were appropriate or unacceptable. However, Miller admits that this act lacked the authority to allow investigation of foreign spamming (160). To meet the expectations of the population and explain how safety and privacy could be promoted among users, an amendment known as the US SAFE WEB Act was introduced in 2006, under which consumers received protection from spam, spyware, and fraud. Both acts allow cooperation and information sharing between foreign agencies to identify cases of spamming.
In 1999, the Financial Services Modernization Act, also known as the Gramm-Leach-Bliley Act, was approved by Congress. It aims at the legal restructuring of insurance and security companies to remove barriers and identify the conditions under which information sharing can be legalized. At that moment, many companies were in need of such a document because it could help to clarify their responsibilities and obligations. Financial services had to be offered to different people, but companies were not able to determine who was suited for loans and trust and whose reputation was not as stable as it had to be. Though this act deals with financial operations and the creation of new organizations, it can also be used to explain privacy issues and support the idea of business integration.
The issues of privacy and the necessity of sharing personal information are frequently discussed in the field of health care and medicine. Therefore, to better understand how to protect private information and when it is better to share it, the Health Insurance Portability and Accountability Act was created in 1996. This federal law aims at protecting patients’ confidentiality, securing personal information, controlling administrative costs, and keeping health insurance affordable to all people. According to this act, the privacy rule has to be observed. It regulates the conditions under which personal health information may be disclosed, including such circumstances as a patient’s death, mental incapacity, or child abuse. People have the right to change and update their information upon request. Violations of this act may lead to fines of up to $250,000 or imprisonment from 1 to 10 years, depending on the purposes of the information disclosure.
Over time, more attention has been required for online communication and the exchange of information. Many companies store much data within their systems, so the necessity of such a regulation as the Electronic Communications Privacy Act can be justified for several reasons. Approved in 1986, this act is focused on the different types of communication available to people, including wire, electronic, and oral. As soon as information is stored via email or other electronic forms, it becomes private. Phone conversations and Skype meetings may not be stored by users, but they can be easily obtained from providers. However, as a rule, a special court order is required for an outside person to read or listen to this information. Privacy protection is promoted, and the interception of private communication is a crime that has its price.
Just as federal laws help to control privacy issues and support individuals, there are state laws within whose frames the protection of private information is discussed. In the United States, one well-known piece of legislation is the California Electronic Communications Privacy Act. It is an example of a state privacy law that can be used to support the rights of people and limit the government’s ability to access the electronic communications of the population on law enforcement grounds (Jolly). Its conditions are similar to those of the federal Electronic Communications Privacy Act, with the difference that its protection is available to the citizens of California. There is another important limitation of this state act: today, public schools, police officers, and even parents can use the system in order to find information about an individual. This act is under attack in the state, and privacy protection should be promoted through direct discussion of the problem, open debate, and communication.
Importance of Contracts
Taking into consideration the necessity to follow current US regulations and laws, many organizations and individuals have to work hard to clarify their responsibilities and the conditions under which personal information can be stored and processed. Big data may be used for different purposes, and the task is to avoid violating the First Amendment or other acts that protect Americans from losing control over their personal information. Data security contracts are usually signed when one party has to share personal information and the other party must explain what may be done with that information within legal frames. These contracts are also known as confidentiality contracts or non-disclosure agreements. Whatever their names and forms, they serve as legal evidence that protects personal information from unwanted misuse (Miller 203). Their violation can lead to court proceedings, which increases an individual’s chance of keeping their data secure and reaching an effective resolution in a short period of time.
At the same time, contracts help the parties that obtain personal information from ordinary people to identify the scope of their work and the ways this data may be used. As a rule, informed consent forms and special sections are developed to explain why personal information may be needed, in what form it can be disclosed, and what outcomes may follow once some part of it becomes available to other people. The drafters of contracts should be as careful and rigorous as the parties who sign them: a single mistake or omission may be disputed in court and result in fines or imprisonment. In some cases, people are ready to share their information, or even to sell their identities, in order to earn money or achieve certain results; here, the idea of contractualizing personal information applies. Such contracts establish that users can sell or rent their information, and the contracting parties should explain every detail of the procedure in order to protect personal interests, achieve professional goals, and avoid harming society.
Insufficiency of Contracts and Tort Law
Although contracts and laws aim at clarifying the relationships between the owners and users of personal information and rest on current regulations of social and Internet activities, concerns about the sufficiency of these documents remain. Laws and contracts share considerable similarities: both seek to prevent breaches, avoid financial losses, and address the various damages that may be caused by the wrong use or inappropriate sharing of private information. Tort reforms and contracts help to reduce legal costs, emotional strain, ambulance chasing, and unreasonable worry (Lombardo). At the same time, these documents and the legalization of activities have demerits of their own: the obligation to prove a case at law, financial limitations, the inability to predict a court’s ruling, and a poor understanding of what actually goes wrong and when.
Individuals turn to tort law to protect their rights and their freedoms of expression and association. However, one cannot be confident that every wronged victim is able to defend their position and reduce their financial and emotional losses. A personal understanding of a case, an individual interpretation, or a subjective description of a situation may not be enough for a successful court decision. Sometimes a great deal of information must be gathered to prove a single position, and many people have to be involved in discussions, sharing their knowledge and understanding of the case. Their opinions and accounts may vary, creating the need for further investigations and explanations.
Not all victims of online fraud or unfairly disclosed information are prepared for the price that must be paid in court. Though laws and contracts may be used to punish offenders, forcing them to pay damages or even serve several years in prison, victims have to continue living with a certain portion of their information revealed and available to many people. Unpredictable negative outcomes of such activities, and concerns about the insufficiency of contracts and laws, continue to grow. People want to believe that the law can protect them. Still, some individuals are willing to break the law even when fully aware of the possible consequences, because they know they can pay the price, persuade the court, or strike a deal with the victim. The existence of such situations calls into question the adequacy of modern legislation and the choice between creating new laws and respecting existing ones.
One of the main problems in the relationship between privacy and tort law is that people tend to regard tort law as a tool that protects individual privacy when personal information is shared. Yet it is necessary to remember that tort law can also be used to diminish privacy and to pressure people. Through laws, the government can gain sufficient powers to survey big data. In addition, property owners, academic employers, manufacturers, and healthcare workers may gather enough private information to investigate people’s lives, judge their suitability for a particular field, and demand that certain facts be shared. Tort law, in other words, is also a power to demand that personal information be revealed. This level of responsibility is high indeed, and not all organizations know how to exercise it properly. Therefore, intentions to promote privacy through tort law may become the very mistakes through which private information is disclosed.
A similar situation and similar explanations apply to contracts and their role in business relationships. The pitfalls of contracts cannot be ignored, including unpredictable data leaks and unwanted marketing drawbacks. Nowadays, there is a tendency to draft contracts with multiple additions and explanations given in smaller font or below the main body of the contract. Not all people find it necessary to read every sentence and statement in such contracts: they trust their partners and contractors and sign documents without a thorough reading, even after being warned of its necessity. Sometimes unclear points are identified and explained during discussions and evaluations. However, oral discussions and written documents serve as two different types of evidence, and the written part remains the only credible source of information.
The pitfalls of contracts take many forms. Given that every sentence or statement can be interpreted in different ways, each unclear point has to be investigated before a final signature is given. Today, a contract can be re-printed in a few minutes, and it is better to spend those minutes creating a new, improved contract than to spend days and nights in court and pay fines. A contract is a part of tort law, with a number of obligations and responsibilities defined for all the parties involved. However, only one party has to share information and trust that the right to privacy will not be broken. Contracts can hurt and destroy human lives in a short period of time. Every individual has to be informed about the power of a contract, whether it is signed in a restaurant surrounded by strangers or discussed at a round table with business partners and lawyers.
Unfortunately, neither a properly developed contract nor tort law has only positive or only negative sides. Communication between people, the level of trust, and external political, economic, or even environmental factors can determine the conditions of contracts and the necessity to disclose or share personal information that becomes available through artificial intelligence technologies and unfiltered big data. Digital rights may be interpreted in different ways, and it is the government’s task to create equal conditions for them, promote trust, and support confidentiality. Sometimes, personal information has to be revealed in order to save human lives or improve a situation. However, even if positive results are achieved by breaking the law, the person responsible for the disclosure remains a violator and must accept legal responsibility. The law is subtle and unpredictable; therefore, each step has to be taken thoroughly, with all privacy issues identified and explained from both personal and legal perspectives.
In general, the evaluation of artificial intelligence, big data, and current privacy laws shows that being forewarned is not the same as being forearmed. The existing balance of merits and demerits of the right to privacy calls the worth of the law into question. People may know a great deal about current US regulations and legislation, sign contracts and agreements to support their business relationships, and decide whether or not to share personal information. However, it is hard for an ordinary person to understand how powerful artificial intelligence can be. Big data is gathered over long periods, and people often do not even remember when or why they found it necessary to leave some piece of personal data behind.
The use of social media such as Facebook, Twitter, or Instagram is regarded as one of the technological achievements that allow close relationships to develop between people in different parts of the world. Still, given the opportunities available to modern hackers and the social needs of different communities, personal information offered via social media or stored on local systems can create new threats and damage for everyone. Personal data and privacy are two closely connected terms whose essence and importance are undergoing change. Readiness for these changes and their outcomes depends on how much people know about artificial intelligence and what can be done with big data.
The government takes every possible step to protect people against the threats of artificial intelligence and big data by creating new laws and regulations. However, it becomes clear that the degree of control the government still holds over people and their personal information cannot be measured. Privacy is no longer something that can simply be offered to people; it is something governmental and special organizations may use both for and against their citizens, and something that challenges the sphere of law and policymaking. In many cases, people are untroubled by the fact that their personal information is revealed; in other situations, data disclosure leads to negative outcomes, destroys families, and interrupts everyday life. The impact of artificial intelligence and big data on privacy rights is profound, and citizens should understand that knowledge of the relevant regulations is a step toward protecting their privacy. However, it can never be the final or only answer, given constantly changing regulations, the role of the government, and the variety of personal attitudes.
Allen, Christopher. “The Four Kinds of Privacy.” Life with Alacrity. 2015.
Armerding, Taylor. “The 5 Worst Big Data Privacy Risks [and How to Guard against Them].” CSO. 2017.
Arora, Ritu, editor. Conquering Big Data with High Performance Computing. Springer, 2016.
Bates, Richard, and Nicholas Blackmore. “The Privacy Challenges of Big Data and Artificial Intelligence.” Kennedys. 2017.
Bernal, Paul. Internet Privacy Rights: Rights to Protect Autonomy. Cambridge University Press, 2014.
“The Evolution of the Concept of Privacy.” EDRi. 2015.
Greguras, Fred. “Legal Issues in Big Data.” Water Online. 2017.
Hansen, Steven. “How Big Data Is Empowering AI and Machine Learning?” Hackernoon. 2017.
Information Commissioner’s Office. Big Data, Artificial Intelligence, Machine Learning and Data Protection. 2017.
Jolly, Ieuan. “Data Protection in the United States: Overview.” Thomson Reuters Practical Law. 2017.
Kemper, Bitsy. The Right to Privacy: Interpreting the Constitution. The Rosen Publishing Group, 2014.
Kobie, Nicole. “AI Has No Place in the NHS if Patient Privacy Isn’t Assured.” Wired. 2017.
Lever, Rob. “Privacy Fears over Artificial Intelligence as Crimestopper.” Phys.org. 2017.
Lombardo, Crystal. “Tort Reform Pros and Cons.” Vision Launch. 2015.
Masse, Estelle, and Fanny Hidvegi. “Artificial Intelligence: What Are the Issues for Digital Rights?” Access Now. 2017.
McKenzie, Baker. “Risky Relationship between AI and Data Privacy.” Lexology. 2016.
Miller, Roger LeRoy, editor. Business Law: Text & Cases – The First Course – Summarized Case Edition. Cengage Learning, 2016.
Nitzberg, Mark, et al. “AI Isn’t Just Compromising Our Privacy – It Can Limit Our Choices, Too.” Quartz. 2017.