Introduction
General Data Protection Regulation (GDPR) principles govern the appropriate use of individuals’ information online and limit practices that may harm users and other stakeholders. As an influential social media and networking company, the organization should therefore ensure compliance with these regulations to avoid penalties and reputational harm. The company’s business model relies heavily on data collected from its users and fed into neural networks, which personalize the experience and increase the time users spend on the platform. However, as noted by the EU regulator, the company has not satisfied all GDPR requirements. The following sections explain the basics of neural networks, address the ethical concerns that may arise from personalization, evaluate the company’s practices, and propose recommendations for adhering to GDPR principles.
An Explanation of Neural Networks
Neural networks in computers are comparable to neurons in the human brain because they serve a similar purpose. A neural network is a machine learning system in which computers use algorithms to mimic human reasoning and identify correlations across vast amounts of data (Aggarwal, 2018). These networks consist of input, hidden, and output layers, which interact to classify actions and objects. The input layer receives data from various sources and prepares it for processing. The hidden layers process the received data, and the output layer produces the final classifications that make the information usable (Aggarwal, 2018). For example, in a neural network built to differentiate bicycles from cars, the input layer would receive images, the hidden layers would process the data based on traits such as the number of wheels, size, and shape, and the output layer would specify whether the object is a car or a bicycle. Thus, the components of neural networks work together to make data more meaningful and usable for decision-making.
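To make the layered structure concrete, the sketch below shows a single forward pass through a tiny network with one input, one hidden, and one output layer. It is a minimal illustration, not the company’s model: the two input features (wheel count and object length) and the randomly initialized weights are hypothetical stand-ins for the image-based features a real classifier would learn.

```python
# Minimal sketch of an input -> hidden -> output pass, assuming two
# hypothetical numeric features (wheel count, length in metres) instead of
# raw images, and untrained random weights for illustration only.
import numpy as np

def forward(features, w_hidden, b_hidden, w_output, b_output):
    """Run one forward pass and return class probabilities."""
    hidden = np.maximum(0, features @ w_hidden + b_hidden)  # hidden layer (ReLU)
    logits = hidden @ w_output + b_output                   # output layer
    exp = np.exp(logits - logits.max())                     # softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(2, 4))   # 2 input features -> 4 hidden units
b_hidden = np.zeros(4)
w_output = rng.normal(size=(4, 2))   # 4 hidden units -> 2 classes
b_output = np.zeros(2)

sample = np.array([2.0, 1.8])        # e.g., 2 wheels, 1.8 m long
probs = forward(sample, w_hidden, b_hidden, w_output, b_output)
print({"bicycle": round(float(probs[0]), 3), "car": round(float(probs[1]), 3)})
```

With trained rather than random weights, the output probabilities would reflect the traits the hidden layer has learned to associate with each class.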
Using Neural Networks for Personalization and Potential Ethical Concerns
Organizations with online customers and big data analytics companies use artificial intelligence and machine learning to keep their systems effective. In particular, neural networks help institutions personalize each user’s experience by anticipating their needs (Kliestik et al., 2022). Modern systems are designed to collect data such as location, time spent on a particular page, navigation paths, followed links, and other user behaviors. This data is then fed to neural networks, which generate individually targeted recommendations for friends, news, groups, and other activities (Aggarwal, 2018). However, using neural networks for personalization raises significant ethical concerns, such as racial discrimination and manipulation, especially when a ‘black box’ classification system hides the algorithms from users. For example, the system may treat racial characteristics as a decision boundary and act on them, thereby reinforcing social inequalities (Kliestik et al., 2022). In addition, an organization may use information to exploit consumers’ buying habits and desires without their knowledge. Thus, hidden processing and potential biases create room for issues such as discrimination and unfair exploitation.
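The following sketch illustrates, in highly simplified form, how behavioral signals can drive personalized ranking. The feature names (dwell time, clicks, location match) and the numeric profiles are hypothetical assumptions for illustration; a production recommender would use learned embeddings and far richer signals.

```python
# Simplified sketch of personalized ranking, assuming hypothetical behavioural
# features (dwell time, click affinity, location match) and a hand-made profile.
import numpy as np

def rank_items(user_vector, item_vectors):
    """Rank candidate items by dot-product affinity with the user profile."""
    scores = item_vectors @ user_vector
    return np.argsort(scores)[::-1]  # item indices, best match first

# Hypothetical profile built from dwell time, clicks, and location signals.
user_vector = np.array([0.9, 0.2, 0.7])
item_vectors = np.array([
    [0.8, 0.1, 0.6],   # item 0: similar interests, nearby
    [0.1, 0.9, 0.2],   # item 1: different interests
    [0.7, 0.3, 0.9],   # item 2: similar interests, very local
])
print(rank_items(user_vector, item_vectors))  # -> [2 0 1]
```

The same opacity concern applies even to this toy example: a user who only sees the ranked output has no visibility into which collected signals drove the ordering.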
The Sections of the GDPR Associated with Personalization
The GDPR stipulates various standards organizations should adhere to when collecting, using, and sharing individuals’ data. The sections most relevant to personalization and the featured case study include purpose limitation, storage limitation, transparency, confidentiality, accuracy, and accountability. Purpose limitation restricts the use of collected data to the purposes stated at the time of collection. Storage limitation caps the data retention period to prevent unnecessary storage (Chang et al., 2019). Transparency obliges the organization to be clear about its intentions and reasons for data collection. Accuracy requires keeping records up to date and correcting them when needed. Confidentiality demands that data be secured against unauthorized access. Accountability places the responsibility for adhering to these principles on the company in question (Chang et al., 2019). Going against these principles can result in penalties, litigation, and losses.
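As a purely illustrative sketch of how two of these principles might be enforced in software, the hypothetical record below carries a declared purpose and a retention window, and refuses access for any undeclared use or after the retention period has lapsed. The class, field names, and 90-day window are assumptions, not a prescribed GDPR implementation.

```python
# Hypothetical sketch of purpose limitation and storage limitation in code:
# each record stores its declared purpose and an expiry window, and any
# read for another purpose or after expiry is rejected.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Record:
    value: str
    purpose: str          # purpose declared at collection time
    collected_at: datetime
    retention: timedelta  # storage limitation window

    def read(self, requested_purpose: str) -> str:
        if requested_purpose != self.purpose:
            raise PermissionError("purpose limitation: undeclared use")
        if datetime.now(timezone.utc) > self.collected_at + self.retention:
            raise PermissionError("storage limitation: retention period expired")
        return self.value

rec = Record("user@example.com", "personalization",
             datetime.now(timezone.utc), timedelta(days=90))
print(rec.read("personalization"))   # allowed: matches the declared purpose
# rec.read("advertising")            # would raise PermissionError
```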
An Evaluation of the Company’s Practices and Recommendations to Ensure Adherence
The featured social media organization uses neural networks to personalize its customers’ experience, which raises several concerns because it collects vast amounts of user data. The data collected may be used beyond the purposes the company specifies, meaning the company may not adhere to the purpose and storage limitation standards. Additionally, the extensive data can encode hidden biases, presenting issues associated with confidentiality and accuracy (Aggarwal, 2018). Current artificial intelligence and machine learning trends for protecting data privacy and confidentiality include federated learning and differential privacy. Federated learning avoids storing sensitive information on a central server and instead trains models on users’ devices, sharing only model updates (Kliestik et al., 2022). Differential privacy, in turn, adds calibrated statistical noise to data or query results so that individual records cannot be re-identified. Taking advantage of these developments can help the organization comply with GDPR guidelines.
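A minimal sketch of the differential privacy idea is the Laplace mechanism shown below: calibrated noise is added to an aggregate statistic before release, so the published value reveals little about any single user. The epsilon value and the click data are illustrative assumptions, not tuned privacy parameters.

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# noise calibrated to the query's sensitivity (1 for a count) and an assumed
# privacy budget epsilon is added before the statistic is released.
import numpy as np

def private_count(values, epsilon=0.5):
    """Release a noisy count; smaller epsilon means stronger privacy."""
    true_count = len(values)
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

users_who_clicked = ["u1", "u2", "u3", "u4", "u5"]
print(round(private_count(users_who_clicked), 2))  # e.g., 5.87
```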
The organization should enhance transparency by explaining how it collects and uses consumer data. It should also limit data collection to information relevant to personalizing the user experience, retain that information only for as long as it is needed, and implement proper security measures to protect user privacy. Federated learning is an efficient existing practice the organization can adopt because the approach keeps raw data on users’ devices and shares only model updates, reducing the risk of attackers accessing personal information on servers or in transit (Kliestik et al., 2022). Similarly, the organization should reduce the amount of data stored in its systems and allow users to update their information to ensure accuracy. Adhering to these standards will ensure that the organization does not violate GDPR principles.
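To illustrate the federated approach, the sketch below performs a simplified federated averaging loop: each device computes a model update on its own synthetic data, and the server only ever averages those updates. The linear model, learning rate, and synthetic data are assumptions chosen to keep the example small; real deployments add secure aggregation and encryption on top.

```python
# Simplified sketch of federated averaging (FedAvg): devices train locally on
# private data and share only weight updates; the server never sees raw records.
# The feature and label arrays are synthetic placeholders.
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    predictions = features @ weights
    gradient = features.T @ (predictions - labels) / len(labels)
    return weights - lr * gradient

rng = np.random.default_rng(1)
global_weights = np.zeros(3)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for _ in range(10):  # communication rounds
    client_weights = [local_update(global_weights, X, y) for X, y in devices]
    global_weights = np.mean(client_weights, axis=0)  # server averages updates

print(np.round(global_weights, 3))
```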
Conclusion
Using neural networks to personalize the user experience on webpages is effective and valuable for companies because it improves efficiency and brings customers back. However, all organizations should observe GDPR guidelines to prevent repercussions such as data falling into the wrong hands, breaches, and misuse. The featured company can defend its need to collect vast amounts of consumer information, but it should adapt by providing more information about how that data is used and by limiting unnecessary storage. Ensuring privacy, confidentiality, and accuracy, and limiting storage, will benefit the company in the long run by avoiding data misuse issues and non-compliance penalties.
References
Aggarwal, C. C. (2018). Neural networks and deep learning. Springer.
Chang, C., Li, H., Zhang, Y., Du, S., Cao, H., & Zhu, H. (2019). Automated and personalized privacy policy extraction under GDPR consideration. In Wireless Algorithms, Systems, and Applications: 14th International Conference, WASA 2019, Honolulu, HI, USA, June 24–26, 2019, Proceedings 14 (pp. 43–54). Springer International Publishing.
Kliestik, T., Zvarikova, K., & Lăzăroiu, G. (2022). Data-driven machine learning and neural network algorithms in the retailing environment: Consumer engagement, experience, and purchase behaviours. Economics, Management and Financial Markets, 17(1), 57–69.