Part 2: Several Available Strategies to Deal With Privacy Threats in the Metaverse
Privacy is essential to people and therefore deserving of protection; it is also important for autonomy. Liberalism is one of the core characteristics of the metaverse.
Liberalism as a political philosophy aims to protect the rights of the autonomous individual. Insofar as individual autonomy and privacy are values that a society wishes to protect and uphold, it is incumbent on policy-makers to protect them.
However, the metaverse poses challenges, most obviously to privacy [1]. It is therefore imperative that policy-makers prepare for these challenges. This article offers some recommendations for metaverse privacy protection from the perspective of policy-makers.
1. Legislation
First, strong legal limits need to be placed on the sorts of information that companies and agencies can gather on individuals and on what they can do with that information.
Laws and regulations are required to ensure that a) the powers of agencies and private companies to access information are strictly limited, b) users of these technologies know when their privacy might be threatened, and c) companies provide opt-out policies for their users.
Policy-makers might consider legislation ensuring that metaverse platforms give users the ability to alter settings to maintain associational privacy. Legislation might also be required to prevent the direct manipulation of metaverse users and to regulate and prevent the emergence of new addictions.
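As an illustration, associational privacy settings could let users control who can observe whom they interact with. The sketch below is hypothetical: the setting names and visibility levels are invented for illustration, not drawn from any existing platform.

```python
from dataclasses import dataclass

@dataclass
class AssociationalPrivacySettings:
    # Each field controls one facet of associational privacy.
    # Allowed levels: "nobody" | "friends" | "everyone" (hypothetical scheme).
    show_friend_list: str = "nobody"
    show_current_room: str = "friends"
    show_event_attendance: str = "nobody"

def visible_to(level: str, viewer_is_friend: bool) -> bool:
    """Evaluate one setting against the viewer's relationship to the user."""
    return level == "everyone" or (level == "friends" and viewer_is_friend)

prefs = AssociationalPrivacySettings()
print(visible_to(prefs.show_current_room, viewer_is_friend=True))   # True
print(visible_to(prefs.show_friend_list, viewer_is_friend=True))    # False
```

The point of legislating such controls is that the defaults shown here lean private; regulation could require privacy-protective defaults rather than leaving them to platform discretion.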
Providing incentives to encourage the creation of secure networks, online environments, and other digital technologies that will protect people’s privacy is necessary.
Such incentives could take the form of making certain breaches of privacy and threats to autonomy illegal, or of providing funding for companies, tech developers, and research groups to develop technological means of protecting privacy.
Previously, the development of peer-to-peer networks for content sharing (e.g., Napster [2]) stimulated increased research into digital watermarking (or Digital Rights Management) and audio/video fingerprinting. The development of analogous technologies for avatars and immersive worlds might go some way towards ensuring that only the genuine owner of an avatar can use it [3].
2. Contracts
Policy-makers will need to examine the sorts of contracts being offered to metaverse users and analyze the fairness of these contracts, particularly with respect to the protection of privacy.
Users who are eager to use a service will accept its terms and conditions, especially if they have already built the service into their daily lives. In that case, they are unlikely to scrutinize the terms of the contract.
This problem is exacerbated by contracts often being written in technical language that users may not understand. Finally, users are not required to consider the broader societal implications of the rights they give up when agreeing to these contracts.
A Deloitte survey of 2,000 U.S. consumers in 2017 found that 91% of people consent to terms of service without reading them. For younger people, ages 18–34, that rate was even higher: 97% did so [4]. Therefore, the right of the companies providing these technologies to dictate such contracts must be questioned.
3. Transparency
Similarly, users should be alerted to what sort of digital footprint they are leaving in the metaverse and who will be able to see it. Ensuring that individuals can see the data held about them, and remove it, would also be desirable.
This should also apply to data about a person’s physical self. Real-time interactions will gather a great deal of information about people — users should be permitted to access their own information. Promoting open-source software so that users can see whether there exist backdoors for security agencies could be considered [5].
However, open source is unlikely to benefit all users, since most lack the expertise to assess and analyze a program's safety. Nonetheless, it would benefit those with coding literacy.
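The access-and-removal right described above can be sketched as a per-user data ledger with export and erasure operations. The class and method names below are hypothetical, invented for illustration; real platforms would implement these as audited service endpoints (e.g., subject-access and right-to-erasure requests).

```python
from collections import defaultdict

class PersonalDataStore:
    """Hypothetical per-user data ledger supporting access and erasure requests."""

    def __init__(self):
        self._records = defaultdict(list)

    def log(self, user_id: str, category: str, value: str) -> None:
        # Every datum is tagged by category so users can later see
        # what was collected about them and of what kind.
        self._records[user_id].append({"category": category, "value": value})

    def export(self, user_id: str) -> list:
        """Access request: return everything held about this user."""
        return list(self._records.get(user_id, []))

    def erase(self, user_id: str) -> int:
        """Erasure request: delete the user's records, report how many were removed."""
        return len(self._records.pop(user_id, []))

store = PersonalDataStore()
store.log("alice", "gaze", "looked at ad #17 for 3.2s")
store.log("alice", "location", "virtual plaza, 14:02 UTC")
print(len(store.export("alice")))  # 2
print(store.erase("alice"))        # 2
print(store.export("alice"))       # []
```

The gaze and location entries illustrate the kind of real-time behavioral data the previous paragraph describes; transparency means the user can enumerate and purge exactly these records.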
4. Research Funding
Research funding bodies need to be made aware of the privacy threats in the metaverse and of how the metaverse will exacerbate these threats.
Companies and funding bodies that value privacy might aim to fund technological developments that would protect people's privacy and ensure that autonomy is not threatened. Research funding bodies that value privacy could, in theory, support alternatives that perform the same services as the metaverse giants but do not encroach on users' privacy.
5. Education
Finally, policy-makers will need to ensure that users are educated regarding the threats posed by the metaverse.
With education, users will be able to make informed choices regarding how they interact online and what sort of information they are willing to reveal. People should also be educated regarding their legal rights.
In this way, users can customize their privacy settings when participating in metaverse activities. Moreover, when their privacy is compromised, they can respond in a timely and effective manner.
Zecrey official website: Zecrey
Join our communities and follow us:
Medium: https://medium.com/@zecrey
Twitter: https://twitter.com/zecreyprotocol
Telegram: https://t.me/zecrey
Discord: https://discord.com/invite/U98ghQsJE5
References
[1] https://zecrey.medium.com/analysis-on-potential-threats-to-privacy-in-metaverse-477f51800df5
[2] https://en.wikipedia.org/wiki/Napster
[3] https://www.digitalbodies.net/ai/sensoriums-ai-driven-avatars-another-step-toward-the-metaverse/
[5] https://www.contrastsecurity.com/knowledge-hub/glossary/open-source-security-guide