Online Security Consortium Publishes First Safety Standards

- Wednesday, January 26, 2022 @ 09:36 am
- Contributed by: kellyseal
The OASIS Consortium published its first set of safety standards for the next generation of the Web, which includes the metaverse. The goal is to create a blueprint for how companies and developers should approach online safety going forward.
The Consortium was founded in 2021 shortly after the insurrection at the U.S. Capitol, when a mob terrorized and threatened lawmakers ahead of the certification of Biden's election as President. Investigations into the riot have shown how social media, particularly Facebook with its outsized influence, has helped radicalize people's political viewpoints.
According to Time magazine, the Consortium addresses issues across a number of tech industries, including dating apps, video games, and the immersive tech platforms leading the way into the metaverse, or what's called Web 3.0. Leaders from Riot Games, The Meet Group (which owns dating apps like MeetMe, LOVOO, Growlr, and Skout), and others have helped develop these standards for improving safety and privacy as people spend more time online.
OASIS Consortium founder Tiffany Xingyu Wang, chief strategy and marketing officer at Spectrum Labs, which develops AI technology, told Time: "There's no consensus or definition of good: Most platforms I talked with do not have a playbook as to how to do this. And then that's not even mentioning the emerging platforms. There's a huge gap in terms of fundamental governance issues, which is not a tech problem."
The standards are meant to help tech companies handle safety concerns, from privacy to interactions with governments and law enforcement. Wang says the goal isn't to leave the work to government intervention, but to have the companies themselves figure out a solution.
One of the group's concerns is the metaverse, which Match Group and Bumble have already said will be part of their apps' platforms going forward. A so-called "3D Internet" means the user's experience is immersive and happens in real time, so support (both technical tools and human staff) needs to ramp up to ensure timely intervention or moderation when problems arise. In addition, the metaverse deals with virtual currency and exchange, meaning users can transfer their digital identities and goods across platforms, which increases the risk of scams and privacy loopholes.
Some of the recommendations include appointing an "executive-level officer of trust and safety," investing in moderation tools that provide oversight before a post, account, or group becomes a problem, and partnering with organizations that combat hate speech so companies can be more sensitive to interactions on their platforms and reduce harm to users. The Consortium hopes to grow the number of companies willing to adopt these standards.
According to Time, Wang says one of the challenges will be the speed at which the metaverse expands, and that the standards might not be able to keep up, which is part of the reason the Consortium doesn't want to rely solely on government intervention.
“I want safety, privacy and inclusion to be three core pillars to how we operate in a digital society,” she told Time.