The future of online content moderation and user safety

Sacha Lazimi, Co-founder and CEO of Yubo, explains the steps the company has taken to keep its users safe on the platform.

Generation Z is the first cohort of young people to grow up with social media at their fingertips, and as a result this generation is dramatically reshaping the internet and how we interact with each other online. As the ways in which Gen Z communicates become ever more digitised, many believe that online platforms have a duty of care to protect young people from the harmful content and behaviours that exist online.

The Online Safety Bill has been introduced to further improve the safety of young people online, but while this bill and the broader regulatory groundwork are still in progress, there are steps that online platforms can and should take now to protect their users.

Content moderation is crucial to creating a safe online environment in this fast-paced digital world, and social platforms should not wait for updated regulations before implementing advanced protections for their online communities. When the lines between online and offline are blurred, prioritising content moderation is essential to keeping users safe.

At Yubo, a live social discovery platform built specifically for Gen Z, we have extensive safeguards in place to make the platform as safe as it can be for our users. We pride ourselves on constantly pushing the boundaries of what is possible in online safety and moderation, and as a result we have been recognised as a leader in online safety and innovation by industry experts and safety organisations around the world.

Technology provides necessary safeguards
Technology plays a huge role in keeping young people safe online and in moderating content. Social media companies are already turning to existing technologies such as artificial intelligence (AI) to do so, but many are still behind the curve in putting them to use.

From the beginning, we at Yubo knew that in order to be successful, we had to put the safety of our users at the forefront of everything we did. Since then, we've made massive strides in developing solutions that monitor and moderate content on the platform and, in part, user behaviour and conduct. For example, we became the first social platform globally to implement real-time moderation and intervention in livestreams: advanced AI technology captures second-by-second screenshots and flags any violations of our Community Guidelines to human Safety Specialists, who can then intervene in the livestream in real time.
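The screenshot-and-flag loop described above can be sketched in a few lines of Python. The frame format, classifier and review queue below are illustrative stand-ins, not Yubo's actual system: a real deployment would run a trained image model and a distributed queue consumed by Safety Specialists.

```python
import queue
from dataclasses import dataclass

@dataclass
class Frame:
    stream_id: str
    timestamp: float
    image_bytes: bytes

def classify_frame(frame):
    """Hypothetical stand-in for an AI image classifier: returns the
    Community Guideline categories a frame appears to violate."""
    # A real system runs a trained model; this placeholder just looks
    # for a marker in the bytes so the sketch stays runnable.
    return ["violation"] if b"unsafe" in frame.image_bytes else []

review_queue = queue.Queue()  # consumed by human Safety Specialists

def moderate_stream(frames):
    """Screen one frame per second; anything flagged is queued for
    real-time human review and possible intervention."""
    for frame in frames:
        labels = classify_frame(frame)
        if labels:
            review_queue.put((frame, labels))

moderate_stream([
    Frame("live-1", 0.0, b"safe image"),
    Frame("live-1", 1.0, b"unsafe image"),
])
```

The key design point is the hand-off: the model never takes action on its own, it only routes frames to humans who decide whether to intervene.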

We also provide safety advice in the form of real-time pop-ups. If a user breaks our Community Guidelines, we explain the specific reason(s) behind any penalty issued by our Safety Specialists. We also send pop-up alerts to users who we detect are about to send personally identifiable information (such as an address or phone number), as a caution to make them think twice about whether they'd like to share.
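A minimal sketch of that pre-send check might look like the following. The patterns and warning text are hypothetical examples only; production PII detection is far more sophisticated and locale-aware.

```python
import re

# Illustrative patterns only, not an exhaustive PII detector.
PII_PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "street address": re.compile(
        r"\b\d+\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.IGNORECASE),
}

def pii_warning(message):
    """Return pop-up text if a draft message appears to contain
    personally identifiable information, otherwise None."""
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(message):
            return f"This looks like it contains a {label}. Share anyway?"
    return None
```

Crucially, the pop-up only prompts reflection; the user still makes the final call on whether to send.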

Alongside these in-house solutions, Yubo also partners with technology providers such as Yoti and Hive for their advanced age verification and audio moderation capabilities. In doing so, Yubo became the first major social platform to verify 100 per cent of its users and to introduce audio moderation – two of the biggest challenges in moderation today.

Partnering with Yoti, we use their age verification technology, which estimates an individual's age with 98.9 per cent accuracy, to sharpen age gating on Yubo. Asking people to verify their age when they sign up helps to ensure that no one under 13 is able to access the platform, and that interactions between age groups, specifically teens and adults, are limited. This technology also reduces the number of fake profiles and bots on the platform. Through verifying 100 per cent of our users' ages, we have reduced our user base by 10-20 per cent in order to protect our digital community.
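The two age-gating rules above – no under-13s, and limited mixing between teens and adults – can be sketched as simple policy functions. The thresholds and group names are assumptions for illustration, not Yubo's exact rules.

```python
MINIMUM_AGE = 13  # no one younger may join
ADULT_AGE = 18    # assumed boundary between teen and adult communities

def signup_decision(estimated_age):
    """Hypothetical age-gating rule applied at sign-up, using the
    age estimate returned by verification."""
    if estimated_age < MINIMUM_AGE:
        return "rejected"
    return "teen" if estimated_age < ADULT_AGE else "adult"

def can_interact(age_a, age_b):
    """Limit interactions across the teen/adult boundary: both users
    must be admitted and in the same age community."""
    groups = {signup_decision(age_a), signup_decision(age_b)}
    return "rejected" not in groups and len(groups) == 1
```

Separating the sign-up decision from the interaction check means the same age estimate can enforce both rules consistently.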

Yubo also recently introduced audio moderation technology to the platform, testing it among English-speaking users in the US, UK, Australia and Canada. Hive's audio moderation technology works by recording and transcribing 10-second snippets of audio in user livestreams. The resulting text is then scanned using AI, and transcripts containing words or phrases that violate our Community Guidelines are flagged and reviewed by our team of Safety Specialists. While this technology is still in its infancy, it is already showing great promise for tackling the incredibly complex nature of detecting and moderating verbal interaction.
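The transcribe-scan-flag pipeline can be sketched as follows. The `transcribe` stub and phrase list are placeholders standing in for a real speech-to-text service and a real guidelines ruleset, so the flow stays runnable.

```python
def transcribe(snippet):
    """Stand-in for a speech-to-text step over a ~10-second audio
    snippet; here the 'audio' is already text bytes so the sketch
    remains self-contained."""
    return snippet.decode("utf-8")

GUIDELINE_VIOLATIONS = {"banned phrase"}  # illustrative list only

def moderate_audio(snippets):
    """Transcribe each snippet, scan the text for violating phrases,
    and collect flagged transcripts for human Safety Specialists."""
    flagged = []
    for snippet in snippets:
        text = transcribe(snippet).lower()
        if any(phrase in text for phrase in GUIDELINE_VIOLATIONS):
            flagged.append(text)
    return flagged
```

Working on short snippets keeps latency low enough for near-real-time review, while routing only the flagged transcripts keeps the human workload manageable.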

Technology, therefore, plays an important part in keeping young people safe online, but it is only one piece of the puzzle.

The need for a human touch point
Undoubtedly, technology is helping us to keep pace with the ever-changing nature of the internet and communication online. However, in order to reach its full potential, it must work in tandem with human moderators. Having a clear human touch point in user safety is invaluable.

Yubo has a Safety Specialist team who monitor the platform 24/7 and who, when the technology flags concerning content or a discrepancy in a user's age, for example, take immediate action. Our technology would not be what it is today without this team providing the nuance and context necessary when analysing potential infractions and taking appropriate action.

In addition to our Safety Specialist team, we also have a board of safety experts, made up of respected thought leaders from organisations such as the Diana Award, NCMEC and Thorn. This board helps to guide us in our decisions regarding user safety on the platform, offering expertise on the dangers young people face online and the solutions we can put in place to combat these dangers.

Finally, we have our Emergency Escalation team. This team has direct contact with law enforcement and has been created to quickly respond to any serious threats or signs of troublesome behaviour on our platform that the Safety Specialists feel pose a serious risk.

Giving users the tools to keep themselves safe
As social media platforms deploy Safety Specialists and technologies to keep their users safe, a key factor which must not be overlooked is education.

While Gen Z undoubtedly deserve their 'digital natives' title, they still need to be taught how to behave and respect others online. With 99 per cent of our users being Gen Z aged 13-25, Yubo believes in user education, and we aim to teach users about online safety best practices, respectful behaviour and accountability.

Our Safety Hub, a centralised place where community rules are defined, outlines the respectful behaviour we expect from our users while they use our platform. We continually update this content as we navigate the changing landscape of the digital world.

However, we understand that not all of our users will take the time to read these guidelines, which is why we've also created tools that put safety directly in users' hands. As mentioned previously, we use educational pop-up alerts when a user violates our Community Guidelines or tries to share personal information, to help them understand and think twice about their decision. Our 'Muted Words' feature also lets users block specific words, emojis or phrases they find personally harmful from appearing in the app. They can choose who to mute the words from – for example, all users or just those who aren't on their friends list.
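The Muted Words check described above reduces to a small filter. The function below is a hypothetical sketch, assuming two scopes ('everyone' and 'non-friends') as described, not Yubo's actual implementation.

```python
def should_hide(message, sender_is_friend, muted_words,
                scope="everyone"):
    """Hypothetical 'Muted Words' check: hide a message containing a
    muted word, emoji or phrase, respecting the user's chosen scope
    ('everyone' or 'non-friends')."""
    if scope == "non-friends" and sender_is_friend:
        return False  # friends are exempt under this scope
    text = message.lower()
    return any(muted.lower() in text for muted in muted_words)
```

Because the filter runs per recipient with that user's own word list and scope, each person shapes their own experience without affecting anyone else's feed.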

All of the technology features and innovations we use and develop at Yubo keep our users as safe as they can be while protecting their data. Safety does not come with a catch, and users' privacy should never be compromised, especially when it comes to young people.

Implementing new technologies, complemented by human moderation, and driving safety through user education are great starting points for creating safe online spaces for young people. There is still far to go, but if social media companies act on these now, we will begin to see real change in the safety of young people online.