Technology

Publishing our Internal Enforcement Guidelines and Expanding our Appeals Process

By Monika Bickert

One of the questions we’re asked most often is how we decide what’s allowed on Facebook. These decisions are among the most important we make because they’re central to ensuring that Facebook is both a safe place and a place to freely discuss different points of view.

For years, we’ve had Community Standards that explain what stays up and what comes down. Today we’re going one step further and publishing the internal guidelines we use to enforce those standards. And for the first time we’re giving you the right to appeal our decisions on individual posts so you can ask for a second opinion when you think we’ve made a mistake.

We decided to publish these internal guidelines for two reasons. First, the guidelines will help people understand where we draw the line on nuanced issues. Second, providing these details makes it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines – and the decisions we make – over time.

The Policy Development Process

The content policy team at Facebook is responsible for developing our Community Standards. We have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety and terrorism. Many of us worked on issues of expression and safety long before coming to Facebook. I worked on everything from child safety to counter-terrorism during my years as a criminal prosecutor, and other team members include a former rape crisis counsellor, an academic who has spent her career studying hate organizations, a human rights lawyer, and a teacher. Every week, our team seeks input from experts and organizations outside Facebook so we can better understand different perspectives on safety and expression, as well as the impact of our policies on different communities globally.

Based on this feedback, as well as changes in social norms and language, our standards evolve over time. What has not changed – and will not change – are the underlying principles of safety, voice and equity on which these standards are based. To start conversations and make connections people need to know they are safe. Facebook should also be a place where people can express their opinions freely, even if some people might find those opinions objectionable. This can be challenging given the global nature of our service, which is why equity is such an important principle: we aim to apply these standards consistently and fairly to all communities and cultures. We outline these principles explicitly in the preamble to the standards, and we bring them to life by sharing the rationale behind each individual policy.

Enforcement

Our policies are only as good as the strength and accuracy of our enforcement – and our enforcement isn’t perfect.

One challenge is identifying potential violations of our standards so that we can review them. Technology can help here. We use a combination of artificial intelligence and reports from people to identify posts, pictures or other content that likely violates our Community Standards. These reports are reviewed by our Community Operations team, who work 24/7 in over 40 languages. Right now we have 7,500 content reviewers – more than 40% above the number at this time last year.

Another challenge is accurately applying our policies to the content that has been flagged to us. In some cases, we make mistakes because our policies are not sufficiently clear to our content reviewers; when that’s the case, we work to fill those gaps. More often than not, however, we make mistakes because our processes involve people, and people are fallible.

Appeals

We know we need to do more. That’s why, over the coming year, we are going to build out the ability for people to appeal our decisions. As a first step, we are launching appeals for posts that were removed for nudity/sexual activity, hate speech or graphic violence.

Here’s how it works:

If your photo, video or post has been removed because it violates our Community Standards, you will be notified, and given the option to request additional review.

This will lead to a review by our team (always by a person), typically within 24 hours.

If we’ve made a mistake, we will notify you, and your post, photo or video will be restored.
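The three steps above amount to a simple state machine for a removed post. The sketch below is a hypothetical illustration of the workflow as described, not Facebook's actual system; all class and function names are invented, and the appealable categories are the three launch categories named earlier.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    REMOVED = auto()        # taken down for a Community Standards violation
    UNDER_APPEAL = auto()   # the user requested additional review
    RESTORED = auto()       # a reviewer found the removal was a mistake
    UPHELD = auto()         # a reviewer confirmed the violation

# Appeals launch with these violation types only.
APPEALABLE = {"nudity/sexual activity", "hate speech", "graphic violence"}

@dataclass
class Post:
    post_id: str
    violation: str
    status: Status = Status.REMOVED

    def request_appeal(self) -> bool:
        # A removed post can be appealed only for the launch categories.
        if self.status is Status.REMOVED and self.violation in APPEALABLE:
            self.status = Status.UNDER_APPEAL
            return True
        return False

    def review(self, removal_was_mistake: bool) -> None:
        # Appeals are always reviewed by a person, typically within 24 hours.
        if self.status is not Status.UNDER_APPEAL:
            raise ValueError("post is not under appeal")
        self.status = Status.RESTORED if removal_was_mistake else Status.UPHELD

post = Post("p1", "hate speech")
assert post.request_appeal()
post.review(removal_was_mistake=True)
print(post.status.name)  # RESTORED
```

Note that `request_appeal` returns `False` for violation types outside the launch set, matching the plan to support more types over time.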

This post shows an example that could have been incorrectly removed and can now be appealed.

We are working to extend this process further, by supporting more violation types, giving people the opportunity to provide more context that could help us make the right decision, and making appeals available not just for content that was taken down, but also for content that was reported and left up. We believe giving people a voice in the process is another essential component of building a fair system.

Participation and Feedback

Our efforts to improve and refine our Community Standards depend on participation and input from people around the world. In May, we will launch Facebook Forums: Community Standards, a series of public events in Germany, France, the UK, India, Singapore, the US and other countries where we’ll get people’s feedback directly. We will share more details about these initiatives as we finalize them.

As our CEO Mark Zuckerberg said at the start of the year, “we won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.” Publication of today’s internal enforcement guidelines – as well as the expansion of our appeals process – will create a clear path for us to improve over time. These are hard issues and we’re excited to do better going forward.

Monika Bickert is the VP of Global Product Management at Facebook

Modupe Gbadeyanka is a fast-rising journalist with Business Post Nigeria. Her passion for journalism is amazing. She is willing to learn more with a view to becoming one of the best pen-pushers in Nigeria. Her role models are the duo of CNN's Richard Quest and Christiane Amanpour.


TikTok Invests Fresh $200K in AI Media Literacy in Africa


By Modupe Gbadeyanka

An additional $200,000 will be invested in Artificial Intelligence (AI) media literacy initiatives across Sub-Saharan Africa, TikTok announced during its third annual Sub-Saharan Africa Safer Internet Summit in Nairobi, Kenya.

The platform hosted government officials, regulators, online safety partners and industry leaders for the event, reinforcing its commitment to collaborative approaches to online safety.

The funds will be provided as ad credits to help local organisations in the region expand AI media literacy.

This investment builds on the company’s initial $2 million AI Literacy Fund, launched in November 2025, which awarded funding to 20 global non-profits to create content that boosts public understanding of AI.

In Sub-Saharan Africa, TikTok initially supported three organisations to advance digital literacy and combat misinformation.

“With the rapid advancement of AI, we are committed to educating our community online, so they feel empowered to have responsible experiences with AI, whether that’s as viewers or creators.

“We are partnering with trusted local organisations that communities already know and rely on, because their expertise and deep local connections are essential to making AI literacy programs truly impactful,” the Global Head of Partnerships, Elections and Market Integrity at TikTok, Mr Valiant Richey, stated.

Earlier, the Head of Government Relations and Public Policy for Sub-Saharan Africa at TikTok, Ms Tokunbo Ibrahim, said, “As we host the 3rd Annual Safer Internet Summit here in Kenya, our mission is clear: to share learnings, insights, tackle common challenges and collaboratively advance actionable solutions that protect citizens online.

“By bringing together a diverse coalition of policymakers, tech innovators, and creators, we are ensuring that the conversations we have at this Summit are all-inclusive and lead to a more resilient digital landscape.”

The summit featured expert panels and discussions on critical topics, including TikTok’s Trust and Safety efforts, protecting young people online, and policy frameworks for responsible AI governance.

A key highlight of the event was showcasing how TikTok uses AI to transform how people share their creativity and discover new passions, while ensuring the community remains safe through transparent and responsible AI practices.

The platform also shared how recent advancements in AI are helping it moderate content faster and more consistently at scale, by improving automated moderation and equipping human teams with better moderation tools.

With over 100 million pieces of content uploaded daily to TikTok, these advances, which work alongside human moderation teams, are helping get violative content down faster, reducing the likelihood of the community seeing it.

According to the latest Community Guidelines Enforcement Report, covering Q3 2025, TikTok removed over 14 million videos across Sub-Saharan Africa, with 96.7 per cent detected and removed proactively by automated technology, underscoring TikTok’s commitment to proactive moderation and swift action.

Interswitch Technovation 4.0 Hackathon Winners Share N10m


By Modupe Gbadeyanka

The winners of the Technovation 4.0 Hackathon, themed The Wicked Hackathon, organised by Interswitch, have been given N10 million in cash prizes for their efforts.

At the one-day finale event, which took place on Wednesday, March 4, 2026, at the Interswitch Innovation Lab and Co-Working Space, the money was shared among the top teams whose innovative solutions stood out during the rigorous multiple phases of the competition.

Team Quickteller Fashion emerged as the overall winner, securing the grand prize of N4 million for a solution that impressed judges with its originality, practicality, and strong strategic relevance. Team Kampe claimed second position with N2.5 million, while Team Stable placed third, receiving N1.5 million. Up to N300,000 worth of cash prizes were also awarded to the fourth, fifth and sixth qualifying teams.

For nine months, cross-functional teams from across the organisation collaborated to conceptualise, validate, develop, and refine solutions, moving from raw ideas to minimum viable products (MVPs) with ready-to-market potential and deployment across the business.

The atmosphere at the grand finale was one of preparation and anticipation as the top nine teams presented their innovations through live demonstrations and detailed pitches, fielding questions from a distinguished panel of judges before the top three winners were selected. Each presentation highlighted rigorous validation processes, thoughtful market considerations, and a strong emphasis on measurable impact.

While many of the solutions remain confidential due to their strategic relevance, the diversity and depth of ideas showcased during the hackathon’s final underscored the organisation’s growing culture of intrapreneurship and structured innovation. The projects illustrated how technology-driven thinking can unlock efficiencies, strengthen operational capabilities, and open new pathways for growth across the digital payments and commerce ecosystem.

“Technovation continues to reflect who we are as an organisation, bold, forward-thinking, and deeply committed to building impactful solutions from within. Over the years, we have seen ideas conceived during this programme evolve into meaningful capabilities that strengthen our ecosystem.

“The passion, discipline, and ingenuity demonstrated by our teams this year reinforce our belief in the power of African innovation to solve complex challenges and shape the future of technology on the continent,” the Chief Innovation Officer for Interswitch, Ms Adaobi Okerekeocha, stated.

Google Introduces Yorùbá, Hausa Language Support for AI Search Features


By Modupe Gbadeyanka

Google has expanded language support for its AI Search features, adding Yorùbá and Hausa in Nigeria.

This is part of a broader effort to make AI more inclusive across the continent, with support now extending to a total of 13 African languages.

Through AI Overviews and AI Mode, speakers of both Nigerian languages can utilise AI-powered Search experiences in their mother tongue for quick summaries and conversational exploration.

This means existing AI features in Google Search are now accessible to people like the student in Kano asking a question in Hausa, and the trader in Ibadan seeking advice in Yorùbá.

By addressing language barriers, this update ensures that technology reflects the identity and culture of the people it serves. With this expansion, more people can now use AI Mode to ask complex questions in their preferred language, while exploring the web more deeply and naturally through text or voice.

The 13 languages now supported across Africa include Afrikaans, Akan, Amharic, Hausa, Kinyarwanda, Afaan Oromoo, Somali, Sesotho, Kiswahili, Setswana, Wolof, Yorùbá, and isiZulu.

These languages were chosen based on vibrant search activity across the continent, ensuring that the AI experiences reach the communities that need them most.

Commenting on the development, the Communications and Public Affairs Manager for Google in West Africa, Taiwo Kola-Ogunlade, said, “Building a truly global Search goes far beyond translation — it requires a nuanced understanding of local information.

“With the advanced multimodal and reasoning capabilities of our custom version of Gemini in Search, we’ve made huge strides in language understanding, so our most advanced AI search capabilities are locally relevant and useful in each new language we support.

“This is about ensuring Nigerians can converse with Search in their mother tongues, making information more helpful for everyone.”

To use AI Overviews and AI Mode in the local language, users must open the Google app on an Android or iOS device, or via the Web. They are required to tap on AI Mode within the Search experience. Thereafter, they can type or speak the question in their preferred language, such as Hausa or Yorùbá, and let the AI guide the journey.
