Every social media network has its own niche: TikTok for short-form video creation and discovery; X for concisely expressing thoughts and opinions and sharing news; Twitch for creating and viewing gaming livestreams — the list goes on.
But at their core, all social media networks are very similar. They've been created to empower users to express themselves, share content and build connections with others. Because these networks encourage self-expression and the exchange of conversation and content, safety and moderation measures are a large part of every social platform. It is, of course, up to each network to decide where it strikes the balance between user safety and freedom of expression.
At Wizz, a social discovery app that offers teens around the world a space to freely express themselves, our top priority in building the app was to create a space where our users felt comfortable being themselves and safe engaging with the community. That comfort is what allows them to make meaningful connections with others.
To create this safe space that prioritizes users' comfort in expressing themselves, we built a technology-driven moderation system focused on age verification, content moderation and user privacy. But building it was no easy feat.
Here's a look at four challenges we overcame while developing our moderation system — and how mobile app developers can best address them as they build safety ecosystems of their own, for social media networks or other community-focused projects.
1. Figure out how to make users feel safe without making them feel censored
What content is okay to share? What language should be flagged? It all comes down to an individual app's content policy. This is a constant challenge for every social media app because it means defining what's universally offensive and what's more subjective.
An effective way to create a content policy is to bring together a diverse group of people to determine what behaviors, language and content are deemed universally offensive by a majority of them. The target audience of the app should also be considered as part of these decisions.
For instance, a large concern for us is the emotional safety of our users because they're primarily 13-21 years old. We'll remove any language or content that is bullying, humiliating, insulting, sexually oriented or violent, or that includes defamation, profanity or hate speech, among other things, and suspend the user behind it.
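To make this concrete, here's a minimal sketch of what a content policy can look like once it's encoded as data. The category names, actions and mappings below are illustrative assumptions, not Wizz's actual policy; the point is that writing the policy down this way keeps the "universally offensive" decisions explicit and easy for the whole team to review.

```typescript
// Illustrative content-policy sketch. Category and action names are
// hypothetical, not Wizz's real definitions.

type PolicyCategory =
  | "bullying"
  | "hate_speech"
  | "sexual_content"
  | "violence"
  | "profanity"
  | "defamation";

type PolicyAction = "remove_content" | "remove_and_suspend";

// Categories deemed universally offensive map to the strictest action;
// more subjective ones could instead route to a human review queue.
const policy: Record<PolicyCategory, PolicyAction> = {
  bullying: "remove_and_suspend",
  hate_speech: "remove_and_suspend",
  sexual_content: "remove_and_suspend",
  violence: "remove_and_suspend",
  profanity: "remove_content",
  defamation: "remove_and_suspend",
};

// Apply the strictest action among all matched categories.
function actionFor(categories: PolicyCategory[]): PolicyAction | "allow" {
  if (categories.length === 0) return "allow";
  return categories.some((c) => policy[c] === "remove_and_suspend")
    ? "remove_and_suspend"
    : "remove_content";
}
```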
Any content policy might draw initial pushback from users, but it will ultimately create trust and strengthen the community you're building.
2. Moderate content before it hits users by embracing automation
With content guidelines in place, the next step is to enforce them. While it's important to rid the app of every piece of offensive content, language or behavior, the goal should be to remove the content before a user sees it. But whether an app has hundreds of users or millions, no team can sift through all of that content on its own.
An effective way to do this is to work with moderation technology partners. For example, we use AI-based solutions that flag and remove harmful content before it's distributed on the app. Partnering with these providers has allowed us to enforce our content policies effectively, at scale and in a timely manner.
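As a rough illustration, a pre-publication moderation gate might look something like the sketch below. The provider interface, verdict shape and function names are assumptions made for this example; they don't describe our stack or any particular vendor's API.

```typescript
// Hypothetical pre-publication moderation gate. All names are
// illustrative assumptions, not a real vendor API.

interface ModerationVerdict {
  harmful: boolean;       // did the classifier flag this content?
  categories: string[];   // e.g., ["bullying", "hate_speech"]
  suspendAuthor: boolean; // policy says the author should be suspended
}

interface ModerationProvider {
  classify(text: string): Promise<ModerationVerdict>;
}

// Content is only handed to the feed after it clears the gate, so
// harmful posts are removed before any other user sees them.
async function publishPost(
  provider: ModerationProvider,
  authorId: string,
  text: string,
  distribute: (authorId: string, text: string) => void,
  suspend: (userId: string) => void
): Promise<"published" | "blocked"> {
  const verdict = await provider.classify(text);
  if (!verdict.harmful) {
    distribute(authorId, text);
    return "published";
  }
  if (verdict.suspendAuthor) {
    suspend(authorId);
  }
  return "blocked";
}
```

The key design choice is that classification happens synchronously, before distribution, rather than after the fact in response to reports.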
It doesn't have to be left entirely up to technology, either. Allowing users to flag content they find offensive not only empowers citizen moderators but also gives users a way to share direct feedback on what makes them feel safe and comfortable in the app.
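The user-side reporting path can be very simple. This sketch again uses hypothetical names and types:

```typescript
// Hypothetical user-report flow. Reports feed a review queue, giving
// the team direct signal on what makes users feel unsafe, alongside
// the automated systems.

interface UserReport {
  reporterId: string;
  contentId: string;
  reason: string; // e.g., "bullying", "spam", "other"
  note?: string;  // optional free-text feedback from the reporter
}

const reportQueue: UserReport[] = [];

function reportContent(report: UserReport): void {
  // Queued for review by human moderators and/or automated triage.
  reportQueue.push(report);
}
```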
3. Replace the idea of "likes" or other performance metrics
While removing harmful content and addressing offensive behavior should be the top priority for social media apps, it's not the only thing that defines a safe space for users. It also means curating a space where users don't feel judged and are free to be themselves.
With most social media platforms, likes, comments and other reactions are the current social currency. At Wizz, our focus is to make users feel good about themselves so they can genuinely connect with others, so we removed likes and other performance metrics. Users no longer feel pressured for their content to perform — they're creating content that reflects their interests and personality, and can help them build relationships with others like them.
This idea can apply to any app feature, too. Just because it's working for other applications, doesn't mean it's right for your app, audience or vision.
4. Resist the urge to introduce too many features
All developers are susceptible to "feature creep," where ideas beget more ideas and there's a push to continuously stuff more into a project. And sure, more features might increase downloads and visits, but it's also the fastest way to lose the identity of your community.
App features should reflect the needs and interests of your users.
This doesn't mean you should stop exploring new features, but rather that you should constantly communicate with your audience to identify what they're looking for in their apps. And when features are built, they should be tested with smaller audiences to confirm they interest users and drive the engagement they're supposed to.
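One common way to test with a smaller audience is a deterministic percentage rollout, sketched below. The hashing scheme and function names are illustrative assumptions, not a description of our tooling.

```typescript
// Minimal sketch of a deterministic percentage rollout. The hash and
// names here are assumptions for illustration only.

function hashToBucket(userId: string, buckets = 100): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h % buckets;
}

// Expose the feature to, say, 5% of users, then watch engagement
// before widening the rollout.
function isFeatureEnabled(userId: string, rolloutPercent: number): boolean {
  return hashToBucket(userId) < rolloutPercent;
}

// Example: isFeatureEnabled("user-123", 5) returns the same answer
// every session, so each user gets a stable experience during the test.
```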
We didn't introduce moderation into Wizz because we had to — we did it because we knew it would play a large role in creating an app our users wanted to be a part of. Like any new implementation, it came with a few challenges, but creating this ecosystem has helped us build a space where our users feel like they can completely be themselves.