Gcore Launches Advanced AI Solution for Real-Time Online Content Moderation and Compliance


Gcore, a leader in global edge AI, cloud, network, and security solutions, today introduced its Gcore AI Content Moderation service.

This new offering provides real-time automated moderation of audio, text, and video content, designed to enhance user safety and ensure compliance with key regulations like the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act.

Platforms hosting user-generated content (UGC), from short comments to extensive videos, especially those accessible to children, must moderate such content to shield viewers from harmful material. The risks of not doing so include reputational damage, legal repercussions, service suspension, and significant fines—up to 6% of global revenue under the DSA.

Real-time AI Content Moderation

With the surge in UGC, human-only moderation struggles to identify harmful and illegal content quickly enough. High volumes can overwhelm human moderators, driving up costs and creating operational inefficiencies that let harmful content slip through or delay the publication of permissible content.

Gcore’s AI Content Moderation service automates the monitoring of video content streams, identifying inappropriate material and alerting human moderators when needed. The service begins reviewing videos or live streams within seconds of upload, combining several techniques:

  • State-of-the-art computer vision: This technology employs sophisticated models for object detection, segmentation, and classification to pinpoint and flag inappropriate visual content accurately.
  • Optical character recognition (OCR): OCR technology converts text within videos to machine-readable formats, facilitating the moderation of unsuitable or sensitive textual information.
  • Speech recognition: This feature uses advanced algorithms to analyze audio tracks, detecting and flagging offensive language or hateful speech to moderate audio content as effectively as visual elements.
  • Multiple-model output aggregation: For complex decisions, such as identifying child exploitation content, Gcore’s service combines multiple data points and model outputs to make precise and reliable moderation decisions.

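The aggregation step described in the last bullet can be illustrated with a minimal sketch. The model names, weights, and threshold below are illustrative assumptions, not Gcore’s actual implementation:

```python
# Minimal sketch of multiple-model output aggregation.
# Illustrative only: model names, weights, and the 0.7 threshold are
# assumptions, not Gcore's design.

def aggregate(scores: dict[str, float],
              weights: dict[str, float],
              threshold: float = 0.7) -> bool:
    """Combine per-model confidence scores into a single flag decision.

    Each model (e.g. vision, OCR, speech) reports a 0-1 confidence that
    the content is harmful; a weighted average above the threshold flags
    the content for human review.
    """
    total_weight = sum(weights[m] for m in scores)
    combined = sum(scores[m] * weights[m] for m in scores) / total_weight
    return combined >= threshold

weights = {"vision": 0.5, "ocr": 0.2, "speech": 0.3}

# A strong vision signal plus a moderate speech signal exceeds the threshold.
print(aggregate({"vision": 0.9, "ocr": 0.1, "speech": 0.8}, weights))  # True
# Weak signals across all models stay below it.
print(aggregate({"vision": 0.2, "ocr": 0.1, "speech": 0.3}, weights))  # False
```

Weighted aggregation of this kind is one common way to trade off individual model errors: no single model has to be decisive, but agreement across modalities raises confidence in the final decision.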
Organizations can seamlessly integrate Gcore AI Content Moderation into their existing systems via an API, requiring no previous AI or ML knowledge. Running on Gcore’s robust global network with over 180 edge points of presence and a capacity exceeding 200 Tbps, the service ensures low latency, high speed, and resilience.
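As an illustration of what API-based integration might look like, the sketch below assembles a moderation request. The endpoint URL, field names, and token are hypothetical placeholders, not Gcore’s documented API; consult Gcore’s actual API reference for the real interface:

```python
import json

# Hypothetical endpoint and payload shape -- for illustration only.
API_URL = "https://api.example.com/ai/moderation/tasks"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"  # placeholder credential

def build_moderation_request(stream_url: str, categories: list[str]) -> dict:
    """Assemble the pieces of an HTTP request submitting a stream for review."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "origin_url": stream_url,   # video or live stream to review
            "categories": categories,   # content types to screen for
            "notify_on_flag": True,     # alert human moderators when flagged
        }),
    }

request = build_moderation_request(
    "https://example.com/streams/1234.m3u8",
    ["nudity", "violence", "hate_speech"],
)
print(request["headers"]["Content-Type"])  # application/json
```

The point of the sketch is the shape of the integration: a platform submits a content URL and a set of screening categories over HTTPS, and flagged items come back for human review, with no ML expertise required on the caller’s side.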

Alexey Petrovskikh, Head of Video Streaming at Gcore, stated, “The task of human-only content moderation has become unmanageable. It’s simply impossible to allocate sufficient human resources while controlling costs. Gcore AI Content Moderation empowers organizations to moderate content globally, maintaining human oversight for review of flagged content. This new service is crucial for companies aiming to protect their users, communities, and reputations while meeting regulatory requirements.”

Source: streetinsider.com
