As more people share content and information on social platforms, those with bad intentions find ways to spread malicious and inappropriate content. Terrorists, hate groups, human traffickers, extremists, bullies and hostile foreign countries have all demonstrated the ability to promote harm and confusion on these platforms. Current counter-measures are expensive and often ineffective. Nucleus provides a novel approach to automated detection that improves monitoring efficiency.
The high volume and velocity of posts make identifying objectionable content both difficult and expensive. As nefarious posters learn the current models, they adapt to circumvent them, increasing the burden on content moderators and the amount of malicious content that reaches users.
We address these challenges with machine learning algorithms that learn on the fly at large scale.
We provide a highly responsive content monitoring and moderation process that keeps pace with rapidly adapting abusers.
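On-the-fly learning of this kind is commonly implemented as online (incremental) learning, where the model is updated one labeled example at a time instead of being retrained in batch. The following is a minimal illustrative sketch, not Nucleus's actual model: an online logistic regression over hashed bag-of-words features, updated with a single SGD step per post. All names and the toy post stream are hypothetical.

```python
import hashlib
import math

DIM = 2 ** 16  # size of the hashed feature space (assumed for illustration)

def featurize(text):
    """Map a post to a set of hashed token indices (sparse bag of words)."""
    idxs = set()
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        idxs.add(h)
    return idxs

class OnlineClassifier:
    """Logistic regression trained incrementally, one post at a time."""

    def __init__(self, lr=0.5):
        self.w = {}        # sparse weight vector: feature index -> weight
        self.bias = 0.0
        self.lr = lr       # SGD learning rate

    def predict_proba(self, idxs):
        """Probability the post is abusive (sigmoid of the linear score)."""
        z = self.bias + sum(self.w.get(i, 0.0) for i in idxs)
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, text, label):
        """One SGD step on one labeled post (label: 1 = abusive, 0 = benign).

        Because updates are per-example, the model can absorb moderator
        feedback immediately as abusers shift their wording.
        """
        idxs = featurize(text)
        err = label - self.predict_proba(idxs)
        self.bias += self.lr * err
        for i in idxs:
            self.w[i] = self.w.get(i, 0.0) + self.lr * err

# Simulate a labeled post stream arriving over time (toy data).
clf = OnlineClassifier()
stream = [
    ("buy followers now cheap", 1),
    ("lovely photo of the sunset", 0),
    ("cheap followers click here", 1),
    ("dinner with friends tonight", 0),
]
for _ in range(20):
    for text, label in stream:
        clf.update(text, label)
```

The per-example `update` is the key design choice: there is no retraining window during which a new evasion tactic goes undetected, which is what lets a monitoring pipeline keep pace with adversaries who probe and adapt to the deployed model.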