
Ofcom reports on how to use AI to moderate online content

In the last two decades, online platforms that permit users to interact and upload content for others to view have become integral to the lives of many people and have provided a benefit to society. However, there is growing awareness amongst the public, businesses and policy makers of the potential damage caused by harmful online content.

Now Ofcom has teamed up with Cambridge Consultants to produce a report as a contribution to the evidence base on people’s use of, and attitudes towards, online services, helping to enable wider debate on the risks faced by internet users.

User-generated content (UGC) contributes to the richness and variety of content on the internet but is not subject to the editorial controls associated with traditional media. This enables some users to post content which could harm others, particularly children or vulnerable people. Examples include content which is cruel and insensitive to others, promotes terrorism or depicts child abuse.

As the amount of UGC that platform users upload continues to accelerate, it has become impossible to identify and remove harmful content using traditional human-led moderation approaches at the speed and scale necessary.

This paper examines the capabilities of artificial intelligence (AI) technologies in meeting the challenges of moderating online content and how improvements are likely to enhance those capabilities over approximately the next five years.

Recent advances in AI and its potential future impact

The term ‘AI’ is used in this report to refer to the capability of a machine to exhibit human-like performance at a defined task, rather than refer to specific technical approaches, such as ‘machine learning’. Although AI has been through several cycles of hype followed by disillusionment since its inception in the 1950s, the current surge in investment and technological progress is likely to be sustained. The recent advances in AI have been enabled by progress in new algorithms and the availability of computational power and data, and AI capabilities are now delivering real commercial value across a range of sectors.

The latest advances in AI have mainly been driven by machine learning, which enables a computer system to make decisions and predict outcomes without being explicitly programmed to perform these tasks. This approach requires a set of data to train the system or a training environment in which the system can experiment. The most significant breakthrough of machine learning in recent times is the development of ‘deep neural networks’ which enable ‘deep learning’. These neural networks enable systems to recognise features in complex data inputs such as human speech, images and text. For many applications, the performance of these systems in delivering the specific task for which they have been trained now compares favourably with humans, but AI still does make errors.
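
To make that distinction concrete, the toy sketch below (which is not taken from the report) trains a simple text classifier from a handful of invented, labelled examples rather than from hand-written rules; deep neural networks apply the same principle at far greater scale and to richer inputs such as images and speech.

```python
# A toy illustration of learning from labelled examples rather than explicit rules.
# Real moderation systems use deep neural networks trained on vast datasets;
# scikit-learn is used here only to keep the sketch short and runnable.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: text paired with a harmful / not-harmful label.
texts  = ["have a great day", "you are a wonderful person",
          "I will hurt you", "everyone from that group deserves to suffer"]
labels = [0, 0, 1, 1]   # 0 = acceptable, 1 = harmful

# The pipeline learns which word patterns correlate with each label;
# no rule defining "harmful" is ever written by hand.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model then estimates a probability of harm for unseen content.
print(model.predict_proba(["I will hurt everyone"])[0][1])
```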

The advances of AI in recent years will continue, driven by commercial applications and enabled by continued progress in algorithm development, the increasing availability of low-cost computational power and the widespread collection of data. There are, however, some inhibitors to making the most of the potential of AI, such as the lack of transparency of some AI algorithms which are unable to fully explain the reasoning for their decisions. Society has not yet built up the same level of trust in AI systems as in humans when making complex decisions. There are risks that bias is introduced into AI systems by incorporating data which is unrepresentative or by programming in the unconscious bias of human developers. In addition, there is a shortage of staff suitably qualified in developing and implementing AI systems. However, many of these inhibitors are being addressed: the supply of AI-skilled engineers will increase and it is likely that society will gradually develop greater confidence in AI as it is increasingly seen to perform complex tasks well and it is adopted in more aspects of our lives. Other issues will, however, continue to be a problem for at least the short term and these are discussed in more detail in this report.

Current approaches and challenges to online content moderation

Effective moderation of harmful online content is a challenging problem for many reasons. While many of these challenges affect both human and automated moderation systems, some are especially challenging for AI-based automation systems to overcome.

There is a broad range of content which is potentially harmful, including but not limited to: child abuse material, violent and extreme content, hate speech, graphic content, sexual content, cruel and insensitive material and spam content. Some harmful content can be identified by analysing the content alone, but other content requires an understanding of the context around it to determine whether or not it is harmful. Interpreting this context consistently is challenging for both human and automated systems because it requires a broader understanding of societal, cultural, historical and political factors. Some of these contextual considerations vary around the world due to differences in national laws and what societies deem acceptable. Content moderation processes must therefore be contextually aware and culturally specific to be effective.

Online content may appear in numerous different formats which are more difficult to analyse and moderate, such as video content (which requires image analysis over multiple frames to be combined with audio analysis) and memes (which require a combination of text and image analysis with contextual and cultural understanding). Deepfakes, which are created using machine learning to generate fake but convincing images, video, audio or text, have the potential to be extremely harmful and are difficult to detect by human or AI methods.
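
As a rough illustration of the video point above, the sketch below scores sampled frames and an audio transcript separately and combines them. frame_harm_score() and transcript_harm_score() are placeholder functions standing in for trained image and speech models, and the 0.5 threshold is an invented example value.

```python
# A minimal sketch of combining per-frame image analysis with an audio signal.
# The two scoring functions are placeholders, not real model calls.
from typing import List

def frame_harm_score(frame) -> float:
    """Placeholder for an image-classification model's output in [0, 1]."""
    return 0.1

def transcript_harm_score(transcript: str) -> float:
    """Placeholder for a speech-to-text plus text-classification pipeline."""
    return 0.2

def video_harm_score(frames: List[object], transcript: str) -> float:
    # A single harmful frame or passage should dominate the decision,
    # so take the worst frame score and combine it with the audio signal.
    worst_frame = max(frame_harm_score(f) for f in frames)
    return max(worst_frame, transcript_harm_score(transcript))

print("review" if video_harm_score([object()] * 3, "example speech") > 0.5 else "allow")
```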

In addition, content may be posted as a live video stream or live text chat which must be analysed and moderated in real time. This is more challenging because the level of harmfulness can escalate quickly and only the previous and current elements of the content are available for consideration.
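
The sketch below illustrates that real-time constraint under invented assumptions: a live text chat is scored message by message using only what has already been seen, and the conversation is escalated to a human reviewer if the recent average risk rises above an assumed threshold.

```python
# A minimal sketch of live moderation: only past and current messages are
# available, and harm can escalate quickly as the stream continues.
from collections import deque

RECENT = deque(maxlen=20)        # rolling window of recent risk scores
ESCALATION_THRESHOLD = 0.6       # assumed value, for illustration only

def score_message(text: str) -> float:
    """Placeholder risk score in [0, 1]; a real system would call a model."""
    return 1.0 if "hurt" in text.lower() else 0.1

def moderate_live(text: str) -> str:
    RECENT.append(score_message(text))
    # Escalate when the average risk over recent messages climbs too high.
    avg = sum(RECENT) / len(RECENT)
    return "escalate to human reviewer" if avg > ESCALATION_THRESHOLD else "allow"

for msg in ["hello everyone", "I will hurt you", "I will hurt you again"]:
    print(msg, "->", moderate_live(msg))
```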

Over time, the language and format of online content evolve rapidly, and some users will attempt to subvert moderation systems, for example by adjusting the words and phrases they use. Moderation systems must therefore adapt to keep pace with these changes.
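
As a simple, hypothetical illustration of that cat-and-mouse dynamic, the sketch below shows how character substitutions defeat an exact keyword match and how a small normalisation step recovers the blocked term. The mapping and blocklist are invented; real systems adapt by retraining models and updating rules continuously rather than relying on a fixed table.

```python
# A small sketch of the evasion problem: users substitute characters so that
# exact keyword filters miss a prohibited term.
import re

LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a"})
BLOCKLIST = {"hate"}   # stand-in for a platform's prohibited terms

def normalise(text: str) -> str:
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", text)   # drop spacing and punctuation tricks

def naive_filter(text: str) -> bool:
    return any(term in text.lower() for term in BLOCKLIST)

def adapted_filter(text: str) -> bool:
    return any(term in normalise(text) for term in BLOCKLIST)

print(naive_filter("h4te"))       # False: obfuscation defeats the exact match
print(adapted_filter("h 4 t e"))  # True: normalisation recovers the term
```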

Online platforms may moderate the third-party content they host to reduce the risk of exposing their users to harmful material and to mitigate the reputational risk to the organisation. However, removing content which is not universally agreed to be harmful can also result in reputational damage and undermine users’ freedom of expression. Facebook, for example, has been criticised for removing an image of a statue of Neptune in Bologna, Italy, for being sexually explicit, and for removing the iconic photograph of a young girl fleeing a napalm bombing during the Vietnam War for showing child nudity.

The variety of types of harmful content makes it difficult for online platforms to define in their community standards the content and behaviours which are not permitted on their platforms. AI-enabled content moderation systems are developed to identify harmful content by following rules and interpreting many different examples of content which is and is not harmful. It can be challenging for automated systems to interpret the community standards of a platform to determine whether content is harmful or not. Clarity in platforms’ community standards is therefore essential to enabling the development and refinement of AI systems to enforce the standards consistently.

Overall, it is not possible to fully automate effective content moderation. For the foreseeable future the input of human moderators will continue to be required to review highly contextual, nuanced content. Human input is expensive and difficult to scale as the volume of content uploaded increases. It also requires individuals to view harmful content in order to moderate it, which can cause them significant psychological distress. However, AI-based content moderation systems can reduce the need for human moderation and lessen the impact on moderators of viewing harmful content.
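
A minimal sketch of that human-in-the-loop pattern, using invented confidence thresholds: the automated system acts only on clear-cut cases and routes the uncertain middle band to a human moderator, reducing both the volume of human review and moderators’ exposure to the worst content.

```python
# A sketch of confidence-threshold triage; the threshold values are assumed
# for illustration, not taken from the report.
REMOVE_ABOVE = 0.95   # assumed: near-certainly harmful, remove automatically
ALLOW_BELOW  = 0.05   # assumed: near-certainly benign, publish automatically

def triage(harm_probability: float) -> str:
    if harm_probability >= REMOVE_ABOVE:
        return "remove automatically"
    if harm_probability <= ALLOW_BELOW:
        return "allow automatically"
    # Borderline, context-dependent content still needs a human decision.
    return "queue for human review (pre-blurred to limit moderator exposure)"

for p in (0.99, 0.02, 0.60):
    print(p, "->", triage(p))
```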


About Author

Paul Skeldon

Editor and content creator for Telemedia – for 18 years and counting
