During its first week of hearings, the Select Committee on Deliberate Online Falsehoods heard from academics, experts and community leaders on the potential impact of the problem and possible approaches to combat it.
They highlighted how Singapore's multicultural make-up and high Internet connectivity make it an easy target for disinformation campaigns that sow enmity among different groups and destabilise the country.
Deliberate online falsehoods bear trademarks of what has been termed a "wicked problem", which refers to a complex problem, often intertwined with other issues, that has no easy solution.
It is a "wicked problem" because of considerable ambiguity, such as how falsehoods should be defined and who should define them. There is also a lack of data on the precise magnitude and impact of exposure to deliberate online falsehoods.
The problem, as evident in countries like the United States, Ukraine and Indonesia, is often connected to broader societal issues such as highly polarised politics and historical conflicts among communities.
Such a "wicked problem" warrants a whole-of-society approach, and experts seem to agree that a suite of measures comprising self-regulation by Internet intermediaries, increasing critical literacy, and fact-checking is required.
The role of legislation, together with its potential limitations and pitfalls, has been a focal point in the ongoing deliberations. If the Government decides to adopt a legislative tool to tackle the problem at hand, what principles and considerations should guide the use of legislation?
First, it is imperative to balance the need to protect national security and public order with safeguarding the ability of citizens and media to discuss and comment on pertinent issues, including those relating to governance and policies.
Second, legislation should not have the unintended effect of cultivating over-reliance among members of the public on the authorities to discern truth from fiction on their behalf.
To achieve this, two considerations should guide any use of legislation.
FOCUSED APPROACH MATTERS
The first is to be as precise as possible in deciding which types of deliberate online falsehoods to act against. This will avoid legislative overreach and channel limited resources to fighting the real fight. Based on our research, we categorise deliberate online falsehoods into two types - "low breach" and "high breach" falsehoods.
"Low breach" deliberate online falsehoods create anxiety among the public and cause inconveniences to people. Some examples in the local context include the photograph of a "collapsed rooftop" of Punggol Waterway Terraces in an article published on the All Singapore Stuff website, and the alleged selling of plastic rice and issuing of fines by the National Environment Agency at hawker centres.
Fortunately, in many "low breach" cases, the stakeholders involved are often able to quickly establish the facts and debunk the falsehood; corrective action is promptly taken. For instance, residents in Punggol took to Facebook to debunk the falsehood of the "collapsed rooftop", and the website editors deleted the article and issued an apology.
However, a more severe threat - the "high breach" type - comes from coordinated and covert efforts targeted at disrupting democratic processes in a country. Deliberate online falsehoods deployed as part of a disinformation campaign have wreaked havoc on domestic politics and allegedly influenced referendum and election outcomes in other countries. Recently, the US Justice Department charged 13 Russians and three Russian firms with using stolen identities to pose as Americans, and with creating Facebook groups to distribute divisive content to subvert the 2016 US presidential election.
"High breach" deliberate online falsehoods also disrupt social and national stability by exploiting the pain points of a society, as seen in France and Indonesia. Thus, any use of legislation should focus on targeting such "high breach" deliberate online falsehoods.
FRAMEWORK TO DECIDE WHICH FALSEHOODS TO TARGET
Second, the evolving and amoebic nature of cyberspace makes it difficult and impractical to come up with a precise legal definition for deliberate online falsehoods that will stand the test of time. Thus, we propose a "5Cs" framework to help determine what online falsehoods warrant regulatory intervention and whom to act against.
The first "C" is Content: An important question to ask is if the content is verifiably false. Falsehoods should be distinguished from opinion.
The second "C" is Context: The content of an online falsehood should be considered within a country's political, economic and social milieu. Despite rapid changes in the online space, Singapore's approach to regulating speech in general has always focused on protecting racial and religious harmony among its population and maintaining public order and security. Moving forward, one possible approach is to focus on deliberate online falsehoods that pose a threat to these pillars that Singapore has always upheld.
The third "C" is Communicator's Identity. Our research found that there are different types of perpetrators. They include members of the public, corporations, domestic political agents and foreign state actors. Some actors could also be part of a larger network (for example, accounts linked to a foreign Internet troll factory).
The fourth "C" is Communicator's Intent. Looking at intent shapes how perpetrators are dealt with: Ordinary individuals might be treated differently from networked players and foreign state actors who act on a larger, insidious agenda to disrupt social stability and national security.
Different categories of potentially harmful online information serve different intents. Non-profit organisation First Draft, for example, identifies three types. Misinformation, such as parody and satire, is false or inaccurate information produced or shared without the intent to deceive or harm. Mal-information is information that is genuine but spread with a clear intention to cause harm. Disinformation is the deliberate creation and sharing of information known to be false, with the intent to deceive or incite hostility - as in the earlier example of Russian manipulation of online discussion.
Legislative intervention should focus on disinformation.
The final "C" is Consequence. The extent of a falsehood's spread, in terms of its frequency and volume, should be considered. The likelihood of harm to Singapore's social fabric and national security, including its imminence, should also be taken into account.
Deliberate online falsehoods pose severe challenges for societies, countries and global politics. The above proposed approaches will not eliminate the problem, but they help provide the necessary clarity and focus for countermeasures.
•Carol Soon, senior research fellow, and Shawn Goh, research assistant, are from the Institute of Policy Studies, National University of Singapore.
Source: The Straits Times © Singapore Press Holdings Limited. Reproduced with permission.