GovTech and National Security

By Lyric Jain for GovTech Europe

Lyric is the Founder and CEO of Logically, an organisation specialising in AI and its relevance in the fight against disinformation.

National security is a topic that will likely hit home for every single European government, as the continent sees what many are calling its worst conflict since the Second World War.

Fake news is widespread. Fact-checking and independent verification are vital.

Protecting information integrity, particularly in times of crisis such as pandemics or conflict, is key. Policymakers and civilians need access to reliable information in order to make informed and safe decisions. With malicious actors growing in both number and sophistication, and using advanced technology to warp that information, the public sector needs to be looking at tech solutions to counteract these challenges. It is important to track and tackle harmful online content as it migrates across multiple platforms and jurisdictions, to restrict its impact, and to identify the bad actors and the intentions behind it.

When considering effective responses to disinformation, resilience and capacity-building initiatives should also be at the top of public sector priorities. Resilience building, for example in the form of media literacy education, is crucial to approaching these threats from a civilian viewpoint. Capacity-building initiatives are also key: some centres of excellence host vital expertise in the fields of mis- and disinformation, but without doubt we need to make this expertise more accessible.

THE CHALLENGE

We’ve seen the rapid growth of increasingly sophisticated private companies working on the other side of the disinformation fight, where the work can be financially lucrative. With the amount of money on that side of the fight, and the continuous evolution of the threats we are seeing, we need to continue to invest in our technological capabilities in order to stay ahead of the curve and prevent these actors from evading content moderation.

“The public sector, working alone, does not have the technical or operational capacity to coordinate and scale an effective response.”

For governments, scalability is key, and it can be a challenge when dealing with the nuances within the mass of misinformation and online content. The way we identify, analyse and flag threats is automated, but we can’t treat every piece of content or threat the same.

Therefore, to achieve scalability whilst retaining quality of analysis and effectiveness of response, a combined approach is needed that brings in human expertise and specific knowledge, alongside cutting-edge artificial intelligence tools that can ingest and analyse large amounts of data at speed.

The public sector, working alone, does not have the technical or operational capacity to coordinate and scale an effective response, nor should it have to build solutions from scratch each time there is a significant challenge. Countering disinformation needs to be a collaborative process that brings in multiple stakeholders from within the public sector, alongside private sector actors such as tech companies and platforms.

THE SOLUTIONS

Privacy and the handling of sensitive data can be challenging for the public sector when working to safeguard national security. Tools such as differential privacy, which allows researchers and analysts to work with sensitive data without breaching privacy rules, can help enable and streamline these processes.
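To make the idea concrete, here is a minimal sketch of one standard differential privacy technique, the Laplace mechanism, applied to releasing an aggregate count. This is an illustration of the general method, not Logically’s or any government’s actual tooling; the query, epsilon value and seed are all illustrative.

```python
import math
import random

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so the noise scale is 1 / epsilon: a smaller
    epsilon means stronger privacy and a noisier answer.
    """
    # Sample Laplace noise via inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

random.seed(7)  # seeded only so the sketch is reproducible
reported = private_count(10_000, epsilon=0.5)
print(round(reported, 1))  # close to 10,000, but never the exact count
```

The analyst gets a statistically useful answer while no individual record can be inferred from it, which is exactly the property that lets sensitive datasets be opened up for research.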

Additionally, the explainability of artificial intelligence tools deserves close attention. For example, when monitoring and analysing the information landscape for threats to national security, we need tools to be able to explain why they have categorised something as harmful, and to show how the threat context translated into that decision, in order to develop an effective response.

Not only does demonstrating this explainability make tools more reliable for governments and law enforcement, but it also helps the builders of those tools to improve them as necessary.
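As a toy illustration of what this can look like, a linear scoring model is explainable almost for free: every feature’s contribution to the flagging decision can be shown to the analyst. The feature names and weights below are entirely hypothetical, not those of any real moderation system.

```python
def explain_score(features: dict, weights: dict):
    """Score content and return per-feature contributions.

    For a linear model, score = sum(feature * weight), so each term
    is a human-readable reason behind the decision.
    """
    contributions = {name: features[name] * weights[name] for name in features}
    return sum(contributions.values()), contributions

# Hypothetical signals a content-moderation model might use.
weights = {"matches_known_hoax": 2.5, "new_account": 0.8, "cites_verified_source": -1.5}
features = {"matches_known_hoax": 1, "new_account": 1, "cites_verified_source": 0}

score, why = explain_score(features, weights)
# `why` shows the hoax-phrase match dominates the flag, so an analyst
# (or the tool's builder) can audit exactly what drove the decision.
```

Production systems use far richer models, but the principle carries over: attributing a decision back to named signals is what lets a human verify, challenge or improve it.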

More generally, when it comes to protecting nations from information threats, developing solutions which combine human expertise and artificial intelligence is crucial. In situations critical to public safety and national security, analyst expertise is essential to be able to appreciate contextual nuances or identify new threats that tech solutions might not have yet been trained on – this is particularly true in crisis situations. 

THE LATEST DEVELOPMENTS

One of the biggest challenges and frustrations we face when developing effective machine learning solutions is the need for vast quantities of data to train models. Developments in few-shot and zero-shot learning, which allow models to work from far less data and far fewer labelled examples, mean there is a lot more potential for more solutions to be developed, more quickly.
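The core idea of zero-shot classification can be sketched very simply: instead of training on labelled examples, compare content against plain-language descriptions of each label. The toy version below uses bag-of-words cosine similarity purely for illustration; real systems use pretrained (often multilingual) sentence encoders, and the labels and texts here are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production system would use a
    # pretrained sentence encoder instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(text: str, label_descriptions: dict) -> str:
    # No labelled training set: the only "training signal" is a
    # natural-language description of each candidate label.
    doc = embed(text)
    return max(label_descriptions,
               key=lambda lbl: cosine(doc, embed(label_descriptions[lbl])))

labels = {
    "health misinformation": "false claims about a vaccine cure disease or treatment",
    "election misinformation": "false claims about voting ballots or election fraud",
}
print(zero_shot_classify("this miracle cure works better than any vaccine", labels))
# -> health misinformation
```

Because new labels can be added just by writing a description, this is why zero-shot methods let counter-disinformation solutions stand up quickly against emerging narratives, without first collecting a large labelled dataset.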

“There is certainly no simple solution, but it is positive to see the amount of political will behind these efforts.”

Those spreading harmful and misleading online content are not restricted by national borders, and nor should those trying to tackle it be. Data-sharing is important, and I’d strongly advocate for efforts to safely and responsibly open up more datasets for research, and to allow platforms, law enforcement and governments to effectively track coordinated malicious campaigns across platforms and jurisdictions.

Our approach needs to be coordinated to address cross-jurisdictional and cross-platform content. This involves collaboration between policymakers and between the public and private sector: between governments, cybersecurity firms, analysts, advocacy groups and platforms.

We are seeing positive moves at a regulatory level, for example the Digital Services Act and the Code of Practice on Disinformation, which actively look at ways to improve counter-disinformation tactics and information sharing across the EU. There is certainly no simple solution, but it is positive to see the amount of political will behind these efforts.

Separately, there are 24 official languages in the EU, though many more are spoken across the continent; training both our AI tools and our expert analysts in multilingual capabilities is therefore important. Our tech relies upon Natural Language Processing (NLP), but it can be difficult to build NLP models that are effective at detecting and analysing content in different languages and dialects. Research and development is key, but the human element is also critical here, to ensure that colloquialisms and contextual nuances are understood. It is hard to generalise the information landscape across the whole of Europe, so community expertise is needed to make countermeasures effective.
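Even the very first step of a multilingual pipeline, working out what language a piece of content is in, illustrates the difficulty. The crude stopword-overlap heuristic below is purely illustrative (tiny invented word lists, three languages); real pipelines use trained language identifiers covering hundreds of languages and dialects, and still need human review for code-switching and slang.

```python
# Tiny illustrative stopword sets; a production language identifier
# covers far more languages and handles dialects and mixed text.
STOPWORDS = {
    "en": {"the", "and", "is", "of", "to"},
    "fr": {"le", "la", "et", "est", "de"},
    "de": {"der", "die", "und", "ist", "von"},
}

def guess_language(text: str) -> str:
    words = set(text.lower().split())
    # Pick the language whose stopwords overlap the text the most.
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(guess_language("le chat est sur la table et il dort"))  # -> fr
```

A heuristic like this fails exactly where the article says the human element matters: short posts, regional dialects and colloquialisms carry little of the "standard" vocabulary such models lean on.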


© Univmedia Ltd

t/a Universal Media
360 North Circular Road, Phibsborough, Dublin 7
talk@unimedia.ie
