TEL AVIV, Israel, April 21, 2021 /PRNewswire/ — Adversa, a leader in Trusted AI research and advisory, has published extensive research on the security and trustworthiness of artificial intelligence (AI). The research was prompted by initiatives launched by the US and European governments, Gartner predictions, and recent adversarial attacks on AI.
Building trust in the security and safety of machine learning is crucial. We are asking people to put their faith in what is essentially a black box, and for the AI revolution to succeed, we must build trust. The risks are high, but so are the benefits.

said Oliver Rochford, former Gartner analyst and Adversa Advisory Board member
The report, "The Road to Secure and Trusted AI," is written for anyone responsible for the security of AI and compiles expert opinions and predictions. It identifies the most critical applications, the security threats to AI, and ways to protect AI-powered systems in light of emerging regulations.
The highlights are as follows:
- The exponential growth of AI has led governments, academia, and industry to publish more research papers on AI security in the past two years than in the previous two decades: roughly 3,500 in total.
- The USA-China-EU standoff is expected to continue in the Trusted AI race: the USA has produced 47% of all research papers, but China is gaining momentum.
- The AI industry is woefully unprepared for real-world attacks: each of the more than 60 commonly used machine-learning (ML) models examined is prone to vulnerabilities.
To raise security awareness in the field of Trusted AI, we started a project more than a year ago to analyze the past decade of academic, industry, and government progress. The research shows that organizations should keep up with the latest threats, implement AI security awareness initiatives, and protect their AI development lifecycle now.

said Eugene Neelou, Adversa's CTO
Follow Adversa and subscribe to its newsletter to be the first to receive news and the latest analysis.
Adversa is the leading Israeli company developing applied security measures for AI. Its mission is to build trust in AI and protect it from cyber threats, privacy issues, and safety incidents. With a team of multi-disciplinary experts in mathematics, data science, cybersecurity, and neuroscience, Adversa is uniquely positioned to provide holistic, end-to-end support for the entire AI security lifecycle.
+ 972 050 479 4776