Chen Yuqiang, Founder & Chief Scientist at 4Paradigm, Attended the UN Internet Governance Forum and Gave a Report on AI and Privacy Protection

2018.11.23

The annual Internet Governance Forum (hereinafter referred to as IGF) was recently held at the headquarters of UNESCO in Paris, France, focusing on how to guarantee the sustainable, stable and safe development of the Internet. UN Secretary-General António Guterres and representatives from many countries were present. Chen Yuqiang, founder and chief scientist of 4Paradigm, attended the forum and spoke on issues of AI ethics and safety. This year's IGF was hosted by the French Government, and French President Macron urged in his speech that, with the digital revolution that will shape people's future at an unprecedented turning point, it is our common responsibility to seek innovative, practical and sustainable solutions, confronting the challenges as early as possible and effectively harnessing the technology's potential.

AI, big data and the Internet of Things are emerging topics that attract wide attention. The rise of a new generation of AI has brought surprises and higher productivity while also triggering concerns that AI could threaten human safety or social ethics. Data leaks at Internet giants that infringed on user privacy have sounded the alarm for how enterprises use and manage data, and privacy protection is receiving increasing emphasis. Over the past few years the United States has been working on relevant legislation, while in the EU the General Data Protection Regulation, the strictest data privacy regulation in the world, came into force this May. China, by contrast, is just getting started, as awareness of such protections is still relatively weak at this early stage of development and the underlying technology is still limited.

During the forum, Chen introduced the privacy protections 4Paradigm applies when promoting AI technology. Data privacy protection is especially emphasized in sensitive industries such as finance and medicine, and it is key to enterprise security. The basic methods of data privacy protection, such as k-anonymity, l-diversity, t-closeness and adding noise during model training, are well understood, but applying them in practice raises technical difficulties: for example, added noise and perturbation can mask the original information and render the data unusable.
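To make the first of these methods concrete, below is a minimal sketch of a k-anonymity check over a toy table with hypothetical quasi-identifier columns; it is not 4Paradigm's implementation, only an illustration of the shared idea that no individual should be distinguishable within a group of fewer than k records.

```python
# Minimal k-anonymity check (illustrative only; columns and data are hypothetical).
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Toy data with generalized (bucketed) age and truncated postal code.
records = [
    {"age": "30-39", "zip": "100***", "diagnosis": "diabetes"},
    {"age": "30-39", "zip": "100***", "diagnosis": "healthy"},
    {"age": "40-49", "zip": "200***", "diagnosis": "healthy"},
    {"age": "40-49", "zip": "200***", "diagnosis": "diabetes"},
]

print(is_k_anonymous(records, ["age", "zip"], k=2))  # True: each group contains 2 records
```

Generalizing or suppressing attributes in this way trades data precision for anonymity, which is exactly the tension Chen described: too much distortion and the data loses its analytic value.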

4Paradigm has worked with financial and medical institutions from the very beginning and has placed great value on developing AI technology that protects privacy. Drawing on its strengths and experience in machine learning, the company's technology in this respect has proved effective in practice. In the medical industry, for example, the diabetes prediction product "Rui-Ning Zhitang", developed in cooperation with Ruijin Hospital, uses differential privacy in data extraction and analysis to protect user privacy and prevent leaks of identifying information while still making the most of the data.
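As a rough sketch of what differential privacy means in this setting, the example below applies the standard Laplace mechanism to a simple counting query over a hypothetical patient cohort. The article does not specify which mechanism "Rui-Ning Zhitang" uses; this only illustrates the general idea of releasing aggregates without exposing any individual record.

```python
# Laplace mechanism for an epsilon-differentially private count (illustrative only).
import numpy as np

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.
    A count has sensitivity 1, so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical aggregate: how many patients have fasting glucose >= 7.0 mmol/L.
cohort = [{"glucose": g} for g in (5.1, 7.8, 6.9, 8.2, 5.6, 7.1)]
noisy = private_count(cohort, lambda r: r["glucose"] >= 7.0, epsilon=0.5)
print(round(noisy, 2))  # true count is 3; the released value is randomly perturbed
```

The privacy budget epsilon controls the trade-off Chen alluded to: smaller values give stronger protection but noisier, less useful statistics.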

Facing the Internet security, ethics and risk problems raised by the use of AI, the international community hopes to reach common ground and move forward together in setting policy and developing technology. At the IGF, Chen said that, like all other technologies, AI development and application must address ethical issues, but most robots are not unethical; they are simply "irrelevant" to ethics, and it is how humans use them that causes ethical dilemmas.

In recent years the growth of AI has been driven by big data, and privacy has been a major risk in developing and using data resources. In credit or criminal risk assessment, AI decisions affect loan amounts and the severity of penalties, so auditing is necessary to ensure that AI algorithms are equitable and fair rather than a "black box". AI regulation will need to be established at the national level, with a public and transparent supervision system that oversees the entire process of AI design, development and use, increases punishment for violations, and promotes the proper and lawful development of AI technology both inside and outside the AI industry.
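One very simple check an external auditor could run on a lending model's decisions is shown below: comparing approval rates across groups (demographic parity). The data and tolerance are hypothetical, and real audits of credit or risk models involve many more metrics and legal criteria; this is only a sketch of what "auditable rather than a black box" can mean in practice.

```python
# Demographic-parity gap over a hypothetical audit log of model decisions (illustrative only).

def approval_rate(decisions, group):
    relevant = [d["approved"] for d in decisions if d["group"] == group]
    return sum(relevant) / len(relevant)

def demographic_parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

gap = demographic_parity_gap(decisions, "A", "B")
print(f"approval-rate gap: {gap:.2f}")  # 0.33 here; an auditor might flag gaps above a set tolerance
```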

As for how to guarantee that AI is used to benefit people, Chen said this has always been a challenge for the field: strict management is needed to prevent improper use, yet the negative side should never overshadow the positive, because AI, used reasonably, can do a great deal of good for society and advance human progress.