Huawei Chief Cybersecurity Officer Denies Chinese Law Compels It to Cooperate with Chinese Government, But Admits Chinese Government Could Hack Huawei

Today, the United Kingdom House of Commons Science and Technology Committee heard testimony on what the Committee termed “the possible security risks involved with 5G communications networks and to what extent those risks can be managed.”  One of the key witnesses was John Suffolk, a Senior Vice President and the Global Cyber Security & Privacy Officer for Chinese telecommunications company Huawei Technologies.  Huawei has been the target of U.S. efforts to discourage other countries from adopting Huawei 5G technology, because of persistent U.S. Government concerns that the Chinese government could exploit Huawei “to spy on other countries and companies.”

In his testimony, Suffolk reportedly stated that Huawei had “sought guidance from its attorneys to see if a Chinese law on domestic companies’ cooperation with the government on security matters could force it to conduct foreign intelligence work.”  According to Suffolk, Huawei’s outside counsel had twice validated that “[t]here are no laws in China that obligate us to work with the Chinese government with anything whatsoever.”

At the same time, The Times reported, Suffolk – who had been Her Majesty’s Government Chief Information Officer and Chief Information Security Officer before joining Huawei in 2011 – admitted that Huawei “could be broken into by the Chinese security services.”  When asked whether “Chinese agencies could get into Huawei’s systems if they wanted,” Suffolk responded, “Edward Snowden amply demonstrated that governments of capability can break into most things, including breaking into Huawei servers, so you could never say that a government, whoever they are, if they have the capability, can’t break into systems.”

Note: Suffolk’s assurance that Chinese law does not compel Huawei to cooperate with Chinese authorities will give scant comfort to governments with existing concerns about cyber-espionage.  Moreover, his frank acknowledgement that Chinese authorities could hack into Huawei technology, regardless of the state of Chinese law, will undoubtedly be cited by U.S. authorities in their continuing offensive against Huawei.  Suffolk’s testimony will not be the last word on the potential cybersecurity risks that Huawei poses, but it could well influence the pace and direction of that debate.

European Commission Reviewing Past Money-Laundering Cases, With Possible Anti-Money Laundering Rule Changes in Mind

On June 6, Reuters reported that, according to a European Union official, the European Commission (EC) “is reviewing past money-laundering cases at EU banks to assess what went wrong and decide possible tweaks to rules . . . .”  The review is reportedly “part of a broader plan to improve the European Union’s approach to combating money laundering,” after a number of banks in Cyprus, Denmark, Estonia, Latvia, Luxembourg, Malta, and the Netherlands experienced substantial problems with anti-money laundering (AML) compliance.

The EU official indicated to Reuters that the review includes three categories of financial institutions:

  1. Firms that met unexpected ends after reported money-laundering concerns, such as Maltese bank Pilatus, whose license the European Central Bank (ECB) revoked in 2018 after its chairman was arrested on money laundering charges in the United States, and Latvian bank ABLV, which the ECB deemed “failing or likely to fail” in 2018;
  2. Banks “that have been at the center of large scandals such as Danske Bank”; and
  3. Multinational banks Deutsche Bank and Societe Generale, for reasons the official did not explain.

As part of the plan, the EC reportedly “is assessing cases between 2012 and 2018 with the aim of producing a report this summer that identifies the factors that contributed to the banks’ failure in preventing financial crime.”  The official also stated that missions to EU states “are under way,” but did not clarify “whether reviews are conducted at the banks themselves or only with their national supervisors.”

Note: Chief compliance officers at financial institutions, both in the European Union and elsewhere, should watch for follow-up information on the EC’s review, including the expected report to be released this summer.  The fact that the EC is only now conducting such a comprehensive review of banks with AML failures indicates how far the EU has yet to go in developing and implementing an improved AML regime across Europe.

LabCorp Discloses Data Breach of Third-Party Provider, Affecting 7.7 Million People

On June 4, LabCorp, which describes itself as “The World’s Leading Health Care Diagnostics Company,” informed the Securities and Exchange Commission (SEC) in a Form 8-K that Retrieval-Masters Creditors Bureau, Inc. d/b/a American Medical Collection Agency (AMCA) – an external collection agency that LabCorp and other healthcare companies use – had notified it “about unauthorized activity on AMCA’s web payment page (the AMCA Incident). According to AMCA, this activity occurred between August 1, 2018, and March 30, 2019.”

LabCorp stated that it

has referred approximately 7.7 million consumers to AMCA whose data was stored in the affected AMCA system. AMCA’s affected system included information provided by LabCorp. That information could include first and last name, date of birth, address, phone, date of service, provider, and balance information. AMCA’s affected system also included credit card or bank account information that was provided by the consumer to AMCA (for those who sought to pay their balance).

LabCorp also stated that it “provided no ordered test, laboratory results, or diagnostic information to AMCA. AMCA has advised LabCorp that Social Security Numbers and insurance identification information are not stored or maintained for LabCorp consumers.”  According to LabCorp, AMCA has informed it that AMCA “is in the process of sending notices to approximately 200,000 LabCorp consumers whose credit card or bank account information may have been accessed.”

LabCorp further reported that AMCA had informed it

that it intends to provide the approximately 200,000 affected LabCorp consumers with more specific information about the AMCA Incident, in addition to offering them identity protection and credit monitoring services for 24 months. LabCorp is working closely with AMCA to obtain more information and to take additional steps as may be appropriate once more is known about the AMCA Incident.

In addition, LabCorp stated that AMCA “has indicated that it is continuing to investigate this incident and has taken steps to increase the security of its systems, processes, and data. LabCorp takes data security very seriously, including the security of data handled by vendors.”

Note:  LabCorp and another healthcare diagnostics company, Quest Diagnostics – which informed the SEC on June 3 that the AMCA breach had affected 11.9 million of Quest’s customers – demonstrate yet again the importance of companies’ regularly monitoring the data-security practices and measures of their third-party providers.  The reports that the AMCA breach lasted for roughly eight months in 2018-2019 indicate the importance of providers’ continuously maintaining robust data-security measures, and of companies’ frequently conferring with their providers about the soundness of those measures.

Bank of England Official Delivers Speech on Governance of Artificial Intelligence

On June 4, the Bank of England’s Executive Director of UK Deposit Takers Supervision, James Proudman, delivered a speech on “Managing Machines: the governance of artificial intelligence,” at the United Kingdom Financial Conduct Authority (FCA) Conference on Governance in Banking.

In his speech, Proudman first gave an overview “of the scale of introduction of artificial intelligence in UK financial services.”  He noted that artificial intelligence (AI) and machine learning (ML)

are helping firms in anti-money laundering (AML) and fraud detection. Until recently, most firms were using a rules-based approach to AML monitoring. But this is changing and firms are introducing ML software that produces more accurate results, more efficiently, by bringing together customer data with publicly available information on customers from the internet to detect anomalous flows of funds.

About two thirds of banks and insurers are either already using AI in this process or actively experimenting with it, according to a 2018 IIF survey. These firms are discovering more cases while reducing the number of false alerts. This is crucial in an area where rates of so-called “false-positives” of 85 per cent or higher are common across the industry.

ML may also improve the quality of credit risk assessments, particularly for high-volume retail lending, for which an increasing volume and variety of data are available and can be used for training machine learning models.
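
To make the mechanics concrete, the following is a purely illustrative Python sketch of the kind of ML-based alert triage described above. It is not drawn from Proudman’s speech or from any real AML system: the features, the synthetic data, and the escalation threshold are all assumptions chosen for illustration. A classifier is trained on hypothetical historical alerts, most of them false positives, so that investigators can prioritise the alerts most likely to be genuine.

# Illustrative sketch only: ML triage of rule-generated AML alerts on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical alert features: transaction amount, number of counterparties,
# share of cross-border flows, and an adverse-media score drawn from public sources.
n_alerts = 5_000
X = np.column_stack([
    rng.lognormal(mean=8.0, sigma=1.0, size=n_alerts),  # amount
    rng.poisson(lam=3, size=n_alerts),                   # counterparties
    rng.uniform(0.0, 1.0, size=n_alerts),                # cross-border share
    rng.uniform(0.0, 1.0, size=n_alerts),                # adverse-media score
])

# Synthetic labels: only a small share of rule-generated alerts are genuine,
# mirroring the high false-positive rates mentioned in the speech.
p_true = 0.05 + 0.35 * (X[:, 2] > 0.8) + 0.40 * (X[:, 3] > 0.85)
y = rng.uniform(size=n_alerts) < p_true

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train a classifier to rank alerts so likely false positives can be deprioritised.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
flagged = scores > 0.5  # escalation threshold; a real firm would tune this to its risk appetite
print(f"Precision on escalated alerts: {precision_score(y_test, flagged):.2f}")
print(f"Recall of genuine alerts:      {recall_score(y_test, flagged):.2f}")

In practice, the feature set and the escalation threshold would be tuned to the firm’s risk appetite and validated as part of the broader governance questions Proudman turns to next.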

But Proudman also cautioned that

[w]e need to understand how the application of AI and ML within financial services is evolving, and how that affects the risks to firms’ safety and soundness. And in turn, we need to understand how those risks can best be mitigated through banks’ internal governance, and through systems and controls.

In that context, he provided some comments on interim results from the survey that the Bank of England and the FCA sent in March 2019 to more than 200 United Kingdom financial firms.  First, he characterized the mood concerning AI implementation among firms regulated by the Bank of England as

strategic but cautious. Four fifths of the firms surveyed returned a response; many reported that they are currently in the process of building the infrastructure necessary for larger scale AI deployment, and 80 per cent reported using ML applications in some form.

Second, he commented that

barriers to AI deployment currently seem to be mostly internal to firms, rather than stemming from regulation. Some of the main reasons include: (i) legacy systems and unsuitable IT infrastructure; (ii) lack of access to sufficient data; and (iii) challenges integrating ML into existing business processes.

Not surprisingly, Proudman noted that “large established firms seem to be most advanced in deployment,” with “some reliance on external providers at various levels, ranging from providing infrastructure, the programming environment, up to specific solutions.”

Proudman also stated that 57 percent of the respondent firms regulated by the Bank of England

reported that they are using AI applications in risk management and compliance areas, including anti-fraud and anti-money laundering applications. In customer engagement, 39 per cent of firms are using AI applications, 25 per cent in sales and trading, 23 per cent in investment banking, and 20 per cent in non-life insurance.

By and large, firms reported that, properly used, AI and ML would lower risks – most notably, for example, in anti-money laundering, KYC and retail credit risk assessment. But some firms acknowledged that, incorrectly used, AI and ML techniques could give rise to new, complex risk types – and that could imply new challenges for boards and management.

Based on his observations, Proudman identified three challenges that AI/ML posed for boards and management in the United Kingdom financial sector:

  • Data quality, including the existence of “complex ethical, legal, conduct and reputational issues associated with the use of personal data.”
  • The role of people, with particular regard to the use of incentives and the introduction of human biases that can affect machines’ output. In that regard, Proudman warned that “it may even become harder and take longer to identify root causes of problems, and hence attribute accountability to individuals,” and stated that “[f]irms will need to consider how to allocate individual responsibilities, including under the Senior Managers Regime.”
  • Change, including change associated with “the extent of execution risk that boards will need to oversee and mitigate” as the rate of introduction of AI/ML in the financial services sector increases. Here, Proudman stated that “the transition to greater AI/ML-centric ways of working is a significant undertaking with major risks and costs arising from changes in processes, systems, technology, data handling/management, third-party outsourcing and skills.” He also commented that this transition “creates demand for new skill sets on boards and in senior management, and changes in control functions and risk structures,” and “may also create complex interdependencies between the parts of firms that are often thought of, and treated as, largely separate. As the use of technology changes, the impact on staff roles, skills and evaluation may be equally profound.”

From these three challenges, Proudman derived three principles for governance of AI/ML:

  • “[T]he observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.”
  • “[T]he observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.”
  • “[T]he acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.”

Note: Governance, risk, and compliance officers in financial firms should read and give careful consideration to the issues that Proudman raises.  Media reports have often seized on how AI has sometimes led to counterproductive and misleading results. Proudman’s remarks, however, identify a number of even more complex challenges that senior executives and boards will need to address – not least because United Kingdom regulators can be expected, over time, to hold firms accountable if they do not address the constantly changing array of AI/ML execution risks and the need to maintain individual accountability under the Senior Managers Regime.