Training employees in the use of cognitive intelligence technologies will greatly improve a firm’s chances of success.

Educate Your Team On Machine Learning To Reap Rewards

The author of a new report on the power of machine learning for auditors and accountants, commissioned by the largest accounting trade association in the U.K., tells Radar that firms that make their staff more comfortable with the technology will win in the long term, because automation is here to stay.

Better training of employees in the use of cognitive intelligence technologies will greatly improve a firm’s chances of success, according to a major new report from experts at the Institute of Chartered Accountants in England and Wales (ICAEW). The report argues that a broader understanding of algorithms, robotic processes, machine learning, natural language processing, and natural language generation is vital if corporations want to continue growing.

The ICAEW tech faculty said in its landmark study, Risks And Assurance Of Emerging Technologies, that adoption of “thinking tech” is a powerful opportunity for companies and can revolutionize how they operate.

The association told Radar that it was drawn to how auditors were using data analytics in areas such as forensic investigatory work, and wanted to consider the inverse: how well firms understand the powerful technology that is now on the market.

“These technologies can detect subtle patterns in data and make predictions about what might be coming down the line,” the report said. 

“Cognitive technology is bringing automation to business processes previously thought un-automatable, such as reviewing contracts, classifying images, or detecting inappropriate content.”

David Lyford‑Smith, ICAEW technical manager and the author of the report, told Radar that more input from compliance departments, enhanced human oversight, and regular review processes would help people across their companies better understand the capabilities of the technology they are handling.

“Many larger organizations struggle to stay on top of what cognitive projects they have going on – particularly as it becomes easier for small teams to do it themselves. That can also mean that getting a consistent set of standards for development, control, and monitoring into place is horribly difficult,” said Lyford-Smith. 

“And that has implications for accountability, too – if a rogue project goes wrong, is it the rogue developer that’s more at fault, or the chief technology officer that didn’t put in place a system to discover and control that project?”

Accountants and professional services firms must pay particular attention to sensitive issues of bias and data protection that can arise, added Lyford‑Smith, and “consider the impact of omissions, errors and biases encoded in that data early in the process.”

He said that machine learning and natural language processing are becoming ubiquitous tools in financial services, but they could also have an enormously positive effect on society when applied to areas such as healthcare, education, airport administration, and law enforcement. Taking this step requires a broader understanding, with corporates showing the way by improving their internal controls.

How To Train Your Algo

The exponential rise in the volume and complexity of data that exists is fueling the maturity of cognitive technology, which continuously learns from live data sets, the report said.

However, ensuring that an algorithm does not build on any incorrect knowledge of how to execute a task will be crucial to its future success. A machine does not follow human ethics when making a decision, and could be manipulated by fraudsters into acting unethically, the report found.

The study also stressed the central importance of feeding accurate data into the software and of creating transparent models that are not “black boxes,” so that any output can be explained. Inexplicable outputs can occur because robots learn in a different way to humans.

The report further explores some of the more general clichés and myths around the widespread adoption of automation software, addressing the misconception in popular culture that robots will put generations of people out of work.

Replacing humans with cheaper automated technology may seem appealing, but the business would also lose the invaluable experience and knowledge that come with human employees, the report found.

“No single control can prevent or detect all possible errors, but creating several good-quality control systems can help to reduce the overall possibility of error,” the report said. “These may range from entity-wide business processes that are considered at the board level to inbuilt systems of control embedded into specific cognitive solutions by the implementation teams coding them.”

Future Shocks

The report also notes how slow regulators have been to recognize the benefits of promoting a better understanding of automation.

Accounting and financial services regulators take the approach of “regulating the output over the process,” said Lyford-Smith.

However, that may change, given the potential for disruption and the divergence between how a human completes a task and how a machine does.

“The line from multiple different regulators here is almost always, ‘we regulate activities/outcomes, not processes/technology, so this isn’t relevant’,” Lyford-Smith said. 

“They say that they are in the business of regulating finance whatever they’re regulating, but that they don’t care what method the regulated organizations or persons are using. I don’t buy this. For starters, I came across this line repeatedly in regulators’ reports about technology. They are clearly aware that this makes a difference. But more than just making a difference, I think the cognitive technology approaches aren’t just new ways of doing old tasks but have the potential to be completely new ones. The saying ‘A difference in amount becomes a difference in kind’ is one I like in this context.”

While cognitive technology is currently the largest area of interest for many businesses, several other emerging technologies are making their impact felt, and are poised to increase in importance in the coming years, the report found.

As such, the approach taken in the report ensures the issues and responses around cognitive technology can be applied to other emerging technologies, even those not yet invented.

“It is interesting how relatively few organizations have made an entry into the assurance of the artificial intelligence technology market as of yet,” said Lyford-Smith.

“While it has some significant technological and technical barriers to overcome, there’s definitely a need for assurance in this area as the risks are still relatively under-considered.”

He said that it was often the case that companies developing cognitive technologies have to explain to AI adopters that risks within the software do exist, and this might warrant assurance and extra training.

Managing Risks

According to Lyford-Smith, simpler controls with similar aims to human review could utilize a control collar, with a maximum and minimum that the system cannot override on its own. Kill switches attached to manual or automatic triggers could also immediately suspend the cognitive technology’s operation.

“These prevent runaway errors and feedback loops from creating outsized results, the kind suspected to be behind the so-called ‘Flash Crash’ of 2010, when high-frequency trading algorithms accelerated a market dip into a 9% decline within a few minutes,” the report said. “Such kill switches, called circuit breakers, are now in place at many exchanges to suspend trading automatically if unusually sharp swings in prices are detected.”
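The collar and kill-switch controls described above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the report: the class and function names, the bounds, and the swing threshold are all hypothetical, and a real deployment would wire the manual and automatic triggers into the production system.

```python
class KillSwitchTripped(Exception):
    """Raised when a manual or automatic kill switch suspends operation."""


def collar(value, lower, upper):
    """Clamp a model output to a range the system cannot override on its own."""
    return max(lower, min(upper, value))


class CollaredModel:
    """Wraps any model callable with a control collar and a kill switch."""

    def __init__(self, model, lower, upper, max_move):
        self.model = model          # any callable returning a number
        self.lower = lower          # collar minimum
        self.upper = upper          # collar maximum
        self.max_move = max_move    # largest allowed step between outputs
        self.killed = False
        self.last = None

    def kill(self):
        """Manual trigger: immediately suspend the system."""
        self.killed = True

    def predict(self, x):
        if self.killed:
            raise KillSwitchTripped("operation suspended")
        out = collar(self.model(x), self.lower, self.upper)
        # Automatic trigger: an unusually sharp swing between consecutive
        # outputs suspends operation, much like an exchange circuit breaker.
        if self.last is not None and abs(out - self.last) > self.max_move:
            self.killed = True
            raise KillSwitchTripped("swing exceeded limit")
        self.last = out
        return out
```

Keeping the clamp and the breaker outside the model itself means they hold even if the underlying model is retrained or misbehaves.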

Whether a machine learning system is trained only once, or continues to evolve while in live operation, it is important to keep up a regular review of its fitness for purpose, said Lyford-Smith. 

“As well as keeping a human in the loop for specific decisions, it may be appropriate to implement a regular larger-scale review of the model, its influences, and its decision-making,” he said. 

“For example, this could consist of a regular review of trends in the model’s high-level outputs, plus a spot review of a sample of particular decisions.”
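That review pattern, a trend check on high-level outputs plus a random spot sample of individual decisions, could be sketched as follows. The function names, thresholds, and decision encoding are illustrative assumptions, not from the report.

```python
import random


def trend_alert(current_outputs, baseline_rate, tolerance=0.1):
    """Flag when the share of positive decisions drifts beyond tolerance.

    current_outputs: recent decisions encoded as 1 (positive) / 0 (negative).
    baseline_rate: the historically expected positive-decision rate.
    Returns (alert, observed_rate).
    """
    rate = sum(current_outputs) / len(current_outputs)
    return abs(rate - baseline_rate) > tolerance, rate


def spot_sample(decisions, k=5, seed=None):
    """Draw k decisions at random for manual human review."""
    rng = random.Random(seed)
    return rng.sample(decisions, min(k, len(decisions)))
```

The trend check catches gradual drift that no single decision would reveal, while the spot sample keeps a human in the loop on concrete cases.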


For more stories, download the full edition of Radar 9 here.