Automation fixes small errors – but it can lead to disaster

  1. Francois Badenhorst

    Francois Badenhorst Business Editor, UKBF & AWEB Staff Member


    Automation can help finance teams thrive, but lessons from other industries and everyday life offer a warning: AI tunes out small errors while creating opportunities for bigger ones.

    There’s a park in South Bristol with hidden steps. Greville Smyth Park, right next to Ashton Gate Stadium, is usually brimming with families, but at night, it’s empty and dark.

    The steps in question are the sort of landmark only a local would know about. So if Google Maps directed you through the park on your bike, you’d know better. That’s not a path, those are stairs.

    A while back, my housemate told me how he watched a Deliveroo rider - one of those aquamarine jumper-wearing bike elves who bring fast food to hungover people - zip past him in Greville Smyth, only to crash down the unexpected, staggered steps.

    Google Maps, clearly, had told this man that the park was the fastest route. And time, especially in the gig economy, is of the essence. The Deliveroo rider’s fate is a particularly modern mistake.

    He had no reason not to trust Google Maps. It’s always been right, right? But he would’ve saved himself a nasty fall if he had just looked.

    Computers don’t just help us anymore - they tell us what to do and how to get there. Users trust them now. ‘Just Google it’. And as these systems have seeped into our private lives, they have begun to filter into business too.

    The pilot and the dog

    “There’s this joke in the aviation industry,” Seb Dewhurst, director of business development at EASA Systems, told me recently. “It goes: ‘In the future, there won’t be a pilot and a co-pilot. It’ll be a pilot and a dog. The pilot is there to feed the dog, and the dog is there to bite the pilot in case he tries to touch the controls’.”

    This joke has been doing the rounds since the early 90s, according to Dewhurst, a keen amateur pilot, and it exemplifies how advanced automation in aviation has become. Passenger planes these days are complex machines that can fly largely autonomously.

    Airplane automation occupies a strange place in the popular consciousness. It’s widely accepted and trusted by travellers - and yet the famous ‘autopilot’ has seeped into the public lexicon as shorthand for operating automatically, without thinking about what you’re doing.

    ‘Oh, sorry, my brain was on autopilot’ is often used as an apology after you mess up some easy task. You were doing it automatically and, by the time your brain clicked into gear, it was too late.

    Autopilot’s second use in colloquial English is an implicit acknowledgement of a uniquely modern problem: the paradox of automation. The better automatic systems become, the less practice humans get at the task - and the less adept they are on the rare occasions when they must take over manually.

    To keep going with the airplane analogy, the paradox of automation is perhaps best demonstrated by the tragic crash of Air France Flight 447 in 2009: the plane’s airspeed sensors iced over, the automatic systems disengaged, and the pilots - desperately confused - manually guided the plane into the Atlantic Ocean.

    But it’s not just air disasters: technology affects us in more everyday ways, too. Our memories, for instance, have changed quite fundamentally. With near-ubiquitous online access, it’s become unnecessary to retain knowledge we can look up in seconds.

    Could a robot be a colleague?

    An AI was the star of the show at the recent Unit4 conference in Amsterdam. The Dutch software house was just the latest to roll out its AI voice assistant, Wanda. Still in its nascent form, Wanda is limited, but from speaking to Unit4’s senior team, it’s clear they have big ambitions for it.

    “We’re focusing on AI-based functionality and Wanda is one of the use cases,” explained Matthias Thurner, founder of the ERP and forecasting tool Prevero, which Unit4 acquired in 2016. “Wanda will allow you to literally talk to your system. You can ask it: ‘Hey, what do the revenues in the UK look like?’”
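
    To make the idea concrete, here is a minimal sketch of the pattern Thurner describes: a plain-language question is matched to an intent and translated into a data lookup. Everything in it - the function, the revenue table, the figures - is hypothetical; this isn’t Unit4’s Wanda, and a real assistant would add speech recognition and a far richer language model, but the question-to-query shape is the same.

        # Toy question-to-query assistant. All names and figures are
        # invented for illustration; this is the general shape, not Unit4's API.
        import re

        # A stand-in for the finance system's revenue data, keyed by region.
        REVENUES = {"uk": 1_250_000, "germany": 980_000}

        def answer(question: str) -> str:
            """Match a 'revenues in <region>' style question and answer it."""
            match = re.search(r"revenues? in (?:the )?(\w+)", question.lower())
            if not match:
                return "Sorry, I didn't understand that."
            region = match.group(1)
            if region not in REVENUES:
                return f"I have no revenue figures for '{region}'."
            return f"Revenue in {region.upper()}: £{REVENUES[region]:,}"

        print(answer("Hey, what do the revenues in the UK look like?"))
        # -> Revenue in UK: £1,250,000

    Note that the sketch only reads data and reports back; it takes no action on its own - a distinction that matters later in this piece.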

    It’s clear Unit4 - and other software companies - envision a workplace where these AIs aren’t just assistants but colleagues, working in a self-directed way and even acting autonomously.

    But isn’t that a tad risky? “It’s intimidating because you definitely learn to depend on the system,” said Thurner. “Especially with AI. There are some areas where the system might come up with a suggestion for you - but it cannot explain to you why.

    “If a colleague makes a suggestion, you can ask them why and they can explain - but that’s something an AI can’t really do.” The solution, according to Thurner, is an old-fashioned human quality: trust. “The only way to trust an AI is to use it, and if you have good experiences, then you will start to trust it.”

    But this doesn’t quite solve the corrosive impact automation can have on the human at the helm. We aren’t living in the era of genius-level, general AIs (that’s still years away). Any contemporary AI still needs experienced oversight: a person to step in when things go south. But manual operation requires practice and experience.

    As the psychologist James Reason put it in his book Human Error: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills.”

    For a modern accountant, relying heavily on software and automation, that’s a scary thing to hear. “I agree with the question,” said Thurner when asked if AI is making life more convenient but also potentially more hazardous for accountants.

    “But you can’t stop technology evolving further. If you look at the advantages, I think they’re bigger than the disadvantages. Look at the aviation example: if you look at the number of accidents we’ve had in Europe, it’s almost zero during the last years. The systems they use are so good that they’re way better than humans in many situations.”

    Trust me, I’m a bot

    A grim item popped up on the news recently: a prototype self-driving Uber car hit and killed a woman pushing a bicycle across a road in Tempe, Arizona. It turns out the self-driving car “saw” her, but filtered her out as a false positive in its detection system.
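
    The mechanics of that failure are easy to sketch. Below is a toy detection filter - hypothetical names and thresholds throughout, not Uber’s actual perception stack - showing how a pipeline tuned to suppress low-confidence detections (to avoid braking for shadows and plastic bags) can also suppress a real person.

        # Toy perception filter: discards low-confidence detections as
        # false positives. Threshold and names are invented for illustration.
        from dataclasses import dataclass

        @dataclass
        class Detection:
            label: str         # what the classifier thinks it saw
            confidence: float  # classifier confidence, 0.0 to 1.0

        FALSE_POSITIVE_THRESHOLD = 0.8  # hypothetical tuning value

        def should_brake(detections: list) -> bool:
            """Brake only for detections the filter considers 'real'."""
            real = [d for d in detections if d.confidence >= FALSE_POSITIVE_THRESHOLD]
            return any(d.label in {"pedestrian", "cyclist"} for d in real)

        # A genuine pedestrian seen with low confidence is filtered out,
        # so the car never reacts - the failure mode described above.
        frame = [Detection("pedestrian", 0.55), Detection("plastic_bag", 0.30)]
        print(should_brake(frame))  # -> False: no braking, despite a real person

    Lower the threshold and the car brakes for ghosts; raise it and it ignores people. Tuning that trade-off is exactly the kind of judgement the human in the seat is still there to back up.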

    The safety ‘driver’ could have corrected the car - but trusted its systems and didn’t act. This appears to be a symptom of a wider problem, rather than a one-off. Recent research by Jaguar Land Rover suggested that the design of these self-driving cars induces operators to over-trust their vehicles.

    Humans are notoriously inconsistent at long-term monitoring tasks. Now that drivers operate “hands and feet free”, the research said, they aren’t being adequately supported in their new monitoring responsibilities. The end result is complacency and over-trust. “These attributes may encourage drivers to take more risks whilst out on the road,” the study concluded.

    This is a warning shot for finance teams across the globe. At the most recent Accountex, the exhibition floor was packed with applications that can automate tasks. Some, like Fluidly, even have a machine learning element where the software can learn from data.

    When he spoke to us at Accountex, the accountant Chris Hooper noted he’s comfortable with AI at the moment because it’s ‘not self-executing’. “A lot of the AIs we’ve played with at Accodex [Hooper’s firm] are centred on bringing things to the accountant’s attention. Then the human can exercise professional judgement and go ‘Do I need to take action on this - or can I snooze that alert?’”

    But what about the medium-term future? “Ask me again in five years’ time when it becomes self-executing parameters and it’s completely removed from our control. Right now, it’s not that scary of a prospect.”
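
    Hooper’s distinction - alerting versus self-executing - is easy to express in code. The sketch below is hypothetical (the names and the alert are invented), but it shows the design he describes: the AI’s only power is to raise an alert, and a human decides whether to act or snooze.

        # Human-in-the-loop alerting: the AI flags, the human decides.
        # All names here are invented for illustration.
        from dataclasses import dataclass, field

        @dataclass
        class Alert:
            message: str
            snoozed: bool = False

        @dataclass
        class AlertQueue:
            alerts: list = field(default_factory=list)

            def raise_alert(self, message: str) -> None:
                """The AI's only power: bring something to a human's attention."""
                self.alerts.append(Alert(message))

            def review(self, act) -> None:
                """A human walks the queue, exercising professional judgement."""
                for alert in self.alerts:
                    if not alert.snoozed:
                        act(alert)

        queue = AlertQueue()
        queue.raise_alert("Debtor days for client X up 40% this quarter")
        queue.review(lambda alert: print(f"Needs judgement: {alert.message}"))

    The moment raise_alert is allowed to trigger an action itself, the human drops out of the loop - which is precisely the five-year prospect Hooper is wary of.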

  2. Priyanka Agarwal

    Priyanka Agarwal UKBF Newcomer Free Member

    AI and machine learning are sure to change the way we do things. They have the potential to bring about a drastic improvement in customer service and many of the other interfaces between services and users. However, like all technologies, they have their flip sides as well, and much depends on how they are used by humans.
     
    Posted: Jun 14, 2018 By: Priyanka Agarwal Member since: May 24, 2018
  3. AugustinC

    AugustinC UKBF Newcomer Free Member

    After three decades in IT, most of them in software development, I deeply mistrust automatic systems when it comes to complex tasks where human lives are at stake!
    Here is why:
    a computer is nothing more than a repetitive machine operating according to a set of human-written instructions, so software is only as predictable as the scenarios imagined by the people who created it, while real life is anything but predictable!
    So, in my opinion, any human-created AI will never be absolutely fail-proof, because we humans will never be fail-proof ... and let’s stop here, without talking about electric power shortages :cool:
     
    Posted: Jun 24, 2018 By: AugustinC Member since: Jun 22, 2018