
Exploiting the power of new technologies and new capabilities often demands that we deny something equally forceful—human nature.

Our natural approach to problem-solving is additive: stir in more data, layer in more sophisticated modeling, work through every contingency. Our tendency is to puzzle out what we think is missing, solve every edge case, add those pieces and give ourselves a pat on the back.

While that’s the first instinct, it’s not necessarily the best course to follow when leveraging AI to solve business problems.

A recent Washington Post article, based on research published in the scientific journal Nature, explores this phenomenon and demystifies why humans are wired to add complexity, even when doing so runs counter to our best interests, our goals and aspirations. When we approach problem-solving, we default to adding complexity—rather than seeking out ways to simplify.

Adding just feels better than subtracting or simplifying because numerical concepts of “more” and “higher” map to “positive” and “better,” even when the ultimate outcome proves to be neither.

To harness the full power of AI, it’s best to recognize human tendencies and to question the value of adding complexity not only when designing and implementing solutions but also when troubleshooting an AI solution that’s not well adopted.

Bolstering the build

When designing an AI solution, higher accuracy is the undisputed goal, but it can't be achieved at the expense of transparency and intuition. Increasingly sophisticated data modeling is irresistible, but when it adds excessive complexity, we have to ask: Was the incremental accuracy gain nullified by the added complexity?

More data is better. It's accessible, increasingly abundant and can make algorithms more accurate. But consider the cost of simply adding more data without understanding the ramifications. We need to scrutinize the data being added and understand its implications to ensure we're not introducing bias. We also need to understand the engineering and other requirements if many complicated features must be derived from the data in real time to power, for example, a next-best-recommendation algorithm.

To appreciate the real cost of complexity, look no further than the famous Netflix Prize, a machine learning and data mining competition that had coders around the world salivating for the $1 million award. Netflix paid up but never used the winning movie recommendation algorithm, saying on its blog that too much engineering was needed to achieve the accuracy gains.

Implementing a solution

When bringing a solution to life—implementing and operationalizing it—think simple. Think modular, like Lego blocks. Be patient and open to building the solution over time, as opposed to designing the end state and working to implement the most complete picture (which brings its own complications).

Fixing when things don’t fly right

When AI fails to meet expectations, our tendency is to make the case for why the solution just needs a little something more added because human nature dictates more is better. However, successful adoption is possible only when end users clearly understand the models and the process; transparency is vital to adoption. If additions increase complexity and opacity, we’re just going in the wrong direction. The goal should be to simplify.

I am not suggesting that additional complexity is never worth it. But the first instinct to add more may not always be the right one. Is the incremental value offset by the cost of that complexity in the form of lost transparency?

And sometimes it simply isn't worth adding complexity for its own sake.

It’s useful to return to the basics: AI is an enabler first and foremost. Its role is to dissolve complexity. Human nature—indeed, our deep-rooted psyche—may whisper otherwise, but it is possible to master an instinct that does not serve us well.

Arun Shastri