What Are Black Box Models In Machine Learning?

When we think of a black box, the first thing that comes to mind is an object that is not transparent and whose inner workings are hidden.

This is exactly what black boxes are.

“Black box” is used as a metaphor in both computer science and engineering to describe a system that is difficult to explain or interpret. Such a system takes an input, performs complex calculations or actions, and then outputs a result.

In essence, black boxes are problematic: because we cannot explain exactly how they make decisions, they pose risks such as hidden bias, lack of trust, and more.

However, some organisations value black boxes precisely because they are proprietary: the complex black-box algorithm itself is their competitive asset.

Google’s AI search algorithm is one example; other organisations cannot replicate Google’s success partly because of the algorithm’s complexity.

In this article we review the landscape of black boxes and the current techniques being used to improve their transparency.


Why do we have black boxes in machine learning?

The term “black box” is used almost interchangeably with complex artificial intelligence, machine learning, or deep learning models, because these models are so complex that few people understand exactly how they work.

For example, when we use unsupervised learning to classify large datasets, the algorithm discovers hidden patterns and information without human intervention; we can then apply it to uncover further patterns in other datasets.

In doing so, its complexity grows to the point where the model itself becomes a black box: we can no longer gain a complete understanding of its inner workings.
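To make this concrete, here is a minimal sketch of unsupervised pattern discovery: a from-scratch one-dimensional k-means clustering. The data values and cluster count are made up for illustration; real models apply the same idea to far more features, which is where the opacity comes from.

```python
def kmeans_1d(points, k=2, iters=20):
    """Group points around k centroids without any labels."""
    # Start the centroids at the smallest and largest values.
    centroids = [min(points), max(points)][:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups hide in this unlabeled data.
data = [1.0, 1.2, 0.8, 9.5, 10.1, 9.9]
centroids, clusters = kmeans_1d(data, k=2)
print(sorted(round(c, 1) for c in centroids))  # [1.0, 9.8]
```

No human labelled the two groups; the algorithm found them on its own, which is exactly the behaviour that becomes hard to audit once the data has hundreds of dimensions.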

This is, of course, a simplified explanation of why we use black box models.

It is also often observed that as model accuracy increases, model complexity tends to increase with it. Most entities are rewarded for accuracy: the more accurate the model, the more money it makes.

Companies such as banks are therefore motivated by accuracy, and the negative effect is that the model becomes so complex that most individuals struggle to understand exactly how it works.

In summary, black boxes are used because they are good at:

  • Performing complex tasks efficiently
  • Reducing manual work
  • Reducing costs compared to manual work

Example: Black boxes used in bank loans

Before the 2008 financial crash, lending to individuals was relatively easy, and most applicants were accepted.

After the crash, many of these individuals defaulted on their loans. To avoid such risks, banks now use models such as credit scores to judge an individual’s ability to pay back a loan.

The models used to judge an individual’s credit score or loan approval are trained on massive amounts of data so that they can predict whether or not a customer will pay back a loan.

The information considered includes the profiles of other individuals, such as their repayment history and income. The trained model is then used to predict whether a new applicant will face the same scenario.

Although bank loan approvals are much more complex than this, that is in essence how they work.
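A minimal sketch of that idea, with entirely made-up applicant numbers: a tiny logistic-regression-style scorer trained by gradient descent on past applicants, then used to score a new one. Real credit models use vastly more features and data; the point is only the train-then-predict loop.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.1, epochs=2000):
    """Fit weights by plain per-sample gradient descent on log loss."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical features: (normalised income, fraction of bills paid on time).
applicants = [(0.9, 0.95), (0.8, 0.9), (0.7, 0.85),
              (0.2, 0.3), (0.3, 0.2), (0.1, 0.4)]
repaid = [1, 1, 1, 0, 0, 0]  # did each past applicant pay back?

w, b = train(applicants, repaid)
score = sigmoid(w[0] * 0.85 + w[1] * 0.9 + b)  # score a new applicant
print("approve" if score > 0.5 else "decline")  # prints "approve"
```

Even this two-feature toy already hints at the opacity problem: the learned weights, not a human-written rule, decide where the approve/decline boundary sits.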

Source: https://www.afr.com/companies/financial-services/banks-warned-using-ai-in-loan-assessments-could-awaken-a-zombie-20210615-p5814i

What are current issues with black boxes?

Because it is difficult to understand the intricacies of how a black box model works, it poses many risks. We illustrate these risks with the scenarios below:

Lack of transparency:

The main issue with black boxes is the model’s ambiguity. Because the model is not transparent, it is difficult for individuals to understand exactly how it works, which encourages a mindset of “trusting the process” or “trusting the model”.

This in turn leads to overconfidence and over-reliance.

Another issue with lack of transparency is the difficulty in explaining results to customers.

Take our bank loan scenario: if a customer is declined a loan, it is difficult for a bank manager to explain exactly what the customer needs to improve, because the model is too complex to say which variables drove the decision.

Bad data used in the black box:

Humans make mistakes, and it is a well-known fact that a model is only as good as the data it is trained on. If the data fed into a black box model’s pipeline is inaccurate, it creates risks such as:

  • Reputation damage to a business
  • Possible resignation and firing of individuals
  • Harms share price
  • Creates bad strategic decision making

And because many industries rely heavily on black box models to make decisions, overconfident use of such systems is widespread, and due to their complexity these issues may go unnoticed until they are investigated.

What are techniques used to improve the transparency of black boxes?

Interestingly, nearly “68 percent of business leaders believe that customers will demand more explainability from AI in the next three years” (IBM Institute for Business Value survey).

There is a growing research field called explainable artificial intelligence (XAI), which develops techniques to explain and interpret the outputs of AI models.

One good resource is IBM’s AI Explainability 360 toolkit, which helps users identify techniques and solutions for making black boxes more explainable.

Since there are multiple data types and organisations with different needs, there is no “one size fits all” for black boxes in machine learning; no single approach explains every black box.

However, there are some ways to make your black boxes more explainable, such as:

  • Static explanations
  • Interactive explanations
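As one sketch of a static explanation, here is permutation importance written from scratch: we query a black box model we cannot see inside, shuffle one feature’s column at a time, and report how much the predictions degrade. The model, features, and data below are all invented for illustration.

```python
import random

# Pretend black-box model: it heavily weights the first feature and
# barely uses the second. We only query it, never look inside.
def black_box(x):
    return 1 if 0.9 * x[0] + 0.1 * x[1] > 0.5 else 0

random.seed(0)
data = [(random.random(), random.random()) for _ in range(500)]
labels = [black_box(x) for x in data]  # treat its outputs as ground truth

def accuracy(rows):
    return sum(black_box(x) == y for x, y in zip(rows, labels)) / len(rows)

def permutation_importance(feature):
    """Shuffle one feature's column and measure the accuracy drop."""
    column = [x[feature] for x in data]
    random.shuffle(column)
    shuffled = [tuple(column[j] if i == feature else v
                      for i, v in enumerate(x))
                for j, x in enumerate(data)]
    return accuracy(data) - accuracy(shuffled)

# Feature 0 matters far more than feature 1 to this black box.
print(permutation_importance(0) > permutation_importance(1))  # True
```

The resulting importance scores can be reported as a static table or chart, which is exactly the kind of fixed, one-shot explanation the first bullet above refers to; an interactive explanation would instead let the user probe “what if” inputs themselves.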

References

[1] https://research.ibm.com/publications/one-explanation-does-not-fit-all-a-toolkit-and-taxonomy-of-ai-explainability-techniques

[2] https://www.afr.com/companies/financial-services/banks-warned-using-ai-in-loan-assessments-could-awaken-a-zombie-20210615-p5814i