Thanks to falling prices and technological advances, even small companies are beginning to reap the rewards of machine learning and artificial intelligence. But what goes into these systems and, more importantly, why do they produce the answers they do?
This article looks at what is inside AI's black box, why regulation has not kept pace with evolving algorithms, the tendency of AI pricing algorithms to collude, and the impact this technology could have on your brand.
What is Black Box AI?
When using artificial intelligence and machine learning, businesses are not always clear about why the AI behaves in a certain way or why it has made a particular decision. This may be because the AI has not been programmed correctly, or because the business does not understand how the AI is learning.
When a human is unable to understand how machine learning has reached a decision, it is known as a ‘black box’. To have confidence in the decisions AI is making, it is crucial that there is a clear foundation of understanding behind them and that the outcomes can be explained.
Price fixing is becoming one such ‘black box’ situation: it is increasingly difficult for some companies to explain how specific prices have been arrived at.
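To see how a price nobody chose can emerge, consider a minimal, entirely hypothetical sketch of two automated repricers that react only to each other, a pattern reminiscent of the famous case of two textbook sellers on Amazon whose interacting bots drove a biology book's price into the millions. The multipliers and rules below are illustrative assumptions, not any real company's repricing logic:

```python
# Hypothetical sketch: two automated repricers reacting only to each other.
# Seller A prices slightly ABOVE its rival (e.g. to signal quality or rely on
# better ratings); seller B prices just under A to win the sale. Neither rule
# looks unreasonable on its own, but together they push the price far from
# anything either seller intended -- and neither can easily explain why.

def simulate(rounds: int, start_b: float = 10.0):
    """Run the two repricing rules against each other for `rounds` rounds."""
    b = start_b
    history = []
    for _ in range(rounds):
        a = round(1.27 * b, 2)   # A: price 27% above the rival's last price
        b = round(0.998 * a, 2)  # B: price just under A's new price
        history.append((a, b))
    return history

if __name__ == "__main__":
    for i, (a, b) in enumerate(simulate(10), start=1):
        print(f"round {i:2d}: A = {a:>8.2f}  B = {b:>8.2f}")
```

Because the combined multiplier per round is about 1.27 × 0.998 ≈ 1.27, the price grows exponentially even though neither rule mentions raising prices at all.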
Algorithms, AI, and regulation
Across the globe, and even in the free-market heartlands of the USA, governments legislate against anti-competitive agreements between companies (cartels), sometimes called price fixing. Indeed, price fixing is perhaps the one monopolistic activity the US will always consider prosecuting, and with anti-trust laws in place since the 1890s, it is well practised at prosecuting traders of all kinds, large and small.
In spite of this, selling driven by algorithm-powered bots poses a host of new challenges to anti-trust law, even in the USA. This is largely due to the black box nature of many AI programmes, which can mine vast amounts of data in real time with very little programmer involvement. Understanding what is going on inside the algorithm, and who is responsible for it, becomes much murkier, effectively prosecution-proofing companies against cartel-like behaviour.
AI and the tendency for algorithms to collude on pricing
A recent study found that even basic pricing algorithms systematically learn to behave collusively, and that a third of Amazon sellers were already using a pricing algorithm in 2016. What does this mean in practice? Essentially, the increasing use of AI in pricing mechanisms can lead to customers being overcharged. Uber's surge pricing at peak times, and the more than $400 Uber ride to drop off Jerry Seinfeld's children during a snowstorm, give an indication of how this happens every day.
Yet even though price fixing can now happen without explicit collusion between competitors (who might simply recognise that higher industry-wide prices suit them all), little has been done to close this gap in anti-trust law, a gap that the use of bots only widens. Regulation is simply not meeting the demands this rapidly expanding approach to pricing places on anti-trust enforcement.
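The mechanism behind such tacit collusion can be sketched in a few lines. In this hypothetical example (the rules and prices are illustrative assumptions, not a real marketplace's behaviour), each seller's repricer simply refuses to undercut: it matches the rival's last price whenever the rival is pricing higher. No message ever passes between the sellers, yet both end up charging the highest price either of them ever posted:

```python
# Hypothetical sketch of tacit coordination: neither seller communicates,
# but each follows a simple "never undercut, match upward" rule. The result
# is that both settle at the higher of the two starting prices -- a
# collusive-looking outcome with no agreement anywhere to point at.

def reprice(mine: float, rivals_last: float) -> float:
    """Match the rival's price upward; never move down."""
    return max(mine, rivals_last)

def run(price_a: float, price_b: float, rounds: int = 5):
    """Let the two repricers react to each other for a few rounds."""
    for _ in range(rounds):
        price_a = reprice(price_a, price_b)
        price_b = reprice(price_b, price_a)
    return price_a, price_b

if __name__ == "__main__":
    a, b = run(9.99, 14.99)
    print(f"A settles at {a}, B settles at {b}")  # both settle at 14.99
```

This is exactly the regulator's problem described below: there is no ledger of concerted action to point to, only two independent rules whose interaction produces a cartel-like price.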
Algorithm pricing risks for your brand
The fundamental difficulty with algorithms is the absence of any record of concerted action demonstrating an intent to collude. The same problem has appeared in areas such as credit risk, health status, and retail, where black box AI is used to make decisions without revealing why, raising serious concerns about transparency and possible bias. This is especially true because machine-learning decision making is built from the record of human activity and is therefore likely to reflect human biases.
With action against cartel-like behaviour always popular, it is easy to see how deploying this sort of algorithm could damage your brand among consumers. There is also growing traction among regulators for taking a strict approach to cartel conduct carried out by AI algorithms. Nevertheless, there is plenty of room for the regulatory environment to improve, and it needs to improve much faster.
How to reassure customers when using AI?
Undoubtedly, unethical price fixing can lead to a lack of trust from your customer base, and a lack of trust can be devastating for any company. To avoid this, it is essential to be transparent with customers about the role technology and AI play within your company. Make it clear how you set prices and what part AI plays in the prices customers receive.
At the same time, be wise about how you use AI, and make sure your team of developers is entirely in tune with the what, why, and how of any machine learning algorithms your company is using.
The use of AI brings some incredible opportunities, and, understandably, companies want to cash in on such technological advances. However, it is important that such developments are dealt with responsibly and that humans keep up with how machine learning is developing.
The case of price fixing is one example of AI moving beyond its programmers' understanding, resulting in customers being overcharged and in possible breaches of legislation in various countries. This is certainly an area to watch: without proper monitoring and regulation, it could leave a lot of dissatisfied customers, and big brands facing major dents to their reputation.