AI regulation | Italy hits two food delivery companies with hefty fines over rider algorithm discrimination

What I’m sharing with you today is a news report: Italy’s personal data protection authority has fined two food delivery companies millions of euros in total for algorithmic discrimination against delivery riders.

Italy’s personal data protection watchdog has fined two of the country’s largest online food delivery companies, Deliveroo and Foodinho, millions of euros for using algorithms that discriminate against some “gig economy” workers.

The regulator said both companies assessed the job performance of the couriers they hired using artificial intelligence (machine learning) algorithms, and that couriers could be penalized on the basis of those assessments. But the algorithms remain confidential, and couriers cannot appeal such assessments. In addition, the regulator said the companies could not prove their algorithms were not discriminatory.

As a result, Italy’s data protection agency, Garante, on Monday found Deliveroo in breach of the European Union’s General Data Protection Regulation and imposed a fine of 2.5 million euros ($3 million).

This comes after Garante announced on July 5 that it had fined online food delivery platform Foodinho 2.6 million euros ($3.1 million) for GDPR violations following an investigation into the company’s Italian operations. It also issued an injunction requiring specific improvements.

“These two cases hold particularly important lessons for tech companies and show some of the conflict between AI and the GDPR,” said attorney Jonathan Armstrong, a partner at London-based law firm Cordery.

Deliveroo did not immediately respond to a request for comment on Garante’s findings, nor did Foodinho, which is said to be planning to appeal the fine.

Investigation of Foodinho

Foodinho is an on-demand food delivery service owned by Glovoapp23, based in Barcelona, Spain. Garante searched the company over two days in June 2019 as part of a joint investigation with Spain’s data protection agency, known as the AEPD. An AEPD investigation into Foodinho’s Spanish operations is ongoing. The company also operates in 22 other countries across Africa, Europe, Asia, and Central and South America.

Garante said it found that the algorithms the company used to manage its Italian workers, booking orders for them and assigning deliveries, violated those workers’ rights.

Garante reports that all delivery people, or “riders” (usually bicycle or moped drivers), initially receive a default score that is then adjusted based on the following characteristics and weights; a brief scoring sketch follows the list:

Customer feedback (thumbs up or thumbs down) – 15%.

Merchant feedback – 5%.

Working during periods of high demand – 35%.

Orders delivered – 10%.

Productivity – 35%.
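To make the weighting concrete, here is a minimal sketch of how such a composite score could be computed. The weights are the ones Garante reported; the function name, the default starting score, and the assumption that each factor is normalized to a 0–1 range are illustrative, since the actual system is confidential.

```python
# Minimal sketch of the weighted rider score Garante describes.
# The weights come from the regulator's report; the default score and
# the 0-1 normalization of each factor are assumptions for illustration.

WEIGHTS = {
    "customer_feedback": 0.15,   # thumbs up / thumbs down
    "merchant_feedback": 0.05,
    "high_demand_shifts": 0.35,  # working during periods of high demand
    "orders_delivered": 0.10,
    "productivity": 0.35,
}

def rider_score(factors: dict[str, float], default: float = 0.5) -> float:
    """Combine factors (each normalized to [0, 1]) into a single score.

    Missing factors fall back to an assumed default, mirroring the
    report's note that riders start from a default score.
    """
    return sum(w * factors.get(name, default) for name, w in WEIGHTS.items())

# Example: strong customer feedback but little work at peak times.
print(rider_score({"customer_feedback": 0.9, "high_demand_shifts": 0.2}))
```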

Workers with higher scores gain the ability to get new orders before others. But Garante found that the company’s practices failed to respect workers’ rights.

“For example, the company did not adequately inform workers about the operation of the system, nor did it guarantee the accuracy and correctness of the results of the algorithmic system used to assess riders,” the regulator said. “Nor did it guarantee procedures to protect the right to obtain human intervention, to express one’s point of view, and to challenge decisions made through the use of the algorithms in question, including the exclusion of a portion of riders from job opportunities.”

As a result, Garante ordered Foodinho to revise its systems to verify that its booking and allocation algorithms do not discriminate against riders, and to shorten its overly long personal data retention periods.

Garante ordered the company to pay a fine of 2.6 million euros, based not only on the AI issues but also on its failure to appoint a data protection officer and to keep adequate records. It said the amount of the fine also took into account “the limited cooperation provided by the company during the investigation and the large number of riders involved in Italy – approximately 19,000 at the time of inspection”.

Safeguards Against Automated Decision Making

Cordery’s Armstrong said the algorithm Foodinho was using was found to violate Article 22 of the GDPR, which deals with “automated individual decision-making, including profiling”.

“Under Article 22 of the GDPR, individuals have the right not to be subject to decisions, including profiling, that are based solely on automated processing and that have legal effects on them or similarly significant impacts, unless certain exceptions apply and specific safeguards for those individuals are in place,” the lawyer said.

“The investigation found that the platform’s use of algorithms to automatically penalize riders, by excluding them from job offers if they were rated below a certain level, was discriminatory, and there was no opportunity for human review and no ability to challenge the decision, which violates the GDPR.”
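To illustrate what such a safeguard might look like in practice, here is a hedged sketch in which an automated score can only ever propose a penalty that a human reviewer must approve, with every proposal recorded so the rider can contest it. The threshold, the names, and the review-queue design are assumptions for illustration, not a description of either company’s system.

```python
# Sketch of an Article 22-style safeguard: the automated score alone never
# applies a penalty; low scores are queued for human review instead.
# The threshold and all names here are illustrative assumptions.

from dataclasses import dataclass, field

PENALTY_THRESHOLD = 0.4  # assumed cut-off below which a penalty is proposed

@dataclass
class ReviewQueue:
    pending: list[dict] = field(default_factory=list)

    def submit(self, rider_id: str, score: float, reason: str) -> None:
        # A human reviewer must approve (and may override) each proposal.
        self.pending.append({"rider": rider_id, "score": score, "reason": reason})

def propose_penalty(rider_id: str, score: float, queue: ReviewQueue) -> bool:
    """Return True if a penalty was proposed -- never applied automatically."""
    if score < PENALTY_THRESHOLD:
        queue.submit(rider_id, score, "score below threshold")
        return True
    return False

queue = ReviewQueue()
propose_penalty("rider-42", 0.3, queue)  # queued for human review, not enforced
```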

Investigation of Deliveroo

Meanwhile, on Monday, Garante announced its order against Deliveroo, dated July 22.

Founded in 2013, London-based Roofoods operates under the Deliveroo name, not only in Italy and the UK but also in the Netherlands, France, Belgium, Ireland and Spain, as well as Australia, Singapore, Hong Kong, the United Arab Emirates and Kuwait.

In June 2019, as part of an investigation into food delivery businesses, Garante raided Deliveroo’s office, gathering information and conducting interviews over two days.

Here’s how the system works, according to Garante’s penalty notice: Deliveroo’s Italian operations rely on a centralized system hosted in a data center in Ireland to support 8,000 riders who are self-employed contractors. Each rider signs an agreement with Deliveroo and then gets an app that they must install on their mobile device and use during their shifts.

For Deliveroo’s Italian operations, managers feed information into a central system in Ireland, which rates rider performance, but doesn’t reveal the logic being used. Deliveroo told the regulator that “it only has access to the data it can influence, feeding the shared database, without deciding the logic of the processing”.

Garante found that the information used to rate riders included the following; a sketch of how such factors might be derived appears after the list:

Whether the rider is available to work during “critical times,” such as Friday, Saturday and Sunday evenings;

Whether the rider completes orders as booked, or cancels after an order has started;

How quickly the rider delivers orders.
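As a rough illustration of how factors like these might be derived from raw order records, here is a minimal sketch. The field names, the definition of “critical times,” and the speed metric are all assumptions; Garante’s notice does not disclose Deliveroo’s actual logic.

```python
# Illustrative sketch of deriving the three rating inputs described above
# from raw order records. Field names, the "critical times" definition, and
# the speed metric are assumptions; Deliveroo's actual logic is not public.

from datetime import datetime

CRITICAL_DAYS = {4, 5, 6}  # Friday, Saturday, Sunday (Monday == 0)

def is_critical_time(ts: datetime) -> bool:
    """Assumed definition: weekend evenings from 7 p.m. onward."""
    return ts.weekday() in CRITICAL_DAYS and ts.hour >= 19

def rider_factors(orders: list[dict]) -> dict[str, float]:
    """Summarize a rider's (non-empty) order history into rating inputs."""
    completed = [o for o in orders if not o["cancelled_after_start"]]
    return {
        "critical_availability": sum(is_critical_time(o["accepted_at"]) for o in orders) / len(orders),
        "completion_rate": len(completed) / len(orders),
        "avg_delivery_minutes": sum(o["delivery_minutes"] for o in completed) / max(len(completed), 1),
    }
```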

Riders with higher ratings get access to more, and more lucrative, jobs. But as with the Foodinho case, Garante said its investigation uncovered multiple transparency and fairness issues, among others, surrounding how Deliveroo uses algorithms to distribute work.

For example, Article 5 of the GDPR, “Principles relating to processing of personal data,” states that personal data must be “processed lawfully, fairly and in a transparent manner in relation to the data subject”.

“As part of this, the controller should be able to demonstrate that its algorithm is not discriminatory,” Armstrong said. “Deliveroo said it had changed the platform since the inspection, but Garante stressed that Deliveroo had a responsibility to ‘regularly verify the correctness of the algorithm’s results in order to minimise the risk of distorted or discriminatory effects’.”

Best Practices for Algorithmic Management of Workers

Armstrong said the two Italian cases offer lessons for any organization that uses “algorithmic management of employees.” First, he recommends that all such organizations conduct a data protection impact assessment, thoroughly test their algorithms for any signs of bias, and inform employees of how the company is using algorithms to make important decisions about them, explaining how the algorithms work in clear, easy-to-understand terms.

Understanding current norms for AI usage can also help. “Wider consultation can be undertaken,” Armstrong said, and “ethics committees and/or employee focus groups can be useful sounding boards to gauge whether such measures may be deemed unduly invasive of privacy.”

Armstrong added that many applications of “artificial intelligence” fall short of these requirements, so more privacy inquiries from regulators into workplace algorithms can be expected.

He said: “Most of what we see is fairly simple programming: a formula or algorithm that calculates something roughly like, ‘A person who scores 128 on this test is more likely to stay and be a good employee.’

“The problem here is that it’s often bias or intuition that gets dressed up as science and then programmed into the algorithm. Often this kind of thing doesn’t make sense, and it’s discriminatory. For example, if my algorithm excludes everyone who declines to work on Saturdays or Sundays, does that exclude some people because of their religious beliefs?”
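A simple disparate-impact check makes that example concrete: compare the rate at which a weekend-availability rule selects riders from different groups. The toy data, the group labels, and the 0.8 (“four-fifths”) threshold, a rule of thumb borrowed from US employment-selection practice, are illustrative assumptions.

```python
# Toy disparate-impact check for the weekend-availability rule in the
# example above. The data, group labels, and the 0.8 ("four-fifths")
# threshold are illustrative assumptions, not a regulatory standard here.

def selection_rate(riders: list[dict], group: str) -> float:
    members = [r for r in riders if r["group"] == group]
    return sum(r["works_weekends"] for r in members) / len(members)

def disparate_impact(riders: list[dict], group_a: str, group_b: str) -> float:
    """Ratio of selection rates; values below ~0.8 warrant investigation."""
    return selection_rate(riders, group_a) / selection_rate(riders, group_b)

riders = [
    {"group": "observes_sabbath", "works_weekends": False},
    {"group": "observes_sabbath", "works_weekends": False},
    {"group": "observes_sabbath", "works_weekends": True},
    {"group": "other", "works_weekends": True},
    {"group": "other", "works_weekends": True},
    {"group": "other", "works_weekends": False},
]
print(disparate_impact(riders, "observes_sabbath", "other"))  # 0.5, well below 0.8
```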
