Walmart's unintentional, intentional discrimination against African Americans

Since no-one else has done so, let me be the first to say: "Lies, damned lies, and statistics"

Inherent bias in algorithms is an increasingly important topic, see e.g., https://www.theatlantic.com/technol...ng-bias-of-facial-recognition-systems/476991/

It is true that learning algorithms can easily pick up conscious and unconscious bias from their training set. But in this case there doesn't have to be a fancy algorithm involved. I think it is plausible that the analysis here is a simple four-liner that would hold up to any scrutiny. This is impossible to verify without access to the data, though, and the policy could instead be racially motivated and not based on facts at all. In any case, I consider the assumption that their analysis is correct sound enough to base a discussion on.
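For illustration, a minimal sketch of what such a "four-liner" could look like. Everything here is assumed for the example: the file name, and the columns product_id and loss_value are invented, not anything Walmart has published.

```python
import pandas as pd

# Hypothetical inventory-shrinkage records: one row per missing item,
# with the retail value of the loss. All names here are invented.
shrinkage = pd.read_csv("shrinkage.csv")  # columns: product_id, loss_value

# Total theft loss per product, largest first; lock up the top 20.
losses = shrinkage.groupby("product_id")["loss_value"].sum()
print(losses.sort_values(ascending=False).head(20))
```

Note that nothing in a ranking like this ever looks at who the customers are; it only ranks products by recorded loss.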


Well, if it's a product that is specifically designed for black people (for example, a shampoo advertised to work well with Afro-textured hair), then the implicit message there is: "Hey, black people around here steal a lot". While it might be statistically true that black people steal more than white people in this area, the effect on how people see black people might be much stronger than the actual difference; it might cause black people as a group to be seen much more negatively (or simply reinforce negative stereotypes that are already there).

It is a real social problem that such messages are self-reinforcing. A hiring manager might take this message from the display and not hire black applicants. As a result, they are locked into poverty with a higher likelihood of stealing, which in turn leads to a continuation of these store policies. The cycle is hard to break, especially when a store can correctly point out that it would lose money by abandoning the policy.

I would say that the store policy is somewhat reasonable, but they cannot be too obvious about it. In the extreme case you would have an aisle labeled "beauty products for blacks" entirely locked; in my opinion that would send messages that are harmful to society far too loudly and clearly. But I don't think it is necessary to lock up all the beauty products, either. Surely there are some other often-stolen beauty products that could be put in there, so that the display doesn't send an obvious message. After all, very few people who see a locked display will run a statistical analysis of whether products marketed to people of African heritage are overrepresented, as long as you cannot tell at first glance.
 
Maybe it's white people stealing all this stuff to re-sell on the black market? Just because a product is targeted at African Americans doesn't mean that's who's going to be stealing it.

From Walmart's POV it doesn't matter who steals it. It gets stolen, so it gets locked up. 100% profit-oriented reasoning. Some stores in the U.S. have started locking up Tide Pods, from what I understand, because apparently they're popular with thieves for some reason.
 
I hope that's sarcasm. They need darker shades of cosmetics, not to mention hair products designed for curls.
 
Since no-one else has done so, let me be the first to say: "Lies, damned lies, and statistics"

Inherent bias in algorithms is an increasingly important topic, see e.g., https://www.theatlantic.com/technol...ng-bias-of-facial-recognition-systems/476991/

This is a pretty significant leap from locking specific items you can confirm are stolen, however. The evaluation metric for "did somebody take X without paying for it" is a lot easier to pin down than machines recognizing faces.

Unless you're suggesting some kind of massive RFID screw-up or something, I'm not seeing how these are comparable scenarios.
 
To make matters even worse, Allred said store employees told Grundy she would need to be “escorted” to the cash register to pay for the products, Newsweek reported.

I've been 'escorted' as well. In the past that would never happen unless you were buying something in excess of $100 (electronics). Now it's done at my store for a $19 TracFone. I guess the 'escorting' started after Walmart realized there is no reason to lock an item up if the thief can just pocket it between being handed the item and making it to the register (on the other side of the store).
 
I'm betting that the very existence of beauty products meant for African Americans is somehow racist, never mind how they are sold.
I will take that bet and collect my money.
 
"At the same time, we have looked at other statistics than our shoplifting statistics and found to our surprise that African-American communities tend to be more impoverished. Since we feel that may incentivize shoplifting, we have decided to address the root cause of the problem by offering our workers a living wage and health insurance, and making special efforts to recruit African-American workers"

said no Walmart memo ever.
 
and making special efforts to recruit African-American workers"

Never mind that in many small communities, Walmart is the only place in town that has hired an African-American.
One of my first managers was a black man. At the nearest store to me an African-American woman is the assistant manager on the day shift.
 
Never mind that in many small communities, Walmart is the only place in town that has hired an African-American.
One of my first managers was a black man. At the nearest store to me an African-American woman is the assistant manager on the day shift.
That and fast food joints.
 
"At the same time, we have looked at other statistics than our shoplifting statistics and found to our surprise that African-American communities tend to be more impoverished. Since we feel that may incentivize shoplifting, we have decided to address the root cause of the problem by offering our workers a living wage and health insurance, and making special efforts to recruit African-American workers"

said no Walmart memo ever.

The moment that becomes their least expensive option in aggregate, you'll see that memo. I suspect you won't see it a moment before that point.
 
It is true that learning algorithms can easily pick up conscious and unconscious bias from their training set. But in this case there doesn't have to be a fancy algorithm involved. I think it is plausible that the analysis here is a simple four-liner that would hold up to any scrutiny. This is impossible to verify without access to the data, though, and the policy could instead be racially motivated and not based on facts at all. In any case, I consider the assumption that their analysis is correct sound enough to base a discussion on.

It is a real social problem that such messages are self-reinforcing. A hiring manager might take this message from the display and not hire black applicants. As a result, they are locked into poverty with a higher likelihood of stealing, which in turn leads to a continuation of these store policies. The cycle is hard to break, especially when a store can correctly point out that it would lose money by abandoning the policy.

I would say that the store policy is somewhat reasonable, but they cannot be too obvious about it. In the extreme case you would have an aisle labeled "beauty products for blacks" entirely locked; in my opinion that would send messages that are harmful to society far too loudly and clearly. But I don't think it is necessary to lock up all the beauty products, either. Surely there are some other often-stolen beauty products that could be put in there, so that the display doesn't send an obvious message. After all, very few people who see a locked display will run a statistical analysis of whether products marketed to people of African heritage are overrepresented, as long as you cannot tell at first glance.
This is a pretty significant leap from locking specific items you can confirm are stolen, however. The evaluation metric for "did somebody take X without paying for it" is a lot easier to pin down than machines recognizing faces.

Unless you're suggesting some kind of massive RFID screw-up or something, I'm not seeing how these are comparable scenarios.

While I don't particularly care about hair hygiene, I worry about the Shoot First and Ask Questions Later, Wild West approach that is used for commercial algorithms and data science. I would worry about this especially if this substantial business decision was indeed made based on 4 lines of code.

Firstly, you have to trust that the entire process that collects the data is perfect (it is not), or at least unbiased. The latter assumption might work well in the physical sciences, but it is a lot more problematic in the human world. Where we live, what we eat, the name we're called, where we work, when we were born, where we went to school: correlations everywhere.

Secondly, there is a serious risk of feedback loops in these approaches. If a group was historically discriminated against, this can cause them to show up poorly in the data, and then the algorithms will discriminate against them some more.

Given the increasing importance of algorithms, scrutiny is necessary in their application. A "first, do no harm" principle would be useful.
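To make the feedback-loop worry concrete, here is a deliberately crude toy simulation (all numbers invented, not a model of any real store): two groups with identical true theft rates, where enforcement is pointed wherever the past record looks worst, and only watched theft gets recorded.

```python
import random

random.seed(0)

# Toy model only: every number here is invented.
true_rate = {"A": 0.05, "B": 0.05}  # identical true shoplifting rates
recorded = {"A": 10, "B": 12}       # group B starts with a slightly biased record

for year in range(10):
    # Caricature of data-driven enforcement: watch wherever the record
    # says the problem is, and record only the theft you watch for.
    watched = max(recorded, key=recorded.get)
    caught = sum(random.random() < true_rate[watched] for _ in range(1000))
    recorded[watched] += caught
    print(year, recorded)

# B's record grows every year while A's never changes, so the data ends
# up "proving" a group difference that was never there to begin with.
```

The point of the caricature is only that a record shaped by where you looked cannot, by itself, tell you where to look.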
 
While I don't particularly care about hair hygiene, I worry about the Shoot First and Ask Questions Later, Wild West approach that is used for commercial algorithms and data science. I would worry about this especially if this substantial business decision was indeed made based on 4 lines of code.

Firstly, you have to trust that the entire process that collects the data is perfect (it is not), or at least unbiased. The latter assumption might work well in the physical sciences, but it is a lot more problematic in the human world. Where we live, what we eat, the name we're called, where we work, when we were born, where we went to school: correlations everywhere.

Secondly, there is a serious risk of feedback loops in these approaches. If a group was historically discriminated against, this can cause them to show up poorly in the data, and then the algorithms will discriminate against them some more.

Given the increasing importance of algorithms, scrutiny is necessary in their application. A "first, do no harm" principle would be useful.

If you can demonstrate some reasonable way that adding up what gets stolen is biased, maybe there's a case here. Otherwise, it's not comparable to your facial recognition example, which necessarily can't ignore race (or many other things).

You could easily identify one product type being stolen more frequently than others while the specific product is de-identified (ID # whatever gets stolen more than other ID #s). That this could create a biased feedback loop on its own is absurd. If you have bias, it's somewhere else in the chain if the process is like this.
 
I don't get why they can't just look at the data, see which products get stolen, and lock those exact products up, and nothing else.

Isn't this what they're doing? If so, it seems perfect enough to me.

Yeah, that's about the 4 lines of code that I was thinking of. To do it more perfectly, you should also consider the time it takes for an employee to unlock a product, multiply that by their wage and the number of times the product gets sold, then subtract that from the loss by theft and rank on the result. This way you avoid locking up a very popular product that would need two employees standing next to it, constantly unlocking it. At the moment I cannot think of much analysis beyond that which could shed more light on the problem.
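A minimal sketch of that refinement; every file name, column name, and number here is made up for illustration:

```python
import pandas as pd

# Hypothetical per-product aggregates; all names are invented.
df = pd.read_csv("products.csv")  # columns: product_id, theft_loss, units_sold

UNLOCK_SECONDS = 30   # assumed time to fetch an employee and open the case
HOURLY_WAGE = 12.00   # assumed wage in dollars

# Labor cost of unlocking the case once per legitimate sale.
df["labor_cost"] = df["units_sold"] * UNLOCK_SECONDS / 3600 * HOURLY_WAGE

# Net benefit of locking a product: theft avoided minus labor created
# (optimistically assuming the lock stops all theft of that product).
df["net_benefit"] = df["theft_loss"] - df["labor_cost"]

# Lock only the products where locking actually pays off, biggest win first.
to_lock = df[df["net_benefit"] > 0].sort_values("net_benefit", ascending=False)
print(to_lock[["product_id", "net_benefit"]])
```

The "stops all theft" assumption is the optimistic one; discounting it by some recovery rate would shrink the list further.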


While I don't particularly care about hair hygiene, I worry about the Shoot First and Ask Questions Later, Wild West approach that is used for commercial algorithms and data science. I would worry about this especially if this substantial business decision was indeed made based on 4 lines of code.

That approach is still better than the Shoot and Never Ask Questions approach that is usually used for corporate decision making. I would be happy if business decisions were made on at least 4 lines of code and not mere gut feeling. Simplicity has a value of its own, and in this case I feel that going beyond a few lines of code would only increase the danger of making mistakes and reaching erroneous decisions.

Firstly, you have to trust that the entire process that collects the data is perfect (it is not), or at least unbiased. The latter assumption might work well in the physical sciences, but it is a lot more problematic in the human world. Where we live, what we eat, the name we're called, where we work, when we were born, where we went to school: correlations everywhere.

I suffer from bad data quality way too much to say that any data is perfect, and there are many data sets out there that are totally useless for analysis, but in this specific case I suspect that the data quality is more than sufficient. Records of what gets sold and what gets stolen are necessary to balance the books, and financial records like these tend to be the best data you can get your hands on, because they are the only kind of data where you can face criminal penalties for bad quality.

Sure, there are all kinds of correlations. But you have to ask yourself whether they really matter for the case you are looking at. After all, how would the name of the thief affect the record of whether the goods were stolen or not? Even if the thief wasn't apprehended, because he was friends with the manager or something, the goods would still be missing from the inventory.

Secondly, there is a serious risk of feedback loops in these approaches. If a group was historically discriminated against, this can cause them to show up poorly in the data, and then the algorithms will discriminate against them some more.

Given the increasing importance of algorithms, scrutiny is necessary in their application. A "first, do no harm" principle would be useful.

These are feedback loops on a societal scale, and who is supposed to break them? Walmart on its own couldn't even if it wanted to. "First, do no harm" would be laudable, but the principle that data scientists are given by management is "First, improve the bottom line".
 
Let's ignore the statistics for just a moment (I know, I know. Bear with me.)

What happens when a black child is following their mom around the store and goes to the shampoo aisle, then sees all the black products locked up and the white ones out free? Negative internalization.

A simple fix has already been mentioned, though: just lock up all the cosmetics in the section.
 
That approach is still better than the Shoot and Never Ask Questions approach that is usually used for corporate decision making. I would be happy if business decisions were made on at least 4 lines of code and not mere gut feeling. Simplicity has a value of its own, and in this case I feel that going beyond a few lines of code would only increase the danger of making mistakes and reaching erroneous decisions.
If Walmart decided based on gut feeling to put only shampoo aimed at black people under lock and key, they would be rightly ridiculed and condemned (and would possibly face legal action, depending on your jurisdiction). But add 4 lines of code and 10 minutes of work by an intern, and everyone is okay.

These are feedback loops on a societal scale, and who is supposed to break them? Walmart on its own couldn't even if it wanted to.
Gotta feel some sympathy for poor little Walmart.
We are all supposed to break the feedback loop.
 