The Effect of COVID on Small Business

With case numbers rising across numerous states in the US, many have started to question what the long-term effects of COVID will be on small businesses. Here in Illinois, small businesses have just now started to partially open. If we have a second spike, as many experts are predicting and the data is suggesting, how will that affect these businesses that have already been hit?

Small Business Data

Reliable data on small business is hard to come by. I don’t know of any groups collecting and analyzing data in real time. The data that is available is frequently not as useful as it appears. For example, some sources classify as ‘small’ any business with fewer than 500 employees. That’s obviously far from small.

There is plenty of anecdotal evidence about what businesses are going through. As a manager of a firm that primarily works with small businesses, I have many stories from clients about their operational health. But those anecdotes are specific to each company’s location and line of work. Data that covers broad swaths of American small business is tricky to find.

However, the US Chamber of Commerce recently published a poll of small businesses that appears to have good data backing it up. I have no idea what the US Chamber of Commerce is. ‘Chamber of Commerce’ is usually a title for a local community of businesses; it’s not a common title for a national organization. Given that, I believe the US Chamber of Commerce is a private company or an association of companies. In any case, you can see their original report here.

Their data was interesting. They reported that 41% of small businesses are fully open and 38% are partially open. About 19% of small businesses are temporarily closed, and 1% are permanently closed. Who is the 1% that took the survey despite their business being closed? I suspect that number understates reality; owners who have shut down for good are the least likely to answer a survey.

The report stated that 43% of business owners are ‘very concerned’ about the impact of COVID, which was down from 53% in May. Also, 53% of small businesses reported good overall health. In fact, 24% of business owners went so far as to say that the US economy was ‘good.’

This is touted by the article as a good sign, but the flip side of that stat is that 76% of small businesses did not describe the US economy as ‘good.’ And, by the way, economists would agree. The stock market may be doing well, but the market is not the only indication of a stable, healthy economy.

Another fascinating stat: 50% of business owners expect next year’s revenues to increase and 19% expect them to decrease. These numbers are slightly more positive than in May, but consider what is being said. Fifty percent think revenue in 2021 will increase. Notice, we aren’t talking about 2020. Given how bad 2020 has been for small business, how is it that nearly 100% don’t expect next year to be better? I find that stat concerning.

The fact is that these stats are not positive. Most business owners are legitimately concerned about this year and next, and that is without a second spike of cases occurring this fall (or right now).

Businesses Were Struggling Before COVID

NBC recently reported on a JPMorgan Chase Institute study of 1.4 million small urban businesses, which found that nearly 29% of those businesses were not profitable. That was as of September 2019. The study also found that nearly half of the businesses surveyed had no more than two weeks of cash on hand (article).

The fact is that many small businesses were failing before COVID hit; the pandemic simply sped up the process.

Now, even in the wake of the stay-at-home orders being lifted, many businesses don’t believe they can afford to carry on with normal operations. According to a LendingTree survey of 1,260 small businesses, approximately half fear that they can’t afford to reopen.

I have been saying this for weeks. The economy is not a light switch; flipping it on does not guarantee demand for services. Businesses cannot operate at 50% or 25% capacity; they don’t have the margins to survive such a downturn. Even with the stay-at-home orders lifted, these businesses may not make it.

Business May Shift from Small Businesses to Larger Businesses

An article by The Washington Post considered what they called ‘micro-firms,’ or businesses with fewer than 10 employees (article). They interviewed Mark Zandi, chief economist for Moody’s Analytics, and he said that he wouldn’t be surprised if over 1 million micro-firms ultimately fail due to COVID. Given that there are approximately 30 million small businesses in the US, that would be about 3% of all small businesses. That’s about 30,000 businesses here in Illinois.

Something else to consider: those small businesses frequently transact with other small businesses. That means that when some go under, there may be ripples throughout the small business community.

Large businesses will have a better chance at survival because they are more likely to have cash reserves and be able to borrow money. In fact, the Federal Reserve has made borrowing money easier than ever. Therefore, we may see a shift in our economy away from small businesses and towards larger businesses. I don’t know what that means for customers, but as a small business owner, I believe the movement away from small, local business would be a tremendous loss.

What Happens When We See a Second Spike in COVID?

I don’t see how the numbers could get better if we see a second spike. Businesses that are close to shuttering would almost certainly close. Continued unemployment would mean less demand for products and services offered by these small businesses, which could lead to even more closures.

Business will come back; that’s the beauty of the market. But this current crop of small businesses may see extremely high attrition rates. Time will tell how these closures will ripple out. Will we see impacts on commercial landlords? What about tax revenues?

I expect that this gets much worse before it gets better.

The Good, the Bad, and the Ugly of Geofencing

Over the last few years, the concept of geofencing has become a mainstay in marketing circles, and more recently it has become useful to law enforcement. It’s powerful technology that can be used for good or evil, depending on which side of law enforcement you’re on. I believe that location data is intruding on our privacy in ways that we don’t entirely understand, and the geofencing debate brings many of the problems into the light.

How does geofencing work?

Essentially, companies or law enforcement draw an imaginary boundary on a digital map, and all cell phone location data within that boundary is collected. Then the company can use the location data to sell users something, or law enforcement can use the data to question witnesses or make arrests.
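That boundary check is simple in principle. Here is a minimal sketch in Python of a circular geofence; the coordinates, fence radius, and function names are my own illustrative choices, not any vendor’s actual API:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(point, center, radius_m):
    """True if a (lat, lon) point falls within the circular fence."""
    return haversine_m(*point, *center) <= radius_m

# Hypothetical fence: roughly one city block (~100 m) around a storefront.
store = (41.8781, -87.6298)  # downtown Chicago coordinates
print(inside_geofence((41.8785, -87.6301), store, 100))  # nearby ping -> True
print(inside_geofence((41.8900, -87.6298), store, 100))  # distant ping -> False
```

Real platforms typically support arbitrary polygons and process data at enormous scale, but the core test is the same: is this device’s reported position inside the drawn boundary?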

The concept of geofencing was originally used to give businesses a way to target potential customers in a hyper-local way. For example, a store could offer everyone within one block a coupon for discounted merchandise. Another example was a car dealership offering competitive financing to people leaving a rival dealership (article).

I have personally seen this work in legal marketing. I had a company pitch me on the idea of showing my ads for personal injury work to people visiting the local emergency room. That was a few years ago so that gives you an idea of how long this technology has been around.

The technology works because of two things: (1) nearly everyone has a smartphone and (2) virtually all of us are unwittingly giving our location data to a company (probably Google, but there are others). Given that 95% of American adults have a smartphone, this data collection is extensive, and its applications are numerous.

In comes law enforcement. Of course, this technology is extremely useful to police. They do the same thing as companies – they create an area and track all cell phone data within the area. That data is provided anonymously to detectives who then create a list of suspects based on cell phone movements. This has been used to solve a variety of cases including home invasions and murders (NY Times article).

On its face, this all sounds good, but let’s break it down and discuss where this technology potentially goes awry.

The Good of Geofencing

Geofencing can help law enforcement solve crimes that could not normally be solved. It helps to locate suspects and witnesses. There have been a number of high-profile cases solved using location data collected by Google. For example, the NY Times article I cited earlier is about solving a murder that, without geofencing, would have likely gone unsolved.

Catching bad guys who wouldn’t have been caught using normal means is great. It helps make the world safer and makes it harder to get away with crime.

The Bad of Geofencing

Many of the stories where geofencing was used to solve a crime involve the questioning and arrest of innocent people. That occurs because geofencing warrants cast a huge net. These warrants can cover areas spanning multiple blocks and time frames stretching into days or weeks. Potentially hundreds or thousands of people can be swept up in these warrants.
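To make the scale of that net concrete, here is a toy sketch of how such a sweep works. The device IDs, coordinates, and warrant scope below are entirely hypothetical:

```python
from datetime import datetime

# Hypothetical anonymized pings: (device_id, lat, lon, timestamp)
pings = [
    ("device-17", 41.8782, -87.6300, datetime(2020, 6, 1, 22, 15)),
    ("device-17", 41.8784, -87.6297, datetime(2020, 6, 1, 22, 40)),
    ("device-52", 41.8779, -87.6302, datetime(2020, 6, 3, 9, 5)),
    ("device-88", 41.9200, -87.6500, datetime(2020, 6, 1, 22, 20)),
]

# A warrant's scope: a bounding box plus a multi-day window.
LAT = (41.8770, 41.8790)
LON = (-87.6310, -87.6290)
WINDOW = (datetime(2020, 6, 1), datetime(2020, 6, 4))

def in_scope(lat, lon, ts):
    """A ping is in scope if it falls inside both the box and the window."""
    return (LAT[0] <= lat <= LAT[1]
            and LON[0] <= lon <= LON[1]
            and WINDOW[0] <= ts <= WINDOW[1])

# Every distinct device with any ping in scope is swept into the warrant.
suspects = {d for d, lat, lon, ts in pings if in_scope(lat, lon, ts)}
print(sorted(suspects))  # ['device-17', 'device-52']
```

Notice that device-52 pinged two days after the crime window opened and still lands on the list; everyone inside both the box and the window becomes a person of interest, regardless of why they were there.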

Imagine for a second being arrested for a crime you did not commit simply because your cell phone was located near the commission of a crime. Suddenly, it’s on you to provide an alibi or to explain why you aren’t the criminal responsible for the crime. Assuming you have that kind of evidence, you could still end up imprisoned for days or weeks until everything gets sorted out.

You might be thinking that warrants of that scope probably come under tremendous scrutiny from judges. I suspect that you are wrong. When I was a prosecutor, officers would routinely come into court to ask the judge (any judge) to sign warrants. I don’t recall a judge ever refusing to sign a warrant. In fact, I don’t recall a judge ever really asking any questions about the warrants. Usually, the judge just signed them, maybe asking a couple of generic questions of the officer. Not exactly the oversight that we would hope for.

There is the added problem that judges may not understand what geofencing is. We have an aging judiciary that frequently struggles with basic technology, and these warrants are not presented alongside an expert who can explain how thousands of users’ data will be collected. In fact, and with no disrespect to law enforcement, officers are disincentivized from offering that kind of information, because a judge who had all of the information might refuse to sign such a warrant.

The fact is that this kind of data collection ropes many people into a criminal investigation who would not normally be suspects. That level of contact with law enforcement is not usually good for the average citizen.

The Ugly of Geofencing

This mass collection of data could mean real trouble for protestors and social activists. For example, police could collect data on the entire protestor-controlled region set up in Seattle. Then, using that data, they could determine who was most active in the protests and arrest those people. The mere fact that this technology exists may encourage people to avoid protests for fear of being targeted by law enforcement.

Look, I’m not a doomsayer. I know that what I just wrote is a stretch. However, the technology could absolutely be used to police otherwise lawful actions. It’s extremely important that we put limitations on police use of technology before it gets out of hand.

Conclusion

Geofencing is an incredibly powerful tool. It can be used by law enforcement to solve previously unsolved crimes. But if unchecked, the technology could easily be used to oppress freedoms and incarcerate people needlessly.

Bias in Artificial Intelligence

Artificial intelligence is quickly taking over many functions of our day-to-day lives. It can already schedule appointments, look things up on the internet, and compose texts and emails for us. But what happens when AI takes over more serious parts of our lives (though, texting is pretty important)?

That’s the challenge researchers and developers are facing. They want to develop more robust AI solutions, but they are running into a serious roadblock – bias.

Our artificial intelligence solutions are currently being trained by humans using data. The trouble is that the data (and sometimes the humans) doesn’t give the AI an unbiased place to start its work. As a friend of mine says, “It’s garbage in, garbage out.”

Examples of Bias in Artificial Intelligence

The first example is COMPAS, and it’s been well documented. COMPAS was a tool that scored criminal offenders on a scale of 1 to 10, estimating how likely the offender was to be re-arrested while awaiting trial. This is a common decision made during bond call in criminal cases. Judges have weighed factors to determine a defendant’s likelihood of reoffending for decades. COMPAS was designed to help them.

The trouble was that COMPAS gave higher risk scores to black offenders than white offenders. There is a wonderful breakdown of how the algorithm worked from MIT, here. It’s important to note that the program did not explicitly factor in race, but it had the effect of unfairly targeting blacks anyway. Without going into the math, the basic problem was that blacks were more likely to be arrested (due to current and previous racial discrimination), which meant that the program predicted a higher chance of re-arrest. The higher predicted chance of re-arrest translated into higher risk scores. In practice, those inflated scores consistently led to a higher percentage of black offenders being held in jail unnecessarily.
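The mechanism is easy to demonstrate. Below is a toy Python simulation, not the actual COMPAS model: two groups behave identically, but one is policed more heavily, and a ‘race-blind’ score built only on recorded prior arrests still rates the heavily policed group as riskier. All the numbers here are invented for illustration:

```python
import random

random.seed(0)

def simulate_priors(true_offenses, arrest_prob):
    # Each offense becomes a recorded arrest only if police observe it.
    return sum(random.random() < arrest_prob for _ in range(true_offenses))

# Both groups commit offenses at the same rate...
TRUE_OFFENSES = 4
# ...but group A is policed more heavily, so more offenses become arrests.
ARREST_PROB = {"A": 0.6, "B": 0.3}

def risk_score(prior_arrests):
    # A 'race-blind' score: more recorded priors -> higher 1-10 risk score.
    return min(10, 1 + prior_arrests)

avg = {}
for group, p in ARREST_PROB.items():
    scores = [risk_score(simulate_priors(TRUE_OFFENSES, p)) for _ in range(10_000)]
    avg[group] = sum(scores) / len(scores)

# Group A ends up with a markedly higher average score despite identical behavior.
print(avg)
```

Because the score never sees group membership, only arrest records, the bias rides in on the data itself.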

For a program created to combat the prejudice that exists in our court system, it failed.

Our second example is the Allegheny Family Screening Tool, which was created to help humans determine whether a child should be removed from their family because of abusive circumstances. The designers knew that the data was biased from the start. The data was likely to show that children from black or biracial homes were more likely to need intervention from the state.

The engineers couldn’t get around the faulty data. Data is the primary way that we train artificial intelligence, and the developers could not bypass this necessary training or fudge the numbers. Because they didn’t feel like they could combat the bias in the numbers, they opted to educate those with the most influence over the numbers going forward, explaining the faulty data and implicit bias to the users of the system – mostly judges (article, here).

This is a good example of how bias in the data can be challenging to overcome.

My last example is from current facial recognition software. Top facial recognition systems from IBM, Microsoft, and Megvii (a Chinese firm) can all correctly identify a person’s gender about 99% of the time, if that person is white. If the person is a dark-skinned woman, the error rate climbs to roughly 35% (article, here).

There is no doubt that facial recognition software has a long way to go. That’s why it is so disturbing to see it being used heavily by law enforcement. Perhaps we will also see its use in contact-tracing for COVID. I believe this technology is likely to start trampling on our privacy rights over the next few years.

Why does it matter?

Bias in artificial intelligence matters because the exact reason we want to use AI is to avoid the biases that naturally exist in all humans. Computers, at least in principle, offer a way to treat everyone fairly. We see how our courts, schools, and banks are biased on the basis of race and gender. AI could provide us with a way past these prejudices. Then, as people who have traditionally been held down are lifted, we may see some of these implicit biases melt away.

But we cannot train an AI to avoid bias with biased data. That is the challenge for developers today.

Garbage in, garbage out.