The Good, the Bad, and the Ugly of Geofencing

Over the last few years, the concept of geofencing has become a mainstay in marketing circles, and more recently it has become useful to law enforcement. It’s powerful technology that can be used for good or evil, depending on which side of law enforcement you’re on. I believe that the collection of location data is intruding on our privacy in ways we don’t entirely understand, and the geofencing debate brings many of the problems into the light.

How does geofencing work?

Essentially, a company or law enforcement agency draws an imaginary boundary on a digital map and collects the location data of every cell phone inside it. The company can then use that data to sell users something, or law enforcement can use it to question witnesses or make arrests.
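To make that concrete, here is a minimal sketch of the core check in Python. It assumes the simplest possible fence – a circle of a given radius around a point – and the coordinates and radius are made up for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3956 * asin(sqrt(a))  # 3956 = Earth's radius in miles

def inside_geofence(phone_lat, phone_lon, fence_lat, fence_lon, radius_miles):
    """True if a reported phone location falls inside the circular fence."""
    return haversine_miles(phone_lat, phone_lon, fence_lat, fence_lon) <= radius_miles

# Hypothetical fence: a quarter-mile circle around a storefront.
STORE_LAT, STORE_LON = 41.8827, -87.6233
print(inside_geofence(41.8830, -87.6240, STORE_LAT, STORE_LON, radius_miles=0.25))  # True
```

Real systems use polygons rather than circles and process location pings in bulk, but the core question is the same: is this device inside the boundary?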

The concept of geofencing was originally used to give businesses a way to target potential customers in a hyper-local way. For example, a store could offer everyone within one block a coupon for discounted merchandise. Another example was a car dealership offering competitive financing to people leaving a rival dealer’s lot (article).

I have personally seen this work in legal marketing. I had a company pitch me on the idea of showing my ads for personal injury work to people visiting the local emergency room. That was a few years ago, which gives you an idea of how long this technology has been around.

The technology works because of two things: (1) everyone has a smartphone and (2) virtually all of us are unwittingly giving our location data to a company (probably Google, but there are others). Given that 95% of American adults have a smartphone, this data collection is extensive and its applications are numerous.

In comes law enforcement. Of course, this technology is extremely useful to police. They do the same thing as companies – they draw a boundary and track all cell phone data within it. That data is provided in anonymized form to detectives, who then create a list of suspects based on cell phone movements. This has been used to solve a variety of cases, including home invasions and murders (NY Times article).

On its face, this all sounds good, but let’s break it down and discuss where this technology potentially goes awry.

The Good of Geofencing

Geofencing can help law enforcement solve crimes that would otherwise go unsolved. It helps to locate suspects and witnesses. There have been a number of high-profile cases solved using location data collected by Google. For example, the NY Times article I cited earlier is about solving a murder that would likely have gone unsolved without geofencing.

Catching bad guys who wouldn’t have been caught using normal means is great. It helps make the world safer and makes it harder to get away with crime.

The Bad of Geofencing

Many of the stories where geofencing was used to solve a crime involve the questioning and arrest of innocent people. That occurs because geofencing warrants cast a huge net. These warrants can cover an area of several blocks and time frames stretching into days or weeks. Potentially hundreds or thousands of people can be swept up in a single warrant.

Imagine for a second being arrested for a crime you did not commit simply because your cell phone was located near the scene. Suddenly, it’s on you to provide an alibi or to explain why you aren’t the person responsible. Assuming you have that kind of evidence, you could still end up imprisoned for days or weeks until everything gets sorted out.

You might be thinking that warrants of that scope probably come under tremendous scrutiny from judges. I suspect that you are wrong. When I was a prosecutor, officers would routinely come into court to ask the judge (any judge) to sign warrants. I don’t recall a judge ever refusing to sign a warrant. In fact, I don’t recall a judge ever really asking any questions about the warrants. Usually, the judge just signed them, maybe asking a couple of generic questions of the officer. Not exactly the oversight that we would hope for.

There is the added problem that judges may not understand what geofencing is. We have an aging judiciary that frequently struggles with basic technology, and these warrants are not presented with an expert to explain how a single geofencing warrant sweeps up the data of thousands of users. In fact, and with no disrespect to law enforcement, officers are disincentivized from volunteering that kind of information, because a fully informed judge might refuse to sign the warrant.

The fact is that this kind of data collection ropes into a criminal investigation many people who would not normally be suspects. That level of contact with law enforcement is not usually good for the average citizen.

The Ugly of Geofencing

This mass collection of data could mean real trouble for protestors and social activists. For example, police could collect location data on everyone inside the protestor-controlled zone set up in Seattle. Then, using that data, they could determine who was most active in the protests and arrest those people. The mere fact that this technology exists may encourage people to avoid protests for fear of being targeted by law enforcement.

Look, I’m not a doomsayer. I know that what I just wrote is a stretch. However, the technology could absolutely be used to police otherwise lawful actions. It’s extremely important that we put limitations on police use of technology before it gets out of hand.

Conclusion

Geofencing is an incredibly powerful tool. It can be used by law enforcement to solve previously unsolvable crimes. But if left unchecked, the technology could easily be used to suppress freedoms and incarcerate people needlessly.

Bias in Artificial Intelligence

Artificial intelligence is quickly taking over many functions of our day-to-day lives. It can already schedule appointments, look things up on the internet, and compose texts and emails for us. But what happens when AI takes over more serious parts of our lives (though texting is pretty important)?

That’s the challenge researchers and developers are facing. They want to develop more robust AI solutions, but they are running into a serious roadblock – bias.

Our artificial intelligence solutions are currently trained by humans using data. The trouble is that the data (and sometimes the humans) doesn’t give the AI an unbiased place to start its work. As a friend of mine says, “It’s garbage in, garbage out.”

Examples of Bias in Artificial Intelligence

The first example is COMPAS, and it’s been well-documented. COMPAS was a tool that scored criminal offenders on a scale of 1 to 10, estimating how likely the offender was to be re-arrested while awaiting trial. This is a common decision made during bond call in criminal cases. For decades, judges have had to weigh factors to determine a defendant’s likelihood of reoffending. COMPAS was designed to help them.

The trouble was that COMPAS gave higher risk scores to Black offenders than white offenders. There is a wonderful breakdown of how the algorithm worked from MIT, here. It’s important to note that the program did not explicitly factor in race, but it had the effect of unfairly targeting Black offenders anyway. Without going into the math, the basic problem was that Black people were more likely to be arrested (due to current and previous racial discrimination), so the program predicted a higher chance of re-arrest, and that higher predicted chance translated into higher risk scores. The arrest data was a biased stand-in for actual behavior, and it consistently led to a higher percentage of Black offenders being held in jail unnecessarily.
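To see how race-blind inputs can still produce skewed scores, here is a toy simulation – my own illustrative numbers, not COMPAS’s actual model or data. Two groups reoffend at exactly the same true rate, but one is policed more heavily, so its reoffenses are more likely to show up as arrests, and arrests are the only signal the training data contains:

```python
import random

random.seed(42)

TRUE_REOFFENSE_RATE = 0.30            # identical for both groups
ARREST_PROB = {"A": 0.40, "B": 0.80}  # Group B is over-policed (illustrative)

def observed_rearrest_rate(group, n=100_000):
    """The re-arrest rate a model trained on arrest records would see."""
    arrests = 0
    for _ in range(n):
        reoffends = random.random() < TRUE_REOFFENSE_RATE
        if reoffends and random.random() < ARREST_PROB[group]:
            arrests += 1
    return arrests / n

for group in ("A", "B"):
    print(group, round(observed_rearrest_rate(group), 3))
# A ~0.12, B ~0.24: same underlying behavior, but a model trained on
# arrest data "learns" that Group B is twice as risky -- without ever
# seeing race as an input.
```

The model never touches race directly; the bias rides in on the data.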

For a program created to combat the prejudice that exists in our court system, it failed.

Our second example is the Allegheny Family Screening Tool, which was created to help humans determine whether a child should be removed from their family because of abusive circumstances. The designers knew the data was biased from the start: it was likely to show that children from Black or biracial homes were more likely to need intervention from the state.

The engineers couldn’t get around the faulty data. Data is the primary way we train artificial intelligence, and the developers could neither skip that training nor fudge the numbers. Since they felt they couldn’t remove the bias from the data itself, they opted to educate those with the most influence over the numbers going forward, explaining the flawed data and implicit bias to the users of the system – mostly judges (article, here).

This is a good example of how bias in the data can be challenging to overcome.

My last example is from current facial recognition software. The top three facial recognition systems, from IBM, Microsoft, and Megvii (a Chinese company), can all correctly identify a person’s gender more than 99% of the time – if that person is a white man. For dark-skinned women, the error rate climbs to roughly 35% (article, here).

There is no doubt that facial recognition software has a long way to go. That’s why it is so disturbing to see it being used heavily by law enforcement. Perhaps we will also see its use in contact-tracing for COVID. I believe this technology is likely to start trampling on our privacy rights over the next few years.

Why does it matter?

Bias in artificial intelligence matters because the exact reason we want to use AI is to avoid the biases that naturally exist in all humans. Computers represent the only true way to treat everyone fairly. We see how our courts, schools, and banks are biased on the basis of race and gender. AI could provide us with a way past these prejudices. Then, as people who have traditionally been held back are lifted up, we may see some of these implicit biases melt away.

But we cannot train an AI to avoid bias with biased data. That is the challenge for developers today.

Garbage in, garbage out.

Why Do People Invest in Unprofitable Businesses?

I have long been interested in why people invest in unprofitable companies. As a small business owner, I see profit as key (along with cash flow). Profit is how your company grows and how you feed your family. But for venture-backed companies like SpaceX, profit is not the point. Growth is, as we will see.

I chose to write about this not because it relates to the practice of law, but rather because I am fascinated by future technology. Most companies developing future tech fall into the category of profitless behemoths with seemingly limitless valuations.

SpaceX

I’ll start with SpaceX.

They recently grabbed attention by becoming the first privately held company to launch people into orbit. That flight was one of a reported 15 commercial launches scheduled for this year. Each one earns SpaceX approximately $80 million (Forbes article), which means the company should earn about $1.2 billion in launch revenue in 2020. SpaceX is currently valued at $36 billion, roughly 30 times its revenue. That is a steep valuation, even for a tech company.
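For reference, here is the back-of-the-envelope arithmetic behind those figures:

```python
# Reported figures from the paragraph above.
launches_2020 = 15
revenue_per_launch = 80e6   # ~$80M per launch (Forbes)
valuation = 36e9            # ~$36B reported valuation

launch_revenue = launches_2020 * revenue_per_launch
print(f"Launch revenue: ${launch_revenue / 1e9:.1f}B")           # $1.2B
print(f"Valuation multiple: {valuation / launch_revenue:.0f}x")  # 30x
```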

So how does it work? Where does the value come from?

Well, it comes from the concept of a moonshot (pun intended). Space travel is slated to be a $1 trillion industry by 2040 (per marketwatch.com). If SpaceX is a controlling player in the industry, it could be worth far more than $36 billion.

Most venture capitalists look to get their money out of a business within 7-10 years. This has to do with how funds work and what investors expect. The SpaceX gamble will take at least 20 years to play out, making it an uncommon investment. But SpaceX never claimed to be a common investment, even by VC standards. Venture capitalists are used to taking on risk, but a company like SpaceX is a huge gamble – a moonshot. If it pays off, the VC nets a huge return on its investment, much more than it could hope to make on its normal (albeit risky) portfolio.

Unicorns

We travel from sci-fi to fantasy – let’s talk about unicorns.

The term unicorn refers to a privately held startup valued at over $1 billion. It was coined by VC Aileen Lee in 2013. The idea was that these companies were so rare that the proper metaphor was a mythical creature (Wikipedia).

In 2019, 142 companies became newly minted unicorns (Crunchbase). Suddenly, they don’t seem so rare. However, VC investors bet on many, many more companies that turn out to be total flops. The idea is that they take losses on most of their investments and then make it all back (and then some) on the one unicorn that returns 100x their investment.
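Here is that portfolio logic as a quick sketch, using made-up numbers of my own (real funds vary widely):

```python
# Hypothetical 20-company fund: $1M checks, 18 total losses,
# one modest exit, and one unicorn that returns 100x.
check_size = 1.0  # $1M per company
multiples = [0.0] * 18 + [3.0] + [100.0]

invested = check_size * len(multiples)
returned = check_size * sum(multiples)

print(f"Invested: ${invested:.0f}M, returned: ${returned:.0f}M")
print(f"Fund multiple: {returned / invested:.2f}x")
# 18 of 20 bets go to zero, yet the single 100x unicorn
# carries the whole fund to a ~5x return.
```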

There are a bunch of great examples of unicorns: Airbnb, Epic Games (makers of Fortnite), DoorDash, Udemy, Reddit, 23andMe, and Squarespace.

While most investments are either a bust or a unicorn (I’m sure there are some in the middle), a moonshot like SpaceX is something even bigger.

Sure, you must wait 20-30 years for your investment to pay off, but the return could be staggering. Consider SoftBank’s CEO, Masayoshi Son, and his $20 million bet on Alibaba Group in 1999, which turned into $60 billion at the time of Alibaba’s IPO in 2014. That’s a 3,000x return in 15 years – roughly a 70% compounded annual return.

Those are the kinds of returns investors are looking for.

WeWork

But there is a darker side to these investments. Consider WeWork.

WeWork was a company that specialized in co-working spaces in major cities like New York and London. It portrayed itself as a tech startup but was more like a real estate holding company and landlord.

WeWork was founded in 2010 by Adam Neumann, Rebekah Neumann, and Miguel McKelvey. They raised $14.2 billion from investors, and in just nine years the company was valued at $47 billion and poised for an IPO of epic proportions.

But then WeWork imploded over the course of a few months. As the company prepared for its IPO, its finances came under heavy scrutiny. It turned out that WeWork was burning through $230 million a month with no profit in sight. There were also some serious claims about Adam Neumann (great article on the topic, here), including claims that he took hundreds of millions of dollars out of the company. Really, the accusations are astounding. Neumann ended up receiving $1.7 billion for separating from the company in a shocking display of Wall Street shenanigans.

Now WeWork is valued at just $2.9 billion. A spectacular failure. The venture capitalists that poured billions of dollars into WeWork lost the vast majority of their investment. This was a unicorn gone wrong.

Conclusion: Why moonshots?

The simple answer is you can make a ridiculous amount of money. The investments in these companies make sense because there is an outrageous upside. Sure, it could be worth nothing, but it could be worth billions or trillions of dollars.

The other thing to understand is that these VC firms are invested in many companies, some riskier than others; that is how their business works. They take a lot of risk on each investment, but the hope is that one or two investments will return the fund and then some. Their investors understand this, and, frankly, there is a lot of money to be made.