20141031

Can Authorities Cut Off Utilities And Pose As Repairmen To Search A Home?

Nina Totenberg

Some legal cases do more than raise eyebrows — they push the legal envelope to change the law. Such is a federal case in Las Vegas now working its way through the courts. The question is whether federal agents can disrupt service to a house and then, masquerading as helpful technicians, gain entry to covertly search the premises in hopes of finding evidence that might later justify a search warrant.

The defendants in this case are not your everyday Americans. They are, in fact, Chinese gamblers who were staying in Las Vegas at Caesar's Palace earlier this year.

Caesar's, and other gambling casinos, thrive on these high-rollers and provide them with free villas, butlers and other services. But in this case, at least one of the high-rollers had been tossed out of Macau for running an illegal sports betting operation. That fact made the Nevada Gaming Commission and the FBI suspicious that the high-rollers were doing the same thing here.

Suspicions, however, aren't enough for a search warrant. So, according to court papers filed by defense lawyers late Tuesday, the FBI came up with a plan: Working with a computer contractor for Caesar's Palace, the agents first tried to get into the villas by delivering laptops and asking to come in to make sure the connections worked.

The butler, however, wouldn't let them in. Tape from the secret cameras worn by the agents clearly shows the butler blocking their way.

"I just want to make sure they can connect before I leave. Can we just make sure they can connect, OK?" the agent asks.

"The thing is, you can't go in there right now," replies the butler.

When that ploy failed, the agents came up with "another trick," according to defense lawyer Tom Goldstein: "We'll dress up as technicians, we'll come inside, we'll claim to be fixing the Internet connection — even though we can't, 'cause we broke it from outside — and then we'll just look around and see what we see."

Once inside, the agents wandered around the premises as they covertly photographed the rooms, entering the previously off-limits media room. Inside, they saw a group of men watching the World Cup soccer game and looking at betting odds on their laptops — perfectly legal in Las Vegas.

What else the agents saw is not entirely clear at this point, but when they left, they seemed satisfied they had enough to get a search warrant.

"Yeah, we saw what we needed to see," an agent is heard on the tapes saying. His partner responds, "Very cool."

Defense lawyer Goldstein contends that not only was the search illegal, but the government knew it was and tried to cover it up. He contends that the materials later submitted to a federal magistrate judge in seeking a warrant carefully eliminated all indications that the federal agents had themselves cut the Internet line so that the villa occupants would ask for repairmen to come to the villa to fix the problem.

"They just managed not to tell the magistrate what it is they had actually done," says Goldstein.

Indeed, Goldstein notes that he and his clients never would have known that it was the FBI agents who cut the line were it not for one slip of the tongue that the agents made — recorded on tape — when talking among themselves. He adds that when the defense asked for further recordings, the FBI provided two blank CDs, claiming the recording devices malfunctioned.

"There's no real way of looking at this other than to say that it is a cover-up," contends Goldstein.

Cover-up or not, the legal theory used here by the Justice Department and the FBI would change the legal rules of the road dramatically if adopted by the courts.

"The theory behind this search is scary," says George Washington University law professor Stephen Saltzburg, author of a leading criminal law text. "It means the government can cut off your service, intentionally, and then pretend to be a repair person, and then while they're there, they spend extra time searching your house. It is scary beyond belief."

And it's not just Internet service that could be cut off. Cable TV lines, plumbing or water lines — the list in the modern world is a long one.

Saltzburg, who has himself worked for the Justice Department, is frankly puzzled by the brazenness of the search here.

"It's very difficult to understand, unless they want to try to push the law of consent beyond where it's ever gone before," he says.

The Justice Department declined to comment for this story, saying it would make its arguments in court when the time comes.

20141028

UK Filters And The Slippery Slope Of Mass Censorship

We've covered the ridiculousness of the UK's "voluntary" web filters. UK officials have been pushing such things for years and finally pushed them through by focusing on stopping "pornography" (for the children, of course). While it quickly came out that the filters were blocking tons of legitimate content (as filters always do), the UK government soon moved to talk about ways to expand what the filters covered.

The pattern is not hard to recognize, because it happens over and over again. Government officials find some absolute horror -- the kind of thing that no one will stand up for -- to push for some form of censorship. Few fight back because no one wants to be seen as standing up for something absolutely horrific online, or be seen as being against "family values." But, then, once the filters are in place, it becomes so easy both to ignore the fact that the filters don't work (and censor lots of legitimate content) and to constantly expand and expand and expand them. And people will have much less of a leg to stand on, because they didn't fight back at the beginning.

That appears to be happening at an astonishingly fast pace in the UK. Index On Censorship has a fantastic article, discussing how a UK government official has already admitted to plans to expand the filter to "unsavoury" content rather than just "illegal."
James Brokenshire was giving an interview to the Financial Times last month about his role in the government’s online counter-extremism programme. Ministers are trying to figure out how to block content that’s illegal in the UK but hosted overseas. For a while the interview stayed on course. There was “more work to do” negotiating with internet service providers (ISPs), he said. And then, quite suddenly, he let the cat out of the bag. The internet firms would have to deal with “material that may not be illegal but certainly is unsavoury”, he said.

And there it was. The sneaking suspicion of free thinkers was confirmed. The government was no longer restricting itself to censoring web content which was illegal. It was going to start censoring content which it simply didn’t like.
It goes on, in fairly great detail, to describe just how quickly the UK is sliding down that slippery slope of censorship. It highlights how these filters were kicked off as an "anti-porn" effort, where the details were left intentionally vague.
But David Cameron positioned himself differently, by starting up an anti-porn crusade. It was an extremely effective manoeuvre. ISPs now suddenly faced the prospect of being made to look like apologists for the sexualisation of childhood.

Or at least, that’s how it was sold. By the time Cameron had done a couple of breakfast shows, the precise subject of discussion was becoming difficult to establish. Was this about child abuse content? Or rape porn? Or ‘normal’ porn? It was increasingly hard to tell.
And, of course, the fact that the filters go too far is never seen as a serious problem.
The filters went well beyond what Cameron had been talking about. Suddenly, sexual health sites had been blocked, as had domestic violence support sites, gay and lesbian sites, eating disorder sites, alcohol and smoking sites, ‘web forums’ and, most baffling of all, ‘esoteric material’. Childline, Refuge, Stonewall and the Samaritans were blocked, as was the site of Claire Perry, the Tory MP who led the call for the opt-in filtering. The software was unable to distinguish between her description of what children should be protected from and the things themselves.

At the same time, the filtering software was failing to get at the sites it was supposed to be targeting. Under-blocking was at somewhere between 5% and 35%.

Children who were supposed to be protected from pornography were now being denied advice about sexual health. People trying to escape abuse were prevented from accessing websites which could offer support.

And something else curious was happening too: A reactionary view of human sexuality was taking over. Websites which dealt with breast feeding or fine art were being blocked. The male eye was winning: impressing the sense that the only function for the naked female body was sexual.
But, of course, no one in the UK government seems to care. In fact, they're looking to expand the program. Because it was never about actually stopping porn. It was always about having a tool for mass censorship.
The list was supposed to be a collection of child abuse sites, which were automatically blocked via a system called Cleanfeed. But soon, criminally obscene material was added to it – a famously difficult benchmark to demonstrate in law. Then, in 2011, the Motion Picture Association started court proceedings to add a site indexing downloads of copyrighted material.

There are no safeguards to stop the list being extended to include other types of sites.

This is not an ideal system. For a start, it involves blocking material which has not been found illegal in a court of law. The Crown Prosecution Service is tasked with saying whether a site reaches the criminal threshold. This is like coming to a ruling before the start of a trial. The CPS is not an arbiter of whether something is illegal. It is an arbiter, and not always a very good one, of whether there is a realistic chance of conviction.

As the IWF admits on its website, it is looking for potentially criminal activity – content can only be confirmed to be criminal by a court of law. This is the hinterland of legality, the grey area where momentum and secrecy count for more than a judge’s ruling.

There may have been court supervision in putting in place the blocking process itself but it is not present for individual cases. Record companies are requesting sites be taken down and it is happening. The sites are only being notified afterwards, are only able to make representations afterwards. The traditional course of justice has been turned on its head.
And it just keeps going on and on. As the report notes, "the possibilities for mission creep are extensive." You don't say. They also note that technologically clueless politicians love this because they can claim they're solving a hard problem when they're really doing no such thing (and really are just creating other problems at the same time):
MPs like filtering software because it seems like a simple solution to a complex problem. It is simple. So simple it does not exist.
Of course, if you recognize that the continued expansion of such filters was likely the plan from the beginning, then everything is going according to plan. The fact that it doesn't solve any problems the public are dealing with is meaningless. It solves a problem that the politicians are dealing with: how to be able to say they've "done something" to "protect the children" while at the same time building up the tools and powers of the government to stifle any speech they don't like. To those folks, the system is working perfectly.

20141026

When one New Zealand school tossed its playground rules and let students risk injury, the results were surprising

Sarah Boesveld

AUCKLAND, New Zealand — It was a meeting Principal Bruce McLachlan awaited with dread.

One of the 500 students at Swanson School in a northwest borough of Auckland had just broken his arm on the playground, and surely the boy’s parent, who had requested this face-to-face chat with the headmaster, was out for blood.

It had been mere months since the gregarious principal threw out the rulebook on the playground of concrete and mud, dotted with tall trees and hidden corners; just weeks since he had stopped reprimanding students who whipped around on their scooters or wielded sticks in play sword fights.

He knew children might get hurt, and that was exactly the point — perhaps if they were freed from the “cotton wool” in which their 21st-century parents had them swaddled, his students might develop some resilience, use their imaginations, solve problems on their own.

The parent sat down, stone-faced, across from the principal.

“My son broke his arm in the playground, and I just want to make sure…” he began.

“And I’m thinking ‘Oh my God, what’s going to happen?’” Mr. McLachlan recalled, sitting in his “fishbowl” of an office one hot Friday afternoon last month.

The parent continued: “I just wanted to make sure you don’t change this play environment, because kids break their arms.”

Mr. McLachlan took the unexpected vote of confidence as a further sign that his educational-play experiment was working: Fewer children were getting hurt on the playground. Students focused better in class. There was also less bullying, less tattling. Incidents of vandalism had dropped off.

And now the principal’s unconventional approach has made waves around the world, with school administrators and parents as far away as the United States and the United Kingdom asking how they, too, can abandon a rulebook designed to assuage fears about school safety in a seemingly dangerous time. It’s an attractive idea for some Western educators who’ve recently extolled the virtues of reintroducing risk into children’s lives. But can such an about-face take shape in a world in which rules act as armor against lawsuits, at a time in which recess gets cancelled altogether in the interest of keeping children safe?

Navy database tracks civilians' parking tickets, fender-benders

By Mark Flatten

A parking ticket, traffic citation or involvement in a minor fender-bender is enough to get a person's name and other personal information logged into a massive, obscure federal database run by the U.S. military.

The Law Enforcement Information Exchange, or LinX, has already amassed 506.3 million law enforcement records ranging from criminal histories and arrest reports to field information cards filled out by cops on the beat even when no crime has occurred.

"That may be where you are starting to cross the line on mass collection of information on innocent people just because you can."

LinX is a national information-sharing hub for federal, state and local law enforcement agencies. It is run by the Naval Criminal Investigative Service, raising concerns among some military law experts that putting such detailed data about ordinary citizens in the hands of military officials crosses the line that generally prohibits the armed forces from conducting civilian law enforcement operations.

Those fears are heightened by recent disclosures of the National Security Agency spying on Americans, and the CIA allegedly spying on Congress, they say.

Eugene Fidell, who teaches military law at Yale Law School, called LinX “domestic spying.”

“It gives me the willies,” said Fidell, a member of the Defense Department’s Legal Policy Board and a board member of the International Society for Military Law and the Law of War.

Fidell reviewed the Navy's LinX website at the request of the Washington Examiner to assess the propriety of putting such a powerful database under the control of a military police entity.

“Clearly, it cannot be right that any part of the Navy is collecting traffic citation information,” Fidell said. “This sounds like something from a third-world country, where you have powerful military intelligence watching everybody.”

The military has a history of spying on Americans. The Army did it during the Vietnam War and the Air Force did it after the Sept. 11 terror attacks.

Among the groups subjected to military spying in the name of protecting military facilities from terrorism was a band of Quakers organizing a peace rally in Florida.

LinX administrators say it is nothing more than an information-sharing network that connects records from participating police departments across the country.

LinX was created in 2003 and put under NCIS, which has counterterrorism and intelligence-gathering missions in addition to responsibility for criminal investigations. LinX was originally supposed to help NCIS protect naval bases from terrorism.

More than 1,300 agencies participate, including the FBI and other Department of Justice divisions, the Department of Homeland Security and the Pentagon. Police departments along both coasts and in Texas, New Mexico, Alaska and Hawaii are in LinX.

The number of records in the system has mushroomed from about 50 million in 2007 to more than 10 times that number today.

Background checks for gun sales and applications for concealed weapons permits are not included in the system, according to NCIS officials and representatives of major state and local agencies contacted by the Examiner.

The director of NCIS, Andrew Traver, drew stiff opposition from the National Rifle Association after President Obama twice nominated him to be head of the Bureau of Alcohol, Tobacco, Firearms and Explosives.

The nomination failed to go forward in the Senate both times, largely because of what the NRA described as Traver's advocacy for stricter gun laws.

He became NCIS director in October.

NCIS officials could not say how much has been spent on LinX since it was created in 2003. They provided figures since the 2008 fiscal year totaling $42.3 million. Older records are not available from NCIS.

Incomplete data from USAspending.gov shows at least $7.2 million more was spent between 2003 and 2008. The actual figure is probably much higher, since the spending listed on the disclosure site only totals $23 million since 2003.

Other law enforcement databases have limited information on things like criminal histories, said Kris Peterson, LinX division chief at NCIS.

More detailed narratives and things like radio dispatch logs and pawn shop records don’t show up in those databases, but are available in LinX, he said.

Participating agencies must feed their information into the federal data warehouse and electronically update it daily in return for access.

Why LinX wound up in the NCIS, a military law enforcement agency, is not clear. Current NCIS officials could not explain the reasoning, other than to say it grew out of the department's need for access to law enforcement records relevant to criminal investigations.

A 2008 investigation into the removal of nine U.S. attorneys during the George W. Bush administration found that an overly aggressive push for DOJ to embrace LinX led to the firing of John McKay, then the U.S. attorney for western Washington state.

A DOJ inspector general's report said McKay developed the initiative with NCIS officials, and that NCIS agreed to fund it.

Neither McKay nor David Brant, head of NCIS at the time, could be reached for comment.

The FBI, a DOJ entity, has since built its own system similar to LinX, called the National Data Exchange or N-Dex.

The systems are connected, and much of the information in N-Dex comes from LinX, said Christopher Cote, assistant director for information technology at NCIS.

Putting the military in control of so much information about civilians is what makes people like Fidell nervous.

Americans have distrusted the use of the military for civilian law enforcement since before the Revolutionary War, he said.

Since the passage of the Posse Comitatus Act of 1878, it has been illegal for the military to engage in domestic law enforcement except in limited circumstances, such as quelling insurrections.

The limits in the law were largely undefined for almost a century. In 1973, the Army provided logistical support for FBI agents trying to break the standoff with American Indian Movement militants at Wounded Knee, S.D.

Several criminal defendants later argued the use of the military was illegal under Posse Comitatus.

Ensuing court decisions decreed that using the military for direct policing, such as making arrests or conducting searches, was illegal and should be left to civilian departments. Providing logistical support, equipment and information is allowed.

Since then, the law has been loosened to allow limited military participation in certain large-scale anti-drug investigations.

Aside from the legal issues is the problem of “mission creep,” said Gene Healy, vice president of the Cato Institute and an Examiner columnist, who has written about the overreach of the military in civilian law enforcement.

What begins as a well-meaning and limited effort to assist local police can grow into a powerful threat to constitutional protections, Healy said.

A recent example of mission-creep gone awry is the Threat And Local Observation Notice, or TALON, program created by the Air Force at the same time LinX was launched.

Like LinX, TALON’s purpose was to create a network for information-sharing among federal, state and local police agencies that could be used to help protect military facilities.

In 2005, media reports showed TALON was being used to spy on anti-war groups, including the Quakers. TALON was disbanded in 2007.

“The history of these programs is that they tend to metastasize and that there is mission creep that involves gathering far more information than is needed,” said Healy.

“In general, what you see in these programs is they start out very narrow and they expand beyond the limits of their original logic. Repeatedly throughout American history, what starts small becomes larger, more intrusive, more troubling,” he said.

LinX can only be used for law enforcement purposes, though intelligence and counter-terror officers at NCIS do have access to the system, Cote said. TALON was primarily an intelligence-gathering network.

The rules governing LinX are almost identical to those controlling other federal databases run by the FBI, he said.

While NCIS is a military police unit, its agents are civilian employees equivalent to those at the FBI and other federal agencies, said NCIS spokesman Ed Buice.

While there are limits on military enforcement of civilian laws, it is allowed if it is done “primarily for a military purpose,” which is how NCIS uses the system, Buice said.

Before LinX was launched, NCIS briefed representatives of the ACLU, “who didn't even blink,” he added.

Chris Calabrese, legislative counsel for the ACLU, said he doesn’t know who, if anyone, in the organization would have told the Navy that LinX raised no concerns.

Calabrese was not particularly troubled about LinX being run by the military, though he did question why it is necessary since most of the same information is available in the FBI's N-Dex database.

Generally, the ACLU recognizes the need for police to collect and share information about criminal activity — things like felony histories and outstanding warrants.

Civil libertarians get more concerned as more trivial information on average citizens is collected under the guise of protecting the public, especially absent some reasonable suspicion that a crime has been committed, he said.

Pawn shop records and parking tickets are that kind of questionable information.

“To me, that may be where you are starting to cross the line on mass collection of information on innocent people just because you can,” Calabrese said.

“We live now in a world of records where everything we do is generating a record. So the standard can’t be, 'We have to keep it all because it might be useful for something some day.' The rationale has to be more finely tuned than that,” he said.

Tesco refuses to sell Liam Whelan, 16, teaspoons pack

A 16-year-old boy was told he was too young to buy a pack of teaspoons by a supermarket in Lancashire.

Liam Whelan was sent by his stepmother to their local Tesco in Haslingden to buy replacements for the spoons he keeps losing.

But staff refused to sell the 57p pack of teaspoons to Liam, from Deardengate, because he was not 18.

His stepmother Yvette Whelan said the decision was "daft". Tesco apologised for staff not using their judgement.

Mrs Whelan said she sent Liam out to buy the spoons because he and his brother, Josh, keep losing them.

She admitted she initially thought he was lying about the incident and had not actually been to the store.

"Knives, forks I can understand but teaspoons? No," she said.

"There's just no common sense."

Mrs Whelan said Liam was "really embarrassed" by being sent home empty-handed.

A Tesco spokesperson said: "We do include a till prompt for proof of age on our self-service tills for some items.

"We ask our colleagues to use their judgment as to whether this should be applied.

"In this instance, this was not followed and we apologise to our customer for any inconvenience caused."

Mom says local grade school's active shooter drill traumatized child

(KMOV) – A Cahokia mother said a school drill aimed at saving lives traumatized her child. She wants the district to make changes because the drill was too intense for young children.

A’Lia Burrell is a third-grader at Penniman Elementary School. She didn’t understand that Wednesday’s “code red” was just a drill.

"While we were under the computers, I uh started to pray,” said Burrell.

“With all the stuff going on now a day I understand they need drills, but there's a better way to go about it,” said Burrell’s mother, Jackie. "Now she doesn't want to go to school and she’s scared... When I came walking in the door earlier she said I thought you were an intruder... that's really affecting her right now.”

She wants the district to dial it back for younger students and let parents know in advance. A’Lia told her the police resource officer said something very disturbing.

"One of our school people said they are killing that way,” said Burrell.

Cahokia Superintendent Art Ryan was not at the school for the drill. But after News 4 called, he asked the principal about this.

“I talked to the principal and she was with the resource officer the whole time and there was not a statement of anything like that,” said Ryan.

He said in this day and age, he stands behind the intensity of the drill.

“The kids need to be prepared for an extreme circumstance so maybe have the drill be a little on the extreme side doesn't hurt,” said Ryan. “I do understand your concerns about the age of the children, I’d much rather your children be a little bit scared and alive, than not knowing what to do and end up being hurt."

Drills like this one are mandated in both Illinois and Missouri along with tornado and fire drills.

20141025

Judge says prosecutors should follow the law. Prosecutors revolt.

By Radley Balko

I’ve addressed the problem of prosecutorial misconduct here a few times before — both its prevalence, and the fact that misbehaving prosecutors are rarely sanctioned or disciplined. Recently (or perhaps the better word is finally), some judges have begun to speak out about the problem, including, most notably, Alex Kozinski, the influential judge on the U.S. Court of Appeals for the 9th Circuit.

Late last year, South Carolina State Supreme Court Justice Donald Beatty joined Kozinski. At a state solicitors’ convention in Myrtle Beach, Beatty cautioned that prosecutors in the state have been “getting away with too much for too long.” He added, “The court will no longer overlook unethical conduct, such as witness tampering, selective and retaliatory prosecutions, perjury and suppression of evidence. You better follow the rules or we are coming after you and will make an example. The pendulum has been swinging in the wrong direction for too long and now it’s going in the other direction. Your bar licenses will be in jeopardy. We will take your license.”

You’d think that there’s little here with which a conscientious prosecutor could quarrel. At most, a prosecutor might argue that Beatty exaggerated the extent of misconduct in South Carolina. (I don’t know if that’s true, only that that’s a conceivable response.) But that prosecutors shouldn’t suborn perjury, shouldn’t retaliate against political opponents, shouldn’t suppress evidence, and that those who do should be disciplined — these don’t seem like controversial things to say. If most prosecutors are following the rules, you’d think they’d have little to fear, and in fact would want their rogue colleagues identified and sanctioned.

The state’s prosecutors didn’t see it that way. Beatty singled out South Carolina’s 9th Judicial District in particular. There’s a good reason for that: He noted in his talk that two prosecutors from that district, overseen by Solicitor Scarlett Wilson, had already been suspended for misconduct and at the time of his talk, another complaint was pending. A recent filing by the state’s association of criminal defense lawyers laid out a list of other complaints (PDF) against Wilson’s office. (You can read Wilson’s response here.)

But Wilson took personal offense at Beatty’s comments. She accused him of bias and sent a letter asking him to recuse himself from criminal cases that come out of her district. In one sense, Wilson is unquestionably correct. Beatty is biased. He’s clearly biased against prosecutors who commit misconduct. But that’s a bias you probably want in a judge, particularly one that sits on a state supreme court. It’s also a bias that isn’t nearly common enough in judges. (Not only do most judges not name misbehaving prosecutors in public, they don’t even name them in court opinions.)

Other prosecutors around the state piled on, and now at least 13 of the head prosecutors in the state’s 16 judicial districts, along with South Carolina Attorney General Alan Wilson, are asking for Beatty to be recused from criminal cases. This would presumably end his career as a state supreme court justice.

Over at the Connecticut Law Tribune, the public defender who writes under the pseudonym “Gideon” comments on this mess:

Why, then, is it so inappropriate for Justice Beatty to remind stewards of justice that their charge includes not only securing convictions, but also maintaining the integrity of the criminal justice system? What is so particularly offensive about the justice making his opinion known? Certainly no one would argue that there are two competing opinions to be had here; there is no pro-suppression of exculpatory evidence lobby. So is it merely the petulance of being chided in public?
This isn’t an unusual occurrence, however. Prosecutors in San Diego have long used a state law to “disqualify” pro-defense judges. Just a few months ago, they boycotted a superior court judge because he issued a few too many rulings upholding the Fourth Amendment, in favor of defendants. They claim that these statements and rulings evince an underlying bias that these judges have, making them unfit to be neutral and detached magistrates in criminal court.
Consider also Santa Clara County, Calif., where a few years ago former district attorney Dolores Carr responded to a series of scandals, in which her office failed to disclose exculpatory evidence and one of her assistants was sanctioned, by boycotting the judge who ruled against her and then attempting to restrict the power of the state bar to discipline prosecutors. (Something the bar rarely does, anyway.)
In these days when the media and the masses equate every arrest with guilt and every acquittal with a mistaken jury and a technicality in the law, these incidents show that some prosecutors aren’t above playing to these base sentiments, or worse, actually believe these very things.
Why else would a judge who sides with a defendant and his Fourth Amendment rights be unfit to sit in criminal court? Why else would it be grounds to disqualify a judge for reminding prosecutors of their ethical obligation?
Justice Beatty’s remarks are troubling, but not for the reasons the attorney general of South Carolina thinks. They’re troubling because they reveal that prosecutors there engage in witness tampering, retaliatory and selective prosecutions and even perjury. They’re troubling because they reveal that perhaps the South Carolina Supreme Court has been aware of this unethical conduct but has heretofore turned a blind eye to it (“no longer overlook…”). They’re troubling because they reveal that justice in South Carolina isn’t what justice should be and some want to keep it that way.
One more example: Recently in Arizona, the state’s supreme court recommended adopting an ethics rule that would require prosecutors to disclose “new, credible, and material evidence” of a wrongful conviction, make that information available to the convicted and then “undertake further investigation or make reasonable efforts to cause an investigation, to determine whether the defendant was convicted of an offense that the defendant did not commit.”

This seems like a pretty sensible guideline. Yet the office of Maricopa County Attorney William Montgomery opposed it. Why? According to a comment Montgomery’s office submitted to the court, because there’s “no convincing evidence that Arizona has a ‘problem’ of wrongful convictions” or that “prosecutors have failed to take corrective action when appropriate.” In a debate a couple of weeks ago, Montgomery reiterated his opposition. He said he already follows the rule, and so he was insulted that anyone would suggest an ethical guideline would be necessary to hold him to it.

Of course, even if Montgomery himself always follows the proposed rule, he isn’t the only prosecutor in Arizona. Nor will he be the last prosecutor in Maricopa County. Certainly he can’t believe that every current and future prosecutor in Arizona will now and always do the right thing when presented with evidence of a wrongful conviction. Perhaps it’s true that only the rare, rogue, isolated prosecutor would hide, obscure, or sit on such evidence. But if disclosure of that evidence is the right thing to do, it’s difficult to understand why anyone would oppose giving the state bar a way to discipline that prosecutor, rare, rogue, isolated as he may be.

The most plausible explanation for all of these stories is that a significant number of prosecutors just don’t want to be held accountable to anyone but themselves. I suppose a lot of us would like to have that sort of protection in our jobs. But few of us do. And the rest of us don’t hold positions that give us the power to ruin someone’s life with criminal charges, to convince a jury to put someone in prison or to ask the state to put someone to death.

Transgender Woman Can’t Be Diversity Officer Because She’s a White Man Now


I know what you are thinking. You think some woman had a sex change and now the liberals on campus have gone after the now-male student because of “the Patriarchy”. It is much worse.

Timothy Boatwright applied to an all-women’s school. While he checked the “female” box when he applied for the school, he identified as “masculine-of-center genderqueer” when he got there. Granted, that is rather nonsensical, but it is not the stupid part. This is:

And, by all accounts, Boatwright felt welcome on campus — until the day he announced that he wanted to run for the school’s office of multicultural affairs coordinator, whose job is to promote a “culture of diversity” on campus.
But some students thought that allowing Boatwright to have the position would just perpetuate patriarchy. They were so opposed, in fact, that when the other three candidates (all women of color) dropped out, they started an anonymous Facebook campaign encouraging people not to vote at all to keep him from winning the position.
“I thought he’d do a perfectly fine job, but it just felt inappropriate to have a white man there,” the student behind the so-called “Campaign to Abstain” said.
“It’s not just about that position either,” the student added. “Having men in elected leadership positions undermines the idea of this being a place where women are the leaders.”
The New York Times ran an in-depth article giving further insight:
Last spring, as a sophomore, Timothy decided to run for a seat on the student-government cabinet, the highest position that an openly trans student had ever sought at Wellesley. The post he sought was multicultural affairs coordinator, or “MAC,” responsible for promoting “a culture of diversity” among students and staff and faculty members. Along with Timothy, three women of color indicated their intent to run for the seat. But when they dropped out for various unrelated reasons before the race really began, he was alone on the ballot. An anonymous lobbying effort began on Facebook, pushing students to vote “abstain.” Enough “abstains” would deny Timothy the minimum number of votes Wellesley required, forcing a new election for the seat and providing an opportunity for other candidates to come forward. The “Campaign to Abstain” argument was simple: Of all the people at a multiethnic women’s college who could hold the school’s “diversity” seat, the least fitting one was a white man.
To recap, Timothy signed up for school as a female so that his mother would not discover he is “transmasculine” (as he put it in the New York Times article). Once at Wellesley, he told his fellow students to call him Timothy and refer to him with male pronouns. They did this with apparently little problem. However, when Timothy attempted to become a multicultural affairs coordinator, the students turned on him because he is white and considers himself male.

They think Timothy, a person who is female but identifies as male, if ever there were a minority, is unqualified to be a diversity officer because Timothy is a white female who identifies as a white male.

Keep in mind that Timothy has done nothing to alter his body. His body is still female. He simply identifies as male. Yet that simple identification is enough to make his fellow students hold his perceived maleness against him, despite the likelihood that he has never experienced any “male privilege” at all.

This politically correct fail is such a thing to behold. For example, there is this:
“Sisterhood is why I chose to go to Wellesley,” said a physics major who graduated recently and asked not to be identified for fear she’d be denounced for her opinion. “A women’s college is a place to celebrate being a woman, surrounded by women. I felt empowered by that every day. You come here thinking that every single leadership position will be held by a woman: every member of the student government, every newspaper editor, every head of the Economics Council, every head of the Society of Physics. That’s an incredible thing! This is what they advertise to students. But it’s no longer true. And if all that is no longer true, the intrinsic value of a women’s college no longer holds.”
That fear is genuine. A student named Laura Bruno was interviewed, and it did not go well for her when she stated that having men on campus diminished the importance of having a women’s college:
The interviewer asked Laura to describe her experience at an “all-female school” and to explain how that might be diminished “by having men there.” Laura answered, “We look around and we see only women, only people like us, leading every organization on campus, contributing to every class discussion.”
Kaden, a manager of the campus student cafe who knew Laura casually, was upset by her words. He emailed Laura and said her response was “extremely disrespectful.” He continued: “I am not a woman. I am a trans man who is part of your graduating class, and you literally ignored my existence in your interview. . . . You had an opportunity to show people that Wellesley is a place that is complicating the meaning of being an ‘all women’s school,’ and you chose instead to displace a bunch of your current and past Wellesley siblings.”
Laura apologized, saying she hadn’t meant to marginalize anyone and had actually vowed beforehand not to imply that all Wellesley students were women. But she said that under pressure, she found herself in a difficult spot: How could she maintain that women’s colleges would lose something precious by including men, but at the same time argue that women’s colleges should accommodate students who identify as men?
I feel sorry for students like Laura. They want to be inclusive, yet they want their “sisterhood”. The moment someone who identifies as male enters the fray, the latter possibility is ruined. The simplest solution would be to require every student to at least identify as female. That would solve the immediate problem of having females transitioning to or identifying as males on campus. However, it would cause the new problem of asking the students who identify as male to leave.

Yet I also do not feel sorry for the students. This is precisely what happens when political correctness is left to its own devices. Accommodating transgender views about sex and gender renders the very concepts of sex and gender moot. If you think a person can change their gender or sex, then neither concept is concrete. They essentially do not exist. They are simply social constructs no different than race or nationality. And if that is true, then there is little point to having an all-women college. There is no such thing as a “woman” or “female”.

Obviously most people do not believe that. Most people think there are only two sexes, male and female, and that these are biologically determined, not social constructs. I suspect that most people at Wellesley do not actually think people like Timothy are male. They simply go along with it for the sake of appearances. However, people like Timothy do not know or suspect that, so when they attempt to engage in normal school activities, they end up with Timothy’s present situation.

Perhaps the most enlightening part is how much this mirrors the feminist attack on men and masculinity. That is essentially the argument at play: that men, maleness, and masculinity are unwanted and bad. Even transmen attending Wellesley share that view:
Others are wary of opening Wellesley’s doors too quickly — including one of Wellesley’s trans men, who asked not to be named because he knew how unpopular his stance would be. He said that Wellesley should accept only trans women who have begun sex-changing medical treatment or have legally changed their names or sex on their driver’s licenses or birth certificates. “I know that’s a lot to ask of an 18-year-old just applying to college,” he said, “but at the same time, Wellesley needs to maintain its integrity as a safe space for women. What if someone who is male-bodied comes here genuinely identified as female, and then decides after a year or two that they identify as male — and wants to stay at Wellesley? How’s that different from admitting a biological male who identifies as a man? Trans men are a different case; we were raised female, we know what it’s like to be treated as females and we have been discriminated against as females. We get what life has been like for women.”
It is bias all around, from the students who do not want transmen in their school to the transmen who do not want transwomen in the school to those who oppose men in general.

Grisham and the law

John Grisham waded into a political war zone when he commented on the conviction of people who possess child pornography. Grisham stated in an interview with the Telegraph:
“We have prisons now filled with guys my age. Sixty-year-old white men in prison who’ve never harmed anybody, would never touch a child,” he said in an exclusive interview to promote his latest novel Gray Mountain which is published next week.
“But they got online one night and started surfing around, probably had too much to drink or whatever, and pushed the wrong buttons, went too far and got into child porn.”
His comments sparked criticism from child advocacy groups. However, Grisham went on to state:
Asked about the argument that viewing child pornography fuelled the industry of abuse needed to create the pictures, Mr Grisham said that current sentencing policies failed to draw a distinction between real-world abusers and those who downloaded content, accidentally or otherwise.

“I have no sympathy for real paedophiles,” he said, “God, please lock those people up. But so many of these guys do not deserve harsh prison sentences, and that’s what they’re getting,” adding that sentencing disparities between blacks and whites were likely to be the subject of his next book.
No one paid attention to that part, or this part of the Telegraph article:
Since 2004 average sentences for those who possess – but do not produce – child pornography have nearly doubled in the US, from 54 months in 2004 to 95 months in 2010, according to a 2012 report by the U.S. Sentencing Commission.
However the issue of sex-offender sentencing has sparked some debate in the US legal community after it emerged that in some cases those who viewed child porn online were at risk of receiving harsher sentences than those who committed physical acts against children.
A provocative article in the libertarian magazine Reason headlined “Looking v Touching” argued last February that something was “seriously wrong with a justice system in which people who look at images of child rape can be punished more severely than people who rape children”.
Grisham later issued an apology for his comments.

I talked about this topic before. There are many cases in which people face harsher sentences for possessing child porn than people who created the images and abused the children. That is absurd.

I know there are images of me on the internet. I know some people have looked at them. I think that is horrible. I do not want people gratifying themselves at my expense. Yet the harm from that is incomparable to what I experienced making them. There is no reason to punish someone caught with those images more harshly than those who made them.

I understand the counterargument about the continued abuse experience. As Suzanne Ost explains:
Second, it is far too simplistic a claim that those who view images of child abuse online have never harmed anybody. While they themselves may never have touched a child, they contribute to the harm caused to children involved in the creation of such images in numerous ways. For instance, seeking out these images can encourage the market and thus the abuse of more children to fulfil demand. Viewers also underwrite and take advantage of the sexual abuse of the children who feature in the images. Studies involving counsellors and trauma therapists who have treated victims have shown that awareness that their images have been made available for others to view causes the child further mental suffering: their abusive experience has no end because at any time someone could be receiving sexual gratification from viewing their abuse. Other studies have identified a tendency for viewers to downplay their role in causing children harm. Such a perception, which mirrors that expressed by Grisham, could lead to an individual continuing to view these kinds of images. Thus, one particular message that must be conveyed is that this behaviour contributes to victims’ ongoing abusive experiences.
Yes, it is possible that seeking out such images could cause people to make more of them. That certainly does occur (although it is my understanding that most images are from people’s personal collections that they share online rather than being created specifically to be shared). It is also true that many survivors do continue to feel abused because those images are still out there.

Nevertheless, the majority of people viewing these images are not touching any children. The images were already created and most of those people had nothing to do with that. So what is the sex offense? That is Grisham’s point, and I agree with him.

There is no sex offense. There is simply the thought of the offense. These people look at the images and think about either watching it happen or doing it themselves. In other words, we are punishing people for thought crimes. I think that is why, as noted above, the sentences for possession of child porn doubled in six years.

We are repulsed by the notion that people fantasize about having sex with children. That people write stories and draw artwork depicting such acts sickens us. That people would seek out images of real children is worse.

Yet what makes that so bizarre is that people who actually rape children rarely receive the four and a half to eight-year sentences mentioned in the article, particularly if they are female.

We have gotten to the point where we appear to think it is worse to think about hurting a child than actually hurting the child. That is completely backwards.

Granted, there is a potential legal reason for the sentencing disparity. Most sex offense cases rely solely on the testimony of the victim. There is typically no other evidence. That makes the cases harder to prove beyond a reasonable doubt. Prosecutors are more likely to offer plea deals in those cases.

In contrast, child porn cases have evidence: the child porn itself. There is no question the offense occurred (unless the defendant contends that he did not download the images). This makes them much easier to win, and gives the prosecutors more room to offer harsher plea deals. They can use a handful of images to charge a person with a multitude of offenses that will ensure a long prison term.

That said, I agree with Grisham that we should rethink how we treat people caught with child porn. They should be punished, yet not more harshly than those who actually created the images. There is no reason a person who has not touched a child should spend nearly a decade in prison while those who repeatedly abuse children can count the years they served in prison on one hand.

20141023

Why 40% of us think we're in the top 5%

By Christie Nicholson

Psychologist David Dunning explains that not only are we terrible at seeing how stupid we are, but we're also too dumb to recognize genius right in front of us.

In 1996 McArthur Wheeler walked into two banks and attempted to rob them in broad daylight, wearing no disguise. The video surveillance caught his face clearly and later that day he was recognized and arrested, to his surprise. He remarked, “But I wore the juice.” Wheeler mistakenly believed that rubbing lemon juice over your face and body rendered you invisible to video cameras. He had apparently tested this by shooting a Polaroid of himself, and somehow his image mysteriously never appeared in the shot.

Cornell University psychology professor David Dunning read about Wheeler and it sparked an idea: If Wheeler was too incompetent to be a bank robber, maybe he was also too incompetent to know he was incompetent in the first place. Dunning and his team went on to publish a study and found that incompetence can indeed mask the awareness of one’s incompetence. The phenomenon is now called the Dunning-Kruger Effect.

Since then Dunning has performed many studies on incompetence. And he has uncovered something particularly disturbing: We humans are terrible at self-assessment, often grading ourselves as far more intelligent and capable than we actually are. This widespread inability can lead to negative consequences for management and for recognizing genius.

I spoke with Dunning while he was on sabbatical in Palo Alto, Calif., and asked about the negative impact of inaccurate self-assessment and also about a new offshoot of his research -- that apparently we are unable to recognize genius in our midst.

You’ve said that people’s self-views hold only a tenuous to modest relationship with their actual behavior and performance. What are some examples?

Well for instance, what people say about their expertise and what they demonstrate on tests [from minor quizzes to serious entrance exams] tend not to be highly correlated whatsoever. If you ask people to rate their own managerial skills and also have their employers and peers rate them, once again, what you get is [nearly zero correlation].

But what your supervisor and peers say about you is very strongly correlated with the quality of your work.

Whether it be an intellectual task, social task, any task, people’s beliefs about the quality of their work do not bear much relationship to reality, as far as we can measure it.

And you’ve found that people tend to overrate far more often than underrate themselves?

Yes. If I do a study where I measure what people think about themselves versus how well they actually perform, I will expect to see marked over-confidence.

Can you share one example?

One of my favorite examples is a study of the engineering departments of software firms in the Bay Area in California. Researchers asked individual engineers how good they were.

In Company A 32% of the engineers said they were in the top 5% of skill and quality of work in the company. That seemed outrageous until you go to Company B, where 42% said they were in the top 5%. So much for being lonely at the top. Everybody tends to think that they are at the top much more than they really are.

Have you found that gender skews the results in any way?

In one area, men on average are going to say they have more scientific skill than women have. That is a split that starts happening in the United States around the teenage years, but it is not necessarily mirrored in reality. You can give people a math quiz in which both men and women are performing at exactly the same level, but the men think they are doing just fine and the women are dramatically underestimating how well they are doing.

Do you think this influences their chosen paths?

We wanted to see if interest in science was connected to the misperception of performance. After giving students a pop quiz we asked them if they wanted to volunteer for a Jeopardy contest. We found that women were 20% less likely to be interested. [Their level of interest in the contest] was directly connected to how well they thought they had done on the quiz, but it had no relationship to how well they had actually done.

Why is it so hard to see ourselves the way we really are?

It is extremely hard to spot your shortcomings. One reason is this notion of “unknown unknowns.”

What is that?

People do what they can conceive of, but sometimes there are better solutions, or considerations and risks they never knew were out there. They don’t take a solution they don’t know about. If there is a risk they do not know about, they don’t prepare for it. There are any number of unknown unknowns that we are dealing with whenever we face a challenge in life.

You’ve also done studies on how this general over-confidence leads to problems in corporate feedback.

Giving feedback, especially in the workplace, is a very touchy situation, and companies make reviews more touchy by directly connecting them to things like pay raises. There are two reasons people may not be receptive to feedback: One is that it’s going to come as a complete surprise to them, because they probably don’t know what their weaknesses are; the second is that it’s just a natural human tendency to be defensive.

So, you have to work around that. There are three different things you can do as a manager. The first thing is if you are going to give feedback make sure that it’s about a person’s behavior or their actions. Do not make it about their character or their ability.

If you come at them with words like 'You are lazy,' or 'You’re not all that innovative,' then you are attacking their character.

Second, you want to give feedback often. If feedback is rare, people will naturally get their defensive antenna up.

Third, you do not want the only feedback to come when the supervisor is angry. There are a lot of companies where that is the habit. The supervisor has to be driven mad before he or she gives the feedback that a person really needed to hear earlier. How are you going to listen to a mad person yelling at you? So, that is the last thing to avoid.

One of your papers concludes that top performers are much more likely to keep improving and low performers are not.

Right. In one study on emotional intelligence we offered MBA students a book, The Emotionally Intelligent Manager, for half price. And we discovered a paradox. When we offered the book, two-thirds of the top performers bought it. But only 20% of low performers bought it. It was the top performers, not the low performers, who showed the most interest in improving.

There is some evidence that this is a general tendency, at least among Westerners. There are studies where students have either done really well or really poorly on puzzles. And then, during a waiting period, the researchers just watched to see if the students returned to these puzzles and played with them. It’s the students who have done well that go back and play with these puzzles. But poor performers want nothing to do with these puzzles.

Mind you, in Japan, that pattern flips. It is when the student has done poorly that they return to those puzzles. The explanation there is that we Americans are more of a self-affirmation culture, whereas Japan is more of a self-improvement culture. And, so people’s orientation to success and failure differ across the two places.

How can we become better at self-assessment?

It is almost impossible for an individual left to their own devices to get self-assessment right. The worth of one’s ideas runs through other people. That is, workers should pay attention to what other workers are doing. You can watch other people to benchmark how they handle the same sorts of tasks or situations. Seek out feedback from other workers or managers.

Get a mentor who can tell you about all those unknown unknowns.

Your recent work concerns genius, specifically the fact that we cannot recognize a genius even when one is right in front of us?

Our past research was about poor performers and how they did not have the skills to recognize their shortcomings. Well, ultimately, we found out that that is true for everybody. It’s a problem we all have. We might recognize poor performers because we outperform them. The problem is we do not see mistakes we are making. But people who are more competent than us, they can certainly see our mistakes.

Here is the twist: For really top performers, we cannot recognize just how superior their responses, their strategy, or their thinking is. We cannot recognize the best among us, because we simply do not have the competency to be able to recognize how competent those people are.

That’s pretty profound. It implies we should approach everyone thinking they know something we don’t.

The idea that we’ve been exploring currently in my lab is that genius hides in plain sight. People do not have the competence or the skill or the intellectual scaffolding to recognize people who out-perform them.

How do you discover this?

We test [subjects] on their logical reasoning skill. Then we give them tests that have been completed by other students, ranging from students who have done really horribly to students who have a perfect score. And we ask the [subjects] to estimate how many answers each student got right.

Essentially what you find is everybody gets the poor performer. They see the person who may be getting four out of twenty right. They get that person. But, on average, the person who got 20 out of 20 right, a perfect score, is seen as merely an average performer by everybody else. They think that person only got 12 or 13 items right.

So what is happening is that when you see an answer that might in fact be the correct answer, you’ll think it’s wrong because you think your answer is right. You miss how well this person is performing. Even if we offer subjects up to $50 for correct estimates, there is no improvement in their accuracy.

The problem is that we are not smart enough to recognize genius within our midst. Everybody can agree on who the poor performers are, but you get no agreement on who the top performers are.

But just to back up a little, you've proven that others are by far the best judges of our own ability and talent. So does this only happen with those who are average or below average, and not geniuses?

The data suggests that others do better anticipating our competence than we do ourselves. That isn’t true in every case, in that other people may not have the expertise to spot geniuses, but other people appear to have an advantage when spotting our poor performances that remain opaque to us.

So because we cannot recognize those who are the top of the top performers, this is why we have missed geniuses in the past?

Yes. It is interesting to go through the decades and see how many times things that are now considered incredible works of genius were not recognized at the time.

My favorite example of this is the movie Vertigo, which just this past year went to the number one spot in the British Film Institute’s Sight and Sound poll, displacing Citizen Kane, which had been there for like 40 or 50 years.

When Vertigo came out it was a flop. It got very mixed reviews. But now, 50 years later, it's considered a singular act of genius in cinema. Genius ideas are not going to bowl everybody over immediately. It may take time before the genius in an idea is recognized. What I don't know is how many times it is never recognized. That is an interesting open question.

Vertigo is an odd, odd movie, and so it has taken a while for people to recognize that that is not oddness, but rather innovation.

20141019

The Ethics of Autonomous Cars

Sometimes good judgment can compel us to act illegally. Should a self-driving vehicle get to make that same decision?

Patrick Lin

Should we trust robotic cars to share our road, just because they are programmed to obey the law and avoid crashes?

Our laws are ill-equipped to deal with the rise of these vehicles (sometimes called “automated”, “self-driving”, “driverless”, and “robot” cars—I will use these interchangeably). For example, is it enough for a robot car to pass a human driving test? In licensing automated cars as street-legal, some commentators believe that it’d be unfair to hold manufacturers to a higher standard than humans, that is, to make an automated car undergo a much more rigorous test than a new teenage driver.

But there are important differences between humans and machines that could warrant a stricter test. For one thing, we’re reasonably confident that human drivers can exercise judgment in a wide range of dynamic situations that don’t appear in a standard 40-minute driving test; we presume they can act ethically and wisely. Autonomous cars are new technologies and won’t have that track record for quite some time.

Moreover, as we all know, ethics and law often diverge, and good judgment could compel us to act illegally. For example, sometimes drivers might legitimately want to, say, go faster than the speed limit in an emergency. Should robot cars never break the law in autonomous mode? If robot cars faithfully follow laws and regulations, then they might refuse to drive in auto-mode if a tire is under-inflated or a headlight is broken, even in the daytime when it’s not needed.

For the time being, the legal and regulatory framework for these vehicles is slight. As Stanford law fellow Bryant Walker Smith has argued, automated cars are probably legal in the United States, but only because of a legal principle that “everything is permitted unless prohibited.” That’s to say, an act is allowed unless it’s explicitly banned, because we presume that individuals should have as much liberty as possible. Since, until recently, there were no laws concerning automated cars, it was probably not illegal for companies like Google to test their self-driving cars on public highways.

To illustrate this point by example, Smith turns to another vehicle: a time machine. “Imagine that someone invents a time machine," he writes. "Does she break the law by using that machine to travel to the past?” Given the legal principle nullum crimen sine lege, or “no crime without law,” she doesn’t directly break the law by the act of time-traveling itself, since no law today governs time-travel.

This is where ethics comes in. When laws cannot guide us, we need to return to our moral compass or first principles in thinking about autonomous cars. Does ethics yield the same answer as law? That’s not so clear. If time-traveling alters history in a way that causes some people to be harmed or never to be born, then ethics might find the act problematic.

This illustrates the potential break between ethics and law. Ideally, ethics, law, and policy would line up, but often they don’t in the real world. (Jaywalking and speeding are illegal, for example, but they don’t always seem unethical, e.g., when there’s no traffic or in case of an emergency. A policy, then, to always ticket or arrest jaywalkers and speeders would be legal but perhaps too harsh.)

But, because the legal framework for autonomous vehicles does not yet exist, we have the opportunity to build one that is informed by ethics. This will be the challenge in creating laws and policies that govern automated cars: We need to ensure they make moral sense. Programming a robot car to slavishly follow the law, for instance, might be foolish and dangerous. Better to proactively consider ethics now than defensively react after a public backlash in national news.

The Trolley Problem

Philosophers have been thinking about ethics for thousands of years, and we can apply that experience to robot cars. One classical dilemma, proposed by philosophers Philippa Foot and Judith Jarvis Thomson, is called the Trolley Problem: Imagine a runaway trolley (train) is about to run over and kill five people standing on the tracks. Watching the scene from the outside, you stand next to a switch that can shunt the train to a sidetrack, on which only one person stands. Should you throw the switch, killing the one person on the sidetrack (who otherwise would live if you did nothing), in order to save five others in harm’s way?

A simple analysis would look only at the numbers: Of course it’s better that five persons should live than only one person, everything else being equal. But a more thoughtful response would consider other factors too, including whether there’s a moral distinction between killing and letting die: It seems worse to do something that causes someone to die (the one person on the sidetrack) than to allow someone to die (the five persons on the main track) as a result of events you did not initiate or had no responsibility for.

To hammer home the point that numbers alone don’t tell the whole story, consider a common variation of the problem: Imagine that you’re again watching a runaway train about to run over five people. But you could push or drop a very large gentleman onto the tracks, whose body would derail the train in the ensuing collision, thus saving the five people farther down the track. Would you still kill one person to save five?

If your conscience starts to bother you here, it may be that you recognize a moral distinction between intending someone’s death and merely foreseeing it. In the first scenario, you don’t intend for the lone person on the sidetrack to die; in fact, you hope that he escapes in time. But in the second scenario, you do intend for the large gentleman to die; you need him to be struck by the train in order for your plan to work. And intending death seems worse than just foreseeing it.

This dilemma isn’t just a theoretical problem. Driverless trains today operate in many cities worldwide, including London, Paris, Tokyo, San Francisco, Chicago, New York City, and dozens more. As situational awareness improves with more advanced sensors, networking, and other technologies, a robot train might someday need to make such a decision.

Human drivers may be forgiven for making an instinctive but nonetheless bad split-second decision, such as swerving into oncoming traffic rather than the other way, into a field. But programmers and designers of automated cars don’t have that luxury, since they do have the time to get it right and therefore bear more responsibility for bad outcomes.

Autonomous cars may face similar no-win scenarios too, and we would hope their operating programs would choose the lesser evil. But it would be an unreasonable act of faith to think that programming issues will sort themselves out without a deliberate discussion about ethics, such as which choices are better or worse than others. Is it better to save an adult or a child? What about saving two (or three or ten) adults versus one child? We don’t like thinking about these uncomfortable and difficult choices, but programmers may have to do exactly that. Again, ethics by numbers alone seems naïve and incomplete; rights, duties, conflicting values, and other factors often come into play.

If you complain here that robot cars would probably never be in the Trolley scenario—that the odds of having to make such a decision are minuscule and not worth discussing—then you’re missing the point. Programmers will still need to instruct an automated car on how to act for the entire range of foreseeable scenarios, as well as lay down guiding principles for unforeseen scenarios. So programmers will need to confront this decision, even if we human drivers never have to in the real world. And it matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or done reflexively without any deliberation (as may be the case with human drivers in sudden crashes).

Anyway, there are many examples of car accidents every day that involve difficult choices, and robot cars will encounter at least those. For instance, if an animal darts in front of our moving car, we need to decide: whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by environmental conditions (e.g., a slippery road), obstacles on and off the road (e.g., other cars to the left and trees to the right), the size of an obstacle (e.g., hitting a cow diminishes your survivability, compared to hitting a raccoon), second-order effects (e.g., a crash with the car behind us if we brake too hard), lives at risk in and outside the car (e.g., a baby passenger might mean the robot car should give greater weight to protecting its occupants), and so on.
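To make that kind of weighing concrete, here is a minimal, purely illustrative sketch in Python. Every factor, weight, and maneuver below is a hypothetical stand-in invented for this example; nothing here comes from a real autonomous-driving system or describes how any manufacturer actually programs these decisions.

    # Purely illustrative: hypothetical factors and weights, not a real system.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        occupant_risk: float     # estimated harm to people inside the car, 0 to 1
        outsider_risk: float     # estimated harm to people outside the car, 0 to 1
        property_damage: float   # normalized damage estimate, 0 to 1
        legality_penalty: float  # 0 if legal, 1 if the move breaks a traffic law

    def expected_cost(m, occupant_w=1.0, outsider_w=1.0, property_w=0.1, legality_w=0.05):
        # The weights are the ethical choices: how much occupant safety counts
        # relative to a pedestrian's, and how much lawfulness counts relative to harm.
        return (occupant_w * m.occupant_risk
                + outsider_w * m.outsider_risk
                + property_w * m.property_damage
                + legality_w * m.legality_penalty)

    # Hypothetical scenario: an animal darts in front of the car on a slippery road.
    options = [
        Maneuver("brake hard, stay in lane", 0.2, 0.1, 0.3, 0.0),
        Maneuver("swerve left into the oncoming lane", 0.5, 0.6, 0.4, 1.0),
        Maneuver("swerve right toward the trees", 0.6, 0.0, 0.7, 0.0),
    ]

    best = min(options, key=expected_cost)
    print(best.name)  # the maneuver with the lowest weighted cost under these made-up numbers

The point is not the arithmetic but the weights: someone has to decide in advance how much a passenger's safety counts against a pedestrian's, which is exactly the kind of choice that, as argued above, should not be left to sort itself out.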

The road ahead

Programming is only one of many areas to reflect upon as society begins to widely adopt autonomous driving technology. Here are a few others—and surely there are many, many more:

1. The car itself
Does it matter to ethics if a car is publicly owned, for instance, a city bus or fire truck? The owner of a robot car may reasonably expect that its property “owes allegiance” to the owner and should value his or her life more than unknown pedestrians and drivers. But a publicly owned automated vehicle might not have that obligation, and this can change moral calculations.

Just as the virtues and duties of a police officer are different from those of a professor or secretary, the duties of automated cars may also vary. Even among public vehicles, the assigned roles and responsibilities are different between, say, a police car and a shuttle bus. Some robo-cars may be obligated to sacrifice themselves and their occupants in certain conditions, while others are not.

2. Insurance
How should we think about risks arising from robot cars? The insurance industry is the last line of defense for common sense about risk. It’s where you put your money where your mouth is. And as school districts that want to arm their employees have discovered, just because something is legal doesn’t mean you can do it, if insurance companies aren’t comfortable with the risk. This is to say that, even if we can sort out law and ethics with automated cars, insurers still need to make confident judgments about risk, and this will be very difficult.

Do robot cars present an existential threat to the insurance industry? Some believe that ultra-safe cars that can avoid most or all accidents will mean that many insurance companies will go belly-up, since there would be no or very little risk to insure against. But things could go the other way too: We could see mega-accidents as cars are networked together and vulnerable to wireless hacking—something like the stock market’s “flash crash” in 2010. What can the insurance industry do to protect itself while not getting in the way of the technology, which holds immense benefits?

3. Abuse and misuse
How susceptible would robot cars be to hacking? So far, just about every computing device we’ve created has been hacked. If authorities and owners (e.g., rental car company) are able to remotely take control of a car, this offers an easy path for cyber-carjackers. If under attack, whether a hijacking or ordinary break-in, what should the car do: Speed away, alert the police, remain at the crime scene to preserve evidence…or maybe defend itself? For a future suite of in-car apps, as well as sensors and persistent GPS/tracking, can we safeguard personal information, or do we resign ourselves to a world with disappearing privacy rights?

What kinds of abuse might we see with autonomous cars? If the cars drive too conservatively, they may become a road hazard or trigger road-rage in human drivers with less patience. If the crash-avoidance system of a robot car is generally known, then other drivers may be tempted to “game” it, e.g., by cutting in front of it, knowing that the automated car will slow down or swerve to avoid an accident. If those cars can safely drive us home in a fully-auto mode, that may encourage a culture of more alcohol consumption, since we won’t need to worry so much about drunk-driving.

Predicting the future

We don’t really know what our robot-car future will look like, but we can already see that much work needs to be done. Part of the problem is our lack of imagination. Brookings Institution director Peter W. Singer said, “We are still at the ‘horseless carriage’ stage of this technology, describing these technologies as what they are not, rather than wrestling with what they truly are.” As it applies here, robots aren’t merely replacing human drivers, just as human drivers in the first automobiles weren’t simply replacing horses: The impact of automating transportation will change society in radical ways, and ethics can help guide it.

In “robot ethics,” most of the attention so far has been focused on military drones. But cars are maybe the most iconic technology in America—forever changing cultural, economic, and political landscapes. They’ve made new forms of work possible and accelerated the pace of business, but they also waste our time in traffic. They rush countless patients to hospitals and deliver basic supplies to rural areas, but also continue to kill more than 30,000 people a year in the U.S. alone. They bring families closer together, but also farther away at the same time. They’re the reason we have suburbs, shopping malls, and fast-food restaurants, but also new environmental and social problems.

Automated cars, likewise, promise great benefits and unintended effects that are difficult to predict, and the technology is coming either way. Change is inescapable and not necessarily a bad thing in itself. But major disruptions and new harms should be anticipated and avoided where possible. That is the role of ethics in public policy: it can pave the way for a better future, or it could become a wreck if we don’t keep looking ahead.

Yes, you are under surveillance

In "private" online forums, at malls, and even at home, someone is tracking you

By Julia Angwin

SHARON AND BILAL couldn't be more different. Sharon Gill is a 42-year-old single mother who lives in a small town in southern Arkansas. She ekes out a living trolling for treasures at yard sales and selling them at a flea market. Bilal Ahmed, 36, is a single, Rutgers-educated man who lives in a penthouse in Sydney, Australia. He runs a chain of convenience stores.

Although they have never met in person, they became close friends on a password-protected online forum for patients struggling with mental health issues. Sharon was trying to wean herself from anti-depressant medications. Bilal had just lost his mother and was suffering from anxiety and depression.

From their far corners of the world, they were able to cheer each other up in their darkest hours. Sharon turned to Bilal because she felt she couldn't confide in her closest relatives and neighbors. "I live in a small town," Sharon told me. "I don't want to be judged on this mental illness."

But in 2010, Sharon and Bilal were horrified to discover they were being watched on their private social network.

It started with a break-in. On May 7, 2010, PatientsLikeMe noticed unusual activity on the "Mood" forum where Sharon and Bilal hung out. A new member of the site, using sophisticated software, was attempting to "scrape," or copy, every single message off PatientsLikeMe's private "Mood" and "Multiple Sclerosis" forums.

PatientsLikeMe managed to block and identify the intruder: It was the Nielsen Co., the media-research firm. Nielsen monitors online "buzz" for its clients, including drugmakers. On May 18, PatientsLikeMe sent a cease-and-desist letter to Nielsen and notified its members of the break-in.

But there was a twist. PatientsLikeMe used the opportunity to inform members of the fine print they may not have noticed when they signed up. The website was also selling data about its members to pharmaceutical and other companies.

The news was a double betrayal for Sharon and Bilal. Not only had an intruder been monitoring them, but so was the very place that they considered to be a safe space.

Even worse, none of it was necessarily illegal. Nielsen was operating in a gray area of the law even as it violated the terms of service at PatientsLikeMe. And it was entirely legal for PatientsLikeMe to disclose to its members in its fine print that it would sweep up all their information and sell it.

WE ARE LIVING in a Dragnet Nation — a world of indiscriminate tracking where institutions are stockpiling data about individuals at an unprecedented pace. The rise of indiscriminate tracking is powered by the same forces that have brought us the technology we love so much — powerful computing on our desktops, laptops, tablets, and smartphones.

Before computers were commonplace, it was expensive and difficult to track individuals. Governments kept records only of occasions, such as birth, marriage, property ownership, and death. Companies kept records when a customer bought something and filled out a warranty card or joined a loyalty club. But technology has made it cheap and easy for institutions of all kinds to keep records about almost every moment of our lives.

The combination of massive computing power, smaller and smaller devices, and cheap storage has enabled a huge increase in indiscriminate tracking of personal data. The trackers include many of the institutions that are supposed to be on our side, such as the government and the companies with which we do business.

Of course, the largest of the dragnets appear to be those operated by the U.S. government. In addition to its scooping up vast amounts of foreign communications, the National Security Agency is also scooping up Americans' phone calling records and Internet traffic, according to documents revealed in 2013 by the former NSA contractor Edward Snowden.

Meanwhile, commercial dragnets are blossoming. AT&T and Verizon are selling information about the location of their cellphone customers, albeit without identifying them by name. Mall owners have started using technology to track shoppers based on the signals emitted by the cellphones in their pockets. Retailers such as Whole Foods have used digital signs that are actually facial recognition scanners.

Online, hundreds of advertisers and data brokers are watching as you browse the Web. Looking up "blood sugar" could tag you as a possible diabetic by companies that profile people based on their medical condition and then provide drug companies and insurers access to that information. Searching for a bra could trigger an instant bidding war among lingerie advertisers at one of the many online auction houses.

IN 2009, 15-YEAR-OLD high school student Blake Robbins was confronted by an assistant principal who claimed she had evidence that he was engaging in "improper behavior in his home." It turned out that his school had installed spying software on the laptops that it issued to the school's 2,300 students. The school's technicians had activated software on some of the laptops that could snap photos using the webcam. Blake's webcam captured him holding pill-shaped objects. Blake and his family said they were Mike and Ike candies. The assistant principal believed they were drugs.

Blake's family sued the district for violating their son's privacy. The school said the software had been installed to allow technicians to locate the computers in case of theft. However, the school did not notify students of the software's existence, nor did it set up guidelines for when the technical staff could operate the cameras.

An internal investigation revealed that the cameras had been activated on more than 40 laptops and captured more than 65,000 images. Some students were photographed thousands of times, including when they were partially undressed and sleeping. The school board later banned the school's use of cameras to surveil students.

On April 5, 2011, John Gass picked up his mail in Needham, Mass., and was surprised to find a letter stating that his driver's license had been revoked. "I was just blindsided," John said.

John is a municipal worker — he repairs boilers for the town of Needham. Without a driver's license, he could not do his job. He called the Massachusetts Registry of Motor Vehicles and was instructed to appear at a hearing and bring documentation of his identity. They wouldn't tell him why his license was revoked.

When John showed up for his hearing, he learned that the RMV had begun using facial recognition software to search for identity fraud. The software compared license photos to identify people who might have applied for multiple licenses under aliases. The software had flagged him and another man as having similar photos and had required them to prove their identities.
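To make the mechanics of that kind of dragnet concrete, here is a minimal, purely hypothetical sketch of the matching step: photos are reduced to numeric feature vectors (how that happens is outside the scope of this sketch), and every pair of licenses whose vectors are too similar gets flagged for a hearing. The threshold, the vectors, and the license names are all invented; nothing here describes the RMV's actual software.

    # Hypothetical sketch of similarity-based flagging, not the RMV's system.
    from itertools import combinations
    from math import sqrt

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sqrt(sum(x * x for x in a))
        norm_b = sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Made-up face "embeddings" standing in for features extracted from license photos.
    licenses = {
        "license_001": [0.91, 0.10, 0.33, 0.72],
        "license_002": [0.12, 0.88, 0.41, 0.05],
        "license_003": [0.90, 0.12, 0.30, 0.70],  # happens to resemble license_001
    }

    THRESHOLD = 0.99  # everyone above this gets a revocation letter, guilty or not

    flagged = [
        (a, b)
        for a, b in combinations(licenses, 2)
        if cosine_similarity(licenses[a], licenses[b]) >= THRESHOLD
    ]

    print(flagged)  # pairs of licenses the system treats as possible identity fraud

Even in this toy version, the problem the story describes is visible: the system flags pairs wholesale, and the burden of proving the flag wrong falls on whoever's numbers happen to land close together.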

John was a victim of what I call the "police lineup" — dragnets that allow the police to treat everyone as a suspect. This overturns our traditional view that our legal system treats us as "innocent until proven guilty."

The most obvious example of this is airport body scanners. The scanners conduct the most intrusive of searches — allowing the viewer to peer beneath a person's clothes — without any suspicion that the person being scanned is a criminal. In fact, the burden is on the individual to "prove" his or her innocence, by passing through the scanner without displaying any suspicious items.

John Gass luckily was given a chance to plead his case. But it was an absurd case. He was presented with a photo of himself from 13 years ago.

"It doesn't look like you," the officer said.

"Of course it doesn't," John said. "It's 13 years later. I was a hundred pounds lighter."

John presented his passport and his birth certificate, and his license was reinstated. But the officers wouldn't give him any paperwork to prove that it was reinstated. He wanted a piece of paper to show his boss that he was okay to drive again.

John filed a lawsuit against the RMV, claiming that he had been denied his constitutionally protected right to due process. The RMV argued that he had been given a window of opportunity to dispute the revocation because the letter had been mailed on March 24 and the license wasn't revoked until April 1. John didn't pick up his mail until April 5. The Suffolk County Superior Court granted the RMV's motion to dismiss. Gass appealed, but the appellate court also ruled against him.

John felt betrayed by the whole process. He now is very careful around state police because he worries that he won't be treated fairly. "There are no checks and balances," he said. "It is only natural humans are going to make mistakes. But there is absolutely no oversight."

THESE STORIES ILLUSTRATE a simple truth: Information is power. Anyone who holds a vast amount of information about us has power over us.

At first, the information age promised to empower individuals with access to previously hidden information. We could comparison shop across the world for the best price, for the best bit of knowledge, for people who shared our views.

But now the balance of power is shifting, and large institutions — both governments and corporations — are gaining the upper hand in the information wars, by tracking vast quantities of information about mundane aspects of our lives.

Now we are learning that people who hold our data can subject us to embarrassment, or drain our pocketbooks, or accuse us of criminal behavior. This knowledge could, in turn, create a culture of fear.

Consider Sharon and Bilal. Once they learned they were being monitored on PatientsLikeMe, Sharon and Bilal retreated from the Internet. Bilal deleted his posts from the forum. He took down the drug dosage history that he had uploaded onto the site. Sharon stopped using the Internet altogether and doesn't allow her son to use it without supervision.

They started talking by phone but missed the online connections they had forged on PatientsLikeMe. "I haven't found a replacement," Sharon said. Bilal agreed: "The people on PLM really know how it feels."

But neither of them could tolerate the fear of surveillance. Sharon said she just couldn't live with the uncertainty of "not knowing if every keystroke I'm making is going to some other company." Bilal added, "I just feel that the trust was broken."

Sharon and Bilal's experience is a reminder that for all its technological pyrotechnics, the glory of the digital age has always been profoundly human. Technology allows us to find people who share our inner thoughts, to realize we're not alone. But technology also allows others to spy on us, causing us to pull back from digital intimacy.

When people ask me why I care about privacy, I always return to the simple thought that I want there to be safe, private spaces in the world for Sharon and Bilal, for myself, for my children, for everybody. I want there to be room in the digital world for letters sealed with hot wax. Must we always be writing postcards that can — and will — be read by anyone along the way?