Supreme Court rejects plea to ban taping of police in Illinois
By Jason Meisner
The U.S. Supreme Court on Monday declined to hear an appeal of a controversial Illinois law prohibiting people from recording police officers on the job.
By passing on the issue, the justices left in place a federal appeals court ruling that found that the state's anti-eavesdropping law violates free-speech rights when used against people who audiotape police officers.
A temporary injunction issued after that June ruling effectively bars Cook County State's Attorney Anita Alvarez from prosecuting anyone under the current statute. On Monday, the American Civil Liberties Union, which brought the lawsuit against Alvarez, asked a federal judge hearing the case to make the injunction permanent, said Harvey Grossman, legal director of the ACLU of Illinois.
Grossman said he expected that a permanent injunction would set a precedent across Illinois that effectively cripples enforcement of the law.
Alvarez's office will be given a deadline to respond to the ACLU request, but on Monday, Sally Daly, a spokeswoman for Alvarez, said a high court ruling in the case could have provided "prosecutors across Illinois with legal clarification and guidance with respect to the constitutionality and enforcement" of the statute.
Illinois' eavesdropping law is one of the harshest in the country, making audio recording of a law enforcement officer — even while on duty and in public — a felony punishable by up to 15 years in prison.
Public debate over the law had been simmering since last year. In August 2011, a Cook County jury acquitted a woman who had been charged with recording Chicago police internal affairs investigators she believed were trying to dissuade her from filing a sexual harassment complaint against a patrol officer.
Judges in Cook and Crawford counties later declared the law unconstitutional, and the McLean County state's attorney cited flaws in the law when he dropped charges in February against a man accused of recording an officer during a traffic stop.
Alvarez argued that allowing the recording of police would discourage civilians from speaking candidly to officers and could cause problems securing crime scenes or conducting sensitive investigations.
But a federal appeals panel ruled that the law "restricts far more speech than necessary to protect legitimate privacy interests."
Chicago police Superintendent Garry McCarthy has said he would favor a change allowing citizens to tape the police and vice versa.
Meanwhile, several efforts to amend the statute in Springfield have stalled in committee amid heavy lobbying from law enforcement groups in favor of the current law.
November 28, 2012
Drivers adapt to red-light cameras
Written by Jim Walsh
TRENTON — A pilot program for red-light cameras in New Jersey appears to be changing drivers’ behavior, state officials said Monday, noting an overall decline in traffic citations and right-angle crashes.
The Department of Transportation also said, however, that rear-end crashes have risen by 20 percent and total crashes are up by 0.9 percent at intersections where cameras have operated for at least a year.
The agency recommended the program stay in place, calling for “continued data collection and monitoring” of camera-monitored intersections.
The department’s report drew immediate criticism from Assemblyman Declan O’Scanlon, R-Monmouth, who wants the cameras removed. He called the program “a dismal failure,” saying DOT statistics show the net costs of accidents had climbed by more than $1 million at intersections with cameras.
“Any rational person reading this document would conclude that the program has failed and it’s time to pack it in,” O’Scanlon said.
In South Jersey, intersections in Cherry Hill, Deptford, Glassboro, Gloucester Township, Monroe and Stratford have red-light cameras.
The five-year program began in December 2009.
The DOT’s report noted that two intersections in Newark have been part of the camera program for two years, and that 24 others in six communities have been recording violations for at least one year. At the Newark sites, the report said, crashes in the latest year were down by 57 percent from the “pre-camera year,” with decreases of 86 percent for right-angle collisions and 42 percent for same-direction crashes.
It said the number of citations issued at the Newark intersections fell by 85 percent over the two-year period. “While there is no expectation that citations will drop to zero, there is an expectation that driver behavior will change ... and these locations appear to be fulfilling these expectations.”
But the DOT also said the statistics from Newark “are still too limited to draw any definitive conclusions about the pilot program at this time.”
And O’Scanlon argued the program has changed drivers’ behavior in a negative way. “What we are doing is making people paranoid — causing them to slam on the brakes at the slightest hint a light might change, or deciding to fail to make even an absolutely safe right turn on red,” he said.
November 25, 2012
How Free Speech Died on Campus
A young activist describes how universities became the most authoritarian institutions in America.
By SOHRAB AHMARI
New York
At Yale University, you can be prevented from putting an F. Scott Fitzgerald quote on your T-shirt. At Tufts, you can be censured for quoting certain passages from the Quran. Welcome to the most authoritarian institution in America: the modern university—"a bizarre, parallel dimension," as Greg Lukianoff, president of the Foundation for Individual Rights in Education, calls it.
Mr. Lukianoff, a 38-year-old Stanford Law grad, has spent the past decade fighting free-speech battles on college campuses. The latest was last week at Fordham University, where President Joseph McShane scolded College Republicans for the sin of inviting Ann Coulter to speak.
"To say that I am disappointed with the judgment and maturity of the College Republicans . . . would be a tremendous understatement," Mr. McShane said in a Nov. 9 statement condemning the club's invitation to the caustic conservative pundit. He vowed to "hold out great contempt for anyone who would intentionally inflict pain on another human being because of their race, gender, sexual orientation, or creed."
To be clear, Mr. McShane didn't block Ms. Coulter's speech, but he said that her presence would serve as a "test" for Fordham. A day later, the students disinvited Ms. Coulter. Mr. McShane then praised them for having taken "responsibility for their decisions" and expressing "their regrets sincerely and eloquently."
Mr. Lukianoff says that the Fordham-Coulter affair took campus censorship to a new level: "This was the longest, strongest condemnation of a speaker that I've ever seen in which a university president also tried to claim that he was defending freedom of speech."
I caught up with Mr. Lukianoff at New York University in downtown Manhattan, where he was once targeted by the same speech restrictions that he has built a career exposing. Six years ago, a student group at the university invited him to participate in a panel discussion about the Danish cartoons depicting the Prophet Muhammad that had sparked violent rioting by Muslims across the world.
When Muslim students protested the event, NYU threatened to close the panel to the public if the offending cartoons were displayed. The discussion went on—without the cartoons. Instead, the student hosts displayed a blank easel, registering their own protest.
"The people who believe that colleges and universities are places where we want less freedom of speech have won," Mr. Lukianoff says. "If anything, there should be even greater freedom of speech on college campuses. But now things have been turned around to give campus communities the expectation that if someone's feelings are hurt by something that is said, the university will protect that person. As soon as you allow something as vague as Big Brother protecting your feelings, anything and everything can be punished."
You might say Greg Lukianoff was born to fight college censorship. With his unruly red hair and a voice given to booming, he certainly looks and sounds the part. His ethnically Irish, British-born mother moved to America during the 1960s British-nanny fad, while his Russian father came from Yugoslavia to study at the University of Wisconsin. Russian history, Mr. Lukianoff says, "taught me about the worst things that can happen with good intentions."
Growing up in an immigrant neighborhood in Danbury, Conn., sharpened his views. When "you had so many people from so many different backgrounds, free speech made intuitive sense," Mr. Lukianoff recalls. "In every genuinely diverse community I've ever lived in, freedom of speech had to be the rule. . . . I find it deeply ironic that on college campuses diversity is used as an argument against unbridled freedom of speech."
After graduating from Stanford, where he specialized in First Amendment law, he joined the Foundation for Individual Rights in Education, an organization co-founded in 1999 by civil-rights lawyer Harvey Silverglate and Alan Charles Kors, a history professor at the University of Pennsylvania, to counter the growing but often hidden threats to free speech in academia. FIRE's tactics include waging publicity campaigns intended to embarrass college administrators into dropping speech-related disciplinary charges against individual students, or reversing speech-restricting policies. When that fails, FIRE often takes its cases to court, where it tends to prevail.
In his new book, "Unlearning Liberty," Mr. Lukianoff notes that baby-boom Americans who remember the student protests of the 1960s tend to assume that U.S. colleges are still some of the freest places on earth. But that idealized university no longer exists. It was wiped out in the 1990s by administrators, diversity hustlers and liability-management professionals, who were often abetted by professors committed to political agendas.
"What's disappointing and rightfully scorned," Mr. Lukianoff says, "is that in some cases the very professors who were benefiting from the free-speech movement turned around to advocate speech codes and speech zones in the 1980s and '90s."
Today, university bureaucrats suppress debate with anti-harassment policies that function as de facto speech codes. FIRE maintains a database of such policies on its website, and Mr. Lukianoff's book offers an eye-opening sampling. What they share is a view of "harassment" so broad and so removed from its legal definition that, Mr. Lukianoff says, "literally every student on campus is already guilty."
At Western Michigan University, it is considered harassment to hold a "condescending sex-based attitude." That just about sums up the line "I think of all Harvard men as sissies" (from F. Scott Fitzgerald's 1920 novel "This Side of Paradise"), a quote that was banned at Yale when students put it on a T-shirt. Tufts University in Boston proscribes the holding of "sexist attitudes," and a student newspaper there was found guilty of harassment in 2007 for printing violent passages from the Quran and facts about the status of women in Saudi Arabia during the school's "Islamic Awareness Week."
At California State University in Chico, it was prohibited until recently to engage in "continual use of generic masculine terms such as to refer to people of both sexes or references to both men and women as necessarily heterosexual." Luckily, there is no need to try to figure out what the school was talking about—the prohibition was removed earlier this year after FIRE named it as one of its two "Speech Codes of the Year" in 2011.
At Northeastern University, where I went to law school, it is a violation of the Internet-usage policy to transmit any message "which in the sole judgment" of administrators is "annoying."
Conservatives and libertarians are especially vulnerable to such charges of harassment. Even though Mr. Lukianoff's efforts might aid those censorship victims, he hardly counts himself as one of them: He says that he is a lifelong Democrat and a "passionate believer" in gay marriage and abortion rights. And free speech. "If you're going to get in trouble for an opinion on campus, it's more likely for a socially conservative opinion."
Consider the two students at Colorado College who were punished in 2008 for satirizing a gender-studies newsletter. The newsletter had included boisterous references to "male castration," "feminist porn" and other unprintable matters. The satire, published by the "Coalition of Some Dudes," tamely discussed "chainsaw etiquette" ("your chainsaw is not an indoor toy") and offered quotations from Teddy Roosevelt and menshealth.com. The college found the student satirists guilty of "the juxtaposition of weaponry and sexuality."
"Even when we win our cases," says Mr. Lukianoff, "the universities almost never apologize to the students they hurt or the faculty they drag through the mud." Brandeis University has yet to withdraw a 2007 finding of racial harassment against Prof. Donald Hindley for explaining the origins of "wetback" in a Latin-American Studies course. Indiana University-Purdue University Indianapolis apologized to a janitor found guilty of harassment—for reading a book celebrating the defeat of the Ku Klux Klan in the presence of two black colleagues—but only after protests by FIRE and an op-ed in these pages by Dorothy Rabinowitz.
What motivates college administrators to act so viciously? "It's both self-interest and ideological commitment," Mr. Lukianoff says. On the ideological front, "it's almost like you flip a switch, and these administrators, who talk so much about treating every student with dignity and compassion, suddenly come to see one student as a caricature of societal evil."
Administrative self-interest is also at work. "There's been this huge expansion in the bureaucratic class at universities," Mr. Lukianoff explains. "They passed the number of people involved in instruction sometime around 2006. So you get this ever-renewing crop of administrators, and their jobs aren't instruction but to police student behavior. In the worst cases, they see it as their duty to intervene on students' deepest beliefs."
Consider the University of Delaware, which in fall 2007 instituted an ideological orientation for freshmen. The "treatment," as the administrators called it, included personal interviews that probed students' private lives with such questions as: "When did you discover your sexual identity?" Students were taught in group sessions that the term racist "applies to all white people" while "people of color cannot be racists." Once FIRE spotlighted it, the university dismantled the program.
Yet in March 2012, Kathleen Kerr, the architect of the Delaware program, was elected vice president of the American College Personnel Association, the professional group of university administrators.
A 2010 survey by the American Association of Colleges and Universities found that of 24,000 college students, only 35.6% strongly agreed that "it is safe to hold unpopular views on campus." When the question was asked of 9,000 campus professionals—who are more familiar with the enforcement end of the censorship rules—only 18.8% strongly agreed.
Mr. Lukianoff thinks all of this should alarm students, parents and alumni enough to demand change: "If just a handful more students came in knowing what administrators are doing at orientation programs, with harassment codes, or free-speech zones—if students knew this was wrong—we could really change things."
The trouble is that students are usually intimidated into submission. "The startling majority of students don't bother. They're too concerned about their careers, too concerned about their grades, to bother fighting back," he says. Parents and alumni dismiss free-speech restrictions as something that only happens to conservatives, or that will never affect their own children.
"I make the point that this is happening, and even if it's happening to people you don't like, it's a fundamental violation of what the university means," says Mr. Lukianoff. "Free speech is about protecting minority rights. Free speech is about admitting you don't know everything. Free speech is about protecting oddballs. It means protecting dissenters."
It even means letting Ann Coulter speak.
The Hackback Debate
The vulnerability of computer networks to hacking grows more troubling every year. No network is safe, and hacking has evolved from an obscure hobby to a major national security concern. Cybercrime has cost consumers and banks billions of dollars. Yet few cyberspies or cybercriminals have been caught and punished. Law enforcement is overwhelmed both by the number of attacks and by the technical unfamiliarity of the crimes.
Can the victims of hacking take more action to protect themselves? Can they hack back and mete out their own justice? The Computer Fraud and Abuse Act (CFAA) has traditionally been seen as making most forms of counterhacking unlawful. But some lawyers have recently questioned this view. Some of the most interesting exchanges on the legality of hacking back have occurred as dueling posts on the Volokh Conspiracy. In the interest of making the exchanges conveniently available, they are collected here in a single document.
The debaters are Stewart Baker, Orin Kerr, and Eugene Volokh.
RATs and Poison: The Policy Side of Counterhacking
Stewart Baker
Good news for network security: the tools attackers use to control compromised computers are full of security holes. Undergrad students interning for Matasano Security have reverse-engineered the Remote Access Tools (RATs) that attackers use to gain control of compromised machines.
RATs, which can conduct keylogging, screen and camera capture, file management, code execution, and password-sniffing, essentially give the attacker a hook in the infected machine as well as the targeted organization.
This is great news for cybersecurity. It opens new opportunities for attribution of computer attacks, along lines I’ve suggested before: “The same human flaws that expose our networks to attack will compromise our attackers’ anonymity.”
In this case, the possibility of a true counterhack is opened up. The flaws identified by the Matasano interns, Hertz and Denbow, could allow defenders to decrypt stolen documents and even to break into the attacker’s command and control link – while the attacker is still online.
It’s only a matter of time before counterhacks become possible. The real question is whether they’ll ever become legal. Both the reporter and the security researcher agree that “legally, organizations obviously can’t hack back at the attacker.”
I believe they are wrong on the law, but first let’s explore the policy question.
Should victims be able to poison attackers’ RATs and then use the compromised RAT against their attacker?
It’s obvious to me that somebody should be able to do this. And, indeed, it seems nearly certain that somebody in the US government — using some combination of law enforcement, intelligence, counterintelligence, and covert action authorities — can do this. (I note in passing, though, that there may be no one below the President who has all these authorities, so that as a practical matter RAT poisoning may not happen without years of delay and a convulsive turf fight. That’s embarrassing, but beside the point, at least today.)
There are drawbacks to having the government do the job. It is likely that counterhacking will work best if the attacker is actually online, when the defenders can stake out the victim’s system, give the attacker bad files, monitor the command and control machine, and copy, corrupt, or modify exfiltrated material. Defenders may have to swing into action with little warning.
Who will do this? Put aside the turf fight; does NSA, the FBI, or the CIA have enough technically savvy counterhackers to stake out the networks of the Fortune 500, waiting for the bad guys to show up?
Even if they do, who wants them there? Privacy campaigners will not approve of the idea of giving the government that kind of access to private networks, even networks that are under attack. For that matter, businesses with sensitive data won’t much like the stark choice of either letting foreign governments steal it all or giving the US government wide access to their networks.
From a policy perspective, surely everyone would be happier if businesses could hire their own network defenders to do battle with attackers. This would greatly reinforce the thin ranks of government investigators. It would make wide-ranging government access to private networks less necessary. And busting the government monopoly on active defense would probably increase the diversity, imagination, and effectiveness of the counterhacking community.
But there is always the pesky question of vigilantism…
First, as I’ve mentioned previously, allowing private counterhacking does not mean reverting to a Hobbesian war of all against all. Government sets rules and disciplines violators, just as it does with other privatized forms of law enforcement, from the securities industry’s FINRA to private investigators.
Second, the “vigilantism” claim depends heavily on sleight of hand. Those against the idea call it “hacking back,” with the heavy implication that the defenders will blindly fire malware at whoever touches their network, laying indiscriminate waste to large swaths of the Internet. For the record, I’m against that kind of hacking back too. But RAT poison makes possible a kind of counterhacking that is far more tailored and prudent. Indeed, with such a tool, trashing the attacker’s system is dumb; it is far more valuable as an intelligence tool than for any other purpose.
Of course, the defenders will be collecting information, even if they aren’t trashing machines. And gathering information from someone else’s computer certainly raises moral and legal questions. So let’s look at the computers that RAT poisoning might allow investigators to access.
First, and most exciting, this research could allow us to short-circuit some of the cutouts that attackers use to protect themselves. Admittedly, this is beyond my technical capabilities, but it seems highly unlikely to me that an attacker can use a RAT effectively without a real-time connection from his machine to the compromised network. Sure, the attacker can run his commands through onion routers and cutout controllers. But at the end of all the hops, the attacker is still typing here and causing changes there. If the software he’s using can be compromised, then it may also be possible to inject arbitrary code into his machine and thus compromise both ends of the attacker’s communications. That’s the Holy Grail of attribution, of course.
Is there a policy problem with allowing private investigators to compromise the attacker’s machine for the purpose of gathering attribution information? Give me a break. Surely not even today’s ACLU could muster more than a flicker of concern for a thief’s right to keep his victim from recovering stolen data.
The harder question comes when the attacker is using a cutout — an intermediate command and control computer that actually belongs to someone else. In theory, gathering information on the intermediate computer intrudes on the privacy of the true owner. But, assuming that he’s not a party to the crime, he has already lost control of his computer and his privacy, since the attacker is already using it freely. What additional harm does the owner suffer if the victim gathers information on his already-compromised machine about the person who attacked them both? Indeed, an intermediate command and control machine is likely to hold evidence about hundreds of other compromised networks. Most of those victims don’t know they’ve been compromised, but their records are easy to recover from the intermediate machine once it has been accessed. Surely the social value of identifying and alerting all those victims outweighs the already attenuated privacy interest of the true owner.
In short, there’s a strong policy case for letting victims of cybercrime use tools like this to counterhack their attackers. If the law forbids it, then to paraphrase Mr. Bumble, “the law is a ass, a idiot,” and Congress should change it.
But I don’t think the law really does prohibit counterhacking of this kind, for reasons I’ll offer in a later post.
RATs and Poison Part II: The Legal Case for Counterhacking
Stewart Baker
In an earlier post, I made the policy case for counterhacking, and specifically for exploiting security weaknesses in the Remote Access Tools, or RATs, that hackers use to exploit computer networks.
There are good policy reasons to poison an attacker’s RAT, as I explained in that post.
More problematic is the legal case for counterhacking, due to long-standing opposition from the Justice Department’s Computer Crime and Intellectual Property Section, or CCIPS. Here’s what CCIPS says in its Justice Department manual on computer crime:
Although it may be tempting to do so (especially if the attack is ongoing), the company should not take any offensive measures on its own, such as “hacking back” into the attacker’s computer—even if such measures could in theory be characterized as “defensive.” Doing so may be illegal, regardless of the motive. Further, as most attacks are launched from compromised systems of unwitting third parties, “hacking back” can damage the system of another innocent party.
This is a mix of law and policy. I’ve already explained why I find the Justice Department’s policy objections unpersuasive.
That leaves the law. Does the CFAA prohibit counterhacking? The use of the words “may be illegal” and “should not” is a clue that the law is at best ambiguous.
To oversimplify a bit, violations of the CFAA depend on “authorization.” If you have authorization, it’s nearly impossible to violate the CFAA, no matter what you do to a computer. If you don’t, it’s nearly impossible to avoid violating the CFAA.
But the CFAA doesn’t define “authorization.” It’s clear enough that things I do on my own computer or network are authorized. That means that the first step in poisoning a RAT is lawful. You are “authorized” under the CFAA to modify any code on your network, even if it was installed by a hacker. (For purposes of this discussion we’ll put aside copyright issues; it’s unlikely in any event that a hacker could enforce intellectual property rights against his victim.)
The more difficult question is whether you’re “authorized” to hack into the attacker’s machine to extract information about him and to trace your files. As far as I know, that question has never been litigated, and Congress’s silence on the meaning of “authorization” allows both sides to make very different arguments. The attacker might say, “I have title to this computer; no one else has a right to look at its contents. Therefore you accessed it without authorization.” And the victim could say, “Are you kidding? It may be your computer but it’s my data, and I have a right to follow and retrieve stolen data wherever the thief takes it. Your computer is both a criminal tool and evidence of your crime, so any authorization conveyed by your title must take a back seat to mine.”
In a civil suit, the lack of definition would make both of those arguments plausible. Maybe “authorization” under the CFAA is determined solely by title; and maybe it incorporates all the constraints that law and policy put on property rights in other contexts. Personally, I dislike statutory interpretations that fly in the face of good policy, so I think the counterhacker wins that argument.
No matter; computer hackers won’t be bringing many lawsuits against their victims. The real question is whether victims can be criminally prosecuted for breaking into their attacker’s machine.
And here the answer is surely not.
The ambiguity of the statute makes a successful prosecution nearly impossible; deeply ambiguous criminal laws like this are construed in favor of the defendant. See, e.g., McBoyle v. United States, 283 U.S. 25, 27 (1931) (“[I]t is reasonable that a fair warning should be given to the world, in language that the common world will understand, of what the law intends to do if a certain line is passed. To make the warning fair, so far as possible, the line should be clear.”) (Holmes, J.).
The same analysis applies even to the hardest case, where victims use a compromised RAT to access command and control machines that turn out to be owned by an innocent third party. An innocent third party is a more appealing witness, but his machine was already compromised by hackers before the counterhacking victim came along, and it was being used as an instrumentality of crime, sufficient in some states to justify its forfeiture. It remains true that the counterhacker is pursuing his own property.
Finally, when he begins his counterhack, the victim does not know whether the intermediate machine is controlled by an attacker or by an innocent third party. Why should the law presume that it is owned by an innocent party — or force the victim to make that presumption, on pain of criminal liability? (There’s room for empirical research here; while a few years ago hackers seemed to favor compromising third-party machines for command and control, the Luckycat study suggests that some attackers now prefer to use machines and domains that they control. As the latter approach grows more common, a presumption that intermediate machines are owned by innocent third parties will grow even more artificial.)
All told, it seems reasonable to let victims counterhack a command and control machine that is ex-filtrating information from the victim’s network, at least enough to determine who is in control of the machine, to identify other victims being harmed by the machine, and to follow the attacker back to his origin (or at least his next hop) if the intermediate machine is simply another victim. Requiring the victim not to counterhack if there’s uncertainty about the innocence of the machine’s owner simply gives an immunity to attackers.
The balance of equities thus seems to me to favor a recognition of the victim’s authorization to conduct at least limited surveillance of a machine that is, after all, directly involved in a violation of the victim’s rights. If “authorization” under the CFAA really boils down to a balancing of moral and social rights, and nothing in the law refutes that view, then the counterhacker has at least enough moral and social right on his side to make a criminal prosecution problematic — unless he damages the third party’s machine, in which case all bets are off.
The Legal Case Against Hack-Back: A Response to Stewart Baker
Orin Kerr
Stewart says his analysis is “surely” right. I think it’s obviously wrong. Here’s why.
The CFAA is a computer trespass statute. It prohibits accessing another person’s computer “without authorization,” just as trespass laws prohibit walking onto someone else’s land without their consent. Like a traditional trespass statute, it is the owner/operator of the property that controls authorization. There is a lot of disagreement about how computer owner/operators can create rights on their machines that the law will enforce, but everyone agrees that hacking into someone else’s machine is the quintessential example of the kind of conduct prohibited by the statute.
Contrary to Stewart’s claim, there is no genuine ambiguity over whether the statute protects the rights of computer owners or data owners. The statutory language expressly prohibits “intentionally access[ing] a computer without authorization” (emphasis added). It protects access to computers, not access to stolen data. The rule here is the same rule that is used in real property law: The owner/operator of the property controls who has access to it. The fact that your neighbor borrowed your baseball glove and you want it back doesn’t give you a right to break into everything your neighbor owns on the theory that you can authorize yourself to go anywhere to get your glove back. The same goes for computers.
If Stewart’s I-like-it-and-therefore-it-is-the-law argument were valid, I think the results it would produce would be terrible. For every hypothetical you can devise in which such hacking back might seem like a good thing, you can come up with hundreds of examples in which it wouldn’t be. For example, wouldn’t Stewart’s theory allow copyright holders to hack into the computers of anyone suspected of having any infringing materials on their computers? That would be bad. More broadly, Stewart’s theory appears to have few limits. His test seems to boil down to good faith: As long as someone believes that they were a victim of a computer intrusion and has a good-faith belief that they can help figure out who did it or minimize the loss from the intrusion by hacking back, the hacking back is authorized. Given the well-known difficulty of locating the source of intrusions, that’s not a power that we want to give to every person in the US who happens to own or control a computer.
Another problem with Stewart’s theory is that it would have the bizarre effect of allowing hacking victims to declare that the people who hacked into their machines can’t access their own computers. That is, if A hacks into B’s machine, B just has to announce that A now can’t use A’s own machine. If A uses his own computer, that is “without authorization” from B and therefore a crime. It’s a bizarre result, and even more bizarre given that Stewart uses the rule of lenity to justify it.
Baker Replies to Kerr
Stewart Baker
Orin Kerr and I agree that “authorization” is the central, and undefined, key to criminal liability under the CFAA. In Orin’s view, “authorization” can be determined by asking two questions: First, does the CFAA protect computers or data? And, second, who controls a computer, the data owner or the computer owner?
I believe the right answer to each question is “both.” The CFAA can and should protect both computers and data stored on computers. Similarly, more than one person can have rights to data on a computer. Orin believes that the CFAA forces a choice. If it protects computers it can’t protect data. You either have full authorization or you have none.
If anything the statute refutes that argument. The only textual clue to what the statute means by “authorized” is found in a section that imposes liability on users who exceed their authorized access to a computer; that term is defined as follows: “[T]he term ‘exceeds authorized access’ means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter.” Put another way, you exceed authorized access if you obtain or alter information you’re not entitled to obtain or alter.
This definition undercuts both of Orin’s assumptions about authority. And it treats “authorized” and “entitled” as more or less synonymous, which isn’t exactly consistent with the idea that authorization is all-or-nothing. If I’m attending a demonstration on the Mall, and the Park Police tell me to move on, I’m likely to say, “Sorry, but I’m entitled to be here.” As I am. But that doesn’t mean that I can then tell them to move on. They’re entitled to be there too. And what if I try to enforce my edict by taking a swing at one of them? He might say, “You’re entitled to be here, but you’re not entitled to do that here.” Quite right, too; it’s jail for me. It turns out my entitlement was real, but it was neither exclusive nor unlimited.
So too with computers under the CFAA. I may be entitled to retrieve my data stolen from a machine without being entitled to take the machine to a pawn shop and sell it, or to tell the innocent owner what he can and cannot do with it.
And what about policy? Which reading of the statute produces better results?
To understand the policy consequences of the choice, let’s begin with a reminder of our strategic situation. Right now, every computer and network in the country is vulnerable to intrusion by authoritarian foreign governments if not criminals.
The intruders have one clear vulnerability: they collect the stolen data on command-and-control machines, which may in some cases belong to other innocent victims. Victims could gain access to these machines, could render the stolen information worthless, could gather clues about the attackers, and could even identify hundreds of other victims who probably don’t yet know they’ve been compromised. That would be a very good thing.
In Orin’s world, though, it’s illegal. Under his reading of the law, the hundreds of victims go unnotified, the evidence goes ungathered, the stolen data goes, well, to China, until law enforcement gets around to the cyber equivalent of stolen-bicycle paperwork.
And what of all those bad policy outcomes that Orin conjures – the crazed vigilantes and the RIAA rummaging in everyone’s computers? The answer is that we, or at least the courts, don’t have to recognize their authority to do that. The courts don’t have to treat “good faith” as creating a counterhacking entitlement; they could as easily insist on a higher standard, such as probable cause. They could recognize the counterhacker’s authority to gather evidence but not to harm innocent third parties, just as they distinguish today between demonstrators who are entitled to throw insults but not punches on park property. They could reject the notion that the copying of 99-cent music files justifies the same response as a campaign to compromise every network in the country. They could distinguish, in short, between baby and bathwater.
It’s true that my definition of authorization is more complicated than Orin’s, that it requires more line-drawing. But so does life. Orin’s alternative is as simple — and as unjust — as applying the murder laws equally to serial killers and to homeowners who shoot home invaders. Nothing in law or policy requires that we adopt such a reading.
More on Hacking Back: Kerr Replies to Baker
Orin Kerr
Stewart makes a textual argument and a policy argument. I find both extremely weak.
Stewart’s first argument is that it’s possible to read the statute as giving authorization rights to people who have rights in data rather than rights in computers because the statute doesn’t textually distinguish between computers and underlying data.
If you read the whole statute, though, that’s plainly wrong. The statute repeatedly and consistently distinguishes between computers and data. The elements of the statute dealing with rights to computers are covered by the basic unauthorized access concept common to most of the different crimes listed in 18 U.S.C. 1030(a). In contrast, the elements dealing with data are covered by the additional elements Congress required for the additional offenses listed in 1030(a). It’s one of the most basic divisions in the statute.
As I read the statute Congress was pretty careful to distinguish rights to computers — the trespass into the machine, covered by the unauthorized access prohibition — from rights to data — the extra elements of 1030(a) for the different crimes that Congress created. Given that, I don’t think it makes textual sense to read the unauthorized access prohibition as governed by rights in data. The statute is just not as mystifying and unclear as Stewart claims. (Also, what does it mean to “own” data? If someone copies this blog post and saves it on their computer without my consent, can I hack into the computer because I “own” that data? Concepts such as “owning” data and when data becomes “stolen” are notoriously difficult to work with — indeed, 18 U.S.C. 1030 was passed so that such questions didn’t need to be asked. It seems puzzling to reintroduce them sub silentio here.)
Second is Stewart’s policy argument: Justice demands this reading of the statute because the Chinese are invading our computers and we need to stop them. In his post, Stewart suggests that a proper jurisprudential sophistication frees judges to do whatever they want with the statute to deal with the Chinese. With their newfound sense of sophistication, judges should go forth and devise a set of principles for interpreting “authorization” by which it is not a crime for big US companies to go after their stolen data when the Chinese take that data, while it is still illegal for people to hack back when they’re not very good at it, the RIAA wants to do it, or there isn’t really a good reason for it. Stewart doesn’t actually offer any legal basis for that distinction. He doesn’t have an argument for where the line should be or even what principles should be used to interpret authorization. He just wants judges to go figure this stuff out somehow.
If someone needs to figure this stuff out, though, it’s a job for Congress instead of the courts. Stewart isn’t just reading the statute. He’s asking judges to write a new statute that he thinks would be better than the one we now have. Maybe Congress should consider the kind of exception Stewart wants. It’s hard to tell, as Stewart hasn’t told us what the new statute should look like. (Instead, he has only told us the result the statute should reach on one case.) But as long as we’re only talking about what the statute presently means — that is, what Congress passed already, and what courts have to interpret — I don’t see a plausible way to read “authorization” to get to the result Stewart wants.
If I were Stewart, I would try to rely on the necessity defense instead of creative readings of authorization to get where he wants to go. Stewart’s argument is best made not as the claim that this isn’t unauthorized, but that it is unauthorized conduct justified by the specific circumstances he has in mind. That’s an argument for the affirmative defense of necessity. Necessity is a nice and vague exception, which is helpful for Stewart’s purposes. It’s a controversial exception as a matter of federal law, but at least there’s some support for it. And it seems to be really what Stewart has in mind. In my view, it would be better to try to make that argument directly rather than by appeals to justice or interpretations of “authorization.”
Baker’s Last Response
Stewart Baker
Now the debate with Orin is actually getting somewhere. Sort of. Here’s a scorecard:
1. Does authorization depend exclusively on ownership?
Orin’s latest post does a good job of showing that the CFAA often draws a coherent distinction between rights in data and rights in a computer, and that rights in the computer are the statute’s principal focus. I don’t disagree.
Where we differ is how much that matters. Orin seems convinced that this distinction makes rights in data irrelevant to the question of what constitutes authorized access to a computer. He doesn’t really offer a reason for treating it as irrelevant. He just assumes it must be, probably because he also assumes that authorization is an all or nothing concept, so that if the owner has authorization no one else has any, and vice versa.
But Orin’s assumption has no basis in the statute that I can see. As my last response says, that’s like assuming that because a trespass statute protects the owners of land, everyone else must be punished as a trespasser, no matter what other rights they have to enter the property. That would make felons of rescuers, people in hot pursuit of thieves, easement holders, and government officials. You could come to that conclusion if that’s what the law unequivocally said, but in this case the law only makes felons of people who are not authorized (or not entitled) to access the computer.
So why would we ignore other claims of entitlement – especially when ignoring those claims makes a felon of someone performing an act with undeniable social value?
Orin’s reluctance to defend his assumption is striking. Maybe he’s got a good response; but he hasn’t offered it yet.
2. Should policy influence the interpretation of “authorization”?
Orin continues to look down his nose at the introduction of policy into the interpretation of this central but undefined term. He thinks I’m requesting a new statute. In fact I’m asking the courts to recognize a perfectly plausible reading of “authorization,” in a criminal context where ambiguity would ordinarily be resolved in favor of the defendant.
I agree with Orin that this interpretation requires the courts to decide which entitlements should be recognized and which should not. He thinks that’s a role for Congress, not the courts, an argument that might be more persuasive in discussing a civil statute, or a criminal statute that was not deterring companies from responding aggressively to a dangerous intelligence attack on our economy and our society.
That said, I welcome Orin’s acknowledgement that maybe Congress should permit counterhacking in some circumstances. Though I fear the CCIPS Old Guard lives on in his heart, and that somehow no actual amendment will ever quite pass muster there.
3. Is necessity a defense for counterhacking?
Orin suggests that a federal criminal necessity defense might be more apt in this case. Maybe so, but he acknowledges that it is at best controversial. At worst, in fact, it doesn’t exist. So, while I won’t spurn even a modest agreement with Orin, the chance to prove an affirmative defense that may not apply isn’t likely to offer much comfort for companies that want to gather information about their attackers.
A Final Response on Hacking Back
Orin Kerr
Thanks to Stewart for the interesting exchange on the (un)lawfulness of hacking back. Here are my concluding thoughts.
First, Stewart repeatedly draws analogies to the law of physical trespass that are faulty because they misunderstand the law of physical trespass. Stewart seems to think that it is legal to break into someone else’s house to retrieve your property stored inside. He also assumes that it is always okay for “rescuers, people in hot pursuit of thieves, easement holders, and government officials” to enter private property. From these assumptions, Stewart guesses that trespass law doesn’t apply to such cases because the conduct is authorized and thus can’t be a trespass. He builds his proposal on that assumption. Just treat electronic trespass like physical trespass, he says: Hack back is authorized just like analogous physical entries are authorized.
But trespass law doesn’t work that way. First, you don’t have a right to break into someone else’s house to retrieve your stuff. That’s a trespass. The issue comes up most often in criminal cases when a party who entered someone else’s home and took property is charged with trespass and burglary. It’s common for the defense to claim that they entered to retrieve their own property: They thus concede liability for a criminal trespass but deny liability for the more serious crime of burglary. Cf. Auman v. People, 109 P.3d 647 (Colo. 2005). Similarly, those who are rescuers or police officers or those in hot pursuit don’t have a general exemption from trespass liability. Instead, they have to invoke an affirmative defense. Rescuers must invoke the necessity defense. See, e.g., City of Wichita v. Tilson, 253 Kan. 285 (1993). Police officers must invoke the affirmative defense of the Fourth Amendment. Either they have to produce a valid warrant or they have to identify an applicable exception to the warrant clause (one of which is hot pursuit). See, e.g., Entick v. Carrington, 95 Eng. Rep. 807 (K.B. 1765); Warden v. Hayden, 387 U.S. 294 (1967). Easement holders can’t trespass, but that’s because easements limit the property owner’s usual right to exclude.
What’s the lesson from physical trespass laws? It’s that trespass liability is actually pretty broad, and the kinds of exceptions that Stewart is using for purposes of analogy are a lot more limited than Stewart thinks. They’re affirmative defenses, not elements of the crime itself. So while I agree that we should treat physical trespass and cybertrespass the same way, that means recognizing that hacking back violates 18 U.S.C. 1030 and that the only way to get out of liability is to fit the case into an affirmative defense.
What about the affirmative defense of necessity? It seems to respond to Stewart’s concerns. If any existing criminal law doctrine fits Stewart’s argument, that’s it. Stewart says it isn’t very helpful, though, because it “isn’t likely to offer much comfort for companies that want to gather information about their attackers.” It’s too doctrinally uncertain and vague for companies to rely on safely. I’ll concede that’s true. But how is it relevant? We’re just debating what the law is. What companies feel about that law is irrelevant to the question.
A final comment
Stewart Baker
I still don’t think we’ve quite engaged. My point in discussing the various trespass exceptions is not to import them into the CFAA. My point is that trespass does not turn entirely on title, because the law recognizes that there are times when someone other than the owner has a right to enter the property. That’s significant not for the precise content of the right but because the CFAA uses language (“authorization,” “entitlement”) that directly invites an examination of the rights of the intruder.
You might say that “authorization” doesn’t exactly invite a claim of moral right by the person accused of a CFAA violation. But the statute does equate authorization with entitlement, which does invite such a claim. And the Budapest Convention, which is a more or less direct translation of the CFAA into treaty-speak, goes even further, criminalizing access “without right.” Surely this invites defendants to say, “I didn’t access that computer without right. I have a right to pursue my data.”
Put another way, by using such an open-ended word as “authorization,” you could say that the CFAA incorporated the defense of necessity into the crime, along with other claims of right or entitlement. The Justice Department might say that incorporating such a vague and ambiguous defense into the statute is unfair because it makes prosecutions harder. But it was the Justice Department that chose the term in the first place, precisely because it is so ambiguous and capacious that it allowed prosecution of wrongdoers without much worry about changes in technology. To which I would reply, “That’s fine, CCIPS, but you have to take the good with the bad. If ‘authority’ stretches with the times for you, then it stretches with the times for the defendant.”
In fact, let’s carry that point just a bit further for illustrative purposes. CCIPS could have written a (slightly) more capacious and ambiguous statute making it a felony to “do wrong with a computer.” Under that even more future-proofed law it would surely be open to a defendant to argue that counterhacking is not wrong. It seems to me that “authorization” is a slightly more precise and certainly fancier-sounding variant of “doing wrong.”
I still don’t know why Orin thinks that this reading of “authorization” is plainly wrong.
***
The Rhetoric of Opposition to Self-Help
Eugene Volokh
I was just talking to some people recently about the question of “digital self-defense” — whether organizations that are under cyberattack should be free to (and are free to) fight back against attacking sites by trying to bring those sites down, by hacking into the sites, and so on.
I don’t claim to know the definitive answer to this question; but I did want to say a few words about some common anti-self-help rhetorical tropes, which are sometimes heard both in this context and other contexts.
1. Vigilantism: Allowing digital self-defense (or, to be precise, digital defense of property), the argument goes, would mean sanctioning vigilantism; the nonvigilante right solution is to leave matters to law enforcement.
Yet the law has never treated defense of property as improper “vigilantism.” American law bars you from punishing those who attack you or your property, but it has always allowed you to use force to stop the attack, or prevent an imminent attack. There are limits on the use of force, such as the principle that generally (though not always) property may be defended only with nonlethal force. But generally speaking the use of force is allowed, and shouldn’t be tainted with the pejorative term of “vigilantism,” which connotes illegality. (Black’s Law Dictionary echoes this, defining vigilantism as “The act of a citizen who takes the law into his or her own hands by apprehending and punishing suspected criminals.”)
2. Taking the Law Into Your Own Hands: Critics of self-defense and defense of property also sometimes characterize it as “taking the law into your own hands.” This too implies, it seems to me, extralegal action, through which someone unlawfully takes into his own hands power that the law leaves only in law enforcement’s hands.
Yet the law has always placed in your own hands — or, if you prefer, has never taken away from your own hands — the right to defend yourself and your property (subject to certain limits). By using this right, you aren’t taking the law into your own hands. You’re using the law that has always been in your hands.
There are many reasons the law has allowed such self-defense and defense of property: It’s generally more immediate than what law enforcement can do; even after the fact, law enforcement is often stretched too thin even to investigate all crimes; sometimes law enforcement may be biased against certain people, and may not take their requests for help seriously, so self-help is the only game in town. There are also reasons to limit self-defense and defense of property (I’ll note a few below). But let’s not assume that self-defense and defense of property somehow involve unlawful arrogation of legal authority on the defenders’ part. Rather, they generally involve legally authorized exercise of legal authority.
3. But the Statute Has No Self-Defense Exceptions: Ah, some may say, perhaps in the physical world you have the right to defend yourself and your property — but the CFAA secures no such right, so whatever one’s views on self-help, the fact is that self-help is illegal.
Yet, surprising as it may seem to many, self-defense and defense of property may be allowed even without express statutory authorization. These defenses were generally recognized by judges, back when the criminal law was generally judge-made; and many jurisdictions don’t expressly codify them even now. Federal law, for instance, has no express “self-defense” or “defense of property” statute. The federal statute governing assaults within federal maritime and territorial jurisdiction simply says, in part,
Whoever, within the special maritime and territorial jurisdiction of the United States, is guilty of an assault shall be punished as follows ….
(4) Assault by striking, beating, or wounding, by a fine under this title or imprisonment for not more than six months, or both.
(5) Simple assault, by a fine under this title or imprisonment for not more than six months, or both, or if the victim of the assault is an individual who has not attained the age of 16 years, by fine under this title or imprisonment for not more than 1 year, or both.
(6) Assault resulting in serious bodily injury, by a fine under this title or imprisonment for not more than ten years, or both.
(7) Assault resulting in substantial bodily injury to an individual who has not attained the age of 16 years, by fine under this title or imprisonment for not more than 5 years, or both.
Assault is generally defined (more or less) as “any intentional attempt or threat to inflict injury upon someone else, when coupled with an apparent present ability to do so, and includes any intentional display of force that would give a reasonable person cause to expect immediate bodily harm, whether or not the threat or attempt is actually carried out or the victim is injured.” The federal criminal code thus on its face prohibits all assaults, including ones done to defend one’s life. Yet self-defense is a perfectly sound defense under federal law — because federal courts recognize self-defense as a general criminal defense, available even when the statute doesn’t specifically mention it.
Likewise, federal law generally bans possession of firearms by felons, with no mention of self-defense as a defense. Yet federal courts have recognized an exception for felons’ picking up a gun in self-defense against an imminent deadly threat, again because self-defense is a common-law defense available in federal prosecutions generally.
Given this, a federal statute’s general prohibition on breaking into another’s computer doesn’t dispose of breakins done in defense of property against imminent threat — just as federal statutes’ general prohibitions on assault or on possession of a firearm by a felon don’t dispose of assault or possession done in defense of life (or sometimes property) against imminent threat. Federal criminal law already includes judicially recognized and generally available self-defense and defense of property defenses, even when the defendant is prosecuted under a statute that doesn’t expressly mention such defenses.
There still remains a good deal of uncertainty about how the defense of property defense would play out in any particular digital strikeback situation, and I suppose it’s possible that courts might even decide that it’s categorically unavailable as a matter of law in computer breakin cases (though it would be unusual, given the general availability of self-defense and defense of property defenses). But it is a mistake to simply assert that such a defense is unavailable simply because the statute doesn’t mention it.
* * *
All this having been said, I want to stress that there are plausible arguments in favor of prohibiting digital self-defense (either criminalizing it or making it tortious), and reasons to be skeptical about easy analogies between digital self-defense (or, more precisely, defense of property) and physical self-defense. It may be, for instance, that there’s more of a risk of error in digital self-defense cases, in that you might disable, directly or indirectly, a computer that’s not actually attacking you. (Say, for instance, you’re defending against a worm by launching a counterworm; there’s more risk of massive damage to many third parties from an error in the counterworm than there is in a typical situation where you’re confronting someone who’s trying to run off with your bicycle.) It’s also not obvious what should be allowed when you’re going after a computer that is attacking you but only because it’s been hijacked. Should that turn, for instance, on whether the computer’s owner was negligent in allowing the computer to be hijacked?
It’s also not clear how the general principle that defense of property must generally be nonlethal should play out — what if you’re being attacked through a hijacked computer that belongs to a hospital, an airport, a 911 center, or some other life-critical application? Is disabling that computer potentially lethal force, because it may have lethal consequences? How can you tell whether the computer is indeed running some application on which lives turn?
It’s therefore not obvious whether the law should criminalize most or all forms of digital self-defense, criminalize some and make others tortious, leave it entirely to the tort system so long as the actor sincerely believed (or perhaps reasonably believed) the actions were necessary to defend his property, or whatever else. Some limits on digital defense of property may well be proper, especially if we think that on balance allowing such defense would lead to too much harm to the property of third parties. But we need to analyze things carefully, by asking some of the questions I noted in the last few paragraphs — not just by condemning digital self-defense as vigilantism, as taking the law into one’s own hands, or as clearly illegal under current computer crime law.
Thanks to Warren Stramiello, a student whose paper first alerted me to the defense of property analogy; and note the Journal of Law, Economics & Policy symposium on the subject, which is available in volume 1, issue 1 of the Journal, but unfortunately not on the Web. (Participants included our very own Orin Kerr, as well as my incoming colleague Doug Lichtman.)
A Response to Eugene Volokh
Orin Kerr
Does a “Cyber Self-Help” Defense Exist, and Would It Be A Good Idea?: I enjoyed Eugene’s post about “digital self-help,” although I have a very different take on the question.
First, I highly doubt that a defendant can assert a “digital self-help” claim in a prosecution brought under the CFAA, 18 U.S.C. 1030. Eugene is right that federal criminal statutes generally do not mention self-defense and other defenses, and yet courts sometimes have recognized those defenses for some crimes. But I don’t think it’s accurate to say, as Eugene does, that “federal criminal law already includes judicially recognized and generally available self-defense and defense of property defenses.” Some commentators have said this, but I believe it clashes with the Supreme Court’s most recent take on such questions in Dixon v. United States, 126 S.Ct. 2437 (2006).
As I read Dixon, it seems that whether a federal defense exists is a question of Congressional intent. Specifically, the question is whether and how Congress meant to incorporate the common law defenses when it enacted that particular crime. Where Congress was silent, courts are supposed to reconstruct what Congress probably wanted or would have wanted “in an offense-specific context.” Id. at 2447. (It’s true that Dixon was a duress case, not a self-defense case, but it cited the Cannabis opinion, which was a necessity case; to me that suggests that the Court sees all the common law defenses together.)
This is pretty straightforward when considering a federal criminal law that closely tracks a traditional criminal prohibition, such as homicide. As Justice Kennedy put it in his concurrence in Dixon, “When issues of congressional intent with respect to the nature, extent, and definition of federal crimes arise, we assume Congress acted against certain background understandings set forth in judicial decisions in the Anglo-American legal tradition.” It’s hard to imagine Congress enacting a homicide statute without meaning to incorporate a self-defense provision. So in that context, courts have readily applied self-defense even though it’s not technically written into the statute.
I think the CFAA is quite different. I don’t know of any evidence that anyone in Congress had ever even heard about “hacking back” when Congress passed the CFAA in 1986. Congress did consider whether there were some kinds of computer intrusions that would be okay based on the context; specifically, it created an exception in 1030(f) exempting “any lawfully authorized investigative, protective, or intelligence activity of a law enforcement agency.” But it didn’t create an exception for self-defense, and I don’t know of any reason to think that there was a background sense that those defenses would apply, as seems to be required under Dixon. Given that, I would tend to doubt that a federal “cyber self-defense” doctrine exists.
Although it’s not directly contrary to Eugene’s post, I’ll also add my 2 cents that I think such a defense would be a really, really, really bad idea. Here’s an excerpt of what I wrote on the topic in a 2005 article, Virtual Crime, Virtual Deterrence: A Skeptical View of Self-Help, Architecture, and Civil Liability:
It is very easy to disguise the source of an Internet attack. Internet packets do not indicate their original source. Rather, they indicate the source of their most immediate hop. Imagine I have an account from computer A, and that I want to attack computer D. I will direct my attack from computer A to computer B, from B to computer C, and from C to computer D. The victim at computer D will have no idea that the attack is originating at A. He will see an attack coming from computer C. Further, the use of a proxy server or anonymizer can easily disguise the actual source of attack. These services route traffic for other computers, and make it appear to a downstream victim as if the attack were coming from a different source.
As a result, the chance that a victim of a cyber attack can quickly and accurately identify where the attack originates is quite small. By corollary, the chance that an initial attacker would be identified by his victim and could be attacked back successfully is also quite small. Further, if the law actually encouraged victims of computer crime to attack back at their attackers, it would create an obvious incentive for attackers to be extra careful to disguise their location or use someone else’s computer to launch the attack. In this environment, rules encouraging offensive self-help will not deter online attacks. A reasonably knowledgeable cracker can be confident that he can attack all day with little chance of being hit back. The assumption that an attacker can be identified and targeted may have been true in the Wild West, but tends not to be true for an Internet attack.
Legalizing self-help would also encourage foul play designed to harness the new privileges. One possibility is the bankshot attack: If I want a computer to be attacked, I can route attacks through that one computer towards a series of victims, and then wait for the victims to attack back at that computer because they believe the computer is the source of the attack. By harnessing the ability to disguise the origin of attack, a wrongdoer can get one innocent party to attack another. Indeed, any wrongdoer can act as a catalyst to a chain reaction of hacking back and forth among innocent parties. Imagine that I don’t like two businesses, A and B. I can launch a denial-of-service attack at the computers of A disguised to look like it originates from the computers at B. The incentives of self-help will do the rest. A will defend itself by launching a counterattack at B’s computers. B, thinking it is under attack from A, will then launch an attack back at A. A will respond back at B; B back at A; and so on. As these examples suggest, basing a self-help strategy on the virtual model of the Wild West does not reflect a realistic picture of the Internet. Self-help in cyberspace would almost certainly lead to more computer misuse, not less.
More in the article itself (unfortunately, the version on SSRN is only an early draft, but the final is on Westlaw and Lexis).
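To make the relay problem concrete, here is a minimal sketch (my own illustration, not anything from Kerr’s article) of why the apparent source of an Internet attack is usually just the last hop. The host names and the toy logging model are assumptions chosen for simplicity; real attribution rests on far messier evidence.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    log: list = field(default_factory=list)  # names of immediate peers seen

    def receive(self, packet, immediate_peer):
        # Like a real connection log, this host learns only the immediate hop.
        self.log.append(immediate_peer.name)
        return packet

def relay_attack(origin, relays, victim, payload="attack"):
    # Pass the payload from the origin through each relay and on to the victim.
    packet, sender = payload, origin
    for hop in relays + [victim]:
        packet = hop.receive(packet, sender)
        sender = hop
    return victim.log[-1]  # the host the victim would blame

if __name__ == "__main__":
    a, b, c, d = Host("A"), Host("B"), Host("C"), Host("D")
    apparent = relay_attack(a, [b, c], d)
    print("True origin: A; victim D's log shows:", apparent)
    # A counterstrike aimed at the apparent source would hit C, possibly an
    # innocent hijacked machine, rather than A.

Run as written, the script reports that D’s log points at C, so a reflexive counterstrike would land on an intermediary rather than on the true origin, which is exactly the dynamic the bankshot example above exploits.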
Response to Orin Kerr
Eugene Volokh
Common-Law Federal Criminal Defenses:
I just wanted to very briefly comment on Orin’s post on the subject. Dixon v. United States involved the question of who is to bear the burden of proof as to a duress defense. The “long-established common-law rule” had been that the defendant must prove duress by a preponderance of the evidence, and the Court held that Congress did not intend to displace this rule. This is where the “offense-specific context” language comes up (citation omitted):
Congress can, if it chooses, enact a duress defense that places the burden on the Government to disprove duress beyond a reasonable doubt. In light of Congress’ silence on the issue, however, it is up to the federal courts to effectuate the affirmative defense of duress as Congress “may have contemplated” it in an offense-specific context. In the context of the firearms offenses at issue — as will usually be the case, given the long-established common-law rule — we presume that Congress intended the petitioner to bear the burden of proving the defense of duress by a preponderance of the evidence.
It seems to me that this common-law tradition is the most important factor here, and the longstanding common-law acceptance of the defense-of-property defense should lead federal courts to assume that Congress didn’t mean to preempt it, at least absent a statement from Congress to the contrary.
It’s true that Congress likely didn’t think much about the defense when enacting computer crime laws; but the point of the common-law criminal defenses is precisely that the legislature often doesn’t think much about defenses, which often (as with duress, for instance) involve relatively rare circumstances. The defenses are out there to be used when the triggering circumstances arise, and Congress doesn’t need to think much about them when enacting specific statutes.
So it seems to me that Dixon is quite consistent with my position: Congress legislates against the background of various common-law rules related to criminal law defenses, and the general presumption is that Congress doesn’t mean to displace these background rules.
Response to Eugene Volokh
Orin Kerr
More on the “Hacking Back” Defense: I wanted to add one more round to the exchange Eugene and I were having about whether a defendant charged with a federal computer intrusion crime can assert a “hacking back” defense. I’m still of the opinion that defendants cannot assert such a defense, and I wanted to respond specifically to Eugene’s most recent post about it. Specifically, I want to make two points. First, I’m not entirely sure a general defense-of-property doctrine exists as a default in federal criminal law, and second, if the doctrine does exist, I don’t think it covers computer intrusions.
The reason I’m unsure that the “defense of property” defense exists as a Congressional default is that the defense seems to be quite rare in federal court, and the cases appear almost entirely in a very specific context. Based on a quick Westlaw check, at least, I could find only about 30 federal criminal cases that seem to apply it or discuss it at all. Further, those cases arise almost entirely in a very specific context: a defense raised in a prosecution for physical assault. There’s also a bit of homicide and one or two other crimes thrown in, but not much. Perhaps a lot more cases exist beyond what I could find, but I couldn’t find much — and what I found was quite narrow and applied only in a very small subset of criminal cases. Clearly this doesn’t rule out that Congress legislates all criminal offenses against a general background norm of a “defense of property” defense being available, but I think it does shed some doubt on it.
Second, when stated as a defense in federal criminal cases, “defense of property” seems to mean only defense of physical property from physical access or removal. For example, in the context of the Model Penal Code’s defense of property section, which has been influential in federal court applications of defenses, the provisions are available only “to prevent or terminate an unlawful entry or other trespass upon land or a trespass against or the unlawful carrying away of tangible, movable property . . . , [or] to effect an entry or re-entry upon land or to retake tangible movable property.” MPC § 3.06. (The MPC seems to treat the kind of interference with property that includes computer intrusions under a separate section, § 3.10, Justification in Property Crimes, which seems to follow a different set of principles. Also, while you might think “entry” includes virtual entry, entry in the context of criminal trespass statutes is generally understood to mean physical entry.) Given that, whatever “defense of property” doctrine is established as a background norm when Congress creates a new criminal law, it doesn’t seem to me to apply to computer attacks.
Anyway, I should stress that we don’t yet have any cases on this, so both Eugene and I are guessing as to what courts would or should do based on the legal materials out there. It’s a very interesting question. Finally, I’ll just add further thoughts in the comment thread in the future, as I’m not sure a lot of readers are interested in this issue.
Response to Orin Kerr
Eugene Volokh
The “Defense of Property” Defense:
I much appreciate Orin’s posts on the subject, and I should note again what I noted at the outset — there are quite plausible policy arguments for barring “hacking back” even when it’s done to defend property against an ongoing attack, and Orin has expressed some of them in the past. That an action falls generally within the ambit of an existing defense, or is closely analogous to an existing defense, doesn’t preclude the conclusion that we should nonetheless bar the action because of special problems associated with it.
Nonetheless, I do disagree with two parts of Orin’s analysis. First, it seems to me that the defense-of-property defense has indeed been recognized as part of a general class of common-law defenses — including justifications such as self-defense and defense of others, and excuses such as duress or insanity — that are by default accepted in all jurisdictions, or at least all jurisdictions that have not expressly codified their defenses. (I say “by default”; they may be expressly statutorily precluded, as a few states have done as to insanity.) Robinson’s treatise on Criminal Law Defenses describes it well, I think,
Every American jurisdiction recognizes a justification for the defense of property. The principle of the defense of property is analogous to that of all defensive force justifications and may be stated as follows: … Conduct constituting an offense is justified if:
(1) an aggressor unjustifiably threatens the property of another; and
(2) the actor engages in conduct harmful to the aggressor
(a) when and to the extent necessary to protect the property,
(b) that is reasonable in relation to the harm threatened.
More generally, defense of property, self-defense, and defense of others are generally treated by the law more or less similarly, though subject to the general principle that defense of property will generally not justify the use of lethal force. I have never seen in any case, treatise, or other reference any indication that federal law differs from this, and rejects the notion that defense-of-property is a general default.
I agree with Orin that the defense has been rare. But I suspect that it is rare because defense of property generally doesn’t authorize the use of deadly force, and because use of supposedly defensive nondeadly force is less likely to draw a federal prosecutor’s attention than the use of supposedly defensive deadly force. The typical nonlethal defense of property scenario — someone says I punched him, and I claim I did this in order to keep him from stealing my briefcase — just isn’t likely to end up prosecuted by the local U.S. Attorney’s office, even if there’s some reason to doubt my side of the story.
Second, Orin points to the Model Penal Code as evidence that “when stated as a defense in federal criminal cases, ‘defense of property’ seems to mean only defense of physical property from physical access or removal”; and the MPC does define defense of property as limited to “use of force upon or toward the person of another … to prevent or terminate an unlawful entry or other trespass upon land or a trespass against or the unlawful carrying away of tangible, movable property …, [or] to effect an entry or re-entry upon land or to retake tangible movable property” (plus provides for a related but different defense in § 3.10).
But the MPC seems to define defenses in a way that’s focused on those crimes that the MPC covers. For instance, the MPC’s self-defense provision literally covers only “the use of force upon or toward another person”; it would not cover imminent self-defense as a defense to a charge of being a felon in possession of a firearm (though no such crime is defined by the MPC in the first place). Yet federal law does recognize this. Likewise, state cases recognize self-defense as a defense to the use of force against an animal, when the use would otherwise be illegal (I could find no federal prosecutions involving the question).
Now perhaps the answer is that federal law would reject even self-defense as a defense to non-physical-force crimes, and that the defense in felon-in-possession cases is actually a species of the necessity defense. But if that’s true (which isn’t clear, since it’s not even clear that federal law recognizes a general necessity defense), then one could equally argue for digital self-defense under the rubric of necessity.
Likewise, while Orin brackets § 3.10, that might very well be the defense-of-property provision (though labeled by the MPC under the more general rubric of “justification in property crimes”) that an MPC-following federal court might adopt, if it chooses to take a narrow view of the common-law defense-of-property defense. Section 3.10 generally allows “intrusion on or interference with property [when tort law would recognize] a defense of privilege in a civil action based [on the conduct],” unless the relevant criminal statute “deals with the specific situation involved” or a “legislative purpose to exclude the justification claimed otherwise plainly appears.” And the common law has generally recognized defense of property as a privilege in civil actions. (See, e.g., Restatement (Second) of Torts § 79, which allows even nonlethal physical force against a person when necessary to terminate the person’s intrusion on your possession of chattels. That doesn’t literally cover use of nonlethal electronic actions against a computer, but the point of common-law defenses is that they are applicable by analogy; the Restatement is thus a guide, not a detailed code to be followed only according to its literal terms even in novel situations.)
So we have to remember, it seems to me, that the federal law of criminal defenses is common law, borrowing from both the substance of the traditionally recognized common-law defenses, and from the common-law method, which involves reasoning by analogy. The common-law method also allows analogies to be resisted, if the new situation is vastly different from the old; and of course Congress can trump common-law defenses by statute. But the background remains that there’s a common-law defense of defense of property (buttressed, where necessary, by the necessity defense, and to the extent one is influenced by the Model Penal Code, by § 3.10’s borrowing from the common-law tort defenses), and that there’s no reason to think that federal law takes a narrow view of this defense.
Parents not liable for their son's illegal music sharing, German court rules
The parents were not obliged to monitor their child's Internet usage, the court ruled
Loek Essers
A German couple are not liable for the filesharing activities of their 13-year-old son because they told him unauthorized downloading and sharing of copyrighted material was illegal, and they were not aware the boy violated this prohibition, the German Federal Court of Justice ruled on Thursday.
The parents met their obligation to supervise a normally developed 13-year-old child by teaching him that filesharing is unlawful, the Federal Court of Justice ruled. The parents were not obliged to check up on the boy or monitor his Internet behavior.
"Parents are in principle not obliged to monitor the child's Internet usage, to check the child's computer or to (partially) obstruct the child's access to the Internet," the court found. Parents are only obliged to take such measures when they have reasonable grounds to suspect their child is engaging in infringing activity when using the Internet, it added.
The parents were sued by record producers that hold the exclusive copyright to songs shared by the boy. In 2007, one of the producers discovered that 1,147 songs were offered for download at an IP address that could be traced back to the boy's parents, the court said.
When their home was searched, the son's PC was seized, and the filesharing programs "Morpheus" and "Bearshare" were found on it. The plaintiffs then asked the boy's parents to sign a cease-and-desist request agreeing to stop the filesharing now and in the future. The parents signed the request, but they refused to pay damages or legal costs.
While the boy shared over a thousand songs, the lawsuit concerned 15 recordings, for which the producers demanded €200 (US$255) per title, or €3,000 in total, plus €2,380 in legal costs.
The ruling of the Federal Court of Justice reversed a ruling of the higher regional court of Cologne, which had found the parents liable for the illegal filesharing because they failed to fulfill their parental supervision duties. That court said the parents could have installed a firewall on their son's computer, as well as a security program that would have allowed the child to install software only with his parents' consent.
Besides that, the parents could have checked their son's PC once a month, and would then have spotted the Bearshare icon on the computer's desktop, according to the Cologne court. "The Federal Court overturned the decision of the Appeal Court and dismissed it," the court said.
The Federal Court did not respond to a request for comment.
A Family’s Fight for Freedom: Lawyers Move to Block RFID Expulsion
Melissa Melton
A Texas school district has come under legal fire after a student was expelled for failure to comply with the “School Locator Project,” an RFID chip tracking program currently being piloted in a San Antonio middle and high school.
John Jay High School sophomore Andrea Hernandez was involuntarily withdrawn after protesting her school’s tracking badge policy for months. When appeals to respect her rights were repeatedly ignored, the family decided to fight back, seeking legal counsel.
In a just-released statement, civil liberties organization The Rutherford Institute, which represents the Hernandez family, has announced it will immediately seek a preliminary injunction against the district to prevent Andrea from being moved to another school.
Under the “Smart ID” program, all 4,200 students are forced to wear an ID badge with an RFID tracking chip in it at all times to attend school. Due to her persistent refusal, the school’s administration finally offered Andrea a deal: she would comply with the project by wearing a program badge with the chip removed.
Not wanting to endorse the program in any way, Andrea refused. On November 13, the school sent Andrea’s father a letter expelling her because “all students are expected to comply with the Smart ID policy.”
This case is quickly setting a precedent that students can be kicked out of school for not complying with programs they feel violate their rights.
“I feel it is an invasion of my religious beliefs, I feel that it’s the implementation of the Mark of the Beast, I feel that it’s an invasion of my privacy and an invasion of all my rights as a citizen,” Andrea said at a school RFID protest shown in an Infowars report below.
“What we’re teaching kids is that they live in a total surveillance state and if they do not comply, they will be punished,” John Whitehead, constitutional attorney and Rutherford founder said in a telephone interview with Infowars. “There has to be a point at which schools have to show valid reasons why they’re doing this.”
The district’s Student Locator Project website notes that “Northside ISD is harnessing the power of radio frequency identification technology (RFID) to make schools safer, know where our students are while at school, increase revenues, and provide a general purpose ‘smart’ ID card.” Although the district will pay $500,000 up front for the program, it expects to garner $1.7 million from the state government in increased attendance funds.
The district’s website also confirms the “smart” student ID cards are just the newest addition to the school’s surveillance grid. A letter to parents regarding the Smart ID project’s implementation mentions that digital cameras have been installed in all high and middle schools and on all school buses. Whitehead noted that the schools have already been fitted with 290 surveillance cameras.
In addition, according to the district, the Smart ID will “provide access to the library and cafeteria” and “allow for the purchase of tickets to the schools’ extracurricular activities,” meaning students who refuse to comply with the program will not be allowed to access those facilities and activities. The school also makes the ambiguous statement, “Other uses [for the Smart IDs] will be rolled out during the pilot program.”
As Infowars previously pointed out, in addition to a vast privacy encroachment, the Hernandezes feel the program is a direct violation of their Christian religious beliefs, as it bears a striking resemblance to the warning of the Mark of the Beast in Revelation 13:16-18:
“16. He causes all, both small and great, rich and poor, free and slave, to receive a mark on their right hand or on their foreheads, 17. and that no one may buy or sell except one who has the mark or the name of the beast, or the number of his name. 18. Here is wisdom. Let him who has understanding calculate the number of the beast, for it is the number of a man: His number is 666.” (New King James Version)
The Student Locator Card program is set to expand to all 112 schools in the San Antonio Northside Independent School District.
A student’s rights should not end simply because they set foot on school property. This big brother takeover in our schools is an alarming trend, as it would appear schools are attempting to condition the youngest members of our society to accept government intrusion into – and control over – their lives.
“Regimes are formulated in the schools. Every dictator – every regime-changer – has always implemented a dictatorship in the schools first,” Whitehead said. “The ramifications are really ominous: if you grow up in that environment all your life, it’s normal to you. We’re moving into a total compliance society.”
20121124
Shopper who pulled gun at San Antonio mall within rights, cops say
By Ana Ley
A shopper who brandished a handgun during a Black Friday scuffle at South Park Mall was within his rights, according to San Antonio police.
Officers were dispatched to the mall's Sears store about 9 p.m. Thursday in response to a call about a shooting, according to an incident report. When they arrived, they detained Jose Alonzo Salame, 33, who was holding a black 9 mm semi-automatic handgun with a black holster.
"We don't see this very often," Officer Matthew Porter said, adding that Salame did not break the law by displaying the weapon. "He was within his rights."
Police confiscated the gun, which was loaded and had one round in the chamber, the report says.
Salame reportedly showed proof that he had a concealed handgun license, and he told officers that he pulled the gun out to defend himself because he was punched in the face by Alejandro Alex, 35. Salame, who did not fire the weapon, said he feared further injury by Alex.
The store had opened its doors to Black Friday shoppers about an hour before the incident, which occurred as crowds packed into the store.
Witnesses reportedly told police that Salame had behaved rudely that morning and had provoked the situation before pulling the handgun and pointing it at Alex, though San Antonio Police Sgt. Rob Carey said at the scene of the incident that Salame had actually pointed it at the ground.
Roger Rivera, who was shopping in the Sears, said Salame was punched then pulled a gun. Everyone scattered, "tumbling over things, dropping boxes," Rivera said. The man who was trying to cut in line ran and hid behind a refrigerator before he fled the store.
"It kind of went a little crazy in there," Carey said.
Rivera told his kids to get down. While everyone was panicking, the man with the gun stood there, he said, and looked around, lowering the weapon.
For about 10 minutes, the shopping stopped, said Rivera and his wife, Teresa, who was also in the Sears store but in another part of it. She raised concerns about whether Sears had enough security, noting that she only saw men at the store wearing "Security" vests.
Salame was released from police custody and asked to leave the store with the rest of his family. A manager gave him a store voucher, the report says.
"We're glad the incident was resolved peacefully," said Sears spokeswoman Kim Freely. "The safety of our customers and associates are our No. 1 priority."
Dotcom: We've hit the jackpot
By David Fisher
A fresh legal bid to throw out the case against Kim Dotcom in the United States is being made after claims of an FBI double-cross.
Evidence has emerged showing the Department of Homeland Security served a search warrant on Mr Dotcom's file-sharing company Megaupload in 2010 which he claims forced it to preserve pirated movies found in an unrelated piracy investigation.
The 39 files were identified during an investigation into the NinjaVideo website, which had used Megaupload's cloud storage to store pirated movies.
When the FBI applied to seize the Megaupload site in 2012, it said the company had failed to delete pirated content and cited the earlier search warrant against the continued existence of 36 of the same 39 files.
The details emerged after the US District Court for the Eastern District of Virginia allowed partial access to the FBI application which led to the shutdown of the Mega family of websites.
Other information from the case to emerge this week includes a collection of photographs from the day of the raid at Mr Dotcom's Coatesville property on January 20 this year.
The High Court released the material after applications from the Herald.
Mr Dotcom said Megaupload co-operated with the US Government investigation into copyright pirates NinjaVideo and was legally unable to delete the 39 movies identified in the search warrant.
Mr Dotcom said: "We were informed by (the US Government) we were not to interfere with the investigation. We completely co-operated.
"Then the FBI used the fact the files were still in the account of the ... user to get the warrant to seize our own domains. This is outrageous."
He said the revelation was the first insight into the FBI's case against Megaupload and it showed bad faith on the part of the US Government. "Immediately we hit the jackpot - the first little piece of paper is this super-jackpot."
New Zealand's district court has ordered the FBI, through an order for discovery, to provide documents relating to its investigation. That order is currently being appealed.
"I understand why the US is working so hard to appeal the discovery decision."
Mr Dotcom said the warrant obliged Megaupload to keep the files. It was among a string of legal requests from law enforcement agencies around the world.
"We have always co-operated. We have responded to takedown requests, we have been a good corporate citizen."
The FBI application to seize the sites said the "Mega Conspiracy" members were told by "criminal search warrant" in June 2010 "that 39 infringing copies of copyrighted motion pictures were present on their leased servers". The application was approved to allow the seizure of the domain names.
However, the application to seize the domain names, made on January 13, 2012, did not state that the earlier search warrant had not been issued against Megaupload itself.
Instead, the Department of Homeland Security application sought the help of Megaupload to track down files of interest in its investigation of NinjaVideo. The warrant application was by Special Agent William Engel and stated that the data storage company Carpathia "will work with its customer Megaupload to access content to provide in response to the search warrant".
The investigation was a success and saw its central figure Hana Amal "Queen Phara" Beshara sentenced to prison for 22 months and ordered to pay $256,000 of her illegally gained money to the Motion Picture Association of America - the same Hollywood lobby group blamed for pitting the FBI against Megaupload.
The access was granted after a bid by the Electronic Frontier Foundation on behalf of a Megaupload customer whose business files were lost when the cloud storage site was shut down.
Mr Dotcom's US-based lawyer Ira Rothken said he would ask the US court to return the Megaupload websites.
He said the discovery of the FBI's evidence of wrongdoing was part of a "trail of misconduct" stretching from the US to New Zealand which would ultimately lead to asking for the FBI charges to be dismissed.
"What we have uncovered, in our view, is misleading conduct. It looks like the Government wants the confidentiality because they would be concerned their conduct would be scrutinised."
The 39 files were not only used by NinjaVideo, according to the FBI affidavit. The Megaupload system identified files which were already on the system and kept only one copy of each. Unique weblinks were produced for each user providing multiple paths to the same file. The FBI indictment cited an email by Mr Dotcom's co-accused Mathias Ortman in which he said more than 2000 users had uploaded the 39 files.
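For readers wondering how the same 39 files could sit behind links belonging to more than 2,000 users, here is a rough sketch (my own illustration, not drawn from the affidavit or from Megaupload's actual code) of content-addressed deduplication: the store keeps one copy per distinct file and hands each uploader a fresh link to it. The class and method names are invented for the example.

import hashlib
import secrets

class DedupStore:
    """Toy model: one stored copy per distinct file, many per-user links."""

    def __init__(self):
        self.blobs = {}   # content hash -> file bytes (stored once)
        self.links = {}   # link token -> (content hash, uploading user)

    def upload(self, user, data):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)    # a duplicate upload adds no new copy
        token = secrets.token_urlsafe(8)        # but each upload gets its own link
        self.links[token] = (digest, user)
        return token

    def uploaders_of(self, data):
        digest = hashlib.sha256(data).hexdigest()
        return sorted({user for d, user in self.links.values() if d == digest})

if __name__ == "__main__":
    store = DedupStore()
    movie = b"the same copyrighted file, byte for byte"
    store.upload("user1", movie)
    store.upload("user2", movie)
    print(len(store.blobs), "stored copy,", len(store.links), "links")
    print("uploaders:", store.uploaders_of(movie))

Nothing here tracks Megaupload's real implementation; the point is only that one stored copy can remain reachable through many separate user links, which is the situation the affidavit describes for the 39 files.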
A month after Homeland Security sought Megaupload's help, NinjaVideo and a range of other sites were shut down without warning. Coverage of the action led to Mr Dotcom emailing staff about the domain seizures, saying the manner of the US action posed "a serious threat to our business". He asked: "Should we move our domain to another country (Canada or even HK?)." The company, which has maintained it operated inside the law, stayed in the US.