Hacker News

I agree, but that is not the issue.

The issue was how the case was decided.

He incremented a number in a URL, and that was his ultimate crime.

Do you honestly think it's right to send someone to jail for several years because they were messing around with a URL?

Of course he likely deserved to be charged with something, but not what he was charged with, and it hurts the rest of us when things like this set a bad precedent.



This is exactly like saying that someone convicted of trespassing had an ultimate crime of "turning a doorknob", ergo we should all fear for our ability to turn doorknobs. No, that's not how it works. The state must prove not just action, but intent.


Well, it's not that similar, in that weev's conviction was overturned on appeal based on improper venue, with the appeals court also quite skeptical as to the sufficiency of the evidence to support the charges (though, since the improper venue was sufficient to dismiss the conviction, it did not state an authoritative conclusion on this point).

Both the people arguing for and those arguing against the result in weev's case seem to be forgetting what the final result actually was.


I'm sorry but I don't see what this has to do with the point I raised. I am very familiar with Auernheimer's case, so if you could spell out in more detail what you're objecting to, I'm pretty sure I can follow along.

The point I was making upthread had less to do with Auernheimer's case than it did with the silly notion that the case turned on "incrementing a URL".


> The point I was making upthread had less to do with Auernheimer's case than it did with the silly notion that the case turned on "incrementing a URL".

Your rebuttal seemed to be based on the premise that the conviction turned not just on action, but on substantive evidence of intent.

My response addressed the fact that, while the conviction was dismissed for procedural reasons, the appeals court also appeared skeptical of the substantive result for the same reasons that the critics here are -- that the evidence did not appear sufficient to show intent.


Yes, doesn't that amplify my point? I wasn't making a normative argument about the quality of the case against Auernheimer.


You were responding argumentatively to people who were clearly making a normative argument about the trial conviction (and also blaming the CFAA for it), and it seemed like you were making a contrary normative claim about that conviction. If you were intending merely to argue that the actual structure of the CFAA didn't support the conviction, so that the blame was misplaced, that didn't come across clearly to me.

But if that's what you were saying, then, yeah, there's nothing really to argue about.


I think the Auernheimer case was crappy but object to the notion that it involved criminalizing URL manipulation.


>He incremented a number in a URL, and that was his ultimate crime.

IANAL, but the way I understand it, it's not about the method that you used to access the system. Even if someone was highly incompetent and left their system open to being accessed, the fact that you accessed it knowing you shouldn't have is the actual crime.

After all, even if someone leaves the doors and windows wide open to their house, it's still illegal to go inside if you don't have permission. In this case they left the URLs open to be accessed, but it was clear that that part of the website wasn't meant to be accessed by the general public, and the prosecutors were able to convince a jury/judge that weev would have reasonably known that.


I walk up to a grocery store with automatic doors. The lights are all on. The doors open. I walk in. It's 8am.

This grocery store has two sets of doors, about 200 feet apart. At the other set of doors there is a sign that says that the store doesn't open until 9am.

Am I trespassing?


That is a poor analogy. I'd say it's more like walking into the back room of the store even though the door was wide open, and then snooping around in there. That is trespassing.


I actually think it's the perfect analogy. It's a robot that does exactly what you tell it to do. If you apply power it'll open for ANYONE and if you don't apply power, it doesn't.

I can't see a better analogy for the webserver that revealed confidential information than someone accidentally leaving the automatic door switched on. It opened when the owner didn't want it to, but you can't blame the user of an automatic door for taking the fact that it opens as implied permission. If the owner of said automatic door didn't want it to open, he or she had only to switch the door off to make their desires translate 100% into real action.

Where things get dicey is that there are certain customs and tradition and clues regarding whether a store is open or not. If the lights are off and it's the middle of the night and there are no cars in the parking lot and etc, it's probably not open and the door opening is probably a mistake, not on purpose.

The internet does not provide any kind of context clues like this, except perhaps for robots.txt and that doesn't apply to humans!


Again, IANAL, and trespassing is a different law than the CFAA, but if the prosecutors could reasonably prove that it wasn't an honest mistake and that you were knowingly going into the store when you're aware it's not allowed, then yes, I think you could be charged with trespassing.

Like say you're an ex-employee who for some reason wants to go look at the schedule (maybe you're stalking one of your old co-workers). If you walked in knowing the store was closed and you shouldn't be there, then I have to imagine you'd be arrested and charged with trespassing amongst other things.


Both deal with unauthorized access and intent don't they?

The point I'm trying to make is that it's VERY difficult to divine intent in the absence of any kind of access control.

In other words, given my above example and that's all the information you have, you can't prove that I intended to trespass. Now if the doors didn't open automatically and there was a broken lock, it's much easier to determine intent.

But in Weev's case, there was no broken lock because there was no lock at all!

Going strictly from the evidence we can surmise that AT&T didn't INTEND to prevent unauthorized access because they did nothing to prevent it.


We always delve into ridiculous analogies on this site for some reason when it comes to this case, trying to somehow justify someone knowingly accessing a system they knew they should not have been accessing.

Status codes, locks, no locks, these silly analogies aren't really useful. Proving intent is.


So prove the intent. Prove what was going on inside his head. Prove that he didn't merely SUSPECT that he shouldn't have had the information, but that he KNEW he shouldn't.

The reason that things get so silly is that it's very difficult to prove intent in the absence of any kind of access control. If he had bruteforced an admin password, the intent trail is there to be found. If he had done SQL injection, again, it's easy to argue intent. If he had physically broken into the building and stolen paper documents, again the intent is easy to discern.

I would argue that it's akin to finding an unmarked binder lying in the street with absolutely no way of telling whose it is or what it contains. You open it, don't see anything that says "AT&T confidential information", and start paging through it. How could you possibly be convicted of a crime for that? I know there's criminal "finding", whereby you don't try hard enough to return something to someone who has obviously lost it. But that doesn't apply in this case because the binder in question isn't marked in any meaningful way. Even if it had a header or footer or cover or something that said ANYTHING, then I'd be persuaded differently.

But leaving a web service with confidential information on the internet with no access control, I might argue is criminal negligence.


You can keep arguing the semantics, which sound great on a discussion forum, but it really has no bearing when it comes to the law. Intent matters, and he was found guilty of accessing a computer without authorization. You wouldn't be found guilty of such if you stumbled upon such a system by accident and then turned around and left it alone.

Comments here keep clinging to some black/white technical reason to decide this case, out of willful ignorance or the hope that it could be true, but really it isn't like that at all.


> Intent matters, and he was found guilty of accessing a computer without authorization.

Right and a lot of people are saying that it's a miscarriage of justice to prosecute someone based on implied knowledge or assumed knowledge.

It wasn't proven the same way you can prove a great many other things. A jury was convinced of something, and that doesn't constitute actual proof. Just because the government says something is true doesn't make it so.

https://en.wikipedia.org/wiki/Indiana_Pi_Bill


You've been too charitable. The site gave a 200 status code.

It's more like the store having a huge sign that says "OPEN, COME ON IN!"
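To make the "200 means come on in" reading concrete, here is a minimal sketch (hypothetical handler and data, not AT&T's actual code) of an unauthenticated endpoint that answers 200 to anyone who supplies a valid ID:

```python
# Hypothetical sketch of the kind of endpoint at issue; names and data
# are invented. The handler does no authentication at all, so any ID
# typed (or incremented) into the URL returns 200 plus the record.
ACCOUNTS = {1001: "alice@example.com", 1002: "bob@example.com"}

def handle_request(path):
    """Resolve /account/<id> with no access control whatsoever."""
    prefix = "/account/"
    if not path.startswith(prefix):
        return 404, None
    try:
        account_id = int(path[len(prefix):])
    except ValueError:
        return 404, None
    if account_id in ACCOUNTS:
        return 200, ACCOUNTS[account_id]  # 200 for anyone who asks
    return 404, None

# "Incrementing a number in a URL" is just a loop over candidate IDs.
found = [handle_request(f"/account/{i}") for i in (1001, 1002, 1003)]
```

From the server's side there is no difference between the intended user and the enumerator, which is exactly what the dispute here is about.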


This is logic that says that SQL Injection is fine, so long as the HTTP request bearing it elicits a 200 response.

Obviously, outside the airless void of a message board argument, this isn't how things work.


> This is logic that says that SQL Injection is fine, so long as the HTTP request bearing it elicits a 200 response.

For my tastes, this is actually a reasonable configuration of things.

Nobody is forcing you to use HTTP. If you decide to, and you provide access to your database via HTTP, and you allow me to submit a payload which makes changes you don't like, you are welcome to stop me and issue a 403. It's your database, after all.

This whole controversy seems like a way of shifting blame for security failures from the parties who actually failed to people who were uninvolved in the implementation and just happened to be the first (or the first noticed) to use applications in a way unintended by the designers.
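The "you are welcome to stop me and issue a 403" point can be sketched as well. Assuming a made-up token scheme (purely illustrative), one conditional is all it takes for the server to say no explicitly:

```python
# Purely illustrative: a made-up token check showing that refusal (403)
# is an explicit, one-line configuration choice on the server side.
AUTHORIZED_TOKENS = {"token-for-alice"}

def handle_request(path, token=None):
    """Return 403 unless the caller presents a known token."""
    if token not in AUTHORIZED_TOKENS:
        return 403  # the server's explicit "no"
    return 200      # the server's explicit "yes"
```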


>This whole controversy seems like a way of shifting blame for security failures from the parties who actually failed to people who were uninvolved in the implementation.

That's basically victim blaming though. If someone commits a crime against you, we don't let the criminal go free just because you were inept or negligent in preventing the crime. In a sense they committed a crime against all of society by breaking our laws. It would be ridiculous to start applying laws based on whether the victim did enough to prevent it. No one should be allowed to break the law, no matter how vulnerable a person leaves themselves to it.


> That's basically victim blaming though.

Not really. Nobody's blaming AT&T's CUSTOMERS for doing anything wrong. They are the victims. They are the ones who had their personal information made available to people outside of AT&T.

This might better be called perpetrator blaming. The perpetrator is the person who commits a crime. AT&T had an obligation to keep customer data safe under the privacy policy that they tell customers they will uphold. They failed to uphold it in any meaningful way, by failing to provide any kind of access control to said personal information.

Calling it victim blaming is to fail to apply any kind of critical thinking to the situation.


Obviously this raises a question about the moral limits and applicability of the law.

If I make a server accessible and someone accesses it, even in a way I didn't expect, I don't think a crime has been committed against me.


This is basically a blanket decriminalization of all computer intrusions, since virtually all hacking boils down to accessing a computer in an "unexpected" way.

Again, that simply isn't how the law works. I see you all around this thread repeatedly trying to reason about half a criminal charge: the actus reus used to gain access to a computer system. It makes no sense to think of this particular crime that way; you need the other half of the charge, mens rea, the knowing intent to do something with a computer that you aren't authorized to do.


The problem is that, at least for the moment, AI doesn't have a discernible (or even arguably cognizable) mens rea.

So how do we craft laws for a world where very nearly all the network traffic consists of machines talking to each other, learning, and talking again?

You keep pointing me to "how the law works," but I've said in four different messages now that I understand that - I'm pointing out that the law, as written, doesn't work.

I don't think there is a place for government as we know it - much less the completely shamed criminal justice system - on the internet. These institutions can go peacefully and with dignity or they can be stubborn and destructive, to the detriment of humans everywhere.

The more people try to justify their behavior and normalize their insanity, the more likely the latter scenario becomes.


OK, so give me all your personal info to put on a public webpage that says "under the CFAA, using this info is a crime", and since intent and law are the absolute arbiters of justice, no harm should come to you, right?


Of course harm will come to me, the point is that those doing the harm should still be held accountable if they're caught.

Yes the law is far from perfect and it's easy to imagine absurd scenarios like yours where the law falls apart. Certainly that should be fixed. But in this particular case Weev knew he was going onto a portion of the webpage that wasn't meant to be public, even if technically anyone could access it because of AT&T's ineptitude. There weren't any links to that exact URL, and in a sense it took some reverse engineering (ok basically just guess and check) for him to find it. It wasn't a public webpage in the sense that he stumbled upon it from some link or a google search. He put thought into purposefully discovering that exact URL and opening it, knowing as a computer security enthusiast that it wasn't meant for public consumption.


I'm afraid I have to respectfully disagree with the idea that "he should have known better".

Most analogies fall flat because webservers do EXACTLY WHAT YOU TELL THEM even if what you tell them to do isn't what you actually want them to do.

Our laws are generally organized around the idea of reasonable adults doing reasonable things and being understood by other reasonable people. The average, reasonable person can't comprehend a webserver.

If you give a robot a gun and tell it to shoot anything that comes through that door and it shoots your wife/husband/child/parent, it's not them who is at fault, nor is it the robot. It's your fault and you should be tried for murder.

I hope the comparison is clear.


>The average, reasonable person can't comprehend a webserver.

But weev could. We don't apply laws as if the defendant is some mythical "reasonable" person. We try each case based on its unique circumstances. It's not that he "should have known better", it's that he absolutely did know better.

The comparison is clear, but it has no relevance.


> But weev could. We don't apply laws as if the defendant is some mythical "reasonable" person.

What weev could or could not do is 100% irrelevant. What matters with regard to the law is what a reasonable person would do. That's literally a thing.

http://legal-dictionary.thefreedictionary.com/reasonable

AT&T's customers had a REASONABLE expectation of privacy in general, and quite likely AT&T had a privacy policy which spelled this out.

AT&T failed to take the proper steps to protect their customers privacy.

AT&T is at fault here, not weev.

Whether or not weev knew something, AT&T is at fault for creating the situation in which his knowing or not knowing makes a lick of difference.


Whatever your tastes might be, that's not what the law in the US says.


Oh, I understand that; I have no illusions otherwise. But I thought this thread was precisely about how strange the laws are?


As long as your door is unlocked, do I have permission to enter your home? An unlocked door is essentially a 200 response when I try to open it.

Better yet - your door is open already. Do I have permission to waltz right in?


Given that we already have hundreds or thousands of years of convention and understanding of property rights, no.

Please point me to the long and well worn case law regarding robots that do what you tell them, but not what you actually intend.


Twice in one thread? This started because I objected to that analogy as toxic.

You can't possibly think that sending an HTTP packet to an HTTP server is morally tantamount to walking into a stranger's house?!


>> This is logic that says that SQL Injection is fine, so long as the HTTP request bearing it elicits a 200 response.

> For my tastes, this is actually a reasonable configuration of things.

> Nobody is forcing you to use HTTP. If you decide to, and you provide access to your database via HTTP, and you allow me to submit a payload which makes changes you don't like, you are welcome to stop me and issue a 403. It's your database, after all.

Nobody is forcing you to use a door. If you decide to, and you provide access to your home via door, and you allow me to open the door and do things you don't like, you are welcome to stop me by locking the door. It's your house, after all.

You cannot say that issuing a 403 instead of a 200 is OK, but then turn around and say unpermitted access (which should give a 403) is okay so long as you are given a 200 in response, even if by accident.

  if door is locked: return 403
  else: return 200

The only difference is that the 403 and 200 are implicit in the door being locked or not, rather than an explicit response from the door, since the door is incapable of giving a response (unlike the server). Although both the server and the door are handled by a human.

The shared point of failure is how the human configured the server/door to return a 403/200 (unlocked/locked) status to individuals other than themselves.

Forgetting to lock your door (failing to set -NOACCESS for ${Robber}) is exactly like forgetting to disable the -READ flag for ${User}. Therefore, the configuration is not reasonable.
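The mapping above can be written out directly (toy code, illustrative only): the lock state or the flag is the owner's configuration, and the status code is just its reading, implicit for the door and explicit for the server.

```python
# Toy formalization of the door/server mapping: in both cases the
# owner's configuration, not the visitor, determines the answer given.
def door_status(locked):
    """The implicit 403/200 reading of a physical door."""
    return 403 if locked else 200

def server_status(read_allowed):
    """The explicit response of a server, per the hypothetical -READ flag."""
    return 200 if read_allowed else 403
```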


It's the entire metaphor that's broken, so the fact that you can vaguely map "locked" to a properly functioning auth system and "unlocked" to an unintentional 200 response is irrelevant.

My neighborhood is not the internet. There is no written, unambiguous protocol which my door implements in order to accept or reject guests. In fact, my door isn't programmed to issue responses of any kind; a human or even an answering system might do that, and yes, they might plausibly grant access.

More important is the reverse: the internet is not your neighborhood, and mapping the laws (both legal and social) on a 1-to-1 basis in an effort to recreate the norms of your neighborhood on a worldwide telecommunications system is really inane. I can't for a second make sense of it, much less what lessons it provides us for the proper legal and moral framework to accompany HTTP.


I leave my wallet in my hotel room. You go to the front desk, tell them you're me and get a key to my room. Do you have permission to take my wallet?


Your analogy doesn't really compare as it involves identity fraud.

>You go to the front desk, tell them you're me and get a key to my room.

This would be the equivalent to using an Admin username/password to login to the server at which point you are given permission flags (-access wallet). Logging in with the Admin username/password without permission is against the law - so regardless if you take the wallet or not you broke the law.

You leave your wallet in your hotel room. I go to the front desk and ask them for a key to your room. They don't question me or verify my identity and simply hand over the key.

Would you be angry at the hotel for not verifying who the person is or why they need a key to your room?


Why does it matter whether he'd be angry at the hotel? We're talking about the person who exploits the hotel's lax security to steal from him.

Impose civil liability on people with terrible security. Fine. That's an orthogonal issue, though. There's no reason you can't do both things, and most reasonable people can imagine a variety of things people might do with computers that they'd expect and want to be criminalized.


Are we agreed then that telling a web site your username is ";update users set passwd = ''--" counts as fraud?


Only if you're willing to federally prosecute every person who ever told a website that they were over 18 or 21 if they were not in fact over 18 or 21. Fraud is fraud and justice is blind, right?


Sneaking into an R rated movie. Stealing hundreds of millions of dollars from pension funds. Yep, pretty much the same thing.


If justice is blind, then they are the same thing! They are willfully ignoring or choosing not to obey the rules that society has agreed upon for all people.

If you want to argue that different crimes are different I'm all for it. But if you're going to do that, then please explain to me how Weev not releasing the information publicly is so heinous as to deserve years in prison and a fine worth a substantial fraction of a house.


> After all, even if someone leaves the doors and windows wide open to their house, it's still illegal to go inside if you don't have permission.

Holy hell this metaphor is so bereft of life. When will we stop using it?

Weev didn't go to anyone's home. He stayed at his own computer. Typed in URLs. Received a 200 status code.

There are really no good, compelling similarities between this action and entering someone's home.


Fine, let's drop the analogies.

> He stayed at his own computer. Typed in URLs.

I don't understand why everyone in this message thread is being so obtuse about how he accessed the site. It DOESN'T matter what method you use to access the system under the CFAA. The thing that matters is intent.

If you disagree with that then fine. And whether anyone can actually prove beyond a reasonable doubt what someone's intentions are is infinitely debatable. But the fact of the matter is that the prosecution proved in a court that weev knowingly accessed a portion of a computer system that he knew wasn't meant to be open to the public and that he knew he did not have authorization to access. It doesn't matter that all he had to do was type in a URL. He knew those URLs weren't meant to be accessed by him, or at least that's what was proven in court.

I'm not really trying to get into a debate about whether the CFAA is a good law, I don't really think it is. But there's certainly evidence that weev broke the law as it's currently written. Yea he got a 200 status code back and we can endlessly debate whether that should mean someone is given permission to access the site. But I think, and the jury/judge agreed, that weev wasn't just poking around their website thinking that those URLs were open to the public. I mean come on, this wasn't just some random user who happened to type in a URL not knowing what it was going to access. Weev knew what the fuck he was doing. Whether there's actually enough evidence to prove that is of course up for debate, but as the law is currently written I think he was guilty.


I agree 100%. The law is stupid, weev knew he was being a jackass and he said so.

I was just saying that the "if I leave my house unlocked..." analogy is totally toxic to an adult conversation on the matter.


> He incremented a number in a URL, and that was his ultimate crime.

What ultimate crime? Are we forgetting that weev's conviction was overturned on appeal, indicating that it was the result of legal error? There was no crime. Not in a "well, some people on the internet think he shouldn't have been convicted" sense, but in a "the legal system has authoritatively declared that his conviction was in error" sense.



