The Day Ted Cruz Stopped A Bad Internet Bill
from the right-for-a-day dept
Well, this was a bit of a surprise. Over the past couple of weeks I wrote about how Senator Josh Hawley was planning to try to hotline his terrible No Section 230 Immunity for AI Act. As we have explained multiple times, the bill is so poorly drafted that it would make a mess of the entire internet. After rumors of two attempted hotlines (effectively trying to sneak the bill through if no Senator objects) planned for last week, and then a rumor of a Tuesday night attempt, Hawley finally took to the floor Wednesday morning to make the push. If C-SPAN’s clunky embed feature works, you can watch it here:
The key point: a Senator did step up to oppose, and it was not one you would expect. Senator Ted Cruz ended up blocking Hawley’s bill, which is a bit of a surprise, since much of the false mythology about Section 230, including the very wrong claim that under Section 230 a website has to “choose” between being a “platform” or a “publisher” seems to have originated with Cruz.
So if there were any Senator you’d expect to be thrilled to support Hawley’s destructive attack on 230 through the backdoor of AI, it would be Cruz. But he chose to go the other way. And while his reasoning was mostly misleading bullshit, he actually did make a few good points in his objection.
But, first, let’s deal with Hawley’s nonsense.
We’re here today to ask one very simple question. Are the biggest, most powerful technology companies in the world going to be the only companies in this country, the only companies on the face of the Earth, who are absolutely immune for anything and everything they do?
I mean, so not surprisingly, Hawley starts off on a lie. Section 230 does not just apply to big tech companies. It applies to anyone (including users for doing things like retweeting). And it does not, in any way, shape or form, make companies “absolutely immune for anything and everything they do.”
All of that is wrong. Section 230 applies to anyone — company or individual — who passes along or hosts someone else’s speech. And all it says is that the entity hosting or passing along someone else’s speech is not liable for that speech. The user who created that speech is still liable. And the intermediary is still liable for anything they, themselves, do to imbue speech with any sort of illegality.
So Hawley’s entire complaint about 230 is just based on a lie. Multiple lies.
From there, Hawley continues to lie:
Are they going to be the only ones who can give our children advice on how to kill themselves? Who can give our children advice on how to procure the romantic interests of 30 and 40 and 50 year olds? Are they going to be the only ones who can push the most unbelievable content at our kids, use our kids’ images to create deepfakes that ruin their lives? Are they going to be able to do all of this and not be held accountable?
Because right now in America, they’re the only companies who cannot be taken to court for a simple suit when they violate their own terms of service when they violate their own commitments to their customers.
Again, everything here is nonsense. Any company or individual is protected in the exact same manner for third party speech. And the reason a website cannot be taken to court for violating their own terms of service is the exact same reason you can’t sue your local bar owner for tossing your ass on the sidewalk when you start a fight. It’s their property, their rules.
From there, Hawley goes a little nuts on misunderstanding AI:
And I would just submit to you that when it comes to AI and the generative technology that AI represents: I know that these big tech companies, who own almost all of the AI development tools, processes, and equipment in this country, I know they promise us that AI is going to be wonderful. It’s gonna be fantastic for all of us. Maybe that’s true, Mr. President, but it’s also true that AI is doing all kinds of incredible things. Here’s just one example. Here’s the AI chatbot from Bing (it’s Microsoft, I believe) having an interesting conversation with a journalist, in which the chatbot recommends… he says… or it says… “you’re married, but you’re not happy.” The journalist is a he. “You’re married, but you’re not satisfied. You’re married, but you’re not in love.”
The chatbot goes on to recommend that this individual — by the way, the chatbot has no idea how old this person is or who this person is — the chatbot goes on to recommend that this person leave his spouse, divorce his spouse, break up his family. Oh just another day at the office for AI.
So, look, this famous anecdote, which involved NY Times reporter Kevin Roose, got a lot of attention. What Hawley leaves out (conveniently) is that if you read the entire transcript, Roose repeatedly prodded and pushed Bing’s AI chat to get this kind of result, and it only came very deep into the conversation, which is a known issue with AI systems and why many now limit the length of a chat they’ll allow.
It was not just some random AI telling Roose to leave his wife.
And, even if it was… what exactly does Hawley think is a cause of action here that Roose might be able to bring against Microsoft? Is there something illegal about a bot telling you it loves you?
Hawley’s next example is tragic, but again, without any real cause of action:
Here’s another AI chatbot that recommended to a user — there’s no age restrictions here, there’s no way to verify who is having conversations with this technology — this chatbot recommended that the interlocutor kill himself, saying “if you wanted to die, why didn’t you do it sooner?”
The horrifying thing, Mr. President, is that this individual who was having this conversation did kill himself.
He took the advice of this technology.
Now, as we’ve discussed in the past, there are huge problems with ever putting liability on a third party for someone’s decision to end their life by suicide. For one, it gives those thinking about ending their life even more reason to do it, knowing they can “punish” someone they see as an enemy. Just blame the person whose life you want to ruin, and not only do they have to deal with the eventual guilt, they may also now face legal liability, even if they did nothing to encourage someone’s death.
And, in this case, we have no way of knowing if the AI chat is what pushed this individual over the line. We can argue that AI chat bots should be better about recommending help, and I’d agree with that, but to hold it responsible for someone’s death is a huge leap.
From there, Hawley throws out a bunch of stats that don’t really say what he wants them to say:
Now I’ll just point out that when it comes to our teenagers — and I’m the father of three — 58% of kids this last year said that they used generative AI. You may think, well, for research. Well, not only for that. No, almost 30% said that they used it to deal with anxiety or mental health issues. 22% said they used it to resolve issues with friends. 16% said they used it to deal with family conflicts.
Now, a normal, in-touch-with-reality Senator might look at those numbers and think “gee, perhaps we should fix our mental healthcare situation in the US, such that teenagers don’t have to rely on bots, but actually have access to good mental health care and social services that can help them!”
But, no, instead, Hawley wants to put liability on AI companies such that kids will have even fewer places to turn to when they run into mental health issues. Great job, Senator!
From there, Hawley goes down the old trope of comparing speech to poison. But, again, that’s not how any of this works:
I just submit to you this. I remember the great phrase of President Reagan. He used to say “trust but verify.” Maybe it’s time to allow the parents of this country to trust but verify. Maybe it’s time to put into the hands of the parents, vis-à-vis these companies, the same power they have against pharmaceutical companies who try to put asbestos in the baby powder. The same power that they have against any other company that would try to hurt their kids, harm their kids, lie to their kids: the power to go to court.
Again, literally poisoning kids is entirely different from “your kid might come across speech that they don’t like via AI.”
Not understanding that is just one of Hawley’s many confused positions.
And to have their day in court. They don’t have that power now. Why? Well, because this government gives the big tech companies a sweetheart deal, a deal nobody else in America gets: a subsidy worth billions of dollars a year known as Section 230. Big Tech can’t be held accountable. Big Tech can’t be put on the line. Big Tech can’t be made responsible.
Again, literally all of this is wrong. Section 230 does not apply just to big tech. And it’s not a “subsidy.” It’s just making sure that liability gets placed on the party who actually did the speech.
And also, it remains quite incredible that Republicans, who for years have fought for “tort reform” to try to stop “ambulance chasing” lawyers from filing frivolous lawsuits, are now eager to unleash exactly that kind of litigation when it comes to tech:
What this bill does, Mr. President: it’s a simple bill. It doesn’t contain regulation. It doesn’t contain new standards for this and that. None of that. It just says that these huge companies can be liable like any other company. No special protections from government. It just removes government protection. It just breaks up the Big Government-Big Tech cartel. That’s all it does, and it says parents can go into court on the same terms as anybody else and make their case. Surely that’s not too much to ask. You know, the companies, even they don’t want to be on the record saying it’s too much to ask.
Again, all of that is a lie. Section 230 already applies to everyone, not just big tech. And this bill doesn’t put them on an even playing field, instead it says that if you use AI, you suddenly get less protection than everyone else, and will have to go through long and expensive litigation without the protections of 230 for someone else’s speech.
Anyway, there was not much clarity over the past few days if anyone would object to the hotline. I’d heard last week that maybe Senator Cruz would, or possibly Senator Rand Paul, but that no Democrats were interested in opposing it, which is pretty crazy when you think about it. Opposing bad bills from Senator Hawley is the kind of thing that any Democrat should jump at.
Eventually, though, it was Cruz who stood up to point out that Hawley’s bill was a problem, both in how it worked and where it stood procedurally. He starts out with the procedural aspects, highlighting all the process the bill should have gone through in the committee where Cruz is the top GOP Senator, process that Hawley was trying to skip over:
I appreciate my friend from Missouri. I appreciate his passion. And I share his passion for reining in the abuses of Big Tech. Big Tech has a lot that they’re responsible for.
The Senator from Missouri is right that Big Tech is doing a lot of harm to our kids. The Senator from Missouri is also right that Big Tech has been complicit in the most far-reaching censorship of free speech our nation has ever seen. These are issues I’ve worked on for a long time: to rein in Big Tech, to rein in censorship, to protect free speech.
However, the approach this bill takes, I don’t think, substantively accomplishes the goals that the Senator from Missouri and I both want to accomplish. My concerns are both procedural and substantive. Procedurally, this bill has not yet been debated. This bill hasn’t been considered by the Commerce Committee. This bill hasn’t been marked up. This bill hasn’t been the subject of testimony to understand what its impact would be.
The Commerce Committee on which I’m the ranking member has a strong tradition of passing legislation in its jurisdiction. To date 22 bills have been reported out of the Commerce Committee. I’m more than happy to work with the Senator from Missouri — and he and I have worked on many issues together — on this bill.
But we need to make sure when legislating in this area that we’re doing so in a way that would be effective, and that wouldn’t have unintended consequences.
From there, Cruz also rightly points out the danger of a rushed attempt to use this sledgehammer of a tool as a way to regulate AI, especially without anyone bothering to explore what this bill would actually do:
You know, when it comes to AI: AI is a transformative technology. It has massive potential. It’s already having massive impacts on productivity, and the potential over the coming years is even greater. And there are voices in this chamber, many on the Democrat side of the aisle, that want government to play a very heavy hand in regulating AI. I think that’s dangerous.
I want America to continue to lead innovation.
Just this year in the United States, over 38 billion dollars have been invested in American AI startups. That’s this year. That is more than twice the investments in the rest of the world combined.
Look, there’s a global race for AI. And it’s a race that we are engaged in with China. China is pursuing it through government-directed funds. It would be bad for America if China became dominant in AI. Right now, the 38 billion dollars that was invested this past year in American AI companies is more than 14 times the investment of Chinese AI companies.
We need to keep that differential. We need to make sure that America is leading the AI revolution.
I mean, okay, if that’s a reason not to break the internet, that’s good, I guess?
Then we get to Cruz’s… somewhat odd comments on 230.
And I agree that Section 230 is too broad. In fact, the last time this body considered legislation, successful legislation, to rein in Section 230 was in 2017. We had a robust debate over reforms to Section 230 to close a loophole for websites that were profiting from sex trafficking on their platforms.
That bill introduced by Senator Portman — the Stop Enabling Sex Trafficking Act — ultimately gained 70 Senate co-sponsors, received extensive debate in committee, and passed out of the Senate with only two no votes.
I personally was proud to be an original co-sponsor of that important legislation, which is now law.
I mean, it’s still kinda bizarre to watch Senators still trying to take credit for FOSTA/SESTA when report after report after report has come out highlighting how it’s been a near total failure and has literally resulted in deaths, and many are calling for it to be repealed. But I guess it’s Senator Cruz, so what can you expect.
Senator Cruz continues to be very confused about Section 230, but, in his confusion he did say one very accurate thing:
When it comes to censorship, repealing 230 would not eliminate censorship. In fact, repealing 230, I fear, would lead to an increase in censorship.
For once, Cruz is actually correct about Section 230. Repealing Section 230 would mean that companies hosting speech would face expensive litigation over third-party speech, which would make them way more hesitant to host that speech. They’d be likely to pull down speech much more quickly (if they allowed it at all), especially in response to government pressure. Thus, a repeal is a clear recipe for more censorship whenever the government makes a move to suppress speech.
Of course, Cruz making sense can only last for a little while.
What I’ve long advocated — and I’m happy to work with the Senator from Missouri on — is using Section 230 reform to create an incentive not to censor. In other words, repealing Section 230 protection when Big Tech engages in censorship. When Big Tech stifles free speech, they lose their immunity from Congress in those circumstances, so that 230 becomes a safe harbor, an incentive to have a free and open marketplace for ideas.
I think that is tremendously important. It has been a passion of mine for years and I know the senator from Missouri cares deeply about it as well. So I extend an offer to my friend from Missouri. Let’s work together on this.
But this bill right now, I think, is not the right solution at this time, and so I object.
Cruz’s formulation for the bill he wants is obviously unconstitutional, as it would be a form of compelled speech: requiring that websites leave up content that they don’t wish to leave up, that violates their own terms of service, and that leads to harassment and abuse. But, you know, we’ll deal with that when it comes.
For now, Senator Cruz actually did the right thing and blocked Hawley’s bill.
Hawley got up after Cruz and the two had a bit of a back and forth that was mostly nonsense, though Hawley seemed immensely pleased by the idea that they could talk to each other rather than at each other (you’d think they’d have figured that out before, but okay…).
Hawley goes back to the point of letting the tort lawyers and the courts hash things out (something Cruz has supported in the past) and asks what’s wrong with that here. Cruz more or less responds that he’s fine unleashing tort lawyers on social media companies, but not fine with unleashing them on AI companies who he wants to lead the world:
Hawley: I remember my friend from Texas saying wisely in a Judiciary Committee hearing not that long ago, and the Senator will correct me if I misremember, but my memory is that the Senator from Texas said when it comes to these big tech companies, we can try to find a thousand ways to regulate them, but maybe the best thing we can do is just let people get into court, have their day in court. You know, just let them get in there. Let them make their arguments.
Don’t try to figure out how to micromanage them. Just open up the courtroom doors according to the usual rules.
Does my friend from Texas think that in the AI context that is any different? I mean, why would it be different there? Why wouldn’t that same approach be effective here?
Cruz: Well, listen, it is a good question, and it is true: I am quite open to using exposure to liability as a way to rein in the excesses of Big Tech. But I think we should do so in a focused and targeted way. AI is an incredibly important area of innovation, and simply unleashing trial lawyers to sue the living daylights out of every technology company for AI? I don’t think that’s prudent policy. We want America to lead in AI, and so I’m much more of a believer in using the potential of liability in a focused, targeted way to stop the behavior that we think is so harmful, whether it is behavior that is harming our kids — and I am deeply, deeply concerned about the garbage that Big Tech directs at our children — or whether it is the censorship practices.
I support the approach, but in my view it needs to be more targeted and produce the outcomes we want, rather than simply harming American technology across the board. That shouldn’t be our objective. Our objective should be changing their behavior so that they’re not engaging in conduct that is harmful to American consumers and to American children and parents.
Anyway, yesterday, it appears that Senator Ted Cruz helped block a bad internet bill, even if he did so for mostly confused reasons. Still, he did it, and deserves at least some amount of kudos for doing so.
But I sure do hope that someone hangs onto that clip of Cruz revealing that an outright repeal of Section 230 will lead to more censorship. Because I kinda feel that might be handy before too long.
Filed Under: ai, generative ai, josh hawley, liability, section 230, ted cruz