Florida Digital Rights Association

Alternatives to age verification for AI chats

By FLDR, Joseph | 6 min read

Privacy-respecting solutions

Warning: this blog post discusses the sensitive topics of suicide and self-harm.

The special session of the Florida Legislature, where our lawmakers will once again consider the AI Bill of Rights, starts tomorrow. The bill is mostly fine, but its mistake is assuming that a privacy-invasive measure like age (identity) verification will protect anyone. As we have already detailed, age verification will only force Floridians to surrender their identities to Big Tech without mitigating the dangers of AI it purports to address.

However, we do agree that there is a problem! Of course we should be concerned with how Floridians are coming to harm from AI tools like chatbots. There are dangers posed not only to minors but to anyone who goes too far down the rabbit hole of AI. At the Florida Digital Rights Association, we have a positive vision of technology: we consider its benefits, its drawbacks, and the relationship we ought to have with it. We work backwards from that vision to advocate for a world where tech works for us rather than one where we work for tech.

To that end, here are some suggestions for how the AI Bill of Rights could be fixed. These alternatives come closer to solving the problem the Legislature wants to address while respecting the right to privacy of Floridians. They are not perfect solutions, and we may detail their drawbacks in the future, but they do a better job of respecting our personally identifiable information. Reject age verification and consider these solutions instead.

Moderate the output of AI

Why is it possible to be talked into suicide by AI? Unfortunately this issue has been raised in the context of a child, but adults can fall into this trap as well. Before considering who should or should not have access to AI, why not consider what kinds of topics are available for users to discuss? That way we protect everyone, not just our young people.

Only a few topics are considered sensitive because of the real-world harm they can lead to: discussions of suicide and self-harm, and of harming other people. While it is highly likely that OpenAI, Anthropic, and others already have measures in place to keep these topics from coming up, lawmakers can consider codifying rules for what kinds of discussions cannot be had.

Now there may be First Amendment concerns around moderating LLMs. People want the freedom to express themselves and free access to information. The context around these topics can be sensitive, but often the conversation is harmless because the user is not in a place to be adversely affected by it. There could be guidelines for how AI is trained to discuss these topics up to a certain point, and perhaps those guidelines need to be written into law.

Age verification for the topics of concern

Instead of having everyone who uses AI hand a face scan to an AI company, what if verification were limited to those who repeatedly prompt the AI about the sensitive topics? If someone, for one reason or another, needs to investigate these issues, that person can verify their age.

Why allow this at all? It would work as a two-tiered system. Because not every conversation about these sensitive topics actually leads to anything serious, it is probably not necessary to ban them outright. But by adding age verification for those topics, you raise the bar to make sure a minor is not having this conversation without their parents' consent. It raises the barrier in the specific instances where there is reason to be concerned, rather than invading everyone's privacy over a topic that may never come up at all.

In the same way that AI companies can train AI to speak only certain ways about certain topics, the application can include triggers that inform the company when a conversation has veered too far. At that point, the user can be prompted to verify their age in order to continue talking about the sensitive subject.
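To make this concrete, here is a minimal sketch of that two-tiered gate. Everything here is illustrative: the keyword matching stands in for a real content classifier, and the function and topic names are our own hypotheticals, not any vendor's actual API.

```python
# Hypothetical sketch: gate sensitive topics behind age verification
# instead of verifying every user up front.

SENSITIVE_TOPICS = {"self_harm", "harming_others"}

def classify_topics(message: str) -> set[str]:
    """Placeholder for a real content classifier (e.g. a moderation model).
    Simple substring checks stand in for the real thing."""
    flags = set()
    lowered = message.lower()
    if "hurt myself" in lowered:
        flags.add("self_harm")
    if "hurt someone" in lowered:
        flags.add("harming_others")
    return flags

def handle_message(message: str, user_age_verified: bool) -> str:
    """Decide whether the conversation continues or the platform must
    first prompt this specific user for age verification."""
    topics = classify_topics(message) & SENSITIVE_TOPICS
    if topics and not user_age_verified:
        # Only this user, only now -- everyone else stays anonymous.
        return "age_verification_required"
    return "continue_conversation"
```

The point of the design is in the last branch: most users never trip the classifier, so most users are never asked to identify themselves.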

Using age verification only for the specific topics of concern lets you protect the privacy of the vast majority of users while still providing the space for a subset to talk more deeply about the subject with the AI.

Of course, if the user continues to descend into darker conversations, their account can be suspended. An account could even be suspended immediately if the need is apparent; there is no reason an AI platform would have to wait between age verification and suspension. There is still room to act swiftly in the name of safety while maintaining a system with wiggle room for people to be human.

Mandatory parental controls instead of mandatory age verification

Parental controls have been available on Google and Apple operating systems for a while. Many AI platforms already provide parental controls, including OpenAI. Focus on empowering parents by requiring that all AI chatbot platforms have robust and easy-to-use parental controls.

There are many design considerations to make. First, the platform's parental controls should connect to whatever parental controls exist at the operating system level. Second, the platform should offer its own parental controls even when none are enabled at the device level. Third, the user should be prompted about parental controls very early in the account creation process, so the opportunity to enable them is never missed.
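The three design points above can be sketched in a few lines. This is a toy model, not any platform's real signup flow; the class and field names are hypothetical.

```python
# Illustrative account-setup flow honoring the three design points:
# inherit OS-level controls, offer the platform's own controls
# regardless, and surface the choice at account creation.

from dataclasses import dataclass

@dataclass
class DeviceSettings:
    os_parental_controls_enabled: bool  # e.g. Screen Time / Family Link state

def setup_account(device: DeviceSettings, wants_controls: bool) -> dict:
    """Return the new account's parental-controls state.
    `wants_controls` is the answer to a prompt shown during signup."""
    return {
        # Enabled if inherited from the OS or chosen during signup.
        "parental_controls_enabled":
            device.os_parental_controls_enabled or wants_controls,
        "inherited_from_os": device.os_parental_controls_enabled,
    }
```

Note that the signup prompt still appears even when the device reports no controls, which covers the second design point.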

If you wanted to, you could go a step further and make "parental controls" available to anyone who wants another layer of filtering to their access to AI. If someone is an adult but still wants help to manage what topics may come up with an AI, these tools can be made available to them as well. It's another situation in which thinking about safety for all raises solutions that are commonly only considered for minors.

Increase education about AI

Yes, AI is moving fast. At the same time, the core premise has not changed since the launch of ChatGPT: an AI that can talk about anything based on the prompts of the user is not new. Therefore we should be educating people, from students to adults, about AI, what it can do, and how it can come to harm them.

By raising awareness, people can be better prepared when using AI. They can be cognizant of how real it might feel to them, or how convincing it might be, by listening to the stories of others and understanding how they can control AI to avoid that.

An awareness campaign is nothing new for Florida. We have run them around tobacco use and texting while driving. We can just as well talk about how AI can be used for better and for worse.

Please find another way

If you are a lawmaker considering the consequences of the AI Bill of Rights, we hope to have nudged you in a better direction. There are more options out there to protect Floridians from the potential harms of technology without trampling on their rights, rights which in many cases are part of what protects them. There is no reason to invade the privacy of everyone using ChatGPT to solve the very specific problem of users harming themselves or others. To do so would be to use public safety as an excuse for surveillance.

You can find our original analysis of the bill to understand why we are opposed to age verification.

If you are a citizen in Florida, please call your state lawmakers this week and tell them to consider these alternatives, or else to reject the AI Bill of Rights. We deserve to have technology that respects us. The government should not be forcing technology to disrespect us. Find your Florida State Senator here and your Florida State Representative here.

Let's hope we can start to turn the tide on our right to privacy this year.


Written by Joseph, Organizer

Enthusiast and advocate for digital privacy, cybersecurity, and free & open source software. Hobbyist. Wants to see the world get better.


Follow us on Bluesky and Mastodon. Share your thoughts on this blog post!