Using ChatGPT to find suspicious lines in Terms of Service

ChatGPT, by OpenAI, can summarize large texts and perform text analysis. I tested it on the Privacy Policy and Terms and Conditions of a P2P cryptocurrency exchange. Will it be able to help me find suspicious lines?


Lately, I've been testing out the new ChatGPT by OpenAI. I must say the quality of the answers it gives amazes me. But after a lot of testing, I have found one thing: it is pretty dumb.

ChatGPT does not know anything. It just combines words logically, putting them together in a way that makes sense and looks good. Nothing more. It lacks knowledge. It can say true things about obvious topics, but many times it is making things up. That makes it dangerous: if you don't look closely, you may not notice that what it is telling you is completely fake.

But it does one thing very well, and that is text analysis, or at least that is what I thought. You can send it any long text, and it will summarize it surprisingly well. It also "learns" from what you just sent and will use it for future answers in that particular chat.

While playing with this feature and making it summarize and analyze some text, it occurred to me that it might be useful for one of the most tedious tasks when listing new services: reading ToS and Privacy Policies. I do have a script that handles the most tedious part by filtering out almost all irrelevant text, searching for particular terms that appear 90% of the time in the Terms of Service of sites that potentially carry out KYC practices. But I still have to review the results and read them.
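The post doesn't show the script or its term list, but a filter like the one described could be sketched roughly as follows (the red-flag terms here are my assumptions, not the author's actual list):

```python
import re

# Hypothetical red-flag terms; the actual terms the author's script
# searches for are not given in the post.
SUSPICIOUS_TERMS = [
    "kyc", "know your customer", "aml", "anti-money laundering",
    "identity verification", "proof of address", "government-issued id",
]

def filter_suspicious_lines(text):
    """Return only the lines of a ToS/Privacy Policy containing red-flag terms."""
    pattern = re.compile(
        "|".join(re.escape(term) for term in SUSPICIOUS_TERMS),
        re.IGNORECASE,
    )
    return [line for line in text.splitlines() if pattern.search(line)]

# Small usage example with a made-up ToS snippet:
tos = """Welcome to the exchange.
Trades without KYC are limited to CHF1000/day.
Have fun trading."""
print(filter_suspicious_lines(tos))
```

A crude keyword filter like this discards most of the document, but the surviving lines still have to be read by a human, which is exactly the step the author hoped ChatGPT could take over.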

So I copied and pasted the whole Privacy Policy of a particular exchange I had pending review. I told it to "Read carefully the following Privacy Policy of a P2P cryptocurrency exchange" and, right after this, I pasted the whole text as it was copied from the site:

A zoomed-out image of the pasted text (the text did continue for a bit). Sensitive data has been redacted.

Then, I asked ChatGPT to "Check, and list any suspicious lines that may be in conflict with user privacy or anonymity". I did not mention KYC or AML, just in case it didn't know what those were. This is the answer I was given:

ChatGPT's answer to the above request. Sensitive information redacted.

After reading the entire Privacy Policy from this exchange, I think this is a pretty fair summary of what's in it. After this, I did the same with the site's Terms and Conditions text:

A zoomed-out image of the pasted text (the text did continue for a bit). Sensitive data has been redacted.

In this case, the exchange's Terms of Service did mention KYC procedures (although they didn't actually carry them out):

"In accordance with Swiss law, trades without KYC are regulated to a limit of CHF1000/day and CHF100’000/year. Once this limit has been reached, you will not be able to take or make any further buy and/or sell offers."

If ChatGPT were actually understanding the text, it should have warned me about the KYC mention in the Terms of Service. But that was not the case:

It does say that the ToS do not mention any requirements for identity verification, which is true. But it was not able to tell me anything about the line that explicitly mentions the term KYC:
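What makes this failure striking is that even the most naive check, a case-insensitive substring search over the quoted sentence, finds the term immediately:

```python
# The sentence quoted above from the exchange's ToS.
tos_line = ("In accordance with Swiss law, trades without KYC are regulated "
            "to a limit of CHF1000/day and CHF100'000/year.")

# A plain case-insensitive substring search finds the term ChatGPT missed.
found = "kyc" in tos_line.lower()
print(found)  # True
```

A one-line `grep` would have flagged this sentence, while a model that supposedly "read carefully" did not.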

So, in conclusion, I can't rely on ChatGPT to do the dirty work for me. It may be useful for summarizing long texts, but it seems clear that it does not understand what is being said, nor which parts of the text are important, even when you ask for them specifically.

I ran some further tests, if you are interested, sending the whole ToS text again and asking directly after the text, or asking even more specifically. But nothing: it still said that there weren't any "KYC" mentions.