[Avodah] AI and Jewish Law

Akiva Miller akivagmiller at gmail.com
Sun Oct 29 19:24:42 PDT 2023


I've been wanting to write this post for a while, but I thought it might be
more appropriate for a computer-science group than for Avodah. Recent posts,
however, have gone deeper into the comp-sci side, so I'm hoping the
moderators will let this one through. If not, no hard feelings.

For example, R' Micha Berger wrote:

> It produces results that usually seem like it knows what it's
> talking about, because the training texts generally make sense.
> But that's why at times it fills in the pattern with nonsense.
> Hence what people call AI "hallucinations".
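
To make RMB's point concrete, here is a toy sketch of my own (drastically
simplified; real language models are incomparably larger, but the spirit is
similar) of a program that knows only which word tends to follow which, with
no notion of truth at all:

    # Toy "fill in the pattern" model: a bigram chain built from three
    # sentences. It learns word-follows-word statistics, nothing more.
    import random
    from collections import defaultdict

    corpus = ("the rambam wrote a commentary on the mishna . "
              "the ramban wrote a commentary on the torah . "
              "rashi wrote a commentary on the talmud .").split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)   # record every observed continuation

    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word])  # any plausible next word
        output.append(word)
    print(" ".join(output))

Run it a few times and it will happily print things like "the ramban wrote
a commentary on the mishna ." -- perfectly fluent, pattern-shaped, and
false. Nothing in the program ever checks its output against reality.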

Many decades ago, I came to the personal conclusion that the Turing Test (
https://en.wikipedia.org/wiki/Turing_test) might make for good
entertainment, but I rejected its value for determining whether a computer
could ever be deemed "intelligent". Producing reasonable responses in a
conversation is too easy for a sufficiently advanced computer. Genuine
intelligence, I felt, would best be proven by some show of originality:
computers are great at algorithms and calculations, but a truly
formidable task would be for one to come up with an original thought.

I never had a way of defining what I meant by an "original thought", but
if I can believe what has appeared in the press about AI, that problem
might have been solved: ChatGPT has been inventing lies out of whole cloth,
and to me, that certainly qualifies as an original thought. A parent will
be very upset the first time their child tells a lie, but inwardly,
they may be quite impressed by the child's creativity. There have been many
articles about AI-written papers that offer quotes and citations from
nonexistent sources. (See, for example,
https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/)

It is one thing to take real information and reach wrong conclusions. But
when a paper quotes an article, giving its title and the URL where it can
supposedly be found, yet no article of that title ever existed and the URL
never worked, I don't know how to describe that as anything other than a
deliberate lie. And my description of it as "deliberate" worries me very
much. (Even HAL would be horrified.)

As a former programmer, I wonder how all this works. I have only a vague
inkling of what is meant by terms like "large language model", but
couldn't there be something in the programming code that says, "If you
output something in quotation marks, there must be a real-world source
that used that particular text," or "Don't output a URL unless it works"?
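
From what I have read since, the unsettling answer is that there is no such
line of code to write the check into. The entire output step is, in spirit,
a loop that repeatedly picks a statistically likely next token. Here is a
minimal sketch of that loop (the names and the toy probabilities are mine,
purely illustrative, and not any real system's source code):

    import random

    def next_token_distribution(tokens):
        # Stand-in for the neural network: given the text so far, return
        # candidate next tokens with probabilities. (Hypothetical toy
        # values; a real model computes these from billions of learned
        # parameters.)
        return [("see", 0.5), ("http://example.com/paper", 0.3), (".", 0.2)]

    def generate(prompt, max_new_tokens=8):
        tokens = prompt.split()
        for _ in range(max_new_tokens):
            candidates, weights = zip(*next_token_distribution(tokens))
            # The only rule: append a likely next token. Nothing here asks
            # whether a quoted title or URL actually exists.
            tokens.append(random.choices(candidates, weights=weights)[0])
        return " ".join(tokens)

    print(generate("For details"))

A URL comes out of that loop the same way any other words do: token by
token, because URL-shaped text was likely in that spot. There is no
separate place where quotations or links are assembled, so there is no
natural place to bolt on a "verify this" rule.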

R' Joel Rich wrote:
> In its current format ChatGPT will not have all the data on the
> poseik because it will only include written responses not any of
> the discussions that led up to those and not ones that were oral.

Yes, but that's true of ALL human poskim too. For example, how many times
have I heard, "Acharon XYZ would not have paskened that way if he had seen
what Rishon ABC had written, but it was not yet published in XYZ's day."
But this never stopped anyone from trying their hardest and issuing a psak.

RJR again:
> A more basic issue in the whole meta-physical context is what is
> consciousness or sentience, how do we know the ChatGPT will not
> have hashraat shechina.

Yes indeed, that is THE most basic issue here. I can't help but wonder if
these computers might genuinely have the sentience of a young child. They
sure do lie like one!

Akiva Miller