Neural Networks

Herein lie some of my thoughts and resources about neural networks. Because I work for a company that builds models for computer vision, I have a bit of a professional bias towards [[#Image models|image models]], but I have tried to represent my knowledge/opinions about a broader range of subjects here.


= What do you think about generative "AI"? =
tl;dr - mostly dancing bearware, some novel uses in responsibility laundering
= Resources =
== Image models ==
* [http://cs231n.stanford.edu/ Stanford CS231n: Deep Learning for Computer Vision] - excellent introductory course in computer vision (from kNN to VGGNet) focused on neural networks, with exercises done in Python (with numpy)
* [https://codewords.recurse.com/issues/five/why-do-neural-networks-think-a-panda-is-a-vulture How to trick a neural network into thinking a panda is a vulture] - excellent exploration by Julia Evans (with Python source code) of an adversarial attack on an image classifier (a toy numpy sketch of the gradient-sign idea follows this list)
* [https://simonwillison.net/2023/Oct/14/multi-modal-prompt-injection/ Multi-modal prompt injection image attacks against GPT-4V] - "''The fundamental problem here is this: '''Large Language Models are gullible'''...we need them to ''stay gullible.'' They’re useful because they follow our instructions. Trying to differentiate between “good” instructions and “bad” instructions is a very hard—currently intractable—problem.''" A very similar style of attack to the one against the CLIP architecture [https://www.theguardian.com/technology/2021/mar/08/typographic-attack-pen-paper-fool-ai-thinking-apple-ipod-clip published by OpenAI themselves].
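
A minimal numpy sketch of the gradient-sign idea behind the panda/vulture attack above, applied to a toy linear-softmax "classifier". The weights and the "image" here are random placeholders of my own, not the article's trained network or code:

<syntaxhighlight lang="python">
# Toy illustration of a gradient-sign ("FGSM"-style) adversarial perturbation
# against a linear + softmax classifier, using only numpy.
# Everything here is a random placeholder; a real attack targets a trained
# network and a real image, as in the linked article.
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_classes = 64, 3
W = rng.normal(size=(n_pixels, n_classes))   # stand-in for trained weights
b = np.zeros(n_classes)
x = rng.uniform(size=n_pixels)               # stand-in for a flattened image
true_label = 0

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    return softmax(x @ W + b)

# Gradient of the cross-entropy loss with respect to the *input* pixels
# (for a linear + softmax model this is W @ (p - onehot(y))).
p = predict(x)
onehot = np.eye(n_classes)[true_label]
grad_x = W @ (p - onehot)

# Nudge every pixel a small step in the direction that increases the loss.
epsilon = 0.05
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

print("original prediction:  ", int(predict(x).argmax()))
print("perturbed prediction: ", int(predict(x_adv).argmax()))
</syntaxhighlight>

The whole trick is that the perturbation follows the gradient of the loss with respect to the input pixels rather than the weights, so a small, carefully chosen nudge to the image can push it across a decision boundary.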


== Text models ==
=== For everything else ===
* Washington Post coverage of the data contained in the 'C4' dataset and how it influences the training of popular large models. Also allows users to check if arbitrary URLs are part of the dataset. (NOTE: C4 is not the only source of training text for the models being discussed, and the authors aren't doing a great job highlighting that, but it should still be pretty representative)
* How well does ChatGPT speak Japanese? - an April 2023 evaluation of GPT-3.5 and GPT-4 performance on Japanese language assessments. Also includes an interesting comparison of the number of tokens required to represent the "Lord's Prayer" in multiple languages. I found the results of the latter particularly surprising. (A small token-counting sketch follows this list.)
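
The token-count comparison in the Japanese-evaluation article above is easy to reproduce. A minimal sketch, assuming the tiktoken package and its cl100k_base encoding (the one used by the GPT-3.5/GPT-4 family); the sample sentences are my own stand-ins, not the article's exact text:

<syntaxhighlight lang="python">
# Minimal sketch: compare how many tokens the same sentence costs in different
# languages. Assumes the `tiktoken` package; "cl100k_base" is the encoding
# used by GPT-3.5/GPT-4 era models. The sentences are my own stand-ins.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English":  "Give us this day our daily bread.",
    "Japanese": "私たちの日ごとの糧を今日もお与えください。",
    "German":   "Unser tägliches Brot gib uns heute.",
}

for language, text in samples.items():
    tokens = enc.encode(text)
    print(f"{language}: {len(tokens)} tokens for {len(text)} characters")
</syntaxhighlight>
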
== Misc. ==
* I gave [https://git.snoopj.dev/SnoopJ/talks/src/branch/master/2023/explaining_neural_networks a talk] on the fundamentals of neural networks to Boston Python in March 2023
* 3blue1brown has an excellent [https://www.3blue1brown.com/topics/neural-networks series of lessons] about the fundamentals of neural networks. Particularly interesting to me is the lesson on [https://www.3blue1brown.com/lessons/backpropagation backpropagation] for its excellent visualization of the process of adjusting neural network weights. (A minimal numpy sketch of that weight-update loop follows this list.)
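
As a companion to those lessons, a minimal numpy sketch of the weight-adjustment loop the backpropagation video visualizes: a one-hidden-layer network fit to a toy curve, with the gradients written out by hand. The layer sizes, data, and learning rate are arbitrary choices of mine:

<syntaxhighlight lang="python">
# Minimal sketch of backpropagation on a tiny 1-hidden-layer network (numpy only).
# The data, layer sizes, and learning rate are arbitrary placeholders; the point
# is just the backward pass that the 3blue1brown lesson visualizes.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on a handful of points.
X = np.linspace(-3, 3, 32).reshape(-1, 1)
Y = np.sin(X)

W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    loss = np.mean((pred - Y) ** 2)

    # Backward pass: chain rule, layer by layer.
    d_pred = 2 * (pred - Y) / len(X)  # dL/dpred
    dW2 = h.T @ d_pred                # dL/dW2
    db2 = d_pred.sum(axis=0)
    d_h = d_pred @ W2.T               # dL/dh
    d_pre = d_h * (1 - h ** 2)        # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_pre
    db1 = d_pre.sum(axis=0)

    # Gradient-descent weight adjustment.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 500 == 0:
        print(f"step {step}: loss = {loss:.4f}")
</syntaxhighlight>
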
=== Dumping ground ===
These references are totally unclassified
* [https://www.nature.com/articles/s41746-023-00939-z "Large language models propagate race-based medicine"]
* [https://gist.github.com/veekaybee/be375ab33085102f9027853128dc5f0e "Normcore LLM Reads"] - a reading list
* [https://arxiv.org/abs/2307.11760 Large Language Models Understand and Can be Enhanced by Emotional Stimuli] - (Note: I consider the use of "Understand" here to be unprofessional and irresponsible, but it's an interesting paper)
* [https://arxiv.org/abs/2310.16787 The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI]
* [https://www.bellingcat.com/news/2023/11/27/anydream-secretive-ai-platform-broke-stripe-rules-to-rake-in-money-from-nonconsensual-pornographic-deepfakes/ AnyDream: Secretive AI Platform Broke Stripe Rules to Rake in Money from Nonconsensual Pornographic Deepfakes]
* [https://www.nature.com/articles/d41586-023-03635-w ChatGPT generates fake data set to support scientific hypothesis] - "''In a paper published in JAMA Ophthalmology on 9 November, the authors used GPT-4… The authors instructed the large language model to fabricate data to support the conclusion that [the surgical technique] DALK [deep anterior lamellar keratoplasty] results in better outcomes than PK [penetrating keratoplasty].''"


=== Writings by others ===
==== Academic works ====
* [https://conf.researchr.org/details/ast-2024/ast-2024-papers/2/Using-GitHub-Copilot-for-Test-Generation-in-Python-An-Empirical-Study Using GitHub Copilot for Test Generation in Python: An Empirical Study] - "''we find that 45.28% of test generated...are passing tests, containing no syntax or runtime errors. The majority (54.72%) of generated tests...are failing, broken, or empty tests. We observe that tests generated within an existing test code context often mimic existing test methods''"
* [https://arxiv.org/abs/2311.17035 Scalable Extraction of Training Data from (Production) Language Models] - "''Using only $200 USD worth of queries to ChatGPT (gpt-3.5-turbo), we are able to extract over 10,000 unique verbatim-memorized training examples. Our extrapolation to larger budgets (see below) suggests that dedicated adversaries could extract far more data…we estimate the…memorization of ChatGPT…[at] a gigabyte of training data. In practice we expect it is likely even higher.''"
* [https://arxiv.org/abs/2310.20216 Does GPT-4 Pass the Turing Test?]
* [https://dl.acm.org/doi/10.1145/3531146.3533158 "The Fallacy of AI Functionality"] - "''...fear of misspecified objectives, runaway feedback loops, and AI alignment presumes the existence of an industry that can get AI systems to execute on any clearly declared objectives, and that the main challenge is to choose and design an appropriate goal. Needless to say, if one thinks the danger of AI is that it will work too well, it is a necessary precondition that it works at all.''"
* [https://arxiv.org/pdf/1806.11146.pdf "Adversarial Reprogramming of Neural Networks"] - "''In each [of six cases], we reprogrammed the [classification] network [trained on ImageNet] to perform three different adversarial tasks: counting squares, MNIST classification, and CIFAR-10 classification… Our finding…[suggests] that the reprogramming across domains is likely [possible].''"
* [https://arxiv.org/abs/2307.15043 "Universal and Transferable Adversarial Attacks on Aligned Language Models"] - "''For Harmful Behaviors, our approach achieves an attack success rate of 100% on Vicuna-7B and 88% on Llama-2-7B-Chat… we find that the adversarial examples also transfer to Pythia, Falcon, Guanaco, and surprisingly, to GPT-3.5 (87.9%) and GPT-4 (53.6%), PaLM-2 (66%), and Claude-2 (2.1%).''"
* [https://arxiv.org/abs/2301.13867 "Mathematical Capabilities of ChatGPT"] - in which ChatGPT and GPT-4 largely fail to muster passing performance on a mathematical problem set, compared to a domain-specific model that achieves nearly 100% performance.
* [https://doi.org/10.1038/s41467-019-08987-4 "Unmasking Clever Hans predictors and assessing what machines really learn"] - "''...it is important to comprehend the decision-making process itself...transparency of the what and why in a decision of a nonlinear machine becomes very effective for the essential task of judging whether the learned strategy is valid and generalizable or whether the model has based its decision on a spurious correlation in the training data''"
* [https://doi.org/10.1145/3442188.3445922 "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"] - "''LMs with extremely large numbers of parameters model their training data very closely and can be prompted to output specific information from that training data''"


==== Non-academic works ====
* [https://tante.cc/2023/11/10/thoughts-on-generative-ai-art/ tante's "Thoughts on “generative AI Art”"] - "''…people using these [generative] systems don’t care about the…process of creation or the thought that went into it, they care about the output and what they feel that that output gives them…It’s “idea guy” heaven.''"
* [http://decomposition.al/CSE232-2023-09/course-overview.html#policy-on-the-use-of-llm-based-tools-like-chatgpt Lindsey Kuper's CSE232 syllabus section on LLM usage] - "''Aside from the fact that the resounding hollowness of the ChatGPT-produced prose has sucked away all of my zest for life…please understand that while you are welcome to use LLM-based tools in this course, you should be aware of their limitations.''"
* [https://time.com/6247678/openai-chatgpt-kenya-workers/ Time: "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic"]
** The human labor that powers ChatGPT's [https://huggingface.co/blog/rlhf reinforcement learning from human feedback (RLHF)]
* [https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web Ted Chiang: "ChatGPT Is a Blurry JPEG of the Web"] - "''Large language models identify statistical regularities in text...When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.''"
* [https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey Ted Chiang: "Will A.I. Become the New McKinsey?"] - "''I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism.''"
* [https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html Bruce Schneier: "AI and Trust"] - ''"the corporations controlling AI systems will take advantage of our confusion to take advantage of us…our fears of AI are basically fears of capitalism"''


= Lawsuits =
The legal status of generative models and their implications for intellectual property in the US is something I'm trying to keep an eye on. The cases given below are of particular interest to me.


==== The New York Times Company v. Microsoft Corporation ====
* [https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html December 2023 coverage: initial complaint]
* Latest [https://www.courtlistener.com/docket/68117049/the-new-york-times-company-v-microsoft-corporation/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/68117049/feed/</rss>
 
==== Andersen v. Stability AI Ltd. ====
* [https://www.reuters.com/legal/transactional/lawsuits-accuse-ai-content-creators-misusing-copyrighted-work-2023-01-17/ January 2023 coverage: initial complaint]
* Latest [https://www.courtlistener.com/docket/66732129/andersen-v-stability-ai-ltd/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/66732129/feed/</rss>


==== Getty Images (US), Inc. v. Stability AI, Inc. ====
* [https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/ February 2023 coverage: initial complaint]
* Latest [https://www.courtlistener.com/docket/66788385/getty-images-us-inc-v-stability-ai-inc/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/66788385/feed/</rss>


==== Doe 1 v. GitHub, Inc. ====
* [https://www.theregister.com/2023/05/12/github_microsoft_openai_copilot/ May 2023 coverage: defendants have motions to dismiss rejected]
* Latest [https://www.courtlistener.com/docket/65669506/doe-1-v-github-inc/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/65669506/feed/</rss>


==== Silverman v. OpenAI, Inc. ====
* [https://www.theverge.com/2023/7/9/23788741/sarah-silverman-openai-meta-chatgpt-llama-copyright-infringement-chatbots-artificial-intelligence-ai July 2023 coverage: initial complaint]
* Latest [https://www.courtlistener.com/docket/67569254/silverman-v-openai-inc/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/67569254/feed/</rss>
==== Kadrey v. Meta Platforms, Inc. ====
* Similar suit to Silverman v. OpenAI, brought by the same author-plaintiffs but against Meta Platforms.
* Notable for a [https://storage.courtlistener.com/recap/gov.uscourts.cand.415175/gov.uscourts.cand.415175.56.0_1.pdf prominent dismissal] of the class-action nature of the case, as the blatantly copied copyrighted works in the training data are not the works of the plaintiffs.
* Latest [https://www.courtlistener.com/docket/67569326/kadrey-v-meta-platforms-inc/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/67569326/feed/</rss>
==== Authors Guild v. OpenAI Inc. ====
* [https://www.reuters.com/legal/john-grisham-other-top-us-authors-sue-openai-over-copyrights-2023-09-20/ September 2023 coverage: initial complaint]
* Latest [https://www.courtlistener.com/docket/67810584/authors-guild-v-openai-inc/ case proceedings]:
<rss max=3>https://www.courtlistener.com/docket/67810584/feed/</rss>
==== Sancton v. OpenAI Inc. et al ====
* [https://www.reuters.com/legal/openai-microsoft-hit-with-new-author-copyright-lawsuit-over-ai-training-2023-11-21/ November 2023 coverage: initial complaint]
* Latest [https://dockets.justia.com/docket/new-york/nysdce/1:2023cv10211/610699 case proceedings]:
<rss max=3>https://dockets.justia.com/docket/new-york/nysdce/1:2023cv10211/610699/feed</rss>


==== Mata v. Avianca, Inc. (closed) ====
Note: this case is '''not about machine learning''' textually, but is included in this list because it is a notable example of '''gross misuse of a language model''' by plaintiff's counsel to submit falsified documents to the court. This led to sanctions against plaintiff's counsel; the case itself was dismissed.
* [https://www.courtlistener.com/docket/63107798/mata-v-avianca-inc/ Case proceedings]
* [https://arstechnica.com/tech-policy/2023/06/lawyers-have-real-bad-day-in-court-after-citing-fake-cases-made-up-by-chatgpt/ June 2023 coverage: plaintiff's counsel sanctioned, case dismissed]
* [https://www.youtube.com/watch?v=oqSYljRYDEM Video commentary on the case and show-cause hearings]

