Israeli Supreme Court Imposes Duties on Organizations Using AI

Transparency regarding the use of artificial intelligence (AI) and human oversight over AI outputs – these are some of the new duties imposed by the Israeli Supreme Court’s recent landmark decision on artificial intelligence (Administrative Appeal 63194-08-25 Cohen v. Ramat Gan Municipality et al. (March 23, 2026)). The decision was delivered on an appeal filed by the father of a minor with a disability, after the Education Department of the Ramat Gan Municipality denied his request for transportation services from the father’s home to the son’s educational institution. The request was denied based on the provisions of a Ministry of Education Director General Circular that, as it turned out, did not exist: the circular was an AI hallucination, concocted by the AI tool that the Education Department used when drafting its response to the father.

The Supreme Court (Justices Grosskopf and Canfy-Steinitz, and Deputy President Sohlberg) emphasized that the decision was not intended to "throw the baby out with the bathwater." It is not the use of AI itself that is improper, but rather its use without appropriate oversight, verification, and validation of the accuracy of its output. Relying on AI output without proper review may lead to erroneous decisions that infringe upon the rights of individuals. This is particularly true in the context of the inherent power imbalance between a public body and the citizen, as the latter is less likely to discover that a decision impacting them is based on an erroneous AI output that detrimentally affects them.

Although the decision was issued against a public authority, the normative logic it lays out may also apply to other organizations characterized by knowledge and power gaps, such as businesses dealing with consumers or employers recruiting employees. In a world where companies use AI on a daily basis—in drafting responses to customers, handling complaints, monitoring and moderating content, operating chatbots, providing content recommendations, and more—AI hallucinations and errors run a genuine risk of adversely affecting the rights of consumers, end-users, and employees.

We have compiled several recommendations for the implementation and use of AI systems, arising from the Supreme Court's decision:

  • Transparency! The Supreme Court noted that although a duty of disclosure regarding the use of AI has not yet been codified in Israeli law, there is compelling rationale for establishing such a duty – particularly when the use of AI significantly impacts the individual. Implement an approach that promotes transparency toward individuals in appropriate cases (for example, where a consumer contacts customer service through a chatbot-based service).
  • Human in the Loop. The decision addressed the need for human involvement and oversight in the implementation and use of AI, and emphasized that the responsibility for the final output is not eliminated simply due to the use of a technological tool. Ensure human involvement throughout all stages of the AI life cycle – from development (where relevant), through implementation within the organization, to daily use and decision-making. Document the oversight measures taken at each stage.
  • Be skeptical. The Ramat Gan Municipality's primary error was basing its decision on a non-existent Director General Circular. According to the Supreme Court, the more professional, orderly, and well-reasoned a response seems, the more difficult it becomes for an individual to detect its flaws. Consequently, it is insufficient for an AI output to merely "sound convincing" – every reference, quotation, legal source, or factual claim in the output must be carefully examined and verified.
  • You decide. The Supreme Court cautioned against delegation of discretion to automated systems, where an entity abdicates its own judgment and exclusively relies on AI. Avoid basing decisions that materially impact individuals solely on AI outputs. Such matters require effective human oversight, the scope of which should be scaled in proportion to the decision’s potential impact.
  • Objection and appeal. Establish an appeal mechanism to allow individuals to challenge AI-driven decisions that affect their rights, even in cases involving human-in-the-loop oversight. The appeal mechanism must be operated and managed by humans rather than by AI tools. Its primary purpose is to bridge the information gap between the decision-making body and the individual.
  • Control. Map the AI systems and tools utilized within the organization. Adopt an internal AI policy governing employees, suppliers, and relevant third parties. The policy should define, among other things, the permissible uses of AI, the thresholds for enhanced human oversight, the criteria for verifying the accuracy of AI outputs, and the personnel entrusted with approving AI-based decisions. Adopting a comprehensive policy is fundamental to mitigating potential risks and strengthening the framework of responsible corporate governance regarding the use of AI.

The Cyber, Privacy and Copyright Group recently launched a DPO (Data Protection Officer) service: a comprehensive, tailor-made service for every organization, designed to provide a complete solution to all privacy and personal data protection needs, including the use of AI tools. The service includes, among other things, the creation and management of an organization-specific privacy program, the drafting and implementation of required policy documents and procedures, the performance of law-mandated data protection impact assessments (DPIAs), and other activities required for your organization to comply with applicable law.