Nocton v Lord Ashburton

Nocton v Lord Ashburton [1914] AC 932 is a leading English tort law case concerning professional negligence and the conditions under which a person will be taken to have assumed responsibility for the welfare of another. It confirmed that such liability extended to unequivocal professional advice given within a fiduciary relationship.

Nocton v Lord Ashburton
Court: House of Lords
Decided: 19 June 1914
Citation(s): [1914] AC 932
Keywords
Professional negligence, assumption of responsibility

Facts

Lord Ashburton bought a property on Church Street, Kensington, London, for £60,000. His solicitor, Nocton, advised him to seek the release of part of the property (so that it could be leased or sold), even though it stood as security for a mortgage. This was bad advice because, as Nocton in fact knew, the release would leave the security insufficient. Lord Ashburton alleged that the advice was given not in good faith but in Mr Nocton's self-interest.

Judgment

Viscount Haldane LC, delivering the leading judgment, held that despite Derry v Peek (which had disallowed claims for misstatements outside the tort of deceit), Nocton was liable for his bad advice, given the fiduciary relationship between solicitor and client.


See also

Notes
