Ernest Gellhorn

Ernest Gellhorn (1935–2005) was an American academic and legal scholar. He graduated from the University of Minnesota and the University of Minnesota Law School,[1] and was a Guggenheim fellow.[2] An expert on administrative and antitrust law, Gellhorn held a number of academic appointments, including dean of law at three universities: Arizona State University, Case Western Reserve University, and the University of Washington.[3] Additionally, he was Boyd Professor of Law at the University of Virginia, Foundation Professor of Law at George Mason University, and professor of law at Duke University.

Gellhorn was also active in the American Bar Association, practiced law in Washington, D.C., and testified before government agencies such as the Federal Trade Commission.[4]

Sources

  1. "Ernest Gellhorn Dies – The Mason Gazette - George Mason University". Archived from the original on 2011-07-19. Retrieved 2011-02-11.
  2. "Ernest Gellhorn - John Simon Guggenheim Memorial Foundation". Archived from the original on 2011-06-28. Retrieved 2011-02-11.
  3. Antitrust & Competition Policy Blog: Ernest Gellhorn Passes Away May 7
  4. "Archived copy" (PDF). Archived from the original (PDF) on 2011-07-21. Retrieved 2011-02-11.CS1 maint: archived copy as title (link)