QuickCode

QuickCode (formerly ScraperWiki) was a web-based platform for collaboratively building programs to extract and analyze public (online) data, in a wiki-like fashion. "Scraper" refers to screen scrapers, programs that extract data from websites. "Wiki" means that any user with programming experience could create or edit such programs for extracting new data, or for analyzing existing datasets.[1] The site's main use was to provide a place for programmers and journalists to collaborate on analyzing public data.[4][5][6][7][8][9]

QuickCode
Available in: English
Revenue: Sponsored by 4iP[1]
URL: quickcode.io
Alexa rank: 133,089 (April 2014)[2]
Current status: Active
Content license: Affero General Public License[3]

The service was renamed circa 2016, as "it isn't a wiki or just for scraping any more".[10] At the same time, the eponymous parent company was renamed "The Sensible Code Company".[10]

Scrapers

Scrapers are created using a browser-based IDE or by connecting via SSH to a server running Linux. They can be written in a variety of programming languages, including Perl, Python, Ruby, JavaScript and R.
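As a minimal sketch of this workflow, the example below uses the open-source scraperwiki Python library (its scrape and sqlite.save helpers); the target URL, CSS selectors and field names are illustrative assumptions, not details taken from this article:

    # Sketch of a ScraperWiki-style scraper in Python.
    # The target URL, selector and field names are hypothetical.
    import scraperwiki
    import lxml.html

    # Fetch the page to be scraped (hypothetical URL).
    html = scraperwiki.scrape("https://example.org/members")
    root = lxml.html.fromstring(html)

    # Extract one record per table row and store it in the scraper's
    # SQLite datastore, keyed on "name" so re-runs update in place.
    for row in root.cssselect("table.members tr"):
        cells = [cell.text_content().strip() for cell in row.cssselect("td")]
        if len(cells) >= 2:
            scraperwiki.sqlite.save(
                unique_keys=["name"],
                data={"name": cells[0], "party": cells[1]},
            )

Saving with unique_keys makes the scraper idempotent: re-running it updates existing rows rather than inserting duplicates.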

History

ScraperWiki was founded in 2009 by Julian Todd and Aidan McGuire. It was initially funded by 4iP, the venture capital arm of the TV station Channel 4. It later attracted an additional £1 million round of funding from Enterprise Ventures.

Aidan McGuire is the chief executive officer of The Sensible Code Company.


References

  1. Jamie Arnold (2009-12-01). "4iP invests in ScraperWiki". 4iP.
  2. "Scraperwiki.com Site Info". Alexa Internet. Retrieved 2014-04-01.
  3. "GNU Affero General Public License v3.0 - sensiblecodeio". GitHub. Retrieved 30 December 2017.
  4. Cian Ginty (2010-11-19). "Hacks and hackers unite to get solid stories from difficult data". The Irish Times.
  5. Paul Bradshaw (2010-07-07). "An introduction to data scraping with Scraperwiki". Online Journalism Blog.
  6. Charles Arthur (2010-11-22). "Analysing data is the future for journalists, says Tim Berners-Lee". The Guardian.
  7. Deirdre McArdle (2010-11-19). "In The Papers 19 November". ENN.
  8. "Journalists and developers join forces for Lichfield 'hack day'". The Lichfield Blog. 2010-11-15.
  9. Alison Spillane (2010-11-17). "Online tool helps to create greater public data transparency". Politico.
  10. "ScraperWiki". ScraperWiki. Retrieved 7 February 2017.

