
I am trying to create a programming game where user-supplied programs compete in battle simulations, to be used as a tool to teach and practice programming. (It will likely be a turn-based robot simulation, but for the purposes of this question, it could just as well have been chess or checkers.) One major component of implementing this game will be to provide a mechanism to run user-generated code against game data to determine their bot's moves.

I know that the typical advice is, "Wherever possible, don't run untrusted code," and I understand where it comes from. In my case, though, running user-supplied code is the core functionality of the app I would like to make. I know that I will need to take precautions to ensure that user-supplied code does not cause damage. The ideal setup, from what I can tell, would enforce that:

  • User code reads game state from STDIN
  • User code writes generated move to STDOUT
  • User code is isolated from the host system
  • User code is isolated from each other
  • User code is limited in the resources it can consume (CPU, memory, disk)
  • User code cannot access the network
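
A minimal sketch of the first two requirements plus CPU/memory limits, assuming a Unix-like host (the `run_bot` helper and the limit values are hypothetical, and rlimits alone do not give filesystem or network isolation -- a container or namespace layer would still be needed on top):

```python
import resource
import subprocess
import sys

def _limit_resources():
    """Runs in the child just before exec (Unix only).
    The limits are illustrative; tune them for your game."""
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                  # 2 s of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (512 << 20, 512 << 20))   # 512 MiB of memory
    resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))                # no file writes

def run_bot(argv, game_state, wall_timeout=5.0):
    """Feed the serialized game state to the bot on STDIN and
    return the move it prints on STDOUT."""
    proc = subprocess.run(
        argv,
        input=game_state,
        capture_output=True,
        text=True,
        timeout=wall_timeout,        # wall-clock cap on top of RLIMIT_CPU
        preexec_fn=_limit_resources,
    )
    return proc.stdout.strip()
```

Running the bot process as a dedicated unprivileged user inside a container or namespace would cover the isolation and network requirements that rlimits alone do not.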

My use case doesn't seem unique. If anything, I would imagine any of the following types of apps have similar requirements:

  • most programming games
  • competitive programming online judge
  • "Try X programming language"
  • game AI competitions

Yet when I searched around, I couldn't find any reference implementations that seemed trustworthy. Most apps of this sort are closed-source, perhaps for good reason.

Given the requirements, I imagine I would need some sort of isolation/virtualization/containerization solution, although I am honestly not sure which one would provide the necessary guarantees.

What are current best practices around sandboxing for user-supplied code? Does anyone have some information or references to trustworthy sources?

Mike Ounsworth
Ming
  • Whitelist certain functions and libraries and run an interpreted language instead of raw C. Run the user code as a low-privileged user (this restricts access to the network, resources, and the host). – schroeder Jul 07 '15 at 03:37
  • You could check out the code for Node-Red on github. They implement a JavaScript sandpit for their "function" node type. Might give you some ideas. – Julian Knight Jul 09 '15 at 21:47
  • Is the code JS / native? Is building NaCl clients an option for your users, for instance? What kind of disk access do you use? Actual filesystem access or just some abstract notion of permanent storage? – Steve Dodier-Lazaro Aug 06 '15 at 10:28
  • I have not settled on a particular language/format for the user supplied code, and I would be open to picking something that makes the overall process easier/safer. Actually, it may be sufficient if this program has access to input, output, and CPU/memory. Additional storage is more of a nice to have than a requirement. – Ming Aug 06 '15 at 16:22
  • Have them send their code on a Raspberry Pi and just run those Pis together on their own LAN? – billc.cn Mar 03 '16 at 20:33
  • Take a closer look at JSFiddle or dotnetfiddle ... you can then layer in an interface to interact with the game data in a safe manner. Maybe email them directly pointing to this question or ask them to blog about it. Do post back here! – DeepSpace101 Aug 16 '16 at 16:48
  • One thing you will have to consider is that users might write code that does something malicious in the game, but doesn't actually compromise security in the traditional matter. This might not be applicable to your app, but the user might be able to make the game display discriminatory remarks, play loud noises, display malicious links, etc... – John Smith Aug 16 '16 at 21:01
  • Not an answer, but I can't comment yet. Have you heard of core war? http://www.corewars.org/ – onlyanegg Aug 21 '16 at 04:56

5 Answers


Coming back to this question with a bit of a delay... I'm assuming that the code you receive is executed by the client in their Javascript interpreter, and at some point submitted and interpreted on the server for validation.

You have multiple problems:

  1. you want to ensure that the code executed for one player cannot negatively affect another player
  2. you want to ensure that the code you execute does not permit any OS-level privilege escalation
  3. optional bonus: you want to know when something went wrong during the client code execution

Sanitise input

First things first: remember to white-list the inputs you get from your clients. They must have a known length and format. Use a platform-agnostic format for storing the data you receive, which specifies the lengths and types of all exchanged variables.
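
For instance, a sketch of that kind of whitelist validation in Python, using a hypothetical move grammar (a compass direction plus a distance; the format is invented for illustration):

```python
import re

# Hypothetical move grammar: a compass direction plus a
# distance from 1 to 99, e.g. "N 3".
MOVE_RE = re.compile(r"[NSEW] ([1-9][0-9]?)")
MAX_LEN = 16  # hard length cap before any parsing

def parse_move(raw):
    """Accept only a well-formed move; reject everything else."""
    if len(raw) > MAX_LEN:
        raise ValueError("move too long")
    text = raw.strip()
    m = MOVE_RE.fullmatch(text)
    if m is None:
        raise ValueError("malformed move")
    return text[0], int(m.group(1))
```

Rejecting anything that does not match the known format exactly is much safer than trying to enumerate bad inputs.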

Isolate clients

You must then ensure that the computation of any input either leads to a correct result or fails without affecting the OS or other concurrent computations. This means that every input is processed in its own contained thread/process. You could have your Web server forward the input to a custom daemon that spawns one sandboxed process per client and feeds it the input.

You could also use something like Wedge to directly and safely compartmentalise a single server. Capsicum could also be an option.

Even if you choose to use a single Javascript interpreter to run JS code written by your untrusted clients, you'll soon have the appropriate tools to guarantee isolation, since COWL (a confinement mechanism that implements non-interference for JS code) is being standardised by the W3C.

Protect the OS

Whichever way you go, you simply need to ensure the processes that run your code are:

  • not run by root / with root-equivalent capabilities
  • contained within a cgroup to enable QoS limitations on CPU, memory, disk and network bandwidth (I reckon that latter one may require network namespaces)
  • contained within a user namespace

Knowing the computation result is trustworthy

Take a course in language-based security :-) This is a very hard goal usually and requires a lot of assumptions on the language and on the safety properties you want to guarantee.

Good luck!

Steve Dodier-Lazaro

Since a majority (more than 50%) of web applications are built in Java, I'm assuming you're going to deploy a Java-based web app.

You could take jar files from participating students and, first of all, scan them for well-known malware to remove the most obvious sources of trouble.

You could define an interface that all submitted code must implement. This restricts the set of functions exposed to the game environment, for example getGameState(), calculateMove(), etc. The game engine would invoke these methods for each participant in the sequence required by the game's rules.

You should restrict the code by defining a custom SecurityManager with a specific ClassLoader to restrict actions in the participant's security domain. This will enable you to enforce a custom security policy in a very flexible manner.

You may consider applying the following access permissions:

  1. Disable all access to the filesystem.
  2. Permit reflective access to only its own classes.
  3. Deny dangerous System calls such as load(), loadLibrary(), gc(), setSecurityManager(), console(), etc.
  4. Disable all network access.
  5. Disable creating new threads.

In addition, consider monitoring the memory allocation and processor usage for the code executed. You could extend this to any and all resources available on the app-server.

To deter mischievous or careless students, you could publish metrics such as memory consumption and processor utilization for their submissions. You could also penalize heavy resource consumers by reducing their rankings so that students are encouraged to be more careful and economical in using computing resources.

  • I don't think the majority of web applications are built in Java. I think PHP has the first position. – ThoriumBR Sep 20 '16 at 14:46
  • When you count only server side scripting frameworks, then definitely PHP leads the pack (81%). But the most popular language overall is Java and almost all large scale and enterprise web applications are built on Java. Plus being mature & full fledged language (as opposed to a scripting framework) it has the advantages of a well thought out security framework which can be used to solve this particular problem. – Sandeep S. Sandhu Sep 20 '16 at 15:02

Have you considered using JavaScript and Google Caja?

The Caja Compiler is a tool for making third party HTML, CSS and JavaScript safe to embed in your website. It enables rich interaction between the embedding page and the embedded applications. Caja uses an object-capability security model to allow for a wide range of flexible security policies, so that your website can effectively control what embedded third party code can do with user data.

The Caja Compiler supports most HTML and CSS and the recently standardized "strict mode" version of JavaScript -- even on older browsers that do not support strict mode. It also allows third party code to use new JavaScript features on older browsers that do not support them.

ErikE

A lot of people are saying to sanitize the input (a great answer here), but a great approach to this would be to whitelist/blacklist and do string matches.


Blacklist:
If any part of the string matches a known-bad entry on a list that can be remotely updated (say, a JSON file of known-bad commands that can be pulled from a repository and added to over time), reject it; if the string meets the safety requirements, go ahead and run it.


Whitelist:
This approach even brings the added functionality of using string matching to call predefined routines you have written yourself, creating your own simplified instruction set for the players to use so they can't even attempt to do anything bad. Then all you have to do is scan the submission and verify that it contains only your pre-approved patterns (again, these can be updated remotely from some sort of repository); if it contains anything else, just send back an error stating the code is invalid.
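
A minimal sketch of that whitelist idea in Python, with an invented instruction set (the command names and grammar are made up for illustration):

```python
import re

# Made-up instruction set for illustration; every line of a
# submission must match one of these patterns exactly.
ALLOWED = re.compile(r"SCAN|MOVE [1-9]|TURN (LEFT|RIGHT)|FIRE")

def validate_program(source):
    """Return the program's instructions if every line is on the
    whitelist; otherwise reject the whole submission."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    for ln in lines:
        if not ALLOWED.fullmatch(ln):
            raise ValueError("disallowed instruction: %r" % ln)
    return lines
```

Anything outside the approved grammar, such as raw code in a general-purpose language, is rejected before it ever executes.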


Drawbacks:

  • Maintaining lists in a timely manner
  • Don't make it too simple
  • Most of the time and power will be spent processing the commands

Advantages:

  • YOU defined it
  • You can send the input to be sanitized on another machine (if that machine responds in the correct, preformatted way [explicit character count, packet size, and pattern], trust the input and run it)
  • You can make the game much simpler

So as long as you keep your own system safe, string matching can make it easy to stay safe, since you're already doing it to sanitize the input.

Robert Mennell

There are several sandboxes built into popular software (antivirus engines, Adobe Reader, Java, etc.). What all these programs have in common is that all of their sandboxes have been compromised at some point in the past.

It's not enough to just write a sandbox, as someone will find a way to escape it.

What you need to do is write a complete virtual machine implementation. Or just its execution stack, as you can take the excellent Lua interpreter for language parsing, tokenization and so on.

Of course, the purpose of writing your own VM execution stack is to allow fully controlled execution of user programs. You need only a subset of the Lua language, so it should be easy to execute everything in a controlled manner -- using local variables, without any network/disk interaction.

Here you have a manual for the start:

http://luaforge.net/docman/83/98/ANoFrillsIntroToLua51VMInstructions.pdf

Good luck!

Tomasz Klim
  • I don't really see how your proposal differs from JavaScript engines in browsers for example, and those things have a lot of vulnerabilities. OS level sandboxes (SELinux, MIC, etc.) are _a lot better_ than home-grown solutions. – KovBal Jul 09 '15 at 12:09
  • It depends on complexity level. Ming asked about very simple case, where you don't really need to rely on OS level sandbox, and you don't even need any OS or filesystem interaction. – Tomasz Klim Jul 09 '15 at 13:29
  • A "simple" case like this is much easier to contain with OS level sandboxes (fewer rules, exceptions). I'm not sure what you mean by "don't even need any OS or filesystem interaction." OP says the untrusted code needs disk access, reading from STDIN, writing to STDOUT. These are functions handled by the OS. – KovBal Jul 09 '15 at 13:56
  • I am really confused by your answer... What's wrong with using mechanisms like Native Client to run untrusted code? Or using mechanisms like seccomp, Wedge or Capsicum to restrict a thread to basic operations? Proper use of OS security primitives is likely to be better than implementing a whole VM in a JS engine. – Steve Dodier-Lazaro Aug 06 '15 at 10:35
  • In general, using dedicated traps (unknown for possible attackers) is better than using standard traps. What's wrong with Native Client? It's relatively secure, however: https://groups.google.com/forum/#!topic/native-client-discuss/HdZn8NBozTw or http://searchsecurity.techtarget.com/news/2240159633/Black-Hat-2012-Google-Chrome-sandbox-security-flaws-to-be-exposed – Tomasz Klim Aug 06 '15 at 10:42
  • Do you propose to serve each client of a Web application (potentially tens of thousands of them) within a VM that has to pull the state of the Web server's database and update it as the player progresses anyway? This is inefficient and merely shifting the problem around. – Steve Dodier-Lazaro Nov 04 '15 at 17:05