Ultimately, this is about trusting not just that the code works as intended, but that the obfuscation step doesn't add extra, undocumented behaviour. That means trusting the people who wrote the code being obfuscated, and the people who wrote the obfuscator.
I mentioned Ken Thompson's Turing Award lecture, Reflections on Trusting Trust, in a comment above, and I'll attempt to summarize it here as I understand it.
The target in Thompson's lecture was the C compiler, a tool that converts human-readable C code into machine code the computer can actually run. For our purposes, it plays the same role as the obfuscation tool you wrote for your JavaScript ad blocker.
The attack on the C compiler goes roughly like this (paraphrased):
1.) Write or find source code that, when fed into an existing machine-code compiler, produces a new compiler from the source you provide.
2.) Modify the source you feed in so that the compiler it produces checks the code it compiles for specific patterns (i.e. a Login class/library), and, when it detects one, inserts additional code at that spot to provide additional functionality (i.e. "When checking a username/password combination, if this particular password is provided, grant me root access regardless of what the root user's actual password is, before checking the rest of the code."). Now, no matter who writes the Login code for, say, Unix, or what they write, that compiler will produce a Unix executable containing your modified code. (A toy sketch of this and the next step follows the list.)
3.) Add a further step like Step 2.), but for when the compiler detects that it's compiling a compiler: if Step 2.)'s code and Step 3.)'s code are not already present, add them back in, regardless of what the source code says.
4.) Compile your modified compiler source with the machine-code compiler into a machine-code compiler of its own, then remove all references to Step 2.) and Step 3.) from the source, and hand the machine-code compiler you have just built to someone else (i.e. someone working on the Unix codebase).
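To make the shape of Steps 2.) and 3.) concrete, here is a deliberately toy sketch in JavaScript (the language at hand in your case). It is nothing like Thompson's actual implementation; every name and pattern is invented purely for illustration:

```js
// Toy sketch only: `compile` stands in for any tool that turns source you
// can read into an artifact you actually run. Names and patterns are made up.
function compile(source) {
  let output = source;

  // Step 2.): if the input looks like login code, quietly append a backdoor.
  if (/function\s+checkPassword/.test(source)) {
    output += '\nfunction _masterKey(pw) { return pw === "letmein"; }';
  }

  // Step 3.): if the input looks like the compiler itself, re-insert both
  // injections, even when the source being compiled contains neither.
  if (/function\s+compile/.test(source)) {
    output += '\n/* re-insert the Step 2.) and Step 3.) checks here */';
  }

  return output; // whoever ships or runs this trusts it did only what the source said
}
```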
Step 3.) and Step 4.) mean that the source code no longer gives the game away that you've inserted a backdoor into the compiler, while the backdoor itself persists: anything people compile with it generally works as expected, but they can't trust that it does only what its source says, all of the time. You would have to use a different machine-code compiler to build a version without the backdoor, assuming you were able to notice it at all.
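Continuing the toy sketch above, this is what Step 4.) buys the attacker: the compiler source everyone can inspect is clean, yet the compiler they were handed keeps putting the payload back.

```js
// The compiler source being compiled is completely clean...
const cleanCompilerSource = 'function compile(source) { return source; }';

// ...but compiling it with the tampered compile() above re-adds the marker.
console.log(compile(cleanCompilerSource).includes('re-insert')); // true
```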
The point Ken Thompson makes is that whenever your code is generated or run by something you did not personally create, you are putting your trust in the person who made the thing that compiles or executes your code.
What does this have to do with an obfuscator?
In this case, your obfuscator is essentially the same problem as the compiler. Even if you gave them the original source code, once it has been passed through your obfuscator to turn it into unique code, specifically so it can avoid ad blockers by changing how it looks, how can they trust that the obfuscator doesn't add a few extra details?
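A tiny, hypothetical sketch of that worry (this is not your tool; the renaming rule and the appended line are both invented):

```js
// The renaming is the advertised behaviour; the appended line is the kind of
// extra detail a reader of only the obfuscated output might never spot.
function obfuscate(source) {
  // Advertised behaviour: give the entry point a fresh name on every run,
  // so the script never looks the same twice. (Toy: only renames `main`.)
  const fresh = '_0x' + Math.random().toString(36).slice(2, 8);
  const mangled = source.replace(/\bmain\b/g, fresh);

  // Nothing about the strange-looking output tells its reader whether this
  // extra line came from their source or from the tool itself.
  const extra = '\n/* anything could be appended here */';

  return mangled + extra;
}

console.log(obfuscate('function main() { console.log("hi"); } main();'));
```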
The nominal and simple solution to this is trusting the people who made the obfuscator and the people who made the code fed into it, but that trust can be hard to come by.
Responding to a comment about proving one's own trustworthiness, since it's relevant to gaining trust:
"I understand that this is also more of a reputation thing as well - if you have a trustworthy history as a developer (have I achieved that widely? probably in my city and immediate circle yes, probably not elsewhere)"
This is somewhat dependent on the situation in question. In this case, one thing that could be useful is to publish the obfuscator itself as regular, unobfuscated code, and let people run it on programs they write themselves. (i.e. if they threw document.write() into the obfuscator, could they look at the resulting obfuscated code and verify that, as weird as it may look, it still does what their source intended to do?)
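For example, a sceptical user could run their trivial input through both versions and compare observable behaviour. The `obfuscate` below is just a stand-in for whatever entry point the real tool exposes (the name is assumed); the toy version from the earlier sketch would do for trying this out:

```js
// Stand-in for the real tool under inspection; swap in its actual entry point.
const obfuscate = (src) => src;

const original   = 'document.write("hello");';
const obfuscated = obfuscate(original);

// Run both with a stubbed-out document and compare what they actually do.
function runAndCapture(code) {
  const calls = [];
  const fakeDocument = { write: (s) => calls.push(s) };
  new Function('document', code)(fakeDocument);
  return calls;
}

console.log(runAndCapture(original));   // [ 'hello' ]
console.log(runAndCapture(obfuscated)); // should match, however odd the code looks
```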
If the obfuscator has been used by multiple projects, across different teams, that also helps, although you'll always be inching towards trust in the system you've created. As mentioned in the other comments, most people will likely resort to: "Has it been used by others? Is it still being used by others? Then I'll trust that the developer has written code I can trust, on account of trusting the other people who trust them, until someone finds that it's actually untrustworthy." In a sense, trust eventually becomes recursive, in a way that saves everyone the time of definitively proving that no given step is doing anything beyond what it's expected to do.