Octave, ~~53~~ 52 bytes
A complete rewrite helped me golf the code by 5 bytes, but I had to add more no-ops, making it a net saving of only 1 byte.
@(_)~diff(sum(de2bi(+_)))%RRPPPVVVW?????????________
I can't add a TIO link, since none of the online interpreters have implemented the communications toolbox necessary for `de2bi`. Changing it to `dec2bin` instead would cost 4 bytes (2 for the working code, and 2 more no-ops).
I found no way to avoid any of the 27 no-ops. All function names and parentheses are either below 64 or above 96, meaning all "necessary" characters have a 1 in the 6th position (from the right, 2^5). I had a solution with only 23 no-ops, but the code itself was longer. The actual code is 25 bytes, and has the following column sums when counting the bits of the binary equivalents:
15 22 6 15 10 9 13
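Those sums can be double-checked with a short Python sketch (Python only because no online interpreter has `de2bi`; the column order is 2^6 first, matching the list above):

```python
# Count how many of the 25 code bytes have each of the 7 bits set,
# most significant bit (2^6) first, like the table above.
code = "@(_)~diff(sum(de2bi(+_)))"
col_sums = [sum((ord(c) >> k) & 1 for c in code) for k in range(6, -1, -1)]
print(col_sums)  # [15, 22, 6, 15, 10, 9, 13]
```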
There are 22 bits in the 6th position from the right (2^5), but only 6 bits in the 5th position from the right (2^4). That means we have to add at least 16 bytes to get the 6 up to 22. Now, the comment character % adds a bit to the 6th position, increasing it to 23. All printable ASCII characters need at least one of the two top bits to be 1. Therefore, adding 17 bytes will give us at least 27 bits in each of the two "top spots" (2^6 and 2^5). Now we have 27 bits in the top two spots, and 22 in the rest. In order to reach an equilibrium, we have to add 10 more bytes, getting to an even 32 bits in each position.
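A quick Python sketch (an outside check, not part of the submission) confirms that the final 52-byte program really balances every bit column to exactly 32:

```python
# Verify that the padded 52-byte submission balances every bit column.
src = "@(_)~diff(sum(de2bi(+_)))%RRPPPVVVW?????????________"
cols = [sum((ord(c) >> k) & 1 for c in src) for k in range(6, -1, -1)]
print(len(src), cols)  # 52 [32, 32, 32, 32, 32, 32, 32]
```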
An explanation of the new code (52 bytes):
@(_)~diff(sum(de2bi(+_)))
@(_)                    % An anonymous function that takes a variable _ as input
                        % We use an underscore instead of a letter, since it has
                        % the most suitable binary representation
de2bi(+_)               % Convert the input string to a binary matrix
sum(de2bi(+_))          % Take the sum of each column
diff(sum(de2bi(+_)))    % Calculate the difference between consecutive sums
~diff(sum(de2bi(+_)))   % Negate the result, so 0 becomes true,
                        % and everything else becomes false
A vector containing only 1s (true) evaluates to true in Octave, while a vector containing at least one zero evaluates to false.
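For readers without the communications toolbox, the same logic can be sketched in Python (a rough port, not the submission itself):

```python
# Rough Python port of the 25-byte core: build the bit matrix column
# sums, then succeed only if every consecutive difference is zero,
# mirroring ~diff(sum(de2bi(+_))) in Octave.
def balanced(s, width=7):
    sums = [sum((ord(c) >> k) & 1 for c in s) for k in range(width)]
    return all(a == b for a, b in zip(sums, sums[1:]))

# The full 52-byte submission checks out against its own rule:
print(balanced("@(_)~diff(sum(de2bi(+_)))%RRPPPVVVW?????????________"))  # True
```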
An explanation of the old code (53 bytes):
@(_)!((_=sum(de2bi(+_)))-_(1))%RRRFVVVVVVVVV_____????
@(_)                          % An anonymous function that takes a variable _ as input
                              % We use an underscore instead of a letter, since it has
                              % the most suitable binary representation
de2bi(+_)                     % Convert the input string to a binary matrix
sum(de2bi(+_))                % Take the sum of each column
(_=sum(de2bi(+_)))            % Assign the result to a new variable, also called _
                              % Reusing the same variable name is not a problem, due
                              % to the order of evaluation
((_=sum(de2bi(+_)))-_(1))     % Subtract the first element of the new variable _
                              % If all elements of the new _ are identical, this
                              % gives a vector containing only zeros;
                              % otherwise, at least one element is non-zero
!((_=sum(de2bi(+_)))-_(1))    % And finally, we negate this, so 0 becomes true,
                              % and everything else becomes false
As above, a vector containing only 1s (true) evaluates to true in Octave, while a vector containing at least one zero evaluates to false.
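The old approach can be sketched the same way in Python, assuming the same 7-bit columns; instead of differencing, it subtracts the first column sum from all of them and negates:

```python
# Python sketch of the old core: subtract the first column sum from
# every column sum; the result is truthy only if all of them become
# zero, i.e. all column sums were identical.
def balanced_old(s, width=7):
    sums = [sum((ord(c) >> k) & 1 for c in s) for k in range(width)]
    return not any(v - sums[0] for v in sums)
```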
How long will the input string be? Can we assume the sum will always be 7 digits long? – Okx – 2017-02-25T21:53:35.150
Also, if our program uses characters other than ASCII characters, what happens? – Okx – 2017-02-25T21:58:31.613
I guess that "then the binary representation of it should adhere to the same rules" should explicitly exclude the clause "only needs to handle printable ASCII as input" (otherwise one could write code with just one byte that maps to non-printable ASCII). – Jonathan Allan – 2017-02-25T22:07:16.153
@Okx you may assume the input string is less than 1kB. The input will only be printable ASCII, which can be represented using 7 bits, so yes: there will always be 7 sums (integers, not necessarily single digits). – Stewie Griffin – 2017-02-25T22:35:57.200
@JonathanAllan, yes. The code obviously doesn't have the printable-ASCII-only limitation. I'll clarify. – Stewie Griffin – 2017-02-25T22:38:30.280
@StewieGriffin That's not a very good clarification. If I have a non-ASCII answer, and you try to input the program into the program, and it doesn't work because it only supports ASCII, what happens? – Okx – 2017-02-25T22:52:50.020
@Okx you don't have to support non-ASCII input. So if your program contains non-ASCII characters, then you must verify that it's correct "manually", using something other than your program. – Stewie Griffin – 2017-02-25T22:54:25.270
@StewieGriffin How would you do that? – Okx – 2017-02-25T22:55:02.003
It should be fairly simple. Convert your code to binary using your specific encoding and check it the same way as you would with ASCII characters. – Stewie Griffin – 2017-02-25T23:05:59.760
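For instance, a Python sketch of such a "manual" check could encode the source with its declared encoding and count bits over the raw bytes (the encoding and bit width here are assumptions; use whatever your language actually uses):

```python
# Hypothetical manual check: encode the source, then count set bits
# per position over the resulting bytes. For pure ASCII, 7 columns
# (most significant first) suffice.
def column_sums(data: bytes, width: int):
    return [sum((b >> k) & 1 for b in data) for k in range(width - 1, -1, -1)]

src = "@(_)~diff(sum(de2bi(+_)))%RRPPPVVVW?????????________"
print(column_sums(src.encode("utf-8"), 7))  # [32, 32, 32, 32, 32, 32, 32]
```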
@StewieGriffin: So for ASCII input, you always check the lowest 7 bits (even if all codepoints in the input happen to have fewer significant bits than that)? How many bits would one have to check to validate UTF8 source code? – smls – 2017-02-26T00:17:30.640
@smls, you may skip "leading" significant bits. So, if all codepoints are less than 64, you can check only 6 bits. The same goes for UTF-8. You cannot skip bits the other way, i.e. you can't skip the lowest bit even if all codepoints are even. – Stewie Griffin – 2017-02-26T20:19:02.323
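In other words, the number of columns to check is the bit length of the highest codepoint; a Python sketch of that rule:

```python
# The width rule above: leading all-zero bit columns may be skipped,
# so the required width is the bit length of the largest codepoint.
def required_width(s):
    return max(ord(c) for c in s).bit_length()

print(required_width("@(_)"))  # 7, since '_' is 95 = 1011111
print(required_width("+("))    # 6, since both codepoints are below 64
```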
@Okx, just wondering: Is it still unclear (if so, what)? I'll try to clarify if it is. :) – Stewie Griffin – 2017-02-26T20:21:44.320
Would a correct non-8bit reaction be to simply return true? – Riking – 2017-03-01T00:35:21.983