Many cryptographic algorithms (hash functions, symmetric encryption...) are organized as a sequence of "rounds", which are more or less similar to each other. It has been empirically observed that, for a given algorithm structure, more rounds usually imply more security; more precisely, some classes of attacks (e.g. differential and linear cryptanalysis) see their efficiency decrease more or less exponentially with the number of rounds.
When cryptographers don't know how to break a complete algorithm, they try to break reduced versions of the same algorithm, with some features removed; in particular, fewer rounds, for algorithms which have rounds. When SHA-256 is said to be broken "up to 46 rounds", this means that another hash function, obtained by taking SHA-256 but removing the last 18 rounds, can be attacked (at least in an "academic" way: the attack need not be practical, it just needs to be less impossible than attacking the full function); but removing only the last 17 rounds yields a function which, as far as we know, is as good as the full thing. Cryptographers feel safer when the maximum number of attacked rounds is substantially lower than the actual number of rounds in the full algorithm.
Since attack efficiency tends to be exponential in the number of rounds, not linear, this cannot (and must not) be translated, even intuitively, into: "can break 80% of the rounds -> algorithm broken at 80%". Analogy: suppose you want to increase your fortune by doubling it ten times. You start with one dollar; after ten doublings, you would have 1024 dollars; that's your goal. After eight doublings, you have 256 dollars. Eight doublings done out of ten: that's 80% of the work, right? Then how come you have only 256 dollars, and not 80% of 1024 dollars (which would be 819.2 dollars)?
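The doubling analogy can be spelled out in a few lines of code (a toy illustration of the arithmetic only, not a model of any actual attack):

```python
def fortune_after(doublings: int, start: float = 1.0) -> float:
    """Fortune after repeatedly doubling a starting amount."""
    return start * 2 ** doublings

goal = fortune_after(10)     # ten doublings: 1024 dollars, the goal
partial = fortune_after(8)   # eight doublings: 256 dollars

# 80% of the doubling steps done, yet only 25% of the goal reached:
print(partial / goal)        # prints 0.25
```

The same shape of curve is why breaking 46 rounds out of 64 leaves an attacker nowhere near breaking the full function: each extra round multiplies the attack cost, so the last few rounds account for most of the distance.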
To sum up, you should not try to read too much into these assertions. They are scientific results which are interesting for scientists, but which can easily be overinterpreted.